October 26, 2016, 19:58 EEST
Hi,
I’m just wondering how many onDataChange() notifications the Prosys library can handle per second. I know how incredibly vague this question is due to hardware, load, and so on, but I’m running into a serious issue: I’m testing against 40 subscriptions with 1000 metric updates a second each, and any additional load starts causing updates to be lost. (In this example I’m connecting to 4 different Prosys servers (ver 2.2), and it appears that somewhere between those servers and the onDataChange() call firing in my code, updates are being lost.)
I’ve removed all blocking and synchronization from my own onDataChange() callback, so it shouldn’t be slowing things down (having read the thread about the onDataChange() invocation code being single-threaded).
Thoughts on the load I should be able to support? Areas I should look at? Diagnostic info I can access to further narrow down where the problem is? My next option is probably to create standalone apps for each subscription and find a way to forward the data to my primary app.
edit: I’m also seeing:
Bad_Timeout (code=0x800A0000, description="The operation timed out.")
com.prosysopc.ua.ServiceException: Bad_Timeout (code=0x800A0000, description="The operation timed out.")
ServiceResult=Bad_Timeout (0x800A0000) "The operation timed out."
This causes a disconnect and forces me to reconnect.
Thanks
-Mark
April 3, 2012, 10:01 EEST
Hi,
First, I need to ask some clarifying questions.
Which version of the SDK are you using for the client? Generally I would assume the latest release if you do not specify one (i.e. 4.3.0). If not, please test with that one; in general, any fixes would be made on top of the current release.
A single UaClient object handles a single connection, so do you have 4 separate UaClients here?
While the version of the SDK used by the servers doesn’t really matter that much here (assuming no bug influences the data sent, etc.), it could still be useful to know. If by “ver 2.2” you mean 2.2.0, please note that it is very old.
There are … a lot of parameters in Subscription, MonitoredDataItem and UaClient that can influence all of this (a rough sketch of the main knobs follows below). In addition, the common thread pools of the “stack layer” (or “the stack”, when talking about SDK versions before 4.0.0) could in theory be a bottleneck, I guess (though that would need a separate post; it is complicated; and I think it would probably be something else before that anyway).
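To make the parameter discussion concrete, here is a minimal sketch of the usual tuning knobs. It is written from memory of the SDK 4.x samples, so treat the exact packages, constructors and setter signatures as assumptions and verify them against your SDK version:

import com.prosysopc.ua.client.MonitoredDataItem;
import com.prosysopc.ua.client.Subscription;
import com.prosysopc.ua.client.UaClient;
import com.prosysopc.ua.stack.builtintypes.NodeId;
import com.prosysopc.ua.stack.core.Attributes;
import com.prosysopc.ua.stack.core.MonitoringMode;

public class SubscriptionTuningSketch {
    // NOTE: method names follow the SDK samples as remembered; verify them.
    public static void configure(UaClient client, NodeId nodeId) throws Exception {
        Subscription subscription = new Subscription();
        // How often the server bundles queued notifications into a Publish response.
        subscription.setPublishingInterval(100); // milliseconds

        MonitoredDataItem item =
                new MonitoredDataItem(nodeId, Attributes.Value, MonitoringMode.Reporting);
        // How often the server samples the underlying value...
        item.setSamplingInterval(100); // milliseconds
        // ...and how many samples it may queue between Publish responses.
        item.setQueueSize(100);

        subscription.addItem(item);
        client.addSubscription(subscription);
    }
}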
But let’s start with the most obvious one and continue in follow-up posts once I know the details listed above.
Can you tell what you are doing within the onDataChange calls? While removing blocking and synchronization does affect performance (well, the synchronization only if you had 4 different UaClients calling the same listener), are you still e.g. doing individual operations per monitored item within the method (such as logging or calling anything that can take a bit of time)? If we have 40k updates per second, you have a 1/40000-second time frame per individual operation (as the calls are sequential per item per subscription), meaning that if you do _anything_ within the method that takes more than 25 µs (0.025 milliseconds), it will slow things down. If this is the case, then basically you should use a queue: push operations based on the data changes onto it, and then do a mass operation in a separate worker thread etc., which could be faster (and if you notice the queue filling faster than you can drain it, that is basically an indication that something needs optimization, e.g. SDK code, your code, hardware etc.); see the sketch right below.
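As a rough illustration of that queue pattern (plain JDK only, nothing SDK-specific; the Update fields and the batch/queue sizes are just placeholders to adjust for your load):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Decouples the fast listener callback from slower batch processing. */
public class DataChangeBuffer {

    /** One update; in a real listener this would carry the item's client
        handle and the DataValue (these fields are only placeholders). */
    public record Update(long clientHandle, Object value, long sourceTimestamp) {}

    private final BlockingQueue<Update> queue = new ArrayBlockingQueue<>(100_000);

    /** Called from the SDK's listener thread; must return immediately. */
    public void enqueue(Update update) {
        if (!queue.offer(update)) {
            // Queue full: the worker cannot keep up. Count or log this
            // (cheaply!); it means something needs optimization.
        }
    }

    /** Run this in one dedicated worker thread. */
    public void drainLoop() throws InterruptedException {
        List<Update> batch = new ArrayList<>(4096);
        while (!Thread.currentThread().isInterrupted()) {
            batch.add(queue.take());     // block until the first update arrives
            queue.drainTo(batch, 4095);  // then grab whatever else is queued
            process(batch);              // one mass operation per batch
            batch.clear();
        }
    }

    private void process(List<Update> batch) {
        // e.g. a single bulk database write or aggregation per batch,
        // instead of one operation per individual data change
    }
}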
Another, but not as pretty, option is to attach a SubscriptionNotificationListener to the Subscription and handle the onDataChange there. However, in that case you would need to handle the raw NotificationData (which can be one of the subtypes DataChangeNotification/StatusChangeNotification/EventNotificationList) and link the data with the client ids yourself (which is basically what the SDK does for you when it calls the individual MonitoredDataItemListeners of the items). The SDK will still do most of the same calculations anyway, so the queue option above is probably better.
Anyway, you can use a profiler tool to check how long your onDataChange takes and see whether it is too slow (do note that the profiling itself also slows it down).
If you set the log level to DEBUG for Subscription, you should see lines containing “onPublishResponse: responseQueue.size()=”, which show how many responses are queued internally before our PublishTask thread calls your listeners (this needs more explanation, I think). If that buffer overflows, you will see an onBufferOverflow call on a SubscriptionNotificationListener. By default the buffer size is 100 per subscription, and it can be set per individual subscription.

However, IF you do not see that queue filling up, it would mean either that we are not actually reading the notifications fast enough (which should not happen, unless thread scheduling prevents the thread that reads the raw socket from executing) or that the server is not sending the data fast enough (given that it can only reply to the PublishRequests the client sends, this could be an indication that we cannot send them fast enough). For those situations you should basically use https://www.prosysopc.com/blog/opc-ua-wireshark/ (if possible; you need security mode NONE or SIGN only to see the traffic) to check whether the data is actually moving (and fast enough). In practice, Wireshark is the best tool to see if everything makes sense network-wise. A sketch of enabling that DEBUG level follows below.
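In case it helps, a minimal sketch of raising that log level programmatically, assuming Logback as the SLF4J backend and assuming the logger name matches the com.prosysopc.ua.client.Subscription class (adjust both to your setup):

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

public class EnableSubscriptionDebug {
    public static void main(String[] args) {
        // Logger name assumed to match the SDK 4.x package layout.
        Logger subscriptionLogger =
                (Logger) LoggerFactory.getLogger("com.prosysopc.ua.client.Subscription");
        // DEBUG makes the "onPublishResponse: responseQueue.size()=" lines visible.
        subscriptionLogger.setLevel(Level.DEBUG);
    }
}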