12:22, EEST
August 20, 2014
Hi,
I noticed some strange behavior and don't know the reason for it. We were wondering why reading the value of a node takes so long, so we timed it. This is what we do:
long readStart = System.nanoTime();
value = client.readValue(nodeId);
System.out.println("read time: " + ((System.nanoTime() - readStart) / 1000000.0) + "ms, " + "value:" + value.getValue().toString());
and this is the result:
read time: 198.693906ms, value:01010010
read time: 1.13903ms, value:WORKORDER_NUMBER
read time: 1.082199ms, value:PLANNED_START_DATE
read time: 1.678512ms, value:PROCESS_LAYER
read time: 1.576669ms, value:PROCESS_VERSION
read time: 0.877725ms, value:QUANTITY
read time: 2.069028ms, value:STATION_DESC
read time: 0.974388ms, value:STATION_NUMBER
read time: 1.022511ms, value:WORKORDER_DESC
All data types are Strings. The first read is always very slow and all the others are fast, as you can see. The server is also written with the Prosys library. Do you have an idea what could cause this?
Thanks
Ibrahim
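One thing that might reduce the per-node overhead, independent of the first-read delay, is batching all the reads into a single service call instead of calling readValue once per node. A rough sketch, assuming the SDK offers a batch read along the lines of readValues(NodeId...) (please check the UaClient API of your SDK version; the node ID variables are only placeholders):
// NOTE: readValues(NodeId...) is an assumption here; verify the exact method
// name and signature in the UaClient Javadoc of the SDK version in use.
// The node ID variables below are illustrative placeholders.
NodeId[] nodeIds = { workorderNumberId, plannedStartDateId, quantityId /* ... */ };
long readStart = System.nanoTime();
DataValue[] values = client.readValues(nodeIds); // one request for all nodes
System.out.println("batch read time: " + ((System.nanoTime() - readStart) / 1000000.0) + "ms");
for (DataValue v : values) {
    System.out.println("value: " + v.getValue());
}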
13:29, EEST
August 20, 2014
Hi Alex,
I ran the server on the same machine as the client, to see whether the delay comes from the network. The results are still strange. Sometimes all the reads are fast, like this:
read time: 1.119098ms, value:String0
read time: 0.911098ms, value:String2
read time: 0.761525ms, value:String3
read time: 0.870075ms, value:String4
read time: 0.83461ms, value:String5
read time: 0.874663ms, value:String6
read time: 0.822393ms, value:String7
read time: 0.825417ms, value:String8
read time: 0.810488ms, value:String9
but sometimes the first read is again much slower than all the others, like this:
read time: 40.183857ms, value:String0
read time: 1.032204ms, value:String2
read time: 0.868942ms, value:String3
read time: 1.030098ms, value:String4
read time: 0.810517ms, value:String5
read time: 0.833531ms, value:String6
read time: 0.874972ms, value:String7
read time: 0.876113ms, value:String8
read time: 1.187112ms, value:String9
14:01, EEST
July 22, 2016
Hmm, hard to tell… The only thing I can imagine is that the system has some sort of caching mechanism: when you read the first value, it assumes you might need the others as well and loads them into RAM ahead of time to speed things up.
Maybe someone with actual insight into the source code of the OPC UA stack can give you a more definite answer than I can.
14:18, EEST
April 3, 2012
Hi
I need to check, but as a quick note: the server does keep a reference to the last UaNode that was needed for a request/response, in case it is asked about it again in the next request (which is quite common). Still, that should not make such a big difference..
Also, at some point the JVM will JIT-compile code paths that are used often (this depends on the JVM you are using, but at least Oracle/OpenJDK does this).
– Bjarne
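One quick way to rule the JIT and other first-call warm-up effects in or out is to run a few untimed reads before the measurement starts. A minimal sketch using only the calls already shown in the first post (nodeId as in that snippet):
// Warm-up reads: not included in the measurement, so that class loading,
// JIT compilation and any connection/session setup happen before timing.
for (int i = 0; i < 10; i++) {
    client.readValue(nodeId);
}
// Timed read, as in the original snippet.
long readStart = System.nanoTime();
DataValue value = client.readValue(nodeId);
System.out.println("read time: " + ((System.nanoTime() - readStart) / 1000000.0) + "ms, value: " + value.getValue());
If the ~200 ms delay disappears after the warm-up loop, JIT/warm-up is the likely explanation; if it still appears on later reads, it is something else.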
18:41, EEST
August 20, 2014
Hi,
We ran some more tests. The following lines are from the profiler call stack, where you can see the 200 ms again. Maybe this helps…
com.prosysopc.ua.client.UaClient.doRead (Double, org.opcfoundation.ua.core.TimestampsToReturn, org.opcfoundation.ua.core.ReadValueId[]) 206 ms (3,6%) 2,31 ms (0,1%)
.
.
.
org.opcfoundation.ua.transport.impl.AsyncResultImpl.waitForResult (long, java.util.concurrent.TimeUnit) 204 ms (3,5%) 0,0 ms (0%)
Another test we made was to read values in a loop; this worked without delay. But when we read values only after a subscribed node changed, the first read was slow. Could this come from the MonitoredDataItemListener?
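To check whether doing the read inside the listener callback plays a role, one option is to hand the read off to a separate worker thread so the callback itself returns immediately. A rough sketch using java.util.concurrent (only the callback body is shown, since the exact MonitoredDataItemListener callback signature depends on the SDK version; nodeId is the node of the additional value to read):
// Created once, e.g. as a field next to the client:
ExecutorService readExecutor = Executors.newSingleThreadExecutor();

// Inside the data change callback of the MonitoredDataItemListener, instead of
// reading directly, submit the read so the callback thread is not blocked:
readExecutor.submit(() -> {
    try {
        long readStart = System.nanoTime();
        DataValue v = client.readValue(nodeId); // node of the additional value
        System.out.println("read time: " + ((System.nanoTime() - readStart) / 1000000.0) + "ms, value: " + v.getValue());
    } catch (Exception e) {
        e.printStackTrace();
    }
});
If the read still takes ~200 ms when executed from a plain worker thread, the listener itself is not the cause.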
11:30, EEST
August 20, 2014
Hi,
We found a similar issue reported by gziemian on GitHub, but in the .NET stack. Maybe it has something to do with our issue… it sounds very similar.
10:10, EEST
April 17, 2013
Hello,
This behavior seems to be related to read requests that happen simultaneously with publish messages. I got the same kind of profiling result as you did, showing that the client application spends the time waiting for a response in the AsyncResultImpl.waitForResult method.
In my own tests, this behavior wasn't present when the server and client applications were on the same host. An actual network is needed to reproduce the longer read times, or at least the network makes the longer wait times more visible.
The fact that you tested the reads from a MonitoredDataItemListener has nothing to do with the longer read times. Also, this is not strictly related to the first read: you only need an active subscription and periodic read calls. Read calls that happen within a certain time window around a publish message will experience longer wait times. Example log:
reading took 1.427216 ms
reading took 1.289479 ms
Node: ns=2;s=MyLevel.Value | Status: GOOD (0x00000000) “” | Value: 62.0 | SourceTimestamp: 2016 Aug 03 (EEST) 09:31:52.150
reading took 204.047342 ms
reading took 1.437489 ms
reading took 1.696603 ms
reading took 1.712202 ms
Normally, read calls experience a wait time of about 1 millisecond. The one read call that happens to be executed with unfortunate timing will experience the longer wait time.
I don’t think that the mentioned .NET issue is related to this in any way. The slow read calls aren’t the first ones. The additional wait time is somehow caused by the publish requests or publish responses or something else in the subscription functionality.
Thank you for reporting this behavior! At the moment this doesn't appear to be a serious defect, but it may potentially be some kind of timing-related bug.
Initially you said that you were wondering why reading a value of a node takes a long time. According to my understanding, an occasional 200 millisecond wait in read calls should be barely noticeable. Are you also experiencing some additional performance issues?
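A result like the one in the log above can be reproduced with a simple loop of periodic reads while a subscription is active on the same connection. A rough sketch (subscription setup omitted; nodeId is a placeholder; checked exceptions are handled as loosely as in the earlier snippets):
// A subscription on some node is assumed to already be active on this client.
for (int i = 0; i < 100; i++) {
    long readStart = System.nanoTime();
    DataValue value = client.readValue(nodeId);
    System.out.println("reading took " + ((System.nanoTime() - readStart) / 1000000.0) + " ms");
    Thread.sleep(100); // spread the reads over several publish intervals
}
As in the log, most iterations should finish in about a millisecond, while an iteration that happens to coincide with a publish message shows the longer wait.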
17:06, EEST
August 20, 2014
Hi,
Our issue is that we're using the subscription to fire triggers that initiate remote API calls to our MES system. But before we can call the API we must read some additional values from the OPC UA server. So if we add the 200 ms to the time needed for the API call, it could be too long in certain cases.
When the server fires the trigger (subscription data change), it waits until our client has finished the API call and then continues its work. In fast production areas this wait time could mean that the whole production process stops. That's why we want to be fast enough not to be the reason machines stop.
It is essential for us to have the fastest possible read/write times.
Regards
Ibrahim