SecureChannelTcp Timeout
April 2, 2020, 14:11 EEST
hbrackel (Member)

Good morning,
I hope you are all safe. Given the Corona situation, I am connecting a Prosys OPC UA client (SDK v4.1.4) from my home office to a remote server through a rather slow DSL line. The connection succeeds, but I receive the following message after a short time:

[Blocking-Work-Executor-63] SecureChannelTcp : Request id=305 timeouted 132005ms elapsed. timeout at 132000ms

I am subscribing to the ServerState, which, as a result of the timeout, goes from Running to CommunicationFault. At that time, multiple regular data subscriptions are already delivering DataValue updates normally. One monitored item, though, does not deliver an initial value, although UaExpert has no problems subscribing to it.

I know that this might be difficult to advise on with so little information, but maybe you can point me towards the problem area.

Thanks,
Hans-Uwe

April 2, 2020, 16:12 EEST
Bjarne Boström (Moderator)

Hi,

Most likely it will help if you call UaClient.setTimeout (before calling connect) with some high value. By default it is null, in which case the lower levels internally use 120000 milliseconds as the default. So try something crazy like 10 minutes.
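
For reference, a minimal sketch of that suggestion (the endpoint address and class name are placeholders; the setTimeout(long, TimeUnit) overload is the one shown later in this thread):

import java.util.concurrent.TimeUnit;
import com.prosysopc.ua.client.UaClient;

public class SlowLinkConnectSketch {
    public static void main(String[] args) throws Exception {
        UaClient client = new UaClient("opc.tcp://remote-host:4840"); // placeholder address
        // Raise the request timeout well above the 120000 ms default
        // *before* connecting, e.g. to 10 minutes.
        client.setTimeout(10, TimeUnit.MINUTES);
        client.connect();
    }
}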

P.S. While it does not help for individual service calls, if you have the option to try SDK 4.3.0, it should greatly shorten the connection time if you do UaNode-related operations (and previous versions do use them internally for OperationLimits). It will still send roughly the same amount of data binary-wise (or a bit more) when connecting and reading the types etc., but the calls are packed with more operations, so there is less latency overhead per request: https://downloads.prosysopc.com/opcua/Prosys_OPC_UA_SDK_for_Java_4_Release_Notes.html#version-4-3-0 (for a server that has a large number of custom structures, this is like going from 10000 service calls down to around 100).

April 2, 2020, 17:25 EEST
hbrackel (Member)

Hi,

thanks for your quick reply. I changed the timeout first to 20000 ms and then to 70000 ms, but this actually made things worse. At 20000 ms, the request timed out after 5000 ms; at 70000 ms, the timeout happened after 11000 ms (according to the log; the actual message appeared at different times). So I don’t see a direct relationship between the timeout period and the actual request timeout…

April 3, 2020, 10:18 EEST
Bjarne Boström (Moderator)

Hmm.. that should have worked.. interesting.

Could you please double-check that you did it like e.g.

client.setTimeout(10, TimeUnit.MINUTES);
client.connect();

Is the server made with the Java SDK or something else? Does its log show anything?

If you have the option to use https://www.prosysopc.com/blog/opc-ua-wireshark/, you could validate that the correct number is sent in the RequestHeader TimeoutHint. Additionally, if you do not see the server reply, or you see the timeout before the reply arrives, then our client side timed the message out. If instead you see the server return that code, it could be that the server is timing the requests out sooner. However, the log line in the first post is a client-side timeout.

The client side times out at 1.1 * RequestHeader.TimeoutHint. For 120000, this is 120000 * 1.1 == 132000, the number you saw.

While probably not related, do note that for initial connection timeouts there are, or at least can be, operating-system-level timeouts; we cannot wait longer than that, since the OS will time out by itself. Typically those are 1 or 2 minutes. But that shouldn’t affect anything once the socket connection is formed.

Also do check that the client and server clocks are synchronized at least to some level. The RequestHeader does include a client-side timestamp; while that should only be used for logging purposes, in theory it is possible that a server would incorrectly use it to calculate timeouts on its side.

April 3, 2020, 14:29 EEST
Bjarne Boström (Moderator)

Additionally, while the numbers don’t exactly match in this case, you might also need to call UaClient.setStatusCheckTimeout in addition to setTimeout. You can use the same values here as in setTimeout.

The status check timeout is used when UaClient, by default, reads the server status once per second. The default status check timeout value is 10000 ms, i.e. 10 seconds. Thus it might be this after all, so please try changing it to some crazy value as well. Note, however, that the value shouldn’t normally be that high: if it is too high, the UaClient will also be unaware that the server is e.g. shutting down. So it should be larger than the time the server is normally expected to need for an answer, but how much larger is then a good question.
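
A minimal sketch combining the two settings (the millisecond setter is the form used later in this thread; the values and helper name are just examples):

import java.util.concurrent.TimeUnit;
import com.prosysopc.ua.client.UaClient;

public class TimeoutTuningSketch {
    // Relax both timeouts while debugging over a slow link; call before connect().
    static void relaxTimeouts(UaClient client) {
        client.setTimeout(10, TimeUnit.MINUTES); // service call timeout
        client.setStatusCheckTimeout(600000);    // status check timeout, 10 minutes in ms
    }
}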

April 16, 2020, 15:07 EEST
hbrackel (Member)

I still have not figured out where exactly the problem is in the described situation. Changing the StatusCheckTimeout does not seem to make any difference.

I create 2 subscriptions (plus a handful of monitored items) in the serverStatusListener when the state changes from xxx to Running. This happens on a different thread, though, in order not to block the listener. If I only add a single subscription plus monitored items, things work as expected.
I need to track this path further down….

May 7, 2020, 17:46 EEST
hbrackel (Member)

Hurray! After upgrading to SDK v4.3.0 the timeouts no longer occur! I still don’t know what caused them, but anyway… the problem has been solved.

Many thanks!

May 8, 2020, 17:30 EEST
Jouni Aro (Moderator)

Super!

November 25, 2020, 23:18 EET
MichaelBraveDigital (Member)

I’m having the same issues, but unfortunately I’m on the 4.4.2 SDK and it’s still happening. This was not a problem while I was still on the 3.1.8 SDK.

Here are my unchanged timeouts, which have been working fine:
client.setTimeout(10000);
client.setAutoReconnect(false);
client.setStatusCheckTimeout(30000);
client.setStatusCheckInterval(30000);

And here are the errors I’m getting:

com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 22:58:28.072 WARN [Work-Executor-50] – Request id=393 timeouted 6601ms elapsed. timeout at 6600ms
com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 22:58:28.273 WARN [Work-Executor-55] – Request id=395 timeouted 6600ms elapsed. timeout at 6600ms
com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 22:58:28.556 WARN [Work-Executor-11] – -453031844 Unidentified message, RequestId=393, type=ServiceFault!
com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 22:58:28.556 WARN [Work-Executor-22] – -453031844 Unidentified message, RequestId=395, type=ServiceFault!
com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 22:58:28.556 WARN [Work-Executor-11] – ServiceFault=ServiceFault [ResponseHeader=”ResponseHeader [Timestamp=”11/25/20 20:58:25.4978826 GMT”, RequestHandle=”391″, ServiceResult=”Bad_NoSubscription (0x80790000) “There is no subscription available for this session.””, ServiceDiagnostics=”Diagnostic Info:
“, StringTable=”[]”, AdditionalHeader=”null”]”]
com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 22:58:28.556 WARN [Work-Executor-22] – ServiceFault=ServiceFault [ResponseHeader=”ResponseHeader [Timestamp=”11/25/20 20:58:25.4983774 GMT”, RequestHandle=”393″, ServiceResult=”Bad_NoSubscription (0x80790000) “There is no subscription available for this session.””, ServiceDiagnostics=”Diagnostic Info:
“, StringTable=”[]”, AdditionalHeader=”null”]”]

November 26, 2020, 12:05 EET
Bjarne Boström (Moderator)

Hi,

Just as a clarification, is this the same 500 UaClients use-case as you have in https://forum.prosysopc.com/forum/opc-ua-java-sdk/outofmemoryerror-java-heap-space-in-binarydecoder-getdatavalue/ ?

As additional info: each connection uses 2 threads, one “stack level” ReadThread that reads the data from the socket and one “SDK level” PublishTask thread within the UaClient.

4.x doesn’t exactly have a “Stack” anymore (see https://downloads.prosysopc.com/opcua/Prosys_OPC_UA_SDK_for_Java_4_Release_Notes.html#version-4-0-0), but some functionalities are still in practice as they were in the stack that 3.x used. Some are not.

3.x did decoding of Messages directly within the ReadThread.

4.x (4.2.0 onwards, to be precise) decodes messages within the blocking-work-executor pool, which is shared by all UaClients (and UaServers) within the JVM.

We had to do it like this in our current design, since we now support decoding custom Structure types that are added after the connection has been made (as long as they have DataTypeDefinitions and encoding nodes to do the reverse mapping from). The decoding thread will Read/Browse that info, thus it cannot be the ReadThread, otherwise it would deadlock itself.

That being said, I’m not sure what the solution would be within the 4.x scope. You can try to increase the number of threads in the shared pool: call StackUtils.setBlockingWorkerThreadPoolCoreSize at the start of main, or before any other calls to the SDK. It is 64 by default; maybe try something like 200, if you can.
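
A hedged sketch of that (the package of StackUtils is assumed from the 4.x stack repackaging, so verify it against the SDK javadoc; the endpoint address is a placeholder):

import com.prosysopc.ua.client.UaClient;
import com.prosysopc.ua.stack.utils.StackUtils; // package assumed, check your SDK

public class PoolSizeSketch {
    public static void main(String[] args) throws Exception {
        // Must run before any other SDK calls: grow the shared
        // blocking-work-executor pool from its default of 64 threads.
        StackUtils.setBlockingWorkerThreadPoolCoreSize(200);

        UaClient client = new UaClient("opc.tcp://remote-host:4840"); // placeholder
        client.connect();
    }
}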

At some point in the future we hopefully will have a way to get rid of the shared pool (or at least an option to have a dedicated “UaApplicationContext” etc., though even that might be an issue).

I could try to add a flag/strategy to override which Executor TcpConnection uses as a workaround within the 4.x scope. Send an email to uajava-support@prosysopc.com if you want a beta version (though we are quite close to the 4.5.0 release, so I can try to include it there).

November 26, 2020, 14:04 EET
Bjarne Boström (Moderator)

OK, the latest beta now has it (send an email to uajava-support@prosysopc.com if you want to test it ASAP; otherwise I think it will be in the next release mostly like this):

There is now a public static setExecutorProvider(ExecutorProvider) method on the TcpConnection class. By default it is a DefaultExecutorProvider, which just returns StackUtils.getBlockingWorkExecutor(), keeping the existing implementation.

You could now do e.g.:

import java.util.concurrent.Executor;
// plus the SDK imports for ExecutorProvider and TcpConnection

public class CurrentThreadExecutorProvider implements ExecutorProvider {

    // Runs each decode task directly on the calling thread.
    private static final Executor CURRENT_THREAD = command -> command.run();

    @Override
    public Executor get(TcpConnection tcpConnection) {
        return CURRENT_THREAD;
    }

}

or equivalent, and set an instance of that instead. Note that, at least for now, the get method is called again every time the Executor is needed. With that it should be quite close to what 3.x was.

However! It should be noted that using a “current thread” approach in any situation other than when UaClient.setInitTypeDictionaryAutoUsage(false) is set will lead to the mentioned deadlock once an unknown encoding NodeId is encountered (so basically any custom Structure). In those cases the executor should be backed by more than one thread, since one thread would be used for resolving the custom Structure, which can take a number of service calls (e.g. if its fields are also custom Structures). Also, when using the current-thread executor, one bad Message resulting in a DecodingException could break the whole connection and require a reconnect (this was indirectly fixed by moving to the blocking work pool).
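
For completeness, a sketch of the multi-thread variant mentioned above, i.e. an ExecutorProvider backed by a small fixed pool instead of the current thread (the SDK imports for ExecutorProvider and TcpConnection are left as a comment since this was a beta API at the time; the pool size is an arbitrary example):

import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
// plus the SDK imports for ExecutorProvider and TcpConnection (beta API)

public class FixedPoolExecutorProvider implements ExecutorProvider {

    // More than one thread, so resolving a custom Structure (which itself
    // performs service calls) cannot deadlock the decoding of its own response.
    private static final Executor POOL = Executors.newFixedThreadPool(4);

    @Override
    public Executor get(TcpConnection tcpConnection) {
        return POOL;
    }

}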

Since in the other post you said that you could disable the type dictionary init, would this be an acceptable workaround for the time being, if you can get it to work? If not, I’m not sure there is anything to be done short-term (like still within this year), but we are open to suggestions.

December 1, 2020, 16:55 EET
MichaelBraveDigital (Member)

Thanks for this Bjarne,

Yes, this is the same scenario with 500 UaClients. I moved our server over to hardware with more memory and increased the memory limits in Docker and in Java, and the Bad_NoSubscription errors seem to have been fixed. I will try increasing the blocking worker pool size, because I’d like to stay compatible with v4.x if possible.

But unfortunately my timeout issues are not resolved completely.
I set up my Java client on my local machine and connected only a single OPC connection with a subscription active. I get the same timeout errors, BUT this seems to happen ONLY when I call subscription.setPublishingEnabled(false). Once I resume publishing on the subscription, everything works fine again. Could this be caused by the OPC server on the other end being outdated compared to v4.x of the SDK?

com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 16:47:29.128 WARN [Work-Executor-40] – 117143510 Unidentified message, RequestId=54, type=ServiceFault!
com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 16:47:29.128 WARN [Work-Executor-42] – 117143510 Unidentified message, RequestId=55, type=ServiceFault!
com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 16:47:29.128 WARN [Work-Executor-40] – ServiceFault=ServiceFault [ResponseHeader=”ResponseHeader [Timestamp=”12/01/20 14:47:28.5636852 GMT”, RequestHandle=”52″, ServiceResult=”Bad_Timeout (0x800A0000) “The operation timed out.””, ServiceDiagnostics=”Diagnostic Info:
“, StringTable=”[]”, AdditionalHeader=”null”]”]
com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 16:47:29.128 WARN [Work-Executor-42] – ServiceFault=ServiceFault [ResponseHeader=”ResponseHeader [Timestamp=”12/01/20 14:47:28.5642180 GMT”, RequestHandle=”53″, ServiceResult=”Bad_Timeout (0x800A0000) “The operation timed out.””, ServiceDiagnostics=”Diagnostic Info:
“, StringTable=”[]”, AdditionalHeader=”null”]”]
com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 16:47:30.107 WARN [Work-Executor-44] – Request id=57 timeouted 6606ms elapsed. timeout at 6600ms
com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 16:47:32.228 WARN [Work-Executor-47] – Request id=58 timeouted 6610ms elapsed. timeout at 6600ms
com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp 16:47:32.468 WARN [Work-Executor-53] – Request id=59 timeouted 6610ms elapsed. timeout at 6600ms

December 2, 2020, 17:20 EET
Bjarne Boström (Moderator)

It is possible; we have made fixes to the Subscription part over the years, so it depends on how old it is. Or, well, are the servers you are connecting to also made with our SDK? If not, it is hard to say, since basically any such timeout could happen if the server simply does not respond (for some reason).

The 6600 ms looks odd in the log. I cannot find that number in our codebase (other than things like the NodeId “16600” and that kind of scenario). Have you set any custom timeout values?

If possible, you could use https://www.prosysopc.com/blog/opc-ua-wireshark/ to see whether there is anything odd, or if possible send a capture to us (uajava-support@prosysopc.com).

December 3, 2020, 16:31 EET
MichaelBraveDigital (Member)

Just for anyone else browsing this thread: my issue was that I had set UaClient.setPublishRequestTimeout(6000), but it is better to leave the PublishRequestTimeout at its default value. (That also matches the 6600 ms in the logs above: the client times out at 1.1 × the timeout hint, and 6000 ms × 1.1 = 6600 ms.)
