16:31, EET
March 10, 2016
Hi, I’ve been seeing the following error crop up in my Java server. The server has roughly 500 OPC UA connections going, and sometimes one of them produces a heap space error:
Uncaught Exception in Thread Thread[OPC-UA-Stack-Blocking-Work-Executor-49,5,main]
java.lang.OutOfMemoryError: Java heap space
at com.prosysopc.ua.stack.encoding.binary.BinaryDecoder.getDataValue(SourceFile:700)
at com.prosysopc.ua.stack.encoding.binary.BinaryDecoder$17.s(SourceFile:276)
at com.prosysopc.ua.stack.encoding.binary.BinaryDecoder$17.c(SourceFile:272)
at com.prosysopc.ua.stack.encoding.binary.BinaryDecoder.get(SourceFile:498)
at com.prosysopc.ua.types.opcua.Serializers$ReadResponseSerializer.getEncodeable(SourceFile:2120)
at com.prosysopc.ua.StructureSerializer.getEncodeable(SourceFile:83)
at com.prosysopc.ua.stack.encoding.utils.AbstractSerializer.getEncodeable(SourceFile:155)
at com.prosysopc.ua.stack.encoding.utils.SerializerComposition.getEncodeable(SourceFile:109)
at com.prosysopc.ua.stack.encoding.binary.BinaryDecoder.getMessage(SourceFile:1306)
at com.prosysopc.ua.stack.transport.tcp.io.TcpConnection$b$1.run(SourceFile:759)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Is this just the result of bad data coming back that causes the BinaryDecoder to loop indefinitely, or is there some bug going on here? It usually happens when the connection between the Java server and the OPC client has been interrupted, or while they are trying to reconnect. This last error happened when there was a general network issue and all 500 tried to reconnect in quick succession.
Another error I can see is the following, which has only occurred once:
4/11/2020 10:22:55.643 ERROR [tack-Blocking-Work-Executor-30] – Uncaught Exception in Thread Thread[OPC-UA-Stack-Blocking-Work-Executor-30,5,main]
java.lang.Error: Maximum permit count exceeded
at java.base/java.util.concurrent.Semaphore$Sync.tryReleaseShared(Semaphore.java:198)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1382)
at java.base/java.util.concurrent.Semaphore.release(Semaphore.java:619)
at com.prosysopc.ua.stack.transport.tcp.io.SecureChannelTcp$7.onMessage(SourceFile:1041)
at com.prosysopc.ua.stack.transport.tcp.io.TcpConnection$b$1.run(SourceFile:765)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Please let me know if you need any system requirements or additional info.
ProSys SDK Version: 4.4.2 (1266)
13:52, EET
April 3, 2012
Hi,
In general, once memory runs out, any part of the code can receive the OutOfMemoryError (OOM). It is therefore not possible to determine from the stack trace alone which part of the code triggered the OOM; it could be our SDK, or basically anything else.
Can you reproduce this issue?
Can you take a heap dump and see which instances take a lot of memory? https://www.baeldung.com/java-heap-dump-capture shows some ways to do that. You can also try the -XX:+HeapDumpOnOutOfMemoryError flag where possible.
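If attaching an external tool is awkward (e.g. inside a Docker container in AWS), a heap dump can also be triggered from within the process itself via the JDK's HotSpotDiagnosticMXBean. This is not Prosys SDK functionality, just a standard-JDK sketch of one of the capture methods the Baeldung article above describes; the class and file names are illustrative:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.lang.management.ManagementFactory;

public class HeapDumpHelper {

    /**
     * Writes an .hprof heap dump to the given path.
     * live=true dumps only reachable objects (smaller file).
     * Note: dumpHeap fails if the target file already exists.
     */
    public static void dumpHeap(String filePath, boolean live) {
        try {
            HotSpotDiagnosticMXBean mxBean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            mxBean.dumpHeap(filePath, live);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        dumpHeap("heap.hprof", true);
        System.out.println("Heap dump written to heap.hprof");
    }
}
```

The resulting .hprof file can then be opened in Eclipse MAT, VisualVM, or jhat-style tools to see which instances dominate the heap.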
Also, I assume that by connections you mean client connections to the server, rather than a single “general” server holding 500 UaClients? Each UaClient has a cache for UaNodes within the AddressSpace that stores 5000 nodes (plus any type nodes on top of that). That alone should not be enough for an OOM in typical cases, but it might still be something to consider. Note that we do read all types etc. on connect, so the number of instances within the cache can be large.
How much memory are you giving to Java? Or, if not specified, how much RAM does your machine have? (The default heap is typically 1 GB or 1/4 of the RAM, whichever is lower.)
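To see what the JVM actually ended up with (useful inside a container, where the RAM and CPU counts visible to the JVM may differ from the host's), the standard Runtime API reports the effective limits; a minimal sketch:

```java
public class JvmMemoryCheck {
    public static void main(String[] args) {
        // maxMemory() reflects the effective -Xmx (or the computed default).
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        // availableProcessors() shows the CPU count the JVM sees,
        // which a container-aware JVM derives from cgroup limits.
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.printf("Max heap: %d MiB, CPUs visible to JVM: %d%n",
                maxHeapBytes / (1024 * 1024), cpus);
    }
}
```

Running this inside the container is a quick way to confirm whether the JVM is picking up the intended heap size rather than a default derived from the host.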
15:42, EET
March 10, 2016
Hi Bjarne, thanks for the reply. Yes, unfortunately we are creating 500 instances of UaClient, because each OPC connection is to a different PLC device. If there is any way to reduce the caching footprint of the UaClient, please let me know.
I have not managed to find a way to reproduce the issue, as it doesn’t seem to happen in my local Docker container, only in a Docker instance running in AWS. The instance has 4 GB of memory available, and I’m executing the program with -Xmx3792m -Xms1792m -Xss2m.
However, the AWS metrics show that the instance has an average memory usage of 13% and a memory reservation of 20%, so I don’t think it has run out of memory at all. What might be happening is that memory is being exhausted at the thread level, as there are over 1000 threads in this application (at least one per UaClient).
I will look into -XX:+HeapDumpOnOutOfMemoryError, thank you. Any suggestions for making many instances of UaClient more efficient would be appreciated!
16:46, EET
April 3, 2012
My experience with Docker is still somewhat limited, but please make sure that you are running a “Docker-aware Java” version. I am not sure exactly what your situation is, but see e.g. https://www.docker.com/blog/improved-docker-container-integration-with-java-10/. It may or may not be relevant.
Do you need the UaNode-based API of the SDK? Do you read/write custom Structures? If no to both, you can call UaClient.setInitTypeDictionaryOnConnect(false) and UaClient.setInitTypeDictionaryAutoUsage(false) before connecting.
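As a sketch of the above, assuming the Prosys OPC UA Java SDK is on the classpath (only the two setters come from this thread; the endpoint URL and surrounding calls are illustrative assumptions):

```java
// Sketch only: requires the Prosys OPC UA Java SDK.
UaClient client = new UaClient("opc.tcp://plc-host:4840"); // hypothetical endpoint

// Skip reading the type dictionary on connect -- reduces per-client memory
// when neither the UaNode-based API nor custom Structures are needed.
client.setInitTypeDictionaryOnConnect(false);
client.setInitTypeDictionaryAutoUsage(false);

client.connect();
```

With 500 UaClient instances, any per-client saving from skipping the type dictionary is multiplied 500-fold, which is why this is worth trying before deeper tuning.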
Since they are somewhat relevant to this, linking both of these threads here:
– https://forum.prosysopc.com/forum/opc-ua-java-sdk/inittypedictionaryonconnect-performance/
– https://forum.prosysopc.com/forum/opc-ua-java-sdk/source-code-does-not-match-the-bytecode/
Also, it should be noted that the SDK uses a common, shared thread pool. It is typically sized at 64 threads for blocking work and at the number of cores for non-blocking work. StackUtils.setBlockingWorkerThreadPoolCoreSize can be used to change this if needed, though lowering it probably would not make sense here.
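For completeness, a sketch of how that setting would be applied, assuming the Prosys SDK on the classpath (only the method name comes from this thread; the call site and value are illustrative):

```java
// Sketch only: requires the Prosys OPC UA Java SDK.
// Must be called once at startup, before creating any UaClient instances,
// so the shared pool is sized before the stack starts using it.
StackUtils.setBlockingWorkerThreadPoolCoreSize(64); // 64 is the typical default noted above
```

Since all 500 UaClients share this one pool, raising or lowering it affects the whole process, not individual connections.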
23:04, EET
March 10, 2016
Thanks Bjarne, I have added those two flags, since we don’t have any custom types. The main thing we’re using this for is adding nodes to subscriptions to get real-time monitoring of datapoints, plus a little read/write of simple data types (Int, Float, String).
Thanks for your assistance. I will look into capturing a heap dump when it happens again.