Description of problem:
The engine failed suddenly with a "java.lang.OutOfMemoryError: Java heap space" error, which caused all hosts in the environment to become non-responsive. The following is registered in the server log:
==
2017-10-11 07:04:45,285+05 ERROR [stderr] (ResponseWorker) Exception in thread "ResponseWorker" java.lang.OutOfMemoryError: Java heap space
2017-10-11 07:04:42,372+05 ERROR [io.undertow.servlet] (default task-84) Exception while dispatching incoming RPC call: com.google.gwt.user.client.rpc.SerializationException: Can't find the serialization policy file. This probably means that the user has an old version of the application loaded in the browser. To solve the issue the user needs to close the browser and open it again, so that the application is reloaded.
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
==
The "GC overhead limit exceeded" error is logged multiple times in the server log. At some point, the engine stops even trying to check the status of the hosts.
The engine is configured with a 1GB heap. The environment is not large: fewer than 100 VMs and 20 hosts.
There is no specific event before the issue; the only operation I can see beforehand is a clone. Per the heap dump, most of the memory is retained by org.ovirt.vdsm.jsonrpc.client.internal.ResponseTracker.
====
One instance of "org.ovirt.vdsm.jsonrpc.client.internal.ResponseTracker" loaded by "org.jboss.modules.ModuleClassLoader @ 0xc23ccf30" occupies 643,815,688 (62.78%) bytes. The memory is accumulated in one instance of "java.util.concurrent.ConcurrentHashMap$Node[]" loaded by "<system class loader>".
Class Name | Shallow Heap | Retained Heap
-------------------------------------------------------------------------------------------------------------------
org.ovirt.vdsm.jsonrpc.client.internal.ResponseTracker @ 0xc4dfac10 | 40 | 643,815,688
|- <class> class org.ovirt.vdsm.jsonrpc.client.internal.ResponseTracker @ 0xc4d957c8| 8 | 8
|- isTracking java.util.concurrent.atomic.AtomicBoolean @ 0xc4dfac38 | 16 | 16
|- runningCalls java.util.concurrent.ConcurrentHashMap @ 0xc4dfac48 | 64 | 536
|- map java.util.concurrent.ConcurrentHashMap @ 0xc4dfac88 | 64 | 2,456
|- hostToId java.util.concurrent.ConcurrentHashMap @ 0xc4dfacc8 | 64 | 643,812,640
|- queue java.util.concurrent.ConcurrentLinkedQueue @ 0xc4dfad08 | 24 | 24
|- lock java.util.concurrent.locks.ReentrantLock @ 0xc4dfad20 | 16 | 16
'- Total: 7 entries | |
-------------------------------------------------------------------------------------------------------------------
====
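For context, the shape of the dump (nearly all of the retained heap hanging off the single hostToId ConcurrentHashMap) is consistent with per-host tracking entries being added for every call but never removed. The sketch below is only a hypothetical illustration of that pattern; class and method names are invented and this is not the actual ResponseTracker code:
==
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of the accumulation pattern suggested by the heap dump,
// NOT the real org.ovirt.vdsm.jsonrpc.client.internal.ResponseTracker.
public class LeakyCallTracker {

    // Mirrors the shape of the "hostToId" map in the dump:
    // host name -> ids of calls tracked for that host.
    private final ConcurrentHashMap<String, Queue<String>> hostToId = new ConcurrentHashMap<>();

    // Called for every outgoing request.
    public void track(String host, String callId) {
        hostToId.computeIfAbsent(host, h -> new ConcurrentLinkedQueue<>()).add(callId);
    }

    // If this cleanup is ever skipped (e.g. on timeouts or connection errors),
    // the per-host queues retain every id and the map grows without bound.
    public void untrack(String host, String callId) {
        Queue<String> ids = hostToId.get(host);
        if (ids != null) {
            ids.remove(callId);
        }
    }

    public int trackedCalls(String host) {
        Queue<String> ids = hostToId.get(host);
        return ids == null ? 0 : ids.size();
    }

    public static void main(String[] args) {
        LeakyCallTracker tracker = new LeakyCallTracker();
        // Simulate a monitoring loop that keeps issuing calls to one host
        // without ever untracking them; retained memory grows every iteration.
        for (int i = 0; i < 1_000_000; i++) {
            tracker.track("host-01", "call-" + i);
        }
        System.out.println("calls still tracked for host-01: " + tracker.trackedCalls("host-01"));
    }
}
==
With only a 1GB heap, even a modest amount of state leaked per monitoring call would add up to the figures above over time.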
There was around 26GB of free memory on the RHV-M server at the time of the issue.
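Given that much free memory, raising the engine heap would at least buy time while the leak itself is addressed. If I recall the packaging correctly, the heap is controlled by the ENGINE_HEAP_MIN/ENGINE_HEAP_MAX settings and can be overridden with a drop-in file under /etc/ovirt-engine/engine.conf.d/ followed by an ovirt-engine restart; the file name and values below are only an example, please verify against the installed version:
==
# /etc/ovirt-engine/engine.conf.d/90-heap.conf  (hypothetical file name, example values)
ENGINE_HEAP_MIN="4g"
ENGINE_HEAP_MAX="4g"
==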
Version-Release number of selected component (if applicable):
rhevm-4.1.2.3-0.1.el7.noarch
Additional info:
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHEA-2018:1516