Bug 1028966

Summary: require openjdk version which solves the memory leak in RHEV-M: service ovirt-engine gets OOM killed after few days of uptime
Product: Red Hat Enterprise Virtualization Manager Reporter: David Jaša <djasa>
Component: ovirt-engine-setup Assignee: Alon Bar-Lev <alonbl>
Status: CLOSED ERRATA QA Contact: Jiri Belka <jbelka>
Severity: urgent Docs Contact:
Priority: urgent    
Version: 3.3.0 CC: aberezin, acathrow, akotov, alonbl, aph, ashish_bunkar, avyadav, bazulay, cpelland, dfediuck, djasa, ederevea, eedri, ernest.beinrohr, fweimer, herrold, iheim, iweiss, jbw, juan.hernandez, lpeer, lyarwood, lzelkha, mkalinin, myakove, nobody, pablo.iranzo, parsonsa, paulds, pprakash, ptisnovs, Rhev-m-bugs, sabose, sbonazzo, yeylon
Target Milestone: --- Keywords: ZStream
Target Release: 3.3.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard: integration
Fixed In Version: is27 Doc Type: Bug Fix
Doc Text:
The java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6 package modified the default Java configuration to use NSS as the default security provider; due to a bug in the PKCS#11 provider, the JVM leaked memory. This issue is resolved by adding a conflict with java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6 so that an older or newer package is pulled in instead. There is now no memory leak, as NSS is not the default security provider in those packages.
Story Points: ---
Clone Of:
: 1040049 (view as bug list) Environment:
Last Closed: 2014-01-21 17:38:51 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1036813    
Bug Blocks: 1030460, 1040049, 1049022    
Attachments:
Description Flags
/var/log/messages
none
Output of pmap -x after one day running
none
Output of the native memory tracker after one day running
none
Thread dump after one day running
none

Description David Jaša 2013-11-11 11:23:43 UTC
Created attachment 822361 [details]
/var/log/messages

Description of problem:
Possible memory leak in RHEV-M: the ovirt-engine service gets OOM killed after a few days of uptime. The last OOM on our setup occurred on Sunday, when there is no load on it (the system is idle over the weekend). When used, the load is quite low (~4 concurrent manual users) and the VM has plenty of memory for such a load (6 GB, recently increased from 4 GB, which was itself increased from the 3 GB that sufficed for all previous versions).

Version-Release number of selected component (if applicable):
is21 / rhevm-backend-3.3.0-0.31.beta1.el6ev.noarch

How reproducible:
Always on our setup with the given version.

Steps to Reproduce:
1. Use RHEV-M lightly 
2.
3.

Actual results:
ovirt-engine service gets OOM'd

Expected results:
ovirt-engine has roughly constant memory requirements over time under a given load

Additional info:
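A quick way to confirm the OOM kills from the attached /var/log/messages (a sketch; the exact kernel message wording varies):

# grep -iE 'out of memory|oom-killer' /var/log/messages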

Comment 4 David Jaša 2013-11-14 09:03:18 UTC
duplicate of bug 1026100?

Comment 5 Barak 2013-11-14 10:26:46 UTC
looks like a duplicate of Bug 1026100

David - is this a physical rhevm host or a VM - and what is the amount of memory assigned to it ?

Can you please verify the workaround mentioned in comment #23 of the above Bug ?

Comment 6 David Jaša 2013-11-14 11:02:35 UTC
(In reply to Barak from comment #5)
> looks like a duplicate of Bug 1026100
> 
> David - is this a physical rhevm host or a VM

VM hosted in another RHEV

> - and what is the amount of
> memory assigned to it ?
> 

currently 6 GB, (specified in Description...)

> Can you please verify the workaround mentioned in comment #23 of the above
> Bug ?

I just added it to the engine configuration, but I will see the results in a day or two.

Comment 7 Juan Hernández 2013-11-14 11:48:52 UTC
The situation here is similar to bug 1026100, but not exactly the same. In this case the database has a total of 81 processes and consumes a total of 2.31 GiB, which is very similar to the other bug.

But in this case the engine is consuming 3.24 GiB of RAM, unlike the other bug, where the engine was consuming only 440 MiB. This isn't normal, as the engine's heap is configured for a maximum of 1 GiB, and since this happens after a long time running it certainly looks like a memory or thread leak.

I suggest the same as in the other bug: reduce the minimum number of connections:

  ENGINE_DB_MIN_CONNECTIONS=1

This will release some memory, so that the machine will survive longer.
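A minimal sketch of applying this (assuming the engine reads per-setting overrides from /etc/ovirt-engine/engine.conf.d/*.conf; the file name below is arbitrary):

# cat > /etc/ovirt-engine/engine.conf.d/99-db-connections.conf <<'EOF'
ENGINE_DB_MIN_CONNECTIONS=1
EOF
# service ovirt-engine restart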

I think it is worth looking deeper into the potential leak. To do so we will need a heap dump of the engine's Java virtual machine when this happens. As it is impossible to get a heap dump after the OOM killer has killed it, I suggest running this same job on a machine with more RAM and then, when the engine reaches this level of memory use, generating the heap dump so that we can study it.

Comment 8 David Jaša 2013-11-14 12:49:21 UTC
(In reply to Juan Hernández from comment #7)
...
> I suggest the same that in the other bug, reduce the minimum number of
> connections:
> 
>   ENGINE_DB_MIN_CONNECTIONS=1
> 

already done (comment 6)

> This will release some memory, so that the machine will survive longer.
> 
> I think that it is worth to look at the potential leak deeper. To do so we
> will need a heap dump of the Java virtual machine of the engine when this
> happens. As it is impossible to get a heap dump after the OOM killer has
> killed it, I would suggest to run this same job in a machine with more RAM
> and then, when the engine reaches this level of use of memory generate the
> heap dump so that we can study it.

OK, I've raised the VM's memory to a hopefully ample 20 GB and I'll keep an eye on RAM usage in the following days.

Comment 9 Juan Hernández 2013-11-18 10:41:17 UTC
RHS-C bug 1030460 looks very similar to this one.

Comment 10 Juan Hernández 2013-11-18 12:11:19 UTC
After analyzing a machine that reached a point where the engine is consuming more than 9 GiB of RSS I found the following:

* Generated a heap dump as follows:

# ps -u ovirt
  PID TTY          TIME CMD
 1665 ?        00:00:00 ovirt-engine.py
 1677 ?        01:21:54 java
 1691 ?        00:00:31 ovirt-websocket
# su - ovirt -s /bin/sh
-sh-4.1$ jmap -dump:format=b,file=/var/log/ovirt-engine/dump/juan-20131118.dump 1677

The size of this dump is 972 MiB. I still need to analyze its content, but the size of a dump is always larger than the heap in use, so it isn't the heap that is consuming the RAM, and we can probably discard a memory leak inside the engine.
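As a complementary check of actual heap occupancy (a sketch, assuming the java-1.7.0-openjdk-devel package that provides jstat is installed):

# su - ovirt -s /bin/sh -c "jstat -gcutil 1677 1000 5"

This samples the utilization of the heap generations, as percentages of their capacity, five times at one-second intervals.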

* Generated a thread stack dump as follows:

# kill -3 1677
# cp /var/log/ovirt-engine/console.log /var/log/ovirt-engine/dump/juan-20131118.txt

There are 196 threads in that dump. Taking into account that each thread has a default stack size of 1 MiB, this will consume roughly 200 additional MiB. That is still far from the total 9 GiB, so we can also discard a thread leak.
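A quick way to count the threads in such a dump (a sketch; each Java thread entry in a HotSpot thread dump starts with a quoted thread name, so VM-internal threads may be counted slightly differently):

# grep -c '^"' /var/log/ovirt-engine/dump/juan-20131118.txt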

* Generated a memory map as follows:

# pmap 1677 > /var/log/ovirt-engine/dump/juan-20131118.pmap

The interesting thing from this memory map is that there are many mappings of approx 64 MiB:

# grep 65536K *.pmap  | wc -l
87

That means that more than 5 GiB of RAM is dedicated to these mappings, which gets us really close to the 9 GiB.
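As a rough cross-check, the sizes of those mappings can be summed directly (a sketch assuming pmap's default column layout, with the size in KiB and a trailing "K" in the second column):

# grep 65536K juan-20131118.pmap | awk '{ sub(/K$/, "", $2); kib += $2 } END { printf "%.1f GiB\n", kib / 1024 / 1024 }'

With the 87 mappings counted above this comes to roughly 5.4 GiB.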

These 64 MiB mappings are probably generated by the glibc allocator: it has been reported that the glibc allocator (malloc and friends) may over-allocate memory for applications with many threads:

http://sourceware.org/bugzilla/show_bug.cgi?id=11261
https://bugzilla.redhat.com/show_bug.cgi?id=799327

I don't have a conclusion yet, will continue studying the issue.

Comment 11 Juan Hernández 2013-11-18 15:30:06 UTC
The GNU C library has a MALLOC_ARENA_MAX environment variable that controls the number of 64 MiB memory arenas created on x86_64 machines.

These arenas are created to avoid contention on the locks that coordinate access to "malloc" across threads. In Java, objects aren't allocated with "malloc" but with the Java-specific allocator, so this contention shouldn't be a problem.

David Jaša is already running the job after adding the following to /etc/sysconfig/ovirt-engine:

export MALLOC_ARENA_MAX=1

If this setting has a positive effect on the engine's longevity and memory use, then we should probably add it by default.
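A quick check after restarting the engine with this variable set (a sketch; 1677 is the illustrative java PID from comment 10, find the current one with "ps -u ovirt"): the count of 64 MiB arena mappings should drop from ~87 to 1.

# pmap 1677 | grep -c 65536K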

Comment 12 Juan Hernández 2013-11-19 12:28:41 UTC
The change in MALLOC_ARENA_MAX did have an effect, but not a very positive one. The engine now generates only one of those memory areas instead of 87; I think this is the main malloc arena. But that area is growing continually. This is from the output of "pmap":

0000000000cc3000 1666792K rw---    [ anon ]
00000000aff80000 1311232K rw---    [ anon ]

The second line corresponds to the Java heap, and it isn't growing. But the first line corresponds (I believe) to the main malloc arena, and it has been growing continually since the Java virtual machine was started. After only one day it is already taking 1.6 GiB. The only explanations I can find for this are an abuse of direct memory buffers in the Java code or a leak in the Java virtual machine's native code. As far as I know we don't use direct buffers in the engine.
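A simple way to watch that mapping over time (a sketch; replace <java-pid> with the engine's java PID as found with "ps -u ovirt", and note that the address shown above will differ after a restart):

# while true; do date; pmap <java-pid> | grep '^0000000000cc3000'; sleep 3600; done >> /var/log/ovirt-engine/arena-growth.log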

I continue studying the issue.

Comment 13 Juan Hernández 2013-11-19 16:44:16 UTC
After analyzing the Java heap dump I don't find a large number of direct memory buffers, which confirms that this is a leak generated in native code. In order to get more information, I suggest running the engine under the control of the valgrind memcheck tool:

1. Stop the engine:

# service ovirt-engine stop

2. Install valgrind and the debug info for the OpenJDK package:

# yum -y install valgrind yum-utils
# debuginfo-install java-1.7.0-openjdk

3. Replace the Java launcher with a script that runs the Java application under the control of the valgrind memcheck tool:

# which java
/usr/bin/java

# readlink -f /usr/bin/java
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin/java

# cd /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin

# mv java java.original

# cat > java <<'.'
#!/bin/sh
exec /usr/bin/valgrind --tool=memcheck --leak-check=full /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin/java.original "$@"
.

# chmod +x java

# restorecon java

You may need to adjust the location of the Java virtual machine in the above steps.

4. Truncate the console.log file:

# > /var/log/ovirt-engine/console.log

5. Start the engine:

# service ovirt-engine start

6. Run the engine for a while, till it generates a noticeable leak of native memory (you can use the pmap command and check the size of the memory area described in comment #12), then stop it:

# service ovirt-engine stop

This should write information about memory leaks caused by calls to the malloc family of functions to /var/log/ovirt-engine/console.log. Hopefully we will be able to extract something useful from there.

Remember to restore the original Java launcher when finished.
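A sketch of that restore step, assuming the same paths as above:

# cd /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin
# rm -f java
# mv java.original java
# restorecon java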

Comment 14 David Jaša 2013-11-21 17:37:46 UTC
Valgrind won't be the way to go - JBoss initialization alone takes over 10 minutes and engine initialization on top of it another 10+, meaning that engine-initiated LDAP connections and host connections time out, as well as client-initiated connections to the web UIs.

It seems that monitoring *alloc* calls as suggested by Pavel could be the way to go. Pavel, do you have a handy strace invocation that could help, please? :)

Comment 15 Juan Hernández 2013-11-21 19:48:50 UTC
You could try the following:

1. Install the glibc-headers, glibc-utils and gcc packages.

2. Create a trace.c file with the following content:

---8<---
#include <mcheck.h>

void __trace_start() __attribute__((constructor));
void __trace_stop() __attribute__((destructor));

void __trace_start() {
    mtrace();
}

void __trace_stop() {
    muntrace();
}
--->8---

3. Compile it as follows:

# gcc -fPIC -shared -o /usr/share/ovirt-engine/bin/libtrace.so trace.c

(You can do the compilation in a different machine to avoid installing the above packages in the machine that is running the engine.)

4. Create a "java_with_malloc_trace.sh" script in the same directory where the current "java" executable is (/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin, for example). Use this content:

---8<---
#!/bin/sh

# This is the location of the malloc trace file, needs to be a place
# where the engine can write:
export MALLOC_TRACE=/var/log/ovirt-engine/malloc.trace

# This is needed to enable tracing:
export LD_PRELOAD=/usr/share/ovirt-engine/bin/libtrace.so

# Run the Java virtual machine:
exec /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin/java "$@"
--->8---

5. Modify the engine start script to use "java_with_malloc_trace.sh" instead of just "java". In the /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py file near line 256:

---8<---
#self._executable = os.path.join(
#    java.Java().getJavaHome(),
#    'bin',
#    'java',
#)
self._executable = "/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin/java_with_malloc_trace.sh"
--->8---

6. Restart the engine. It should generate a *huge* amount of information in the /var/log/ovirt-engine/malloc.trace file. Hopefully we will be able to analyze it with the mtrace script included in the glibc-utils package.
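A sketch of the analysis step, assuming the trace completes cleanly (mtrace here is the script shipped in glibc-utils, not the C function):

# mtrace /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin/java /var/log/ovirt-engine/malloc.trace > /tmp/mtrace-report.txt

The report lists allocations that were never freed, together with the address of the immediate caller.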

Comment 16 David Jaša 2013-11-22 11:55:13 UTC
OK, the engine is now running with this library loaded and malloc.trace log file is filling with messages.

Comment 17 Juan Hernández 2013-11-25 11:43:32 UTC
Tracing malloc didn't give good results; apparently all the JVM threads lock up after a while.

Let's try to use the "native memory tracking" feature that has recently been added to the Java virtual machine:

1. Make sure that you have the java-1.7.0-openjdk-devel package installed, as we will need the jcmd command it provides.

2. Modify the engine configuration to add the -XX:NativeMemoryTracking=detail option to the JVM. Add a 99-my.conf file to the /etc/ovirt-engine/engine.conf.d directory with the following content:

ENGINE_JVM_ARGS="${ENGINE_JVM_ARGS} -XX:NativeMemoryTracking=detail"

3. Restart the engine:

# service ovirt-engine restart

4. Use the jcmd command to generate a report of the engine's memory use:

# su - ovirt -s /bin/sh -c "jcmd $(pidof ovirt-engine) VM.native_memory detail" >> /var/log/ovirt-engine/ntm.log

This will append to the /var/log/ovirt-engine/ntm.log file a report of the native memory in use. If possible add this to a crontab so it will run periodically, every 5 minutes, for example.
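A hedged example of such a crontab entry (edit root's crontab with "crontab -e"; prepending the date makes the reports easier to correlate):

*/5 * * * * ( date; su - ovirt -s /bin/sh -c "jcmd $(pidof ovirt-engine) VM.native_memory detail" ) >> /var/log/ovirt-engine/ntm.log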

Comment 18 David Jaša 2013-11-25 13:48:52 UTC
(In reply to Juan Hernández from comment #17)
> Tracing malloc didn't give good results, apparently all the JVM threads lock
> after a while.
> 
> Lets try to use the "native memory tracking" that has recently been added to
> the Java virtual machine:
> 
> 1. Make sure that you have the package java-1.7.0-openjdk-devel package
> installed, as we will need the jcmd command provided by that package.
> 
> 2. Modify the engine configuration to add the
> -XX:NativeMemoryTracking=detail option to the JVM. Add a 99-my.conf file to
> the /etc/ovirt-engine/engine.conf.d directory with the following content:
> 
> ENGINE_JVM_ARGS="${ENGINE_JVM_ARGS} -XX:NativeMemoryTracking=detail"
> 
> 3. Restart the engine:
> 
> # service ovirt-engine restart
> 
> 4. Use the the jcmd command to generate a report of the use of memory of the
> engine:
> 
> # su - ovirt -s /bin/sh -c "jcmd $(pidof ovirt-engine) VM.native_memory
> detail" >> /var/log/ovirt-engine/ntm.log
> 
> This will append to the /var/log/ovirt-engine/ntm.log file a report of the
> native memory in use. If possible add this to a crontab so it will run
> periodically, every 5 minutes, for example.

OK, the command is now running from crontab every 5 minutes; in addition, the date and time are logged before each entry.

Comment 19 Juan Hernández 2013-11-26 10:36:31 UTC
After one day running with native memory tracking enabled, the memory mapping that concerns us has already grown to approximately 2 GiB:

# pmap -x 8979
Address           Kbytes     RSS   Dirty Mode   Mapping
0000000000400000       4       4       0 r-x--  java
0000000000600000       8       8       4 rw---  java
0000000001dfd000 1978264 1978136 1978136 rw---    [ anon ]  <---
00000000aff80000 1311232  766464  766464 rw---    [ anon ]

I believe that this is the malloc arena, and there is only one because we are using MALLOC_ARENA_MAX=1.

This mapping doesn't appear in the list of mappings generated by the native memory tracker, and the total memory use reported by the tracker doesn't take this area into account.

I am attaching a pmap and the output of the native memory tracker.

Comment 20 Juan Hernández 2013-11-26 10:41:37 UTC
Created attachment 829178 [details]
Output of pmap -x after one day running

Comment 21 Juan Hernández 2013-11-26 10:42:24 UTC
Created attachment 829187 [details]
Output of the native memory tracker after one day running

Comment 22 Juan Hernández 2013-11-26 12:01:49 UTC
Created attachment 829233 [details]
Thread dump after one day running

Comment 23 Pavel Tisnovsky 2013-11-26 12:09:23 UTC
strace might not be the best tool for this problem, because the syscalls that allocate memory are used to obtain fairly large chunks of RAM at a time, i.e. they are not issued for each malloc() and free(). It would be better to use mtrace as mentioned above; let me check.

Comment 24 Juan Hernández 2013-11-26 12:15:03 UTC
Note that we already tried to use mtrace as described in comment 15. The JVM started to run and to generate the mtrace file, but shortly after starting it blocked completely and stopped writing to the file; all the threads were apparently blocked, so we couldn't make any real progress. Maybe we were doing something wrong.

Comment 25 Florian Weimer 2013-11-26 13:06:37 UTC
(In reply to Juan Hernández from comment #24)
> Note that we already tried to use mtrace as described in comment 15. The JVM
> started to run and to generate the mtrace file, but quickly after starting
> the JVM it blocked completely and stopped writing to the mtrace file, all
> the threads were apparently blocked, so we couldn't make any real progress.
> Maybe we were doing something wrong.

mtrace is not thread-safe: https://sourceware.org/bugzilla/show_bug.cgi?id=6420

As far as I can see, we haven't backported the fix. In order to pinpoint the leak we'd need deeper stack traces anyway; mtrace only logs the immediate caller.

Comment 26 Andrew Haley 2013-11-28 10:11:59 UTC
I can confirm that the memory leak is in the PKCS11 library and is probably this bug:

http://bugs.sun.com/view_bug.do?bug_id=6913047

After more than 12 hours running there is no increase in memory consumption with PKCS11 disabled. I think this is enough to prove that PKCS11 is the cause of the problem.

Disabling PKCS11 is a suitable workaround for the time being.

Comment 27 Juan Hernández 2013-11-28 10:47:21 UTC
Thanks for the analysis Andrew.

The workaround is to disable the PKCS#11 provider by modifying the $JAVA_HOME/lib/security/java.security file as follows:

#security.provider.1=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg
security.provider.1=sun.security.provider.Sun
security.provider.2=sun.security.provider.Sun
security.provider.3=sun.security.rsa.SunRsaSign
security.provider.4=sun.security.ec.SunEC
security.provider.5=com.sun.net.ssl.internal.ssl.Provider
security.provider.6=com.sun.crypto.provider.SunJCE
security.provider.7=sun.security.jgss.SunProvider
security.provider.8=com.sun.security.sasl.Provider
security.provider.9=org.jcp.xml.dsig.internal.dom.XMLDSigRI
security.provider.10=sun.security.smartcardio.SunPCSC
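A sketch of applying this by hand, with JAVA_HOME pointing at the JRE used by the engine (for example /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre); back up the file first, since a java package update may replace it:

# cp -p $JAVA_HOME/lib/security/java.security $JAVA_HOME/lib/security/java.security.orig
# vi $JAVA_HOME/lib/security/java.security    (apply the provider list above)
# service ovirt-engine restart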

Comment 28 Juan Hernández 2013-11-28 12:12:35 UTC
The solution I am proposing is to add support for using a custom java.security file to the engine startup scripts:

http://gerrit.ovirt.org/21826

Then use a custom java.security file that disables the PKCS#11 provider:

http://gerrit.ovirt.org/21827
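For reference, one generic JVM mechanism for pointing at an alternate security properties file is the java.security.properties system property; the "==" form replaces the default file entirely, provided security.overridePropertiesFile has not been disabled in the default java.security. Whether the patches above use exactly this mechanism is not shown here, and the path below is only illustrative:

ENGINE_JVM_ARGS="${ENGINE_JVM_ARGS} -Djava.security.properties==/etc/ovirt-engine/java.security"

Such a line could live in a file under /etc/ovirt-engine/engine.conf.d/, like the NativeMemoryTracking option in comment 17.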

To verify that after these changes the provider isn't actually loaded, use the following command:

# su - ovirt -s /bin/sh -c "jmap -histo $(pidof ovirt-engine)" | grep sun.security.pkcs11.SunPKCS11

Before these changes it should list at least one instance of the SunPKCS11 provider, something like this:

 309:            67           4824  sun.security.pkcs11.SunPKCS11$P11Service
 432:            69           2208  sun.security.pkcs11.SunPKCS11$Descriptor
1278:             1            144  sun.security.pkcs11.SunPKCS11

After the changes the list should be empty.

Comment 29 Juan Hernández 2013-11-28 12:37:31 UTC
Note that the SunPKCS11 provider has been enabled by default since version 1.7.0.11-2.4.0.2.el6 of the java-1.7.0-openjdk package, so users may start to see this issue even with previous versions of the engine.

Note also that the data warehouse component will probably need to disable the PKCS#11 provider as well, probably using the same custom java.security file.

Comment 31 Alon Bar-Lev 2013-12-02 09:39:02 UTC
Hello Itamar,

Why isn't this solved by an update to the RHEL base? Why do we need to work around Java issues introduced by RHEL?

Thanks,
Alon

Comment 32 Juan Hernández 2013-12-02 09:48:07 UTC
*** Bug 1035789 has been marked as a duplicate of this bug. ***

Comment 35 David Jaša 2013-12-02 13:53:41 UTC
(In reply to Juan Hernández from comment #27)
> Thanks for the analysis Andrew.
> 
> The workaround is to disable the PKCS#11 provider modifying the
> $JAVA_HOME/lib/security/java.security file as follows:
> 
> #security.provider.1=sun.security.pkcs11.SunPKCS11
> ${java.home}/lib/security/nss.cfg
> security.provider.1=sun.security.provider.Sun
> security.provider.2=sun.security.provider.Sun
> security.provider.3=sun.security.rsa.SunRsaSign
> security.provider.4=sun.security.ec.SunEC
> security.provider.5=com.sun.net.ssl.internal.ssl.Provider
> security.provider.6=com.sun.crypto.provider.SunJCE
> security.provider.7=sun.security.jgss.SunProvider
> security.provider.8=com.sun.security.sasl.Provider
> security.provider.9=org.jcp.xml.dsig.internal.dom.XMLDSigRI
> security.provider.10=sun.security.smartcardio.SunPCSC

The engine with this workaround applied stays at around 1 GB of resident memory even after 5 days of uptime.

Comment 36 Alon Bar-Lev 2013-12-02 16:27:52 UTC
Why isn't a bug opened against the RHEL openjdk? Opening one.

Comment 37 Juan Hernández 2013-12-02 19:35:14 UTC
Alon, please take the bug and add the reference to the patch you suggested.

Comment 38 Alon Bar-Lev 2013-12-02 19:39:04 UTC
I added the gerrit id.

However, I am not the owner of this issue; I wrote a patch, as an alternative approach, that summarizes the comments I had on the initial patches which were not applied.

Comment 39 Juan Hernández 2013-12-02 19:54:54 UTC
From this point on I think you should take care of the bug, as you know the solution that will be accepted, and its implications, better than I do. I'm explicitly asking you to take the bug, but if you don't want to, I will continue owning it, no problem.

Comment 42 Alon Bar-Lev 2013-12-09 01:19:17 UTC
Reopening, as the fix will be provided at the RHEL level and needs a different patch.

Comment 43 Alon Bar-Lev 2013-12-09 11:13:11 UTC
Before everyone ACKs, can someone explain how, with a single channel for RHEV, we can have different dependencies for rhel-6.4 and rhel-6.5? Or do we force people to upgrade to rhel-6.5?

Comment 44 Lee Yarwood 2013-12-09 11:33:21 UTC
(In reply to Alon Bar-Lev from comment #43)
> Before all ACK, can someone explain how in single channel for rhev we can
> have different dependencies for rhel-6.4 and rhel-6.5? Or we force people to
> upgrade to rhel-6.5?

In the classic RHN model (and the new subscription manager model, AFAIK) minor releases are shipped as a rolling release using a single repo (or channel). openjdk is only provided by the base RHEL 6 channel, and as such all a 6.4 user would be required to do to get this latest 6.5 version is a simple `yum update *openjdk* -y`. Obviously, by requiring the 6.5 version in our .spec we simply automate this during our own upgrade process.

Comment 45 Alon Bar-Lev 2013-12-09 18:24:01 UTC
Hi Lee,

I feel uneasy about forcing working systems to upgrade to a newer JDK only because rhel-6.5 introduced this.

What do you think about adding a conflict with java-1.7.0-openjdk-1.7.0.45-2.4.3.3 ?

Alon

Comment 48 Alon Bar-Lev 2013-12-10 15:09:44 UTC
With:
Conflicts: java-1.7.0-openjdk = 1:1.7.0.45-2.4.3.3.el6

# yum --disableplugin=versionlock --disablerepo=java update rhevm

<snip>

Error: rhevm conflicts with 1:java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64

# yum --disableplugin=versionlock update rhevm

<snip>

Dependencies Resolved

==============================================================================================================================================================================================
 Package                                             Arch                            Version                                                              Repository                     Size
==============================================================================================================================================================================================
Updating:
 java-1.7.0-openjdk                                  x86_64                          1:1.7.0.45-2.4.3.4.el6_5                                             java                           26 M
 rhevm                                               noarch                          3.2.5-0.0.6.20131210git70fc6ff.root.el6ev                            temp                          1.1 M

<snip>

-------

# engine-upgrade 

Checking for updates... (This may take several minutes)...[ DONE ]
12 Updates available:
 * java-1.7.0-openjdk-1.7.0.45-2.4.3.4.el6_5.x86_64
 * java-1.7.0-openjdk-devel-1.7.0.45-2.4.3.4.el6_5.x86_64
 * rhevm-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
 * rhevm-backend-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
 * rhevm-config-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
 * rhevm-dbscripts-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
 * rhevm-genericapi-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
 * rhevm-notification-service-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
 * rhevm-restapi-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
 * rhevm-tools-common-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
 * rhevm-userportal-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
 * rhevm-webadmin-portal-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch

Checking for updates... (This may take several minutes)...[ DONE ]

During the upgrade process, RHEV Manager will not be accessible.
All existing running virtual machines will continue but you will not be able to
start or stop any new virtual machines during the process.

Would you like to proceed? (yes|no): yes

Comment 50 Jiri Belka 2013-12-16 08:58:47 UTC
ok, is27. Running for a couple of days, no OOM kills, and memory usage is ok.

 Private  +   Shared  =  RAM used       Program
  1.0 GiB + 844.0 KiB =   1.0 GiB       java [updated]

Comment 51 errata-xmlrpc 2014-01-21 17:38:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0038.html

Comment 52 Alon Bar-Lev 2014-03-07 18:36:16 UTC
*** Bug 1072537 has been marked as a duplicate of this bug. ***