Description of problem:
vdsm memory consumption grows on *some* environments.
Version-Release number of selected component (if applicable):
How reproducible:
100% in some installations
This is vdsm memory consumption *before* the host was restarted:
vdsm 22366 25.9 3.7 13697756 9890156 ? S<sl Sep24 4659:33 /usr/bin/python /usr/share/vdsm/vdsm
10:53:08 up 57 days, 2:34, 1 user, load average: 0.42, 0.27, 0.24
This is vdsm memory consumption *after* the host was restarted:
vdsm 24448 7.1 0.7 5612236 1881976 ? S<sl Sep21 212:56 /usr/bin/python /usr/share/vdsm/vdsm
13:23:40 up 12 days, 13:14, 1 user, load average: 1.27, 1.25, 1.26
In this case the host has been rebooted, but restarting vdsmd reduces memory consumption as well.
Please check whether using xmlrpc instead of jsonrpc eliminates the memory leak.
xmlrpc is fully supported in 3.5.
You can check this on a specific host - edit the host, open the advanced
options and uncheck the "use json rpc" checkbox. No need to restart vdsm
(but you may need to put the host into maintenance first).
(In reply to Nir Soffer from comment #15)
> Please check whether using xmlrpc instead of jsonrpc eliminates the memory leak.
> xmlrpc is fully supported in 3.5.
I suggested that they disable SSL and switch to XMLRPC. Waiting for test results.
(In reply to Pavel Zhukov from comment #18)
> (In reply to Nir Soffer from comment #15)
> > Please check whether using xmlrpc instead of jsonrpc eliminates the memory leak.
> > xmlrpc is fully supported in 3.5.
> I suggested that they disable SSL and switch to XMLRPC. Waiting for test
I'd start with what Nir suggested... and not disable SSL.
Looking at the comments above I can see that memory grows during migration. In 3.5 we use xmlrpc for this operation, so I am not really sure whether switching
protocols will gain anything. I suggest disabling SSL and seeing whether the same
memory consumption trend persists or changes somehow.
Eldad - any luck reproducing this, now that we have a suspect?
I could not find anything in this bug that suggests a leak in vdsm.
The only info found here is something that looks like the output
of ps without the headers, so we have to guess the meaning of the columns.
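For reference, a minimal sketch (assuming a Linux host) of reading the same fields with labels straight from /proc, so the columns are unambiguous. It uses its own PID for illustration; for vdsm you would pass vdsm's actual PID instead:

```python
# Sketch only: read VmSize/VmRSS (the ps VSZ/RSS columns) with labels
# from /proc/<pid>/status on Linux.
import os

def mem_kb(pid):
    """Return (VmSize, VmRSS) in KiB for the given PID."""
    fields = {}
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            key, _, rest = line.partition(":")
            fields[key] = rest.strip()
    return (int(fields["VmSize"].split()[0]),
            int(fields["VmRSS"].split()[0]))

# Demonstrate on our own process; substitute vdsm's PID in practice.
vsz, rss = mem_kb(os.getpid())
print("VSZ %d KiB, RSS %d KiB" % (vsz, rss))
```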
It seems that the customer expects vdsm memory after a reboot or restart
to be the same as vdsm memory after running for many days.
We do not expect this behavior. We expect vdsm memory to remain constant
when the workload is stable.
Python may keep allocated memory, so a Python process's memory usage typically
reflects the highest usage it needed in the past.
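As a minimal illustration of that point (assumes Linux; the allocation size is arbitrary), peak RSS stays at its historical high even after the objects are freed:

```python
# Sketch only: show that peak RSS does not drop after freeing memory.
# CPython may keep freed memory in its allocator pools instead of
# returning it to the OS.
import gc
import resource

def peak_rss_kb():
    # ru_maxrss is the peak resident set size (KiB on Linux)
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss_kb()
data = [bytearray(1024) for _ in range(100000)]  # roughly 100 MiB
during = peak_rss_kb()
del data
gc.collect()
after = peak_rss_kb()

# Peak usage is sticky: it never decreases after the allocation is freed.
print(before <= during <= after)
```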
Is the fix for bug 115810 included in the vdsm version used here?
We must have more precise information in this bug.
(In reply to Nir Soffer from comment #50)
> Python may keep allocated memory, so a Python process's memory usage typically
> reflects the highest usage it needed in the past.
Vdsm consumes twice as much memory after 1 month as after 1 week. I don't think that's normal even for Python.
> Is the fix for bug 115810 included in the vdsm version used here?
That's an 11-year-old bug on anaconda. Typo?
> We must have more precise information in this bug.
*** Bug 1279950 has been marked as a duplicate of this bug. ***
Eldad, were you able to validate the benefit of https://gerrit.ovirt.org/#/c/51630 ?
we want at least a partial backport to 3.5.z
bug targeted for 3.6.3, patches merged to 3.6 branch -> MODIFIED
back to post, we need to backport https://gerrit.ovirt.org/#/c/51917/ as well
51917 merged in ovirt-3.6 branch -> MODIFIED
according to fromani@ via IRC, BZ1283725 is not really connected with this BZ.
This BZ depends on BZ1279740, which is CodeChange only. No visible memory usage increase seen for vdsm.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.