Description of problem:
I've just done a quick check on a hypervisor (based on CentOS 6.3).
My concern is that a VM's virtual memory (VSZ) allocation is much higher than its configured memory.
qemu 24233 11.0 1.0 3030420 1008484 ? Sl 2012 2189:02 /usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Conroe -enable-kvm -m 2048 -smp 4,sockets=1,cores=4,threads=1 -name
Version-Release number of selected component (if applicable):
VDSM versions:
vdsm.x86_64 4.10.0-0.44.14.el6
vdsm-cli.noarch 4.10.0-0.44.14.el6
vdsm-python.x86_64 4.10.0-0.44.14.el6
vdsm-xmlrpc.noarch 4.10.0-0.44.14.el6
How reproducible:
Every time
Steps to Reproduce:
1. create a VM
2. allocate 2 GB (2048 MB of memory, with 2048 MB committed)
3. start the VM
4. check the hypervisor: the VSZ is higher than the memory allocated to the VM
Actual results:
3030420 kB of VSZ allocated
Expected results:
2097152 kB of VSZ allocated
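The actual-vs-expected gap can also be checked programmatically by reading the VmSize field from /proc/<pid>/status. A minimal sketch (the helper names are mine, not from this report; the 3030420 kB and 2048 MB figures are the ones observed above):

```python
def parse_vmsize_kb(status_text):
    """Extract the VmSize value (in kB) from /proc/<pid>/status content."""
    for line in status_text.splitlines():
        if line.startswith("VmSize:"):
            return int(line.split()[1])  # line looks like "VmSize:  3030420 kB"
    raise ValueError("VmSize not found")

def vsz_overhead_kb(vsz_kb, configured_mb):
    """Virtual-size overhead beyond the VM's configured memory, in kB."""
    return vsz_kb - configured_mb * 1024

# Sample status content matching the process observed in this report:
sample = "Name:\tqemu-kvm\nVmSize:\t 3030420 kB\nVmRSS:\t 1008484 kB\n"
vsz = parse_vmsize_kb(sample)
print(vsz_overhead_kb(vsz, 2048))  # 933268 kB (~911 MB) above the 2048 MB configured
```

On a live system you would read the real file, e.g. `open("/proc/%d/status" % pid).read()`, instead of the sample string.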
Additional info:
qemu 24233 11.0 1.0 *3030420* 1008484 ? Sl 2012 2189:02 /usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Conroe -enable-kvm *-m 2048* -smp 4,sockets=1,cores=4,threads=1 -name
The above example shows ~3 GB of VSZ, but only 2048 MB are actually configured ...
If I'm not too blind and dumb :) ... 3030420 kB ≈ 2959.39 MB, and that's not what is configured as the VM's maximum allowed RAM.
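The reporter's arithmetic checks out; a quick sanity check of the conversion:

```python
vsz_kb = 3030420       # VSZ reported by ps, in kB
configured_mb = 2048   # -m 2048 on the qemu-kvm command line

vsz_mb = vsz_kb / 1024.0
print(round(vsz_mb, 2))                   # 2959.39 MB of virtual address space
print(round(vsz_mb - configured_mb, 2))   # 911.39 MB beyond the configured guest RAM
```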
It looks OK to me: the virtual size is derived from the maximum address of the address space, not from the memory actually allocated, so nothing here is swapped or otherwise able to cause performance degradation.
QE, can you look into it?
I understand VSZ is not the actual RSS, but a 1 GB overhead is something we may want to be able to explain.
Additionally, what would happen during migration? Will this extra 1 GB be migrated?
If you cat /proc/<pid>/maps you will see where the extra address space is being used: a stack for every qemu-kvm thread, a zillion shared libraries, the qemu-kvm heap, the guest memory map, and a few other things.
Only guest memory is transferred in a live migration; everything else is initialized from scratch on the destination side. The vast majority of the virtual memory is either shared (libraries) or never touched (the qemu IO threads use far less stack space than the virtual space allocated for each thread stack).
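The breakdown described above can be tallied by summing mapping sizes per backing file in /proc/<pid>/maps. A sketch (the summarizing helper is hypothetical, not from this report; the sample lines are illustrative):

```python
from collections import defaultdict

def summarize_maps(maps_text):
    """Sum mapping sizes (in bytes) per backing path from /proc/<pid>/maps content."""
    totals = defaultdict(int)
    for line in maps_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        # First field is the address range, e.g. "00400000-00600000"
        start, end = (int(x, 16) for x in parts[0].split("-"))
        # Sixth field, if present, is the backing path; anonymous mappings have none
        path = parts[5] if len(parts) > 5 else "[anon]"
        totals[path] += end - start
    return dict(totals)

sample = (
    "00400000-00600000 r-xp 00000000 08:01 42 /usr/libexec/qemu-kvm\n"
    "7f0000000000-7f0080000000 rw-p 00000000 00:00 0\n"  # e.g. 2 GB guest RAM (anonymous)
)
print(summarize_maps(sample))
```

On a live system, feed it `open("/proc/%d/maps" % pid).read()`; the largest anonymous mapping is typically the guest RAM, with shared libraries and per-thread stacks making up much of the rest.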
Whether 1GB is reasonable is for the qemu-kvm developers to decide.
We're going to investigate this upstream and will track the progress via Bug 1193966 (RHEL7). We have no plans to fix it in RHEL6, though, so I'm closing this bug.
Alex, if this issue is critical or in any way time sensitive, please raise a ticket through your regular Red Hat support channels to make certain it receives the proper attention and prioritization that will result in a timely resolution.
For information on how to contact the Red Hat production support team, please visit: https://www.redhat.com/support/process/production/#howto
Thanks!