Bug 895448 - Memory usage (VSZ) for started VMs seems incorrect
Summary: Memory usage (VSZ) for started VMs seems incorrect
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Bandan Das
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 1193966 1515947
 
Reported: 2013-01-15 09:01 UTC by Alex Leonhardt
Modified: 2017-11-21 16:42 UTC (History)
17 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1193966 (view as bug list)
Environment:
Last Closed: 2015-02-18 16:03:07 UTC
Target Upstream Version:
Embargoed:



Description Alex Leonhardt 2013-01-15 09:01:30 UTC
Description of problem:

I've just had a little check on a hypervisor (based on CentOS 6.3).

My concern is that a VM's virtual memory (VSZ) allocation is much higher than its configured memory:

qemu     24233 11.0  1.0 3030420 1008484 ?     Sl    2012 2189:02 /usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Conroe -enable-kvm -m 2048 -smp 4,sockets=1,cores=4,threads=1 -name


Version-Release number of selected component (if applicable):

VDSM versions:

vdsm.x86_64           4.10.0-0.44.14.el6
vdsm-cli.noarch       4.10.0-0.44.14.el6
vdsm-python.x86_64    4.10.0-0.44.14.el6
vdsm-xmlrpc.noarch    4.10.0-0.44.14.el6

How reproducible:

Every time

Steps to Reproduce:
1. create a VM
2. allocate 2 GB (2048 MB of memory, with 2048 MB committed)
3. start the VM
4. check the HV, the VSZ is higher than the allocated memory for the VM
  
Actual results:

3030420 KB of VSZ allocated

Expected results:

2097152 KB of VSZ allocated

Additional info:

qemu     24233 11.0 1.0 *3030420* 1008484 ?     Sl    2012 2189:02
/usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Conroe -enable-kvm *-m 2048* -smp 4,sockets=1,cores=4,threads=1 -name

The above example shows ~3 GB of VSZ, but only 2048 MB are actually configured.

If I'm not too blind and dumb :) ... 3030420 KB = 2959.39 MB, and that's not what is configured as the VM's maximum allowed RAM.
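The unit conversion in the report checks out; as a sketch, using only the numbers quoted above:

```python
# The arithmetic behind the report: ps reports VSZ in KB.
vsz_kb = 3030420              # observed VSZ column from ps
configured_kb = 2048 * 1024   # "-m 2048" is in MB; convert to KB
print(f"{vsz_kb / 1024:.2f} MB observed")                       # 2959.39 MB
print(f"{(vsz_kb - configured_kb) / 1024:.2f} MB of overhead")  # 911.39 MB
```

So the observed VSZ exceeds the configured guest RAM by roughly 900 MB.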

Comment 1 Doron Fediuck 2013-01-15 11:49:31 UTC
Ronen,
Can you please look into it and re-assign as needed (qemu/kvm)?

Comment 2 Ronen Hod 2013-01-15 12:01:32 UTC
It looks OK to me: the virtual size is derived from the maximum address of the address space, not from the actually allocated memory, so nothing here is swapped out or otherwise causes performance degradation.
QE, can you look into it?

Comment 4 Ronen Hod 2013-01-15 12:10:00 UTC
I think that /proc/<processid>/status can tell, but I am not an expert.
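For what it's worth, /proc/<pid>/status does report both figures, as the VmSize and VmRSS fields. A minimal sketch of reading them, assuming a Linux host (`vm_fields` is an illustrative helper, not part of any tool mentioned here):

```python
# Read VmSize (virtual size) and VmRSS (resident set) for a process
# from /proc/<pid>/status. The kernel reports both values in kB.
def vm_fields(pid="self"):
    fields = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(("VmSize", "VmRSS")):
                name, value = line.split(":")
                fields[name] = int(value.split()[0])  # kB
    return fields

print(vm_fields())  # VmSize is the figure ps shows as VSZ
```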

Comment 5 Doron Fediuck 2013-01-15 16:38:12 UTC
I understand VSZ is not the actual RSS, but a 1 GB overhead is something we may want to be able to explain.
Additionally, what would happen during migration? Will this extra 1 GB be migrated?

Comment 6 Rik van Riel 2013-01-29 20:45:51 UTC
If you cat /proc/<pid>/maps you will see where the extra address space is being used. You will see a stack for every qemu-kvm thread, a zillion shared libraries, the qemu-kvm heap, the guest memory map, and a few other things.

Only guest memory should be migrated over in a live migration, everything else is initialized from scratch on the destination side. The vast majority of the virtual memory is shared (libraries), or never used (the qemu IO threads use far less stack space than the allocated virtual space for each thread stack).

Whether 1GB is reasonable is for the qemu-kvm developers to decide.
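The walk through the maps file that Rik describes can be sketched as follows (assuming a Linux host; `vsz_from_maps` is an illustrative name, and the sketch inspects the current process rather than a qemu-kvm one):

```python
# Sum the sizes of all mappings in /proc/<pid>/maps; the total matches
# the VSZ column of ps for that process. Each line starts with an
# address range such as "7f2c4e000000-7f2c4e021000".
def vsz_from_maps(pid="self"):
    total = 0
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            start, end = line.split()[0].split("-")
            total += int(end, 16) - int(start, 16)
    return total // 1024  # in KB, like ps

print(vsz_from_maps())
```

Summing only the ranges whose last column is empty or `[heap]`/`[stack]` (rather than a library path) is one way to separate the guest-memory and heap mappings from the shared libraries Rik mentions.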

Comment 10 Qunfang Zhang 2013-08-16 08:17:17 UTC
I can reproduce this issue on a RHEL 6.5 host with the following versions:

kernel-2.6.32-410.el6.x86_64
qemu-kvm-0.12.1.2-2.393.el6.x86_64
seabios-0.6.1.2-26.el6.x86_64

(1) Boot up a guest with 2G mem.

#  /usr/libexec/qemu-kvm -cpu SandyBridge -M rhel6.5.0 -enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 -name rhel6.4-64 -uuid 9a0e67ec-f286-d8e7-0548-0c1c9ec93009 -nodefconfig -nodefaults -monitor stdio -rtc base=utc,clock=host,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/home/RHEL-Server-6.4-64-virtio.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,id=hostnet0,vhost=on -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:d5:51:8a,bus=pci.0,addr=0x3 -chardev socket,id=charserial0,path=/tmp/isa-serial,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc :10 -vga std -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -qmp tcp:0:5555,server,nowait -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7


(2) Check the VSZ size.
#ps -aux | grep qemu

Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
root      4752 32.0  5.7 3296196 457056 pts/1  Dl+  15:54   0:15 /usr/libexec/qemu-kvm -cpu SandyBridge -M rhel6.5.0 -enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 ....

It shows a VSZ of 3296196 KB, about 3.14 GB.

Comment 13 Ademar Reis 2015-02-18 16:03:07 UTC
We're going to investigate it upstream, and we'll track the progress via Bug 1193966 (RHEL7). We don't have plans to fix it in RHEL6, though, so I'm closing this bug.

Alex, if this issue is critical or in any way time sensitive, please raise a ticket through your regular Red Hat support channels to make certain it receives the proper attention and prioritization that will result in a timely resolution.

For information on how to contact the Red Hat production support team, please visit: https://www.redhat.com/support/process/production/#howto

Thanks!

