Bug 995420 - qemu-kvm process does not release memory (RES) after guest stress test
Summary: qemu-kvm process does not release memory (RES) after guest stress test
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Luiz Capitulino
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-09 09:54 UTC by Xu Han
Modified: 2013-08-16 19:07 UTC (History)
8 users (show)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-08-16 19:07:18 UTC
Target Upstream Version:
Embargoed:



Description Xu Han 2013-08-09 09:54:02 UTC
Description of problem:
After a stress test in the guest, the guest OS releases the memory, but the qemu-kvm process does not.
Three scenarios were tested:
1. Stop the guest after the stress test:
   (qemu) stop

2. Reboot the guest after the stress test:
   (qemu) system_reset

3. Migrate the guest to another host after the stress test.

None of the three scenarios above caused the qemu-kvm process to release memory.

Version-Release number of selected component (if applicable):
Guest and host kernel version:
3.10.0-3.el7.x86_64
Host qemu-kvm version:
qemu-kvm-1.5.2-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot the guest:
/usr/libexec/qemu-kvm -name 'RHEL-Server-7.0-64' -nodefaults -m 5G -smp 4,cores=2,threads=2,sockets=1 -M pc-i440fx-rhel7.0.0 -cpu SandyBridge -rtc base=utc,clock=host,driftfix=slew -k en-us -boot menu=on -monitor stdio -vnc :2 -spice disable-ticketing,port=5932 -vga qxl -qmp tcp:0:5556,server,nowait -drive file=/mnt/RHEL-Server-7.0-64-1.raw,if=none,id=drive-scsi-disk,format=raw,cache=none,werror=stop,rerror=stop -device virtio-scsi-pci,id=scsi0 -device scsi-disk,drive=drive-scsi-disk,bus=scsi0.0,scsi-id=0,lun=0,id=scsi-disk,bootindex=1 -device virtio-net-pci,netdev=net1,mac=00:24:21:7f:0d:11,id=n1,mq=on,vectors=9 -netdev tap,id=net1,vhost=on,script=/etc/qemu-ifup,queues=4

2. Check qemu-kvm process memory usage:
[host]# top -p $(pgrep qemu-kvm) 

3. Run stress inside the guest:
[guest]# stress --cpu 4 --io 4 --vm 2 --vm-bytes 256M --timeout 1200s

4. Re-check qemu-kvm process memory usage:
[host]# top -p $(pgrep qemu-kvm) 

5. Reboot the guest:
(qemu)system_reset

6. Re-check and compare qemu-kvm process memory usage:
[host]# top -p $(pgrep qemu-kvm)
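For scripted runs, the RES value that top displays can also be read directly from procfs. A minimal sketch, assuming a Linux host; `read_rss_kb` is a hypothetical helper name, not part of any tool mentioned in this bug:

```python
import os

def read_rss_kb(pid):
    """Return the resident set size (VmRSS) of a process in kB,
    read from /proc/<pid>/status (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                # Line looks like: "VmRSS:     1234 kB"
                return int(line.split()[1])
    raise RuntimeError(f"VmRSS not found for pid {pid}")

if __name__ == "__main__":
    # Sample our own RSS as a demo; for the reproducer you would pass
    # the qemu-kvm pid obtained from pgrep instead.
    print(read_rss_kb(os.getpid()), "kB")
```

Logging this value before and after each step makes the growth of RES easy to compare across runs.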

Actual results:
step 2:
PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                 
1428 root      20   0 9968500 1.105g   6960 S 0.999 32.37   0:41.92 qemu-kvm

step 3:
stress: info: [1979] dispatching hogs: 4 cpu, 4 io, 2 vm, 0 hdd
stress: info: [1979] successful run completed in 1200s

step 4:
PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                 
1428 root      20   0 9968500 1.730g   6960 S 1.332 50.69  30:01.26 qemu-kvm

step 6:
PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                 
1428 root      20   0  9.977g 1.753g   6976 S 62.93 51.34  30:18.35 qemu-kvm

The qemu-kvm process (RES) does not release memory.

Expected results:
The qemu-kvm process should release memory after the guest stress test.

Additional info:
1. RHEL 6 hits the same issue.

Comment 2 Luiz Capitulino 2013-08-15 18:50:36 UTC
My understanding of what is happening is this: you start QEMU with 5G, then you run a user-space program in the guest that touches, say, 1G. This causes the host to actually allocate that 1G, but for the host it was _QEMU_ that touched that memory. So, when the guest user-space program is done, the memory it touched becomes free in the guest, but from the host's POV the memory is still in use.

IOW, it's not a bug.

You can confirm this by ballooning the guest down and up. For example, suppose the stress tool used 1G and then released it, and that's what you wanted to see free in the host. Try this:

(qemu) info balloon
balloon: actual=5120
(qemu) balloon 4096
(qemu) info balloon
balloon: actual=4096
(qemu) balloon 5120

Now you should see that 1G free in host. The automatic ballooning project I'm working on is just about that: http://www.linux-kvm.org/page/Projects/auto-ballooning.
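The accounting described above can be illustrated with a toy model (purely illustrative Python, not QEMU's actual implementation): the host backs a page the first time the guest touches it, a guest-side free is invisible to the host, and only the balloon hands pages back:

```python
class ToyGuest:
    """Toy model of host-side memory accounting for a VM.

    Illustrative only: pages are faulted in on first touch, a
    guest-side free is never reported to the host, and only the
    balloon (madvise(MADV_DONTNEED)-style) returns pages.
    """

    def __init__(self, total_pages):
        self.total_pages = total_pages
        self.host_resident = set()   # pages backed by host RAM (qemu-kvm's "RES")
        self.guest_used = set()      # pages the guest considers in use

    def guest_touch(self, page):
        self.guest_used.add(page)
        self.host_resident.add(page)   # host faults the page in for QEMU

    def guest_free(self, page):
        self.guest_used.discard(page)  # guest frees it...
        # ...but the host is never told, so host_resident is unchanged

    def balloon_down(self, pages):
        # The balloon driver takes guest pages; the host discards them.
        for page in pages:
            self.host_resident.discard(page)

vm = ToyGuest(total_pages=5)
for p in range(3):
    vm.guest_touch(p)          # the stress test touches 3 pages
for p in range(3):
    vm.guest_free(p)           # the stress test exits; guest frees them
print(len(vm.host_resident))   # → 3: host still sees all 3 pages resident
vm.balloon_down(range(3))      # roughly the "balloon 4096, balloon 5120" dance
print(len(vm.host_resident))   # → 0: only now does the host get them back
```

This is why RES stays high after the stress test yet drops after the balloon commands: nothing in the normal guest free path tells the host anything.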

I'll wait for your confirmation before closing this as NOTABUG.

Comment 3 Xu Han 2013-08-16 04:49:29 UTC
According to comment 2, I re-tested this issue.
These are the test results:
1. Guest boots successfully, without running stress inside the guest.
  qemu process RES in host:

 PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                             
4664 root      20   0  9.883g 1.024g   7176 S 0.000 30.00   0:33.13 qemu-kvm

2. After the stress test inside the guest:
  qemu process RES in host:

 PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                             
4664 root      20   0 9978344 3.073g   7176 S 1.332 90.01   2:37.95 qemu-kvm

(qemu) info balloon
balloon: actual=5120

3. Do balloon 4096 in the monitor:
(qemu) balloon 4096
(qemu) info balloon
balloon: actual=4096 

 qemu process RES in host:

 PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                             
4664 root      20   0 9978344 2.077g   7176 S 1.332 60.83   2:39.62 qemu-kvm

4. Do balloon 5120 in the monitor:
(qemu) balloon 5120
(qemu) info balloon
balloon: actual=5120

  qemu process RES in host:

 PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                             
4664 root      20   0 9978344 2.102g   7176 S 1.332 61.57   2:41.66 qemu-kvm

Based on the test results above: after changing the memory size via the balloon, I can see memory being released in the host.

Currently, in order for the host to release memory, the balloon value still needs to be changed manually, so I think this is still a bug. If I am wrong, please correct me.

Comment 4 Luiz Capitulino 2013-08-16 19:07:18 UTC
This is not a bug in the sense of being caused by a programming error; everything is working just as designed.

I agree this is not optimal, but the solution lies in long-term projects like automatic ballooning and/or the free page hinting project.

