Bug 616389 - balloon: qemu become no response for query balloon after stop guest with balloon service start
Keywords:
Status: CLOSED DUPLICATE of bug 623903
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.0
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: beta
Target Release: ---
Assignee: Amit Shah
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 580954
 
Reported: 2010-07-20 10:45 UTC by Shirley Zhou
Modified: 2015-03-05 00:51 UTC (History)
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-02-02 12:59:55 UTC
Target Upstream Version:
Embargoed:


Attachments
virt-manager screenshot (115.36 KB, image/png), 2010-07-20 10:46 UTC, Shirley Zhou

Description Shirley Zhou 2010-07-20 10:45:16 UTC
Description of problem:
After the balloon service is started in a Windows guest and the guest is then stopped from the monitor, qemu stops responding to balloon info queries.
Version-Release number of selected component (if applicable):
virtio-win-1.1.8-0
qemu-kvm-0.12.1.2-2.96.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. Run a Windows guest with a virtio balloon device:
/usr/libexec/qemu-kvm  -M rhel6.0.0 -enable-kvm -m 4096 -smp 2,sockets=2,cores=1,threads=1 -name win08R2 -uuid cc007a9e-2c47-1234-1ead-38547538144e -nodefconfig -nodefaults  -rtc base=localtime -boot c -drive file=/home/win08r2.s1.qcow2,if=none,id=drive-ide0-0-0,boot=on,format=qcow2,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive file=/mnt/win_iso/win08-R2.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,cache=none -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:20:5e:19,bus=pci.0,addr=0x4 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -device usb-tablet,id=input0 -vnc :1 -k en-us -vga std  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -monitor stdio -qmp tcp:0:4444,server,nowait
2. Install the balloon driver and the balloon service, then reboot the guest.
3. After the guest boots, query balloon info:
(qemu) info balloon 
balloon: actual=4096,mem_swapped_in=0
4. Stop the guest:
(qemu) stop 
5. Query balloon info from the monitor again:
(qemu) info balloon 
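Since the command line above also exposes QMP on TCP port 4444 (`-qmp tcp:0:4444,server,nowait`), the same sequence can be scripted over QMP instead of the human monitor. A minimal sketch of building the wire messages (the helper names here are mine, not a real QMP library; sending the lines over the TCP socket is left to the caller):

```python
import json

def qmp_command(name, arguments=None):
    """Serialize a QMP command as a single JSON line."""
    msg = {"execute": name}
    if arguments:
        msg["arguments"] = arguments
    return json.dumps(msg)

def parse_qmp_line(line):
    """Parse one JSON line received from the monitor into a dict."""
    return json.loads(line)

# The reproduction steps, expressed as QMP commands:
steps = [
    qmp_command("qmp_capabilities"),  # must be the first command on a QMP session
    qmp_command("query-balloon"),     # step 3: balloon info while running
    qmp_command("stop"),              # step 4: pause the guest
    qmp_command("query-balloon"),     # step 5: this query hangs with the bug
]

# A reply like the one libvirt logged for "stop":
reply = parse_qmp_line('{"return": {}}')
```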
  
Actual results:
The monitor does not respond after step 5.

Expected results:
Balloon info should be displayed after step 5.

Additional info:
1. After stopping the balloon service, the bug does not occur.
2. In virt-manager, after installing the virtio balloon driver and starting the balloon service, pausing the guest causes the virt-manager window to go completely blank, as shown in the attached screenshot. The libvirt log shows that virt-manager is querying balloon info at that point.

Some info not related to this bug:
1. After I uninstall the virtio balloon driver and reboot the guest, the balloon service still starts.
2. After uninstalling the virtio balloon driver and rebooting the guest, balloon info looks like this:
(qemu) info balloon 
balloon: actual=4062,mem_swapped_in=0,minor_page_faults=0,mem_swapped_out=0,free_mem=3717361664,major_page_faults=0,total_mem=4294545408
The balloon info above differs from the earlier output:
"balloon: actual=4096,mem_swapped_in=0"
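For scripted checks, both monitor replies above fit the same comma-separated `key=value` shape, so they can be parsed into a dict and compared field by field. A small sketch (the function name is mine):

```python
def parse_balloon_info(line):
    """Parse an 'info balloon' reply such as
    'balloon: actual=4096,mem_swapped_in=0' into an int-valued dict."""
    prefix = "balloon: "
    if not line.startswith(prefix):
        raise ValueError("unexpected balloon info line: %r" % line)
    fields = line[len(prefix):].split(",")
    return {k: int(v) for k, v in (f.split("=", 1) for f in fields)}

# The two outputs quoted in this report:
short = parse_balloon_info("balloon: actual=4096,mem_swapped_in=0")
full = parse_balloon_info(
    "balloon: actual=4062,mem_swapped_in=0,minor_page_faults=0,"
    "mem_swapped_out=0,free_mem=3717361664,major_page_faults=0,"
    "total_mem=4294545408")
```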

Comment 1 Shirley Zhou 2010-07-20 10:46:02 UTC
Created attachment 433125 [details]
virt manager screenshot

Comment 3 RHEL Program Management 2010-07-20 11:17:49 UTC
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **

Comment 4 Dor Laor 2010-07-21 09:18:20 UTC
When the balloon service is run in the guest it can provide more info.
Why is that a bug?

Comment 5 Shirley Zhou 2010-07-21 09:43:17 UTC
(In reply to comment #4)
> When the balloon service is run in the guest it can provide more info.
> Why is that a bug?    

If we stop the balloon service in the guest, then stop the guest from the monitor and query balloon info, the info is displayed.
But if the balloon service is running when the guest is stopped from the monitor, a subsequent balloon info query leaves the monitor unresponsive.

Comment 6 Shirley Zhou 2010-08-16 06:08:03 UTC
Additional info for querying balloon info while the guest is paused:

02:00:31.465: debug : qemuMonitorJSONCommandWithFd:217 : Send command '{"execute":"stop"}' for write with FD -1
02:00:31.466: debug : qemuMonitorJSONIOProcessLine:115 : Line [{"timestamp": {"seconds": 1281938431, "microseconds": 466241}, "event": "STOP"}]
02:00:31.466: debug : qemuMonitorJSONIOProcessEvent:86 : mon=0x7f9a5c009660 obj=0x7f9a74142610
02:00:31.466: debug : qemuMonitorJSONIOProcessEvent:99 : handle STOP handler=0x478480 data=(nil)
02:00:31.466: debug : qemuMonitorJSONIOProcessLine:115 : Line [{"return": {}}]
02:00:31.466: debug : qemuMonitorJSONIOProcess:188 : Total used 97 bytes out of 97 available in buffer
02:00:31.466: debug : qemuMonitorJSONCommandWithFd:222 : Receive command reply ret=0 errno=0 14 bytes '{"return": {}}'
02:00:31.467: debug : qemudGetProcessInfo:4600 : Got status for 5809/0 user=1487 sys=1231 cpu=3
02:00:31.467: debug : qemuMonitorJSONCommandWithFd:217 : Send command '{"execute":"query-balloon"}' for write with FD -1
02:00:32.337: debug : qemudGetProcessInfo:4600 : Got status for 5809/0 user=1488 sys=1235 cpu=2
02:00:33.331: debug : qemudGetProcessInfo:4600 : Got status for 5809/0 user=1489 sys=1240 cpu=2
02:00:34.335: debug : qemudGetProcessInfo:4600 : Got status for 5809/0 user=1491 sys=1245 cpu=3
02:00:35.337: debug : qemudGetProcessInfo:4600 : Got status for 5809/0 user=1493 sys=1249 cpu=3
02:00:36.337: debug : qemudGetProcessInfo:4600 : Got status for 5809/0 user=1494 sys=1256 cpu=3
02:00:37.338: debug : qemudGetProcessInfo:4600 : Got status for 5809/0 user=1494 sys=1262 cpu=3
02:00:38.338: debug : qemudGetProcessInfo:4600 : Got status for 5809/0 user=1495 sys=1268 cpu=3
02:00:39.339: debug : qemudGetProcessInfo:4600 : Got status for 5809/0 user=1496 sys=1274 cpu=3

At this point, sending a resume command from virsh fails:
virsh # resume rhel6-test-clone
error: Failed to resume domain rhel6-test-clone
error: Timed out during operation: cannot acquire state change lock
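The log above shows libvirt sending `query-balloon` after `stop` and never receiving a reply; the later "cannot acquire state change lock" error follows because the monitor is still blocked on that query. A client-side mitigation is to put a deadline on monitor queries rather than waiting forever. A minimal sketch using a worker thread (this is an illustrative pattern, not libvirt code; the function name is mine):

```python
import concurrent.futures

def query_with_timeout(query_fn, timeout=5.0):
    """Run a (possibly hanging) monitor query in a worker thread and
    give up after `timeout` seconds instead of blocking forever."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(query_fn)
    try:
        return future.result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        return None  # no reply from the monitor; report "no data"
    finally:
        pool.shutdown(wait=False)  # don't block on a stuck worker

# Stand-in for a monitor read that answers immediately:
result = query_with_timeout(lambda: {"actual": 4096}, timeout=1.0)
```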

Comment 8 Dor Laor 2011-02-01 12:20:04 UTC
It's a qemu issue: the guest is not running, so qemu should return the last known number or some error code.

Comment 9 Amit Shah 2011-02-01 14:15:19 UTC
Looks like a duplicate of bug 626544 and bug 623903.  Can you try again and let us know?

Comment 10 Amit Shah 2011-02-02 12:59:55 UTC
Marking this as a duplicate of bug 623903.  This bug was discovered before the other two, but the fix went in via one of them.  If it's not a duplicate, please re-open.

*** This bug has been marked as a duplicate of bug 623903 ***

