Bug 1074824

Summary: [virtio-win][qemu-ga-win] fail to execute any virtagent commands after "guest-suspend-disk/ram"
Product: Red Hat Enterprise Linux 6
Component: virtio-win
Version: 6.6
Status: CLOSED DUPLICATE
Severity: medium
Priority: medium
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Reporter: Sibiao Luo <sluo>
Assignee: Vadim Rozenfeld <vrozenfe>
QA Contact: Virtualization Bugs <virt-bugs>
CC: acathrow, bcao, bsarathy, chayang, juzhang, michen, qzhang, vrozenfe, xfu, yvugenfi
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-03-11 04:52:33 UTC

Description Sibiao Luo 2014-03-11 04:47:17 UTC
Description of problem:
After "guest-suspend-disk/ram", every virtagent command fails, even though the QEMU Guest Agent service shows as started in Computer Management. After restarting the QEMU Guest Agent service manually, virtagent commands work correctly again.

Version-Release number of selected component (if applicable):
host info:
2.6.32-447.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.422.el6.x86_64
guest info:
win7-64bit
qemu-ga-win-6.5-7
virtio-win-prewhql-0.1-74 

How reproducible:
100%

Steps to Reproduce:
1.Start guest with virtio serial and install qemu-ga-win-6.5-7 inside guest.
e.g:...-global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0...-device virtio-serial-pci,id=virtio-serial0,max_ports=16,vectors=0,bus=pci.0,addr=0x3 -chardev socket,path=/tmp/qga.sock,server,nowait,id=qga0 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0
2.Connect to the chardev socket on the host side to send S3/S4 commands to the guest:
# nc -U /tmp/qga.sock readline
{ "execute": "guest-ping" }
{"return": {}}

{ "execute": "guest-suspend-ram|disk"}
3.Resume the VM and execute virtagent commands again. 
{ "execute": "guest-ping" }
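For scripted reproduction, the nc session above can be driven from a short Python helper (a minimal sketch; the socket path /tmp/qga.sock comes from the command line in step 1, and the names qga_request/qga_send are chosen here for illustration):

```python
import json
import socket

def qga_request(command, arguments=None):
    """Build the one-line JSON request the guest agent reads from its
    virtio-serial port, e.g. {"execute": "guest-ping"}."""
    req = {"execute": command}
    if arguments:
        req["arguments"] = arguments
    return json.dumps(req) + "\n"

def qga_send(sock_path, command, arguments=None, timeout=5.0):
    """Send one command over the QGA Unix socket and return the parsed
    reply. Note: guest-suspend-ram/disk do not reply before the guest
    suspends, so a read timeout there is expected, not a failure."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.connect(sock_path)
        s.sendall(qga_request(command, arguments).encode())
        return json.loads(s.makefile().readline())
```

A post-resume check would then be `qga_send("/tmp/qga.sock", "guest-ping")`, which should return `{"return": {}}` on a healthy agent; with this bug it times out until the service is restarted.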

Actual results:
After step 2, the guest suspends (S3/S4) correctly.
After step 3, no virtagent command gets a response, even though the QEMU Guest Agent service shows as started in Computer Management.
# nc -U /tmp/qga.sock readline
{"execute": "guest-ping"}
{"return": {}}
{ "execute": "guest-suspend-ram"} <--------do S3 and resume successfully.
{"execute": "guest-ping"}
         <-------- no output.
After restarting the QEMU Guest Agent service manually in Computer Management, virtagent commands work correctly again:
{"execute": "guest-ping"}
{"return": {}}

Expected results:
Virtagent commands should keep working after the guest resumes from S3/S4.

Additional info:
# /usr/libexec/qemu-kvm -M rhel6.5.0 -enable-kvm -m 4096 -smp 4,sockets=2,cores=2,threads=1 -no-kvm-pit-reinjection -uuid 350e716b-5f98-4bf0-9a2a-c8e123295244 -usb -device usb-tablet,id=input0 -rtc base=localtime,clock=host,driftfix=slew -drive file=/home/en_windows_7_ultimate_x64.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=08:2E:5F:0A:1D:B1,bus=pci.0,addr=0x5 -device virtio-balloon-pci,id=ballooning,bus=pci.0,addr=0x6 -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -k en-us -boot menu=on -qmp tcp:0:4444,server,nowait -serial unix:/tmp/ttyS0,server,nowait -spice port=5930,disable-ticketing -vga qxl -global qxl-vga.vram_size=67108864 -monitor stdio -device virtio-serial-pci,id=virtio-serial0,max_ports=16,vectors=0,bus=pci.0,addr=0x3 -chardev socket,path=/tmp/qga.sock,server,nowait,id=qga0 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0

Comment 1 Sibiao Luo 2014-03-11 04:52:33 UTC
This is probably a duplicate of bug 888694, so I am closing it as such.

*** This bug has been marked as a duplicate of bug 888694 ***