Bug 1010670

Summary: spice session is closed when taking snapshot with ram
Product: Red Hat Enterprise Linux 6 Reporter: Arik <ahadas>
Component: qemu-kvm Assignee: Gerd Hoffmann <kraxel>
Status: CLOSED ERRATA QA Contact: Virtualization Bugs <virt-bugs>
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: 6.5 CC: ahadas, bsarathy, chayang, ch_free, dyuan, jen, juzhang, kraxel, mazhang, michen, mkenneth, mzhan, pkrempa, quintela, qzhang, rbalakri, tlavigne, virt-maint, ydu, zpeng
Target Milestone: rc   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: qemu-kvm-0.12.1.2-2.439.el6 Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2014-10-14 06:51:31 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
qemu spice fix (Flags: none)

Description Arik 2013-09-22 11:42:23 UTC
Description of problem:
When taking a snapshot with RAM while there is an open spice session connected to the VM, the spice session is closed.

Version-Release number of selected component (if applicable):
0.10.2-24.el6 (and it was reproduced with 1.0.5.5-1.fc19 as well)

How reproducible:
100%

Steps to Reproduce:
1. Run VM and open spice session to it
2. Take snapshot with RAM for the VM above

Actual results:
The spice session is closed

Expected results:
The spice session should remain open and be usable after the snapshot creation is finished.

Additional info:
The spice session should remain open, as it does during VM migration.

Comment 3 EricLee 2013-09-25 02:13:24 UTC
Hi Arik,

I cannot reproduce this bug with libvirt-0.10.2-26.el6.x86_64; here are my steps:

1. start a guest with spice graphics:
# virsh start kvm-rhel6.4-x86_64-qcow2-virtio

# virsh dumpxml kvm-rhel6.4-x86_64-qcow2-virtio
...
    <graphics type='spice' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <sound model='ich6'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='65536' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
...

2. install spicec:
# yum install spice-client -y

3. open spice session with spicec:
# spicec -h 127.0.0.1 -p 5900 &

4. do snapshot with both disk and mem:
# virsh snapshot-create-as kvm-rhel6.4-x86_64-qcow2-virtio s1
Domain snapshot s21 created

5. check the spice session: it was not closed during the snapshot action, and it is still usable after the snapshot creation is finished.

Did I miss something?

Thanks, 
EricLee

Comment 4 EricLee 2013-09-25 02:15:44 UTC
(In reply to EricLee from comment #3)
> Hi Arik,
> 
> I cannot reproduce this bug with libvirt-0.10.2-26.el6.x86_64; here are my steps:
> 
> 1. start a guest with spice graphics:
> # virsh start kvm-rhel6.4-x86_64-qcow2-virtio
> 
> # virsh dumpxml kvm-rhel6.4-x86_64-qcow2-virtio
> ...
>     <graphics type='spice' port='5900' autoport='yes' listen='127.0.0.1'>
>       <listen type='address' address='127.0.0.1'/>
>     </graphics>
>     <sound model='ich6'>
>       <alias name='sound0'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
> function='0x0'/>
>     </sound>
>     <video>
>       <model type='qxl' ram='65536' vram='65536' heads='1'/>
>       <alias name='video0'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x02'
> function='0x0'/>
>     </video>
> ...
> 
> 2. install spicec:
> # yum install spice-client -y
> 
> 3. open spice session with spicec:
> # spicec -h 127.0.0.1 -p 5900 &
> 
> 4. do snapshot with both disk and mem:
> # virsh snapshot-create-as kvm-rhel6.4-x86_64-qcow2-virtio s1
> Domain snapshot s21 created

Sorry, it should be:
Domain snapshot s1 created

> 
> 5. check the spice session: it was not closed during the snapshot action,
> and it is still usable after the snapshot creation is finished.
> 
> Did I miss something?
> 
> Thanks, 
> EricLee

Comment 5 Peter Krempa 2013-09-25 06:20:06 UTC
(In reply to EricLee from comment #3)
> Hi Arik,
> 
> I cannot reproduce this bug with libvirt-0.10.2-26.el6.x86_64; here are my steps:
> 
> 1. start a guest with spice graphics:
> # virsh start kvm-rhel6.4-x86_64-qcow2-virtio

...
 
> 4. do snapshot with both disk and mem:
> # virsh snapshot-create-as kvm-rhel6.4-x86_64-qcow2-virtio s1

You need to create an external snapshot with memory. The command line you've used creates an internal one.

Use:
# virsh snapshot-create-as kvm-rhel6.4-x86_64-qcow2-virtio s1 --memspec /path/to/memory/save/file

(and tweak it according to your configuration)

Comment 6 EricLee 2013-09-27 03:15:31 UTC
(In reply to Peter Krempa from comment #5)
> (In reply to EricLee from comment #3)
> > Hi Arik,
> > 
> > I cannot reproduce this bug with libvirt-0.10.2-26.el6.x86_64; here are my steps:
> > 
> > 1. start a guest with spice graphics:
> > # virsh start kvm-rhel6.4-x86_64-qcow2-virtio
> 
> ...
>  
> > 4. do snapshot with both disk and mem:
> > # virsh snapshot-create-as kvm-rhel6.4-x86_64-qcow2-virtio s1
> 
> You need to create an external snapshot with memory. The command line you've
> used creates an internal one.
> 
> Use:
> # virsh snapshot-create-as kvm-rhel6.4-x86_64-qcow2-virtio s1 --memspec
> /path/to/memory/save/file
> 
> (and tweak it according to your configuration)

Yeah, an external snapshot with memory indeed causes the spice session to be closed.
# spicec -h 127.0.0.1 -p 5900 &
[1] 20227

# virsh snapshot-create-as kvm-rhel6.4-x86_64-qcow2-virtio s1 --memspec /var/lib/libvirt/qemu/snapshot/kvm-rhel6.4-x86_64-qcow2-virtio/s1
Warning: abort
Domain snapshot s1 created

I can reproduce this.

Comment 9 Michal Privoznik 2014-04-14 13:33:22 UTC
So from my observations, libvirt is not telling qemu to relocate the spice client. Libvirt is merely telling qemu to migrate to an FD, and then, at the end of the migration, the spice client is disconnected automagically:

2014-04-14 11:40:24.325+0000: 2490: debug : qemuMonitorSend:904 : QEMU_MONITOR_SEND_MSG: mon=0x7f8b8c01db30 msg={"execute":"getfd","arguments":{"fdname":"migrate"},"id":"libvirt-25"}
2014-04-14 11:40:24.326+0000: 2487: debug : qemuMonitorIOWrite:467 : QEMU_MONITOR_IO_SEND_FD: mon=0x7f8b8c01db30 fd=32 ret=72 errno=11
2014-04-14 11:40:24.329+0000: 2487: debug : qemuMonitorJSONIOProcessLine:174 : QEMU_MONITOR_RECV_REPLY: mon=0x7f8b8c01db30 reply={"return": {}, "id": "libvirt-25"}
2014-04-14 11:40:24.331+0000: 2490: debug : qemuMonitorSend:904 : QEMU_MONITOR_SEND_MSG: mon=0x7f8b8c01db30 msg={"execute":"migrate","arguments":{"detach":true,"blk":false,"inc":false,"uri":"fd:migrate"},"id":"libvirt-26"}
2014-04-14 11:40:24.332+0000: 2487: debug : qemuMonitorIOWrite:462 : QEMU_MONITOR_IO_WRITE: mon=0x7f8b8c01db30 buf={"execute":"migrate","arguments":{"detach":true,"blk":false,"inc":false,"uri":"fd:migrate"},"id":"libvirt-26"}
2014-04-14 11:40:24.343+0000: 2487: debug : qemuMonitorIOProcess:354 : QEMU_MONITOR_IO_PROCESS: mon=0x7f8b8c01db30 buf={"return": {}, "id": "libvirt-26"}
2014-04-14 11:40:24.343+0000: 2487: debug : qemuMonitorJSONIOProcessLine:174 : QEMU_MONITOR_RECV_REPLY: mon=0x7f8b8c01db30 reply={"return": {}, "id": "libvirt-26"}
2014-04-14 11:40:24.373+0000: 2490: debug : qemuMonitorSend:904 : QEMU_MONITOR_SEND_MSG: mon=0x7f8b8c01db30 msg={"execute":"query-migrate","id":"libvirt-27"}
....

2014-04-14 11:40:45.896+0000: 2490: debug : qemuMonitorSend:904 : QEMU_MONITOR_SEND_MSG: mon=0x7f8b8c01db30 msg={"execute":"query-migrate","id":"libvirt-82"}
2014-04-14 11:40:46.320+0000: 2487: debug : qemuMonitorIOProcess:354 : QEMU_MONITOR_IO_PROCESS: mon=0x7f8b8c01db30 buf={"timestamp": {"seconds": 1397475646, "microseconds": 320067}, "event": "SPICE_DISCONNECTED", "data": {"server": {"port": "5900", "family": "ipv4", "host": "127.0.0.1"}, "client": {"port": "56859", "family": "ipv4", "host": "127.0.0.1"}}}
2014-04-14 11:40:46.401+0000: 2487: debug : qemuMonitorIOProcess:354 : QEMU_MONITOR_IO_PROCESS: mon=0x7f8b8c01db30 buf={"timestamp": {"seconds": 1397475646, "microseconds": 400918}, "event": "__com.redhat_SPICE_DISCONNECTED", "data": {"server": {"auth": "none", "port": "5900", "family": "ipv4", "host": "127.0.0.1"}, "client": {"port": "56857", "family": "ipv4", "host": "127.0.0.1"}}}
2014-04-14 11:40:46.402+0000: 2487: debug : qemuMonitorJSONIOProcessLine:169 : QEMU_MONITOR_RECV_EVENT: mon=0x7f8b8c01db30 event={"timestamp": {"seconds": 1397475646, "microseconds": 401138}, "event": "SPICE_DISCONNECTED", "data": {"server": {"port": "5900", "family": "ipv4", "host": "127.0.0.1"}, "client": {"port": "56857", "family": "ipv4", "host": "127.0.0.1"}}}
2014-04-14 11:40:46.464+0000: 2487: debug : qemuMonitorIOProcess:354 : QEMU_MONITOR_IO_PROCESS: mon=0x7f8b8c01db30 buf={"timestamp": {"seconds": 1397475646, "microseconds": 463639}, "event": "SPICE_MIGRATE_COMPLETED"}
2014-04-14 11:40:46.482+0000: 2487: debug : qemuMonitorIOProcess:354 : QEMU_MONITOR_IO_PROCESS: mon=0x7f8b8c01db30 buf={"return": {"status": "completed", "downtime": 2071, "total-time": 21967, "ram": {"total": 4312137728, "remaining": 0, "transferred": 1491063563}}, "id": "libvirt-82"}


The same behaviour can be reproduced by 'virsh dump $domain /tmp/$domain.dump'. 

However, I'm not sure how to fix this bug. I mean, libvirt is not sending any 'client-info'. Should it do so? I think qemu should not disconnect a spice client if no 'client-info' was set prior to calling the 'migrate' command.
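
To illustrate the guard suggested above, here is a minimal, standalone C sketch. It is not the actual qemu patch; spice_have_target_host and the simplified MIG_STATE_* values are illustrative stand-ins, not the exact qemu-kvm 0.12 symbols. The idea is simply that the spice client hand-over only happens when management announced a migration target beforehand:

/*
 * Minimal sketch (NOT the actual qemu fix): only hand the spice client
 * over when a migration target was announced (client_migrate_info)
 * before 'migrate' was called. All names here are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

enum { MIG_STATE_ACTIVE, MIG_STATE_COMPLETED, MIG_STATE_CANCELLED };

/* set only when management issued client_migrate_info (a real migration) */
static bool spice_have_target_host;

static void spice_server_migrate_switch(void)
{
    /* in qemu this hands the client over to the target, ending the local session */
    printf("switching spice client to the migration target\n");
}

static void migration_state_notifier(int state)
{
    if (!spice_have_target_host) {
        return; /* fd "migration" for a snapshot/dump: keep the client connected */
    }
    if (state == MIG_STATE_COMPLETED) {
        spice_server_migrate_switch();
    }
}

int main(void)
{
    /* snapshot with memory: no target host announced, client stays connected */
    migration_state_notifier(MIG_STATE_COMPLETED);

    /* real migration: client_migrate_info was sent first, client is handed over */
    spice_have_target_host = true;
    migration_state_notifier(MIG_STATE_COMPLETED);
    return 0;
}

With a guard like this, an fd: "migration" performed for a snapshot or dump never reaches the switch-over path, so the local spice session would stay open.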

Comment 10 Juan Quintela 2014-04-14 15:15:19 UTC
Can it be related to how spice handles things?

static void migration_state_notifier(Notifier *notifier, void *data)
{
    int state = get_migration_state();
    if (state == MIG_STATE_ACTIVE) {
#ifdef SPICE_INTERFACE_MIGRATION
        spice_server_migrate_start(spice_server);
#endif
    } else if (state == MIG_STATE_COMPLETED) {
#ifndef SPICE_INTERFACE_MIGRATION
        spice_server_migrate_switch(spice_server);
        monitor_protocol_event(QEVENT_SPICE_MIGRATE_COMPLETED, NULL);
        spice_migration_completed = true;
#else
        spice_server_migrate_end(spice_server, true);
    } else if (state == MIG_STATE_CANCELLED || state == MIG_STATE_ERROR) {
        spice_server_migrate_end(spice_server, false);
#endif
    }
}


I am not sure which path is followed during a snapshot?

Comment 11 Michal Privoznik 2014-04-15 14:14:45 UTC
(In reply to Juan Quintela from comment #10)
> Can it be related to how spice handles things?
> 
> static void migration_state_notifier(Notifier *notifier, void *data)
> {
>     int state = get_migration_state();
>     if (state == MIG_STATE_ACTIVE) {
> #ifdef SPICE_INTERFACE_MIGRATION
>         spice_server_migrate_start(spice_server);
> #endif
>     } else if (state == MIG_STATE_COMPLETED) {
> #ifndef SPICE_INTERFACE_MIGRATION
>         spice_server_migrate_switch(spice_server);
>         monitor_protocol_event(QEVENT_SPICE_MIGRATE_COMPLETED, NULL);
>         spice_migration_completed = true;
> #else
>         spice_server_migrate_end(spice_server, true);
>     } else if (state == MIG_STATE_CANCELLED || state == MIG_STATE_ERROR) {
>         spice_server_migrate_end(spice_server, false);
> #endif
>     }
> }
> 
> 
> I am not sure which path is followed during a snapshot?

The middle one (MIG_STATE_COMPLETED). The SPICE_MIGRATE_COMPLETED event is generated there, and we can see the event in the logs. Unfortunately, I'm not a spice specialist, so I can't tell what's going on there.

Comment 12 Gerd Hoffmann 2014-04-29 07:34:52 UTC
Created attachment 890679 [details]
qemu spice fix

Very likely to be a qemu bug, patch against upstream attached, not tested yet.

Comment 13 Michal Privoznik 2014-04-29 07:44:24 UTC
(In reply to Gerd Hoffmann from comment #12)
> Created attachment 890679 [details]
> qemu spice fix
> 
> Very likely to be a qemu bug, patch against upstream attached, not tested
> yet.

Thank you, Gerd. I thought it was a qemu/spice bug. I'm changing the component, then.

Comment 14 Gerd Hoffmann 2014-05-28 12:31:25 UTC
upstream commit a76a2f729aae21c45c7e9eef8d1d80e94d1cc930

Comment 16 mazhang 2014-05-29 03:23:48 UTC
Reproduced this bug.

Host:
qemu-kvm-rhev-debuginfo-0.12.1.2-2.427.el6.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
qemu-kvm-rhev-0.12.1.2-2.427.el6.x86_64
qemu-img-rhev-0.12.1.2-2.427.el6.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.427.el6.x86_64
kernel-2.6.32-469.el6.x86_64
virt-viewer-0.5.6-8.el6.x86_64/spice-client-0.8.2-15.el6.x86_64

Guest:
RHEL6.5-64

Steps:
1. Start a vm by virt-manager.

2. Open spice session (remote-viewer or spicec)
#remote-viewer spice://127.0.0.1:5900
or
#spicec -h 127.0.0.1 -p 5900

3. Create external snapshot with memory by virsh.
virsh # list
 Id    Name                           State
----------------------------------------------------
 1     vm0                            running

virsh # snapshot-create-as vm0 sn0 --memspec /tmp/sn0
Domain snapshot sn0 created

Result:
Spice session is closed, both remote-viewer and spicec hit this problem.

Comment 17 Gerd Hoffmann 2014-07-02 09:24:08 UTC
scratch build kicked:
http://brewweb.devel.redhat.com/brew/taskinfo?taskID=7656963

patches posted.

Comment 20 Jeff Nelson 2014-08-18 19:57:34 UTC
Fix included in qemu-kvm-0.12.1.2-2.439.el6

Comment 22 Qunfang Zhang 2014-08-19 07:40:25 UTC
Reproduced this bug on qemu-kvm-rhev-0.12.1.2-2.436.el6.x86_64 and verified the fix on qemu-kvm-rhev-0.12.1.2-2.439.el6.x86_64.

On qemu-kvm-rhev-0.12.1.2-2.436.el6.x86_64:

1. Boot up a guest with spice via virt-manager.

CLI:
/usr/libexec/qemu-kvm -name vm -S -M rhel6.5.0 -enable-kvm -m 2048 -realtime mlock=off -smp 2,maxcpus=4,sockets=4,cores=1,threads=1 -uuid d1e21255-abe1-bf17-9646-23b7aaa436cd -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vm.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 -drive file=/home/rhel-6.6.sn0,if=none,id=drive-ide0-0-0,format=qcow2,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=26,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:8c:3a:95,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -spice port=5900,addr=0.0.0.0,disable-ticketing,seamless-migration=on -vga cirrus -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on

2. virsh # list 
 Id    Name                           State
----------------------------------------------------
 1     vm                             running

3. Open the guest desktop with virt-viewer.
# virt-viewer spice://$host_ip:5900 &

4. virsh # snapshot-create-as vm sn0 /tmp/sn0
Domain snapshot sn0 created


Result: the snapshot is created; however, the spice session is closed.

================

On qemu-kvm-rhev-0.12.1.2-2.439.el6.x86_64:

With the same steps, the spice session is NOT closed while taking a snapshot.

So this bug is fixed.

Comment 24 errata-xmlrpc 2014-10-14 06:51:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1490.html