Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 918201

Summary: [qemu-kvm-rhev] live migration of guest is failing between different versions of qemu-kvm-rhev (same minor release)
Product: Red Hat Enterprise Linux 6
Reporter: Haim <hateya>
Component: qemu-kvm
Assignee: Virtualization Maintenance <virt-maint>
Status: CLOSED WORKSFORME
QA Contact: Virtualization Bugs <virt-bugs>
Severity: high
Docs Contact:
Priority: unspecified
Version: 6.4
CC: abaron, acathrow, areis, bazulay, bsarathy, dyasny, ebenahar, iheim, juzhang, mkenneth, quintela, qzhang, virt-maint, yeylon, ykaul
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-03-07 19:40:17 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Haim 2013-03-05 17:21:12 UTC
Description of problem:

When trying to migrate a VM from one host to another, I get the following error:

qemu: warning: error while loading state for instance 0x0 of device 'ram'
load of migration failed
2013-03-05 17:06:38.422+0000: shutting down

Please note that the two hosts run different versions of qemu-kvm-rhev; migration between them should be supported:

source: qemu-kvm-rhev-tools-0.12.1.2-2.348.el6.x86_64
dest: qemu-kvm-rhev-0.12.1.2-2.355.el6_4.1.x86_64

qemu-kvm-command line:

LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name gadi-rhevm -S -M rhel6.3.0 -cpu Opteron_G1 -enable-kvm -m 4096 -smp 2,sockets=1,cores=2,threads=1 -uuid 515e7d4b-b3c0-43ea-b86e-76f78386d02f -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=6Server-6.4.0.4.el6,serial=4C4C4544-004A-4410-804C-B5C04F39354A,uuid=515e7d4b-b3c0-43ea-b86e-76f78386d02f -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/gadi-rhevm.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-03-05T17:06:36,driftfix=slew -no-shutdown -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/9800f9c4-a235-4cbd-8fe7-565d1a53f7b5/3fea4681-d7a5-4530-8115-001674f8b422/images/5fa9eac5-720d-448c-8f25-48b5e4d33cd0/2c627a09-aee5-450e-8040-39dcecfb9fa7,if=none,id=drive-virtio-disk0,format=qcow2,serial=5fa9eac5-720d-448c-8f25-48b5e4d33cd0,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=35,id=hostnet0,vhost=on,vhostfd=36 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:97:05,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/gadi-rhevm.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/gadi-rhevm.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device 
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5904,tls-port=5905,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.vram_size=67108864 -incoming tcp:0.0.0.0:49161 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
qemu: warning: error while loading state for instance 0x0 of device 'ram'
load of migration failed
2013-03-05 17:06:38.422+0000: shutting down

src:
libvirt-client-0.10.2-18.el6.x86_64
libvirt-python-0.10.2-18.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6_4.1.x86_64
libvirt-debuginfo-0.10.2-18.el6.x86_64
libvirt-lock-sanlock-0.10.2-18.el6.x86_64
libvirt-0.10.2-18.el6.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.348.el6.x86_64


dest:
libvirt-client-0.10.2-18.el6.x86_64
libvirt-python-0.10.2-18.el6.x86_64
libvirt-0.10.2-18.el6.x86_64
libvirt-lock-sanlock-0.10.2-18.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6_4.1.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.355.el6_4.1.x86_64
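As a side note, the build-number mismatch between the two hosts can be checked mechanically from the package NVRs above. A minimal sketch, using only the package names from this report (the `build_of` helper is illustrative, not a real tool):

```shell
#!/bin/sh
# Extract the build number from a qemu-kvm-rhev NVR, e.g. "348" from
# qemu-kvm-rhev-tools-0.12.1.2-2.348.el6.x86_64, then compare the hosts.
build_of() {
    # Strip everything up to the last "-2." and everything from ".el6" on.
    echo "$1" | sed -e 's/.*-2\.//' -e 's/\.el6.*//' | cut -d. -f1
}

src_build=$(build_of qemu-kvm-rhev-tools-0.12.1.2-2.348.el6.x86_64)
dst_build=$(build_of qemu-kvm-rhev-0.12.1.2-2.355.el6_4.1.x86_64)

echo "source build: $src_build"
echo "dest build:   $dst_build"
if [ "$src_build" != "$dst_build" ]; then
    echo "WARNING: qemu-kvm-rhev builds differ across hosts"
fi
```

In this bug the source reports build 348 and the destination 355, which is exactly the mismatch Comment 1 identifies as the cause.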

Comment 1 Qunfang Zhang 2013-03-06 06:32:24 UTC
Hi, Haim

I can reproduce this issue with the qemu-kvm-348 and qemu-kvm-355_4.2 builds, and I don't think this is a problem: starting with qemu-kvm-350, the data-plane patches were backported and a new property, "dev-prop: x-data-plane = off", was introduced for virtio block devices.
So when migrating between an older qemu-kvm (lower than qemu-kvm-350) and a newer version (>= qemu-kvm-350), the virtio block device properties differ and the migration load fails.

"(qemu) info qtree" compare:  

# diff qemu-kvm-348.txt qemu-kvm-355_4.2.txt
......
63a64
>         dev-prop: serial = "5fa9eac5-720d-448c-8"
64a66
>         dev-prop: x-data-plane = off
......
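The comparison above can be reproduced by saving the "info qtree" monitor output from each build to a file and diffing them. A minimal sketch, with the two dumps stubbed using the relevant lines from this bug (in practice you would capture the real monitor output on each host):

```shell
#!/bin/sh
# Stub "info qtree" excerpts for the two builds; only the newer build
# carries the serial and x-data-plane virtio-blk properties.
cat > qtree-348.txt <<'EOF'
        dev-prop: drive = drive-virtio-disk0
EOF
cat > qtree-355.txt <<'EOF'
        dev-prop: drive = drive-virtio-disk0
        dev-prop: serial = "5fa9eac5-720d-448c-8"
        dev-prop: x-data-plane = off
EOF
# Lines present only in the newer build are prefixed with ">".
diff qtree-348.txt qtree-355.txt | grep '^>'
```

Any `>` line in the output is a device property the destination qemu-kvm exposes but the source does not, which is the kind of mismatch that makes the incoming migration stream fail to load.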

Also, since customers will not use our internal builds, we only need to make sure that migration between the final builds shipped with errata works.
So I tested the following cross-version migration combinations:

(1) qemu-kvm-355(released 6.4) <-> qemu-kvm-355_4.1
(2) qemu-kvm-355(released 6.4) <-> qemu-kvm-355_4.2
(3) qemu-kvm-355_4.1 <-> qemu-kvm-355_4.2

All three of the above combinations passed ping-pong migration with the same command line as in the bug description.
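For reference, one such ping-pong cycle can be driven with virsh. A dry-run sketch that only prints the commands (the host names hostA/hostB are placeholders, not from this report; the guest name is taken from the qemu command line above — drop the echo to actually run them):

```shell
#!/bin/sh
# Ping-pong live migration sketch: hostA -> hostB, then back.
GUEST=gadi-rhevm
SRC=hostA   # placeholder source host
DST=hostB   # placeholder destination host

echo virsh migrate --live "$GUEST" "qemu+ssh://$DST/system"
echo virsh -c "qemu+ssh://$DST/system" migrate --live "$GUEST" "qemu+ssh://$SRC/system"
```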

=================

Hi, Juan
Could you double-check and confirm?


Thanks,
Qunfang

Comment 2 Qunfang Zhang 2013-03-06 08:46:08 UTC
Hi, Juan
This issue may not be related to the x-data-plane property; it may be related to bug 869981.
I just re-tested rhel6.3 host <-> rhel6.4.z cross-version migration, and migration succeeds even though the "data-plane" property appears in the "info qtree" output on rhel6.4 but not in the output on rhel6.3.

Anyway, there's no problem between any of the following host versions: the rhel6.3 release, the rhel6.4 release, and the rhel6.4-z build.

Comment 3 Ademar Reis 2013-03-07 19:40:17 UTC
(In reply to comment #2)

[...]

> Anyway, there's no problem between any of the following host versions:
> rhel6.3 release; rhel6.4 release, rhel6.4-z build.

That's correct. We don't support internal builds created during the development phase. If migration works between 6.3 and 6.4{,.z}, then there's no problem.

Comment 4 Ademar Reis 2013-03-07 19:40:46 UTC
*** Bug 918218 has been marked as a duplicate of this bug. ***