Bug 1091322
| Summary: | fail to reboot guest after migration from RHEL6.5 host to RHEL7.0 host | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Jan Kurik <jkurik> |
| Component: | qemu-kvm | Assignee: | Miroslav Rezanina <mrezanin> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.0 | CC: | acathrow, areis, hhuang, huding, jherrman, juzhang, knoel, kraxel, lersek, michen, mrezanin, owasserm, pbonzini, pm-eus, quintela, qzhang, rkrcmar, virt-maint, xfu |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | qemu-kvm-1.5.3-60.el7_0.1 | Doc Type: | Bug Fix |
| Doc Text: | Prior to this update, a bug in the migration code caused the following error on specific machine types: after a Red Hat Enterprise Linux 6.5 guest was migrated from a Red Hat Enterprise Linux 6.5 host to a Red Hat Enterprise Linux 7.0 host and then restarted, the boot failed and the guest automatically restarted. Thus, the guest entered an endless loop. With this update, the migration code has been fixed, and Red Hat Enterprise Linux 6.5 guests migrated in the aforementioned scenario now boot properly. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-06-10 12:35:02 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1027565 | | |
| Bug Blocks: | | | |
Description
Jan Kurik
2014-04-25 11:16:58 UTC
Reproduced this bug using the following versions:

rhel7: qemu-img-1.5.3-60.el7.x86_64, kernel-3.10.0-121.el7.x86_64
rhel6.5: qemu-kvm-0.12.1.2-2.424.el6.x86_64, kernel-2.6.32-459.el6.x86_64

Steps to Reproduce:

1. Boot a rhel6.5 guest and a win8-64 guest on the source host (rhel6.5):

```
/usr/libexec/qemu-kvm -M rhel6.5.0 -cpu SandyBridge -enable-kvm -m 2G -smp 4 -name rhel6.5 -uuid 6afa5f93-2d4f-420f-81c6-e5fdddbd1c83 -drive file=gluster://10.66.8.240:24007/gv0/rhel6.5-64.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,serial=40c061dd-5d60-4fc5-865f-55db700407f0,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -net none -vnc :1 -monitor stdio -serial unix:/tmp/monitor2,server,nowait
```

2. Start the same guest on the destination host (rhel7.0) with the same command line plus `-incoming tcp:0:5555`:

```
/usr/libexec/qemu-kvm -M rhel6.5.0 -cpu SandyBridge -enable-kvm -m 2G -smp 4 -name rhel6.5 -uuid 6afa5f93-2d4f-420f-81c6-e5fdddbd1c83 -drive file=gluster://10.66.8.240:24007/gv0/rhel6.5-64.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,serial=40c061dd-5d60-4fc5-865f-55db700407f0,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -net none -vnc :1 -monitor stdio -serial unix:/tmp/monitor2,server,nowait -incoming tcp:0:5555
```

3. Migrate from the source monitor:

```
(qemu) migrate -d tcp:10.66.11.149:5555
```

4. Reboot inside the guest:

```
# reboot
```

Actual results:

For the rhel6.5 guest, the reboot fails. Guest console log:
```
Stopping certmonger:                                       [  OK  ]
Stopping rhsmcertd...                                      [  OK  ]
Stopping atd:                                              [  OK  ]
Stopping cups:                                             [  OK  ]
Stopping abrt daemon:                                      [  OK  ]
Stopping sshd:                                             [  OK  ]
Shutting down postfix:                                     [  OK  ]
Stopping crond:                                            [  OK  ]
Stopping automount:                                        [  OK  ]
Stopping acpi daemon:                                      [  OK  ]
Stopping HAL daemon:                                       [  OK  ]
Stopping block device availability: Deactivating block devices:
  [SKIP]: unmount of vg_dhcp9234-lv_root (dm-0) mounted on /
                                                           [  OK  ]
Stopping NetworkManager daemon:                            [  OK  ]
Stopping system message bus:                               [  OK  ]
Stopping rpcbind:                                          [  OK  ]
Stopping auditd: type=1305 audit(1399270730.552:18): audit_pid=0 old=1220 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1
                                                           [  OK  ]
type=1305 audit(1399270730.645:19): audit_enabled=0 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditctl_t:s0 res=1
Shutting down system logger:                               [  OK  ]
Shutting down loopback interface:                          [  OK  ]
ip6tables: Setting chains to policy ACCEPT: filter         [  OK  ]
ip6tables: Flushing firewall rules:                        [  OK  ]
ip6tables: Unloading modules:                              [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
Stopping monitoring for VG vg_dhcp9234: 2 logical volume(s) in volume group "vg_dhcp9234" unmonitored
                                                           [  OK  ]
Sending all processes the TERM signal...                   [  OK  ]
Sending all processes the KILL signal...                   [  OK  ]
Saving random seed:                                        [  OK  ]
Syncing hardware clock to system time                      [  OK  ]
Turning off swap:                                          [  OK  ]
Turning off quotas:                                        [  OK  ]
Unmounting file systems:                                   [  OK  ]
init: Re-executing /sbin/init
Please stand by while rebooting the system...
Restarting system.
machine restart
```

For the win8-64 guest, the reboot fails; the screenshot is as in comment 8 of bz1027565.
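Note that qemu option strings as long as the `-drive` arguments above are easy to corrupt with duplicated `key=value` suboptions when hand-editing. A minimal sketch of a check (the helper name `dup_drive_keys` is hypothetical, not part of qemu-kvm; it naively splits on commas, which is fine for option strings without embedded commas in values):

```shell
# List suboption keys that appear more than once in a comma-separated
# qemu option string (e.g. the value passed to -drive).
# Hypothetical helper, not part of qemu-kvm; naive comma split.
dup_drive_keys() {
    printf '%s\n' "$1" |
        tr ',' '\n' |   # one key=value per line
        cut -d= -f1 |   # keep only the key
        sort | uniq -d  # print keys occurring more than once
}
```

For example, feeding it a `-drive` string with repeated `if`, `id`, and `format` suboptions prints those three keys, one per line.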
Verified this bug using the following versions:

rhel7: qemu-kvm-1.5.3-60.el7_0.1.x86_64, kernel-3.10.0-122.el7.x86_64
rhel6.5: qemu-kvm-0.12.1.2-2.424.el6.x86_64, kernel-2.6.32-459.el6.x86_64

Steps to Reproduce:

1. Boot a rhel6.5 guest and a win8-64 guest on the source host (rhel6.5):

```
/usr/libexec/qemu-kvm -M rhel6.5.0 -cpu SandyBridge -enable-kvm -m 2G -smp 4 -name rhel6.5 -uuid 6afa5f93-2d4f-420f-81c6-e5fdddbd1c83 -drive file=gluster://10.66.8.240:24007/gv0/rhel6.5-64.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,serial=40c061dd-5d60-4fc5-865f-55db700407f0,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -net none -vnc :1 -monitor stdio -serial unix:/tmp/monitor2,server,nowait
```

2. Start the same guest on the destination host (rhel7.0) with the same command line plus `-incoming tcp:0:5555`:

```
/usr/libexec/qemu-kvm -M rhel6.5.0 -cpu SandyBridge -enable-kvm -m 2G -smp 4 -name rhel6.5 -uuid 6afa5f93-2d4f-420f-81c6-e5fdddbd1c83 -drive file=gluster://10.66.8.240:24007/gv0/rhel6.5-64.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,serial=40c061dd-5d60-4fc5-865f-55db700407f0,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -net none -vnc :1 -monitor stdio -serial unix:/tmp/monitor2,server,nowait -incoming tcp:0:5555
```

3. Migrate from the source monitor:

```
(qemu) migrate -d tcp:10.66.11.149:5555
```

4. Reboot inside the guest:

```
# reboot
```

5. Issue system_reset on the qemu-kvm side:

```
(qemu) system_reset
```

Actual results:

For the rhel6.5 guest, reboot and system_reset are successful. For the win8-64 guest, reboot and system_reset are successful.

Verified this bug using:

rhel7: qemu-kvm-1.5.3-60.el7_0.1.x86_64, kernel-3.10.0-122.el7.x86_64
rhel6.5: qemu-kvm-0.12.1.2-2.424.el6.x86_64, kernel-2.6.32-459.el6.x86_64

The guest is win8-64 with a virtio-scsi system disk whose driver is virtio-win-prewhql80.
Steps to Reproduce:

1. Boot the win8-64 guest on the source host (rhel6.5):

```
# /usr/libexec/qemu-kvm -M rhel6.5.0 -cpu Westmere -enable-kvm -m 2048 -realtime mlock=off -smp 4,sockets=2,cores=2,threads=1,maxcpus=160 -drive file=/mnt/win8-64-bak-2.qcow2,if=none,id=drive-scsi-disk,format=qcow2,cache=none,werror=stop,rerror=stop -device virtio-scsi-pci,id=scsi0,addr=0x13,vectors=16,indirect_desc=on,event_idx=off,hotplug=on,num_queues=1,max_sectors=512,cmd_per_lun=16,multifunction=on,rombar=64 -device scsi-hd,drive=drive-scsi-disk,bus=scsi0.0,scsi-id=0,lun=0,id=data-disk2,bootindex=0 -vnc :10 -monitor stdio -nodefconfig -net none -boot menu=on
```

2. Start the same guest on the destination host (rhel7.0) with the same command line plus `-incoming tcp:0:5800`:

```
# /usr/libexec/qemu-kvm -M rhel6.5.0 -cpu Westmere -enable-kvm -m 2048 -realtime mlock=off -smp 4,sockets=2,cores=2,threads=1,maxcpus=160 -drive file=/mnt/win8-64-bak-2.qcow2,if=none,id=drive-scsi-disk,format=qcow2,cache=none,werror=stop,rerror=stop -device virtio-scsi-pci,id=scsi0,addr=0x13,vectors=16,indirect_desc=on,event_idx=off,hotplug=on,num_queues=1,max_sectors=512,cmd_per_lun=16,multifunction=on,rombar=64 -device scsi-hd,drive=drive-scsi-disk,bus=scsi0.0,scsi-id=0,lun=0,id=data-disk2,bootindex=0 -vnc :10 -monitor stdio -nodefconfig -net none -boot menu=on -incoming tcp:0:5800
```

3. Migrate from the source monitor:

```
(qemu) migrate -d tcp:10.66.11.149:5800
```

4. Reboot inside the guest:

```
# reboot
```

5. Issue system_reset on the qemu-kvm side:

```
(qemu) system_reset
```

Actual results:

For the win8-64 guest, reboot and system_reset are successful.

Verified this bug using the following versions:

rhel7: qemu-kvm-1.5.3-60.el7_0.1.x86_64, kernel-3.10.0-122.el7.x86_64
rhel6.5: qemu-kvm-0.12.1.2-2.424.el6.x86_64, kernel-2.6.32-459.el6.x86_64

On an Intel SandyBridge host, the command line and test steps are as in comment 6. The result summary follows.
For the RHEL6.5 64-bit guest (tested with "-M rhel6.5.0/rhel6.3.0"):

```
host                 cold-boot twice  reboot  shutdown  suspend/hibernate
-------------------  ---------------  ------  --------  -----------------
RHEL-6.5->RHEL-7.0   PASS             PASS    PASS      PASS
```

For the Windows 8 64-bit guest with the virtio-win-prewhql80 virtio-scsi driver (tested with "-M rhel6.5.0"):

```
host                 vga     cold-boot twice  reboot  shutdown
-------------------  ------  ---------------  ------  --------
RHEL-6.5->RHEL-7.0   qxl     PASS             PASS    PASS
RHEL-6.5->RHEL-7.0   cirrus  PASS             PASS    PASS
RHEL-6.5->RHEL-7.0   std     PASS             PASS    PASS
```

On an AMD Opteron_G3 host, the command line and test steps are as in comment 6. The result summary follows.

For the RHEL6.5 64-bit guest (tested with "-M rhel6.5.0/rhel6.3.0"):

```
host                 cold-boot twice  reboot  shutdown  suspend/hibernate
-------------------  ---------------  ------  --------  -----------------
RHEL-6.5->RHEL-7.0   PASS             PASS    PASS      PASS
```

For the Windows 8 64-bit guest with the virtio-win-prewhql80 virtio-scsi driver (tested with "-M rhel6.5.0"):

```
host                 vga     cold-boot twice  reboot  shutdown
-------------------  ------  ---------------  ------  --------
RHEL-6.5->RHEL-7.0   qxl     PASS             PASS    PASS
RHEL-6.5->RHEL-7.0   cirrus  PASS             PASS    PASS
RHEL-6.5->RHEL-7.0   std     PASS             PASS    PASS
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0704.html
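A practical note on the test steps above: `migrate -d` returns to the monitor prompt immediately, so the guest should only be rebooted after `info migrate` reports completion. A minimal sketch of extracting the status field from that monitor output so a test script can wait on it (the function name `migrate_status` is hypothetical; the `Migration status:` line format matches the qemu-kvm 1.5 human monitor):

```shell
# Extract the status word (e.g. "active", "completed", "failed") from
# "info migrate" human-monitor output read on stdin.
# Hypothetical helper; line format per the qemu-kvm 1.5 human monitor.
migrate_status() {
    sed -n 's/^Migration status: *//p'
}
```

A script driving the monitor over `-serial unix:/tmp/monitor2,server,nowait` could poll this in a loop and proceed to the reboot step once it reads `completed`.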