Description of problem:
After migrating a RHEL 7.2 guest with a virtio-1-enabled virtio-scsi disk, running "fdisk -l" in the guest produces no output and the process enters the D+ (uninterruptible sleep) state. Without virtio-1 enabled, the issue does not occur.

Version-Release number of selected component (if applicable):
source host:
3.10.0-297.el7.x86_64
qemu-kvm-rhev-2.3.0-13.el7.x86_64
destination host:
3.10.0-300.el7.x86_64
qemu-kvm-rhev-2.3.0-13.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot a RHEL 7.2 guest with a virtio-1-enabled virtio-scsi disk on the source host:

/usr/libexec/qemu-kvm -S -M pc -cpu Opteron_G3 -enable-kvm -m 4096 \
-smp 4,sockets=4,cores=1,threads=1 -no-kvm-pit-reinjection -name sluo-test \
-uuid b18fdd6c-a213-4022-9ca4-5d07225e40b0 \
-rtc base=localtime,clock=host,driftfix=slew \
-device virtio-serial-pci,disable-legacy=true,disable-modern=false,id=virtio-serial0,max_ports=16,vectors=0,bus=pci.0,addr=0x3,ioeventfd=on \
-chardev socket,id=channel1,path=/home/testdisk/socketfile4Gqcow2,server,nowait \
-device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port1 \
-chardev socket,id=channel2,path=/home/testdisk/socketfile3Gqcow2,server,nowait \
-device virtserialport,chardev=channel2,name=com.redhat.rhevm.vdsm1,bus=virtio-serial0.0,id=port2 \
-drive file=/home/RHEL-Server-7.2-64-virtio-scsi.qcow2,if=none,id=drive-system-disk,format=qcow2,cache=none,aio=native,werror=stop,rerror=stop,serial=QEMU-DISK1 \
-device virtio-scsi-pci,id=scsi0,bus=pci.0,ioeventfd=off \
-device scsi-hd,bus=scsi0.0,drive=drive-system-disk,id=system-disk,channel=0,scsi-id=0,lun=0,ver=mike,serial=ababab,bootindex=1 \
-netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup \
-device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=08:2e:5f:0a:1d:b1,bus=pci.0,addr=0x5,bootindex=2 \
-device virtio-balloon-pci,disable-legacy=true,disable-modern=false,id=ballooning,bus=pci.0,addr=0x6 \
-serial unix:/tmp/ttyS0,server,nowait \
-qmp tcp:0:4444,server,nowait -k en-us -boot menu=on \
-vnc :1 -spice disable-ticketing,port=5931 -vga qxl -monitor stdio \
-device pci-bridge,id=bridge1,bus=pci.0,chassis_nr=1 \
-drive file=/home/testdisk/virtio-scsi-disk10G.qcow2,if=none,id=drive-data-disk1,format=qcow2,cache=none,werror=stop,rerror=stop,aio=native,media=disk \
-device virtio-scsi-pci,id=scsi1,bus=bridge1,addr=0xa,disable-legacy=true,disable-modern=false \
-device scsi-hd,bus=scsi1.0,drive=drive-data-disk1,serial=test,id=data-disk1,physical_block_size=512,logical_block_size=512 \
-drive file=/home/driver.iso,if=none,media=cdrom,readonly=on,format=raw,id=cdrom1 \
-device scsi-cd,bus=scsi1.0,drive=cdrom1,id=scsi0-0

2. Boot the guest in listening mode on the destination host (see the sketch after this report).
3. Migrate to the destination host.
4. After migration, run "fdisk -l" and "ps aux | grep fdisk" in the guest.

Actual results:
"fdisk -l" produces no output, and "ps aux | grep fdisk" shows the process stuck in the D+ state:

ps aux | grep fdisk
root      3109  0.0  0.0 112420   956 pts/0    D+   11:16   0:00 fdisk -l
root      3836  0.0  0.0 112640   928 pts/1    S+   12:24   0:00 grep --color=auto fdisk

Expected results:
"fdisk -l" runs successfully.

Additional info:
Without virtio-1 enabled, the issue does not occur.
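For completeness, a minimal sketch of steps 2-3 in commands. The incoming port (5800) and the destination hostname placeholder are assumptions for illustration, not taken from the original report; the destination command line must otherwise match step 1.

# On the destination host: same command line as step 1, plus -incoming
# (port 5800 is an assumption; use any free TCP port)
/usr/libexec/qemu-kvm <same options as step 1> -incoming tcp:0:5800

# On the source host: start the migration from the QEMU monitor
# (the command line above uses -monitor stdio) and watch its progress
(qemu) migrate -d tcp:<destination-host>:5800
(qemu) info migrate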
Did upstream qemu.git work for this?
Fix included in qemu-kvm-rhev-2.3.0-18.el7
Reproduce the issue:
source host:
3.10.0-303.el7.x86_64
qemu-kvm-rhev-2.3.0-17.el7.x86_64
destination host:
3.10.0-304.el7.x86_64
qemu-kvm-rhev-2.3.0-17.el7.x86_64

steps:
The steps are the same as in the description.

results:
"fdisk -l" produces no output, and "ps aux | grep fdisk" shows the process in the D+ state:

ps aux | grep fdisk
root      3109  0.0  0.0 112420   956 pts/0    D+   11:16   0:00 fdisk -l
root      3836  0.0  0.0 112640   928 pts/1    S+   12:24   0:00 grep --color=auto fdisk

============================================================================

Verify the issue:
source host:
3.10.0-303.el7.x86_64
qemu-kvm-rhev-2.3.0-18.el7.x86_64
destination host:
3.10.0-304.el7.x86_64
qemu-kvm-rhev-2.3.0-18.el7.x86_64

steps:
The steps are the same as in the description.

results:
"fdisk -l" runs successfully.

Based on the above results, the bug is fixed.
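A quick way to confirm which builds are in play and that the guest is healthy after migration. This is a minimal sketch using only stock tooling; nothing here is specific to this bug beyond the package name under test.

# On each host: confirm the kernel and qemu-kvm-rhev builds under test
uname -r
rpm -q qemu-kvm-rhev

# In the guest after migration: confirm fdisk completes and is not stuck in D state
fdisk -l > /dev/null && echo OK
ps -o pid,stat,cmd -C fdisk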
According to comment 10, setting this issue to VERIFIED.
*** Bug 1247541 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2015-2546.html