Description of problem:
Commit 4146b46c42e0989cb5842e04d88ab6ccb1713a48 (block: Produce zeros when protocols reading beyond end of file) breaks qemu-iotests ./check -qcow2 022. This happens because qcow2 temporarily sets ->growable = 1 for vmstate accesses, which are stored beyond the end of the regular image data. The fix introduces bs->zero_beyond_eof so that qcow2_load_vmstate() can temporarily disable ->zero_beyond_eof in addition to enabling ->growable.

Version-Release number of selected component (if applicable):

How reproducible:
100%

Steps to Reproduce:
1. Run the qemu-iotests ./check -qcow2 022 test

Actual results:
qemu-iotests does not pass

Expected results:
qemu-iotests should pass

Additional info:
The fix is in upstream now. I will backport it to RHEL shortly.
Confirmed with Asias: this bug can also be reproduced via 'savevm'/'loadvm'. Reproduced on qemu-kvm-0.12.1.2-2.401.el6.x86_64. 'loadvm' failed with an error and the guest paused.

(qemu) savevm 1
(qemu) info status
VM status: running
(qemu) info snapshots
Snapshot devices: drive-virtio-disk0
Snapshot list (from drive-virtio-disk0):
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         1                      284M 2013-09-13 14:49:28   00:00:16.677
(qemu) logfile log
(qemu) loadvm 1
Error -22 while loading VM state
(qemu) info status
VM status: paused (restore-vm)
(qemu)
*** Bug 1005755 has been marked as a duplicate of this bug. ***
Verified this bug on qemu-kvm-0.12.1.2-2.411.el6.x86_64, passed.

Host:
[root@localhost home]# uname -r
2.6.32-422.el6.x86_64
[root@localhost home]# rpm -q qemu-kvm
qemu-kvm-0.12.1.2-2.411.el6.x86_64

Command line:
# /usr/libexec/qemu-kvm -M rhel6.5.0 -enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 -enable-kvm -name t2-rhel6.4-32 -uuid 61b6c504-5a8b-4fe1-8347-6c929b750dde -k en-us -rtc base=localtime,clock=host,driftfix=slew -no-kvm-pit-reinjection -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=input0 -drive file=/home/RHEL-Server-6.4-64-virtio.qcow2,if=none,id=disk0,format=qcow2,werror=stop,rerror=stop,aio=native -device ide-drive,bus=ide.0,unit=1,drive=disk0,id=disk0 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,drive=drive-ide0-1-0,bus=ide.1,unit=0,id=cdrom -netdev tap,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=44:37:E6:5E:91:85,bus=pci.0,addr=0x5 -monitor stdio -qmp tcp:0:6666,server,nowait -chardev socket,path=/tmp/isa-serial,server,nowait,id=isa1 -device isa-serial,chardev=isa1,id=isa-serial1 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x8 -chardev socket,id=charchannel0,path=/tmp/serial-socket,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,path=/tmp/foo,server,nowait,id=foo -device virtconsole,chardev=foo,id=console0 -chardev spicevmc,id=charchannel1,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x9 -spice port=5930,disable-ticketing -vga qxl -global qxl-vga.vram_size=67108864 -k en-us -boot c -chardev socket,path=/tmp/qga.sock,server,nowait,id=qga0 -device virtserialport,bus=virtio-serial0.0,chardev=qga0,name=org.qemu.guest_agent.0 -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0

QEMU 0.12.1 monitor - type 'help' for more information
(qemu)
main_channel_link: add main channel client
main_channel_handle_parsed: agent start
main_channel_handle_parsed: net test: latency 99.500000 ms, bitrate 5868194842 bps (5596.346704 Mbps)
inputs_connect: inputs channel client create
red_dispatcher_set_cursor_peer:
(qemu) savevm 1
(qemu) info snapshots
Snapshot devices: disk0
Snapshot list (from disk0):
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         1                      579M 2013-10-10 11:41:42   00:01:05.407
(qemu) info status
VM status: running
(qemu) logfile log
(qemu) loadvm 1
red_dispatcher_loadvm_commands:
inputs_detach_tablet:
(qemu) info status
VM status: running
(qemu)
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-1553.html