Bug 1005103
Summary: | Migration should fail when an offline guest is migrated to a file in a read-only directory. | | |
| --- | --- | --- | --- |
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Qian Guo <qiguo> |
| Component: | qemu-kvm | Assignee: | Juan Quintela <quintela> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.0 | CC: | acathrow, hhuang, huding, juzhang, michen, mrezanin, qzhang, virt-maint, xfu, xuhan |
| Target Milestone: | rc | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | qemu-kvm-1.5.3-53.el7 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-06-13 11:57:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Qian Guo, 2013-09-06 08:44:11 UTC
Hit the same problem when testing offline migration to a location without enough space: when migration starts, qemu prints

```
gzip: stdout: No space left on device
```

but the migration still continues and completes:

```
(qemu) info migrate
capabilities: xbzrle: off auto-converge: off
Migration status: completed
total time: 15890 milliseconds
downtime: 26 milliseconds
transferred ram: 486519 kbytes
remaining ram: 0 kbytes
total ram: 1180104 kbytes
duplicate: 180757 pages
skipped: 0 pages
normal: 120996 pages
normal bytes: 483984 kbytes
(qemu) info status
VM status: paused (postmigrate)
```

It works for me (/mnt/mirror is mounted read-only):

```
[root@deus ~]# touch /mnt/mirror/kkk
touch: cannot touch `/mnt/mirror/kkk': Read-only file system
[root@deus ~]# mount | grep mirror
nfs:/mnt/mirror on /mnt/mirror type nfs4 (rw,addr=192.168.10.200,clientaddr=192.168.10.231)
```

As you can see, just after doing that, migrate reports the error:

```
(qemu) migrate -d "exec:gzip -c > /mnt/mirror/kk.gz"
migrate: failed to popen the migration target: Cannot allocate memory
(qemu) info status
VM status: running
(qemu) info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: off zero-blocks: off
Migration status: failed
total time: 0 milliseconds
(qemu)
```

Forget the previous comment. I was doing something wrong, and the migration was failing for a different reason. :p

Fix included in qemu-kvm-1.5.3-53.el7.

Reproduced this bug with the following versions:
qemu-kvm-1.5.3-35.el7.x86_64
kernel-3.10.0-107.el7.x86_64

Steps to Reproduce:

1. Set up an NFS server that exports read-only:

```
# cat /etc/exports
/home/ *(ro,no_root_squash,sync)
```

2. Mount this NFS export on the host:

```
# mount
...
10.66.106.3:/home on /opt/mnt type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.66.5.112,local_lock=none,addr=10.66.106.3)
...
```
3. Boot the guest on this host:

```
# /usr/libexec/qemu-kvm -cpu SandyBridge -enable-kvm -m 4096 \
    -smp 4,sockets=1,cores=4,threads=1 -name rhel7base \
    -drive file=/mnt/rhel7.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,werror=stop,rerror=stop,aio=native \
    -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0 \
    -boot menu=on -monitor stdio \
    -netdev tap,id=hostnet0,ifname=guest1,script=/etc/qemu-ifup,vhost=on \
    -device virtio-net,netdev=hostnet0,mac=54:52:1b:35:3c:16,id=test \
    -nodefaults -nodefconfig \
    -spice port=5930,seamless-migration=on,disable-ticketing \
    -vga qxl -global qxl-vga.vram_size=67108864 \
    -device virtio-balloon-pci,id=balloon1 \
    -qmp tcp:0:4446,server,nowait \
    -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 \
    -serial unix:/tmp/qiguo,server,nowait -vnc :3
```

4. Try to migrate the guest offline to the read-only directory:

```
(qemu) migrate -d "exec:gzip -c >/opt/mnt/test.gz"
(qemu) sh: /opt/mnt/test.gz: Read-only file system
```

Actual results:
Though qemu prints the error when migration starts:

```
(qemu) sh: /opt/mnt/test.gz: Read-only file system
```

the migration is still in progress and completes:

```
(qemu) info migrate
capabilities: xbzrle: off auto-converge: off
Migration status: completed
total time: 24228 milliseconds
downtime: 1393 milliseconds
transferred ram: 707774 kbytes
remaining ram: 0 kbytes
total ram: 4325832 kbytes
duplicate: 921664 pages
skipped: 0 pages
normal: 174577 pages
normal bytes: 698308 kbytes
(qemu) info status
VM status: paused (postmigrate)
```

Verified this bug with the following versions:
qemu-kvm-1.5.3-53.el7.x86_64
kernel-3.10.0-107.el7.x86_64

Steps to Reproduce:

1. Set up an NFS server that exports read-only:

```
# cat /etc/exports
/home/ *(ro,no_root_squash,sync)
```

2. Mount this NFS export on the host:

```
# mount
...
10.66.106.3:/home on /opt/mnt type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.66.5.112,local_lock=none,addr=10.66.106.3)
...
```
3. Boot the guest on this host:

```
# /usr/libexec/qemu-kvm -cpu SandyBridge -enable-kvm -m 4096 \
    -smp 4,sockets=1,cores=4,threads=1 -name rhel7base \
    -drive file=/mnt/rhel7.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,werror=stop,rerror=stop,aio=native \
    -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0 \
    -boot menu=on -monitor stdio \
    -netdev tap,id=hostnet0,ifname=guest1,script=/etc/qemu-ifup,vhost=on \
    -device virtio-net,netdev=hostnet0,mac=54:52:1b:35:3c:16,id=test \
    -nodefaults -nodefconfig \
    -spice port=5930,seamless-migration=on,disable-ticketing \
    -vga qxl -global qxl-vga.vram_size=67108864 \
    -device virtio-balloon-pci,id=balloon1 \
    -qmp tcp:0:4446,server,nowait \
    -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 \
    -serial unix:/tmp/qiguo,server,nowait -vnc :3
```

4. Try to migrate the guest offline to the read-only directory:

```
(qemu) migrate -d "exec:gzip -c >/opt/mnt/test.gz"
(qemu) sh: /opt/mnt/test.gz: Read-only file system
```

Actual results:
After step 4 the migration fails and the guest keeps running:

```
(qemu) info migrate
capabilities: xbzrle: off x-rdma-pin-all: off auto-converge: off zero-blocks: off
Migration status: failed
total time: 0 milliseconds
(qemu) info status
VM status: running
```

Based on the above results, this bug has been fixed.

This request was resolved in Red Hat Enterprise Linux 7.0. Contact your manager or support representative if you have further questions about the request.
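The reporter's `touch /mnt/mirror/kkk` check above is the simplest way to confirm a migration target is writable before issuing the `migrate` command. A minimal sketch of that probe, wrapped in a function; the `/opt/mnt` path in the example usage is taken from the reproduction steps and is only an assumption about your environment:

```shell
#!/bin/sh
# Sketch: check that a migration target directory is writable before
# running `migrate -d "exec:gzip -c > <dir>/test.gz"` on the monitor.
# Returns 0 if writable, 1 if not (e.g. a read-only NFS mount).
probe_migration_target() {
    dir=$1
    probe="$dir/.migrate-probe.$$"
    if touch "$probe" 2>/dev/null; then
        # Clean up the probe file; the directory accepts writes.
        rm -f "$probe"
        return 0
    fi
    return 1
}

# Example (path from the reproduction steps, adjust for your setup):
#   probe_migration_target /opt/mnt || echo "target is read-only" >&2
```

With the fix in qemu-kvm-1.5.3-53.el7 the migration itself now fails cleanly in this situation, but probing first avoids starting a migration that is doomed to fail.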