Bug 1219908
| Summary: | Writing snapshots with "virsh snapshot-create-as" command slows as more snapshots are created | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Robert McSwain <rmcswain> |
| Component: | qemu-kvm | Assignee: | Kevin Wolf <kwolf> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | high | Docs Contact: | Jiri Herrmann <jherrman> |
| Priority: | unspecified | | |
| Version: | 6.6 | CC: | ailan, chayang, eblake, juzhang, mkenneth, qizhu, qzhang, rbalakri, rpacheco, virt-maint, yanyang |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | qemu-kvm-0.12.1.2-2.482.el6 | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-05-10 20:58:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1172231, 1275757 | | |

Doc Text:

> **Consistent save times for taking guest snapshots**
>
> Prior to this update, saving a KVM guest snapshot involved overwriting the state of the virtual machine using copy-on-write operations. As a consequence, taking every snapshot after the first one took an excessive amount of time. Now, the guest state written in the active layer is discarded after the snapshot is taken, which avoids the need for copy-on-write operations. As a result, saving subsequent snapshots is now as quick as saving the first one.
Description
Robert McSwain, 2015-05-08 16:50:34 UTC
Slow internal snapshots are caused by qemu, not libvirt. This has been reported in the past. One workaround is to use external snapshots instead of internal ones. See also bug 1208808 for the RHEL 7 counterpart.

My customer mentioned that he saw this resolved upstream somewhere between Fedora 18 and Fedora 20, for what it's worth with RHEL.

Customer noted this: "I have run this test in a RHEL-7.1 VM and there is no slowdown for the second and third snapshots. I also ran it in a RHEL-6.6 VM to be sure that running nested VMs was not an issue. Sure enough, there is slowdown in writing snapshots with RHEL-6.6. The person reporting a similar issue in RHEL-7 must have been testing on 7.0. This is good news, as an important feature in qemu-kvm will finally work as the documentation says it should."

Backporting upstream commit 1ebf561c might fix this and looks easy enough.

Reproduced on:
- Red Hat Enterprise Linux Server release 6.7
- kernel-2.6.32-584.el6.x86_64
- qemu-kvm-0.12.1.2-2.481.el6.x86_64

Steps:

1. Create the snapshot:

```
# qemu-img create -f qcow2 -b /home/RHEL-Server-6.7-64-virtio.qcow2 /home/overlay.qcow2
```

2. Launch the guest with the snapshot:

```
# /usr/libexec/qemu-kvm -name linux -m 8192 -realtime mlock=off \
    -smp 2,sockets=2,cores=1,threads=1 -uuid 287294d8-7422-447d-adeb-fd6f71501a42 \
    -nodefaults -monitor unix:/home/monitor,server,nowait \
    -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard \
    -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 \
    -boot order=c,menu=on \
    -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 \
    -drive file=/home/overlay.qcow2,cache=none,if=none,id=drive-virtio-disk0,format=qcow2 \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0 \
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 \
    -msg timestamp=on -vnc :3 -vga cirrus
```

3. Run savevm three times:

```
# time echo savevm | nc -U /home/monitor
```

Results:

```
(qemu) savevm
real 0m7.708s
(qemu) savevm
real 0m20.477s
(qemu) savevm
real 0m18.886s
```

Fix included in qemu-kvm-0.12.1.2-2.482.el6.

Verified on:
- kernel-2.6.32-590.el6.x86_64
- qemu-img-0.12.1.2-2.482.el6.x86_64
- qemu-kvm-0.12.1.2-2.482.el6.x86_64

Steps:

1. Create the snapshot:

```
# qemu-img create -f qcow2 -b /home/RHEL-Server-6.7-64-virtio.qcow2 /home/overlay.qcow2
```

2.
Launch the guest with the snapshot:

```
# /usr/libexec/qemu-kvm -name linux -cpu Westmere -m 8192 -realtime mlock=off \
    -smp 2,sockets=2,cores=1,threads=1 -uuid 287294d8-7422-447d-adeb-fd6f71501a42 \
    -nodefaults -monitor unix:/home/monitor,server,nowait \
    -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard \
    -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 \
    -boot order=c,menu=on \
    -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 \
    -drive file=/home/overlay.qcow2,cache=none,if=none,id=drive-virtio-disk0,format=qcow2 \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0 \
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 \
    -msg timestamp=on -vnc :3 -vga cirrus
```

3. Run savevm five times:

```
# for((i=0;i<5;i++)) do time echo savevm|nc -U /home/monitor; done
```

Results:

```
[root@dell-per715-05 ~]# for((i=0;i<5;i++)) do time echo savevm|nc -U /home/monitor; done
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) savevm
(qemu) real 0m15.063s
user 0m0.000s
sys 0m0.004s
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) savevm
(qemu) real 0m12.819s
user 0m0.000s
sys 0m0.001s
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) savevm
(qemu) real 0m12.816s
user 0m0.000s
sys 0m0.002s
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) savevm
(qemu) real 0m12.806s
user 0m0.000s
sys 0m0.002s
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) savevm
(qemu) real 0m12.801s
user 0m0.000s
sys 0m0.002s
```

Conclusion: There is no significant slowdown for the second and subsequent snapshots, so the bug should be fixed.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0815.html
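The external-snapshot workaround mentioned in the comments can be sketched as below. This is a minimal sketch, not taken from the bug report: the domain name `rhel6-guest`, the disk target `vda`, and the overlay path are assumptions; substitute your own values.

```shell
#!/bin/sh
# External disk-only snapshot: rather than rewriting saved state inside
# the existing qcow2 image (which triggers the copy-on-write slowdown
# described in this bug), virsh creates a fresh overlay file and
# redirects new guest writes to it.
DOM=rhel6-guest                                      # assumed domain name
OVERLAY=/var/lib/libvirt/images/${DOM}-snap1.qcow2   # assumed overlay path

# Guarded so the sketch is a no-op on hosts without libvirt installed.
if command -v virsh >/dev/null 2>&1; then
    virsh snapshot-create-as "$DOM" snap1 \
        --disk-only --atomic \
        --diskspec "vda,file=$OVERLAY"
fi
```

Note that a `--disk-only` snapshot captures disk state only; unlike the internal `savevm` path it does not save guest RAM unless `--memspec` is also given.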