Bug 1219908 - Writing snapshots with "virsh snapshot-create-as" command slows as more snapshots are created
Summary: Writing snapshots with "virsh snapshot-create-as" command slows as more snapshots are created
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Kevin Wolf
QA Contact: Virtualization Bugs
Docs Contact: Jiri Herrmann
URL:
Whiteboard:
Depends On:
Blocks: 1172231 1275757
 
Reported: 2015-05-08 16:50 UTC by Robert McSwain
Modified: 2019-11-14 06:43 UTC
CC List: 11 users

Fixed In Version: qemu-kvm-0.12.1.2-2.482.el6
Doc Type: Bug Fix
Doc Text:
Consistent save times for taking guest snapshots

Prior to this update, saving a KVM guest snapshot involved overwriting the state of the virtual machine using copy-on-write operations. As a consequence, taking every snapshot after the first one took an excessive amount of time. Now, the guest state written in the active layer is discarded after the snapshot is taken, which avoids the need for copy-on-write operations. As a result, saving subsequent snapshots is now as quick as saving the first one.
Clone Of:
Environment:
Last Closed: 2016-05-10 20:58:41 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Red Hat Product Errata RHBA-2016:0815 (priority: normal, status: SHIPPED_LIVE) - qemu-kvm bug fix and enhancement update - last updated 2016-05-10 22:39:31 UTC

Description Robert McSwain 2015-05-08 16:50:34 UTC
I have set up a virtual machine with a backing file and an overlay file. The libvirt XML file uses the overlay file to launch the VM. The attached script creates the VM domain and then saves three snapshots to the overlay file. The first snapshot completes in about 6 seconds. However, the second and third snapshots each take about 4 minutes to complete. The snapshots are the same size, so why is the first snapshot fast while the subsequent snapshots are unusably slow? Deleting the snapshots and re-creating them in the already allocated space is faster; however, the second and third snapshots still take about 20 times longer than the first one. Is this a configuration issue or an inherent problem with libvirt and qemu-kvm? (A rough sketch of the workflow is included below.)

Data coming as a private comment shortly.
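
For illustration, a rough sketch of the reported workflow (not the customer's attached script, which is not shown here); the base image path, the domain XML, the domain name "rhel66-guest", and the snapshot names are all placeholders:

 # Create an overlay on top of the base image; the domain XML points its disk at the overlay.
 qemu-img create -f qcow2 -b /home/rhel66-base.qcow2 /home/overlay.qcow2
 virsh define /home/rhel66-guest.xml
 virsh start rhel66-guest
 # Time three internal snapshots; per the report, runs 2 and 3 were far slower than run 1.
 for i in 1 2 3; do
     time virsh snapshot-create-as rhel66-guest "snap$i"
 done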

Comment 3 Eric Blake 2015-05-08 19:38:05 UTC
Slow internal snapshots are caused by qemu, not libvirt. This has been reported in the past. One workaround is to use external snapshots instead of internal ones (see the sketch after this comment).

See also bug 1208808 for the RHEL 7 counterpart
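
For reference, a minimal sketch of the external-snapshot workaround mentioned above, assuming a running domain named "rhel66-guest", a disk target of "vda", and writable paths under /home (all placeholders):

 # Disk-only external snapshot: subsequent guest writes go to a new overlay file
 # instead of being folded into the existing qcow2 image.
 virsh snapshot-create-as rhel66-guest ext-snap1 --disk-only --atomic \
     --diskspec vda,snapshot=external,file=/home/ext-snap1-vda.qcow2
 # To also save guest RAM externally, drop --disk-only and add:
 #   --memspec file=/home/ext-snap1.mem,snapshot=external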

Comment 4 Robert McSwain 2015-05-12 13:34:21 UTC
My customer mentioned that he saw this resolved upstream somewhere between Fedora 18 and Fedora 20, for whatever that may be worth for RHEL.

Comment 5 Robert McSwain 2015-05-15 21:45:24 UTC
Customer noted this:

I have run this test in a RHEL-7.1 VM and there is no slowdown for the second and third snapshots. I also ran it in a RHEL-6.6 VM to be sure that running nested VMs was not an issue; sure enough, there is a slowdown in writing snapshots with RHEL-6.6. The person reporting a similar issue on RHEL 7 must have been testing on 7.0. This is good news, as an important feature in qemu-kvm will finally work as the documentation says it should.

Comment 6 Kevin Wolf 2015-10-29 15:44:54 UTC
Backporting upstream commit 1ebf561c might fix this and looks easy enough.

Comment 7 Qianqian Zhu 2015-11-11 02:16:57 UTC
Reproduced:

Red Hat Enterprise Linux Server release 6.7
kernel-2.6.32-584.el6.x86_64
qemu-kvm-0.12.1.2-2.481.el6.x86_64

Steps:
1. Create an overlay image backed by the base image:
 # qemu-img create -f qcow2 -b /home/RHEL-Server-6.7-64-virtio.qcow2 /home/overlay.qcow2
2. Launch the guest with the overlay:
 #/usr/libexec/qemu-kvm -name linux -m 8192 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid 287294d8-7422-447d-adeb-fd6f71501a42 -nodefaults -monitor unix:/home/monitor,server,nowait -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot order=c,menu=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/home/overlay.qcow2,cache=none,if=none,id=drive-virtio-disk0,format=qcow2 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on -vnc :3 -vga cirrus
3. Run savevm three times:
 # time echo savevm|nc -U /home/monitor

Results:
(qemu) savevm
real    0m7.708s
(qemu) savevm
real    0m20.477s
(qemu) savevm
real    0m18.886s

Comment 8 Jeff Nelson 2015-12-16 19:36:41 UTC
Fix included in qemu-kvm-0.12.1.2-2.482.el6

Comment 10 Qianqian Zhu 2015-12-30 07:39:16 UTC
Verified:

kernel-2.6.32-590.el6.x86_64
qemu-img-0.12.1.2-2.482.el6.x86_64
qemu-kvm-0.12.1.2-2.482.el6.x86_64

Steps:
1. Create an overlay image backed by the base image:
 # qemu-img create -f qcow2 -b /home/RHEL-Server-6.7-64-virtio.qcow2 /home/overlay.qcow2
2. Launch the guest with the overlay:
 #/usr/libexec/qemu-kvm -name linux -cpu Westmere -m 8192 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid 287294d8-7422-447d-adeb-fd6f71501a42 -nodefaults -monitor unix:/home/monitor,server,nowait -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot order=c,menu=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/home/overlay.qcow2,cache=none,if=none,id=drive-virtio-disk0,format=qcow2 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on -vnc :3 -vga cirrus
3. Run savevm five times:
 # for((i=0;i<5;i++)) do time echo savevm|nc -U /home/monitor; done

Results:
[root@dell-per715-05 ~]# for((i=0;i<5;i++)) do time echo savevm|nc -U /home/monitor; done
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) savevm
(qemu) 
real	0m15.063s
user	0m0.000s
sys	0m0.004s
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) savevm
(qemu) 
real	0m12.819s
user	0m0.000s
sys	0m0.001s
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) savevm
(qemu) 
real	0m12.816s
user	0m0.000s
sys	0m0.002s
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) savevm
(qemu) 
real	0m12.806s
user	0m0.000s
sys	0m0.002s
QEMU 0.12.1 monitor - type 'help' for more information
(qemu) savevm
(qemu) 
real	0m12.801s
user	0m0.000s
sys	0m0.002s

Conclusion:
There is no significant slowdown for the second and subsequent snapshots, so the bug appears to be fixed.

Comment 13 errata-xmlrpc 2016-05-10 20:58:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0815.html

