Bug 1799010
| Summary: | incremental-backup: RFE: Handle backup bitmaps during virDomainBlockPull | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | Peter Krempa <pkrempa> |
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| Status: | CLOSED ERRATA | QA Contact: | yisun |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 8.2 | CC: | jdenemar, lmen, mtessun, virt-maint, xuzhang, ymankad |
| Target Milestone: | rc | Keywords: | FutureFeature, Triaged |
| Target Release: | 8.0 | Flags: | pm-rhel: mirror+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-6.6.0-1.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-11-17 17:46:36 UTC | Type: | Feature Request |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1207659 | | |
| Bug Blocks: | 1799015 | | |
Description
Peter Krempa
2020-02-06 13:20:59 UTC
Implemented upstream:
commit 7b2163c8bf53cb4877bbfe8483f908b0593f737f
Author: Peter Krempa <pkrempa>
Date: Fri Jun 26 15:29:34 2020 +0200
qemu: backup: integrate with blockpull
Merge the bitmaps when finalizing a block pull job so that backups work
properly afterwards.
https://bugzilla.redhat.com/show_bug.cgi?id=1799010
Signed-off-by: Peter Krempa <pkrempa>
Reviewed-by: Eric Blake <eblake>
Tested with: libvirt-6.6.0-2.module+el8.3.0+7567+dc41c0a9.x86_64
Result: PASS
========================================================
check bitmaps are merged when doing blockpull
========================================================
1. prepare a qcow2 image to test
[root@dell-per740xd-11 ~]# qemu-img create -f qcow2 /var/lib/libvirt/images/vdb.qcow2 100M
Formatting '/var/lib/libvirt/images/vdb.qcow2', fmt=qcow2 cluster_size=65536 compression_type=zlib size=104857600 lazy_refcounts=off refcount_bits=16
[root@dell-per740xd-11 ~]# virsh start vm1
Domain vm1 started
[root@dell-per740xd-11 ~]# virsh domblklist vm1
Target Source
--------------------------------------------------------
...
vdb /var/lib/libvirt/images/vdb.qcow2
2. create checkpoint_0 xml
[root@dell-per740xd-11 ~]# cat checkpoint_0.xml
<domaincheckpoint>
<disks>
<disk checkpoint="no" name="vda" />
<disk checkpoint="bitmap" name="vdb" />
</disks>
<name>cp0</name>
<description>cp0</description>
</domaincheckpoint>
[root@dell-per740xd-11 ~]# virsh checkpoint-create vm1 checkpoint_0.xml
Domain checkpoint cp0 created from 'checkpoint_0.xml'
3. get the hash value of bitmap cp0
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-1-format","name":"cp0"}}'
{"return":{"sha256":"6d9c54dee5660c46886f32d80e57e9dd0ffa57ee0cd2a762b036d9c8e0c3a33a"},"id":"libvirt-383"}
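The x-debug-block-dirty-bitmap-sha256 replies above are raw JSON; when scripting these comparisons, a small helper (hypothetical, not part of the original run; assumes python3 is on the host) can pull out just the hash:

```shell
# Hypothetical helper: extract the "sha256" field from a QMP reply so the
# bitmap hashes can be compared in scripts. Assumes python3 is available.
bitmap_sha256() {
  python3 -c 'import json,sys; print(json.load(sys.stdin)["return"]["sha256"])'
}

# Sample reply copied verbatim from the step above.
reply='{"return":{"sha256":"6d9c54dee5660c46886f32d80e57e9dd0ffa57ee0cd2a762b036d9c8e0c3a33a"},"id":"libvirt-383"}'
printf '%s' "$reply" | bitmap_sha256
```

In a live run, the reply would instead come from `virsh qemu-monitor-command vm1 --cmd '…'` piped into the helper.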
4. write 1MiB of data at offset 10MiB in vdb and get the hash value of bitmap cp0
IN VM:
[root@localhost ~]# dd if=/dev/urandom of=/dev/vdb bs=1M seek=10 count=1; sync
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210256 s, 49.9 MB/s
IN HOST:
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-1-format","name":"cp0"}}'
{"return":{"sha256":"ebc3a32a3860f015d7cb19bdfc5518c8fcfcf85cff0b09c75511ea9ca068aaf8"},"id":"libvirt-381"}
5. create disk-only snapshot_0
[root@dell-per740xd-11 ~]# virsh snapshot-create-as vm1 sp0 --disk-only
Domain snapshot sp0 created
In the libvirtd log, we can see that the node name for vdb.sp0 is libvirt-4-format
2020-09-08 08:38:46.015+0000: 249672: info : qemuMonitorIOWrite:433 : QEMU_MONITOR_IO_WRITE: mon=0x7f9da4037320 buf={"execute":"blockdev-create","arguments":{"job-id":"create-libvirt-4-format","options":{"driver":"qcow2","file":"libvirt-4-storage","size":104857600,"backing-file":"/var/lib/libvirt/images/vdb.qcow2","backing-fmt":"qcow2"}},"id":"libvirt-392"}
6. create checkpoint_1
[root@dell-per740xd-11 ~]# cat checkpoint_1.xml
<domaincheckpoint>
<disks>
<disk checkpoint="no" name="vda" />
<disk checkpoint="bitmap" name="vdb" />
</disks>
<name>cp1</name>
<description>cp1</description>
</domaincheckpoint>
[root@dell-per740xd-11 ~]# virsh checkpoint-create vm1 checkpoint_1.xml
Domain checkpoint cp1 created from 'checkpoint_1.xml'
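The checkpoint XMLs in steps 2, 6, and 9 differ only in the name and description; a templating helper (an assumption for illustration, not part of the original test run) could generate all of them:

```shell
# Hypothetical helper: emit the checkpoint XML used in this test for a given
# checkpoint name, since only <name>/<description> change between cp0-cp2.
make_checkpoint_xml() {
  cat <<EOF
<domaincheckpoint>
  <disks>
    <disk checkpoint="no" name="vda" />
    <disk checkpoint="bitmap" name="vdb" />
  </disks>
  <name>$1</name>
  <description>$1</description>
</domaincheckpoint>
EOF
}

make_checkpoint_xml cp1 > checkpoint_1.xml
```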
7. write 2MiB of data at offset 20MiB in vdb and get the hash value of bitmap cp1
IN VM:
[root@localhost ~]# dd if=/dev/urandom of=/dev/vdb bs=1M seek=20 count=2; sync
2+0 records in
2+0 records out
2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0284196 s, 73.8 MB/s
IN HOST:
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-4-format","name":"cp1"}}'
{"return":{"sha256":"a974226d477231a42607948568f993c837551308f6d61a3312e58b4163242381"},"id":"libvirt-395"}
8. create disk-only snapshot_1
[root@dell-per740xd-11 ~]# virsh snapshot-create-as vm1 sp1 --disk-only
Domain snapshot sp1 created
In the libvirtd log, we can see that the node name for vdb.sp1 is libvirt-6-format
2020-09-08 08:50:26.418+0000: 249672: info : qemuMonitorIOWrite:433 : QEMU_MONITOR_IO_WRITE: mon=0x7f9da4037320 buf={"execute":"blockdev-create","arguments":{"job-id":"create-libvirt-6-format","options":{"driver":"qcow2","file":"libvirt-6-storage","size":104857600,"backing-file":"/var/lib/libvirt/images/vdb.sp0","backing-fmt":"qcow2"}},"id":"libvirt-421"}
9. create checkpoint_2
[root@dell-per740xd-11 ~]# cat checkpoint_2.xml
<domaincheckpoint>
<disks>
<disk checkpoint="no" name="vda" />
<disk checkpoint="bitmap" name="vdb" />
</disks>
<name>cp2</name>
<description>cp2</description>
</domaincheckpoint>
[root@dell-per740xd-11 ~]# virsh checkpoint-create vm1 checkpoint_2.xml
Domain checkpoint cp2 created from 'checkpoint_2.xml'
10. write 3MiB of data at offset 30MiB in vdb and get the hash value of bitmap cp2
IN VM:
[root@localhost ~]# dd if=/dev/urandom of=/dev/vdb bs=1M seek=30 count=3; sync
3+0 records in
3+0 records out
3145728 bytes (3.1 MB, 3.0 MiB) copied, 0.0383979 s, 81.9 MB/s
IN HOST:
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-6-format","name":"cp2"}}'
{"return":{"sha256":"01f1873b1b22b39e76adb7686551e5e5dafc1d516dee781476d306767e740db2"},"id":"libvirt-409"}
11. current dirty bitmap info (collected with the same commands as in the steps above):
cp0 should be different in vdb.qcow2, vdb.sp0 and vdb.sp1
vdb.qcow2:
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-1-format","name":"cp0"}}'
{"return":{"sha256":"ebc3a32a3860f015d7cb19bdfc5518c8fcfcf85cff0b09c75511ea9ca068aaf8"},"id":"libvirt-410"}
vdb.sp0:
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-4-format","name":"cp0"}}'
{"return":{"sha256":"a974226d477231a42607948568f993c837551308f6d61a3312e58b4163242381"},"id":"libvirt-411"}
vdb.sp1:
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-6-format","name":"cp0"}}'
{"return":{"sha256":"01f1873b1b22b39e76adb7686551e5e5dafc1d516dee781476d306767e740db2"},"id":"libvirt-412"}
cp1 should be different in vdb.sp0 and vdb.sp1:
vdb.sp0:
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-4-format","name":"cp1"}}'
{"return":{"sha256":"a974226d477231a42607948568f993c837551308f6d61a3312e58b4163242381"},"id":"libvirt-413"}
vdb.sp1:
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-6-format","name":"cp1"}}'
{"return":{"sha256":"01f1873b1b22b39e76adb7686551e5e5dafc1d516dee781476d306767e740db2"},"id":"libvirt-414"}
cp2 should only exist in vdb.sp1:
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-6-format","name":"cp2"}}'
{"return":{"sha256":"01f1873b1b22b39e76adb7686551e5e5dafc1d516dee781476d306767e740db2"},"id":"libvirt-415"}
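The step-11 expectation for cp0 can be checked mechanically. The hashes below are copied from the replies above; the comparison script itself is a sketch, not part of the original run:

```shell
# Hashes of cp0 in each layer, copied from the step-11 replies.
cp0_base='ebc3a32a3860f015d7cb19bdfc5518c8fcfcf85cff0b09c75511ea9ca068aaf8'  # vdb.qcow2
cp0_sp0='a974226d477231a42607948568f993c837551308f6d61a3312e58b4163242381'   # vdb.sp0
cp0_sp1='01f1873b1b22b39e76adb7686551e5e5dafc1d516dee781476d306767e740db2'   # vdb.sp1

# cp0 must differ across all three layers, as step 11 requires.
[ "$cp0_base" != "$cp0_sp0" ] && [ "$cp0_sp0" != "$cp0_sp1" ] \
  && [ "$cp0_base" != "$cp0_sp1" ] && echo 'cp0 differs per layer: OK'
```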
12. block pull with --base vdb.sp0 and check the dirty bitmaps
Current disk xml:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/vdb.sp1' index='6'/>
<backingStore type='file' index='4'>
<format type='qcow2'/>
<source file='/var/lib/libvirt/images/vdb.sp0'/>
<backingStore type='file' index='1'>
<format type='qcow2'/>
<source file='/var/lib/libvirt/images/vdb.qcow2'/>
<backingStore/>
</backingStore>
</backingStore>
<target dev='vdb' bus='virtio'/>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</disk>
[root@dell-per740xd-11 ~]# virsh blockpull vm1 vdb --base /var/lib/libvirt/images/vdb.sp0 --wait --verbose
Block Pull: [100 %]
Pull complete
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-6-format","name":"cp2"}}'
{"return":{"sha256":"01f1873b1b22b39e76adb7686551e5e5dafc1d516dee781476d306767e740db2"},"id":"libvirt-425"}
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-6-format","name":"cp1"}}'
{"return":{"sha256":"01f1873b1b22b39e76adb7686551e5e5dafc1d516dee781476d306767e740db2"},"id":"libvirt-426"}
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-6-format","name":"cp0"}}'
{"return":{"sha256":"01f1873b1b22b39e76adb7686551e5e5dafc1d516dee781476d306767e740db2"},"id":"libvirt-427"}
<=== as expected: all hashes are unchanged, matching the vdb.sp1 values from step 11
13. block pull with --base vdb.qcow2 and check the dirty bitmaps
[root@dell-per740xd-11 ~]# virsh blockpull vm1 vdb --base /var/lib/libvirt/images/vdb.qcow2 --wait --verbose
Block Pull: [100 %]
Pull complete
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-6-format","name":"cp0"}}'
{"return":{"sha256":"50fd3ca0b5134aaf14ac9a459ad924910773efa392b7a8a695405268834134a9"},"id":"libvirt-437"}
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-6-format","name":"cp1"}}'
{"return":{"sha256":"50fd3ca0b5134aaf14ac9a459ad924910773efa392b7a8a695405268834134a9"},"id":"libvirt-438"}
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 --cmd '{"execute": "x-debug-block-dirty-bitmap-sha256","arguments": {"node":"libvirt-6-format","name":"cp2"}}'
{"return":{"sha256":"01f1873b1b22b39e76adb7686551e5e5dafc1d516dee781476d306767e740db2"},"id":"libvirt-440"}
<=== cp0's and cp1's dirty bitmaps were merged; they now hash identically because vdb.sp1 is backed directly by vdb.qcow2, as expected
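The final consistency check can be expressed as a sketch (hashes copied from the step-13 replies; the script is an assumption, not from the original run): after pulling down to vdb.qcow2, cp0 and cp1 should hash identically on the active layer, while cp2 keeps its pre-pull value.

```shell
# Post-pull hashes on node libvirt-6-format, copied from step 13.
cp0='50fd3ca0b5134aaf14ac9a459ad924910773efa392b7a8a695405268834134a9'
cp1='50fd3ca0b5134aaf14ac9a459ad924910773efa392b7a8a695405268834134a9'
cp2='01f1873b1b22b39e76adb7686551e5e5dafc1d516dee781476d306767e740db2'

# cp0 and cp1 now cover the same merged history; cp2 is untouched by the pull.
[ "$cp0" = "$cp1" ] && [ "$cp2" != "$cp0" ] && echo 'bitmap merge OK'
```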
================================================================================
Automation scripts to check that the full/incremental backup content is correct
================================================================================
Can be found in the corresponding test run result on our CI testing page for build libvirt-6.6.0-2.module+el8.3.0+7567+dc41c0a9.x86_64:
backing_chain.blockpull.base_to_top.original_disk_local.nbd_tcp.libvirt_snapshot.scratch_to_file
backing_chain.blockpull.base_to_top.original_disk_local.nbd_tcp.shutoff_snapshot.scratch_to_file
backing_chain.blockpull.mid_to_top.original_disk_local.nbd_tcp.libvirt_snapshot.scratch_to_file
backing_chain.blockpull.mid_to_top.original_disk_local.nbd_tcp.shutoff_snapshot.scratch_to_file
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (virt:8.3 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:5137