Bug 2015281
| Summary: | [incremental_backup] virDomainGetBlockInfo reports allocation=0 during backup pull mode | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | Nir Soffer <nsoffer> |
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| Status: | CLOSED ERRATA | QA Contact: | yisun |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 8.5 | CC: | aefrat, aliang, coli, ehadley, jdenemar, jsuchane, lmen, pkrempa, virt-maint, xuzhang, yisun |
| Target Milestone: | rc | Keywords: | Triaged |
| Target Release: | 8.5 | Flags: | pm-rhel: mirror+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-7.6.0-6.module+el8.5.0+13051+7ddbe958 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-11-16 07:55:33 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Nir Soffer
2021-10-18 19:28:20 UTC
Hi Nir, Did this also occur in RHEL8.4/earlier RHEL builds or was this introduced in RHEL8.5? Just in case we need to backport this fix to earlier RHEL/RHV builds.

(In reply to Avihai from comment #2)
> Did this also occur in RHEL8.4/earlier RHEL builds or was this introduced in
> RHEL8.5?
> Just in case we need to backport this fix to earlier RHEL/RHV builds.

I did not test, but you can test this easily, see bug 2015121.

The problem is that while the code is properly using the nodename to fetch the appropriate stat entry, the code that fetches the stats may skip recursing into the backing chain. This prevented the proper stats from being fetched when the filter layer is added (see the polling sketch after the commit listing below). To reproduce the issue it is important to use QCOW2 inside a logical volume.

Patches:
https://listman.redhat.com/archives/libvir-list/2021-October/msg00753.html

Note that only patch 1/8 is required to fix this particular issue. Patch 2 fixes a related issue; the rest is cleanup.

(In reply to Peter Krempa from comment #4)
> Patches:
> https://listman.redhat.com/archives/libvir-list/2021-October/msg00753.html

Only the first patch is needed to fix this bug, right? If you provide a scratch build we can test it quickly.

Patches pushed:
commit 65ff87c51c8d0f6729a6390c5bf5548d2682632f
Author: Peter Krempa <pkrempa>
Date: Tue Oct 19 15:12:36 2021 +0200
qemuDomainGetStatsBlock: Always fetch stats for the full backing chain
Similarly to the fix to 'qemuDomainBlocksStatsGather' we should be
always fetching the full backing chain so that we can avoid any
automatic filter nodes which would prevent us from fetching the stats
for the correct nodename.
Signed-off-by: Peter Krempa <pkrempa>
Reviewed-by: Michal Privoznik <mprivozn>
commit 579e05536b814052f2b9a962e3045d45cbe45f8f
Author: Peter Krempa <pkrempa>
Date: Tue Oct 19 14:32:09 2021 +0200
qemuDomainBlocksStatsGather: Always fetch stats for the full backing chain
In certain cases such as when running a backup blockjob qemu installs a
filter node between the frontend and the top node of the backend of the
disk. The stats gathering code didn't instruct the monitor code to fetch
the stats for all the layers, so since the top layer now doesn't have
stats we were reporting wrong stats such as allocation.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2015281
Signed-off-by: Peter Krempa <pkrempa>
Reviewed-by: Michal Privoznik <mprivozn>
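To make the affected call path concrete, here is a minimal libvirt-python sketch of the virDomainGetBlockInfo polling that a management client such as vdsm performs; the domain name 'vm1' and disk target 'sdb' match the reproducer below but are only illustrative, not part of the original report.

import libvirt

# Minimal sketch (assumed domain 'vm1', disk target 'sdb'): this is the
# virDomainGetBlockInfo call whose allocation value dropped to 0 while a
# pull-mode backup job had a filter node installed above the disk.
conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('vm1')

capacity, allocation, physical = dom.blockInfo('sdb')
print(f"capacity={capacity} allocation={allocation} physical={physical}")

conn.close()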
Tested with RHV:
1. Starting VM
virsh # list
Id Name State
------------------------------
1 backup-qcow2 running
virsh # domblklist 1
Target Source
---------------------------------------------------------------------------------------------------------------------------------------------------------------
sdc -
sda /rhev/data-center/mnt/blockSD/feab3738-c158-4d48-8a41-b5a95c057a50/images/242c082f-ca51-4cf4-82b9-fd3a3c92f38b/44d498a1-54a5-4371-8eda-02d839d7c840
sdb /rhev/data-center/mnt/blockSD/feab3738-c158-4d48-8a41-b5a95c057a50/images/72c7acc2-0eac-4474-b148-70eccb249ba7/defe6059-4c3f-4361-b4aa-3d6cff7c6c77
virsh # domblkinfo 1 sdb
Capacity: 21474836480
Allocation: 0
Physical: 1073741824
2. Writing data in guest
# dd if=/dev/zero bs=1M count=128 seek=0 of=/dev/sdb oflag=direct
128+0 records in
128+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 0.418928 s, 320 MB/s
virsh # domblkinfo 1 sdb
Capacity: 21474836480
Allocation: 134545408
Physical: 1073741824
3. Started backup in RHV
$ ./backup_vm.py -c engine-dev start 6e95d38f-d9b8-4955-878c-da6d631d0ab2
[ 0.0 ] Starting full backup for VM '6e95d38f-d9b8-4955-878c-da6d631d0ab2'
[ 0.4 ] Waiting until backup '7cb4d527-97b7-471a-b421-f816c4b5f856' is ready
[ 17.7 ] Created checkpoint '6a8ab379-138d-4f56-9d3b-097d495ac01f' (to use in --from-checkpoint-uuid for the next incremental backup)
[ 17.7 ] Backup '7cb4d527-97b7-471a-b421-f816c4b5f856' is ready
virsh # domblkinfo 1 sdb
Capacity: 21474836480
Allocation: 134545408
Physical: 1073741824
Allocation is reported during the backup.
4. Writing more data in guest
# dd if=/dev/zero bs=1M count=128 seek=128 of=/dev/sdb oflag=direct
128+0 records in
128+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 0.464765 s, 289 MB/s
virsh # domblkinfo 1 sdb
Capacity: 21474836480
Allocation: 268763136
Physical: 1073741824
# dd if=/dev/zero bs=1M count=128 seek=256 of=/dev/sdb oflag=direct
128+0 records in
128+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 0.657987 s, 204 MB/s
virsh # domblkinfo 1 sdb
Capacity: 21474836480
Allocation: 402980864
Physical: 1073741824
5. Filling entire disk while downloading full backup
Starting write in guest...
# dd if=/dev/zero bs=1M count=20480 of=/dev/sdb oflag=direct status=progress
21471690752 bytes (21 GB, 20 GiB) copied, 92 s, 233 MB/s
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 92.014 s, 233 MB/s
Downloading disks...
$ ./backup_vm.py -c engine-dev download --backup-uuid 7cb4d527-97b7-471a-b421-f816c4b5f856 --backup-dir /data/scratch/nsoffer/backups/ 6e95d38f-d9b8-4955-878c-da6d631d0ab2
[ 0.0 ] Downloading VM '6e95d38f-d9b8-4955-878c-da6d631d0ab2' disks
[ 0.2 ] Downloading full backup for disk '242c082f-ca51-4cf4-82b9-fd3a3c92f38b'
[ 1.5 ] Image transfer 'c4fe7203-5fb2-42a7-bb1f-66213137433f' is ready
[ 100.00% ] 6.00 GiB, 5.77 seconds, 1.04 GiB/s
[ 7.2 ] Finalizing image transfer
[ 8.3 ] Download completed successfully
[ 8.3 ] Downloading full backup for disk '72c7acc2-0eac-4474-b148-70eccb249ba7'
[ 9.4 ] Image transfer '1696fd80-4bc8-4e2a-bdb2-d4d62ff8f89e' is ready
[ 100.00% ] 20.00 GiB, 0.47 seconds, 42.57 GiB/s
[ 9.9 ] Finalizing image transfer
[ 10.9 ] Download completed successfully
[ 10.9 ] Finished downloading disks
Checking allocation during backup:
virsh # domblkinfo 1 sdb
Capacity: 21474836480
Allocation: 9533259776
Physical: 9663676416
In vdsm log, we got allocation info during the backup:
# grep 'Extension info' vdsm.log
2021-10-20 16:15:09,692+0300 DEBUG (libvirt/events) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=1074135040, physical=1073741824) (vm:1277)
2021-10-20 16:15:27,598+0300 DEBUG (periodic/2) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=1841823744, physical=2147483648) (vm:1277)
2021-10-20 16:15:28,930+0300 DEBUG (libvirt/events) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=2148073472, physical=2147483648) (vm:1277)
2021-10-20 16:15:33,593+0300 DEBUG (periodic/2) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=3054174208, physical=3221225472) (vm:1277)
2021-10-20 16:15:35,640+0300 DEBUG (periodic/0) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=3182100480, physical=3221225472) (vm:1277)
2021-10-20 16:15:41,606+0300 DEBUG (periodic/2) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=3802988544, physical=4294967296) (vm:1277)
2021-10-20 16:15:43,607+0300 DEBUG (periodic/0) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=4284284928, physical=4294967296) (vm:1277)
2021-10-20 16:15:43,927+0300 DEBUG (libvirt/events) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=4295884800, physical=4294967296) (vm:1277)
2021-10-20 16:15:47,600+0300 DEBUG (periodic/2) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=5173673984, physical=5368709120) (vm:1277)
2021-10-20 16:15:48,583+0300 DEBUG (libvirt/events) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=5369757696, physical=5368709120) (vm:1277)
2021-10-20 16:15:53,604+0300 DEBUG (periodic/2) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=6222381056, physical=6442450944) (vm:1277)
2021-10-20 16:15:54,681+0300 DEBUG (libvirt/events) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=6442647552, physical=6442450944) (vm:1277)
2021-10-20 16:15:59,747+0300 DEBUG (periodic/2) [virt.vm] (vmId='6e95d38f-d9b8-4955-878c-da6d631d0ab2') Extension info for drive sdb volume defe6059-4c3f-4361-b4aa-3d6cff7c6c77: BlockInfo(capacity=21474836480, allocation=7516520448, physical=7516192768) (vm:1277)
6. Checking allocation after backup was completed
virsh # domblkinfo 1 sdb
Capacity: 21474836480
Allocation: 21479489536
Physical: 22548578304
Looking at the libvirt log, this is what query-blockstats returns now.
The transaction starting the backup job:
{
"execute": "transaction",
"arguments": {
"actions": [
{
"type": "block-dirty-bitmap-add",
"data": {
"node": "libvirt-2-format",
"name": "6a8ab379-138d-4f56-9d3b-097d495ac01f",
"persistent": true,
"disabled": false
}
},
{
"type": "block-dirty-bitmap-add",
"data": {
"node": "libvirt-1-format",
"name": "6a8ab379-138d-4f56-9d3b-097d495ac01f",
"persistent": true,
"disabled": false
}
},
{
"type": "blockdev-backup",
"data": {
"device": "libvirt-2-format",
"job-id": "backup-sda-libvirt-2-format",
"target": "libvirt-5-format",
"sync": "none",
"auto-finalize": true,
"auto-dismiss": false
}
},
{
"type": "blockdev-backup",
"data": {
"device": "libvirt-1-format",
"job-id": "backup-sdb-libvirt-1-format",
"target": "libvirt-6-format",
"sync": "none",
"auto-finalize": true,
"auto-dismiss": false
}
}
]
},
"id": "libvirt-461"
}
query-blockstats result for libvirt-2-format:
{
"device": "",
"parent": {
"parent": {
"stats": {
"unmap_operations": 0,
"unmap_merged": 0,
"flush_total_time_ns": 0,
"wr_highest_offset": 202702848,
"wr_total_time_ns": 0,
"failed_wr_operations": 0,
"failed_rd_operations": 0,
"wr_merged": 0,
"wr_bytes": 0,
"timed_stats": [],
"failed_unmap_operations": 0,
"failed_flush_operations": 0,
"account_invalid": false,
"rd_total_time_ns": 0,
"invalid_unmap_operations": 0,
"flush_operations": 0,
"wr_operations": 0,
"unmap_bytes": 0,
"rd_merged": 0,
"rd_bytes": 0,
"unmap_total_time_ns": 0,
"invalid_flush_operations": 0,
"account_failed": false,
"rd_operations": 0,
"invalid_wr_operations": 0,
"invalid_rd_operations": 0
},
"node-name": "libvirt-2-storage",
"driver-specific": {
"discard-bytes-ok": 0,
"discard-nb-ok": 0,
"driver": "host_device",
"discard-nb-failed": 0
}
},
"stats": {
"unmap_operations": 0,
"unmap_merged": 0,
"flush_total_time_ns": 0,
"wr_highest_offset": 5574623232,
"wr_total_time_ns": 0,
"failed_wr_operations": 0,
"failed_rd_operations": 0,
"wr_merged": 0,
"wr_bytes": 0,
"timed_stats": [],
"failed_unmap_operations": 0,
"failed_flush_operations": 0,
"account_invalid": false,
"rd_total_time_ns": 0,
"invalid_unmap_operations": 0,
"flush_operations": 0,
"wr_operations": 0,
"unmap_bytes": 0,
"rd_merged": 0,
"rd_bytes": 0,
"unmap_total_time_ns": 0,
"invalid_flush_operations": 0,
"account_failed": false,
"rd_operations": 0,
"invalid_wr_operations": 0,
"invalid_rd_operations": 0
},
"backing": {
"parent": {
"stats": {
"unmap_operations": 0,
"unmap_merged": 0,
"flush_total_time_ns": 0,
"wr_highest_offset": 0,
"wr_total_time_ns": 0,
"failed_wr_operations": 0,
"failed_rd_operations": 0,
"wr_merged": 0,
"wr_bytes": 0,
"timed_stats": [],
"failed_unmap_operations": 0,
"failed_flush_operations": 0,
"account_invalid": false,
"rd_total_time_ns": 0,
"invalid_unmap_operations": 0,
"flush_operations": 0,
"wr_operations": 0,
"unmap_bytes": 0,
"rd_merged": 0,
"rd_bytes": 0,
"unmap_total_time_ns": 0,
"invalid_flush_operations": 0,
"account_failed": false,
"rd_operations": 0,
"invalid_wr_operations": 0,
"invalid_rd_operations": 0
},
"node-name": "libvirt-4-storage",
"driver-specific": {
"discard-bytes-ok": 0,
"discard-nb-ok": 0,
"driver": "host_device",
"discard-nb-failed": 0
}
},
"stats": {
"unmap_operations": 0,
"unmap_merged": 0,
"flush_total_time_ns": 0,
"wr_highest_offset": 0,
"wr_total_time_ns": 0,
"failed_wr_operations": 0,
"failed_rd_operations": 0,
"wr_merged": 0,
"wr_bytes": 0,
"timed_stats": [],
"failed_unmap_operations": 0,
"failed_flush_operations": 0,
"account_invalid": false,
"rd_total_time_ns": 0,
"invalid_unmap_operations": 0,
"flush_operations": 0,
"wr_operations": 0,
"unmap_bytes": 0,
"rd_merged": 0,
"rd_bytes": 0,
"unmap_total_time_ns": 0,
"invalid_flush_operations": 0,
"account_failed": false,
"rd_operations": 0,
"invalid_wr_operations": 0,
"invalid_rd_operations": 0
},
"node-name": "libvirt-4-format"
},
"node-name": "libvirt-2-format"
},
"stats": {
"unmap_operations": 0,
"unmap_merged": 0,
"flush_total_time_ns": 406288403,
"wr_highest_offset": 4091357184,
"wr_total_time_ns": 11842122676,
"failed_wr_operations": 0,
"failed_rd_operations": 0,
"wr_merged": 0,
"wr_bytes": 43664384,
"timed_stats": [],
"failed_unmap_operations": 0,
"failed_flush_operations": 0,
"account_invalid": false,
"rd_total_time_ns": 2593899216,
"invalid_unmap_operations": 0,
"flush_operations": 99,
"wr_operations": 1055,
"unmap_bytes": 0,
"rd_merged": 0,
"rd_bytes": 152744960,
"unmap_total_time_ns": 0,
"invalid_flush_operations": 0,
"account_failed": false,
"idle_time_ns": 16085462183,
"rd_operations": 7184,
"invalid_wr_operations": 0,
"invalid_rd_operations": 0
},
"backing": {
"parent": {
"stats": {
"unmap_operations": 0,
"unmap_merged": 0,
"flush_total_time_ns": 0,
"wr_highest_offset": 202702848,
"wr_total_time_ns": 0,
"failed_wr_operations": 0,
"failed_rd_operations": 0,
"wr_merged": 0,
"wr_bytes": 0,
"timed_stats": [],
"failed_unmap_operations": 0,
"failed_flush_operations": 0,
"account_invalid": false,
"rd_total_time_ns": 0,
"invalid_unmap_operations": 0,
"flush_operations": 0,
"wr_operations": 0,
"unmap_bytes": 0,
"rd_merged": 0,
"rd_bytes": 0,
"unmap_total_time_ns": 0,
"invalid_flush_operations": 0,
"account_failed": false,
"rd_operations": 0,
"invalid_wr_operations": 0,
"invalid_rd_operations": 0
},
"node-name": "libvirt-2-storage",
"driver-specific": {
"discard-bytes-ok": 0,
"discard-nb-ok": 0,
"driver": "host_device",
"discard-nb-failed": 0
}
},
"stats": {
"unmap_operations": 0,
"unmap_merged": 0,
"flush_total_time_ns": 0,
"wr_highest_offset": 5574623232,
"wr_total_time_ns": 0,
"failed_wr_operations": 0,
"failed_rd_operations": 0,
"wr_merged": 0,
"wr_bytes": 0,
"timed_stats": [],
"failed_unmap_operations": 0,
"failed_flush_operations": 0,
"account_invalid": false,
"rd_total_time_ns": 0,
"invalid_unmap_operations": 0,
"flush_operations": 0,
"wr_operations": 0,
"unmap_bytes": 0,
"rd_merged": 0,
"rd_bytes": 0,
"unmap_total_time_ns": 0,
"invalid_flush_operations": 0,
"account_failed": false,
"rd_operations": 0,
"invalid_wr_operations": 0,
"invalid_rd_operations": 0
},
"backing": {
"parent": {
"stats": {
"unmap_operations": 0,
"unmap_merged": 0,
"flush_total_time_ns": 0,
"wr_highest_offset": 0,
"wr_total_time_ns": 0,
"failed_wr_operations": 0,
"failed_rd_operations": 0,
"wr_merged": 0,
"wr_bytes": 0,
"timed_stats": [],
"failed_unmap_operations": 0,
"failed_flush_operations": 0,
"account_invalid": false,
"rd_total_time_ns": 0,
"invalid_unmap_operations": 0,
"flush_operations": 0,
"wr_operations": 0,
"unmap_bytes": 0,
"rd_merged": 0,
"rd_bytes": 0,
"unmap_total_time_ns": 0,
"invalid_flush_operations": 0,
"account_failed": false,
"rd_operations": 0,
"invalid_wr_operations": 0,
"invalid_rd_operations": 0
},
"node-name": "libvirt-4-storage",
"driver-specific": {
"discard-bytes-ok": 0,
"discard-nb-ok": 0,
"driver": "host_device",
"discard-nb-failed": 0
}
},
"stats": {
"unmap_operations": 0,
"unmap_merged": 0,
"flush_total_time_ns": 0,
"wr_highest_offset": 0,
"wr_total_time_ns": 0,
"failed_wr_operations": 0,
"failed_rd_operations": 0,
"wr_merged": 0,
"wr_bytes": 0,
"timed_stats": [],
"failed_unmap_operations": 0,
"failed_flush_operations": 0,
"account_invalid": false,
"rd_total_time_ns": 0,
"invalid_unmap_operations": 0,
"flush_operations": 0,
"wr_operations": 0,
"unmap_bytes": 0,
"rd_merged": 0,
"rd_bytes": 0,
"unmap_total_time_ns": 0,
"invalid_flush_operations": 0,
"account_failed": false,
"rd_operations": 0,
"invalid_wr_operations": 0,
"invalid_rd_operations": 0
},
"node-name": "libvirt-4-format"
},
"node-name": "libvirt-2-format"
},
"node-name": "#block035",
"qdev": "ua-242c082f-ca51-4cf4-82b9-fd3a3c92f38b"
},
Looks good!

It seems that the response is much bigger, and we already had the data in the response even without this change.

From RHV's point of view, we only need the allocation for 2 disks: two 64-bit integers, yet libvirt fetches 12k of JSON from qemu.

It would be better if we had an API to query only the block threshold for one or all of the nodes on which a block threshold was set.
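For reference, the interface that exists today in this area is virDomainSetBlockThreshold plus the BLOCK_THRESHOLD event rather than a query API; below is a minimal libvirt-python sketch, with the domain 'vm1', device 'sdb', and the 512 MiB threshold chosen purely for illustration.

import threading
import time
import libvirt

# Minimal sketch (assumed domain 'vm1', disk target 'sdb', arbitrary
# 512 MiB threshold): arm a block threshold and wait for the event,
# the event-driven alternative to polling domblkinfo.
libvirt.virEventRegisterDefaultImpl()

def run_event_loop():
    while True:
        libvirt.virEventRunDefaultImpl()

threading.Thread(target=run_event_loop, daemon=True).start()

def on_threshold(conn, dom, dev, path, threshold, excess, opaque):
    print(f"{dom.name()}: {dev} crossed threshold={threshold}, excess={excess}")

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('vm1')
conn.domainEventRegisterAny(
    dom, libvirt.VIR_DOMAIN_EVENT_ID_BLOCK_THRESHOLD, on_threshold, None)

# The event fires once the guest writes past this offset in 'sdb'.
dom.setBlockThreshold('sdb', 512 * 1024 * 1024)

# Keep the process alive briefly so the event can be delivered (sketch only).
time.sleep(60)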
====================================================================
REPRODUCE WITH: libvirt-7.6.0-5.module+el8.5.0+12933+58cb48a1.x86_64
====================================================================
1. prepare backup xml
[root@dell-per730-39 ~]# cat pull_backup.xml
<domainbackup mode='pull'>
<server transport='unix' socket='/tmp/bkup.socket'/>
<disks>
<disk name='vda' backup='yes' type='file' backupmode='full' exportname='vda'>
<driver type='qcow2'/>
<scratch file='/var/lib/libvirt/images/vda.bkup'/>
</disk>
<disk name='sdb' backup='yes' type='file' backupmode='full' exportname='sdb'>
<driver type='qcow2'/>
<scratch file='/var/lib/libvirt/images/vdb.bkup'/>
</disk>
</disks>
</domainbackup>
2. prepare a block device; here we'll use /dev/sdb, which is an iSCSI LUN
[root@dell-per730-39 ~]# lsscsi
[0:2:0:0] disk DELL PERC H730 Mini 4.25 /dev/sda
[12:0:0:0] disk LIO-ORG device.logical- 4.0 /dev/sdc
[13:0:0:0] disk LIO-ORG device.logical- 4.0 /dev/sdb
[root@dell-per730-39 ~]# qemu-img create -f qcow2 /dev/sdb 1G
Formatting '/dev/sdb', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=1073741824 lazy_refcounts=off refcount_bits=16
3. start vm
[root@dell-per730-39 ~]# virsh start vm1
Domain 'vm1' started
[root@dell-per730-39 ~]# virsh domblklist vm1
Target Source
---------------------------------------------
vda /var/lib/libvirt/images/vda.qcow2
sdb /dev/sdb
4. check the allocation of the VM's sdb; the disk is empty so it should be '0'
[root@dell-per730-39 ~]# virsh domblkinfo vm1 sdb
Capacity: 1073741824
Allocation: 0
Physical: 1048576000
5. in vm, generate some data in vm's sdb (it's /dev/sda in vm's os)
[root@localhost ~]# dd if=/dev/urandom of=/dev/sda bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.18255 s, 177 MB/s
6. check sdb's allocation, it's correct now 210042880
[root@dell-per730-39 ~]# virsh domblkinfo vm1 sdb
Capacity: 1073741824
Allocation: 210042880
Physical: 1048576000
7. start the pull mode backup job
[root@dell-per730-39 ~]# virsh backup-begin vm1 pull_backup.xml
Backup started
8. check sdb's allocation; it has now dropped to '0', which is the problem
[root@dell-per730-39 ~]# virsh domblkinfo vm1 sdb
Capacity: 1073741824
Allocation: 0
Physical: 1048576000
9. abort the backup job; the data is correct again
[root@dell-per730-39 ~]# virsh domjobabort vm1
[root@dell-per730-39 ~]# virsh domblkinfo vm1 sdb
Capacity: 1073741824
Allocation: 210042880
Physical: 1048576000
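The same check can be scripted against the libvirt API; a minimal sketch, assuming the running domain 'vm1', disk target 'sdb', and the pull_backup.xml above (illustrative only, not how QA verified the bug).

import libvirt

# Minimal sketch (assumed domain 'vm1', disk target 'sdb', and the
# pull_backup.xml shown above): on an affected build the allocation
# drops to 0 while the pull-mode backup job is running.
conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('vm1')

def allocation():
    capacity, alloc, physical = dom.blockInfo('sdb')
    return alloc

before = allocation()

with open('pull_backup.xml') as f:
    dom.backupBegin(f.read(), None)   # start the pull-mode backup job

during = allocation()
dom.abortJob()                        # stop the backup job again

print(f"before={before} during={during}")
if before > 0 and during == 0:
    print("BUG reproduced: allocation reported as 0 during pull-mode backup")

conn.close()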
=====================================================================
VERIFIED WITH SCRATCH BUILD: libvirt-7.6.0-6.el8_rc.e6185cc768.x86_64
=====================================================================
1. prepare backup xml
[root@dell-per730-39 ~]# cat pull_backup.xml
<domainbackup mode='pull'>
<server transport='unix' socket='/tmp/bkup.socket'/>
<disks>
<disk name='vda' backup='yes' type='file' backupmode='full' exportname='vda'>
<driver type='qcow2'/>
<scratch file='/var/lib/libvirt/images/vda.bkup'/>
</disk>
<disk name='sdb' backup='yes' type='file' backupmode='full' exportname='sdb'>
<driver type='qcow2'/>
<scratch file='/var/lib/libvirt/images/vdb.bkup'/>
</disk>
</disks>
</domainbackup>
2. start the VM; here the VM's sdb is the host's /dev/sdc, an iSCSI LUN prepared the same way
[root@dell-per730-39 ~]# virsh start vm1
Domain 'vm1' started
[root@dell-per730-39 ~]# virsh domblklist vm1
Target Source
---------------------------------------------
vda /var/lib/libvirt/images/vda.qcow2
sdb /dev/sdc
3. check the vm's sdb allocation
[root@dell-per730-39 ~]# virsh domblkinfo vm1 sdb
Capacity: 1073741824
Allocation: 0
Physical: 1048576000
4. generate some data to vm's sdb (in vm's os it's /dev/sda)
[root@localhost ~]# dd if=/dev/urandom of=/dev/sda bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.16155 s, 181 MB/s
5. check the allocation data, it's now 210042880
[root@dell-per730-39 ~]# virsh domblkinfo vm1 sdb
Capacity: 1073741824
Allocation: 210042880
Physical: 1048576000
6. start the backup job
[root@dell-per730-39 ~]# virsh backup-begin vm1 pull_backup.xml
Backup started
7. check the allocation of the VM's sdb while the backup job is still running; it's correct
[root@dell-per730-39 ~]# virsh domblkinfo vm1 sdb
Capacity: 1073741824
Allocation: 210042880
Physical: 1048576000
8. generate more data in vm os to /dev/sda
[root@localhost ~]# dd if=/dev/urandom of=/dev/sda bs=1M count=200 seek=300
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.15912 s, 181 MB/s
9. the allocation rose as expected
[root@dell-per730-39 ~]# virsh domblkinfo vm1 sdb
Capacity: 1073741824
Allocation: 419758080
Physical: 1048576000
10. fill the whole disk in vm
[root@localhost ~]# dd if=/dev/urandom of=/dev/sda bs=1M count=1100
dd: error writing '/dev/sda': No space left on device
11. allocation data is right
[root@dell-per730-39 ~]# virsh domblkinfo vm1 sdb
Capacity: 1073741824
Allocation: 1048903680
Physical: 1048576000
12. the VM paused when the error occurred; this is expected
[root@dell-per730-39 ~]# virsh list
Id Name State
---------------------
2 vm1 paused
Verified with libvirt-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64, using the same steps as comment 12.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4684