Bug 2017928
Summary: | [incremental_backup] Expose scratch disk allocation (wr_highest_offset) in the API | |
---|---|---|---
Product: | Red Hat Enterprise Linux 8 | Reporter: | Nir Soffer <nsoffer>
Component: | libvirt | Assignee: | Peter Krempa <pkrempa>
Status: | CLOSED ERRATA | QA Contact: | yisun
Severity: | high | Priority: | unspecified
Version: | 8.5 | CC: | ahadas, jdenemar, jsuchane, lmen, pkrempa, virt-maint, xuzhang, Yury.Panchenko
Target Milestone: | rc | Keywords: | Triaged
Target Release: | 8.5 | Target Upstream Version: | 7.10.0
Hardware: | Unspecified | OS: | Unspecified
Fixed In Version: | libvirt-7.10.0-1.module+el8.6.0+13502+4f24a11d | Doc Type: | If docs needed, set a value
Last Closed: | 2022-05-10 13:21:40 UTC | Type: | Bug
Bug Blocks: | 1913387 | |
Description Nir Soffer 2021-10-27 18:01:12 UTC
Reproduce steps:

1. Prepare a backup xml

[root@dell-per730-59 ~]# cat backup.xml
<domainbackup mode='pull'>
  <server transport='unix' socket='/tmp/bkup.socket'/>
  <disks>
    <disk name='vda' backup='no'/>
    <disk name='vdb' backup='yes' type='block' backupmode='full' exportname='vdb'>
      <driver type='qcow2'/>
      <scratch dev='/dev/sdd'/>
    </disk>
  </disks>
</domainbackup>

2. Start a vm with two disks, vda and vdb; here they have existing external snapshots

[root@dell-per730-59 ~]# virsh domstate vm1
running

[root@dell-per730-59 ~]# virsh dumpxml vm1 | awk '/<disk/,/<\/disk/'
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/jeos-27-x86_64.snap1' index='3'/>
  <backingStore type='file' index='2'>
    <format type='qcow2'/>
    <source file='/var/lib/libvirt/images/jeos-27-x86_64.qcow2'/>
    <backingStore/>
  </backingStore>
  <target dev='vda' bus='virtio'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/vdb.snap1' index='4'/>
  <backingStore type='file' index='1'>
    <format type='qcow2'/>
    <source file='/var/lib/libvirt/images/vdb.qcow2'/>
    <backingStore/>
  </backingStore>
  <target dev='vdb' bus='virtio'/>
  <alias name='virtio-disk1'/>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</disk>

3. Start the backup

[root@dell-per730-59 ~]# virsh backup-begin vm1 backup.xml
Backup started

4. Check the scratch info

[root@dell-per730-59 ~]# virsh backup-dumpxml vm1 | awk '/<domainbackup/,/<\/domainbackup/'
<domainbackup mode='pull'>
  <server transport='unix' socket='/tmp/bkup.socket'/>
  <disks>
    <disk name='vda' backup='no'/>
    <disk name='vdb' backup='yes' type='block' backupmode='full' exportname='vdb' index='6'>
      <driver type='qcow2'/>
      <scratch dev='/dev/sdd'/>
    </disk>
  </disks>
</domainbackup>

5. The scratch device info is not exposed by domblkinfo or domstats

[root@dell-per730-59 ~]# virsh domblkinfo vm1 /dev/sdd
error: invalid argument: invalid path /dev/sdd not assigned to domain

[root@dell-per730-59 ~]# virsh domstats vm1 --backing | grep /dev/sdd | wc -l
0
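For reference, the same backup job can also be driven through the libvirt API instead of virsh. A minimal sketch, assuming the libvirt-python bindings, a libvirt new enough to provide virDomainBackupBegin, and the backup.xml from step 1:

    import libvirt

    # The pull-mode backup definition from step 1, passed as a string.
    backup_xml = open('backup.xml').read()

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('vm1')

    # Equivalent of 'virsh backup-begin vm1 backup.xml'; the second
    # argument (an optional checkpoint XML) is not used here.
    dom.backupBegin(backup_xml, None)

    # Equivalent of 'virsh backup-dumpxml vm1'.
    print(dom.backupGetXMLDesc())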
hi Peter,
If we'll provide such info in domblkinfo, will the external snapshots' backend nodes be exposed at the same time? I am just curious whether there would be a side effect of exposing other kinds of nodes that needs to be considered and tested.

For example, right now we can only get info for the outermost layer from virsh domblkinfo:

1. vm's xml

[root@dell-per730-59 ~]# virsh dumpxml vm1 | awk '/<disk/,/<\/disk/'
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/jeos-27-x86_64.snap1' index='3'/>
  <backingStore type='file' index='2'>
    <format type='qcow2'/>
    <source file='/var/lib/libvirt/images/jeos-27-x86_64.qcow2'/>
    <backingStore/>
  </backingStore>
  <target dev='vda' bus='virtio'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/vdb.snap1' index='4'/>
  <backingStore type='file' index='1'>
    <format type='qcow2'/>
    <source file='/var/lib/libvirt/images/vdb.qcow2'/>
    <backingStore/>
  </backingStore>
  <target dev='vdb' bus='virtio'/>
  <alias name='virtio-disk1'/>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</disk>

2. Can get info for the snap1 node, but cannot get backing node info

[root@dell-per730-59 ~]# virsh domblkinfo vm1 /var/lib/libvirt/images/jeos-27-x86_64.snap1
Capacity:       10737418240
Allocation:     2101248
Physical:       1769472

[root@dell-per730-59 ~]# virsh domblkinfo vm1 /var/lib/libvirt/images/jeos-27-x86_64.qcow2
error: invalid argument: invalid path /var/lib/libvirt/images/jeos-27-x86_64.qcow2 not assigned to domain

(In reply to yisun from comment #2)
> hi Peter,
> If we'll provide such info in domblkinfo, will the external snapshots'
> backend nodes be exposed at the same time? I am just curious whether there
> would be a side effect of exposing other kinds of nodes that needs to be
> considered and tested.

No, the new data will be exported via the bulk-stats API (virsh domstats). The old APIs will not be extended.

(In reply to Nir Soffer from comment #0)
[...]
> I think what we need to expose the scratch disk in:
> - virDomainGetBlockInfo
> - virConnectGetAllDomainStats
> - virDomainListGetStats
>
> The most important API for us is virDomainGetBlockInfo since it returns
> what we need, and it is much easier to use.

Unfortunately that's a legacy API and thus it will _not_ be extended any more. You'll need to fetch the stats from the bulk stats API virConnectGetAllDomainStats/virDomainListGetStats (same backend). It also solves one more of your complaints: if you are querying multiple disks, the JSON data from qemu is fetched only once.

(In reply to Peter Krempa from comment #5)
> > I think what we need to expose the scratch disk in:
> > - virDomainGetBlockInfo
> > - virConnectGetAllDomainStats
> > - virDomainListGetStats
> >
> > The most important API for us is virDomainGetBlockInfo since it returns
> > what we need, and it is much easier to use.
>
> Unfortunately that's a legacy API and thus it will _not_ be extended any
> more. You'll need to fetch the stats from the bulk stats API
> virConnectGetAllDomainStats/virDomainListGetStats (same backend). It also
> solves one more of your complaints: if you are querying multiple disks,
> the JSON data from qemu is fetched only once.

Thanks, this works for us.

Patches adding the backup disk stats to 'virsh domstats':
https://listman.redhat.com/archives/libvir-list/2021-November/msg00023.html
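The bulk-stats lookup described above can also be done programmatically. A minimal sketch, assuming the libvirt-python bindings and a domain 'vm1' with a backup job running; the 'block.N.*' keys are the same ones printed by 'virsh domstats --block --backing':

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('vm1')

    # virDomainListGetStats with the BACKING flag; this is the same
    # backend as virConnectGetAllDomainStats, and the JSON data from
    # qemu is fetched only once for all disks.
    records = conn.domainListGetStats(
        [dom],
        stats=libvirt.VIR_DOMAIN_STATS_BLOCK,
        flags=libvirt.VIR_CONNECT_GET_ALL_DOMAINS_STATS_BACKING)

    stats = records[0][1]  # each record is a (domain, dict) tuple
    for i in range(stats['block.count']):
        # With the fix, the scratch image appears as an extra block.N
        # entry whose path is the scratch device (e.g. /dev/sdd above).
        print(stats.get('block.%d.name' % i),
              stats.get('block.%d.path' % i),
              stats.get('block.%d.allocation' % i))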
Tested with a scratch build; the result is as expected.

1. Prepare a vm; here we'll use its vdb as the test disk

[root@dell-per740xd-27 ~]# virsh domblklist vm1
 Target   Source
--------------------------------------------------------
 vda      /var/lib/libvirt/images/jeos-27-x86_64.qcow2
 vdb      /var/lib/libvirt/images/vdb.qcow2

2. In the vm, write 200mb of data to the test disk:

[root@localhost ~]# dd if=/dev/urandom of=/dev/vdb bs=1M count=200; sync
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.298 s, 162 MB/s

3. Prepare a pull-mode backup xml

[root@dell-per740xd-27 ~]# cat backup.xml
<domainbackup mode='pull'>
  <server transport='unix' socket='/tmp/bkup.socket'/>
  <disks>
    <disk name='vda' backup='no'/>
    <disk name='vdb' backup='yes' type='block' backupmode='full' exportname='vdb'>
      <driver type='qcow2'/>
      <scratch dev='/dev/sdb'/>
    </disk>
  </disks>
</domainbackup>

4. Start the backup job

[root@dell-per740xd-27 ~]# virsh backup-begin vm1 backup.xml
Backup started

5. Check that the usage info for the scratch device is displayed

[root@dell-per740xd-27 ~]# virsh domstats vm1 --block --backing
Domain: 'vm1'
block.count=3
...
block.2.name=vdb
block.2.path=/dev/sdb
block.2.backingIndex=3
block.2.allocation=196624
block.2.capacity=1073741824
block.2.physical=1048576000

6. In terminal 2, add an event watcher for the scratch file (an API-level sketch of this watcher follows step 10)

[root@dell-per740xd-27 ~]# virsh domblkthreshold vm1 vdb[3] 100000000
[root@dell-per740xd-27 ~]# virsh event --all --loop

7. Rewrite 200mb of data to the same disk in the vm

[root@localhost ~]# dd if=/dev/urandom of=/dev/vdb bs=1M count=200; sync
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.33676 s, 157 MB/s

8. The block threshold event is triggered

[root@dell-per740xd-27 ~]# virsh event --all --loop
event 'block-threshold' for domain 'vm1': dev: vdb[3](/dev/sdb) 100000000 7936

9. Check the domstats; they contain the correct allocation info for vdb[3]

[root@dell-per740xd-27 ~]# virsh domstats vm1 --block --backing
Domain: 'vm1'
block.count=3
...
block.2.name=vdb
block.2.path=/dev/sdb
block.2.backingIndex=3
block.2.allocation=210042880
block.2.capacity=1073741824
block.2.physical=1048576000

10. Abort the backup job; the vdb[3] info is no longer in domstats, as expected

[root@dell-per740xd-27 ~]# virsh domjobabort vm1
[root@dell-per740xd-27 ~]# virsh domstats vm1 --block --backing
... no vdb[3] info
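As referenced in step 6, the threshold/event flow from steps 6-8 also maps onto the API. A minimal sketch, assuming the libvirt-python bindings; the 'vdb[3]' device string and the 100000000 threshold are the ones used above:

    import libvirt

    def on_threshold(conn, dom, dev, path, threshold, excess, opaque):
        # Fires once allocation crosses the threshold, matching the
        # 'block-threshold' event shown by 'virsh event --all --loop'.
        print('block-threshold: %s(%s) %d %d' % (dev, path, threshold, excess))

    # An event loop implementation must be registered before opening
    # the connection, or no events will be delivered.
    libvirt.virEventRegisterDefaultImpl()

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('vm1')
    conn.domainEventRegisterAny(dom,
                                libvirt.VIR_DOMAIN_EVENT_ID_BLOCK_THRESHOLD,
                                on_threshold, None)

    # Equivalent of 'virsh domblkthreshold vm1 vdb[3] 100000000'.
    dom.setBlockThreshold('vdb[3]', 100000000)

    while True:  # equivalent of 'virsh event --all --loop'
        libvirt.virEventRunDefaultImpl()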
The patches were pushed upstream:

commit 045a87c526778b49662d0d5d4898bd39aa2e6985
Author: Peter Krempa <pkrempa>
Date:   Fri Oct 29 16:04:45 2021 +0200

    qemuDomainGetStatsBlockExportDisk: Report stats also for helper images

    Add stat entries also for the mirror destination and the backup job
    scratch/target file. This is possible with '-blockdev' as we use unique
    index for the entries.

    The stats are reported when the VIR_CONNECT_GET_ALL_DOMAINS_STATS_BACKING
    is used.

    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2017928
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ján Tomko <jtomko>

commit bc24810c2cabc21d1996fa814737e2f996f2c2bb
Author: Peter Krempa <pkrempa>
Date:   Mon Nov 1 11:35:41 2021 +0100

    qemuMonitorJSONQueryBlockstats: query stats for helper images

    Use the 'query-nodes' flag to return all stats. The flag was introduced
    prior to qemu-2.11 so we can always use it, but we invoke it only when
    querying stats. The other invocation is used for detecting the
    nodenames which is fragile code.

    The images without a frontend don't have the device field so the
    extraction code checks need to be relaxed.

    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ján Tomko <jtomko>

commit 6448470eca50e529658861c6eb3ae8109366db86
Author: Peter Krempa <pkrempa>
Date:   Mon Nov 1 14:31:42 2021 +0100

    qemustatusxml2xmldata: backup-pull: Add private data for scratch image

    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ján Tomko <jtomko>

commit 1e4aff444c93b83435f91c814dd5ae4465918d36
Author: Peter Krempa <pkrempa>
Date:   Mon Nov 1 12:42:39 2021 +0100

    virDomainBackupDefFormat: Propagate private data callbacks

    The formatter for the backup job data didn't pass the virDomainXMLOption
    struct to the disk formatter which meant that the private data of the
    disk source were not formatted.

    This didn't pose a problem for now as the blockjob list remembered the
    nodenames for the jobs, but the backup source lost them.

    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ján Tomko <jtomko>

v7.9.0-44-g045a87c526

Verified with libvirt-7.10.0-1.module+el8.6.0+13502+4f24a11d.x86_64, same steps as comment 9.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1759