Description of problem:
An incremental backup can be started with only one checkpoint id for all disks in a VM. If the VM has a newly added disk, or a combination of QCOW2 and RAW disks, the current incremental backup chain cannot be continued. For an added disk we have to start a new incremental backup flow, and for the mixed-format case (RAW + QCOW2) only a full backup can be used. This limitation inflates the amount of data stored in the restore points.

Version-Release number of selected component (if applicable):
ovirt-4.4.3

How reproducible:
1. Create a VM with a QCOW2 disk and a RAW disk.
2. Start an incremental backup.

Actual results:
The incremental backup fails.

Expected results:
The QCOW2 disk is backed up in incremental mode; the RAW disk gets a full backup.
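For reference, a minimal sketch of how such a backup can be requested through the oVirt Python SDK (ovirtsdk4). This is not the engine's implementation; the engine URL, credentials, and all IDs below are placeholders, not values from this bug:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vm_service = connection.system_service().vms_service().vm_service('VM_ID')
backups_service = vm_service.backups_service()

# Request an incremental backup from the last checkpoint for both disks.
# The desired behavior is that disks that cannot do incremental backup
# (e.g. RAW disks) fall back to a full backup instead of failing.
backup = backups_service.add(
    types.Backup(
        from_checkpoint_id='CHECKPOINT_ID',
        disks=[
            types.Disk(id='QCOW_DISK_ID'),
            types.Disk(id='RAW_DISK_ID'),
        ],
    )
)

connection.close()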
Hello, Eyal.
Please don't forget to keep compatibility with the current usage.
(In reply to Yury.Panchenko from comment #1)
> Hello, Eyal.
> Please don't forget to keep compatibility with the current usage.

Hi Yury,

Sure, we will not break the previous implementation.
Verified on rhv-release-4.4.5-7-001.noarch

Steps:
- Clone a VM from a template (so it has a thin OS disk)
- Add a new preallocated disk to the VM
- Initiate a full backup on both disks
- Initiate an incremental backup on both disks

Repeat this flow with the VM stopped / started.

Expected:
- The OS disk (thin) should have incremental backup_mode [1]
- The data disk (preallocated) should have full backup_mode

Actual:
As expected

[1]:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<disks>
  <disk id="7e8a33df-bd77-43c0-8518-d277be8ac8c4">
    <name>latest-rhel-guest-image-8.3-infra</name>
    <description>latest-rhel-guest-image-8.3-infra (c5705fc)</description>
    <actual_size>46526464</actual_size>
    <alias>latest-rhel-guest-image-8.3-infra</alias>
    <backup>none</backup>
    <backup_mode>incremental</backup_mode>
    <content_type>data</content_type>
    <format>cow</format>
    <image_id>2883d8e3-4336-44a4-b52d-ab011fbe676d</image_id>
    <propagate_errors>false</propagate_errors>
    <provisioned_size>10737418240</provisioned_size>
    <qcow_version>qcow2_v3</qcow_version>
    <shareable>false</shareable>
    <sparse>true</sparse>
    <status>locked</status>
    <storage_type>image</storage_type>
    <total_size>0</total_size>
    <wipe_after_delete>false</wipe_after_delete>
    <disk_profile id="ba2d4ca8-1c08-479b-9b04-b04f19455506"/>
    <quota id="5467fe9e-163b-4dce-9c04-329aa3ee0c41">
      <data_center id="0023a401-1695-4c42-aa61-2f7108c0ccb8"/>
    </quota>
    <storage_domains>
      <storage_domain id="800404de-67cb-4321-a64b-b083a43967e3"/>
    </storage_domains>
  </disk>
  <disk id="c0e6fb33-b753-4cca-825b-9177b4a568db">
    <name>26779_Disk1</name>
    <description></description>
    <actual_size>1073741824</actual_size>
    <alias>26779_Disk1</alias>
    <backup>none</backup>
    <backup_mode>full</backup_mode>
    <content_type>data</content_type>
    <format>raw</format>
    <image_id>28f728a8-ee9e-41c1-9750-655073bc70fc</image_id>
    <propagate_errors>false</propagate_errors>
    <provisioned_size>1073741824</provisioned_size>
    <shareable>false</shareable>
    <sparse>false</sparse>
    <status>locked</status>
    <storage_type>image</storage_type>
    <total_size>0</total_size>
    <wipe_after_delete>false</wipe_after_delete>
    <disk_profile id="c69220b2-0ef2-4ae4-8f10-89b642a7e3e5"/>
    <quota id="5467fe9e-163b-4dce-9c04-329aa3ee0c41">
      <data_center id="0023a401-1695-4c42-aa61-2f7108c0ccb8"/>
    </quota>
    <storage_domains>
      <storage_domain id="9db95765-0fb7-485e-91f2-381354a66d13"/>
    </storage_domains>
  </disk>
</disks>
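For anyone repeating this verification, a small sketch (assuming the ovirtsdk4 Python SDK and a backup that has already been started) of how the per-disk backup_mode shown in [1] can be retrieved; the connection details and IDs are placeholders:

import ovirtsdk4 as sdk

# Placeholder connection details.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

backup_service = (
    connection.system_service()
    .vms_service()
    .vm_service('VM_ID')
    .backups_service()
    .backup_service('BACKUP_ID')
)

# Each disk in the backup reports its own backup_mode:
# 'incremental' for the thin QCOW2 OS disk, 'full' for the
# preallocated RAW data disk.
for disk in backup_service.disks_service().list():
    print(disk.id, disk.backup_mode)

connection.close()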
This bugzilla is included in oVirt 4.4.5 release, published on March 18th 2021. Since the problem described in this bug report should be resolved in oVirt 4.4.5 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.