Bug 1861674 - [CBT] Report backup mode (full or incremental) for disks that participate in the VM backup
Summary: [CBT] Report backup mode (full or incremental) for disks that participate in the VM backup
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.4.3
Target Release: ---
Assignee: Eyal Shenitzky
QA Contact: Ilan Zuckerman
URL:
Whiteboard:
Depends On: 1829829
Blocks:
 
Reported: 2020-07-29 09:04 UTC by Eyal Shenitzky
Modified: 2021-01-06 09:25 UTC
CC: 6 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2020-11-11 06:41:27 UTC
oVirt Team: Storage
Embargoed:
pm-rhel: ovirt-4.4+
izuckerm: testing_plan_complete+




Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 111561 0 master MERGED core: add 'backup_mode' column to base_disk table 2021-02-02 09:31:54 UTC
oVirt gerrit 111562 0 master MERGED core: set backup mode for each disk in the backup 2021-02-02 09:31:54 UTC
oVirt gerrit 111570 0 master MERGED backup.py: add backup_mode property for each backup disk 2021-02-02 09:31:54 UTC
oVirt gerrit 111571 0 master MERGED backup.py: Use 'backup_mode' property for each disk in the backup 2021-02-02 09:31:54 UTC
oVirt gerrit 111718 0 master MERGED restapi: add DiskBackupMode mapper 2021-02-02 09:32:39 UTC

Description Eyal Shenitzky 2020-07-29 09:04:24 UTC
Description of problem:

When incremental backup was introduced as a tech preview, there was a known libvirt bug 1829829 that prevented including new or existing disks in a backup operation if those disks were not included in the previous backup.

In this case, a full backup should be taken.

Version-Release number of selected component (if applicable):
4.4.1

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with disks
2. Run the VM
3. Create an incremental backup for the VM
4. Add a new disk to the VM
5. Create another incremental backup for the VM

Actual results:
Incremental backup cannot be taken with the new disk

Expected results:
Incremental backup should be taken with the new disk

Additional info:

Comment 1 Nir Soffer 2020-08-05 09:19:03 UTC
Eyal, I modified the title since bugs should specify the problem,
not the solution.

Please do not add backupDiskMode to the API!

Comment 2 Eyal Shenitzky 2020-08-05 09:35:38 UTC
In order to include disks that weren't included before in an existing backup chain, we should add a new attribute backupmode='full' for each new disk
in the 'domainbackup' XML that is given to libvirt (see - https://gitlab.com/libvirt/libvirt/-/commit/7e5b993d3b8cae9c43b753591a7b12db5c540da5)

For example - 

<domainbackup mode='pull'>
 <server transport='unix' socket='/path/to/sock'/>
 <incremental>1234</incremental>
 <disks>
   <disk name='sda' backup='yes' type='file' backupmode='full' exportname='sda'>
       <driver type='qcow2'/>
       <scratch file='/path/to/scratch_sda'>
           <seclabel model='dac' relabel='no'/>
       </scratch>
   </disk>
   <disk name='vda' backup='yes' type='file' exportname='vda'>
       <driver type='qcow2'/>
       <scratch file='/path/to/scratch_vda'>
           <seclabel model="dac" relabel="no"/>
       </scratch>
   </disk>
 </disks>
</domainbackup>
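
For illustration, here is a minimal sketch of how such a per-disk element could be generated (plain Python with the standard library; the helper name and paths are made up, this is not the actual vdsm code):

import xml.etree.ElementTree as ET

def make_backup_disk(name, scratch_path, backup_mode=None):
    # Attribute names follow the libvirt domainbackup schema shown above.
    disk = ET.Element("disk", name=name, backup="yes", type="file",
                      exportname=name)
    if backup_mode:
        # backupmode='full' forces a full backup for a disk that has no
        # checkpoint from a previous backup.
        disk.set("backupmode", backup_mode)
    ET.SubElement(disk, "driver", type="qcow2")
    scratch = ET.SubElement(disk, "scratch", file=scratch_path)
    ET.SubElement(scratch, "seclabel", model="dac", relabel="no")
    return disk

# New disk: force full backup. Existing disk: default (incremental).
disks = ET.Element("disks")
disks.append(make_backup_disk("sda", "/path/to/scratch_sda", "full"))
disks.append(make_backup_disk("vda", "/path/to/scratch_vda"))
print(ET.tostring(disks).decode())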

Comment 3 Nir Soffer 2020-08-05 10:00:38 UTC
(In reply to Eyal Shenitzky from comment #2)
> In order to include disks that weren't included before in an existing
> backup chain, we should add a new attribute backupmode='full' for each
> new disk in the 'domainbackup' XML that is given to libvirt (see -
> https://gitlab.com/libvirt/libvirt/-/commit/
> 7e5b993d3b8cae9c43b753591a7b12db5c540da5)

This is only part of the picture. The issue is not how to enable it
in libvirt, but how to let the user know what is available.

Let's think first about the user flow:

1. user starts incremental backup
2. system detects the backup mode for all the disks included in the backup
3. system starts backup with the backup mode
4. system reports the backup mode for every disk

<backup id="backup-uuid">
    <disks>
        <disk id="existing-disk-uuid" backup_mode="incremental"/>
        <disk id="new-disk-uuid" backup_mode="full"/>
        ...
        ...
    </disks>
    <phase>initializing</phase>
    <creation_date>...</creation_date>
</backup>

5. user starts transfer for each disk
6. system creates a ticket with dirty=true for disks using backupmode=incremental
   and dirty=false for disks using backupmode=full
7. since the user knows which disks can use incremental backup and which cannot,
   they will use the right context when getting extents.

Users that ignore this information can try incremental backup and fall
back to full backup if getting extents with context=dirty fails with
"404 Not Found".

This means we need to keep the backup mode in the vm_backup_disk_map table.

The first step to add this is to update the feature page with the new
info.

Once we have this support, can we eliminate the error when a user starts
an incremental backup when a full backup is required? The error can be
replaced with a response like:

<backup id="backup-uuid">
    <disks>
        <disk id="disk-1-uuid" backup_mode="full"/>
        <disk id="disk-2-uuid" backup_mode="full"/>
        ...
        ...
    </disks>
    <phase>initializing</phase>
    <creation_date>...</creation_date>
</backup>

With this we eliminate the error; incremental backup becomes just a hint.

We need to get feedback from backup vendors on this change before we
implement it.

Comment 4 Eyal Shenitzky 2020-10-22 06:34:32 UTC
Flows that should be verified are:

1. Create a VM with disks
2. Run the VM
3. Create a full backup for the VM

Expected results:
All the VM disks' 'backup_mode' should be set to 'full' under -
.../ovirt-engine/api/vms/vm-uuid/backups/backup-uuid/disks
And a full backup should be taken for the VM disks


-----------------------------------------
1. Create a VM with disks
2. Run the VM
3. Create a full backup for the VM
4. Finalize the backup
5. Create another incremental backup for the VM

Expected results:
All the VM disks 'backup_mode' should be set to 'incremental' under - 
.../ovirt-engine/api/vms/vm-uuid/backups/backup-uuid/disks/disks
And an incremental backup was taken the VM disks

-----------------------------------------
1. Create a VM with disks
2. Run the VM
3. Create a full backup for the VM
4. Finalize the backup
5. Add a new disk to the VM
6. Create another incremental backup for the VM that includes the new disk

Expected results:
For all the VM disks that were part of the 'full' backup, 'backup_mode' should be set to 'incremental', and 'backup_mode' for the new disk should be 'full' under -
.../ovirt-engine/api/vms/vm-uuid/backups/backup-uuid/disks
An incremental backup should be taken for the VM disks that were part of the full backup, and a full backup for the new disk.
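
A hypothetical sketch of how these checks could be scripted with ovirt-engine-sdk (ovirtsdk4); the service path mirrors the REST URL above, and connection details and the expected mapping are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

def check_backup_modes(connection, vm_id, backup_id, expected):
    # expected maps disk id -> types.DiskBackupMode.FULL / INCREMENTAL
    disks_svc = (connection.system_service().vms_service()
                 .vm_service(vm_id).backups_service()
                 .backup_service(backup_id).disks_service())
    for disk in disks_svc.list():
        assert disk.backup_mode == expected[disk.id], \
            "disk %s: got %s" % (disk.id, disk.backup_mode)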

Comment 5 Ilan Zuckerman 2020-10-28 11:34:39 UTC
Create a VM with disks
Run the VM
Create full backups for the VM
Finalize the backup
Add a new disk to the VM
Create another incremental backup for the VM that includes the new disks


Here is a response to a GET backup request for a VM with two disks:

27150_qcow_incr_enabled : This is a newly added disk which wasn't previously backed up
backup_mode: full

latest-rhel-guest-image-8.2-infra : This is a disk that had been fully backed up previously and was now backed up incrementally.
backup_mode: incremental

All as expected.
Verified on: rhv-4.4.3-10



GET {{engine}}vms/{{myvm_id}}/backups/{{backup_id}}/disks

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<disks>
    <disk id="e1ea9d87-0fd2-48c9-879c-5d7c5b55f894">
        <name>27150_qcow_incr_enabled</name>
        <description></description>
        <actual_size>1073741824</actual_size>
        <alias>27150_qcow_incr_enabled</alias>
        <backup>incremental</backup>
        <backup_mode>full</backup_mode>
        <content_type>data</content_type>
        <format>cow</format>
        <image_id>6e473350-fef7-44f5-beb2-ec367a6aaf57</image_id>
        <propagate_errors>false</propagate_errors>
        <provisioned_size>1073741824</provisioned_size>
        <qcow_version>qcow2_v3</qcow_version>
        <shareable>false</shareable>
        <sparse>true</sparse>
        <status>locked</status>
        <storage_type>image</storage_type>
        <total_size>0</total_size>
        <wipe_after_delete>false</wipe_after_delete>
        <disk_profile id="32f63050-725e-4595-8a8f-a5de897fc86e"/>
        <quota id="75e056b9-0408-446a-814b-0037b73d5e57">
            <data_center id="3f8db389-2f98-4aa8-9126-8c03376b897a"/>
        </quota>
        <storage_domains>
            <storage_domain id="746a5fe5-481a-4c6d-8267-7b3d58f42414"/>
        </storage_domains>
    </disk>
    <disk id="b73c3c01-9e4d-4b3a-8121-5388e27cd492">
        <name>latest-rhel-guest-image-8.2-infra</name>
        <description>latest-rhel-guest-image-8.2-infra (91fc53b)</description>
        <actual_size>43589632</actual_size>
        <alias>latest-rhel-guest-image-8.2-infra</alias>
        <backup>incremental</backup>
        <backup_mode>incremental</backup_mode>
        <content_type>data</content_type>
        <format>cow</format>
        <image_id>edd18e48-8786-4939-85d8-f844b41ab8a9</image_id>
        <propagate_errors>false</propagate_errors>
        <provisioned_size>10737418240</provisioned_size>
        <qcow_version>qcow2_v3</qcow_version>
        <shareable>false</shareable>
        <sparse>true</sparse>
        <status>locked</status>
        <storage_type>image</storage_type>
        <total_size>0</total_size>
        <wipe_after_delete>false</wipe_after_delete>
        <disk_profile id="b8683335-8fc3-4cde-ae4d-d51e34251c2b"/>
        <quota id="75e056b9-0408-446a-814b-0037b73d5e57">
            <data_center id="3f8db389-2f98-4aa8-9126-8c03376b897a"/>
        </quota>
        <storage_domains>
            <storage_domain id="809ecf3e-9988-41aa-9510-5bd3ec6bf5d9"/>
        </storage_domains>
    </disk>
</disks>
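
For reference, the same listing can be fetched with a plain HTTP client; a sketch using 'requests' (engine URL, credentials, and CA file are placeholders):

import requests

vm_id = "myvm-uuid"        # placeholder
backup_id = "backup-uuid"  # placeholder
resp = requests.get(
    "https://engine.example.com/ovirt-engine/api"
    "/vms/%s/backups/%s/disks" % (vm_id, backup_id),
    auth=("admin@internal", "password"),
    headers={"Accept": "application/xml"},
    verify="ca.pem")
resp.raise_for_status()
print(resp.text)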

Comment 6 Sandro Bonazzola 2020-11-11 06:41:27 UTC
This bug is included in the oVirt 4.4.3 release, published on November 10th, 2020.

Since the problem described in this bug report should be resolved in the oVirt 4.4.3 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

