Bug 1832204 - Allow snapshot creation and blockCopy on read-only drives
Summary: Allow snapshot creation and blockCopy on read-only drives
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: 8.3
Assignee: Peter Krempa
QA Contact: yisun
URL:
Whiteboard:
Depends On:
Blocks: 1759933 1821627 1835662
 
Reported: 2020-05-06 10:34 UTC by Benny Zlotnik
Modified: 2020-07-28 07:13 UTC (History)
CC List: 13 users

Fixed In Version: libvirt-6.0.0-20.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned To: 1835662
Environment:
Last Closed: 2020-07-28 07:12:15 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2020:3172, last updated 2020-07-28 07:13:08 UTC

Description Benny Zlotnik 2020-05-06 10:34:52 UTC
Description of problem:

Recent libvirt versions blocked snapshot creation for read-only disks[1]. While this generally makes sense, it also breaks existing functionality for oVirt[2], as we rely on snapshots to perform live storage migration.

We looked into not using a snapshot when doing live storage migration for read-only disks, but this would require major changes in both vdsm and ovirt-engine. This is an already fragile area, and making such changes in upcoming releases would be too risky.

If possible, we would like the original behavior restored.

Actual results:
Creating a snapshot for a read-only disk fails with:
libvirt.libvirtError: unsupported configuration: external snapshot for readonly disk sda is not supported

Expected results:
It should succeed, as should live-merge and blockCopy, which fail with:
"libvirt.libvirtError: internal error: unable to execute QEMU command 'block-commit': Block node is read-only"

Additional info:
[1] https://www.redhat.com/archives/libvir-list/2018-October/msg01322.html
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1759933

Comment 2 Peter Krempa 2020-05-06 16:17:14 UTC
Okay, so there are probably at least 3 issues here, but the original bug report is too vague on describing them:

1) read-only snapshots report an error
2) blockCopy reports 'blockdev-mirror': Block node is read-only
3) live merge ... which is in fact blockCommit reports 'block-commit': Block node is read-only

I presume the workflow is identical to the workflow for read-write disks:
1) snapshot is taken
2) block-copy is started with the _SHALLOW flag; at the same time the now-backing image is copied into the destination via qemu-img convert
3) after it's done, everything is merged back via active-layer block commit
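As a side note for automation: the backing-chain bookkeeping in the three steps above can be sketched as a toy Python model. This is not the libvirt API; the functions and image names are made up purely for illustration:

```python
# Toy model of the qcow2 backing chain during oVirt live storage migration.
# A chain is a list of image paths, base first, active layer last.

def snapshot(chain, overlay):
    # step 1: an external snapshot pushes a new active overlay on top
    return chain + [overlay]

def blockcopy_shallow_pivot(chain, new_base, new_top):
    # step 2: blockcopy --shallow mirrors only the active layer into new_top;
    # new_base was filled in separately by 'qemu-img convert'; pivoting
    # switches the disk to the destination chain
    return [new_base, new_top]

def active_commit(chain):
    # step 3: active-layer block commit merges the overlay back into its base
    return chain[:-1]

chain = ["vdb.qcow2"]
chain = snapshot(chain, "vdb_snap1.qcow2")
chain = blockcopy_shallow_pivot(chain, "vdb_copy.qcow2", "vdb_snap1_copy.qcow2")
chain = active_commit(chain)
assert chain == ["vdb_copy.qcow2"]   # the disk now lives entirely on new storage
```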

Originally, the snapshot itself probably turned the image read-write and the rest of the operations succeeded. With blockdev we create the snapshot in read-only mode. That by itself is okay. Then we add the destination of blockdev mirror as read only. This fails because the job is expected to write to it. We need to make it read-write and fix the permissions at the end.
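The fix described here (temporarily granting write access to the copy destination and restoring it when the job finishes) can be illustrated with a small Python sketch. Libvirt actually does this through its security drivers (DAC ownership and SELinux labels), not a plain chmod, so treat this strictly as an analogy:

```python
import os
import stat
import tempfile
from contextlib import contextmanager

@contextmanager
def temporarily_writable(path):
    # Analogy for libvirt's behavior: grant write access for the duration
    # of a block job, then restore the original permissions.
    orig_mode = stat.S_IMODE(os.stat(path).st_mode)
    os.chmod(path, orig_mode | stat.S_IWUSR)
    try:
        yield path
    finally:
        os.chmod(path, orig_mode)

# Demo: a read-only "mirror destination" is writable only inside the job.
fd, img = tempfile.mkstemp()
os.close(fd)
os.chmod(img, 0o400)                       # read-only, like the copy target
with temporarily_writable(img) as p:
    with open(p, "ab") as f:               # the "copy job" can write now
        f.write(b"data")
assert stat.S_IMODE(os.stat(img).st_mode) == 0o400   # permissions restored
os.unlink(img)
```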

I'm not seeing the third error about block-commit though. That might be related to permissions (I've tested with selinux disabled for now).

Since the bug report is too sparse, please verify that the workflow I expect is indeed what's happening. Additionally I'd appreciate debug logs from the block-commit issue.

Comment 3 Benny Zlotnik 2020-05-10 09:33:15 UTC
(In reply to Peter Krempa from comment #2)
> Okay, so there are probably at least 3 issues here, but the original bug
> report is too vague on describing them:
> 
> 1) read-only snapshots report an error
> 2) blockCopy reports 'blockdev-mirror': Block node is read-only
> 3) live merge ... which is in fact blockCommit reports 'block-commit': Block
> node is read-only
> 
> I presume the workflow is identical to the workflow for read-write disks:
> 1) snapshot is taken
> 2) block-copy is started with the _SHALLOW flag, at the same time the
> now-backing image is copied into the destination via qemu-img convert
> 3) after it's done, everything is merged back via active-layer block commit
> 
> Originally, the snapshot itself probably turned the image read-write and the
> rest of the operations succeeded. With blockdev we create the snapshot in
> read-only mode. That by itself is okay. Then we add the destination of
> blockdev mirror as read only. This fails because the job is expected to
> write to it. We need to make it read-write and fix the permissions at the
> end.
> 
> I'm not seeing the third error about block-commit though. That might be
> related to permissions (I've tested with selinux disabled for now).
> 
> Since the bug report is too sparse, please verify that the workflow I expect
> is indeed what's happening. Additionally I'd appreciate debug logs from the
> block-commit issue.

Apologies for the bad bug report, I assumed the issue was clear.

Disregard the block-commit error; I copied it from the dependent bug, but it was for libvirt 4.5 (before snapshot creation was blocked).
The workflow is correct.

Comment 5 yisun 2020-05-11 03:52:58 UTC
I guess the issue is just creating a snapshot for a readonly disk with the following libvirt steps. This previously seemed to be 'expected behavior', but I'll qa+ it since it blocks upper-layer functionality:

root@yisun-test1 ~ 23:45:13$ rpm -qa | grep libvirt-6
libvirt-6.0.0-19.module+el8.2.1+6538+c148631f.x86_64

root@yisun-test1 ~ 23:36:24$ virsh dumpxml vm1 | awk '/<disk/,/<\/disk>/'
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/images/active.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <readonly/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>

root@yisun-test1 ~ 23:36:27$ virsh start vm1
Domain vm1 started

root@yisun-test1 ~ 23:36:34$ virsh snapshot-create-as vm1 snap1 --disk-only --diskspec vda,snapshot=external,file=/tmp/snap1 --no-metadata 
error: unsupported configuration: external snapshot for readonly disk vda is not supported

The error msg was introduced in libvirt commit 067aad26, but before that commit, a snapshot of a readonly disk would still fail, as the commit's description mentions:
         error: internal error: unable to execute QEMU command 'transaction':
                             Could not create file: Permission denied

And if we do not add --diskspec params, we hit an error from another code branch:
root@yisun-test1 ~ 23:41:18$ virsh snapshot-create-as vm1 snap1 --disk-only
error: unsupported configuration: nothing selected for snapshot

Comment 7 Peter Krempa 2020-05-11 08:56:48 UTC
(In reply to yisun from comment #5)
> I guess the issue is just creating a snapshot for a readonly disk with the
> following libvirt steps. This previously seemed to be 'expected behavior',
> but I'll qa+ it since it blocks upper-layer functionality:
> 
> root@yisun-test1 ~ 23:45:13$ rpm -qa | grep libvirt-6
> libvirt-6.0.0-19.module+el8.2.1+6538+c148631f.x86_64
> 
> root@yisun-test1 ~ 23:36:24$ virsh dumpxml vm1 | awk '/<disk/,/<\/disk>/'
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='qcow2'/>
>       <source file='/home/images/active.qcow2'/>
>       <target dev='vda' bus='virtio'/>
>       <readonly/>
>       <address type='pci' domain='0x0000' bus='0x04' slot='0x00'
> function='0x0'/>
>     </disk>
> 
> root@yisun-test1 ~ 23:36:27$ virsh start vm1
> Domain vm1 started
> 
> root@yisun-test1 ~ 23:36:34$ virsh snapshot-create-as vm1 snap1 --disk-only
> --diskspec vda,snapshot=external,file=/tmp/snap1 --no-metadata 
> error: unsupported configuration: external snapshot for readonly disk vda is
> not supported

You need to test it with --reuse-external and create the overlay files yourself. That's how oVirt is using it. With -blockdev we can fix it even without --reuse-external but I'm contemplating whether it's worth doing it at all.

> 
> The error msg was introduced in libvirt commit 067aad26, but before that
> commit, a snapshot of a readonly disk would still fail, as the commit's
> description mentions:
>          error: internal error: unable to execute QEMU command 'transaction':
>                              Could not create file: Permission denied

See above. With --reuse-external it worked as qemu didn't need to write the qcow2 header into the file.

> 
> And if we do not add --diskspec params, we hit an error from another code
> branch:
> root@yisun-test1 ~ 23:41:18$ virsh snapshot-create-as vm1 snap1 --disk-only
> error: unsupported configuration: nothing selected for snapshot

This is expected and will not be changed.

Comment 8 Peter Krempa 2020-05-11 16:56:14 UTC
Patches proposed upstream:

https://www.redhat.com/archives/libvir-list/2020-May/msg00425.html

Comment 10 Peter Krempa 2020-05-12 10:06:59 UTC
Fixed upstream:

commit 65a12c467cd683809b4d445b8cf1c3ae250209b2
Author: Peter Krempa <pkrempa>
Date:   Wed May 6 18:32:09 2020 +0200

    qemu: blockcopy: Allow copy of read-only disks with -blockdev
    
    'blockdev-mirror' requires the write permission internally to do the
    copy. This means that we have to force the image to be read-write for
    the duration of the copy and can fix it after the copy is done.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1832204
    
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ján Tomko <jtomko>

commit fe574ea1f52daebddfdc91dd27234059c375e4bf
Author: Peter Krempa <pkrempa>
Date:   Wed May 6 17:41:12 2020 +0200

    qemu: snapshot: Allow snapshots of read-only disks when we can create them
    
    With -blockdev or when reusing externally created images and thus
    without the need for formatting the image we actually can support
    snapshots of read-only disks. Arguably it's not very useful so they are
    not done by default but users of libvirt such as oVirt are actually
    using this.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1832204
    
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ján Tomko <jtomko>

commit 10d62782798cd6e4d472a764575c189247a263b3
Author: Peter Krempa <pkrempa>
Date:   Mon May 11 14:23:13 2020 +0200

    qemuBlockStorageSourceCreateFormat: Force write access when formatting images
    
    We need qemu to be able to write the newly created images so that it can
    format them to the specified storage format.
    
    Force write access by relabelling the images when formatting.
    
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ján Tomko <jtomko>

commit 20939b037c37789ddca54c18862fb45b4b41740f
Author: Peter Krempa <pkrempa>
Date:   Mon May 11 15:38:28 2020 +0200

    storage_file: create: Create new images with write permission bit
    
    The 'Create' API of the two storage file backends is used only on
    code-paths where we need to format the image after creating an empty
    file. Since the DAC security driver only modifies the owner of the file
    and not the mode we need to create all files which are going to be
    formatted with the write bit set for the user.
    
    Signed-off-by: Peter Krempa <pkrempa>

Comment 19 yisun 2020-05-15 12:25:10 UTC
Hi Peter,
I hit an issue when testing the fix, please have a look.

Description:
A readonly disk can use the same file as an external snapshot target multiple times, without any 'lock' protection.


Steps:
1. Have a vm with 2 disks [vda, vdb]: vda with <disk snapshot='no'>, vdb with <readonly/>
[root@dell-per740xd-13 files]# virsh dumpxml vm1 | awk '/<disk/,/<\/disk/'
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vda.qcow2' index='2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vdb.qcow2' index='1'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <readonly/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>

2. prepare external snapshot files for vda and vdb
[root@dell-per740xd-13 files]# qemu-img create -f qcow2 /var/lib/libvirt/images/vda.snap1 10G
Formatting '/var/lib/libvirt/images/vda.snap1', fmt=qcow2 size=10737418240 cluster_size=65536 lazy_refcounts=off refcount_bits=16
[root@dell-per740xd-13 files]# qemu-img create -f qcow2 /var/lib/libvirt/images/vdb.snap1 10G
Formatting '/var/lib/libvirt/images/vdb.snap1', fmt=qcow2 size=10737418240 cluster_size=65536 lazy_refcounts=off refcount_bits=16

3. Create external snapshots for vdb by running the same cmd repeatedly
[root@dell-per740xd-13 files]# virsh snapshot-create-as vm1 snap1 --disk-only --diskspec vdb,snapshot=external,file=/var/lib/libvirt/images/vdb.snap1 --no-metadata --reuse-external
Domain snapshot snap1 created
[root@dell-per740xd-13 files]# virsh snapshot-create-as vm1 snap1 --disk-only --diskspec vdb,snapshot=external,file=/var/lib/libvirt/images/vdb.snap1 --no-metadata --reuse-external
Domain snapshot snap1 created
[root@dell-per740xd-13 files]# virsh snapshot-create-as vm1 snap1 --disk-only --diskspec vdb,snapshot=external,file=/var/lib/libvirt/images/vdb.snap1 --no-metadata --reuse-external
Domain snapshot snap1 created

4. Now vdb has many layers, all with the same source:
[root@dell-per740xd-13 files]# virsh dumpxml vm1 | awk '/<disk/,/<\/disk/'
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vda.qcow2' index='2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vdb.snap1' index='5'/>
      <backingStore type='file' index='4'>
        <format type='qcow2'/>
        <source file='/var/lib/libvirt/images/vdb.snap1'/>
        <backingStore type='file' index='3'>
          <format type='qcow2'/>
          <source file='/var/lib/libvirt/images/vdb.snap1'/>
          <backingStore type='file' index='1'>
            <format type='qcow2'/>
            <source file='/var/lib/libvirt/images/vdb.qcow2'/>
            <backingStore/>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='vdb' bus='virtio'/>
      <readonly/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>

5. And blockcommit for vdb is blocked:
[root@dell-per740xd-13 files]# virsh blockcommit vm1 vdb --top vdb[4] --base vdb[3]
error: internal error: unable to execute QEMU command 'block-commit': Failed to get "write" lock

[root@dell-per740xd-13 files]# virsh blockcommit vm1 vdb --top vdb[4] --base vdb[1]
error: internal error: unable to execute QEMU command 'block-commit': Failed to get shared "consistent read" lock

[root@dell-per740xd-13 files]# virsh blockcommit vm1 vdb --active --pivot
error: internal error: unable to execute QEMU command 'block-commit': Failed to get shared "consistent read" lock

6. As a comparison, the snapshot creation cmd for vda cannot be run repeatedly:
[root@dell-per740xd-13 files]# virsh snapshot-create-as vm1 snap1 --disk-only --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/vda.snap1 --no-metadata --reuse-external
Domain snapshot snap1 created
[root@dell-per740xd-13 files]# virsh snapshot-create-as vm1 snap1 --disk-only --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/vda.snap1 --no-metadata --reuse-external
error: internal error: unable to execute QEMU command 'blockdev-add': Failed to get "write" lock
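The 'Failed to get ... lock' errors above come from qemu's image locking: every writable node in the graph takes a lock on its image file, so adding the same file into the chain several times as a writer conflicts with itself. The effect can be demonstrated with plain flock(2) from Python (an analogy only; qemu actually uses fcntl OFD locks on specific byte ranges):

```python
import fcntl
import os
import tempfile

fd, img = tempfile.mkstemp()
os.close(fd)

f1 = open(img, "r+b")
fcntl.flock(f1, fcntl.LOCK_EX | fcntl.LOCK_NB)   # first writer: lock granted

f2 = open(img, "r+b")                            # a second open file description
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    second = "acquired"
except OSError:                                  # BlockingIOError on Linux
    second = "denied"

# 'second' is "denied": analogous to qemu's
#   unable to execute QEMU command 'block-commit': Failed to get "write" lock
f1.close()
f2.close()
os.unlink(img)
```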

Comment 20 Peter Krempa 2020-05-15 12:30:05 UTC
That is not a relevant use case. If you still consider that to be a problem please file a separate bug.

Comment 21 yisun 2020-05-18 09:34:19 UTC
Tested with libvirt-6.0.0-20.module+el8.2.1+6621+ddee07d5.x86_64

1. Have a running vm with two disks; here we'll use vdb to test the fix
[root@dell-per740xd-13 ~]# virsh start vm1
Domain vm1 started

[root@dell-per740xd-13 ~]# virsh domblklist vm1
 Target   Source
---------------------------------------------
 vda      /var/lib/libvirt/images/vda.qcow2
 vdb      /var/lib/libvirt/images/vdb.qcow2

[root@dell-per740xd-13 ~]# virsh dumpxml vm1 | awk '/<disk/,/<\/disk/'
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vda.qcow2' index='2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vdb.qcow2' index='1'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <readonly/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>

2. Prepare external snapshot image
[root@dell-per740xd-13 ~]# qemu-img create -f qcow2 /var/lib/libvirt/images/vdb.snap1 10G
Formatting '/var/lib/libvirt/images/vdb.snap1', fmt=qcow2 size=10737418240 cluster_size=65536 lazy_refcounts=off refcount_bits=16

3. Create external snapshot for readonly disk
[root@dell-per740xd-13 ~]# virsh snapshot-create-as vm1 snap1 --disk-only --diskspec vdb,snapshot=external,file=/var/lib/libvirt/images/vdb.snap1 --no-metadata --reuse-external
Domain snapshot snap1 created

4. Use qemu-img convert to copy the base image to another place
[root@dell-per740xd-13 ~]# qemu-img convert -f qcow2 /var/lib/libvirt/images/vdb.qcow2 -O qcow2 /tmp/vdb_copy.qcow2

5. Do a shallow blockcopy to copy the current active-layer image to another place
[root@dell-per740xd-13 ~]# virsh blockcopy vm1 vdb /tmp/vdb_shallow.qcow2 --shallow --transient-job --reuse-external
Block Copy started


[root@dell-per740xd-13 ~]# virsh blockjob vm1 vdb
Block Copy: [100 %]

[root@dell-per740xd-13 ~]# virsh dumpxml vm1 | awk '/<disk/,/<\/disk/'
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vda.qcow2' index='2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vdb.snap1' index='3'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/var/lib/libvirt/images/vdb.qcow2'/>
        <backingStore/>
      </backingStore>
      <mirror type='file' file='/tmp/vdb_shallow.qcow2' format='qcow2' job='copy' ready='yes'>
        <format type='qcow2'/>
        <source file='/tmp/vdb_shallow.qcow2' index='5'/>
        <backingStore type='file' index='6'>
          <format type='qcow2'/>
          <source file='/var/lib/libvirt/images/vdb.qcow2'/>
          <backingStore/>
        </backingStore>
      </mirror>
      <target dev='vdb' bus='virtio'/>
      <readonly/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>

6. Finish the blockcopy job without pivot
[root@dell-per740xd-13 ~]# virsh blockjob vm1 vdb --abort


[root@dell-per740xd-13 ~]# virsh dumpxml vm1 | awk '/<disk/,/<\/disk/'
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vda.qcow2' index='2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vdb.snap1' index='3'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/var/lib/libvirt/images/vdb.qcow2'/>
        <backingStore/>
      </backingStore>
      <target dev='vdb' bus='virtio'/>
      <readonly/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>

7. Do a blockcommit for vdb to remove the active snapshot layer
[root@dell-per740xd-13 ~]# virsh blockcommit vm1 vdb --top vdb[3] --base vdb[1] --active --pivot
Successfully pivoted

[root@dell-per740xd-13 ~]# virsh dumpxml vm1 | awk '/<disk/,/<\/disk/'
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vda.qcow2' index='2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vdb.qcow2' index='1'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <readonly/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>

Comment 22 yisun 2020-05-18 09:36:07 UTC
Hi Benny,
Could you please confirm whether the above test steps satisfy the ovirt storage migration scenario? Or please try the fix in your env to check that it works. Thanks.

Comment 23 Benny Zlotnik 2020-05-19 13:33:13 UTC
(In reply to yisun from comment #22)
> Hi Benny,
> Could you please confirm whether the above test steps satisfy the ovirt
> storage migration scenario? Or please try the fix in your env to check that
> it works. Thanks.

I assume it won't affect the results, but it's not exactly the same: we use abort with pivot for blockcopy and blockcopy is started before the `qemu-img convert`

Comment 24 yisun 2020-05-20 07:36:05 UTC
(In reply to Benny Zlotnik from comment #23)
> (In reply to yisun from comment #22)
> > Hi Benny,
> > Could you please confirm whether the above test steps satisfy the ovirt
> > storage migration scenario? Or please try the fix in your env to check
> > that it works. Thanks.
> 
> I assume it won't affect the results, but it's not exactly the same: we use
> abort with pivot for blockcopy and blockcopy is started before the `qemu-img
> convert`

Thx Benny,
But the blockcopy with --abort and --pivot makes me a little confused: it seems that after the workflow mentioned in the above steps, the vm is still using the original image?
Since we treat rhv/ovirt related issues with higher priority when implementing automation in tp-libvirt (to make sure we can find potential problems earlier next time), could you please kindly check whether the following steps are how ovirt uses libvirt during live storage migration? My question is inline within the steps:

1. Vdb using image = 'vdb.qcow2'

2. Create a snapshot for vdb with:
# virsh snapshot-create-as vm1 snap1 --disk-only --diskspec vdb,snapshot=external,file=/var/lib/libvirt/images/vdb_snap1.qcow2 --no-metadata --reuse-external
NOW, THE CHAIN IS: vdb.qcow2 <- vdb_snap1.qcow2

3. Start a SHALLOW blockcopy for vdb with:
# virsh blockcopy vm1 vdb /tmp/vdb_snap1_shallow_copy.qcow2 --shallow --transient-job
NOW, THE CHAIN IS: 
	vdb.qcow2 <- vdb_snap1.qcow2 
	AND
	vdb.qcow2 <- vdb_snap1_shallow_copy.qcow2
And vdb is in mirror status

4. Use qemu-img convert to copy 'vdb.qcow2' to a new place, let's say 'vdb_copy.qcow2'

5. Run blockjob with VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT flag to finish the blockcopy job
# virsh blockjob vm1 vdb --abort --pivot
NOW, vdb uses the image chain: vdb.qcow2 <- vdb_snap1_shallow_copy.qcow2

6. Use blockcommit to remove the snapshot layer
# virsh blockcommit vm1 vdb --top vdb[2] --base vdb[1] --active --pivot
(vdb[2] is vdb_snap1_shallow_copy.qcow2 and vdb[1] is vdb.qcow2)
After this, the vm's vdb is still using vdb.qcow2 and NOT the 'vdb_copy.qcow2' generated in step 4?
So after the live storage migration the vm is still using the original image?

Comment 25 Peter Krempa 2020-05-20 07:52:09 UTC
'virsh blockjob --abort --pivot' doesn't semantically make sense, but I'm not sure whether we can forbid the combination at this point if somebody is actually using it.

The real result of the flag combination is as if 'virsh blockjob --pivot' was just specified, because the --pivot flag adds the appropriate flag to the API thus --abort is fully ignored.
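In API terms, virsh's --pivot maps to the VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT flag of virDomainBlockJobAbort(), while a plain --abort passes no extra flag, which is why combining the two behaves like --pivot alone. A minimal illustration (the flag values match libvirt-domain.h; the option-to-flag mapping shown is a simplification of what virsh does):

```python
# Flag values as defined in libvirt's include/libvirt/libvirt-domain.h:
VIR_DOMAIN_BLOCK_JOB_ABORT_ASYNC = 1 << 0
VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT = 1 << 1

def blockjob_abort_flags(pivot=False, async_=False):
    # Simplified model of how virsh builds flags for virDomainBlockJobAbort():
    # '--abort' itself contributes no flag bit; '--pivot' and '--async' do.
    flags = 0
    if pivot:
        flags |= VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT
    if async_:
        flags |= VIR_DOMAIN_BLOCK_JOB_ABORT_ASYNC
    return flags

# 'virsh blockjob --abort --pivot' and 'virsh blockjob --pivot' make the
# same API call:
assert blockjob_abort_flags(pivot=True) == VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT
```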

Comment 26 Benny Zlotnik 2020-06-01 09:34:35 UTC
Sorry for the delay. I guess it was already answered: in vdsm we use the VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT flag, but as it "just pivots", the VM should use the target image after the flow completes.

Comment 28 errata-xmlrpc 2020-07-28 07:12:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3172

