Bug 1451398

Summary: [RFE] Add index for the active layer in disk chain
Product: Red Hat Enterprise Linux Advanced Virtualization
Reporter: Denis Chaplygin <dchaplyg>
Component: libvirt
Assignee: Peter Krempa <pkrempa>
Status: CLOSED ERRATA
QA Contact: yisun
Severity: unspecified
Priority: unspecified
Version: 8.0
CC: dyuan, jdenemar, lmen, pkrempa, xuzhang
Target Milestone: rc
Keywords: FutureFeature
Target Release: 8.1
Hardware: Unspecified
OS: Unspecified
Fixed In Version: libvirt-5.10.0-1.el8
Doc Type: If docs needed, set a value
Last Closed: 2020-05-05 09:43:16 UTC
Type: Feature Request
Bug Blocks: 1758964

Description Denis Chaplygin 2017-05-16 14:49:54 UTC
Description of problem:

At the moment libvirt returns indexes for all layers in the chain except the active layer, which is referenced using just a device name (like 'vda'). It would be useful if the active layer were also assigned an index, so it can be processed in the same way as the other layers.

Comment 3 Jaroslav Suchanek 2019-04-24 12:26:30 UTC
This bug is going to be addressed in the next major release.

Comment 4 Peter Krempa 2019-11-27 08:46:48 UTC
Added by commit:

commit 9a28d3fd922aaa6044d87bac7635c761826a36af
Author: Peter Krempa <pkrempa>
Date:   Tue Jun 19 13:03:30 2018 +0200

    conf: Allow formatting and parsing of 'index' for disk source image
    
    Similarly to backing store indexes which will become stable eventually
    we need also to be able to format and store in the status XML for later
    use the index for the top level of the backing chain.
    
    Add XML formatter, parser, schema and docs.

but non-zero indexes are used only with -blockdev, so that is required for this feature.

The blockdev feature was enabled since:

commit c6a9e54ce3252196f1fc6aa9e57537a659646d18
Author: Peter Krempa <pkrempa>
Date:   Mon Jan 7 11:45:19 2019 +0100

    qemu: enable blockdev support

    Now that all pieces are in place (hopefully) let's enable -blockdev.

    We base the capability on presence of the fix for 'auto-read-only' on
    files so that blockdev works properly, mandate that qemu supports
    explicit SCSI id strings to avoid ABI regression and that the fix for
    'savevm' is present so that internal snapshots work.

v5.9.0-390-gc6a9e54ce3

and requires upstream qemu-4.2 or appropriate downstream.

Comment 6 yisun 2020-01-07 09:15:54 UTC
Hi Peter, 
During testing, 2 issues were found; please help confirm them.
The 1st issue is a behaviour change; please confirm whether it is as designed.
The 2nd seems to be a problem, so I am setting this back to ASSIGNED.

=======================================
1st issue: The index sequence changed. 
=======================================
Now the disk snapshots' indices are 5,4,3,2,1, sorted 5 -> 1 in the following XML,
but previously, in rhel8.0 or earlier, the sort was 1 -> 5. 
I saw you mentioned -blockdev cannot have an index of 0, so maybe that caused the sequence change?
But this is an obvious behaviour change which affects our automated scripts. 
For example, when doing 'blockcommit vm1 vda --base vda[2]', vda[2] is a different snapshot layer now. 
We can modify the test scripts to deal with the new sequence, but I am a little worried about whether upper-layer
products or anything else will be affected by this change when doing blockcommit or blockpull.  thx
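Since the indexes are dynamic, one way to keep a blockcommit call like the one above working is to look the current index up from `virsh dumpxml` output right before running the command, rather than hardcoding it. A minimal sketch (the helper name `chain_indexes` and the sample paths are mine, not libvirt API; the sample XML is abbreviated to the shape shown in this comment):

```python
import xml.etree.ElementTree as ET

def chain_indexes(dumpxml, target):
    """Return {source file: index} for one disk's chain, reading the
    dynamic 'index' attributes out of 'virsh dumpxml' output."""
    result = {}
    for disk in ET.fromstring(dumpxml).iter('disk'):
        tgt = disk.find('target')
        if tgt is None or tgt.get('dev') != target:
            continue
        # Active layer: the index sits on <source> directly under <disk>.
        src = disk.find('source')
        if src is not None and src.get('index'):
            result[src.get('file')] = int(src.get('index'))
        # Backing layers: the index sits on each nested <backingStore>.
        for bs in disk.iter('backingStore'):
            bsrc = bs.find('source')
            if bs.get('index') and bsrc is not None:
                result[bsrc.get('file')] = int(bs.get('index'))
    return result

# Abbreviated two-layer chain in the shape shown in this comment.
SAMPLE = """<domain><devices>
  <disk type='file' device='disk'>
    <source file='/img/base.snap1' index='3'/>
    <backingStore type='file' index='1'>
      <format type='qcow2'/>
      <source file='/img/base.qcow2'/>
      <backingStore/>
    </backingStore>
    <target dev='vda' bus='virtio'/>
  </disk>
</devices></domain>"""
```

A test script could then build the argument as `'vda[%d]' % chain_indexes(xml, 'vda')['/img/base.qcow2']` instead of a fixed 'vda[2]'.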

Current libvirt:
[root@dell-per730-58 bug]# for i in {1..5}; do virsh snapshot-create-as vm1 snap$i --disk-only; done
Domain snapshot snap1 created
Domain snapshot snap2 created
Domain snapshot snap3 created
Domain snapshot snap4 created
Domain snapshot snap5 created

[root@dell-per730-58 bug]# virsh dumpxml vm1| awk '/<disk/,/<\/disk/'
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/images/RHEL-8.2-x86_64-latest.snap5' index='6'/>
      <backingStore type='file' index='5'>
        <format type='qcow2'/>
        <source file='/home/images/RHEL-8.2-x86_64-latest.snap4'/>
        <backingStore type='file' index='4'>
          <format type='qcow2'/>
          <source file='/home/images/RHEL-8.2-x86_64-latest.snap3'/>
          <backingStore type='file' index='3'>
            <format type='qcow2'/>
            <source file='/home/images/RHEL-8.2-x86_64-latest.snap2'/>
            <backingStore type='file' index='2'>
              <format type='qcow2'/>
              <source file='/home/images/RHEL-8.2-x86_64-latest.snap1'/>
              <backingStore type='file' index='1'>
                <format type='qcow2'/>
                <source file='/home/images/RHEL-8.2-x86_64-latest.qcow2'/>
                <backingStore/>
              </backingStore>
            </backingStore>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>

As a comparison, the following is what rhel7 looks like, indices sorted 1 -> 6:
rhel7 libvirt # virsh dumpxml avocado-vt-vm1 | awk '/<disk/,/<\/disk/'
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.s3'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.s2'/>
        <backingStore type='file' index='2'>
          <format type='qcow2'/>
          <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.s1'/>
          <backingStore type='file' index='3'>
            <format type='qcow2'/>
            <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.s3'/>
            <backingStore type='file' index='4'>
              <format type='qcow2'/>
              <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.s2'/>
              <backingStore type='file' index='5'>
                <format type='qcow2'/>
                <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.s1'/>
                <backingStore type='file' index='6'>
                  <format type='qcow2'/>
                  <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.qcow2'/>
                  <backingStore/>
                </backingStore>
              </backingStore>
            </backingStore>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>



2nd issue: When the vm has multiple disks, their index numbers are NOT assigned independently per disk
1. the vm has 2 disks:
[root@dell-per730-58 bug]# virsh domblklist vm1
 Target   Source
-----------------------------------------------------
 vda      /home/images/RHEL-8.2-x86_64-latest.qcow2
 vdb      /home/images/img.raw

2. do 3 rounds of disk-only snapshot creation
[root@dell-per730-58 bug]# virsh snapshot-create-as vm1 snap1 --disk-only
[root@dell-per730-58 bug]# virsh snapshot-create-as vm1 snap2 --disk-only
[root@dell-per730-58 bug]# virsh snapshot-create-as vm1 snap3 --disk-only

3. check the vm's xml: the first disk chain has indices [7, 5, 3, 2] and the second disk chain has indices [8, 6, 4, 1]. The indices should be assigned per disk, not drawn from a single shared sequence
[root@dell-per730-58 bug]# virsh dumpxml vm1| awk '/<disk/,/<\/disk/'
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/images/RHEL-8.2-x86_64-latest.snap3' index='7'/>
      <backingStore type='file' index='5'>
        <format type='qcow2'/>
        <source file='/home/images/RHEL-8.2-x86_64-latest.snap2'/>
        <backingStore type='file' index='3'>
          <format type='qcow2'/>
          <source file='/home/images/RHEL-8.2-x86_64-latest.snap1'/>
          <backingStore type='file' index='2'>
            <format type='qcow2'/>
            <source file='/home/images/RHEL-8.2-x86_64-latest.qcow2'/>
            <backingStore/>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/images/img.snap3' index='8'/>
      <backingStore type='file' index='6'>
        <format type='qcow2'/>
        <source file='/home/images/img.snap2'/>
        <backingStore type='file' index='4'>
          <format type='qcow2'/>
          <source file='/home/images/img.snap1'/>
          <backingStore type='file' index='1'>
            <format type='raw'/>
            <source file='/home/images/img.raw'/>
            <backingStore/>
          </backingStore>
        </backingStore>
      </backingStore>
      <mirror type='file' job='active-commit' ready='yes'>
        <format type='qcow2'/>
        <source file='/home/images/img.snap1' index='4'/>
      </mirror>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>

Comment 7 Peter Krempa 2020-01-07 09:30:11 UTC
Both are expected. The user is supposed to reload the indexes from the XML, as they are dynamic. Relying on static indexes was never supported.

Also, the second thing is deliberate. The indexes were never documented as being sequential or as having any meaning. They must be refreshed from the XML.

Comment 8 yisun 2020-01-07 09:54:47 UTC
(In reply to Peter Krempa from comment #7)
> Both are expected. The user is supposed to reload the indexes from the XML
> as they are dynamic. Relying on static indexes was never supported.
> 
> Also the second thing is deliberate. The indexes were never documented as
> being sequential or having any meaning. They must be refreshed from the XML.

OK, for the second thing I'm still confused: why did we make this change? 
In rhel7, vda can have 1,2,3,4... and vdb can also have 1,2,3,4...; they don't affect each other. 
But now vda and vdb share one pool of index numbers, as in the above comment.
And as for 'indexes were never documented as being sequential or having any meaning', hmm, I guess a lot of existing tp-libvirt cases used a hard-coded index as a command parameter. We'll need to modify them...

RHEL7 indices XML as follows:
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.s3'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.s2'/>
        <backingStore type='file' index='2'>
          <format type='qcow2'/>
          <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.s1'/>
          <backingStore type='file' index='3'>
            <format type='qcow2'/>
            <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.s3'/>
            <backingStore type='file' index='4'>
              <format type='qcow2'/>
              <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.s2'/>
              <backingStore type='file' index='5'>
                <format type='qcow2'/>
                <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.s1'/>
                <backingStore type='file' index='6'>
                  <format type='qcow2'/>
                  <source file='/var/lib/libvirt/images/RHEL-7.5-x86_64-latest.qcow2'/>
                  <backingStore/>
                </backingStore>
              </backingStore>
            </backingStore>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/11.s3'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/var/lib/libvirt/images/11.s2'/>
        <backingStore type='file' index='2'>
          <format type='qcow2'/>
          <source file='/var/lib/libvirt/images/11.s1'/>
          <backingStore type='file' index='3'>
            <format type='qcow2'/>
            <source file='/var/lib/libvirt/images/11.s3'/>
            <backingStore type='file' index='4'>
              <format type='qcow2'/>
              <source file='/var/lib/libvirt/images/11.s2'/>
              <backingStore type='file' index='5'>
                <format type='qcow2'/>
                <source file='/var/lib/libvirt/images/11.s1'/>
                <backingStore type='file' index='6'>
                  <format type='qcow2'/>
                  <source file='/var/lib/libvirt/images/11'/>
                  <backingStore/>
                </backingStore>
              </backingStore>
            </backingStore>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>

Comment 9 Peter Krempa 2020-01-13 14:14:47 UTC
(In reply to yisun from comment #8)
> (In reply to Peter Krempa from comment #7)
> > Both are expected. The user is supposed to re-load the indexes from the XML
> > as they are dynamic. Relying on static indexes was never supported.
> > 
> > Also the second thing is deliberate. The indexes were never documented as
> > being sequential or having any meaning. They must be refreshed from the XML.
> 
> OK, for the second thing I'm still confused: why did we make this change? 
> In rhel7, vda can have 1,2,3,4... and vdb can also have 1,2,3,4...;
> they don't affect each other. 

Since they are prefixed with the disk alias, they will not clash. On the other hand, currently the index is based on the node name which will be assigned to the device. Since node names must be globally unique in qemu, I opted for a single generator for simplicity, as we'd otherwise have to track indexes for each disk separately.

Given that block jobs can leave you with sparse backing chains anyway, it doesn't matter much what the initial scenario is. The main point now is that the meaning of vda[2] will not change after a block job. This means that if an image is removed, no other image will ever use its name.
 
Note that the fact that they are based on the node name is again an implementation detail and should not be relied upon.

> But now vda and vdb have mutex index numbers as above comment.
> And for 'indexes were never documented as being sequential or havin any
> meaning', hmm, I guess a lot of exisitng tp-libvirt cases used the hard
> coded index as a command parameter. We'll need to modify them...

Note that the documentation of the square bracket index is as follows:

https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainBlockCommit

"The @base and @top parameters can be either paths to files within the backing chain, or the device target shorthand (the <target dev='...'/> sub-element, such as "vda") followed by an index to the backing chain enclosed in square brackets. Backing chain indexes can be found by inspecting //disk//backingStore/@index in the domain XML. Thus, for example, "vda[3]" refers to the backing store with index equal to "3" in the chain of disk "vda"."

I think we theoretically could do relative indexes, such as vda[-1] referring to the first backing volume, etc., if that might help. Unfortunately the positive indexes will have to mirror what the 'index' attribute in the XML says, per the documentation.
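The relative-index idea can already be emulated client-side by walking the chain in the XML and translating a negative position into the real 'index' attribute. A sketch (the helper `resolve_relative` and its semantics are hypothetical, not libvirt API; the sample XML is abbreviated):

```python
import xml.etree.ElementTree as ET

def resolve_relative(dumpxml, target, pos):
    """Translate a relative position (pos=-1 meaning the first backing image
    below the active layer) into the real 'index' attribute, usable as
    'vda[N]'. Hypothetical client-side helper, not part of the libvirt API."""
    for disk in ET.fromstring(dumpxml).iter('disk'):
        tgt = disk.find('target')
        if tgt is None or tgt.get('dev') != target:
            continue
        indexes = []
        node = disk.find('backingStore')
        # Walk the chain top-down, collecting the dynamic indexes.
        while node is not None and node.get('index'):
            indexes.append(int(node.get('index')))
            node = node.find('backingStore')
        return indexes[-pos - 1]
    raise ValueError('no disk with target %s' % target)

# Abbreviated chain: active layer index 6, backing layers 5 and 1.
CHAIN_XML = """<domain><devices>
  <disk type='file' device='disk'>
    <source file='/img/a.snap2' index='6'/>
    <backingStore type='file' index='5'>
      <source file='/img/a.snap1'/>
      <backingStore type='file' index='1'>
        <source file='/img/a.qcow2'/>
        <backingStore/>
      </backingStore>
    </backingStore>
    <target dev='vda' bus='virtio'/>
  </disk>
</devices></domain>"""
```

Here `resolve_relative(CHAIN_XML, 'vda', -1)` would yield 5, i.e. the string 'vda[5]' for the first backing volume, regardless of how the indexes were generated.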

Comment 10 yisun 2020-01-14 09:05:01 UTC
(In reply to Peter Krempa from comment #9)
> (In reply to yisun from comment #8)
> > (In reply to Peter Krempa from comment #7)
> > > Both are expected. The user is supposed to re-load the indexes from the XML
> > > as they are dynamic. Relying on static indexes was never supported.
> > > 
> > > Also the second thing is deliberate. The indexes were never documented as
> > > being sequential or having any meaning. They must be refreshed from the XML.
> > 
> > OK, for the second thing I'm still confused: why did we make this change? 
> > In rhel7, vda can have 1,2,3,4... and vdb can also have 1,2,3,4...;
> > they don't affect each other. 
> 
> Since they are prefixed with the disk alias, they will not clash. On the other
> hand, currently the index is based on the node name which will be assigned to
> the device. Since node names must be globally unique in qemu, I opted for a
> single generator for simplicity, as we'd otherwise have to track indexes for
> each disk separately.
> 
> Given that block jobs can leave you with sparse backing chains
> anyway, it doesn't matter much what the initial scenario is. The main point
> now is that the meaning of vda[2] will not change after a block job. This
> means that if an image is removed, no other image will ever use its name.
>  
> Note that the fact that they are based on the node name is again an
> implementation detail and should not be relied upon.
> 
> > But now vda and vdb share one pool of index numbers, as in the above comment.
> > And as for 'indexes were never documented as being sequential or having any
> > meaning', hmm, I guess a lot of existing tp-libvirt cases used a hard-coded
> > index as a command parameter. We'll need to modify them...
> 
> Note that the documentation of the square bracket index is as follows:
> 
> https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainBlockCommit
> 
> "The @base and @top parameters can be either paths to files within the
> backing chain, or the device target shorthand (the <target dev='...'/>
> sub-element, such as "vda") followed by an index to the backing chain
> enclosed in square brackets. Backing chain indexes can be found by
> inspecting //disk//backingStore/@index in the domain XML. Thus, for example,
> "vda[3]" refers to the backing store with index equal to "3" in the chain of
> disk "vda"."
> 
> I think we theoretically could do relative indexes, such as vda[-1] referring
> to the first backing volume, etc., if that might help. Unfortunately the
> positive indexes will have to mirror what the 'index' attribute in the XML
> says, per the documentation.

Thx for the detailed explanation!

Verified with:
# rpm -qa | egrep "^libvirt-5|^qemu-kvm-4"
libvirt-5.10.0-2.module+el8.2.0+5274+60f836b5.x86_64
qemu-kvm-4.2.0-5.module+el8.2.0+5389+367d9739.x86_64


Steps:
1. have a running vm with 2 virtual disks:
# virsh domblklist vm1
 Target   Source
---------------------------------------------
 vda      /var/lib/libvirt/images/os.qcow2
 vdb      /var/lib/libvirt/images/vdb.qcow2

2. create 4 disk-only snapshots for it:
# for i in {1..4}; do virsh snapshot-create-as vm1 snap$i --disk-only; done

3. check the vm's disk xml: all backing images and active images have an index
# virsh dumpxml vm1 | egrep "<disk.*>|.*disk>|<source file|<back.*index"
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/os.snap4' index='9'/>
      <backingStore type='file' index='7'>
        <source file='/var/lib/libvirt/images/os.snap3'/>
        <backingStore type='file' index='5'>
          <source file='/var/lib/libvirt/images/os.snap2'/>
          <backingStore type='file' index='3'>
            <source file='/var/lib/libvirt/images/os.snap1'/>
            <backingStore type='file' index='2'>
              <source file='/var/lib/libvirt/images/os.qcow2'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/vdb.snap4' index='10'/>
      <backingStore type='file' index='8'>
        <source file='/var/lib/libvirt/images/vdb.snap3'/>
        <backingStore type='file' index='6'>
          <source file='/var/lib/libvirt/images/vdb.snap2'/>
          <backingStore type='file' index='4'>
            <source file='/var/lib/libvirt/images/vdb.snap1'/>
            <backingStore type='file' index='1'>
              <source file='/var/lib/libvirt/images/vdb.qcow2'/>
    </disk>

4. check the libvirtd debug log; all device node names have the same sequence number as the snapshot index:
# cat /var/log/libvirtd-debug.log | egrep "QEMU_MONITOR_IO_WRITE.*blockdev-add.*driver\":\"file"
2020-01-14 08:27:22.810+0000: 14905: info : qemuMonitorIOWrite:453 : QEMU_MONITOR_IO_WRITE: mon=0x7fa2c00032f0 buf={"execute":"blockdev-add","arguments":{"driver":"file","filename":"/var/lib/libvirt/images/os.snap1","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"},"id":"libvirt-14"}
2020-01-14 08:27:22.827+0000: 14905: info : qemuMonitorIOWrite:453 : QEMU_MONITOR_IO_WRITE: mon=0x7fa2c00032f0 buf={"execute":"blockdev-add","arguments":{"driver":"file","filename":"/var/lib/libvirt/images/vdb.snap1","node-name":"libvirt-4-storage","auto-read-only":true,"discard":"unmap"},"id":"libvirt-19"}
2020-01-14 08:40:48.210+0000: 14905: info : qemuMonitorIOWrite:453 : QEMU_MONITOR_IO_WRITE: mon=0x7fa2c00032f0 buf={"execute":"blockdev-add","arguments":{"driver":"file","filename":"/var/lib/libvirt/images/os.snap2","node-name":"libvirt-5-storage","auto-read-only":true,"discard":"unmap"},"id":"libvirt-26"}
2020-01-14 08:40:48.229+0000: 14905: info : qemuMonitorIOWrite:453 : QEMU_MONITOR_IO_WRITE: mon=0x7fa2c00032f0 buf={"execute":"blockdev-add","arguments":{"driver":"file","filename":"/var/lib/libvirt/images/vdb.snap2","node-name":"libvirt-6-storage","auto-read-only":true,"discard":"unmap"},"id":"libvirt-31"}
2020-01-14 08:41:54.136+0000: 14905: info : qemuMonitorIOWrite:453 : QEMU_MONITOR_IO_WRITE: mon=0x7fa2c00032f0 buf={"execute":"blockdev-add","arguments":{"driver":"file","filename":"/var/lib/libvirt/images/os.snap3","node-name":"libvirt-7-storage","auto-read-only":true,"discard":"unmap"},"id":"libvirt-38"}
2020-01-14 08:41:54.152+0000: 14905: info : qemuMonitorIOWrite:453 : QEMU_MONITOR_IO_WRITE: mon=0x7fa2c00032f0 buf={"execute":"blockdev-add","arguments":{"driver":"file","filename":"/var/lib/libvirt/images/vdb.snap3","node-name":"libvirt-8-storage","auto-read-only":true,"discard":"unmap"},"id":"libvirt-43"}
2020-01-14 08:42:12.135+0000: 14905: info : qemuMonitorIOWrite:453 : QEMU_MONITOR_IO_WRITE: mon=0x7fa2c00032f0 buf={"execute":"blockdev-add","arguments":{"driver":"file","filename":"/var/lib/libvirt/images/os.snap4","node-name":"libvirt-9-storage","auto-read-only":true,"discard":"unmap"},"id":"libvirt-50"}
2020-01-14 08:42:12.152+0000: 14905: info : qemuMonitorIOWrite:453 : QEMU_MONITOR_IO_WRITE: mon=0x7fa2c00032f0 buf={"execute":"blockdev-add","arguments":{"driver":"file","filename":"/var/lib/libvirt/images/vdb.snap4","node-name":"libvirt-10-storage","auto-read-only":true,"discard":"unmap"},"id":"libvirt-55"}

5. destroy and start the vm: (hitting bz1781079, but the vm can be started on the second attempt)
5.1 check the vm's disk xml
# virsh dumpxml vm1 | egrep "<disk.*>|.*disk>|<source file|<back.*index"
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/os.snap4' index='6'/>
      <backingStore type='file' index='7'>
        <source file='/var/lib/libvirt/images/os.snap3'/>
        <backingStore type='file' index='8'>
          <source file='/var/lib/libvirt/images/os.snap2'/>
          <backingStore type='file' index='9'>
            <source file='/var/lib/libvirt/images/os.snap1'/>
            <backingStore type='file' index='10'>
              <source file='/var/lib/libvirt/images/os.qcow2'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/vdb.snap4' index='1'/>
      <backingStore type='file' index='2'>
        <source file='/var/lib/libvirt/images/vdb.snap3'/>
        <backingStore type='file' index='3'>
          <source file='/var/lib/libvirt/images/vdb.snap2'/>
          <backingStore type='file' index='4'>
            <source file='/var/lib/libvirt/images/vdb.snap1'/>
            <backingStore type='file' index='5'>
              <source file='/var/lib/libvirt/images/vdb.qcow2'/>
    </disk>


5.2 check the qemu process
# ps -ef | grep vm1 | grep -v grep
qemu     18649     1  5 03:49 ?        00:00:37 /usr/libexec/qemu-kvm -name guest=vm1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-6-vm1/master-key.aes -machine pc-q35-rhel8.2.0,accel=kvm,usb=off,dump-guest-core=off -cpu Broadwell-IBRS,vme=on,ss=on,vmx=off,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc-adjust=on,umip=on,stibp=on,arch-capabilities=on,xsaveopt=on,pdpe1gb=on,abm=on,ibpb=on,skip-l1dfl-vmentry=on -m 1024 -overcommit mem-lock=off -smp 2,sockets=2,cores=1,threads=1 -uuid 736f1891-668f-4a89-bad3-89d4482b0a2d -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=37,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global ICH9-LPC.disable_s3=1 -global ICH9-LPC.disable_s4=1 -boot strict=on -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 -device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 -device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.2,addr=0x0 -device virtio-serial-pci,id=virtio-serial0,bus=pci.3,addr=0x0 -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/os.qcow2","node-name":"libvirt-10-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-10-format","read-only":true,"driver":"qcow2","file":"libvirt-10-storage","backing":null} -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/os.snap1","node-name":"libvirt-9-storage","auto-read-only":true,"discard":"unmap"} -blockdev 
{"node-name":"libvirt-9-format","read-only":true,"driver":"qcow2","file":"libvirt-9-storage","backing":"libvirt-10-format"} -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/os.snap2","node-name":"libvirt-8-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-8-format","read-only":true,"driver":"qcow2","file":"libvirt-8-storage","backing":"libvirt-9-format"} -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/os.snap3","node-name":"libvirt-7-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-7-format","read-only":true,"driver":"qcow2","file":"libvirt-7-storage","backing":"libvirt-8-format"} -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/os.snap4","node-name":"libvirt-6-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-6-format","read-only":false,"driver":"qcow2","file":"libvirt-6-storage","backing":"libvirt-7-format"} -device virtio-blk-pci,scsi=off,bus=pci.4,addr=0x0,drive=libvirt-6-format,id=virtio-disk0,bootindex=1 -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/vdb.qcow2","node-name":"libvirt-5-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-5-format","read-only":true,"driver":"qcow2","file":"libvirt-5-storage","backing":null} -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/vdb.snap1","node-name":"libvirt-4-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-4-format","read-only":true,"driver":"qcow2","file":"libvirt-4-storage","backing":"libvirt-5-format"} -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/vdb.snap2","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-3-format","read-only":true,"driver":"qcow2","file":"libvirt-3-storage","backing":"libvirt-4-format"} -blockdev 
{"driver":"file","filename":"/var/lib/libvirt/images/vdb.snap3","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-2-format","read-only":true,"driver":"qcow2","file":"libvirt-2-storage","backing":"libvirt-3-format"} -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/vdb.snap4","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":"libvirt-2-format"} -device virtio-blk-pci,scsi=off,bus=pci.7,addr=0x0,drive=libvirt-1-format,id=virtio-disk1 -netdev tap,fd=39,id=hostnet0,vhost=on,vhostfd=40 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:59:bf:54,bus=pci.1,addr=0x0 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=41,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 127.0.0.1:0 -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 -device virtio-balloon-pci,id=balloon0,bus=pci.5,addr=0x0 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.6,addr=0x0 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on

5.3 the qemu process's disk device node names have the same sequence numbers as the files' index numbers in the XML

Comment 12 yisun 2020-04-14 07:27:06 UTC
There is a way to get qemu's node names:
[root@dell-per740xd-11 ~]# virsh qemu-monitor-command vm1 '{"execute":"query-named-block-nodes"}'
{"return":[{"iops_rd":0,"detect_zeroes":"off","image":{"backing-image":{"virtual-size":10737418240,"filename":"/var/lib/libvirt/images/rhel8.qcow2","cluster-size":65536,"format":"qcow2","actual-size":1073680384,"format-specific":{"type":"qcow2","data":{"compat":"1.1","lazy-refcounts":false,"refcount-bits":16,"corrupt":false}},"dirty-flag":false},"backing-filename-format":"qcow2","virtual-size":10737418240,"filename":"/var/lib/libvirt/images/rhel8.snap1","cluster-size":65536,"format":"qcow2","actual-size":35467264,"format-specific":{"type":"qcow2","data":{"compat":"1.1","lazy-refcounts":false,"refcount-bits":16,"corrupt":false}},"full-backing-filename":"/var/lib/libvirt/images/rhel8.qcow2","backing-filename":"/var/lib/libvirt/images/rhel8.qcow2","dirty-flag":false},"iops_wr":0,"ro":false,"node-name":"libvirt-2-format","backing_file_depth":1,"drv":"qcow2","iops":0,"bps_wr":0,"write_threshold":0,"backing_file":"/var/lib/libvirt/images/rhel8.qcow2","encrypted":false,"bps":0,"bps_rd":0,"cache":{"no-flush":false,"direct":true,"writeback":true},"file":"/var/lib/libvirt/images/rhel8.snap1","encryption_key_missing":false},{"iops_rd":0,"detect_zeroes":"off","image":{"virtual-size":35520512,"filename":"/var/lib/libvirt/images/rhel8.snap1","format":"file","actual-size":35467264,"dirty-flag":false},"iops_wr":0,"ro":false,"node-name":"libvirt-2-storage","backing_file_depth":0,"drv":"file","iops":0,"bps_wr":0,"write_threshold":0,"encrypted":false,"bps":0,"bps_rd":0,"cache":{"no-flush":false,"direct":true,"writeback":true},"file":"/var/lib/libvirt/images/rhel8.snap1","encryption_key_missing":false},{"iops_rd":0,"detect_zeroes":"off","image":{"virtual-size":10737418240,"filename":"/var/lib/libvirt/images/rhel8.qcow2","cluster-size":65536,"format":"qcow2","actual-size":1073680384,"format-specific":{"type":"qcow2","data":{"compat":"1.1","lazy-refcounts":false,"refcount-bits":16,"corrupt":false}},"dirty-flag":false},"iops_wr":0,"ro":true,"node-name":"libvirt-1-format","backing_file_dep
th":0,"drv":"qcow2","iops":0,"bps_wr":0,"write_threshold":0,"encrypted":false,"bps":0,"bps_rd":0,"cache":{"no-flush":false,"direct":true,"writeback":true},"file":"/var/lib/libvirt/images/rhel8.qcow2","encryption_key_missing":false},{"iops_rd":0,"detect_zeroes":"off","image":{"virtual-size":831324160,"filename":"/var/lib/libvirt/images/rhel8.qcow2","format":"file","actual-size":1073680384,"dirty-flag":false},"iops_wr":0,"ro":false,"node-name":"libvirt-1-storage","backing_file_depth":0,"drv":"file","iops":0,"bps_wr":0,"write_threshold":0,"encrypted":false,"bps":0,"bps_rd":0,"cache":{"no-flush":false,"direct":true,"writeback":true},"file":"/var/lib/libvirt/images/rhel8.qcow2","encryption_key_missing":false}],"id":"libvirt-383"}
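A reply like the one above can be post-processed to map node names to the image files they expose, which makes it easy to cross-check against the 'index' attributes in the domain XML. A sketch (the helper name `node_file_map` is mine; the sample reply is heavily abbreviated from the output above):

```python
import json

def node_file_map(qmp_reply):
    """Map each block node's node-name to the file it exposes, from the raw
    JSON string returned by:
    virsh qemu-monitor-command vm1 '{"execute":"query-named-block-nodes"}'"""
    return {n['node-name']: n['file'] for n in json.loads(qmp_reply)['return']}

# Heavily abbreviated reply in the shape shown above.
REPLY = ('{"return":[{"node-name":"libvirt-2-format",'
         '"file":"/var/lib/libvirt/images/rhel8.snap1"},'
         '{"node-name":"libvirt-1-storage",'
         '"file":"/var/lib/libvirt/images/rhel8.qcow2"}],"id":"libvirt-383"}')
```

For example, `node_file_map(REPLY)['libvirt-2-format']` recovers the snap1 path, matching the node-name numbering observed in the -blockdev command line.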

Comment 14 errata-xmlrpc 2020-05-05 09:43:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017