Bug 1144922 - wrong backingStore info after blockpull and destroy/start guest
Summary: wrong backingStore info after blockpull and destroy/start guest
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-09-22 02:44 UTC by Shanzhi Yu
Modified: 2015-03-05 07:44 UTC
CC: 5 users

Fixed In Version: libvirt-1.2.8-4.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-05 07:44:58 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:0323 0 normal SHIPPED_LIVE Low: libvirt security, bug fix, and enhancement update 2015-03-05 12:10:54 UTC

Description Shanzhi Yu 2014-09-22 02:44:44 UTC
Description of problem:

wrong backingStore info after blockpull and destroy/start guest

Version-Release number of selected component (if applicable):

libvirt-1.2.8-3.el7.x86_64

How reproducible:

100%

Steps to Reproduce:

1. Prepare a running guest

# virsh list
 Id    Name                           State
----------------------------------------------------
 32    rh7-l                          running

2. Create four external disk snapshot

# for i in 1 2 3 4 ;do virsh snapshot-create-as rh7-l s$i --disk-only --diskspec vda,file=/tmp/rh7-l.s$i;done
Domain snapshot s1 created
Domain snapshot s2 created
Domain snapshot s3 created
Domain snapshot s4 created

# virsh dumpxml rh7-l
..
<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/tmp/rh7-l.s4'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/tmp/rh7-l.s3'/>
        <backingStore type='file' index='2'>
          <format type='qcow2'/>
          <source file='/tmp/rh7-l.s2'/>
          <backingStore type='file' index='3'>
            <format type='qcow2'/>
            <source file='/tmp/rh7-l.s1'/>
            <backingStore type='block' index='4'>
              <format type='raw'/>
              <source dev='/dev/vg01/lv01'/>
              <backingStore/>
            </backingStore>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
..
# qemu-img info /tmp/rh7-l.s4 --backing-chain

rh7-l.s4 -> rh7-l.s3 -> rh7-l.s2 -> rh7-l.s1 -> /dev/vg01/lv01
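For reference, the one-line chain above was flattened by hand from the `qemu-img info --backing-chain` output; a small helper can do it mechanically. This is only a sketch, assuming the `image:` lines appear in order from the active layer down to the base, as in the full listings later in this bug:

```shell
# Sketch: collapse `qemu-img info --backing-chain` output (read on stdin)
# into a one-line "a -> b -> c" chain. Assumes `image:` lines are ordered
# from the active layer down to the base, as in this report's listings.
chain() {
  awk -F': ' '/^image:/ { if (n++) printf " -> "; printf "%s", $2 } END { print "" }'
}

# Demonstrated on a captured snippet instead of a live image:
chain <<'EOF'
image: /tmp/rh7-l.s4
image: /tmp/rh7-l.s3
EOF
# prints: /tmp/rh7-l.s4 -> /tmp/rh7-l.s3
```

On a live host the same helper would be fed from `qemu-img info /tmp/rh7-l.s4 --backing-chain` instead of a heredoc.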

3. Do blockpull from middle to top
# virsh blockpull rh7-l vda --base vda[2] --verbose --wait
Block Pull: [100 %]
Pull complete

or

# virsh blockpull rh7-l vda --base /tmp/rh7-l.s3 --verbose --wait

4. Check snapshot file backing chain and guest xml
# qemu-img info /tmp/rh7-l.s4 --backing-chain

rh7-l.s4 -> rh7-l.s2 -> rh7-l.s1 -> /dev/vg01/lv01
# virsh dumpxml rh7-l|grep disk -A 16
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/tmp/rh7-l.s4'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/tmp/rh7-l.s2'/>
        <backingStore type='file' index='2'>
          <format type='qcow2'/>
          <source file='/tmp/rh7-l.s1'/>
          <backingStore type='block' index='3'>
            <format type='raw'/>
            <source dev='/dev/vg01/lv01'/>
            <backingStore/>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
..

5. Destroy/start guest then re-check guest xml
# virsh destroy rh7-l; virsh start rh7-l
Domain rh7-l destroyed

Domain rh7-l started

# virsh dumpxml rh7-l
..
 <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/tmp/rh7-l.s4'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/tmp/rh7-l.s3'/>
        <backingStore type='file' index='2'>
          <format type='qcow2'/>
          <source file='/tmp/rh7-l.s2'/>
          <backingStore type='file' index='3'>
            <format type='qcow2'/>
            <source file='/tmp/rh7-l.s1'/>
            <backingStore type='block' index='4'>
              <format type='raw'/>
              <source dev='/dev/vg01/lv01'/>
              <backingStore/>
            </backingStore>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
..

6. Do blockpull again

# virsh blockpull rh7-l vda --base vda[1] --verbose --wait
error: internal error: Unexpected error: (GenericError) 'Base '/tmp/rh7-l.s3' not found'


Actual results:


Expected results:

/tmp/rh7-l.s3 should not appear in the guest XML, since its contents have already been pulled into rh7-l.s4

Additional info:
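While this bug is present, the step-6 failure can be sidestepped by validating the intended base against qemu's own view of the chain (rather than trusting a vda[N] index from the stale persisted XML) before issuing the pull. A sketch; the helper name `in_disk_chain` is made up for illustration, and the paths are the ones from this report:

```shell
# Sketch: check that a base path exists in the on-disk backing chain before
# running blockpull. Reads `qemu-img info --backing-chain` output on stdin;
# the helper name is hypothetical.
in_disk_chain() {
  grep -qx "image: $1"
}

# Demonstrated on a captured post-pull snippet instead of a live image
# (s3 is gone from the on-disk chain, so the guard refuses the pull):
if in_disk_chain /tmp/rh7-l.s3 <<'EOF'
image: /tmp/rh7-l.s4
image: /tmp/rh7-l.s2
EOF
then
  echo "safe to run: virsh blockpull rh7-l vda --base /tmp/rh7-l.s3"
else
  echo "/tmp/rh7-l.s3 is not in the on-disk chain; a vda[N] index from the stale XML would fail"
fi
```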

Comment 1 Peter Krempa 2014-09-26 07:41:25 UTC
Fixed upstream:

commit fe7ef7b112b3b4d6f9c9edf499a79683fb0b7edb
Author: Peter Krempa <pkrempa>
Date:   Thu Sep 25 17:30:28 2014 +0200

    qemu: Always re-detect backing chain
    
    Since 363e9a68 we track backing chain metadata when creating snapshots
    the right way even for the inactive configuration. As we did not yet
    update other code paths that modify the backing chain (blockpull) the
    newDef backing chain gets out of sync.
    
    After stopping of a VM the new definition gets copied to the next start
    one. The new VM then has incorrect backing chain info. This patch
    switches the backing chain detector to always purge the existing backing
    chain and forces re-detection to avoid this issue until we'll have full
    backing chain tracking support.

v1.2.9-rc1-9-gfe7ef7b
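The stale-chain symptom the patch removes can be spot-checked from outside libvirt by listing the `<source>` paths recorded in the domain XML and comparing them against `qemu-img info --backing-chain`; with the fix applied, the two views agree after a destroy/start. A sketch, with the extractor fed from a captured snippet rather than a live `virsh dumpxml`:

```shell
# Sketch: print the <source file='...'/> and <source dev='...'/> paths from
# domain XML read on stdin, one per line, in chain order.
xml_chain() {
  sed -nE "s/.*<source (file|dev)='([^']*)'.*/\2/p"
}

# Captured from step 5 of this report (the stale, pre-fix chain):
xml_chain <<'EOF'
      <source file='/tmp/rh7-l.s4'/>
        <format type='qcow2'/>
        <source file='/tmp/rh7-l.s3'/>
              <source dev='/dev/vg01/lv01'/>
EOF
# prints /tmp/rh7-l.s4, /tmp/rh7-l.s3 and /dev/vg01/lv01, one per line
```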

Comment 4 Yang Yang 2014-10-11 06:59:01 UTC
Verified on libvirt-1.2.8-5.el7.x86_64 and qemu-kvm-rhev-2.1.2-1.el7.x86_64

Steps:
1. Create 4 external disk only snapshots
# for i in s1 s2 s3 s4; do virsh snapshot-create-as qe-con $i --disk-only --diskspec vda,file=/tmp/qe-con.$i; done
Domain snapshot s1 created
Domain snapshot s2 created
Domain snapshot s3 created
Domain snapshot s4 created

# virsh snapshot-list qe-con
 Name                 Creation Time             State
------------------------------------------------------------
 s1                   2014-10-11 14:42:10 +0800 disk-snapshot
 s2                   2014-10-11 14:42:10 +0800 disk-snapshot
 s3                   2014-10-11 14:42:11 +0800 disk-snapshot
 s4                   2014-10-11 14:42:11 +0800 disk-snapshot

2. check the xml
# virsh dumpxml qe-con | grep disk -a16
<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/tmp/qe-con.s4'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/tmp/qe-con.s3'/>
        <backingStore type='file' index='2'>
          <format type='qcow2'/>
          <source file='/tmp/qe-con.s2'/>
          <backingStore type='file' index='3'>
            <format type='qcow2'/>
            <source file='/tmp/qe-con.s1'/>
            <backingStore type='file' index='4'>
              <format type='qcow2'/>
              <source file='/var/lib/libvirt/images/qe-con.qcow2'/>
              <backingStore/>
            </backingStore>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
3. check the image chain
# qemu-img info /tmp/qe-con.s4 --backing-chain
image: /tmp/qe-con.s4
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 3.3M
cluster_size: 65536
backing file: /tmp/qe-con.s3
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false

image: /tmp/qe-con.s3
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536
backing file: /tmp/qe-con.s2
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false

image: /tmp/qe-con.s2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536
backing file: /tmp/qe-con.s1
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false

image: /tmp/qe-con.s1
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536
backing file: /var/lib/libvirt/images/qe-con.qcow2
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false

image: /var/lib/libvirt/images/qe-con.qcow2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 1.2G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false

4. do blockpull
# virsh blockpull qe-con vda --base /tmp/qe-con.s2 --verbose --wait
Block Pull: [100 %]
Pull complete

5. check the image chain again
# qemu-img info /tmp/qe-con.s4 --backing-chain
image: /tmp/qe-con.s4
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 3.4M
cluster_size: 65536
backing file: /tmp/qe-con.s2
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false

image: /tmp/qe-con.s2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536
backing file: /tmp/qe-con.s1
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false

image: /tmp/qe-con.s1
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536
backing file: /var/lib/libvirt/images/qe-con.qcow2
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false

image: /var/lib/libvirt/images/qe-con.qcow2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 1.2G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false

6. check xml
# virsh dumpxml qe-con | grep disk -a16
<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/tmp/qe-con.s4'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/tmp/qe-con.s2'/>
        <backingStore type='file' index='2'>
          <format type='qcow2'/>
          <source file='/tmp/qe-con.s1'/>
          <backingStore type='file' index='3'>
            <format type='qcow2'/>
            <source file='/var/lib/libvirt/images/qe-con.qcow2'/>
            <backingStore/>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
7. destroy/start the domain and check xml again
# virsh destroy qe-con; virsh start qe-con
Domain qe-con destroyed

Domain qe-con started

# virsh dumpxml qe-con | grep disk -a16
<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/tmp/qe-con.s4'/>
      <backingStore type='file' index='1'>
        <format type='qcow2'/>
        <source file='/tmp/qe-con.s2'/>
        <backingStore type='file' index='2'>
          <format type='qcow2'/>
          <source file='/tmp/qe-con.s1'/>
          <backingStore type='file' index='3'>
            <format type='qcow2'/>
            <source file='/var/lib/libvirt/images/qe-con.qcow2'/>
            <backingStore/>
          </backingStore>
        </backingStore>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>

Since the results are as expected, I'm setting this bug to VERIFIED.

Comment 6 errata-xmlrpc 2015-03-05 07:44:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html

