Bug 1144922
| Summary: | wrong backingStore info after blockpull and destroy/start guest | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Shanzhi Yu <shyu> |
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.1 | CC: | dyuan, mzhan, pkrempa, rbalakri, yanyang |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-1.2.8-4.el7 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-03-05 07:44:58 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Fixed upstream:
commit fe7ef7b112b3b4d6f9c9edf499a79683fb0b7edb
Author: Peter Krempa <pkrempa>
Date: Thu Sep 25 17:30:28 2014 +0200
qemu: Always re-detect backing chain
Since 363e9a68 we track backing chain metadata when creating snapshots
the right way even for the inactive configuration. As we did not yet
update other code paths that modify the backing chain (blockpull) the
newDef backing chain gets out of sync.
After stopping of a VM the new definition gets copied to the next start
one. The new VM then has incorrect backing chain info. This patch
switches the backing chain detector to always purge the existing backing
chain and forces re-detection to avoid this issue until we'll have full
backing chain tracking support.
v1.2.9-rc1-9-gfe7ef7b
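The mechanism the commit message describes can be sketched as a small Python model (purely illustrative; `live_chain`, `new_def_chain`, and `blockpull` are stand-ins for libvirt's internal live definition, inactive newDef, and block-pull handling, not real libvirt APIs):

```python
# Illustrative model of the bug fixed by commit fe7ef7b.
# "on_disk" stands for the real backing chain stored in the image files.

def blockpull(chain, base):
    """Pull everything above `base` into the top image: the top entry
    is kept and its backing chain is shortened to start at `base`."""
    return [chain[0]] + chain[chain.index(base):]

# Chain after four external snapshots: s4 -> s3 -> s2 -> s1 -> base image
on_disk = ['s4', 's3', 's2', 's1', 'base']
live_chain = list(on_disk)
new_def_chain = list(on_disk)   # inactive config tracked the same chain

# blockpull --base s2: s3 is merged into s4 on disk and in the live chain,
# but (before the fix) the newDef chain was not updated.
on_disk = blockpull(on_disk, 's2')
live_chain = blockpull(live_chain, 's2')

# On destroy/start, the stale newDef is copied into the next definition ...
restarted_buggy = list(new_def_chain)   # still contains the pulled 's3'
# ... whereas the fix purges the cached chain and re-detects it from disk.
restarted_fixed = list(on_disk)

assert 's3' in restarted_buggy                          # wrong backingStore info
assert restarted_fixed == ['s4', 's2', 's1', 'base']    # correct after the fix
```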
Verified on libvirt-1.2.8-5.el7.x86_64 and qemu-kvm-rhev-2.1.2-1.el7.x86_64
Steps:
1. Create 4 external disk only snapshots
# for i in s1 s2 s3 s4; do virsh snapshot-create-as qe-con $i --disk-only --diskspec vda,file=/tmp/qe-con.$i; done
Domain snapshot s1 created
Domain snapshot s2 created
Domain snapshot s3 created
Domain snapshot s4 created
# virsh snapshot-list qe-con
Name Creation Time State
------------------------------------------------------------
s1 2014-10-11 14:42:10 +0800 disk-snapshot
s2 2014-10-11 14:42:10 +0800 disk-snapshot
s3 2014-10-11 14:42:11 +0800 disk-snapshot
s4 2014-10-11 14:42:11 +0800 disk-snapshot
2. check the xml
# virsh dumpxml qe-con | grep disk -a16
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/tmp/qe-con.s4'/>
<backingStore type='file' index='1'>
<format type='qcow2'/>
<source file='/tmp/qe-con.s3'/>
<backingStore type='file' index='2'>
<format type='qcow2'/>
<source file='/tmp/qe-con.s2'/>
<backingStore type='file' index='3'>
<format type='qcow2'/>
<source file='/tmp/qe-con.s1'/>
<backingStore type='file' index='4'>
<format type='qcow2'/>
<source file='/var/lib/libvirt/images/qe-con.qcow2'/>
<backingStore/>
</backingStore>
</backingStore>
</backingStore>
</backingStore>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
3. check the image chain
# qemu-img info /tmp/qe-con.s4 --backing-chain
image: /tmp/qe-con.s4
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 3.3M
cluster_size: 65536
backing file: /tmp/qe-con.s3
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
image: /tmp/qe-con.s3
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536
backing file: /tmp/qe-con.s2
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
image: /tmp/qe-con.s2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536
backing file: /tmp/qe-con.s1
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
image: /tmp/qe-con.s1
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536
backing file: /var/lib/libvirt/images/qe-con.qcow2
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
image: /var/lib/libvirt/images/qe-con.qcow2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 1.2G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
4. do blockpull
# virsh blockpull qe-con vda --base /tmp/qe-con.s2 --verbose --wait
Block Pull: [100 %]
Pull complete
5. check the image chain again
# qemu-img info /tmp/qe-con.s4 --backing-chain
image: /tmp/qe-con.s4
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 3.4M
cluster_size: 65536
backing file: /tmp/qe-con.s2
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
image: /tmp/qe-con.s2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536
backing file: /tmp/qe-con.s1
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
image: /tmp/qe-con.s1
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536
backing file: /var/lib/libvirt/images/qe-con.qcow2
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
image: /var/lib/libvirt/images/qe-con.qcow2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 1.2G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
6. check xml
# virsh dumpxml qe-con | grep disk -a16
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/tmp/qe-con.s4'/>
<backingStore type='file' index='1'>
<format type='qcow2'/>
<source file='/tmp/qe-con.s2'/>
<backingStore type='file' index='2'>
<format type='qcow2'/>
<source file='/tmp/qe-con.s1'/>
<backingStore type='file' index='3'>
<format type='qcow2'/>
<source file='/var/lib/libvirt/images/qe-con.qcow2'/>
<backingStore/>
</backingStore>
</backingStore>
</backingStore>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
7. destroy/start the domain and check xml again
# virsh destroy qe-con; virsh start qe-con
Domain qe-con destroyed
Domain qe-con started
# virsh dumpxml qe-con | grep disk -a16
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/tmp/qe-con.s4'/>
<backingStore type='file' index='1'>
<format type='qcow2'/>
<source file='/tmp/qe-con.s2'/>
<backingStore type='file' index='2'>
<format type='qcow2'/>
<source file='/tmp/qe-con.s1'/>
<backingStore type='file' index='3'>
<format type='qcow2'/>
<source file='/var/lib/libvirt/images/qe-con.qcow2'/>
<backingStore/>
</backingStore>
</backingStore>
</backingStore>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
The results are as expected, so I am moving this bug to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHSA-2015-0323.html
Description of problem:
wrong backingStore info after blockpull and destroy/start guest
Version-Release number of selected component (if applicable):
libvirt-1.2.8-3.el7.x86_64
How reproducible:
100%
Steps to Reproduce:
1. Prepare a running guest
# virsh list
 Id    Name                           State
----------------------------------------------------
 32    rh7-l                          running
2. Create four external disk-only snapshots
# for i in 1 2 3 4; do virsh snapshot-create-as rh7-l s$i --disk-only --diskspec vda,file=/tmp/rh7-l.s$i; done
Domain snapshot s1 created
Domain snapshot s2 created
Domain snapshot s3 created
Domain snapshot s4 created
# virsh dumpxml rh7-l
..
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/tmp/rh7-l.s4'/>
<backingStore type='file' index='1'>
<format type='qcow2'/>
<source file='/tmp/rh7-l.s3'/>
<backingStore type='file' index='2'>
<format type='qcow2'/>
<source file='/tmp/rh7-l.s2'/>
<backingStore type='file' index='3'>
<format type='qcow2'/>
<source file='/tmp/rh7-l.s1'/>
<backingStore type='block' index='4'>
<format type='raw'/>
<source dev='/dev/vg01/lv01'/>
<backingStore/>
</backingStore>
</backingStore>
</backingStore>
</backingStore>
<target dev='vda' bus='virtio'/>
..
# qemu-img info /tmp/rh7-l.s4 --backing-chain
rh7-l.s4 -> rh7-l.s3 -> rh7-l.s2 -> rh7-l.s1 -> /dev/vg01/lv01
3. Do blockpull from a middle image up to the top
# virsh blockpull rh7-l vda --base vda[2] --verbose --wait
Block Pull: [100 %]
Pull complete
or
# virsh blockpull rh7-l vda --base /tmp/rh7-l.s3 --verbose --wait
4. Check the snapshot file backing chain and guest xml
# qemu-img info /tmp/rh7-l.s4 --backing-chain
rh7-l.s4 -> rh7-l.s2 -> rh7-l.s1 -> /dev/vg01/lv01
# virsh dumpxml rh7-l | grep disk -A 16
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/tmp/rh7-l.s4'/>
<backingStore type='file' index='1'>
<format type='qcow2'/>
<source file='/tmp/rh7-l.s2'/>
<backingStore type='file' index='2'>
<format type='qcow2'/>
<source file='/tmp/rh7-l.s1'/>
<backingStore type='block' index='3'>
<format type='raw'/>
<source dev='/dev/vg01/lv01'/>
<backingStore/>
</backingStore>
</backingStore>
</backingStore>
<target dev='vda' bus='virtio'/>
..
5. Destroy/start the guest, then re-check the guest xml
# virsh destroy rh7-l; virsh start rh7-l
Domain rh7-l destroyed
Domain rh7-l started
# virsh dumpxml rh7-l
..
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/tmp/rh7-l.s4'/>
<backingStore type='file' index='1'>
<format type='qcow2'/>
<source file='/tmp/rh7-l.s3'/>
<backingStore type='file' index='2'>
<format type='qcow2'/>
<source file='/tmp/rh7-l.s2'/>
<backingStore type='file' index='3'>
<format type='qcow2'/>
<source file='/tmp/rh7-l.s1'/>
<backingStore type='block' index='4'>
<format type='raw'/>
<source dev='/dev/vg01/lv01'/>
<backingStore/>
</backingStore>
</backingStore>
</backingStore>
</backingStore>
<target dev='vda' bus='virtio'/>
..
6. Do blockpull again
# virsh blockpull rh7-l vda --base vda[1] --verbose --wait
error: internal error: Unexpected error: (GenericError) 'Base '/tmp/rh7-l.s3' not found'
Actual results:
After destroy/start, the guest XML again lists /tmp/rh7-l.s3 in the backing chain, and a subsequent blockpull fails with the "Base '/tmp/rh7-l.s3' not found" error above.
Expected results:
/tmp/rh7-l.s3 should not appear in the guest xml, since it has already been pulled into rh7-l.s4.
Additional info: