Bug 1798366
| Summary: | Fail to attach disk with copy_on_read on | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | Han Han <hhan> |
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| Status: | CLOSED ERRATA | QA Contact: | Meina Li <meili> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 8.2 | CC: | jdenemar, jgao, jsuchane, lmen, pkrempa, virt-maint, xuzhang |
| Target Milestone: | rc | Flags: | pm-rhel: mirror+ |
| Target Release: | 8.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-6.0.0-6.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-05-05 09:57:11 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Han Han
2020-02-05 07:28:44 UTC
Fixed upstream by:
- b71cf8726c qemu: hotplug: Fix handling of the 'copy-on-read' layer with blockdev
- db57e9daf5 qemuMonitorBlockdevAdd: Take double pointer argument
- a592d589aa qemuMonitorJSONBlockdevDel: Refactor cleanup
- 643294110c qemuMonitorJSONBlockdevAdd: Refactor cleanup

Verified Version:
libvirt-6.0.0-7.el8.x86_64
qemu-kvm-4.2.0-12.module+el8.2.0+5858+afd073bc.x86_64
Verified Steps:
Prepare a disk XML for the following test:
# cat disk.xml
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' copy_on_read='on'/>
<source file='/var/lib/libvirt/images/test.qcow2'/>
<target dev='vdb' bus='virtio'/>
</disk>
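Before attaching, the disk definition can also be sanity-checked programmatically. The sketch below (not part of the original verification steps) inlines the XML from disk.xml above and uses Python's standard-library ElementTree to confirm the copy_on_read attribute is set:

```python
import xml.etree.ElementTree as ET

# Disk definition from disk.xml above, inlined for a self-contained check.
disk_xml = """
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' copy_on_read='on'/>
  <source file='/var/lib/libvirt/images/test.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

disk = ET.fromstring(disk_xml)
driver = disk.find('driver')
# copy_on_read='on' is what makes libvirt insert the copy-on-read
# filter node when the disk is plugged into the guest.
assert driver.get('copy_on_read') == 'on'
print(disk.find('target').get('dev'))
```

This only validates the XML fragment itself; it does not talk to libvirt.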
SC1: Hotplug disk with copy_on_read on
1. Attach the disk to the guest and check the result.
# virsh attach-device lmn1 disk.xml
Device attached successfully
# virsh dumpxml lmn1 | awk '/<disk/,/<\/disk/'
…
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' copy_on_read='on'/>
<source file='/var/lib/libvirt/images/test.qcow2' index='2'/>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</disk>
2. Detach the disk from the guest.
# virsh detach-device lmn1 disk.xml
Device detached successfully
# virsh domblklist lmn1
Target Source
---------------------------------------------
vda /var/lib/libvirt/images/lmn.qcow2
SC2: Cold plug with copy_on_read on
1. Make sure the guest is shut off.
2. Cold plug the disk to the guest, start the guest and check the result.
# virsh attach-device lmn1 disk.xml --config
Device attached successfully
# virsh start lmn1
Domain lmn1 started
# virsh dumpxml lmn1 | awk '/<disk/,/<\/disk/'
…
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' copy_on_read='on'/>
<source file='/var/lib/libvirt/images/test.qcow2' index='1'/>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</disk>
# ps -ef | grep qemu | grep copy-on-read
…
-blockdev {"driver":"file","filename":"/var/lib/libvirt/images/test.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}
-blockdev {"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null}
-blockdev {"driver":"copy-on-read","node-name":"libvirt-CoR-vdb","file":"libvirt-1-format"}
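The three -blockdev arguments above form a node chain: the qcow2 format node opens the file (storage) node, and the copy-on-read filter node sits on top of the format node. As an illustrative check (node names copied from the command line above), that relationship can be verified by parsing the JSON values:

```python
import json

# The three -blockdev JSON values from the qemu command line above.
blockdevs = [
    '{"driver":"file","filename":"/var/lib/libvirt/images/test.qcow2",'
    '"node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}',
    '{"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2",'
    '"file":"libvirt-1-storage","backing":null}',
    '{"driver":"copy-on-read","node-name":"libvirt-CoR-vdb","file":"libvirt-1-format"}',
]

# Index every node by its node-name so the chain can be walked.
nodes = {n["node-name"]: n for n in map(json.loads, blockdevs)}

# The copy-on-read filter must sit directly on the qcow2 format node,
# which in turn must open the file (storage) node.
cor = nodes["libvirt-CoR-vdb"]
assert cor["driver"] == "copy-on-read"
fmt = nodes[cor["file"]]
assert fmt["driver"] == "qcow2"
assert nodes[fmt["file"]]["driver"] == "file"
print("libvirt-CoR-vdb -> libvirt-1-format -> libvirt-1-storage")
```

This is exactly the layering the fixed hotplug code has to tear down again on detach, which is where the original bug lived.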
3. Detach the disk from the guest.
# virsh detach-device lmn1 disk.xml --config
Device detached successfully
# virsh destroy lmn1; virsh start lmn1
Domain lmn1 destroyed
Domain lmn1 started
# virsh domblklist lmn1
Target Source
---------------------------------------------
vda /var/lib/libvirt/images/lmn.qcow2
Both scenarios behave as expected, so moving this bug to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2017