Bug 1378540
Summary: Libvirt runtime modularization of the block layer
Product: Red Hat Enterprise Linux 7
Component: libvirt
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Severity: unspecified
Priority: unspecified
Status: CLOSED ERRATA
Reporter: Ademar Reis <areis>
Assignee: Peter Krempa <pkrempa>
QA Contact: lijuan men <lmen>
CC: dyuan, hhan, hreitz, pkrempa, rbalakri, virt-bugs, virt-maint, xuzhang
Target Milestone: rc
Target Release: ---
Fixed In Version: libvirt-3.1.0-1.el7
Doc Type: If docs needed, set a value
Clone Of: 1378536
Bug Depends On: 1378536
Bug Blocks:
Last Closed: 2017-08-01 17:16:43 UTC
Type: Bug
Regression: ---
Description
Ademar Reis
2016-09-22 16:57:03 UTC
Libvirt's storage driver is now split into separate packages, and the qemu driver no longer depends strictly on the heavyweight storage backends:

commit 27c8e36d60c7f2d7abf105207546f87e15256c45
Author: Peter Krempa <pkrempa>
Date:   Wed Feb 8 09:20:21 2017 +0100

    spec: Modularize the storage driver

    Create a new set of sub-packages containing the new storage driver
    modules so that certain heavy-weight backends (gluster, rbd) can be
    installed separately only if required.

    To keep backward compatibility the 'libvirt-driver-storage' package
    will be turned into a virtual package pulling in all the new storage
    backend sub-packages. The storage driver module will be moved into
    libvirt-driver-storage-core including the filesystem backend which
    is mandatory.

    This then allows to make libvirt-daemon-driver-qemu depend only on
    the core of the storage driver. All other meta-packages still depend
    on the full storage driver and thus pull in all the backends.

commit 0a6d3e51b40cfd0628649f985975b0d2be00b8f7
Author: Peter Krempa <pkrempa>
Date:   Tue Feb 7 19:40:29 2017 +0100

    storage: Turn storage backends into dynamic modules

    If driver modules are enabled turn storage driver backends into
    dynamically loadable objects. This will allow greater modularity for
    binary distributions, where heavyweight dependencies as rbd and
    gluster can be avoided by selecting only a subset of drivers if the
    rest is not necessary.

    The storage modules are installed into
    'LIBDIR/libvirt/storage-backend/' and users can override the location
    by using 'LIBVIRT_STORAGE_BACKEND_DIR' environment variable.

    rpm based distros will at this point install all the backends when
    libvirt-daemon-driver-storage package is installed.

1. We have installed the libvirt-daemon-driver-storage package (it also pulled in all the new storage backend sub-packages) and used our auto test script to cover all the basic test scenarios for those storage types. The test results passed. Is that enough for this bug?
2. I also want to know: how can I test these sub-packages individually? Do we need to?

For example, if I uninstall the libvirt-daemon-driver-storage package and those sub-packages and install only libvirt-daemon-driver-storage-rbd, do we need to run the basic rbd function tests in that scenario? And do we then need to uninstall libvirt-daemon-driver-storage-rbd and run some negative scenarios to show that this sub-package is required for the rbd storage type? I don't know which scenarios are suitable. Do you have any suggestions?

(In reply to lijuan men from comment #4)

> 1. we have installed libvirt-daemon-driver-storage package (it also pulled
> in all the new storage backend sub-packages) and used our auto test script
> to cover all the basic test scenarios for those storage types. The test
> result was pass. Is it enough for this bug?

Yes, this should be okay. In the case where the "old" packages are installed (those that have now become meta-packages), everything needs to work as it used to.

> 2. I also want to know, how can I test these sub-packages respectively? Do
> we need to do it?
>
> For example, when I uninstall libvirt-daemon-driver-storage package and
> those sub-packages, only install libvirt-daemon-driver-storage-rbd, do we
> need to test the basic rbd function test in this scenario?

You can install just the qemu driver, which depends only on the storage driver core (note that this includes the local file storage backend). You can then install the specific modules you want to test. Since all configurations use the same code paths, I don't really think it's worth doing positive testing for every case (except perhaps some basic checks).

> Then do we need to uninstall libvirt-daemon-driver-storage-rbd and test
> some negative scenarios to show we must need this sub-package for rbd
> storage type? But I don't know which scenario is suitable. Do you have any
> suggestion?

I don't really think this will be necessary to a great extent. It would be worth checking, though, that we properly handle the cases where a storage driver module is not present, so that the error message is sane.
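As a side note, the module-directory lookup described in the second commit message above can be sketched in shell. Only the LIBVIRT_STORAGE_BACKEND_DIR variable name is taken from the commit message; the default path below assumes LIBDIR=/usr/lib64, as on x86_64 RHEL:

```shell
# Sketch of the lookup the commit message describes: use
# LIBVIRT_STORAGE_BACKEND_DIR if set, otherwise fall back to the
# (assumed) default LIBDIR/libvirt/storage-backend/ path.
backend_dir="${LIBVIRT_STORAGE_BACKEND_DIR:-/usr/lib64/libvirt/storage-backend}"
echo "storage backend modules would be loaded from: $backend_dir"
```

This also means a negative test can be forced without uninstalling packages, by pointing the variable at an empty directory before starting libvirtd.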
> > Then do we need to uninstall libvirt-daemon-driver-storage-rbd and test some
> > negative scenarios to show we must need this sub-package for rbd storage
> > type ? But I don't know which scenario is suitable. Do you have any
> > suggestion?
>
> I don't really think this will be necessary to a great extent. It would be
> worth checking though whether we properly handle the cases where the storage
> driver module is not present, so that the error message is sane.
How can I make libvirt or qemu output the error message?

1. I tried # yum remove libvirt-daemon-driver-storage, but then the libvirt package was also removed. Is that reasonable? Can I not remove only libvirt-daemon-driver-storage?

2. # yum remove libvirt-daemon-driver-storage-rbd
succeeded

3. Start a guest with an rbd disk:
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source protocol='rbd' name='lmen/test.img'>
<host name='10.73.75.52' port='6789'/>
</source>
<target dev='vdc' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</disk>
The guest can start up and the rbd disk can be read and written; I don't get any error message.

Which scenario will output an error message showing that the libvirt-daemon-driver-storage-rbd package is required?
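One way to exercise the rbd backend of the storage driver directly is through a storage pool rather than a domain disk. A minimal pool definition might look like the sketch below; the pool and source names and the monitor address are placeholders based on values used elsewhere in this report, and a cephx-enabled cluster would additionally need an <auth> element:

```xml
<!-- Hypothetical minimal rbd-pool.xml; adjust names and addresses
     to your setup. -->
<pool type='rbd'>
  <name>rbd</name>
  <source>
    <name>libvirt-pool</name>
    <host name='10.73.75.52' port='6789'/>
  </source>
</pool>
```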
RBD disks for VMs don't currently use the storage driver for anything. Try defining an RBD storage pool in this case.

Verified with versions:
libvirt-3.2.0-9.el7.x86_64
qemu-kvm-rhev-2.9.0-9.el7.x86_64

Scenario 1: the libvirt-daemon-driver-storage package

When the libvirt package is installed, the libvirt-daemon-driver-storage package is also installed, and it pulls in all the new storage backend sub-packages:

[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-gluster-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-iscsi-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-logical-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-mpath-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-scsi-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-rbd-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-disk-3.2.0-9.el7.x86_64

The libvirt team's auto test script installs the libvirt-daemon-driver-storage package (pulling in all the new storage backend sub-packages) by default, and the script already covers the basic scenarios for all the storage drivers. The latest auto test run passed.
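The sub-packages listed above follow a single naming scheme, libvirt-daemon-driver-storage-<backend>. As a quick sketch (backend names taken from the rpm -qa output; the meta-package libvirt-daemon-driver-storage itself is not part of the pattern):

```shell
# Generate the expected backend sub-package names from the backends
# observed above. Sketch only; versions/arch suffixes are omitted.
for backend in core disk iscsi logical mpath scsi gluster rbd; do
  printf 'libvirt-daemon-driver-storage-%s\n' "$backend"
done
```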
Scenario 2: the new storage backend sub-packages

Remove the libvirt-daemon-driver-storage package and the storage backend sub-packages:

[root@localhost ~]# yum remove libvirt-daemon-driver-storage
[root@localhost ~]# yum remove libvirt-daemon-driver-storage-gluster-3.2.0-9.el7.x86_64 libvirt-daemon-driver-storage-iscsi-3.2.0-9.el7.x86_64 libvirt-daemon-driver-storage-logical-3.2.0-9.el7.x86_64 libvirt-daemon-driver-storage-mpath-3.2.0-9.el7.x86_64 libvirt-daemon-driver-storage-scsi-3.2.0-9.el7.x86_64 libvirt-daemon-driver-storage-rbd-3.2.0-9.el7.x86_64 libvirt-daemon-driver-storage-disk-3.2.0-9.el7.x86_64
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64

1. libvirt-daemon-driver-storage-gluster

1) negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define gluster-pool.xml
error: Failed to define pool from gluster-pool.xml
error: internal error: missing backend for pool type 10 (gluster)

2) positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define gluster-pool.xml
Pool gluster defined from gluster-pool.xml
[root@localhost ~]# virsh pool-start gluster
Pool gluster started
[root@localhost ~]# virsh vol-list gluster
Name       Path
------------------------------------------------------------------------------
.trashcan  gluster://10.66.5.88/gluster-vol1/.trashcan
1.img      gluster://10.66.5.88/gluster-vol1/1.img
2.img      gluster://10.66.5.88/gluster-vol1/2.img
test.raw   gluster://10.66.5.88/gluster-vol1/test.raw

Start a guest with:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='gluster' volume='test.raw'/>
  <target dev='vdb' bus='virtio'/>
</disk>

[root@localhost ~]# virsh start test
error: Failed to start domain test
error: unsupported configuration: using 'gluster' pools for backing 'volume' disks isn't yet supported

2. libvirt-daemon-driver-storage-iscsi

1) negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define iscsi-pool.xml
error: Failed to define pool from iscsi-pool.xml
error: internal error: missing backend for pool type 5 (iscsi)

2) positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-iscsi-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define iscsi-pool.xml
Pool iscsi defined from iscsi-pool.xml
[root@localhost ~]# virsh pool-start iscsi
Pool iscsi started
[root@localhost ~]# virsh vol-list iscsi
Name        Path
------------------------------------------------------------------------------
unit:0:0:0  /dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2016-03.com.virttest:emulated-iscsi.target-lun-0

Start a guest with:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='iscsi' volume='unit:0:0:0'/>
  <target dev='sdb' bus='scsi'/>
</disk>

[root@localhost ~]# virsh destroy test; virsh start test
Domain test destroyed
Domain test started

The disk can be read and written.

3. libvirt-daemon-driver-storage-logical

1) negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define logical-pool.xml
error: Failed to define pool from logical-pool.xml
error: internal error: missing backend for pool type 3 (logical)

2) positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-logical-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define logical-pool.xml
Pool logical defined from logical-pool.xml
[root@localhost ~]# virsh pool-start logical
Pool logical started
[root@localhost ~]# virsh vol-list logical
Name  Path
------------------------------------------------------------------------------
lv    /dev/vg/lv

Start a guest with:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='logical' volume='lv'/>
  <target dev='sdb' bus='scsi'/>
</disk>

[root@localhost ~]# virsh destroy test; virsh start test
Domain test destroyed
Domain test started

The disk can be read and written.

4. libvirt-daemon-driver-storage-mpath

1) negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define mpath-pool.xml
error: Failed to define pool from mpath-pool.xml
error: internal error: missing backend for pool type 7 (mpath)

2) positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-mpath-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define mpath-pool.xml
Pool mpath defined from mpath-pool.xml
[root@localhost ~]# virsh pool-start mpath
Pool mpath started

5. libvirt-daemon-driver-storage-scsi

1) negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define scsi-pool.xml
error: Failed to define pool from scsi-pool.xml
error: internal error: missing backend for pool type 6 (scsi)

2) positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-scsi-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define scsi-pool.xml
Pool scsi defined from scsi-pool.xml
[root@localhost ~]# virsh pool-start scsi
Pool scsi started
[root@localhost ~]# virsh vol-list scsi
Name        Path
------------------------------------------------------------------------------
unit:0:0:0  /dev/disk/by-path/pci-0000:00:1f.2-ata-3.0

Start a guest with:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='scsi' volume='unit:0:0:0'/>
  <target dev='vdb' bus='virtio'/>
</disk>

The disk can be read and written.

6. libvirt-daemon-driver-storage-rbd

1) negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define rbd-pool.xml
error: Failed to define pool from rbd-pool.xml
error: internal error: missing backend for pool type 8 (rbd)

2) positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-rbd-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define rbd-pool.xml
Pool rbd defined from rbd-pool.xml
[root@localhost ~]# virsh pool-start rbd
Pool rbd started
[root@localhost ~]# virsh vol-list rbd
Name       Path
------------------------------------------------------------------------------
abc.img    libvirt-pool/abc.img
qcow2.img  libvirt-pool/qcow2.img
rbd1.img   libvirt-pool/rbd1.img

Start a guest with:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='rbd' volume='abc.img'/>
  <target dev='sdb' bus='scsi'/>
</disk>

[root@localhost ~]# virsh start test
error: Failed to start domain test
error: unsupported configuration: using 'rbd' pools for backing 'volume' disks isn't yet supported

7. libvirt-daemon-driver-storage-disk

1) negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define disk-pool.xml
error: Failed to define pool from disk-pool.xml
error: internal error: missing backend for pool type 4 (disk)

2) positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-disk-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd
[root@localhost ~]# virsh pool-define disk-pool.xml
Pool disk defined from disk-pool.xml
[root@localhost ~]# virsh pool-start disk
Pool disk started
[root@localhost ~]# virsh vol-list disk
Name  Path
------------------------------------------------------------------------------
sdb1  /dev/sdb1
sdb2  /dev/sdb2

Start a guest with:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='disk' volume='sdb1'/>
  <target dev='vdb' bus='virtio'/>
</disk>

[root@localhost ~]# virsh destroy test; virsh start test
Domain test destroyed
Domain test started

The disk can be read and written.

And I have a question: I tried # yum remove libvirt-daemon-driver-storage, and then the libvirt package was also removed. Is that reasonable?

The libvirt meta-package depends on all sub-packages of libvirt, so it's reasonable that it can't be installed if you want to install only a subset of the sub-packages.

(In reply to Peter Krempa from comment #10)
> The libvirt meta-package depends on all sub-packages of libvirt, so it's
> reasonable that it can't be installed if you want to install only a subset
> of sub-packages.

Thanks, Peter. Based on comments 8-10 the bug is verified; marking the bug status as 'verified'.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846