Bug 1378540 - Libvirt runtime modularization of the block layer
Summary: Libvirt runtime modularization of the block layer
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: lijuan men
URL:
Whiteboard:
Depends On: 1378536
Blocks:
 
Reported: 2016-09-22 16:57 UTC by Ademar Reis
Modified: 2017-08-01 23:57 UTC
CC List: 8 users

Fixed In Version: libvirt-3.1.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1378536
Environment:
Last Closed: 2017-08-01 17:16:43 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
  System:       Red Hat Product Errata
  ID:           RHEA-2017:1846
  Private:      0
  Priority:     normal
  Status:       SHIPPED_LIVE
  Summary:      libvirt bug fix and enhancement update
  Last Updated: 2017-08-01 18:02:50 UTC

Description Ademar Reis 2016-09-22 16:57:03 UTC
+++ This bug was initially created as a clone of Bug #1378536 +++

The reason to modularize the block layer in QEMU is to allow smaller/minimal installations, with two primary use-cases:

1. Allow the split of the QEMU package, bringing fewer dependencies into a minimal installation. This is particularly important for remote storage drivers, such as Ceph and Gluster, which usually link to and require external libraries and tools.

2. Faster startup of QEMU, with fewer drivers to load.
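
For illustration, a minimal installation under such a split might look like the following sketch; the sub-package names are assumptions here, following the packaging scheme that comment 2 later describes:

  # install only the qemu driver plus the mandatory storage core
  yum install libvirt-daemon-driver-qemu libvirt-daemon-driver-storage-core

  # pull in a heavyweight backend (and its Ceph dependencies) only when needed
  yum install libvirt-daemon-driver-storage-rbd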

Comment 2 Peter Krempa 2017-02-22 08:53:33 UTC
Libvirt's storage driver is now split into separate packages and the qemu driver no longer depends strictly on the heavyweight storage backends:

commit 27c8e36d60c7f2d7abf105207546f87e15256c45
Author: Peter Krempa <pkrempa>
Date:   Wed Feb 8 09:20:21 2017 +0100

    spec: Modularize the storage driver
    
    Create a new set of sub-packages containing the new storage driver
    modules so that certain heavy-weight backends (gluster, rbd) can be
    installed separately only if required.
    
    To keep backward compatibility the 'libvirt-driver-storage' package
    will be turned into a virtual package pulling in all the new storage
    backend sub-packages. The storage driver module will be moved into
    libvirt-driver-storage-core including the filesystem backend which is
    mandatory.
    
    This then allows to make libvirt-daemon-driver-qemu depend only on the
    core of the storage driver.
    
    All other meta-packages still depend on the full storage driver and thus
    pull in all the backends.


commit 0a6d3e51b40cfd0628649f985975b0d2be00b8f7
Author: Peter Krempa <pkrempa>
Date:   Tue Feb 7 19:40:29 2017 +0100

    storage: Turn storage backends into dynamic modules
    
    If driver modules are enabled turn storage driver backends into
    dynamically loadable objects. This will allow greater modularity for
    binary distributions, where heavyweight dependencies as rbd and gluster
    can be avoided by selecting only a subset of drivers if the rest is not
    necessary.
    
    The storage modules are installed into 'LIBDIR/libvirt/storage-backend/'
    and users can override the location by using
    'LIBVIRT_STORAGE_BACKEND_DIR' environment variable.
    
    rpm based distros will at this point install all the backends when
    libvirt-daemon-driver-storage package is installed.

Comment 4 lijuan men 2017-05-24 08:11:18 UTC
1. We installed the libvirt-daemon-driver-storage package (which also pulled in all the new storage backend sub-packages) and used our automated test script to cover all the basic test scenarios for those storage types. All tests passed. Is that enough for this bug?


2. I also want to know how to test these sub-packages individually. Do we need to do that?

For example, if I uninstall the libvirt-daemon-driver-storage package and those sub-packages and install only libvirt-daemon-driver-storage-rbd, do we need to run the basic rbd function tests in this scenario?

And do we then need to uninstall libvirt-daemon-driver-storage-rbd and test some negative scenarios to show that this sub-package is required for the rbd storage type? I don't know which scenario is suitable. Do you have any suggestions?

Comment 5 Peter Krempa 2017-05-25 11:33:56 UTC
(In reply to lijuan men from comment #4)
> 1. We installed the libvirt-daemon-driver-storage package (which also pulled
> in all the new storage backend sub-packages) and used our automated test
> script to cover all the basic test scenarios for those storage types. All
> tests passed. Is that enough for this bug?

Yes, this should be okay. When the "old" packages (those that have now become meta-packages) are installed, everything needs to work as it used to.

> 2. I also want to know how to test these sub-packages individually. Do we
> need to do that?
> 
> For example, if I uninstall the libvirt-daemon-driver-storage package and
> those sub-packages and install only libvirt-daemon-driver-storage-rbd, do we
> need to run the basic rbd function tests in this scenario?

You can install just the qemu driver, which depends only on the storage driver core (note that this includes the local file storage backend). You can then install the specific modules you want to test. Since the code uses the same code paths, I don't really think it's worth doing positive testing for all cases (except perhaps some basic checks).
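
A minimal setup along those lines might be (a sketch; the dependency pull-in follows the spec change quoted in comment 2):

  # installs libvirt-daemon-driver-storage-core as its only storage dependency
  yum install libvirt-daemon-driver-qemu

  # then add only the backend module under test
  yum install libvirt-daemon-driver-storage-rbd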

> 
> And do we then need to uninstall libvirt-daemon-driver-storage-rbd and test
> some negative scenarios to show that this sub-package is required for the
> rbd storage type? I don't know which scenario is suitable. Do you have any
> suggestions?

I don't really think this will be necessary to a great extent. It would be worth checking though whether we properly handle the cases where the storage driver module is not present, so that the error message is sane.

Comment 6 lijuan men 2017-05-26 10:11:05 UTC
> > And do we then need to uninstall libvirt-daemon-driver-storage-rbd and
> > test some negative scenarios to show that this sub-package is required for
> > the rbd storage type? I don't know which scenario is suitable. Do you have
> > any suggestions?
> 
> I don't really think this will be necessary to a great extent. It would be
> worth checking though whether we properly handle the cases where the storage
> driver module is not present, so that the error message is sane.

How can I make libvirt or qemu output the error message?

1. I tried # yum remove libvirt-daemon-driver-storage

and then the libvirt package was also removed. Is that reasonable? Can't I remove only libvirt-daemon-driver-storage?

2. # yum remove libvirt-daemon-driver-storage-rbd
This succeeded.

3. Start a guest with an rbd disk:
 <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='lmen/test.img'>
        <host name='10.73.75.52' port='6789'/>
      </source>
      <target dev='vdc' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </disk>

The guest starts up, and the rbd disk can be read and written.
I don't get any error message.

Which scenario will output an error message showing that the libvirt-daemon-driver-storage-rbd package is required?

Comment 7 Peter Krempa 2017-05-29 08:21:06 UTC
RBD disks for VMs don't currently use the storage driver for anything. Try defining an RBD storage pool in this case.
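
For reference, the rbd-pool.xml used in the later tests was not attached; a definition along these lines would match the host from comment 6 and the volume paths seen in comment 8 (a reconstruction, not the actual file):

 <pool type='rbd'>
   <name>rbd</name>
   <source>
     <host name='10.73.75.52' port='6789'/>
     <name>libvirt-pool</name>
   </source>
 </pool>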

Comment 8 lijuan men 2017-06-12 10:01:18 UTC
Verifying the bug.

version:
libvirt-3.2.0-9.el7.x86_64
qemu-kvm-rhev-2.9.0-9.el7.x86_64


Scenario 1: the libvirt-daemon-driver-storage package

When the libvirt package is installed, the libvirt-daemon-driver-storage package is also installed, and it pulls in all the new storage backend sub-packages.

[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-gluster-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-iscsi-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-logical-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-mpath-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-scsi-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-rbd-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-disk-3.2.0-9.el7.x86_64

The libvirt team's automated test script installs the libvirt-daemon-driver-storage package (pulling in all the new storage backend sub-packages) by default, and the script already covers the basic scenarios for all the storage drivers. The latest automated test run passed.


Scenario 2: the new storage backend sub-packages

Remove the libvirt-daemon-driver-storage package and the storage backend sub-packages:

[root@localhost ~]# yum remove libvirt-daemon-driver-storage   

[root@localhost ~]# yum remove libvirt-daemon-driver-storage-gluster-3.2.0-9.el7.x86_64 libvirt-daemon-driver-storage-iscsi-3.2.0-9.el7.x86_64 libvirt-daemon-driver-storage-logical-3.2.0-9.el7.x86_64 libvirt-daemon-driver-storage-mpath-3.2.0-9.el7.x86_64 libvirt-daemon-driver-storage-scsi-3.2.0-9.el7.x86_64 libvirt-daemon-driver-storage-rbd-3.2.0-9.el7.x86_64 libvirt-daemon-driver-storage-disk-3.2.0-9.el7.x86_64

[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64


1. For libvirt-daemon-driver-storage-gluster
1) Negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define gluster-pool.xml 
error: Failed to define pool from gluster-pool.xml
error: internal error: missing backend for pool type 10 (gluster)

2) Positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define gluster-pool.xml 
Pool gluster defined from gluster-pool.xml

[root@localhost ~]# virsh pool-start gluster
Pool gluster started

[root@localhost ~]# virsh vol-list gluster
 Name                 Path                                    
------------------------------------------------------------------------------
 .trashcan            gluster://10.66.5.88/gluster-vol1/.trashcan
 1.img                gluster://10.66.5.88/gluster-vol1/1.img 
 2.img                gluster://10.66.5.88/gluster-vol1/2.img 
 test.raw             gluster://10.66.5.88/gluster-vol1/test.raw

Start a guest with this XML:
 <disk type='volume' device='disk'>
    <driver name='qemu' type='raw'/>
    <source pool='gluster' volume='test.raw'/>
    <target dev='vdb' bus='virtio'/>
  </disk>

[root@localhost ~]# virsh start test
error: Failed to start domain test
error: unsupported configuration: using 'gluster' pools for backing 'volume' disks isn't yet supported


2. For libvirt-daemon-driver-storage-iscsi
1) Negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define iscsi-pool.xml 
error: Failed to define pool from iscsi-pool.xml
error: internal error: missing backend for pool type 5 (iscsi)


2) Positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-iscsi-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define iscsi-pool.xml 
Pool iscsi defined from iscsi-pool.xml

[root@localhost ~]# virsh pool-start iscsi
Pool iscsi started

[root@localhost ~]# virsh vol-list iscsi
 Name                 Path                                    
------------------------------------------------------------------------------
 unit:0:0:0           /dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2016-03.com.virttest:emulated-iscsi.target-lun-0

Start a guest with this disk:
 <disk type='volume' device='disk'>
    <driver name='qemu' type='raw'/>
    <source pool='iscsi' volume='unit:0:0:0'/>
    <target dev='sdb' bus='scsi'/>
  </disk>

[root@localhost ~]# virsh destroy test;virsh start test
Domain test destroyed

Domain test started

The disk can be read and written.



3. For libvirt-daemon-driver-storage-logical
1) Negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define logical-pool.xml 
error: Failed to define pool from logical-pool.xml
error: internal error: missing backend for pool type 3 (logical)

2) Positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-logical-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define logical-pool.xml 
Pool logical defined from logical-pool.xml

[root@localhost ~]# virsh pool-start logical
Pool logical started

[root@localhost ~]# virsh vol-list logical
 Name                 Path                                    
------------------------------------------------------------------------------
 lv                   /dev/vg/lv       


Start a guest with this disk:
 <disk type='volume' device='disk'>
    <driver name='qemu' type='raw'/>
    <source pool='logical' volume='lv'/>
    <target dev='sdb' bus='scsi'/>
  </disk>

[root@localhost ~]# virsh destroy test;virsh start test
Domain test destroyed

Domain test started

The disk can be read and written.



4. For libvirt-daemon-driver-storage-mpath
1) Negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define mpath-pool.xml 
error: Failed to define pool from mpath-pool.xml
error: internal error: missing backend for pool type 7 (mpath)


2) Positive test:

[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-mpath-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define mpath-pool.xml 
Pool mpath defined from mpath-pool.xml

[root@localhost ~]# virsh pool-start mpath
Pool mpath started


5. For libvirt-daemon-driver-storage-scsi
1) Negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64

[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define scsi-pool.xml 
error: Failed to define pool from scsi-pool.xml
error: internal error: missing backend for pool type 6 (scsi)


2) Positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-scsi-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define scsi-pool.xml 
Pool scsi defined from scsi-pool.xml

[root@localhost ~]# virsh pool-start scsi
Pool scsi started

[root@localhost ~]# virsh vol-list scsi
 Name                 Path                                    
------------------------------------------------------------------------------
 unit:0:0:0           /dev/disk/by-path/pci-0000:00:1f.2-ata-3.0

Start a guest with this disk:
 <disk type='volume' device='disk'>
    <driver name='qemu' type='raw'/>
    <source pool='scsi' volume='unit:0:0:0'/>
    <target dev='vdb' bus='virtio'/>
  </disk>

The disk can be read and written.



6. For libvirt-daemon-driver-storage-rbd
1) Negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define rbd-pool.xml 
error: Failed to define pool from rbd-pool.xml
error: internal error: missing backend for pool type 8 (rbd)

2) Positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-rbd-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define rbd-pool.xml 
Pool rbd defined from rbd-pool.xml

[root@localhost ~]# virsh pool-start rbd
Pool rbd started

[root@localhost ~]# virsh vol-list rbd
 Name                 Path                                    
------------------------------------------------------------------------------
 abc.img              libvirt-pool/abc.img                    
 qcow2.img            libvirt-pool/qcow2.img                  
 rbd1.img             libvirt-pool/rbd1.img       

Start a guest with this disk:
 <disk type='volume' device='disk'>
    <driver name='qemu' type='raw'/>
    <source pool='rbd' volume='abc.img'/>
    <target dev='sdb' bus='scsi'/>
  </disk>


[root@localhost ~]# virsh start test
error: Failed to start domain test
error: unsupported configuration: using 'rbd' pools for backing 'volume' disks isn't yet supported



7. For libvirt-daemon-driver-storage-disk
1) Negative test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define disk-pool.xml 
error: Failed to define pool from disk-pool.xml
error: internal error: missing backend for pool type 4 (disk)


2) Positive test:
[root@localhost ~]# rpm -qa | grep libvirt | grep storage
libvirt-daemon-driver-storage-core-3.2.0-9.el7.x86_64
libvirt-daemon-driver-storage-disk-3.2.0-9.el7.x86_64
[root@localhost ~]# systemctl restart libvirtd

[root@localhost ~]# virsh pool-define disk-pool.xml 
Pool disk defined from disk-pool.xml

[root@localhost ~]# virsh pool-start disk
Pool disk started

[root@localhost ~]# virsh vol-list disk
 Name                 Path                                    
------------------------------------------------------------------------------
 sdb1                 /dev/sdb1                               
 sdb2                 /dev/sdb2        

Start a guest with this disk:
 <disk type='volume' device='disk'>
    <driver name='qemu' type='raw'/>
    <source pool='disk' volume='sdb1'/>
    <target dev='vdb' bus='virtio'/>
  </disk>

[root@localhost ~]# virsh destroy test;virsh start test
Domain test destroyed
Domain test started

The disk can be read and written.

Comment 9 lijuan men 2017-06-12 10:04:33 UTC
And I have a question:

I tried # yum remove libvirt-daemon-driver-storage

and then the libvirt package was also removed. Is that reasonable?

Comment 10 Peter Krempa 2017-06-12 12:26:46 UTC
The libvirt meta-package depends on all sub-packages of libvirt, so it's reasonable that it can't stay installed if you want to keep only a subset of the sub-packages.
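
This can be seen directly with a generic rpm query (a sketch, not output captured from this bug):

  # the libvirt meta-package requires the storage driver meta-package
  rpm -q --requires libvirt | grep storage

  # which in turn requires each backend sub-package
  rpm -q --requires libvirt-daemon-driver-storage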

Comment 11 lijuan men 2017-06-13 08:41:27 UTC
(In reply to Peter Krempa from comment #10)
> The libvirt meta-package depends on all sub-packages of libvirt, so it's
> reasonable that it can't stay installed if you want to keep only a subset
> of the sub-packages.


Thanks, Peter.

Based on comments 8-10, the bug is verified; marking the bug status as 'verified'.

Comment 12 errata-xmlrpc 2017-08-01 17:16:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846


