Bug 1074169 - RFE: storage: virt-manager UI support for Ceph/RBD pools/volumes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Virtualization Tools
Classification: Community
Component: virt-manager
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Cole Robinson
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-03-08 16:58 UTC by Erik Andersen
Modified: 2020-01-26 17:55 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-26 17:55:51 UTC
Embargoed:


Attachments
a working solution (2.88 KB, patch)
2015-01-20 10:49 UTC, tonich.sh
Test, selecting CEPH image (716.81 KB, image/png)
2017-06-13 20:51 UTC, Alexander von Gluck IV
Test, Invalid CEPH image location per virt-manager (157.06 KB, image/png)
2017-06-13 20:51 UTC, Alexander von Gluck IV

Description Erik Andersen 2014-03-08 16:58:00 UTC
Description of problem:
The virt-manager new storage pool wizard allows you to add pools. QEMU/KVM and libvirt support Ceph's RBD storage pool type (http://libvirt.org/storage.html#StorageBackendRBD), but virt-manager does not show it as an option when creating a pool. Virt-manager gets its list of possible pool formats from storage.py in virtinst.

How reproducible:
Always

Steps to Reproduce:
1. Open virt manager.
2. (Set up a connection to a KVM libvirt enabled host if you don't already have one).
3. Right click on the host, and click details.
4. Click the storage tab.
5. Click the add button (Green plus icon in the lower left).

Actual results:
Note that the Type dropdown does not list RBD as an option.

Expected results:
The Type dropdown offers rbd as an option, allowing you to add a Ceph storage pool to store images.

Additional info:
I tried to start looking at this myself and found that storage.py in virtinst is where the list of formats comes from. I made a few changes there (which I would be happy to share), but I doubt I know enough to make all the changes required for this to be useful. Here's what I have so far:

diff --git a/virtinst/storage.py b/virtinst/storage.py
index 98fcc7c..18c5752 100644
--- a/virtinst/storage.py
+++ b/virtinst/storage.py
@@ -100,6 +100,7 @@ class StoragePool(_StorageObject):
     TYPE_SCSI    = "scsi"
     TYPE_MPATH   = "mpath"
     TYPE_GLUSTER = "gluster"
+    TYPE_RBD     = "rbd"
 
     # Pool type descriptions for use in higher level programs
     _descs = {}
@@ -112,6 +113,7 @@ class StoragePool(_StorageObject):
     _descs[TYPE_SCSI]    = _("SCSI Host Adapter")
     _descs[TYPE_MPATH]   = _("Multipath Device Enumerator")
     _descs[TYPE_GLUSTER] = _("Gluster Filesystem")
+    _descs[TYPE_RBD]     = _("Rados Block Device Pool")
 
     @staticmethod
     def get_pool_types():
@@ -386,9 +388,9 @@ class StoragePool(_StorageObject):
         users = {
             "source_path": [self.TYPE_FS, self.TYPE_NETFS, self.TYPE_LOGICAL,
                             self.TYPE_DISK, self.TYPE_ISCSI, self.TYPE_SCSI],
-            "source_name": [self.TYPE_LOGICAL, self.TYPE_GLUSTER],
+            "source_name": [self.TYPE_LOGICAL, self.TYPE_GLUSTER, self.TYPE_RBD],
             "source_dir" : [self.TYPE_GLUSTER, self.TYPE_NETFS],
-            "host": [self.TYPE_NETFS, self.TYPE_ISCSI, self.TYPE_GLUSTER],
+            "host": [self.TYPE_NETFS, self.TYPE_ISCSI, self.TYPE_GLUSTER, self.TYPE_RBD],
             "format": [self.TYPE_FS, self.TYPE_NETFS, self.TYPE_DISK],
             "iqn": [self.TYPE_ISCSI],
             "target_path" : [self.TYPE_DIR, self.TYPE_FS, self.TYPE_NETFS,
@@ -414,7 +416,8 @@ class StoragePool(_StorageObject):
         return self.type in [
             StoragePool.TYPE_DIR, StoragePool.TYPE_FS,
             StoragePool.TYPE_NETFS, StoragePool.TYPE_LOGICAL,
-            StoragePool.TYPE_DISK, StoragePool.TYPE_GLUSTER]
+            StoragePool.TYPE_DISK, StoragePool.TYPE_GLUSTER,
+            StoragePool.TYPE_RBD]
 
     def get_vm_disk_type(self):
         """

Comment 1 Erik Andersen 2014-03-08 17:01:50 UTC
Also, it appears there is some XML generation code in storage.py:

        if host:
            source_xml = "<source><host name='%s'/></source>" % host
        else:
            source_xml = "<source/>"


With Ceph, each monitor is a possible host, so it should handle more than one host (for High(er) availability). For example, the following is valid XML in a domain definition:

      <source protocol='rbd' name='libvirt-pool/X17-59186.iso'>
        <host name='192.168.7.2' port='6789'/>
        <host name='192.168.7.3' port='6789'/>
        <host name='192.168.7.4' port='6789'/>

The following is a pool definition from the documentation, http://libvirt.org/storage.html#StorageBackendRBD :

      <pool type="rbd">
        <name>myrbdpool</name>
        <source>
          <name>rbdpool</name>
            <host name='1.2.3.4' port='6789'/>
            <host name='my.ceph.monitor' port='6789'/>
            <host name='third.ceph.monitor' port='6789'/>
            <auth username='admin' type='ceph'>
              <secret uuid='2ec115d7-3a88-3ceb-bc12-0ac909a6fd87'/>
            </auth>
        </source>
      </pool>
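
To generate a <source> element like the one above, the tool would have to emit several <host> entries plus the <auth>/<secret> pair. A minimal sketch of that assembly in plain Python (independent of virtinst; the host names, port and secret UUID are just the placeholders from the example above):

    # Sketch only: build an rbd pool <source> with several monitor hosts
    # and Ceph auth.  All names, the port and the secret UUID are placeholders.
    import xml.etree.ElementTree as ET

    def build_rbd_source(source_name, monitors, auth_username, secret_uuid, port=6789):
        source = ET.Element("source")
        ET.SubElement(source, "name").text = source_name
        for mon in monitors:
            ET.SubElement(source, "host", name=mon, port=str(port))
        auth = ET.SubElement(source, "auth", username=auth_username, type="ceph")
        ET.SubElement(auth, "secret", uuid=secret_uuid)
        return ET.tostring(source, encoding="unicode")

    print(build_rbd_source("rbdpool",
                           ["1.2.3.4", "my.ceph.monitor", "third.ceph.monitor"],
                           "admin", "2ec115d7-3a88-3ceb-bc12-0ac909a6fd87"))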

Comment 2 Erik Andersen 2014-03-08 18:00:36 UTC
The way I actually run into the lack of Ceph support in the UI (in my setup, I just went in and defined the pool manually in the XML by hand) is by:

0. Set up a RBD storage pool, (see http://libvirt.org/storage.html#StorageBackendRBD)
1. Going to VMM.
2. (Connecting, if I'm not already connected, to a Qemu libvirt host).
3. Selecting an existing VM.
4. Clicking on the (information? has a blue circle with a lowercase 'i' icon) button to get the VM settings.
5. Clicking Add Hardware.
6. Selecting storage.
7. Picking "Select managed or other existing storage".
8. Clicking browse.
9. Selecting the already defined storage pool.

At this point, I would want to click "New Volume" to create a new volume on the ceph storage pool. I've tried tracing this down in storagebrowse.py, but I'm having a hard time finding where it decides that the button should be inactive.

Comment 3 Cole Robinson 2014-03-10 13:01:29 UTC
(In reply to Erik Andersen from comment #1)
> Also, it appears there is some XML generation code in storage.py:
> 
>         if host:
>             source_xml = "<source><host name='%s'/></source>" % host
>         else:
>             source_xml = "<source/>"
> 

That's only to call the FindPoolSources API, which isn't supported for RBD pools, so no need to worry about that.

> 
> With Ceph, each monitor is a possible host, so it should handle more than
> one host (for High(er) availability). For example, the following is valid
> XML in a domain definition:
> 
>       <source protocol='rbd' name='libvirt-pool/X17-59186.iso'>
>         <host name='192.168.7.2' port='6789'/>
>         <host name='192.168.7.3' port='6789'/>
>         <host name='192.168.7.4' port='6789'/>
> 
> The following is a pool definition from the documentation,
> http://libvirt.org/storage.html#StorageBackendRBD :
> 
>       <pool type="rbd">
>         <name>myrbdpool</name>
>         <source>
>           <name>rbdpool</name>
>             <host name='1.2.3.4' port='6789'/>
>             <host name='my.ceph.monitor' port='6789'/>
>             <host name='third.ceph.monitor' port='6789'/>
>             <auth username='admin' type='ceph'>
>               <secret uuid='2ec115d7-3a88-3ceb-bc12-0ac909a6fd87'/>
>             </auth>
>         </source>
>       </pool>

Yeah, the multi <host> and <auth> handling would require new UI. Not to mention all the secret API handling. So this will take a lot of work unfortunately.

(In reply to Erik Andersen from comment #2)
> The way I actually run into the lack of Ceph support in the UI (in my setup,
> I just went in and defined the pool manually in the XML by hand) is by:
> 
> 0. Set up a RBD storage pool, (see
> http://libvirt.org/storage.html#StorageBackendRBD)
> 1. Going to VMM.
> 2. (Connecting, if I'm not already connected, to a Qemu libvirt host).
> 3. Selecting an existing VM.
> 4. Clicking on the (information? has a blue circle with a lowercase 'i'
> icon) button to get the VM settings.
> 5. Clicking Add Hardware.
> 6. Selecting storage.
> 7. Picking "Select managed or other existing storage".
> 8. Clicking browse.
> 9. Selecting the already defined storage pool.
> 
> At this point, I would want to click "New Volume" to create a new volume on
> the ceph storage pool. I've tried tracing this down in storagebrowse.py, but
> I'm having a hard time finding where it decides that the button should be
> inactive.

There were some issues here that I just fixed upstream. But the 'new volume' button should only be clickable if virtinst/storage.py:StoragePool.supports_storage_creation lists rbd, which right now it doesn't. What does rbd storage volume XML look like? We should only allow it in the virt-manager UI if the current 'new volume' wizard will work in its current state.

Comment 4 Erik Andersen 2014-03-11 03:36:14 UTC
Here's an example of an rbd image in a domain definition serving as a hard disk. (I'm not sure if that is what you are looking for.)

    <disk type='network' device='disk'>
      <driver name='qemu'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='a0b534fe-cec7-4289-ab39-39dec1cbe74a'/>
      </auth>
      <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
        <host name='192.168.7.2' port='6789'/>
        <host name='192.168.7.3' port='6789'/>
        <host name='192.168.7.4' port='6789'/>
      </source>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

From http://libvirt.org/formatdomain.html#elementsDisks:

    <disk type='network'>
      <driver name="qemu" type="raw"/>
      <source protocol="rbd" name="image_name2">
        <host name="hostname" port="7000"/>
      </source>
      <target dev="hdd" bus="ide"/>
      <auth username='myuser'>
        <secret type='ceph' usage='mypassid'/>
      </auth>
    </disk>

Comment 5 Cole Robinson 2014-03-11 11:39:23 UTC
Thanks, how about an rbd storage volume? Something like

sudo virsh vol-list $RBD_POOL_NAME
sudo virsh vol-dumpxml $RBD_POOL_NAME $RBD_VOL_NAME

Comment 6 Erik Andersen 2014-03-11 16:49:11 UTC
So, I found out that the new volume option was greyed out because the pool was not started. I started the pool, and then it allowed me to click the new volume button.

That gives:
Error launching volume wizard: Unknown storage pool type: rbd

Error launching volume wizard: Unknown storage pool type: rbd

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/storagebrowse.py", line 281, in new_volume
    self.addvol = vmmCreateVolume(self.conn, pool)
  File "/usr/share/virt-manager/virtManager/createvol.py", line 42, in __init__
    self.vol_class = Storage.StoragePool.get_volume_for_pool(parent_pool.get_type())
  File "/usr/lib/python2.7/dist-packages/virtinst/Storage.py", line 290, in get_volume_for_pool
    pool_class = StoragePool.get_pool_class(pool_type)
  File "/usr/lib/python2.7/dist-packages/virtinst/Storage.py", line 269, in get_pool_class
    raise ValueError(_("Unknown storage pool type: %s" % ptype))
ValueError: Unknown storage pool type: rbd

(The virtual machines that I have apparently just connect directly to the rbd cluster without the pool, or something like that; sorry, I don't totally understand all the details of how this works. But I do have it working after manually editing the domain's XML and adding secrets/pools, though some of the steps I took may not have been necessary.)

libvirt-pool is the name of my rbd storage pool.


virsh # vol-list libvirt-pool
 Name                 Path                                    
------------------------------------------------------------------------------
 archlinux-2014.03.01-dual.iso libvirt-pool/archlinux-2014.03.01-dual.iso
 debian-7.4.0-kfreebsd-i386-netinst.iso libvirt-pool/debian-7.4.0-kfreebsd-i386-netinst.iso
 debian-kfreebsd.img  libvirt-pool/debian-kfreebsd.img        
 debian-update-7.1.0-amd64-DVD-1.iso libvirt-pool/debian-update-7.1.0-amd64-DVD-1.iso
 en_windows_8_1_x64_dvd_2707217.iso libvirt-pool/en_windows_8_1_x64_dvd_2707217.iso
 fedora-19.img        libvirt-pool/fedora-19.img              
 Fedora-Live-KDE-x86_64-19-1.iso libvirt-pool/Fedora-Live-KDE-x86_64-19-1.iso
 FreeBSD-10.0-RELEASE-amd64-memstick.img libvirt-pool/FreeBSD-10.0-RELEASE-amd64-memstick.img
 freebsd10.img        libvirt-pool/freebsd10.img              
 GhostBSD3.5-mate-i386.iso libvirt-pool/GhostBSD3.5-mate-i386.iso  
 linuxmint-16-cinnamon-dvd-32bit.iso libvirt-pool/linuxmint-16-cinnamon-dvd-32bit.iso
 NAS4Free-x64-LiveUSB-9.1.0.1.847.img libvirt-pool/NAS4Free-x64-LiveUSB-9.1.0.1.847.img
 new-libvirt-image    libvirt-pool/new-libvirt-image          
 nixos-graphical-13.10.35666.a92cc57-i686-linux.iso libvirt-pool/nixos-graphical-13.10.35666.a92cc57-i686-linux.iso
 openSUSE-12.3-DVD-x86_64.iso libvirt-pool/openSUSE-12.3-DVD-x86_64.iso
 opensuse.img         libvirt-pool/opensuse.img               
 PCBSD10.0-RELEASE-x64-DVD-USB-latest.iso libvirt-pool/PCBSD10.0-RELEASE-x64-DVD-USB-latest.iso
 turnkey-ejabberd-13.0-wheezy-amd64.iso libvirt-pool/turnkey-ejabberd-13.0-wheezy-amd64.iso
 X17-59186.iso        libvirt-pool/X17-59186.iso              



virsh # vol-dumpxml libvirt-pool fedora-19.img
error: failed to get pool 'fedora-19.img'
error: failed to get vol 'libvirt-pool', specifying --pool might help
error: Storage volume not found: no storage vol with matching path libvirt-pool

virsh # vol-dumpxml --pool libvirt-pool fedora-19.img
<volume type='network'>
  <name>fedora-19.img</name>
  <key>libvirt-pool/fedora-19.img</key>
  <source>
  </source>
  <capacity unit='bytes'>32212254720</capacity>
  <allocation unit='bytes'>32212254720</allocation>
  <target>
    <path>libvirt-pool/fedora-19.img</path>
    <format type='unknown'/>
    <permissions>
      <mode>00</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</volume>
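
For reference, what the "New Volume" wizard ultimately has to do is hand a small volume XML to the pool's volume-creation API. The equivalent operation with the libvirt Python bindings looks roughly like this (a sketch only; the connection URI, pool name, volume name and capacity are placeholders):

    # Rough sketch, not the virt-manager code path: create a volume on an
    # existing, started rbd pool.  Pool name, volume name and size are placeholders.
    import libvirt

    conn = libvirt.open("qemu:///system")
    pool = conn.storagePoolLookupByName("libvirt-pool")

    vol_xml = """
    <volume>
      <name>example-image</name>
      <capacity unit='bytes'>10737418240</capacity>
    </volume>
    """

    vol = pool.createXML(vol_xml, 0)
    print(vol.key())   # should print something like libvirt-pool/example-image
    conn.close()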

Comment 7 Erik Andersen 2014-03-11 16:54:41 UTC
If I take an existing VM and go to add a new drive (a CDROM in this case), now that I have the RBD pool started, it lets me select already existing images and click "Use Volume". It's just when I click Finish on adding the hardware that it gives an error:

Title: Input Error
Heading: Storage parameter error.
Cannot use storage '/home/erik/libvirt-pool/debian-7.4.0-kfreebsd-i386-netinst.iso': '/home/erik/libvirt-pool' is not managed on the remote host.

Comment 8 Alexander Korolev 2014-10-07 15:01:03 UTC
Any news on this one? To avoid the

Comment 9 Alexander Korolev 2014-10-07 15:03:56 UTC
Sorry for the truncated post. To avoid the complexity, authentication can be left out of the config, which would leave us with just:

    <disk type='network'>
      <driver name="qemu" type="raw"/>
      <source protocol="rbd" name="image_name2">
        <host name="hostname" port="7000"/>
      </source>
      <target dev="hdd" bus="ide"/>
    </disk>

Comment 10 Cole Robinson 2014-12-10 19:07:14 UTC
So upstream is better here but not complete.

- You can create rbd volumes on an rbd pool, no problem.
- You can create an rbd pool, but the UI doesn't support auth or multiple host names
- Selecting an rbd volume with the storagebrowser to attach to a VM does not work. We could make it work, but I actually think this is a libvirt problem: it doesn't expose a unique path for rbd volumes and instead just gives a kind of useless string. I plan on proposing libvirt patches to fix this
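
In other words, there is no local path to hand to QEMU for an rbd volume; to attach one, the UI has to combine the pool's <source> (protocol, ceph pool name, monitor hosts) with the volume name into a <disk type='network'> element like the one in comment 4. A rough sketch of that assembly in plain Python (not the actual fix; names and hosts are placeholders, auth/secret handling omitted):

    # Sketch only: build the <disk type='network'> XML needed to attach an
    # rbd volume, from the pool's source name, the volume name and the
    # monitor host list.
    import xml.etree.ElementTree as ET

    def build_rbd_disk(source_name, vol_name, monitors, port=6789):
        disk = ET.Element("disk", type="network", device="disk")
        ET.SubElement(disk, "driver", name="qemu", type="raw")
        source = ET.SubElement(disk, "source", protocol="rbd",
                               name="%s/%s" % (source_name, vol_name))
        for mon in monitors:
            ET.SubElement(source, "host", name=mon, port=str(port))
        ET.SubElement(disk, "target", dev="hda", bus="ide")
        return ET.tostring(disk, encoding="unicode")

    print(build_rbd_disk("libvirt-pool", "new-libvirt-image",
                         ["192.168.7.2", "192.168.7.3", "192.168.7.4"]))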

Comment 11 tonich.sh 2015-01-20 10:49:12 UTC
Created attachment 981791 [details]
a working solution

a working solution for selecting an rbd volume with the storagebrowser to attach to a VM.

Comment 12 Cole Robinson 2015-03-24 22:26:01 UTC
(In reply to tonich.sh from comment #11)
> Created attachment 981791 [details]
> a working solution
> 
> a working solution for selecting an rbd volume with the storagebrowser to
> attach to a VM.

hi, can you post your patch to virt-tools-list, along with an example pool and volume definition that this works with? Even if it's not perfect we can try to adapt it to something that's generally useful. Thanks!

Comment 13 tonich.sh 2015-03-26 17:24:17 UTC
OK, I will do that soon.

Comment 14 Alexander von Gluck IV 2017-06-13 20:50:50 UTC
A small test here locally to show current status. Fedora 25, virt-manager-1.4.1-2.fc25.noarch

Created libvirt-client user + secret per CEPH instructions:
  http://docs.ceph.com/docs/master/rbd/libvirt/

Configured CEPH + Secret in libvirt:

virsh # pool-list
 Name                 State      Autostart 
-------------------------------------------
 CEPH                 active     no        
 default              active     yes       


virsh # pool-dumpxml CEPH
<pool type='rbd'>
  <name>CEPH</name>
  <uuid>6ec1f937-1407-428b-9fa2-9776c0eb6ae1</uuid>
  <capacity unit='bytes'>5990446407680</capacity>
  <allocation unit='bytes'>232510153664</allocation>
  <available unit='bytes'>5952984363008</available>
  <source>
    <host name='10.80.199.10' port='6789'/>
    <name>vmpool</name>
    <auth type='ceph' username='libvirt'>
      <secret usage='client.libvirt secret'/>
    </auth>
  </source>
</pool>



Seems to be working fine and images in the pool are navigable... however, I'm unable to select them for VMs in virt-manager (see the attached screenshots). It seems like it is giving virt-manager a local path.

Comment 15 Alexander von Gluck IV 2017-06-13 20:51:17 UTC
Created attachment 1287450 [details]
Test, selecting CEPH image

Comment 16 Alexander von Gluck IV 2017-06-13 20:51:50 UTC
Created attachment 1287451 [details]
Test, Invalid CEPH image location per virt-manager

Comment 17 Alexander von Gluck IV 2017-06-13 20:55:32 UTC
Small side note: "Provide the existing storage path:" contains vmpool/csbm01a (my screenshots missed that).

That definitely doesn't seem right... it looks like that blank location is getting prepended?

I normally provide qemu -hda rbd:vmpool/csbm01a to launch a VM from the remote CEPH cluster on 10.80.199.10.

Comment 18 tonich.sh 2017-12-05 10:51:02 UTC
Rebased to current master branch:

https://github.com/tonich-sh/virt-manager/tree/ceph-volume-attach

Comment 19 Cole Robinson 2020-01-26 17:55:51 UTC
As far as I know the current 2.2.0 release should have all issues resolved here, so closing as CURRENTRELEASE.

If anyone is still hitting issues using existing ceph/rbd volumes, please open a new bug and we can follow up there.

