Bug 1566548 - virt-manager can't handle using glusterfs as 'default' pool
Summary: virt-manager can't handle using glusterfs as 'default' pool
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Virtualization Tools
Classification: Community
Component: virt-manager
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Cole Robinson
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-12 13:48 UTC by Julen Larrucea
Modified: 2020-01-26 21:15 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-26 21:15:45 UTC
Embargoed:



Description Julen Larrucea 2018-04-12 13:48:31 UTC
Description of problem:
I created a gluster storage pool in virt-manager, but I cannot create new virtual disks on it.
I can browse the contents of the gluster volume and load an ISO file from it; only the creation of the disk fails.


Version-Release number of selected component (if applicable):
CentOS Linux release 7.4.1708 (Core)
virt-manager-1.4.1-7.el7.noarch
virt-manager-common-1.4.1-7.el7.noarch
virt-viewer-5.0-7.el7.x86_64
libvirt-3.2.0-14.el7_4.9.x86_64
qemu-system-x86-2.0.0-1.el7.6.x86_64
qemu-img-1.5.3-141.el7_4.6.x86_64
qemu-kvm-1.5.3-141.el7_4.6.x86_64


How reproducible:

Steps to Reproduce:
1. yum -y install kvm libvirt qemu-kvm libvirt-python virt-manager virt-viewer virt-install

2. Make sure that the gluster volume has the right permissions
  gluster volume set gkvms server.allow-insecure on
  ## Adjust ownership to qemu user
  gluster volume set gkvms storage.owner-uid 107 
  gluster volume set gkvms storage.owner-gid 107

3. Create a gluster pool (I removed the default one and called this one "default"); see the command-line sketch after this list

4. Try to create a new VM and click on "Forward" on the 4th step (Create a disk image for the virtual machine)
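
For reference, a rough command-line equivalent of step 3. This is only a sketch: the host name, path and volume name are the ones from the pool XML shown later in this report, and the file name is arbitrary.

cat > /tmp/gluster-pool.xml <<'EOF'
<pool type='gluster'>
  <name>default</name>
  <source>
    <host name='my-server.example.com'/>
    <dir path='/VMs'/>
    <name>kvms</name>
  </source>
</pool>
EOF
virsh pool-define /tmp/gluster-pool.xml
virsh pool-start default
virsh pool-autostart default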



Actual results:

When clicking "Forward" on step 4 of 5 of "Create a new virtual machine", the following error message pops up:

Uncaught error validating install parameters: 'NoneType' object has no attribute 'endswith'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/create.py", line 1697, in _validate
    return self._validate_storage_page()
  File "/usr/share/virt-manager/virtManager/create.py", line 1963, in _validate_storage_page
    self._guest.name, do_log=True)
  File "/usr/share/virt-manager/virtManager/create.py", line 1955, in _get_storage_path
    path = self._addstorage.get_default_path(vmname)
  File "/usr/share/virt-manager/virtManager/addstorage.py", line 238, in get_default_path
    path = os.path.join(target, path)
  File "/usr/lib64/python2.7/posixpath.py", line 77, in join
    elif path == '' or path.endswith('/'):
AttributeError: 'NoneType' object has no attribute 'endswith'



Expected results:

The virtual disk should be created on the gluster volume.

Additional info:

> virsh pool-info default
Name:           default
UUID:           80061946-963c-4afc-823a-fea6d4c15241
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       3,21 TiB
Allocation:     8,38 GiB
Available:      3,20 TiB


> virsh pool-dumpxml default
<pool type='gluster'>
  <name>default</name>
  <uuid>80061946-963c-4afc-823a-fea6d4c15241</uuid>
  <capacity unit='bytes'>3528313929728</capacity>
  <allocation unit='bytes'>8996388864</allocation>
  <available unit='bytes'>3519317540864</available>
  <source>
    <host name='my-server.example.com'/>
    <dir path='/VMs'/>
    <name>kvms</name>
  </source>
</pool>


> qemu-img create -f qcow2 gluster://my-server.example.com/kvms/VMs/test.qcow2 1G
Formatting 'gluster://my-server.example.com/kvms/VMs/test.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 lazy_refcounts=off debug=0 
[2018-04-12 13:27:55.579560] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 0-kvms-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2018-04-12 13:27:56.682912] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 0-kvms-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2018-04-12 13:27:57.593471] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 0-kvms-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.

> virsh vol-list default
# Does not show the image, but by mounting the volume with "mount" I can see that it was actually created.
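# Untested guess: libvirt usually only notices volumes created outside of it
# after the pool is rescanned, so the image might show up after a refresh:
  virsh pool-refresh default
  virsh vol-list default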

# Additional info:
- The gluster bricks on each gluster node are created on ZFS volumes.
- I can import VM images that are already in the pool (copied there with "cp"), and they work fine.

Comment 1 Cole Robinson 2019-06-16 16:32:41 UTC
I don't have a setup to test, but this still appears to be relevant. There are assumptions everywhere that the default pool is just a dir on the filesystem. Properly fixing it will take some work.
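
A minimal way to illustrate that dir-only assumption (a guess at the mechanism, based on the traceback in the description, not a verified analysis): a 'gluster' pool reports no local <target><path>, so the target path that the new disk name gets joined onto is None, and os.path.join(None, ...) on Python 2.7 fails with exactly the reported error.

  # the gluster pool XML in the description has a <source> but no <target><path>
  virsh pool-dumpxml default | grep '<target>'
  # reproduces the last line of the traceback
  python2 -c "import os; os.path.join(None, 'newdisk.qcow2')"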

Comment 2 Cole Robinson 2020-01-26 21:15:45 UTC
IMO this is a fairly advanced use case and I don't expect virt-install/virt-manager to ever really play well here. You can kinda trick things by mounting gluster to a local directory and then pointing the default pool at that as a 'dir' pool. I know it's not the same, but in truth I think that's realistically the best you'll get out of virt-manager/virt-install given available dev resources.
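
A rough sketch of that workaround (the mount point below is made up; the gluster host and volume names are the ones from this report, and the existing gluster 'default' pool would have to be removed first):

  # mount the gluster volume locally (needs glusterfs-fuse; mount point is arbitrary)
  mkdir -p /var/lib/libvirt/gluster-images
  mount -t glusterfs my-server.example.com:/kvms /var/lib/libvirt/gluster-images
  # point a plain 'dir' pool named 'default' at the mounted directory
  virsh pool-define-as default dir --target /var/lib/libvirt/gluster-images
  virsh pool-start default
  virsh pool-autostart default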

Closing as DEFERRED. If someone shows up with patches, though, I'd be happy to help review them.

