Description of problem:
I created a gluster storage pool in virt-manager, but I cannot create new virtual disks on it.
I can browse the contents of the gluster volume and load an ISO file from it; only the creation of a new disk fails.
Version-Release number of selected component (if applicable):
CentOS Linux release 7.4.1708 (Core)
Steps to Reproduce:
1. yum -y install kvm libvirt qemu-kvm libvirt-python virt-manager virt-viewer virt-install
2. Make sure that the gluster volume has the right permissions
gluster volume set gkvms server.allow-insecure on
# Adjust ownership to the qemu user
gluster volume set gkvms storage.owner-uid 107
gluster volume set gkvms storage.owner-gid 107
3. Create a gluster pool (I removed the default one and called this one "default")
4. Try to create a new VM and click on "Forward" on the 4th step (Create a disk image for the virtual machine)
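For reference, the gluster pool in step 3 could be defined with XML along these lines (the host name and volume name `gkvms` are assumptions taken from the commands above; note the gluster logs below mention a volume `kvms`) via `virsh pool-define`:

```xml
<!-- Hypothetical gluster pool definition; host and volume name
     are assumptions based on the reproduction steps above. -->
<pool type='gluster'>
  <name>default</name>
  <source>
    <host name='my-server.example.com'/>
    <name>gkvms</name>
    <dir path='/'/>
  </source>
</pool>
```

Note that unlike a `dir` pool, this pool type has no `<target><path>` element, which matters for the traceback below.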
Actual results:
When clicking "Forward" on step 4 of 5 of "Create a new virtual machine", this error message pops up:
Uncaught error validating install parameters: 'NoneType' object has no attribute 'endswith'
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/create.py", line 1697, in _validate
File "/usr/share/virt-manager/virtManager/create.py", line 1963, in _validate_storage_page
File "/usr/share/virt-manager/virtManager/create.py", line 1955, in _get_storage_path
path = self._addstorage.get_default_path(vmname)
File "/usr/share/virt-manager/virtManager/addstorage.py", line 238, in get_default_path
path = os.path.join(target, path)
File "/usr/lib64/python2.7/posixpath.py", line 77, in join
elif path == '' or path.endswith('/'):
AttributeError: 'NoneType' object has no attribute 'endswith'
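The failure can be sketched in isolation: a minimal, hypothetical simplification of `addstorage.get_default_path()`, assuming (as the traceback suggests) that virt-manager joins the pool's local target directory with a generated file name. A `dir` pool has a target path; a gluster pool reports `None`, and the join blows up.

```python
import os

def get_default_path(pool_target_path, vmname):
    # Hypothetical simplification of virt-manager's
    # addstorage.get_default_path(): join the pool's local target
    # directory with a generated disk file name.
    return os.path.join(pool_target_path, vmname + ".qcow2")

# A directory pool has a filesystem target, so this works:
print(get_default_path("/var/lib/libvirt/images", "test"))
# -> /var/lib/libvirt/images/test.qcow2

# A gluster pool reports no local target path (None), so the join
# fails: AttributeError on Python 2 (as in the traceback above),
# TypeError on Python 3.
try:
    get_default_path(None, "test")
except (AttributeError, TypeError) as exc:
    print("join failed:", exc)
```

In other words, the code assumes every pool is backed by a local directory; nothing in this path handles network-backed pools.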
Expected results:
The virtual disk should be created on the gluster volume.
> virsh pool-info default
Capacity: 3,21 TiB
Allocation: 8,38 GiB
Available: 3,20 TiB
> virsh pool-dumpxml default
> qemu-img create -f qcow2 gluster://my-server.example.com/kvms/VMs/test.qcow2 1G
Formatting 'gluster://my-server.example.com/kvms/VMs/test.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 lazy_refcounts=off debug=0
[2018-04-12 13:27:55.579560] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 0-kvms-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2018-04-12 13:27:56.682912] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 0-kvms-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2018-04-12 13:27:57.593471] E [MSGID: 108006] [afr-common.c:5006:__afr_handle_child_down_event] 0-kvms-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
> virsh vol-list default
# Does not show the image, but by mounting the volume with "mount" I can see that it was actually created.
Additional info:
- The gluster bricks on each gluster node are created on ZFS volumes.
- I can import VM images that are already on the pool (copied there with "cp"), and they work fine.
I don't have a setup to test, but this appears to still be relevant. There are assumptions everywhere that the default pool is just a dir on the filesystem. Properly fixing it will take some work.
IMO this is a fairly advanced use case and I don't expect virt-install/virt-manager to ever really play well here. You can kinda trick things by mounting gluster to a local directory, and then pointing the default pool at that as a 'dir' pool. I know it's not the same, but in truth I think that's realistically the best you'll get out of virt-manager/virt-install given available dev resources.
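That workaround could look like this (the mount point `/mnt/gkvms` and volume name are assumptions): mount the volume with the glusterfs FUSE client, e.g. `mount -t glusterfs my-server.example.com:/gkvms /mnt/gkvms`, then define a plain `dir` pool over the mount point:

```xml
<!-- Hypothetical dir pool over a FUSE-mounted gluster volume;
     the mount point path is an assumption. -->
<pool type='dir'>
  <name>default</name>
  <target>
    <path>/mnt/gkvms</path>
  </target>
</pool>
```

Since the pool then has a real `<target><path>`, virt-manager's default-path logic works as for any local directory.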
Closing as DEFERRED. If someone shows up with patches though I'd be happy to help review