Description of problem:

Summary of a problem reported by Joop on the users mailing list: something goes wrong in vdsm/storage/glusterVolume.py. The volume name in getVmVolumeInfo ends up containing double underscores, and svdsmProxy.glusterVolumeInfo is then called, which in the end invokes a cli script via supervdsmd. That call returns an empty xml document because no volume with double underscores exists. Running the command logged in supervdsm.log confirms this; reducing the volume name to single underscores returns a correct xml object.

In my case:
Real path entered during setup: st01:gv_ovirt_data01
What's used: st01:gv__ovirt__data01

Version-Release number of selected component (if applicable):

How reproducible:
Always

Additional info:
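For context, a minimal sketch of where the doubled underscores likely come from, assuming the remote storage path is escaped when building the local mount point name (a literal '_' doubled to '__', '/' mapped to '_'). The function names below are illustrative placeholders, not the actual vdsm helpers:

    # Sketch of the suspected escaping. Assumption: the double underscores come
    # from the remote-path-to-local-mount-point transformation, where '_' is
    # escaped to '__' and '/' is mapped to '_'. Hypothetical helper names.

    def escape_remote_path(remote_path):
        # "st01:gv_ovirt_data01" -> "st01:gv__ovirt__data01"
        return remote_path.replace('_', '__').replace('/', '_')

    def unescape_volume_name(escaped_name):
        # Reverse the escaping before handing the name to the gluster cli,
        # otherwise 'gluster volume info' is asked for a non-existent volume.
        return escaped_name.replace('_', '/').replace('//', '_')

    if __name__ == '__main__':
        escaped = escape_remote_path("st01:gv_ovirt_data01")
        print(escaped)                         # st01:gv__ovirt__data01
        print(unescape_volume_name(escaped))   # st01:gv_ovirt_data01

If the escaped form is passed to the gluster cli unreversed, the symptom described above (an empty xml document for gv__ovirt__data01) is exactly what one would expect.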
Created attachment 1106600 [details]
vdsm log showing the error
Version of vdsm used: vdsm-4.17.0-33.git92f3d7f.el7ev.x86_64

Text of the original post to users at ovirt.org @ 14-12-2015 12:00:

I have reinstalled my test environment and have come across an old error, see BZ 988299, "Bad volume specification {u'index': 0,". At the end of that BZ a problem with '_' in the volume name is mentioned, along with a patch, but the code has since changed quite a bit and I can't tell if that patch still applies. It looks like it doesn't, because I have a gluster volume named gv_ovirt_data01 which appears to get translated to gv__ovirt__data01, and then I can't start any VMs. The weird thing is that I CAN import VMs from the export domain to this gluster domain.

Followup mail:

I have just done the following on 2 servers which also hold the volumes with '_' in them:

mkdir -p /gluster/br-ovirt-data02
ssm -f create -p vg_`hostname -s` --size 10G --name lv-ovirt-data02 --fstype xfs /gluster/br-ovirt-data02
echo /dev/mapper/vg_`hostname -s`-lv-ovirt-data02 /gluster/br-ovirt-data02 xfs defaults 1 2 >>/etc/fstab
semanage fcontext -a -t glusterd_brick_t /gluster/br-ovirt-data02
restorecon -Rv /gluster/br-ovirt-data02
mkdir /gluster/br-ovirt-data02/gl-ovirt-data02
chown -R 36:36 /gluster/

Then I added a replicated volume on top of the above, started it, added a Storage Domain using that volume, moved a disk to it, and started the VM: it works!
In vdsm-4.16.X it isn't a problem, because I have one hosted-engine setup with that version and an underscore in the volume name. The command used to check for gluster volumes is different, though. In vdsm-4.16.X it is:

/usr/sbin/gluster --mode=script volume info --xml

and in vdsm-4.17.X it is:

/usr/sbin/gluster --mode=script volume info --remote-host=st01.nieuwland.nl gv__ovirt__data01 --xml
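As a quick check, here is a minimal sketch (assuming gluster is installed locally; the host and volume names are just the placeholders from this report) that runs the same command seen in supervdsm.log and reports whether the returned xml actually describes a volume:

    #!/usr/bin/env python
    # Verification sketch: run the 'gluster volume info' command from this
    # report and check whether the returned XML contains any <volume> element.
    # Host/volume names are placeholders taken from the report, not read from
    # any configuration.
    import subprocess
    import xml.etree.ElementTree as ET

    def volume_exists(volname, remote_host=None):
        cmd = ['/usr/sbin/gluster', '--mode=script', 'volume', 'info']
        if remote_host:
            cmd.append('--remote-host=%s' % remote_host)
        cmd += [volname, '--xml']
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        out, _ = proc.communicate()
        try:
            root = ET.fromstring(out)
        except ET.ParseError:
            return False
        # Per this report, an unknown volume yields an (effectively empty)
        # XML document with no <volume> elements.
        return len(root.findall('.//volume')) > 0

    if __name__ == '__main__':
        for name in ('gv_ovirt_data01', 'gv__ovirt__data01'):
            print('%s exists: %s' % (name,
                  volume_exists(name, remote_host='st01.nieuwland.nl')))

Running this should show the single-underscore name resolving while the double-underscore name does not, matching the behaviour described above.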
Hi Joop,

The code that fails seems to be related to a patch (https://gerrit.ovirt.org/44061) that isn't merged to any branch. In fact, this patch was neither tested nor verified. Basically, there should be no issue with volume names that include underscores.

Is the vdsm-4.17.X build that you tried a private build? Can you try the same scenarios using a stable branch?

Thanks!
It looks like a private build was indeed used. I can't replicate it at the moment, so this can be closed.

Joop