Vijay, I initially thought VDSM could inspect /proc/filesystems in order to deduce if it has glusterfs installed, but it doesn't seem to be there (using upstream glusterfs 3.6.2-1). Is this the intended behavior, or is this just the wrong way to check?
(In reply to Allon Mureinik from comment #1)
> Vijay, I initially thought VDSM could inspect /proc/filesystems in order to
> deduce if it has glusterfs installed, but it doesn't seem to be there (using
> upstream glusterfs 3.6.2-1).
>
> Is this the intended behavior, or is this just the wrong way to check?

This indeed is intended behavior as gluster is in userspace. Maybe a good way to check would be through rpm -qa | grep glusterfs-fuse/glusterfs-api depending on the storage domain integration in use.
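As a rough illustration of that suggestion (this is only a sketch, it assumes an rpm-based host, and the helper name and default package list are made up here, not anything VDSM actually ships):

    import subprocess

    def any_gluster_package_installed(packages=("glusterfs-fuse", "glusterfs-api")):
        # "rpm -q <name>" exits with status 0 only if the package is installed.
        for pkg in packages:
            result = subprocess.run(["rpm", "-q", pkg],
                                    stdout=subprocess.DEVNULL,
                                    stderr=subprocess.DEVNULL)
            if result.returncode == 0:
                return True
        return False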
(In reply to Vijay Bellur from comment #2)
> (In reply to Allon Mureinik from comment #1)
> > Vijay, I initially thought VDSM could inspect /proc/filesystems in order to
> > deduce if it has glusterfs installed, but it doesn't seem to be there (using
> > upstream glusterfs 3.6.2-1).
> >
> > Is this the intended behavior, or is this just the wrong way to check?
>
> This indeed is intended behavior as gluster is in userspace. Maybe a good
> way to check would be through rpm -qa | grep glusterfs-fuse/glusterfs-api
> depending on the storage domain integration in use.

What about distros that don't have a concept of RPMs, and also dev environments? Is there a way to do it without RPM-specific checks?
We can check if the gluster command-line tool is available: if we can run gluster and get its version, we can assume that gluster is installed. Or, simpler, we can require the glusterfs client tools on the host.
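A minimal sketch of that CLI-based check (the helper name is made up; it only assumes that a working gluster binary prints its version and exits successfully when run with --version):

    import subprocess

    def gluster_cli_available():
        # If the gluster CLI runs and reports a version, assume gluster is installed.
        try:
            out = subprocess.run(["gluster", "--version"],
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.DEVNULL,
                                 check=True).stdout
        except (OSError, subprocess.CalledProcessError):
            return False
        return out.startswith(b"glusterfs")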
Are the 3.7 gluster client packages planned to be in the regular RHEL 7 channels?
The glusterfs-cli package is available in the RHEL optional channel.
(In reply to Sahina Bose from comment #7)
> The glusterfs-cli package is available in the RHEL optional channel.

Sahina: But it is not available on ppc - right?

Yaniv: Vdsm already has a cross-platform way to report installed packages, using either the rpm or apt python bindings.
(In reply to Nir Soffer from comment #8)
> (In reply to Sahina Bose from comment #7)
> > The glusterfs-cli package is available in the RHEL optional channel.
>
> Sahina: But it is not available on ppc - right?

Right, it is not available on ppc.

> Yaniv: Vdsm already has a cross-platform way to report installed packages,
> using either the rpm or apt python bindings.
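As a rough sketch of the rpm-bindings approach mentioned above (this is not VDSM's actual code; it only assumes the rpm Python bindings are installed, and a python-apt equivalent would be needed on Debian-based hosts):

    import rpm  # rpm Python bindings

    def package_installed(name):
        # Query the local rpm database for any package with the given name.
        ts = rpm.TransactionSet()
        return len(list(ts.dbMatch("name", name))) > 0

    # e.g. package_installed("glusterfs-cli")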
Ala, please add the required doctext to this bug.
Testing using 3.6.0-11

The option is not greyed out as Yaniv asked. Instead I got the error message below (which can work):

"Error while executing action: Cannot add Storage Connection. Host camel-vdsc.qa.lab.tlv.redhat.com cannot connect to Glusterfs. Verify that glusterfs-cli package is installed on the host."

Yaniv - are we good with it, or do you want it greyed out only?
Please be aware of https://bugzilla.redhat.com/show_bug.cgi?id=1260754. I think it is related to this fix...
(In reply to Aharon Canan from comment #11)
> Testing using 3.6.0-11
>
> The option is not greyed out as Yaniv asked. Instead I got the error
> message below (which can work):
>
> "Error while executing action: Cannot add Storage Connection. Host
> camel-vdsc.qa.lab.tlv.redhat.com cannot connect to Glusterfs. Verify that
> glusterfs-cli package is installed on the host."
>
> Yaniv - are we good with it, or do you want it greyed out only?

Yaniv - if we want it greyed out, please reopen. Marking as verified since we have an informative message.
Is it the same as in the CEPH package testing?
(In reply to Yaniv Dary from comment #14)
> Is it the same as in the CEPH package testing?

I don't know, I didn't test the ceph package yet.
I understand from derez that behavior is the same
(In reply to Ala Hino from comment #16)
> I understand from derez that behavior is the same

This solution is acceptable to me; we will see if any customers complain about this, and we can change it if needed.
(In reply to Yaniv Dary from comment #17)
> (In reply to Ala Hino from comment #16)
> > I understand from derez that behavior is the same
>
> This solution is acceptable to me; we will see if any customers complain
> about this, and we can change it if needed.

So why not do it right from the beginning, instead of waiting for customers to complain and then doing it again?
(In reply to Aharon Canan from comment #18)
> (In reply to Yaniv Dary from comment #17)
> > (In reply to Ala Hino from comment #16)
> > > I understand from derez that behavior is the same
> >
> > This solution is acceptable to me; we will see if any customers complain
> > about this, and we can change it if needed.
>
> So why not do it right from the beginning, instead of waiting for customers
> to complain and then doing it again?

Doing it right is not a science. I think this should cover the requirement, and I don't expect customer issues on this. If there are, we will consider changing the flow.
(In reply to Yaniv Dary from comment #19)
> (In reply to Aharon Canan from comment #18)
> > (In reply to Yaniv Dary from comment #17)
> > > (In reply to Ala Hino from comment #16)
> > > > I understand from derez that behavior is the same
> > >
> > > This solution is acceptable to me; we will see if any customers complain
> > > about this, and we can change it if needed.
> >
> > So why not do it right from the beginning, instead of waiting for customers
> > to complain and then doing it again?
>
> Doing it right is not a science. I think this should cover the requirement,
> and I don't expect customer issues on this. If there are, we will consider
> changing the flow.

Note that greying out functionality is not very common in the GUI. In this specific case, when creating a new storage domain, a new popup is opened and the admin picks the storage type from a drop-down. Do we expect the drop-down content to change dynamically based on one condition or another? That could be a killer if done every time the admin opens the drop-down. It is not simply a matter of greying out the "New Domain" button when glusterfs packages are not installed.
oVirt 3.6.0 was released on November 4th, 2015 and should fix this issue. If problems still persist, please open a new BZ and reference this one.