Created attachment 1071070 [details]
Logs01

Description of problem:
Creating a new gluster storage domain is blocked with a CDA (below), although the glusterfs-cli package is installed on the host.

From the GUI:
=========
Error while executing action: Cannot add Storage Connection.
Host camel-vdsc.qa.lab.tlv.redhat.com cannot connect to Glusterfs. Verify that glusterfs-cli package is installed on the host.

From the engine log:
===============
2015-09-07 15:27:42,977 WARN  [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (ajp-/127.0.0.1:8702-8) [4c1fc26c] CanDoAction of action 'AddStorageServerConnection' failed for user admin@internal. Reasons: VAR__ACTION__ADD,VAR__TYPE__STORAGE__CONNECTION,ACTION_TYPE_FAIL_VDS_CANNOT_CONNECT_TO_GLUSTERFS,$VdsName camel-vdsc.qa.lab.tlv.redhat.com

Version-Release number of selected component (if applicable):
3.6.0-11

How reproducible:
100%

Steps to Reproduce:
1. Create a new glusterfs storage domain

Actual results:
CDA

Expected results:
The new gluster domain should be created

Additional info:
Mounting manually works fine:
10.35.160.6:/acanan01 on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@camel-vdsc ~]# rpm -qa | grep gluster
glusterfs-libs-3.7.1-11.el7.x86_64
glusterfs-fuse-3.7.1-11.el7.x86_64
glusterfs-3.7.1-11.el7.x86_64
glusterfs-api-3.7.1-11.el7.x86_64
glusterfs-devel-3.7.1-11.el7.x86_64
glusterfs-rdma-3.7.1-11.el7.x86_64
glusterfs-cli-3.7.1-11.el7.x86_64
glusterfs-client-xlators-3.7.1-11.el7.x86_64
glusterfs-api-devel-3.7.1-11.el7.x86_64
Please add the engine and VDSM rpm versions.
[root@camel-vdsc ~]# rpm -qa | grep vdsm
vdsm-xmlrpc-4.17.5-1.el7ev.noarch
vdsm-cli-4.17.5-1.el7ev.noarch
vdsm-infra-4.17.5-1.el7ev.noarch
vdsm-yajsonrpc-4.17.5-1.el7ev.noarch
vdsm-python-4.17.5-1.el7ev.noarch
vdsm-jsonrpc-4.17.5-1.el7ev.noarch
vdsm-4.17.5-1.el7ev.noarch

[root@elad-rhevm3 ~]# rpm -qa | grep rhev
rhevm-image-uploader-3.6.0-0.2.master.git6846716.el6ev.noarch
rhevm-dependencies-3.6.0-0.0.1.master.el6ev.noarch
rhevm-tools-3.6.0-0.13.master.el6.noarch
rhevm-dbscripts-3.6.0-0.13.master.el6.noarch
rhevm-websocket-proxy-3.6.0-0.13.master.el6.noarch
rhevm-vmconsole-proxy-helper-3.6.0-0.13.master.el6.noarch
rhevm-spice-client-x64-msi-3.6-3.el6.noarch
rhevm-lib-3.6.0-0.13.master.el6.noarch
rhevm-iso-uploader-3.6.0-0.2.alpha.gitea4158a.el6ev.noarch
rhevm-setup-base-3.6.0-0.13.master.el6.noarch
rhevm-branding-rhev-3.6.0-0.0.master.20150824191211.el6ev.noarch
redhat-support-plugin-rhev-3.6.0-8.el6.noarch
rhevm-setup-plugin-vmconsole-proxy-helper-3.6.0-0.13.master.el6.noarch
rhevm-spice-client-x86-cab-3.6-3.el6.noarch
rhevm-userportal-3.6.0-0.13.master.el6.noarch
rhevm-3.6.0-0.13.master.el6.noarch
rhevm-sdk-python-3.6.0.0-1.el6ev.noarch
rhevm-cli-3.6.0.0-1.el6ev.noarch
rhevm-setup-plugin-ovirt-engine-common-3.6.0-0.13.master.el6.noarch
rhevm-doc-3.6.0-1.el6eng.noarch
rhevm-setup-3.6.0-0.13.master.el6.noarch
rhevm-setup-plugin-ovirt-engine-3.6.0-0.13.master.el6.noarch
rhevm-spice-client-x64-cab-3.6-3.el6.noarch
rhevm-webadmin-portal-3.6.0-0.13.master.el6.noarch
rhevm-extensions-api-impl-3.6.0-0.13.master.el6.noarch
rhevm-setup-plugin-websocket-proxy-3.6.0-0.13.master.el6.noarch
rhevm-setup-plugins-3.6.0-0.4.master.20150811110743.el6ev.noarch
rhevm-spice-client-x86-msi-3.6-3.el6.noarch
rhevm-log-collector-3.6.0-0.4.beta.gitf27fdb6.el6ev.noarch
rhev-guest-tools-iso-3.5-9.el6ev.noarch
rhevm-restapi-3.6.0-0.13.master.el6.noarch
rhevm-backend-3.6.0-0.13.master.el6.noarch
After putting the host into maintenance and re-activating it, I was able to add the gluster volume.

I think glusterfs-cli was installed after the host was added to the engine and was already up and running. This could explain why the engine was not aware that glusterfs-cli was installed.

Basically, we check whether glusterfs-cli is installed based on the caps we get from the host. When activating a storage domain, we don't get updated caps from the host.
I'd expect the admin to put the host into maintenance before installing the glusterfs-cli package. Yaniv, what do you think?
(In reply to Ala Hino from comment #4)
> After putting the host into maintenance and re-activating it, I was able to
> add gluster volume.
>
> I think glusterfs-cli added after the host added to the engine and was up
> and running. This could explain why the engine was not aware that
> gluserfs-cli was installed.
>
> Basically, we check whether glusterfs-cli is installed based on caps we get
> from host. When activating a storage domain, we don't get updated caps from
> the host.
> I'd expect admin to put host into maintenance before installing
> glusterfs-cli package. Yaniv, what do you think?

This is a requirement - updating packages on the host while the engine thinks it is active can have unforeseeable problems, and is not supported.

Ala - just to be on the safe side, please add an explicit note about this to the RFE in bz 922744. Other than that, there is nothing we can do.
Ack, updating packages requires putting the host into maintenance and re-activating it.
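The behavior discussed in the comments above can be sketched as a minimal model: the engine validates gluster connections against host capabilities cached at activation time, so a package installed while the host is active stays invisible until a maintenance/re-activate cycle. This is an illustrative sketch only; the class and method names (`Host`, `Engine`, `can_add_gluster_connection`) are hypothetical and not the actual oVirt engine code:

```python
class Host:
    """Live state on the host itself."""

    def __init__(self, installed_packages):
        self.installed_packages = set(installed_packages)

    def get_caps(self):
        # Capabilities are reported to the engine only during activation.
        return {"packages": set(self.installed_packages)}


class Engine:
    """Caches host caps; CanDoAction checks use the cache, not live state."""

    def __init__(self):
        self.cached_caps = {}

    def activate(self, name, host):
        # Caps are fetched and cached when the host is (re-)activated.
        self.cached_caps[name] = host.get_caps()

    def can_add_gluster_connection(self, name):
        # Mirrors the CDA check: glusterfs-cli must appear in the cached caps.
        return "glusterfs-cli" in self.cached_caps[name]["packages"]


engine = Engine()
host = Host(installed_packages=["glusterfs", "glusterfs-fuse"])
engine.activate("camel-vdsc", host)

# glusterfs-cli installed while the host is active: the engine cache is stale,
# so the check fails even though the package is really there (the CDA above).
host.installed_packages.add("glusterfs-cli")
stale = engine.can_add_gluster_connection("camel-vdsc")   # False

# Maintenance + re-activation refreshes the cached caps and the check passes.
engine.activate("camel-vdsc", host)
fresh = engine.can_add_gluster_connection("camel-vdsc")   # True
```

This also shows why a manual mount succeeds while the engine still refuses: the mount exercises the host's real state, while the CDA consults only the stale cache.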