Bug 1260754 - Can't create new Gluster storage domain although gluster-cli pkg installed unless host is put to maintenance and re-activated
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ovirt-3.6.3
Target Release: 3.6.0
Assignee: Ala Hino
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On:
Blocks: 1254499 1258205
 
Reported: 2015-09-07 15:56 UTC by Aharon Canan
Modified: 2016-03-10 06:25 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-09-08 09:48:06 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Attachments
Logs01 (1.81 MB, application/x-gzip)
2015-09-07 15:56 UTC, Aharon Canan

Description Aharon Canan 2015-09-07 15:56:23 UTC
Created attachment 1071070 [details]
Logs01

Description of problem:
Creating a new Gluster storage domain is blocked by the CanDoAction (CDA) validation below, even though the glusterfs-cli package is installed on the host.

From the GUI
============
Error while executing action: Cannot add Storage Connection. Host camel-vdsc.qa.lab.tlv.redhat.com cannot connect to Glusterfs. Verify that glusterfs-cli package is installed on the host.

From the engine log
===================
2015-09-07 15:27:42,977 WARN  [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (ajp-/127.0.0.1:8702-8) [4c1fc26c] CanDoAction of action 'AddStorageServerConnection' failed for user admin@internal. Reasons: VAR__ACTION__ADD,VAR__TYPE__STORAGE__CONNECTION,ACTION_TYPE_FAIL_VDS_CANNOT_CONNECT_TO_GLUSTERFS,$VdsName camel-vdsc.qa.lab.tlv.redhat.com

Version-Release number of selected component (if applicable):
3.6.0-11

How reproducible:
100%

Steps to Reproduce:
1. Create a new GlusterFS storage domain

Actual results:
The CDA validation error shown above; the storage domain is not created.

Expected results:
The new Gluster storage domain should be created.

Additional info:
Mounting the volume manually from the host works fine (below):
10.35.160.6:/acanan01 on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

Comment 1 Aharon Canan 2015-09-07 16:00:04 UTC
[root@camel-vdsc ~]# rpm -qa |grep gluster
glusterfs-libs-3.7.1-11.el7.x86_64
glusterfs-fuse-3.7.1-11.el7.x86_64
glusterfs-3.7.1-11.el7.x86_64
glusterfs-api-3.7.1-11.el7.x86_64
glusterfs-devel-3.7.1-11.el7.x86_64
glusterfs-rdma-3.7.1-11.el7.x86_64
glusterfs-cli-3.7.1-11.el7.x86_64
glusterfs-client-xlators-3.7.1-11.el7.x86_64
glusterfs-api-devel-3.7.1-11.el7.x86_64

Comment 2 Allon Mureinik 2015-09-08 07:11:07 UTC
Please add the engine and VDSM RPM versions.

Comment 3 Aharon Canan 2015-09-08 07:14:02 UTC
[root@camel-vdsc ~]# rpm -qa |grep vdsm
vdsm-xmlrpc-4.17.5-1.el7ev.noarch
vdsm-cli-4.17.5-1.el7ev.noarch
vdsm-infra-4.17.5-1.el7ev.noarch
vdsm-yajsonrpc-4.17.5-1.el7ev.noarch
vdsm-python-4.17.5-1.el7ev.noarch
vdsm-jsonrpc-4.17.5-1.el7ev.noarch
vdsm-4.17.5-1.el7ev.noarch

[root@elad-rhevm3 ~]# rpm -qa |grep rhev
rhevm-image-uploader-3.6.0-0.2.master.git6846716.el6ev.noarch
rhevm-dependencies-3.6.0-0.0.1.master.el6ev.noarch
rhevm-tools-3.6.0-0.13.master.el6.noarch
rhevm-dbscripts-3.6.0-0.13.master.el6.noarch
rhevm-websocket-proxy-3.6.0-0.13.master.el6.noarch
rhevm-vmconsole-proxy-helper-3.6.0-0.13.master.el6.noarch
rhevm-spice-client-x64-msi-3.6-3.el6.noarch
rhevm-lib-3.6.0-0.13.master.el6.noarch
rhevm-iso-uploader-3.6.0-0.2.alpha.gitea4158a.el6ev.noarch
rhevm-setup-base-3.6.0-0.13.master.el6.noarch
rhevm-branding-rhev-3.6.0-0.0.master.20150824191211.el6ev.noarch
redhat-support-plugin-rhev-3.6.0-8.el6.noarch
rhevm-setup-plugin-vmconsole-proxy-helper-3.6.0-0.13.master.el6.noarch
rhevm-spice-client-x86-cab-3.6-3.el6.noarch
rhevm-userportal-3.6.0-0.13.master.el6.noarch
rhevm-3.6.0-0.13.master.el6.noarch
rhevm-sdk-python-3.6.0.0-1.el6ev.noarch
rhevm-cli-3.6.0.0-1.el6ev.noarch
rhevm-setup-plugin-ovirt-engine-common-3.6.0-0.13.master.el6.noarch
rhevm-doc-3.6.0-1.el6eng.noarch
rhevm-setup-3.6.0-0.13.master.el6.noarch
rhevm-setup-plugin-ovirt-engine-3.6.0-0.13.master.el6.noarch
rhevm-spice-client-x64-cab-3.6-3.el6.noarch
rhevm-webadmin-portal-3.6.0-0.13.master.el6.noarch
rhevm-extensions-api-impl-3.6.0-0.13.master.el6.noarch
rhevm-setup-plugin-websocket-proxy-3.6.0-0.13.master.el6.noarch
rhevm-setup-plugins-3.6.0-0.4.master.20150811110743.el6ev.noarch
rhevm-spice-client-x86-msi-3.6-3.el6.noarch
rhevm-log-collector-3.6.0-0.4.beta.gitf27fdb6.el6ev.noarch
rhev-guest-tools-iso-3.5-9.el6ev.noarch
rhevm-restapi-3.6.0-0.13.master.el6.noarch
rhevm-backend-3.6.0-0.13.master.el6.noarch

Comment 4 Ala Hino 2015-09-08 09:12:51 UTC
After putting the host into maintenance and re-activating it, I was able to add the Gluster volume.

I think glusterfs-cli was installed after the host was added to the engine and was already up and running. This could explain why the engine was not aware that glusterfs-cli was installed.

Basically, we check whether glusterfs-cli is installed based on the capabilities (caps) we get from the host. When activating a storage domain, we don't get updated caps from the host.
I'd expect the admin to put the host into maintenance before installing the glusterfs-cli package. Yaniv, what do you think?
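
The caching behavior described above can be sketched as a small Python model. This is not actual oVirt engine code; the class and method names are illustrative only, and the model assumes caps are refreshed only on host activation, as comment 4 describes:

```python
# Toy model of the engine-side CDA check: the engine validates against
# cached host capabilities, which are refreshed only when the host is
# activated (maintenance -> up), not when a storage domain is added.
class Host:
    def __init__(self):
        self.installed = set()     # packages actually on the host
        self.cached_caps = set()   # what the engine last learned

    def activate(self):
        # Caps are re-read from the host only on activation.
        self.cached_caps = set(self.installed)

    def can_connect_to_glusterfs(self):
        # The CDA check consults the cache, not the live host.
        return "glusterfs-cli" in self.cached_caps


host = Host()
host.activate()                      # host added before glusterfs-cli existed
host.installed.add("glusterfs-cli")  # package installed while host is Up
print(host.can_connect_to_glusterfs())  # False: CDA fails despite the rpm
host.activate()                      # maintenance + re-activate refreshes caps
print(host.can_connect_to_glusterfs())  # True
```

Re-activating the host is what refreshes the engine's cached view of it, which is why the maintenance/re-activate workaround succeeds.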

Comment 5 Allon Mureinik 2015-09-08 09:48:06 UTC
(In reply to Ala Hino from comment #4)
> After putting the host into maintenance and re-activating it, I was able to
> add the Gluster volume.
>
> I think glusterfs-cli was installed after the host was added to the engine
> and was already up and running. This could explain why the engine was not
> aware that glusterfs-cli was installed.
>
> Basically, we check whether glusterfs-cli is installed based on the
> capabilities (caps) we get from the host. When activating a storage domain,
> we don't get updated caps from the host.
> I'd expect the admin to put the host into maintenance before installing
> the glusterfs-cli package. Yaniv, what do you think?

This is a requirement - updating packages on the host while the engine thinks it is active can have unforeseeable problems, and is not supported.

Ala - just to be on the safe side, please add an explicit note about this to the RFE in bz 922744. Other than that, there is nothing we can do.

Comment 6 Yaniv Lavi 2015-09-08 09:53:21 UTC
Ack. Updating packages requires putting the host into maintenance and re-activating it afterwards.

