Bug 1242344

Summary: [hosted-engine-setup] [GlusterFS support] Deployment fails with: " Fault: <Fault 1: '<type \'exceptions.Exception\'>:method "glusterVolumesList" is not supported'> "
Product: [oVirt] ovirt-hosted-engine-setup Reporter: Elad <ebenahar>
Component: General
Assignee: Simone Tiraboschi <stirabos>
Status: CLOSED CURRENTRELEASE QA Contact: Elad <ebenahar>
Severity: urgent Docs Contact:
Priority: urgent    
Version: ---
CC: acanan, bazulay, bugs, dnarayan, ecohen, gklein, lsurette, mgoldboi, nsoffer, rbalakri, sabose, sbonazzo, tjeyasin, ycui, yeylon
Target Milestone: ovirt-3.6.0-rc
Flags: rule-engine: ovirt-3.6.0+
ylavi: planning_ack+
rule-engine: devel_ack+
rule-engine: testing_ack+
Target Release: 1.3.0   
Hardware: x86_64   
OS: Unspecified   
Whiteboard: integration
Fixed In Version: Doc Type: Bug Fix
Doc Text:
glusterVolumesList was provided only by vdsm-gluster, which would require a different subscription. The setup now uses the glusterfs command directly to avoid requiring vdsm-gluster.
Story Points: ---
Clone Of: Environment:
Last Closed: 2015-11-04 13:37:18 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1083025    
Attachments:
Description Flags
sosreport none

Description Elad 2015-07-13 06:49:20 UTC
Created attachment 1051287 [details]
sosreport

Description of problem:
Tried to deploy hosted-engine over GlusterFS using a replica 3 volume.
The operation failed with the following error message in the setup log:

2015-07-12 16:17:20 WARNING otopi.plugins.ovirt_hosted_engine_setup.storage.nfs nfs._validateDomain:200 Due to several bugs in mount.glusterfs the validation of GlusterFS share cannot be reliable.
2015-07-12 16:17:20 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs nfs._check_replica_level:168 glusterVolumesList
2015-07-12 16:17:20 DEBUG otopi.context context._executeMethod:155 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 145, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py", line 290, in _customization
    check_space=False,
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py", line 204, in _validateDomain
    self._check_replica_level(connection)
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py", line 169, in _check_replica_level
    response = cli.glusterVolumesList(volume, server)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1224, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1578, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1264, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1297, in single_request
    return self.parse_response(response)
  File "/usr/lib/python2.7/site-packages/vdsm/vdscli.py", line 43, in wrapped_parse_response
    return old_parse_response(*args, **kwargs)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1473, in parse_response
    return u.close()
  File "/usr/lib64/python2.7/xmlrpclib.py", line 793, in close
    raise Fault(**self._stack[0])
Fault: <Fault 1: '<type \'exceptions.Exception\'>:method "glusterVolumesList" is not supported'>
2015-07-12 16:17:20 ERROR otopi.context context._executeMethod:164 Failed to execute stage 'Environment customization': <Fault 1: '<type \'exceptions.Exception\'>:method "glusterVolumesList" is not supported'>


Version-Release number of selected component (if applicable):
ovirt-hosted-engine-setup-1.3.0-0.0.master.20150707150259.git9de6e6f.el7.noarch
vdsm-4.17.0-1121.gitf817790.el7.noarch
glusterfs-3.7.2-3.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
1. 
- Create a replica 3 volume in the Gluster cluster and configure it as the following:

gluster volume create <volume> replica 3 transport tcp <host1:/path> <host2:/path> <host3:/path> force

- Set the volume with the following configurations:

  gluster volume set <volume> cluster.quorum-type auto
  gluster volume set <volume> network.ping-timeout 10
  gluster volume set <volume> auth.allow \*
  gluster volume set <volume> group virt
  gluster volume set <volume> storage.owner-uid 36
  gluster volume set <volume> storage.owner-gid 36
  gluster volume set <volume> server.allow-insecure on


- Start the volume:

gluster volume start <volume>

2. Deploy hosted-engine using GlusterFS with the new volume
3.

Actual results:
Hosted-engine deployment fails with the mentioned error in the setup log:

'exceptions.Exception\'>:method "glusterVolumesList" is not supported'>

This happens even though GlusterFS is installed with all its dependencies, so glusterVolumesList should work.

Expected results:
glusterVolumesList should work and hosted-engine deployment over GlusterFS should succeed.

Additional info:
sosreport

Comment 1 Sandro Bonazzola 2015-07-13 07:01:49 UTC
Moving to VDSM since the call to glusterVolumesList should work without having vdsm-gluster installed.

Comment 2 Timothy Asir 2015-07-14 09:50:34 UTC
Currently, vdsm uses the gluster api module to get the list of method names to register, which provides access to the gluster volume API. Unfortunately, this gluster api module is not shipped with the vdsm package. The issue can be worked around by copying gluster/api.py and gluster/fstab.py into /usr/share/vdsm/gluster/ and restarting the vdsm services.


The file bindingxmlrpc.py in the rpc module uses api.py, which lives under the gluster path, to get the list of method names through a function called getGlusterMethods. Unfortunately, api.py and its dependency fstab.py do not ship with the vdsm package. A proper fix is to move getGlusterMethods into gluster/__init__.py and use it from bindingxmlrpc.py instead of importing it from gluster's api.py, so that api.py and fstab.py no longer need to be shipped with the vdsm package.
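A rough sketch of that approach, assuming a helper living in gluster/__init__.py; this is illustrative only, and the module path, return shape, and name filter are assumptions rather than vdsm's actual code:

  # Illustrative sketch only (not vdsm's real implementation): enumerate
  # gluster verbs for XML-RPC registration without importing gluster/api.py
  # (shipped only with vdsm-gluster) at module load time.
  import importlib

  def getGlusterMethods(api_module_name="gluster.api"):
      """Return (callable, name) pairs for gluster verbs, or an empty list
      if the optional api module is not installed."""
      try:
          api = importlib.import_module(api_module_name)
      except ImportError:
          return []
      return [(func, name) for name, func in vars(api).items()
              if callable(func) and name.startswith("gluster")]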

Comment 3 Timothy Asir 2015-07-14 10:05:35 UTC
As a workaround, this can be quickly resolved by installing the vdsm-gluster package, which provides the required files.

Comment 4 Nir Soffer 2015-07-15 15:23:02 UTC
(In reply to Sandro Bonazzola from comment #1)
> Moving to VDSM since the call to glusterVolumesList should work without
> having vdsm-gluster installed.

glusterVolumeList is not supported without vdsm-gluster. vdsm itself requires only glusterVolumeInfo, which is used to get the bricks backing the volume you want to mount and to validate that the volume uses one of the allowed replica counts (the default is 3).

Why do you need glusterVolumeList?

Comment 5 Sandro Bonazzola 2015-07-16 06:39:13 UTC
You can see how it's used here: https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-setup.git;a=blob;f=src/plugins/ovirt-hosted-engine-setup/storage/nfs.py;h=57a2df52e48185d8609d6653f2f2aa169a5f1cd6;hb=refs/heads/master#l163

It's used to check whether the volume exists and whether it's replica 3.
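A rough sketch of that check, based on the traceback in the description; the response layout and error handling here are assumptions, not the verbatim nfs.py code:

  # Rough sketch of the replica check (response keys are assumptions):
  def _check_replica_level(cli, volume, server):
      # This is the XML-RPC call that fails when vdsm-gluster is not installed.
      response = cli.glusterVolumesList(volume, server)
      if response['status']['code'] != 0:
          raise RuntimeError('Failed to list gluster volumes: %s'
                             % response['status']['message'])
      info = response['volumes'][volume]
      if int(info.get('replicaCount', 0)) != 3:
          raise RuntimeError('GlusterFS volume is not a replica 3 volume')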

Comment 6 Nir Soffer 2015-07-16 06:46:55 UTC
(In reply to Sandro Bonazzola from comment #5)
> You can see how it's used here:
> https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-setup.git;a=blob;f=src/
> plugins/ovirt-hosted-engine-setup/storage/nfs.py;
> h=57a2df52e48185d8609d6653f2f2aa169a5f1cd6;hb=refs/heads/master#l163
> 
> It's used for checking if volume exists and if it's replica 3.

But this verb is not part of the vdsm API; its use is invalid unless you install vdsm-gluster.

Changing the component, as there is no vdsm bug here. Using a non-existent API is
an issue in the caller application.

Comment 7 Darshan 2015-07-16 09:25:14 UTC
     In the vdsm storage domain code, the gluster volume related information is obtained by invoking the glusterVolumeInfo() method registered with supervdsm (this does not need the vdsm-gluster package).

     Can hosted-engine-setup use a similar approach to get the volume related information instead of calling the vdsm API?

Comment 8 Sandro Bonazzola 2015-07-16 09:46:10 UTC
Will try to do that.

Comment 9 Nir Soffer 2015-07-16 09:53:48 UTC
(In reply to Darshan from comment #7)
>      In vdsm storage domain, they get the gluster volume related information
> by invoking the glusterVolumeInfo() method that is registered to
> supervdsm(this does not need vdsm-gluster package).
> 
>      Can hosted-engine-setup use similar approach to get volume related
> information instead of calling the vdsm api ?

glusterVolumeInfo is not part of vdsm api. It is part of vdsm-gluster api.

What you need is to depend on vdsm-gluster, or, if you cannot depend on it,
the vdsm-gluster maintainers should either add the needed APIs to vdsm or split vdsm-gluster into vdsm-gluster-client and vdsm-gluster-server.

Another option: since you are trying to access the gluster API and not vdsm
APIs, why not use the gluster command line directly? Then you are free to use
anything you like.
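For illustration, one possible way to do that from Python; the CLI flags and XML element names below are assumptions about the gluster client, not code from the eventual fix:

  # Illustrative sketch: query the replica count with the gluster CLI
  # instead of a vdsm verb. Flags and XML field names are assumptions.
  import subprocess
  import xml.etree.ElementTree as ET

  def gluster_replica_count(volume, server):
      out = subprocess.check_output([
          'gluster', '--mode=script', '--xml',
          '--remote-host=%s' % server,
          'volume', 'info', volume,
      ])
      root = ET.fromstring(out)
      count = root.findtext('.//replicaCount')
      return int(count) if count is not None else None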

Comment 10 Sandro Bonazzola 2015-07-16 09:59:19 UTC
The Gluster client API must be available anyway for the engine to query vdsm about it.
So I can use the gluster client directly, but since this kind of work has to be done anyway, I'd prefer the gluster team to expose the client calls in vdsm or in a vdsm-gluster-client package. We can't depend on vdsm-gluster as-is, since the gluster server requires a separate subscription on RHEL.

Comment 11 Nir Soffer 2015-07-16 10:09:48 UTC
Sahina, splitting vdsm-gluster into client and server packages was discussed
in the past; maybe it is time to do this?

It does not make sense technically that a nicer API for the gluster command line
requires installing a gluster server.

Do you see any technical reason not to split this package?

Comment 12 Sandro Bonazzola 2015-07-16 13:53:38 UTC
Moving to gluster-client dependency.

Comment 13 Sahina Bose 2015-07-21 09:48:10 UTC
(In reply to Nir Soffer from comment #11)
> Sahina, breaking vdsm-gluster  to client and server package was discussed
> in the past, maybe it is time to do this?
> 
> It does not make sense technically that nicer api for gluster command line
> requires installing a gluster server.
> 
> Do you see any technical reason not to break this package?

glusterVolumeInfo and other query commands will only work with the remote-server option, even if we split into a vdsm-gluster-client package.

Comment 15 Nir Soffer 2015-07-21 11:38:46 UTC
(In reply to Sahina Bose from comment #13)
> (In reply to Nir Soffer from comment #11)
> > Sahina, breaking vdsm-gluster  to client and server package was discussed
> > in the past, maybe it is time to do this?
> > 
> > It does not make sense technically that nicer api for gluster command line
> > requires installing a gluster server.
> > 
> > Do you see any technical reason not to break this package?
> 
> glusterVolumeInfo and other query commands will only work with the
> remote-server option even if we split into a vdsm-gluster-client

Sure, but this is expected; if you don't run a local gluster server, you
should not expect to connect to it.

Comment 17 Elad 2015-11-03 09:43:32 UTC
Hosted-engine deployment over Gluster (replica 3 volume) succeeds.

Tested using:

ovirt-hosted-engine-ha-1.3.1-1.el7ev.noarch
ovirt-host-deploy-1.4.0-1.el7ev.noarch
ovirt-vmconsole-host-1.0.0-1.el7ev.noarch
ovirt-vmconsole-1.0.0-1.el7ev.noarch
ovirt-hosted-engine-setup-1.3.0-1.el7ev.noarch
libgovirt-0.3.3-1.el7.x86_64
ovirt-setup-lib-1.0.0-1.el7ev.noarch
vdsm-cli-4.17.10-5.el7ev.noarch

vdsm-infra-4.17.10-5.el7ev.noarch
vdsm-xmlrpc-4.17.10-5.el7ev.noarch
vdsm-python-4.17.10-5.el7ev.noarch
vdsm-4.17.10-5.el7ev.noarch
vdsm-yajsonrpc-4.17.10-5.el7ev.noarch
vdsm-jsonrpc-4.17.10-5.el7ev.noarch

glusterfs-client-xlators-3.7.1-16.el7.x86_64
glusterfs-rdma-3.7.1-16.el7.x86_64
glusterfs-cli-3.7.1-16.el7.x86_64
glusterfs-fuse-3.7.1-16.el7.x86_64
glusterfs-libs-3.7.1-16.el7.x86_64
glusterfs-api-3.7.1-16.el7.x86_64
glusterfs-devel-3.7.1-16.el7.x86_64
glusterfs-api-devel-3.7.1-16.el7.x86_64
glusterfs-3.7.1-16.el7.x86_64

Comment 18 Sandro Bonazzola 2015-11-04 13:37:18 UTC
oVirt 3.6.0 has been released on November 4th, 2015 and should fix this issue.
If problems still persist, please open a new BZ and reference this one.