Bug 1404606 - The initial checks on the gluster volume ignore additional mount options like backup-volfile-servers
Summary: The initial checks on the gluster volume ignore additional mount options like backup-volfile-servers
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: Plugins.Gluster
Version: 2.1.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ovirt-4.2.2
Target Release: 2.2.10
Assignee: Simone Tiraboschi
QA Contact: SATHEESARAN
URL:
Whiteboard:
Duplicates: 1398769 (view as bug list)
Depends On: 1455169
Blocks:
 
Reported: 2016-12-14 08:53 UTC by SATHEESARAN
Modified: 2018-05-10 06:23 UTC
CC List: 6 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2018-05-10 06:23:46 UTC
oVirt Team: Integration
Embargoed:
sabose: ovirt-4.2?
sasundar: planning_ack?
rule-engine: devel_ack+
sasundar: testing_ack+


Attachments
hosted-engine-setup (180.53 KB, text/plain)
2016-12-14 08:57 UTC, SATHEESARAN

Description SATHEESARAN 2016-12-14 08:53:15 UTC
Description of problem:
-----------------------
Self-hosted engine deployment with a glusterfs backend fails to mount the gluster volume using the backup-volfile-servers option, even though it is provided during deployment, and therefore fails when the primary volfile server is down

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
ovirt-4.1-snapshot
ovirt-hosted-engine-setup-2.1.0-0.0.master.20161130101611.gitb3ad261.el7.centos.noarch
RHGS 3.2.0 ( nightly - glusterfs-3.8.4-8.el7rhgs )

How reproducible:
-----------------
Always

Steps to Reproduce:
--------------------
1. Create a gluster replica 3 volume, optimize the volume for virt store, and start the volume (see the example commands after this list)
2. Run the hosted-engine deployment - 'hosted-engine --deploy'
3. Choose 'glusterfs' storage and provide the storage path with the IP of node1, i.e. <NODE1_IP>:/<gluster_volume_name>
4. Provide additional mount options with backup-volfile-servers
5. Bring down glusterd on node NODE1
6. Continue with the setup
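
For reference, a minimal example of step 1; the host names and brick paths here are placeholders, not taken from this report:

  # create a replica 3 volume across three hosts (brick paths are placeholders)
  gluster volume create engine replica 3 host1:/bricks/engine/brick host2:/bricks/engine/brick host3:/bricks/engine/brick
  # apply the virt profile to optimize the volume for virt store
  gluster volume set engine group virt
  # start the volume
  gluster volume start engine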

Actual results:
---------------
Hosted-engine setup fails with error - "[ ERROR ] Cannot access storage connection 10.70.36.73:/engine: Command '/sbin/gluster' failed to execute"

Expected results:
-----------------
Since backup-volfile-servers is provided as an additional mount option, the mount should fall back to the additional volfile servers when the primary server is down
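
For comparison, the same option is honored when mounting the volume manually, e.g. (the mount point is a placeholder):

  # mount.glusterfs falls back to the listed backup volfile servers if the primary is unreachable
  mount -t glusterfs -o backup-volfile-servers=10.70.36.74:10.70.36.75 10.70.36.73:/engine /mnt/engine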

Comment 1 SATHEESARAN 2016-12-14 08:54:23 UTC
Here is the snip from the hosted-engine deployment

<snip>

          --== STORAGE CONFIGURATION ==--
         
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]: glusterfs
[ INFO  ] Please note that Replica 3 support is required for the shared storage.
          Please specify the full shared storage connection path to use (example: host:/path): 10.70.36.73:/engine           
          If needed, specify additional mount options for the connection to the hosted-engine storage domain []: backup-volfile-servers=10.70.36.74:10.70.36.75
[ ERROR ] Cannot access storage connection 10.70.36.73:/engine: Command '/sbin/gluster' failed to execute

</snip>

Comment 2 SATHEESARAN 2016-12-14 08:55:36 UTC
I have marked the 'oVirt Team' field as 'Infra'. I am not sure about that; please make suitable changes to this field if I'm wrong.

Comment 3 SATHEESARAN 2016-12-14 08:57:35 UTC
Created attachment 1231520 [details]
hosted-engine-setup

Comment 4 Simone Tiraboschi 2016-12-14 09:47:23 UTC
What fails here is just the initial validation of the gluster volume: we use '/sbin/gluster --mode=script volume info' to gather volume info in order to enforce that it is replica 3.

2016-12-14 12:49:40 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND                 If needed, specify additional mount options for the connection to the hosted-engine storage domain []: 
2016-12-14 12:50:41 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:RECEIVE    backup-volfile-servers=10.70.36.74:10.70.36.75
2016-12-14 12:50:41 DEBUG otopi.plugins.gr_he_setup.storage.nfs plugin.executeRaw:813 execute: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 'engine', '--remote-host=10.70.36.73'), executable='None', cwd='None', env=None
2016-12-14 12:50:41 DEBUG otopi.plugins.gr_he_setup.storage.nfs plugin.executeRaw:863 execute-result: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 'engine', '--remote-host=10.70.36.73'), rc=1
2016-12-14 12:50:41 DEBUG otopi.plugins.gr_he_setup.storage.nfs plugin.execute:921 execute-output: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 'engine', '--remote-host=10.70.36.73') stdout:
Connection failed. Please check if gluster daemon is operational.

2016-12-14 12:50:41 DEBUG otopi.plugins.gr_he_setup.storage.nfs plugin.execute:926 execute-output: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 'engine', '--remote-host=10.70.36.73') stderr:


2016-12-14 12:50:41 DEBUG otopi.plugins.gr_he_setup.storage.nfs nfs._customization:420 exception
Traceback (most recent call last):
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/nfs.py", line 414, in _customization
    ohostedcons.StorageEnv.MNT_OPTIONS
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/nfs.py", line 311, in _validateDomain
    self._check_volume_properties(connection)
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/nfs.py", line 182, in _check_volume_properties
    raiseOnError=True
  File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 931, in execute
    command=args[0],
RuntimeError: Command '/sbin/gluster' failed to execute
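
An illustrative sketch, not the actual hosted-engine-setup code, of how this validation could fall back to the servers listed in backup-volfile-servers:

  # try the primary volfile server first, then each backup server, stopping at the first success
  for host in 10.70.36.73 10.70.36.74 10.70.36.75; do
      /sbin/gluster --mode=script --xml volume info engine --remote-host=$host && break
  done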

Comment 5 Simone Tiraboschi 2016-12-14 09:52:20 UTC
Satheesaran, do you know if
 /sbin/gluster --mode=script --xml volume info engine --remote-host=10.70.36.73
could support somehow the additional mount options?

Comment 6 SATHEESARAN 2016-12-15 04:10:24 UTC
(In reply to Simone Tiraboschi from comment #5)
> Satheesaran, do you know if
>  /sbin/gluster --mode=script --xml volume info engine
> --remote-host=10.70.36.73
> could support somehow the additional mount options?

Hi Simone,

The command does not support additional mount options,
but it does accept additional remote hosts.

"/sbin/gluster --mode=script --xml volume info engine --remote-host=10.70.36.73 --remote-host=10.70.36.74" 

This works well, as the command is executed against the other remote host when multiple remote hosts are specified.

Comment 7 SATHEESARAN 2016-12-15 06:32:35 UTC
(In reply to SATHEESARAN from comment #6)
> (In reply to Simone Tiraboschi from comment #5)
> > Satheesaran, do you know if
> >  /sbin/gluster --mode=script --xml volume info engine
> > --remote-host=10.70.36.73
> > could support somehow the additional mount options?
> 
> Hi Simone,
> 
> The command does not support additional mount options,
> but it does accept additional remote hosts.
> 
> "/sbin/gluster --mode=script --xml volume info engine
> --remote-host=10.70.36.73 --remote-host=10.70.36.74" 
> 
> This works well, as the command is executed against the other remote host when
> multiple remote hosts are specified.

Sorry, that doesn't work.
I stand corrected.
It takes only the last '--remote-host' value.
The command does not accept multiple '--remote-host' values.

Comment 8 Sahina Bose 2016-12-22 08:11:04 UTC
*** Bug 1398769 has been marked as a duplicate of this bug. ***

Comment 9 Simone Tiraboschi 2017-12-19 17:11:10 UTC
Fixed with node-zero

Comment 10 SATHEESARAN 2018-05-07 01:13:54 UTC
Tested with RHV 4.2.3 and ovirt-hosted-engine-setup-2.2.18.

With the node-zero deployment, this issue is no longer seen.

Comment 11 Sandro Bonazzola 2018-05-10 06:23:46 UTC
This bug is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

