Bug 1404606
| Field | Value |
|---|---|
| Summary | The initial checks on the gluster volume ignore additional mount options like backup-volfile-servers |
| Product | [oVirt] ovirt-hosted-engine-setup |
| Component | Plugins.Gluster |
| Version | 2.1.0 |
| Status | CLOSED CURRENTRELEASE |
| Severity | medium |
| Priority | medium |
| Reporter | SATHEESARAN <sasundar> |
| Assignee | Simone Tiraboschi <stirabos> |
| QA Contact | SATHEESARAN <sasundar> |
| CC | bugs, khung, sabose, sasundar, stirabos, ylavi |
| Target Milestone | ovirt-4.2.2 |
| Target Release | 2.2.10 |
| Hardware | x86_64 |
| OS | Linux |
| Flags | sabose: ovirt-4.2?; sasundar: planning_ack?; rule-engine: devel_ack+; sasundar: testing_ack+ |
| oVirt Team | Integration |
| Type | Bug |
| Bug Depends On | 1455169 |
| Last Closed | 2018-05-10 06:23:46 UTC |
Description
SATHEESARAN
2016-12-14 08:53:15 UTC
Here is the snip from the hosted-engine deployment:

```
--== STORAGE CONFIGURATION ==--

Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]: glusterfs
[ INFO ] Please note that Replica 3 support is required for the shared storage.
Please specify the full shared storage connection path to use (example: host:/path): 10.70.36.73:/engine
If needed, specify additional mount options for the connection to the hosted-engine storage domain []: backup-volfile-servers=10.70.36.74:10.70.36.75
[ ERROR ] Cannot access storage connection 10.70.36.73:/engine: Command '/sbin/gluster' failed to execute
```

I have marked 'oVirt Team' as 'Infra'. I am not sure about that; please make suitable changes to this field if I'm wrong.

---

Created attachment 1231520 [details]
hosted-engine-setup
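For context, the `backup-volfile-servers` value typed at the prompt above is the glusterfs mount option that lists fallback volfile servers, separated by colons. Below is a minimal sketch (in Python, since the setup tool itself is Python) of how such an options string can be split into a primary server plus its backups; the helper name is hypothetical and is not the actual ovirt-hosted-engine-setup code:

```python
def parse_gluster_connection(path, mnt_options):
    """Split 'host:/volume' plus a mount-options string into
    (primary_host, volume, backup_hosts). Hypothetical helper,
    not the actual ovirt-hosted-engine-setup code."""
    primary, _, volume = path.partition(':/')
    backup_hosts = []
    for opt in mnt_options.split(','):
        name, _, value = opt.partition('=')
        if name.strip() == 'backup-volfile-servers':
            # glusterfs separates multiple backup servers with ':'
            backup_hosts = [h for h in value.split(':') if h]
    return primary, volume, backup_hosts


print(parse_gluster_connection(
    '10.70.36.73:/engine',
    'backup-volfile-servers=10.70.36.74:10.70.36.75'))
# ('10.70.36.73', 'engine', ['10.70.36.74', '10.70.36.75'])
```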
---

What fails here is just the initial validation of the gluster volume: we use `/sbin/gluster --mode=script volume info` to gather the volume info in order to enforce that it is replica 3.

```
2016-12-14 12:49:40 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND    If needed, specify additional mount options for the connection to the hosted-engine storage domain []:
2016-12-14 12:50:41 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:RECEIVE    backup-volfile-servers=10.70.36.74:10.70.36.75
2016-12-14 12:50:41 DEBUG otopi.plugins.gr_he_setup.storage.nfs plugin.executeRaw:813 execute: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 'engine', '--remote-host=10.70.36.73'), executable='None', cwd='None', env=None
2016-12-14 12:50:41 DEBUG otopi.plugins.gr_he_setup.storage.nfs plugin.executeRaw:863 execute-result: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 'engine', '--remote-host=10.70.36.73'), rc=1
2016-12-14 12:50:41 DEBUG otopi.plugins.gr_he_setup.storage.nfs plugin.execute:921 execute-output: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 'engine', '--remote-host=10.70.36.73') stdout:
Connection failed. Please check if gluster daemon is operational.

2016-12-14 12:50:41 DEBUG otopi.plugins.gr_he_setup.storage.nfs plugin.execute:926 execute-output: ('/sbin/gluster', '--mode=script', '--xml', 'volume', 'info', 'engine', '--remote-host=10.70.36.73') stderr:

2016-12-14 12:50:41 DEBUG otopi.plugins.gr_he_setup.storage.nfs nfs._customization:420 exception
Traceback (most recent call last):
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/nfs.py", line 414, in _customization
    ohostedcons.StorageEnv.MNT_OPTIONS
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/nfs.py", line 311, in _validateDomain
    self._check_volume_properties(connection)
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/nfs.py", line 182, in _check_volume_properties
    raiseOnError=True
  File "/usr/lib/python2.7/site-packages/otopi/plugin.py", line 931, in execute
    command=args[0],
RuntimeError: Command '/sbin/gluster' failed to execute
```

---

Satheesaran, do you know if

```
/sbin/gluster --mode=script --xml volume info engine --remote-host=10.70.36.73
```

could somehow support the additional mount options?

---

(In reply to Simone Tiraboschi from comment #5)
> Satheesaran, do you know if
> /sbin/gluster --mode=script --xml volume info engine
> --remote-host=10.70.36.73
> could somehow support the additional mount options?

Hi Simone,

The command never supports additional mount options, but it does accept additional remote-hosts:

```
/sbin/gluster --mode=script --xml volume info engine --remote-host=10.70.36.73 --remote-host=10.70.36.74
```

This works well, as the command is executed on the other remote host when multiple remote hosts are mentioned.
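To make the failing check concrete: per Simone's comment above, the setup script shells out to gluster on a single remote host and inspects the volume to enforce replica 3. Below is a sketch of that kind of validation, assuming the `--xml` output carries a `replicaCount` element and using Python 3's `subprocess.run`; it is an illustration, not the literal nfs.py code:

```python
import subprocess
import xml.etree.ElementTree as ET


def check_replica3(volume, remote_host):
    """Query 'volume info' on one remote host and verify replica 3.
    Illustration only; the real check lives in gr-he-setup/storage/nfs.py."""
    cmd = ['/sbin/gluster', '--mode=script', '--xml',
           'volume', 'info', volume, '--remote-host=%s' % remote_host]
    proc = subprocess.run(cmd, capture_output=True, text=True)  # Python 3.7+
    if proc.returncode != 0:
        # The branch the reporter hit: the primary host was unreachable
        # and the backup-volfile-servers were never consulted.
        raise RuntimeError("Command '/sbin/gluster' failed to execute")
    root = ET.fromstring(proc.stdout)
    # Assumption: 'volume info --xml' exposes a <replicaCount> element.
    return root.findtext('.//replicaCount') == '3'
```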
---

(In reply to SATHEESARAN from comment #6)
> (In reply to Simone Tiraboschi from comment #5)
> > Satheesaran, do you know if
> > /sbin/gluster --mode=script --xml volume info engine
> > --remote-host=10.70.36.73
> > could somehow support the additional mount options?
>
> Hi Simone,
>
> The command never supports additional mount options, but it does accept
> additional remote-hosts:
>
> /sbin/gluster --mode=script --xml volume info engine
> --remote-host=10.70.36.73 --remote-host=10.70.36.74
>
> This works well, as the command is executed on the other remote host when
> multiple remote hosts are mentioned.

Sorry, that does not work; I stand corrected. The command takes only the last '--remote-host' value; multiple '--remote-host' values are not accepted.

---

*** Bug 1398769 has been marked as a duplicate of this bug. ***

---

Fixed with node-zero.

---

Tested with RHV 4.2.3 and ovirt-hosted-engine-setup-2.2.18. With the node-zero deployment, this issue is no longer seen.

---

This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018. Since the problem described in this bug report should be resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
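Since the thread establishes that gluster honours only the last `--remote-host` value, a natural workaround is to run the validation once per known volfile server and accept the first host that responds. The fix that actually shipped was the node-zero deployment flow, so the following is only a sketch of the workaround discussed in the comments, reusing the hypothetical `check_replica3` helper from the earlier sketch:

```python
def check_replica3_any(volume, hosts):
    """Try each volfile server in turn, because '--remote-host' honours
    only its last occurrence; the first reachable host wins.
    Illustrative only, not the shipped fix."""
    last_error = None
    for host in hosts:
        try:
            return check_replica3(volume, host)
        except RuntimeError as err:
            last_error = err  # host unreachable; try the next server
    raise last_error or RuntimeError('no gluster volfile server reachable')


# Primary server first, then the backup-volfile-servers values:
# check_replica3_any('engine', ['10.70.36.73', '10.70.36.74', '10.70.36.75'])
```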