Bug 1630901 - gdeploy fails for unsupported disk
Summary: gdeploy fails for unsupported disk
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Sachidananda Urs
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-09-19 14:04 UTC by Jeremy Tourville
Modified: 2018-09-23 03:45 UTC (History)
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-23 03:45:04 UTC
Embargoed:


Attachments (Terms of Use)
gdeploy config file (3.59 KB, text/plain)
2018-09-19 14:04 UTC, Jeremy Tourville
no flags Details
gdeploy config - current as of 9/20/18 9:45 AM CT (3.57 KB, text/plain)
2018-09-20 14:45 UTC, Jeremy Tourville
no flags Details
Updated gdeploy configuration file (3.45 KB, text/plain)
2018-09-21 02:41 UTC, Sachidananda Urs
no flags Details

Description Jeremy Tourville 2018-09-19 14:04:10 UTC
Created attachment 1484764 [details]
gdeploy config file

Description of problem:
I think there might be an issue with the grafton-sanity-check.sh disk check script used by gdeploy and run-script.yml.  *NOTE* the inconsistent host names:

changed: [obe.cyber-range.lan] (this is the correct piece of info)
ping: ovir-be.cyber-range.lan (the ping portion somehow changed the host name)

Version-Release number of selected component (if applicable):


How reproducible:
gdeploy -c gdeploy.conf -vv

Steps to Reproduce:
1. Set a DHCP reservation of 172.30.50.4 for obe.cyber-range.lan.
2. Add an /etc/hosts entry: 172.30.50.4 obe.cyber-range.lan.
3. Blacklist the multipath driver for disk sda (it uses a hardware RAID controller).
4. Confirm no GPT label or partition exists on /dev/sda (verified with gdisk).

Actual results:
obe.cyber-range.lan : ok=1 changed=1 unreachable=0 failed=0, yet the script does not continue.

Expected results:
The playbook run should complete without this error line:
Error: Unsupported disk type! Only ['raid10', 'raid5', 'raid6', 'jbod'] are supported

Additional info:
[root@vmh ~]# gdeploy -c gdeploy.conf -vv
ansible-playbook 2.6.2
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
Using /etc/ansible/ansible.cfg as config file

PLAYBOOK: run-script.yml ***********************************************************************************************************************************************
1 plays in /tmp/tmpubmQdM/run-script.yml

PLAY [gluster_servers] *************************************************************************************************************************************************
META: ran handlers

TASK [Run a shell script] **********************************************************************************************************************************************
task path: /tmp/tmpubmQdM/run-script.yml:7
changed: [obe.cyber-range.lan] => (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d /dev/sda -h ovir-be.cyber-range.lan) => {"changed": true, "failed_when_result": false, "item": "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d /dev/sda -h ovir-be.cyber-range.lan", "rc": 0, "stderr": "Shared connection to obe.cyber-range.lan closed.\r\n", "stderr_lines": ["Shared connection to obe.cyber-range.lan closed."], "stdout": "ping: ovir-be.cyber-range.lan: Name or service not known\r\nping failed unable to reach ovir-be.cyber-range.lan\r\nUsage: grep [OPTION]... PATTERN [FILE]...\r\nTry 'grep --help' for more information.\r\n", "stdout_lines": ["ping: ovir-be.cyber-range.lan: Name or service not known", "ping failed unable to reach ovir-be.cyber-range.lan", "Usage: grep [OPTION]... PATTERN [FILE]...", "Try 'grep --help' for more information."]}
META: ran handlers
META: ran handlers

PLAY RECAP *************************************************************************************************************************************************************
obe.cyber-range.lan        : ok=1    changed=1    unreachable=0    failed=0

Error: Unsupported disk type!
Only ['raid10', 'raid5', 'raid6', 'jbod'] are supported

Comment 2 Jeremy Tourville 2018-09-20 10:27:50 UTC
I found a mistake in my config file, so this may or may not be a bug.  I had appended an -h flag (with an incorrectly spelled host name) to the grafton sanity check, which explains the host name differences noted during the Ansible playbook run.  Despite fixing this config error, gdeploy still fails with the unsupported disk error.

Comment 3 Sachidananda Urs 2018-09-20 13:53:32 UTC
(In reply to Jeremy Tourville from comment #2)
> I found a mistake in my config file.  This may or may not be a bug.  I had
> appednded -h flag (and incorrectly spelled host name) in grafton sanity
> check which resulted in the differences noted in hostnames during Ansible
> playbook run.  Despite this config error and fix the gdeploy still fails for
> the unsupported disk error.

Jeremy, in your conf you can't have comments on the same line as a value.
This is a limitation of the Python config parser.

In your case the disk will be taken as `sda # Change to @VDO_DEVICE_name@ if using vdo` ...

So please remove the comments that are on the same line as a value. It should then work fine.
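The parser behavior described here can be sketched with Python 3's configparser (Python 2's ConfigParser, which gdeploy used at the time, treats inline `#` comments the same way). The section and option names below are illustrative, not taken from the actual conf:

```python
import configparser

# Illustrative section/option names only -- not from the actual gdeploy.conf.
conf = """
[pv]
devices=sda  # Change to @VDO_DEVICE_name@ if using vdo
"""

cfg = configparser.ConfigParser()
cfg.read_string(conf)

# The inline '#' comment is not stripped; it becomes part of the value,
# so any later comparison against a plain device name fails.
print(repr(cfg["pv"]["devices"]))
# -> 'sda  # Change to @VDO_DEVICE_name@ if using vdo'
```

By default, configparser only strips whole-line comments; inline comment stripping would have to be enabled explicitly via inline_comment_prefixes.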

Comment 4 Jeremy Tourville 2018-09-20 14:45:22 UTC
Created attachment 1485167 [details]
gdeploy config - current as of 9/20/18 9:45 AM CT

Comment 5 Jeremy Tourville 2018-09-20 14:48:23 UTC
>>>you can't have the comments in the same line.>>>
Are you referring to this? file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d /dev/sda

I am not using VDO.  Yes, I understand my disk will be taken as sda.  I have uploaded my current gdeploy config file.  Despite removing -h and the associated parameters from that option, the install still fails with the unsupported disk error.  I have been unable to find documentation describing the correct parameters for a single hyperconverged oVirt node using Gluster with hardware RAID.  Is there something in the script that detects the disk type parameter?

blkid says: /dev/sda: UUID="puRtgB-llwv-8j3C-RFVI-YMfT-nWbI-lRz4xa" TYPE="LVM2_member"

Here is the resulting run of gdeploy even after removing -h:

[root@vmh ~]# gdeploy -c gdeploy.conf -vv
ansible-playbook 2.6.2
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
Using /etc/ansible/ansible.cfg as config file

PLAYBOOK: run-script.yml ***********************************************************************************************************************************************
1 plays in /tmp/tmpGgRiBp/run-script.yml

PLAY [gluster_servers] *************************************************************************************************************************************************
META: ran handlers

TASK [Run a shell script] **********************************************************************************************************************************************
task path: /tmp/tmpGgRiBp/run-script.yml:7
changed: [obe.cyber-range.lan] => (item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d /dev/sda) => {"changed": true, "failed_when_result": false, "item": "/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d /dev/sda", "rc": 0, "stderr": "Shared connection to obe.cyber-range.lan closed.\r\n", "stderr_lines": ["Shared connection to obe.cyber-range.lan closed."], "stdout": "", "stdout_lines": []}
META: ran handlers
META: ran handlers

PLAY RECAP *************************************************************************************************************************************************************
obe.cyber-range.lan        : ok=1    changed=1    unreachable=0    failed=0

Error: Unsupported disk type!
Only ['raid10', 'raid5', 'raid6', 'jbod'] are supported

Comment 6 Jeremy Tourville 2018-09-20 14:59:31 UTC
Is the problem that gdeploy reports:

Error: Unsupported disk type!
Only ['raid10', 'raid5', 'raid6', 'jbod'] are supported

but blkid says TYPE="LVM2_member"?

It makes no difference whether I use

[disktype]
raid6

or

[disktype]
jbod

Both parameters result in failure.
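The same parser behavior would explain why both disktype values fail: if a comment shares the line, the parsed value is no longer a bare raid6 or jbod. A minimal sketch follows; how gdeploy actually reads the section is an assumption here, and the membership check merely mirrors the error message:

```python
import configparser

SUPPORTED = ['raid10', 'raid5', 'raid6', 'jbod']  # list from the error message

# Hypothetical conf fragment with an inline comment riding on the value.
conf = """
[disktype]
raid6  # hardware RAID
"""

cfg = configparser.ConfigParser(allow_no_value=True)
cfg.read_string(conf)

# [disktype] holds a bare value, so it parses as an option name with no value;
# the inline comment stays attached (and names are lowercased by default).
disktype = next(iter(cfg["disktype"]))
print(disktype in SUPPORTED)  # -> False
```

With the comment on its own line instead, the option name parses as plain raid6 and the membership check passes.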

Comment 7 Sachidananda Urs 2018-09-21 02:41:38 UTC
Created attachment 1485371 [details]
Updated gdeploy configuration file

Comment 8 Sachidananda Urs 2018-09-21 02:43:14 UTC
Jeremy, I have attached the modified conf file to the bug. What I meant is to remove the comments that are on the same line as a value. There were 4 or 5 such comments; I have removed them. Please use the attached config file.
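For illustration, the kind of change involved looks like this (the option name below is hypothetical; only the comment placement matters):

```ini
# Fails: the inline comment becomes part of the value
[pv]
devices=sda  # Change to @VDO_DEVICE_name@ if using vdo

# Works: the comment moved to its own line
[pv]
# Change to @VDO_DEVICE_name@ if using vdo
devices=sda
```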

Comment 9 Sachidananda Urs 2018-09-21 02:45:56 UTC
(In reply to Jeremy Tourville from comment #5)
> >>>you can't have the comments in the same line.>>>
> Are you referring to this?
> file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d /dev/sda
> I am not using VDO.  Yes, I understand my disk will be taken as sda.  I have
> uploaded my current gdeploy config file.  Despite removing -h and the associated
> parameters from that option, the install still fails with the unsupported disk error.


In the updated config file, please add the -h part back; removing it is not what I meant. I have explained the issue in Comment #8.

Comment 10 Jeremy Tourville 2018-09-21 12:00:35 UTC
Thank you for the clarification and example file.  I was able to get much further.  It seems my issue was related to formatting.  I had used the formatting from the oVirt documentation page - https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged/  The formatting on that page puts the comments on the same line as the values, which caused my issues.

Comment 11 Jeremy Tourville 2018-09-21 16:01:56 UTC
After modifying my gdeploy config file so that the combination of LV 1 through LV 4 totaled 5.5 TB, I was able to complete setup successfully.  This bug report may be closed.  My issue was due to the documentation's formatting.  Thank you for your support.

Comment 12 Sachidananda Urs 2018-09-23 03:45:04 UTC
(In reply to Jeremy Tourville from comment #11)
> After modifying my gdeploy config file so that the combination of LV 1
> through LV 4 totaled 5.5 TB, I was able to complete setup successfully.
> This bug report may be closed.  My issue was due to the documentation's
> formatting.  Thank you for your support.

You are welcome. Closing the bug.

