Bug 1394636 - Unable to create bricks with JBOD backend
Summary: Unable to create bricks with JBOD backend
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3 Async
Assignee: Sachidananda Urs
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1351528
 
Reported: 2016-11-14 06:02 UTC by SATHEESARAN
Modified: 2017-03-07 17:46 UTC
CC: 5 users

Fixed In Version: gdeploy-2.0.1-4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-07 11:35:42 UTC
Embargoed:




Links
System: Red Hat Product Errata
ID: RHSA-2017:0260
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: ansible and gdeploy security and bug fix update
Last Updated: 2017-02-07 16:32:47 UTC

Description SATHEESARAN 2016-11-14 06:02:04 UTC
Description of problem:
-----------------------
I was trying to create bricks using gdeploy with a JBOD backend and 4 disks.
gdeploy fails with a Python traceback (see Additional info below).

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gdeploy-2.0.1-3.el7rhgs

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Try creating bricks with 'disktype' as 'jbod' and 'diskcount' as '4'

Actual results:
---------------
gdeploy fails with a TypeError traceback

Expected results:
-----------------
gdeploy should successfully create bricks

Additional info:
----------------
Errors seen:
<snip>
TASK [Create volume group on the disks] ****************************************
changed: [dhcp37-54.lab.eng.blr.redhat.com] => (item={u'brick': u'/dev/vdb', u'vg': u'RHGS_vg1'})

PLAY RECAP *********************************************************************
dhcp37-54.lab.eng.blr.redhat.com : ok=1    changed=1    unreachable=0    failed=0   

Traceback (most recent call last):
  File "/usr/bin/gdeploy", line 198, in <module>
    main(sys.argv[1:])
  File "/usr/bin/gdeploy", line 183, in main
    call_features()
  File "/usr/lib/python2.7/site-packages/gdeploylib/call_features.py", line 36, in call_features
    map(get_feature_dir, Global.sections)
  File "/usr/lib/python2.7/site-packages/gdeploylib/call_features.py", line 83, in get_feature_dir
    section_dict, yml = feature_call(section_dict)
  File "/usr/lib/python2.7/site-packages/gdeployfeatures/lv/lv.py", line 22, in lv_create
    section_dict, yml = get_lv_vg_names('lvname', section_dict)
  File "/usr/lib/python2.7/site-packages/gdeployfeatures/lv/lv.py", line 65, in get_lv_vg_names
    section_dict, ymls = get_mount_data(section_dict, lvname, vgname)
  File "/usr/lib/python2.7/site-packages/gdeployfeatures/lv/lv.py", line 125, in get_mount_data
    -n size=8192"%(sw[0],su[0])
TypeError: 'int' object has no attribute '__getitem__'
</snip>
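
The failing line interpolates sw[0] and su[0] into the mkfs option string. As a guess at the root cause (the names sw/su and their stripe-width/stripe-unit meaning are inferred from lv.py and the error, not confirmed here), the RAID paths appear to hand these values over as sequences, while the JBOD path produces plain integers, so indexing them with [0] raises the TypeError. A minimal, self-contained Python sketch of that pattern:

def build_mkfs_opts(sw, su):
    # Mirrors the failing expression: index into sw/su, then interpolate
    # into the mkfs option string. Only the "-n size=8192" tail is visible
    # in the traceback; the rest of the string here is illustrative.
    return "sw=%s,su=%s -n size=8192" % (sw[0], su[0])

# RAID-style input: single-element sequences, so [0] works.
print(build_mkfs_opts([10], [256]))

# JBOD-style input: bare ints, so [0] fails with the same TypeError as above
# (Python 2 wording: 'int' object has no attribute '__getitem__').
try:
    print(build_mkfs_opts(4, 256))
except TypeError as err:
    print("TypeError: %s" % err)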

Comment 1 SATHEESARAN 2016-11-14 06:03:15 UTC
Creduts to Sac as he found out this issue and also the RCA for this issue

Comment 2 SATHEESARAN 2016-11-14 06:03:39 UTC
[hosts]
host1

[disktype]
jbod

[diskcount]
4

[pv]
action=create
devices=vdb

[vg1]
action=create
vgname=RHGS_vg1
pvname=vdb

[lv1]
action=create
vgname=RHGS_vg1
lvname=engine_lv
lvtype=thick
size=10GB
mount=/rhgs/brick1

Comment 3 SATHEESARAN 2016-11-14 06:04:11 UTC
(In reply to SATHEESARAN from comment #1)
> Creduts to Sac as he found out this issue and also the RCA for this issue

Sorry for the typo. Read it as 'Credits' to Sac.

Comment 4 Sachidananda Urs 2016-11-14 06:26:47 UTC
sas, I've fixed this upstream. Will cherry-pick.

Thanks for filing this bug.

Comment 6 Sachidananda Urs 2016-11-14 10:56:43 UTC
Commit: https://github.com/gluster/gdeploy/pull/216/commits/d25503400 fixes the issue.
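
The linked pull request is the authoritative fix. Purely as an illustrative sketch (not taken from that commit; the helper name below is hypothetical), one way get_mount_data could tolerate both shapes is to normalize the values before indexing and to skip stripe alignment for JBOD, where it does not apply:

def _first(value):
    # Hypothetical helper: return value[0] for sequences, or the value itself
    # for plain integers, so RAID-style and JBOD-style inputs are handled uniformly.
    return value[0] if isinstance(value, (list, tuple)) else value

def mkfs_options(disktype, sw, su):
    if disktype == "jbod":
        # JBOD has no stripe geometry, so no sw/su alignment options.
        return "-n size=8192"
    return "sw=%s,su=%s -n size=8192" % (_first(sw), _first(su))

print(mkfs_options("jbod", 4, 256))        # -> -n size=8192
print(mkfs_options("raid6", [10], [256]))  # -> sw=10,su=256 -n size=8192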

Comment 8 SATHEESARAN 2016-11-28 10:10:55 UTC
Tested with gdeploy-2.0.1-5 and everything works as expected with 'jbod' as the disktype.

Comment 10 errata-xmlrpc 2017-02-07 11:35:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0260.html

