Bug 1489439

Summary: [RFE] Allow multiple disks per volume group
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Sachidananda Urs <surs>
Component: gdeploy
Assignee: Devyani Kota <dkota>
Status: CLOSED ERRATA
QA Contact: Manisha Saini <msaini>
Severity: medium
Docs Contact:
Priority: unspecified
Version: rhgs-3.3
CC: amukherj, dkota, pmulay, rcyriac, rhinduja, rhs-bugs, sabose, sankarshan, sheggodu, smohan, storage-qa-internal, surs
Target Milestone: ---
Keywords: FutureFeature, ZStream
Target Release: RHGS 3.3.1 Async
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: gdeploy-2.0.2-23
Doc Type: If docs needed, set a value
Doc Text:
Previously, only one disk was allowed per volume group. Now, more than one disk can be used per volume group.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-06-21 03:33:14 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1581561, 1651116    

Description Sachidananda Urs 2017-09-07 13:04:23 UTC
Description of problem:

Currently, gdeploy is limited to one disk per volume group (1 disk -> 1 VG).
Enhance it to allow multiple disks per volume group.

The configuration file will look something like:

#==========================

[hosts]
10.70.42.122

[vg1]
action=create
vgname=RHGS_vg1
pvname=vdb,vdc

[lv1]
action=create
vgname=RHGS_vg1
lvname=engine_lv
lvtype=thick
size=10GB
mount=/rhgs/brick1
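
For reference, the [vg1] and [lv1] sections above map to roughly the following LVM commands (a minimal sketch, assuming /dev/vdb and /dev/vdc are unused disks and an XFS filesystem on the brick; not necessarily the exact commands gdeploy runs):

# pvcreate /dev/vdb /dev/vdc
# vgcreate RHGS_vg1 /dev/vdb /dev/vdc        # one VG spanning both PVs
# lvcreate -L 10G -n engine_lv RHGS_vg1      # thick LV, matching lvtype=thick
# mkfs.xfs /dev/RHGS_vg1/engine_lv
# mount /dev/RHGS_vg1/engine_lv /rhgs/brick1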

Comment 3 Devyani Kota 2017-10-16 17:33:50 UTC
PR #450 [1] fixes the issue.

One can use the following config file:
#####################
[hosts]
node 1

[pv]
action=create
devices=vdb,vdc

[vg1]
action=create
vgname=RHS_vg1
pvname=vdb,vdc
#####################

This creates a VG as follows:

[root@dhcp42-210 ~]# vgs
  VG              #PV #LV #SN Attr   VSize   VFree 
  RHS_vg1           2   0   0 wz--n-  99.99g 99.99g
  rhgs_dhcp42-210   1   2   0 wz--n- <19.00g     0 
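
To confirm which physical volumes ended up in the volume group, something like the following can be used (a quick check; output omitted):

# pvs -o pv_name,vg_name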

[1] https://github.com/gluster/gdeploy/pull/450

Thanks,
Devyani

Comment 5 Sachidananda Urs 2017-11-08 11:15:02 UTC
Recently I had a discussion with Sahina, regarding multiple PVs for a volume group. Sahina would you want this feature in 3.3.1 or 3.4.0?

Comment 6 Sahina Bose 2017-11-09 11:26:39 UTC
RHHI setup allows for multiple bricks in a thinpool as we don't require the gluster snapshot feature. This has been tested and qualified in RHHI 1.0

The requirement for multiple disks in a VG is when users want to attach lvmcache (may have only 1 NVMe SSD) to thinpool. If the server has multiple disks these can be formed into 1 VG -> 1 thinpool and cachepool attached to this.
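
For context, attaching an lvmcache to a thinpool on such a multi-disk VG looks roughly like this (a sketch only; device and LV names such as /dev/nvme0n1, cachepool, and thinpool are illustrative, and the NVMe SSD is assumed to be added to the same VG first):

# vgextend RHS_vg1 /dev/nvme0n1
# lvcreate --type cache-pool -L 100G -n cachepool RHS_vg1 /dev/nvme0n1
# lvconvert --type cache --cachepool RHS_vg1/cachepool RHS_vg1/thinpool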

Comment 9 Manisha Saini 2018-02-21 12:42:15 UTC
Devyani,

I tested this with the gdeploy-2.0.2-22 build using the sample conf file you provided in comment #3. This seems to be failing for me.


# rpm -qa | grep gdeploy
gdeploy-2.0.2-22.el7rhgs.noarch


# rpm -qa | grep ansible
ansible-2.3.2.0-1.el7.noarch


Gdeploy.conf -

[hosts]
dhcp37-121.lab.eng.blr.redhat.com

[pv]
action=create
devices=sdb,sdc,sdd,vda,vdb

[vg1]
action=create
vgname=RHS_vgs1
pvname=sdb,sdc,sdd,vda,vdb


Output:

************************************

# gdeploy  -c backend_Gdeploysetup2.conf 

PLAY [gluster_servers] ****************************************************************************************************************************************

TASK [Clean up filesystem signature] **************************************************************************************************************************
skipping: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/sdb) 
skipping: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/sdc) 
skipping: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/sdd) 
skipping: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/vda) 
skipping: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/vdb) 

TASK [Create Physical Volume] *********************************************************************************************************************************
changed: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/sdb)
changed: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/sdc)
changed: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/sdd)
changed: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/vda)
changed: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/vdb)

PLAY RECAP ****************************************************************************************************************************************************
dhcp37-121.lab.eng.blr.redhat.com : ok=1    changed=1    unreachable=0    failed=0   


PLAY [gluster_servers] ****************************************************************************************************************************************

TASK [Create volume group on the disks] ***********************************************************************************************************************
changed: [dhcp37-121.lab.eng.blr.redhat.com] => (item={u'brick': u'/dev/sdb', u'vg': u'RHS_vgs1'})
failed: [dhcp37-121.lab.eng.blr.redhat.com] (item={u'brick': u'/dev/sdc', u'vg': u'RHS_vgs1'}) => {"changed": false, "failed": true, "item": {"brick": "/dev/sdc", "vg": "RHS_vgs1"}, "msg": "A volume group called RHS_vgs1 already exists", "rc": 1}
failed: [dhcp37-121.lab.eng.blr.redhat.com] (item={u'brick': u'/dev/sdd', u'vg': u'RHS_vgs1'}) => {"changed": false, "failed": true, "item": {"brick": "/dev/sdd", "vg": "RHS_vgs1"}, "msg": "A volume group called RHS_vgs1 already exists", "rc": 1}
failed: [dhcp37-121.lab.eng.blr.redhat.com] (item={u'brick': u'/dev/vda', u'vg': u'RHS_vgs1'}) => {"changed": false, "failed": true, "item": {"brick": "/dev/vda", "vg": "RHS_vgs1"}, "msg": "A volume group called RHS_vgs1 already exists", "rc": 1}
failed: [dhcp37-121.lab.eng.blr.redhat.com] (item={u'brick': u'/dev/vdb', u'vg': u'RHS_vgs1'}) => {"changed": false, "failed": true, "item": {"brick": "/dev/vdb", "vg": "RHS_vgs1"}, "msg": "A volume group called RHS_vgs1 already exists", "rc": 1}
	to retry, use: --limit @/tmp/tmpt98PU1/vgcreate.retry

PLAY RECAP ****************************************************************************************************************************************************
dhcp37-121.lab.eng.blr.redhat.com : ok=0    changed=0    unreachable=0    failed=1   

Ignoring errors...
*******************************


CLI output:

# pvs
  PV         VG              Fmt  Attr PSize   PFree  
  /dev/sda2  rhel_dhcp37-121 lvm2 a--  <19.00g      0 
  /dev/sdb   RHS_vgs1        lvm2 a--  <20.00g <20.00g
  /dev/sdc                   lvm2 ---   20.00g  20.00g
  /dev/sdd                   lvm2 ---   20.00g  20.00g
  /dev/vda                   lvm2 ---   20.00g  20.00g
  /dev/vdb                   lvm2 ---   20.00g  20.00g
# vgs
  VG              #PV #LV #SN Attr   VSize   VFree  
  RHS_vgs1          1   0   0 wz--n- <20.00g <20.00g
  rhel_dhcp37-121   1   2   0 wz--n- <19.00g      0
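
From the play output above, the VG creation appears to be run once per disk, which is roughly equivalent to:

# vgcreate RHS_vgs1 /dev/sdb    # succeeds, VG created with a single PV
# vgcreate RHS_vgs1 /dev/sdc    # fails: volume group RHS_vgs1 already exists

whereas the expected behaviour is a single invocation with all the PVs (or a vgextend for the remaining disks), for example:

# vgcreate RHS_vgs1 /dev/sdb /dev/sdc /dev/sdd /dev/vda /dev/vdb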

Comment 10 Devyani Kota 2018-02-21 20:21:10 UTC
Hi Manisha,
The config file pasted in comment #3 works for me, with the following output [1].
PR #450 [2], as mentioned in the same comment, fixes the issue.

After comment #8, Sachi mentioned that it was fixed in gdeploy-2.0.2-20.
I checked the source code of gdeploy-2.0.2-22, and the changes from the patch are not there. (I will check the 2.0.2-20 source code as well.)
If it wasn't working for you, I presume the patch didn't make it into the build.
For now, we will make sure this patch is included in the next build.
We are working on a new build with more fixes, and we will make sure to cherry-pick this patch as well (by next week).

[1] https://paste.opensuse.org/94844657

CLI output:
[root@dhcp43-218 ~]# vgs
  VG           #PV #LV #SN Attr   VSize   VFree 
  RHS_vg1        2   0   0 wz--n-  99.99g 99.99g
  rhgs_network   1   2   0 wz--n- <19.00g     0 

[2] https://github.com/gluster/gdeploy/pull/450
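
One quick way to check whether an installed build carries the patch (assuming the fix touches the vgcreate playbook shipped by the package and is noted in the package changelog):

# rpm -ql gdeploy | grep -i vgcreate
# rpm -q --changelog gdeploy | head -n 20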

Comment 11 Manisha Saini 2018-02-22 06:23:40 UTC
Thanks, Devyani, for the update. I haven't tested this with gdeploy-2.0.2-20,
but accordingly it should work with the gdeploy-2.0.2-22.el7rhgs.noarch build as well.

Since it is failing for me on gdeploy-2.0.2-22.el7rhgs.noarch, and based on comment #9, I am moving this BZ to the ASSIGNED state to track the fix in the next gdeploy build (with this patch included).

Comment 12 Devyani Kota 2018-02-22 06:38:45 UTC
ack!

Comment 17 errata-xmlrpc 2018-06-21 03:33:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1958