Bug 1489439 - [RFE] Allow multiple disks per volume group
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gdeploy
Version: 3.3
Hardware: x86_64 Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.3.1 Async
Assigned To: Devyani Kota
QA Contact: Manisha Saini
Keywords: FutureFeature, ZStream
Depends On:
Blocks: 1581561
 
Reported: 2017-09-07 09:04 EDT by Sachidananda Urs
Modified: 2018-06-20 23:34 EDT
CC: 12 users

See Also:
Fixed In Version: gdeploy-2.0.2-23
Doc Type: If docs needed, set a value
Doc Text:
Previously, only one disk was allowed per volume group. With this update, more than one disk can be used per volume group.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-06-20 23:33:14 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker                 ID              Priority  Status  Summary  Last Updated
Red Hat Product Errata  RHEA-2018:1958  None      None    None     2018-06-20 23:34 EDT

Description Sachidananda Urs 2017-09-07 09:04:23 EDT
Description of problem:

Currently gdeploy is limited to one disk per volume group (1 disk -> 1 VG).
Enhance it to allow multiple disks per volume group.

The configuration file will look something like:

#==========================

[hosts]
10.70.42.122

[vg1]
action=create
vgname=RHGS_vg1
pvname=vdb,vdc

[lv1]
action=create
vgname=RHGS_vg1
lvname=engine_lv
lvtype=thick
size=10GB
mount=/rhgs/brick1
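
For reference, a minimal sketch of the equivalent manual LVM steps (not what gdeploy runs internally; the device names, VG/LV names, size, and mount point are taken from the sample config above, and the XFS filesystem is an assumption):

pvcreate /dev/vdb /dev/vdc                  # initialise both disks as physical volumes
vgcreate RHGS_vg1 /dev/vdb /dev/vdc         # one volume group spanning both PVs
lvcreate -L 10G -n engine_lv RHGS_vg1       # thick (linear) LV, matching lvtype=thick
mkfs.xfs /dev/RHGS_vg1/engine_lv            # assumed filesystem
mkdir -p /rhgs/brick1
mount /dev/RHGS_vg1/engine_lv /rhgs/brick1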
Comment 3 Devyani Kota 2017-10-16 13:33:50 EDT
PR #450[1] fixes the issue.

One can use the following config file:
#####################
[hosts]
node1

[pv]
action=create
devices=vdb,vdc

[vg1]
action=create
vgname=RHS_vg1
pvname=vdb,vdc
#####################

This creates a VG as follows:

[root@dhcp42-210 ~]# vgs
  VG              #PV #LV #SN Attr   VSize   VFree 
  RHS_vg1           2   0   0 wz--n-  99.99g 99.99g
  rhgs_dhcp42-210   1   2   0 wz--n- <19.00g     0 

[1] https://github.com/gluster/gdeploy/pull/450

Thanks,
Devyani
Comment 5 Sachidananda Urs 2017-11-08 06:15:02 EST
Recently I had a discussion with Sahina regarding multiple PVs for a volume group. Sahina, would you want this feature in 3.3.1 or 3.4.0?
Comment 6 Sahina Bose 2017-11-09 06:26:39 EST
RHHI setup allows multiple bricks in a thinpool, since we don't require the gluster snapshot feature. This has been tested and qualified in RHHI 1.0.

The requirement for multiple disks in a VG arises when users want to attach lvmcache (they may have only one NVMe SSD) to the thinpool. If the server has multiple disks, these can be combined into one VG with one thinpool, and the cachepool attached to it.
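
As a rough illustration of that use case (not taken from this bug; the device names, VG name, and LV names below are hypothetical), combining the data disks and the SSD in one VG and attaching a cache pool to the thin pool could look like:

vgcreate RHGS_vg1 /dev/vdb /dev/vdc /dev/nvme0n1                       # data disks plus one NVMe SSD in a single VG
lvcreate -L 150G -T RHGS_vg1/thinpool /dev/vdb /dev/vdc                # thin pool restricted to the data disks
lvcreate --type cache-pool -L 50G -n cachepool RHGS_vg1 /dev/nvme0n1   # cache pool on the SSD
lvconvert --type cache --cachepool RHGS_vg1/cachepool RHGS_vg1/thinpool  # attach the cache to the thin pool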
Comment 9 Manisha Saini 2018-02-21 07:42:15 EST
Devyani,

I tested this with the gdeploy-2.0.2-22 build, using the sample conf file you provided in comment #3. It is failing for me.


# rpm -qa | grep gdeploy
gdeploy-2.0.2-22.el7rhgs.noarch


# rpm -qa | grep ansible
ansible-2.3.2.0-1.el7.noarch


Gdeploy.conf -

[hosts]
dhcp37-121.lab.eng.blr.redhat.com

[pv]
action=create
devices=sdb,sdc,sdd,vda,vdb

[vg1]
action=create
vgname=RHS_vgs1
pvname=sdb,sdc,sdd,vda,vdb


Output=

************************************

# gdeploy  -c backend_Gdeploysetup2.conf 

PLAY [gluster_servers] ****************************************************************************************************************************************

TASK [Clean up filesystem signature] **************************************************************************************************************************
skipping: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/sdb) 
skipping: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/sdc) 
skipping: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/sdd) 
skipping: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/vda) 
skipping: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/vdb) 

TASK [Create Physical Volume] *********************************************************************************************************************************
changed: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/sdb)
changed: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/sdc)
changed: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/sdd)
changed: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/vda)
changed: [dhcp37-121.lab.eng.blr.redhat.com] => (item=/dev/vdb)

PLAY RECAP ****************************************************************************************************************************************************
dhcp37-121.lab.eng.blr.redhat.com : ok=1    changed=1    unreachable=0    failed=0   


PLAY [gluster_servers] ****************************************************************************************************************************************

TASK [Create volume group on the disks] ***********************************************************************************************************************
changed: [dhcp37-121.lab.eng.blr.redhat.com] => (item={u'brick': u'/dev/sdb', u'vg': u'RHS_vgs1'})
failed: [dhcp37-121.lab.eng.blr.redhat.com] (item={u'brick': u'/dev/sdc', u'vg': u'RHS_vgs1'}) => {"changed": false, "failed": true, "item": {"brick": "/dev/sdc", "vg": "RHS_vgs1"}, "msg": "A volume group called RHS_vgs1 already exists", "rc": 1}
failed: [dhcp37-121.lab.eng.blr.redhat.com] (item={u'brick': u'/dev/sdd', u'vg': u'RHS_vgs1'}) => {"changed": false, "failed": true, "item": {"brick": "/dev/sdd", "vg": "RHS_vgs1"}, "msg": "A volume group called RHS_vgs1 already exists", "rc": 1}
failed: [dhcp37-121.lab.eng.blr.redhat.com] (item={u'brick': u'/dev/vda', u'vg': u'RHS_vgs1'}) => {"changed": false, "failed": true, "item": {"brick": "/dev/vda", "vg": "RHS_vgs1"}, "msg": "A volume group called RHS_vgs1 already exists", "rc": 1}
failed: [dhcp37-121.lab.eng.blr.redhat.com] (item={u'brick': u'/dev/vdb', u'vg': u'RHS_vgs1'}) => {"changed": false, "failed": true, "item": {"brick": "/dev/vdb", "vg": "RHS_vgs1"}, "msg": "A volume group called RHS_vgs1 already exists", "rc": 1}
	to retry, use: --limit @/tmp/tmpt98PU1/vgcreate.retry

PLAY RECAP ****************************************************************************************************************************************************
dhcp37-121.lab.eng.blr.redhat.com : ok=0    changed=0    unreachable=0    failed=1   

Ignoring errors...
*******************************


CLI output:

# pvs
  PV         VG              Fmt  Attr PSize   PFree  
  /dev/sda2  rhel_dhcp37-121 lvm2 a--  <19.00g      0 
  /dev/sdb   RHS_vgs1        lvm2 a--  <20.00g <20.00g
  /dev/sdc                   lvm2 ---   20.00g  20.00g
  /dev/sdd                   lvm2 ---   20.00g  20.00g
  /dev/vda                   lvm2 ---   20.00g  20.00g
  /dev/vdb                   lvm2 ---   20.00g  20.00g
# vgs
  VG              #PV #LV #SN Attr   VSize   VFree  
  RHS_vgs1          1   0   0 wz--n- <20.00g <20.00g
  rhel_dhcp37-121   1   2   0 wz--n- <19.00g      0
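
The failure pattern above suggests the installed build still runs vgcreate once per disk rather than once per volume group: the first vgcreate succeeds, and every later one fails because the VG already exists. A minimal shell-level illustration (device and VG names taken from the output above):

vgcreate RHS_vgs1 /dev/sdb      # first disk: succeeds, creates the VG
vgcreate RHS_vgs1 /dev/sdc      # second disk: fails, "RHS_vgs1 already exists"

# What the multi-disk behaviour needs instead (either form):
vgcreate RHS_vgs1 /dev/sdb /dev/sdc /dev/sdd /dev/vda /dev/vdb
# or
vgcreate RHS_vgs1 /dev/sdb
vgextend RHS_vgs1 /dev/sdc /dev/sdd /dev/vda /dev/vdb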
Comment 10 Devyani Kota 2018-02-21 15:21:10 EST
Hi Manisha,
The config file pasted in comment #3 works for me, with the following output[1].
PR #450[2], mentioned in the same comment, fixes the issue.

After comment #8, Sachi mentioned that this was fixed in gdeploy-2.0.2-20.
I checked the source code for gdeploy-2.0.2-22, and the changes from the patch are not present. (I will check the 2.0.2-20 source as well.)
Since it isn't working for you, I presume the patch didn't make it into that build.
For now, we will make sure this patch is included in the next build.
We are working on a new build with more fixes and will cherry-pick this patch as well (by next week).

[1] https://paste.opensuse.org/94844657

CLI output:
[root@dhcp43-218 ~]# vgs
  VG           #PV #LV #SN Attr   VSize   VFree 
  RHS_vg1        2   0   0 wz--n-  99.99g 99.99g
  rhgs_network   1   2   0 wz--n- <19.00g     0 

[2] https://github.com/gluster/gdeploy/pull/450
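
A possible way to check whether an installed build carries the change (a sketch; the playbook name is inferred from the vgcreate.retry file in comment #9 and may differ):

# List the files shipped by the installed gdeploy package and find the vgcreate playbook
rpm -ql gdeploy | grep -i vgcreate

# Inspect how disks are passed to the vgcreate task; a per-disk loop suggests
# the multi-PV change from PR #450 is not present in this build.
less $(rpm -ql gdeploy | grep -i 'vgcreate.*\.yml' | head -n 1)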
Comment 11 Manisha Saini 2018-02-22 01:23:40 EST
Thanks, Devyani, for the update. I haven't tested this with gdeploy-2.0.2-20.
But accordingly, it should work with the gdeploy-2.0.2-22.el7rhgs.noarch build as well.


Since it is failing for me on gdeploy-2.0.2-22.el7rhgs.noarch, and based on comment #9, I am moving this BZ to ASSIGNED in order to track it for the next gdeploy build (with this patch included).
Comment 12 Devyani Kota 2018-02-22 01:38:45 EST
ack!
Comment 17 errata-xmlrpc 2018-06-20 23:33:14 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1958
