Bug 1277582

Summary: gdeploy uses different default naming scheme compared to rhgsc
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: gdeploy
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Status: CLOSED DEFERRED
Severity: low
Priority: unspecified
Reporter: Martin Bukatovic <mbukatov>
Assignee: Sachidananda Urs <surs>
QA Contact: Manisha Saini <msaini>
CC: nchilaka, rcyriac, smohan
Keywords: ZStream
Doc Type: Bug Fix
Type: Bug
Last Closed: 2018-11-15 08:42:31 UTC

Description Martin Bukatovic 2015-11-03 15:14:39 UTC
Description of problem
======================

Gdeploy uses a different default naming scheme for XFS brick mountpoints
than Gluster Storage Console. Unless there is a good reason for this
difference, both tools should use the same defaults.

Version-Release number of selected component (if applicable)
============================================================

gdeploy-1.0-12.el6rhs.noarch

How reproducible
================

100 %

Steps to Reproduce
==================

1. Prepare a configuration file for gdeploy, specifying just the 'hosts' and
   'devices' sections so that the default brick setup values are used.

   Example of such a gluster.conf file:

~~~
[hosts]
node-129.storage.example.com
node-131.storage.example.com
node-128.storage.example.com
node-130.storage.example.com

[devices]
/dev/vdb
~~~

2. Run gdeploy with this configuration (gdeploy -c gluster.conf).

Actual results
==============

Check the configuration created by gdeploy on the storage servers:

~~~
# lsblk /dev/vdb
NAME                                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdb                                        252:16   0   50G  0 disk 
├─GLUSTER_vg1-GLUSTER_pool1_tmeta (dm-0)   253:0    0  256M  0 lvm  
│ └─GLUSTER_vg1-GLUSTER_pool1-tpool (dm-2) 253:2    0 49.8G  0 lvm  
│   ├─GLUSTER_vg1-GLUSTER_pool1 (dm-3)     253:3    0 49.8G  0 lvm  
│   └─GLUSTER_vg1-GLUSTER_lv1 (dm-4)       253:4    0 49.8G  0 lvm  /gluster/brick1
└─GLUSTER_vg1-GLUSTER_pool1_tdata (dm-1)   253:1    0 49.8G  0 lvm  
  └─GLUSTER_vg1-GLUSTER_pool1-tpool (dm-2) 253:2    0 49.8G  0 lvm  
    ├─GLUSTER_vg1-GLUSTER_pool1 (dm-3)     253:3    0 49.8G  0 lvm  
    └─GLUSTER_vg1-GLUSTER_lv1 (dm-4)       253:4    0 49.8G  0 lvm  /gluster/brick1
~~~

Note that the XFS filesystem for the brick is mounted at /gluster/brick1.
This differs from the default used in Gluster Storage Console, which would
mount it at /rhgs/brick1 instead.

Expected results
================

Gdeploy would use the same default naming scheme for brick XFS mountpoints
as the console:

~~~
# lsblk /dev/vdb
NAME                                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdb                                        252:16   0   50G  0 disk 
├─GLUSTER_vg1-GLUSTER_pool1_tmeta (dm-0)   253:0    0  256M  0 lvm  
│ └─GLUSTER_vg1-GLUSTER_pool1-tpool (dm-2) 253:2    0 49.8G  0 lvm  
│   ├─GLUSTER_vg1-GLUSTER_pool1 (dm-3)     253:3    0 49.8G  0 lvm  
│   └─GLUSTER_vg1-GLUSTER_lv1 (dm-4)       253:4    0 49.8G  0 lvm  /rhgs/brick1
└─GLUSTER_vg1-GLUSTER_pool1_tdata (dm-1)   253:1    0 49.8G  0 lvm  
  └─GLUSTER_vg1-GLUSTER_pool1-tpool (dm-2) 253:2    0 49.8G  0 lvm  
    ├─GLUSTER_vg1-GLUSTER_pool1 (dm-3)     253:3    0 49.8G  0 lvm  
    └─GLUSTER_vg1-GLUSTER_lv1 (dm-4)       253:4    0 49.8G  0 lvm  /rhgs/brick1
~~~
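
Additional info
===============

A possible workaround until the defaults are aligned (a sketch only; the
[mountpoints] and [brick_dirs] section names and the brick directory path
are taken from my reading of the gdeploy configuration examples and may
need adjusting): set the mountpoint explicitly instead of relying on the
default, for example:

~~~
[hosts]
node-129.storage.example.com
node-131.storage.example.com
node-128.storage.example.com
node-130.storage.example.com

[devices]
/dev/vdb

# assumed section name: override the default /gluster/brick1 mountpoint
[mountpoints]
/rhgs/brick1

# assumed section name: brick directory under the mountpoint
[brick_dirs]
/rhgs/brick1/brick
~~~

This only works around the inconsistent default; the defaults themselves
should still match between gdeploy and the console.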

Comment 1 Nandaja Varma 2015-11-04 04:28:44 UTC
The naming was initially consistent with this, but we later received a suggestion to change it to 'gluster', since gdeploy will be used by upstream users as well and is not specific to RHGS.

Comment 5 Sachidananda Urs 2018-11-15 08:42:31 UTC
gdeploy has been deprecated in favour of gluster-ansible; this bug will not be addressed in gdeploy.