Bug 1277582 - gdeploy uses different default naming scheme compared to rhgsc
Status: ON_QA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gdeploy
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Severity: low
Assigned To: Sachidananda Urs
QA Contact: Manisha Saini
Keywords: ZStream
Reported: 2015-11-03 10:14 EST by Martin Bukatovic
Modified: 2017-10-11 03:25 EDT (History)
CC: 3 users

Doc Type: Bug Fix
Type: Bug

Attachments: None
Description Martin Bukatovic 2015-11-03 10:14:39 EST
Description of problem
======================

gdeploy uses a different default naming scheme for XFS brick mount points
than Gluster Storage Console. Unless there is a good reason for the
difference, both tools should use the same defaults.

Version-Release number of selected component (if applicable)
============================================================

gdeploy-1.0-12.el6rhs.noarch

How reproducible
================

100 %

Steps to Reproduce
==================

1. Prepare configuration file for gdeploy and specify just 'hosts' and
   'devices' sections so that default values of brick setup are used.

   Example of such gluster.conf file:

~~~
[hosts]
node-129.storage.example.com
node-131.storage.example.com
node-128.storage.example.com
node-130.storage.example.com

[devices]
/dev/vdb
~~~

2. Run gdeploy with this configuration (gdeploy -c gluster.conf).

Actual results
==============

Check configuration created by gdeploy on storage servers:

~~~
# lsblk /dev/vdb
NAME                                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdb                                        252:16   0   50G  0 disk 
├─GLUSTER_vg1-GLUSTER_pool1_tmeta (dm-0)   253:0    0  256M  0 lvm  
│ └─GLUSTER_vg1-GLUSTER_pool1-tpool (dm-2) 253:2    0 49.8G  0 lvm  
│   ├─GLUSTER_vg1-GLUSTER_pool1 (dm-3)     253:3    0 49.8G  0 lvm  
│   └─GLUSTER_vg1-GLUSTER_lv1 (dm-4)       253:4    0 49.8G  0 lvm  /gluster/brick1
└─GLUSTER_vg1-GLUSTER_pool1_tdata (dm-1)   253:1    0 49.8G  0 lvm  
  └─GLUSTER_vg1-GLUSTER_pool1-tpool (dm-2) 253:2    0 49.8G  0 lvm  
    ├─GLUSTER_vg1-GLUSTER_pool1 (dm-3)     253:3    0 49.8G  0 lvm  
    └─GLUSTER_vg1-GLUSTER_lv1 (dm-4)       253:4    0 49.8G  0 lvm  /gluster/brick1
~~~

Note that the XFS filesystem for the brick is mounted at /gluster/brick1.
This differs from the default used by Gluster Storage Console, which would
mount it at /rhgs/brick1 instead.
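
Until the defaults are aligned, the mount point can be set explicitly in the
configuration instead of relying on the default. A minimal sketch, assuming
the gdeploy version in use supports a [mountpoints] section (the section name
is an assumption here and should be checked against the gdeploy
documentation for the installed version):

~~~
[hosts]
node-129.storage.example.com
node-131.storage.example.com
node-128.storage.example.com
node-130.storage.example.com

[devices]
/dev/vdb

# Assumed section: explicit mount point for the brick filesystem,
# overriding the /gluster/brick1 default to match the console's
# /rhgs naming scheme.
[mountpoints]
/rhgs/brick1
~~~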

Expected results
================

gdeploy uses the same default naming scheme for brick XFS mount points as
the console:

~~~
# lsblk /dev/vdb
NAME                                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdb                                        252:16   0   50G  0 disk 
├─GLUSTER_vg1-GLUSTER_pool1_tmeta (dm-0)   253:0    0  256M  0 lvm  
│ └─GLUSTER_vg1-GLUSTER_pool1-tpool (dm-2) 253:2    0 49.8G  0 lvm  
│   ├─GLUSTER_vg1-GLUSTER_pool1 (dm-3)     253:3    0 49.8G  0 lvm  
│   └─GLUSTER_vg1-GLUSTER_lv1 (dm-4)       253:4    0 49.8G  0 lvm  /rhgs/brick1
└─GLUSTER_vg1-GLUSTER_pool1_tdata (dm-1)   253:1    0 49.8G  0 lvm  
  └─GLUSTER_vg1-GLUSTER_pool1-tpool (dm-2) 253:2    0 49.8G  0 lvm  
    ├─GLUSTER_vg1-GLUSTER_pool1 (dm-3)     253:3    0 49.8G  0 lvm  
    └─GLUSTER_vg1-GLUSTER_lv1 (dm-4)       253:4    0 49.8G  0 lvm  /rhgs/brick1
~~~
Comment 1 Nandaja Varma 2015-11-03 23:28:44 EST
The naming was initially consistent with this, but we later received a suggestion to change it to "gluster", since gdeploy will be used by upstream users as well and is not specific to RHGS.
