Bug 1277582 - gdeploy uses different default naming scheme compared to rhgsc
Summary: gdeploy uses different default naming scheme compared to rhgsc
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: Sachidananda Urs
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-11-03 15:14 UTC by Martin Bukatovic
Modified: 2018-11-15 08:42 UTC
3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-15 08:42:31 UTC
Embargoed:



Description Martin Bukatovic 2015-11-03 15:14:39 UTC
Description of problem
======================

gdeploy uses a different default naming scheme for XFS brick mount points
than Gluster Storage Console. Unless there is a good, hidden reason for the
difference, both tools should use the same defaults.

Version-Release number of selected component (if applicable)
============================================================

gdeploy-1.0-12.el6rhs.noarch

How reproducible
================

100%

Steps to Reproduce
==================

1. Prepare configuration file for gdeploy and specify just 'hosts' and
   'devices' sections so that default values of brick setup are used.

   Example of such gluster.conf file:

~~~
[hosts]
node-129.storage.example.com
node-131.storage.example.com
node-128.storage.example.com
node-130.storage.example.com

[devices]
/dev/vdb
~~~

2. Run gdeploy with this configuration (gdeploy -c gluster.conf).

Actual results
==============

Check configuration created by gdeploy on storage servers:

~~~
# lsblk /dev/vdb
NAME                                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdb                                        252:16   0   50G  0 disk 
├─GLUSTER_vg1-GLUSTER_pool1_tmeta (dm-0)   253:0    0  256M  0 lvm  
│ └─GLUSTER_vg1-GLUSTER_pool1-tpool (dm-2) 253:2    0 49.8G  0 lvm  
│   ├─GLUSTER_vg1-GLUSTER_pool1 (dm-3)     253:3    0 49.8G  0 lvm  
│   └─GLUSTER_vg1-GLUSTER_lv1 (dm-4)       253:4    0 49.8G  0 lvm  /gluster/brick1
└─GLUSTER_vg1-GLUSTER_pool1_tdata (dm-1)   253:1    0 49.8G  0 lvm  
  └─GLUSTER_vg1-GLUSTER_pool1-tpool (dm-2) 253:2    0 49.8G  0 lvm  
    ├─GLUSTER_vg1-GLUSTER_pool1 (dm-3)     253:3    0 49.8G  0 lvm  
    └─GLUSTER_vg1-GLUSTER_lv1 (dm-4)       253:4    0 49.8G  0 lvm  /gluster/brick1
~~~

Note that the XFS filesystem for the brick is mounted at /gluster/brick1.
This differs from the default used by Gluster Storage Console, which would
mount it at /rhgs/brick1 instead.
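
As a possible workaround until the defaults are aligned, the mount point can
be set explicitly in the gdeploy configuration instead of relying on the
default. This is a sketch only; it assumes this gdeploy version supports the
[mountpoints] section:

~~~
[hosts]
node-129.storage.example.com
node-131.storage.example.com
node-128.storage.example.com
node-130.storage.example.com

[devices]
/dev/vdb

# Assumption: a [mountpoints] section overrides the default
# /gluster/brickN mount point with the console-style path.
[mountpoints]
/rhgs/brick1
~~~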

Expected results
================

gdeploy would use the same default naming scheme for brick XFS mount points
as the console:

~~~
# lsblk /dev/vdb
NAME                                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdb                                        252:16   0   50G  0 disk 
├─GLUSTER_vg1-GLUSTER_pool1_tmeta (dm-0)   253:0    0  256M  0 lvm  
│ └─GLUSTER_vg1-GLUSTER_pool1-tpool (dm-2) 253:2    0 49.8G  0 lvm  
│   ├─GLUSTER_vg1-GLUSTER_pool1 (dm-3)     253:3    0 49.8G  0 lvm  
│   └─GLUSTER_vg1-GLUSTER_lv1 (dm-4)       253:4    0 49.8G  0 lvm  /rhgs/brick1
└─GLUSTER_vg1-GLUSTER_pool1_tdata (dm-1)   253:1    0 49.8G  0 lvm  
  └─GLUSTER_vg1-GLUSTER_pool1-tpool (dm-2) 253:2    0 49.8G  0 lvm  
    ├─GLUSTER_vg1-GLUSTER_pool1 (dm-3)     253:3    0 49.8G  0 lvm  
    └─GLUSTER_vg1-GLUSTER_lv1 (dm-4)       253:4    0 49.8G  0 lvm  /rhgs/brick1
~~~

Comment 1 Nandaja Varma 2015-11-04 04:28:44 UTC
The naming was initially consistent with this, but we later got a suggestion to change it to GLUSTER, since gdeploy is also used by upstream users and is not specific to RHGS.

Comment 5 Sachidananda Urs 2018-11-15 08:42:31 UTC
gdeploy has been deprecated in favour of gluster-ansible; this bug will not be addressed in gdeploy.

