Bug 1332158 - Brick, TCP ports are shown N/A after volume creation through gdeploy
Summary: Brick, TCP ports are shown N/A after volume creation through gdeploy
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gdeploy
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Sachidananda Urs
QA Contact: Anush Shetty
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-02 11:33 UTC by Bhaskarakiran
Modified: 2016-11-23 23:12 UTC (History)
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-04 10:17:32 UTC
Target Upstream Version:


Attachments

Description Bhaskarakiran 2016-05-02 11:33:38 UTC
Description of problem:
=======================

Created 1x3 replicated volumes through gdeploy; the `gluster volume status` output is below. LV creation and mounting work fine, but the brick TCP ports are shown as N/A.

[root@rhsqa1 ~]# gluster v status
Status of volume: data
Gluster process                                         TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------------------
Brick rhsqa1.lab.eng.blr.redhat.com:/rhgs/data/data     N/A       N/A        N       N/A
Brick rhsqa13.lab.eng.blr.redhat.com:/rhgs/data/data    N/A       N/A        N       N/A
Brick rhsqa4.lab.eng.blr.redhat.com:/rhgs/data/data     N/A       N/A        N       N/A
NFS Server on localhost                                 N/A       N/A        N       N/A
Self-heal Daemon on localhost                           N/A       N/A        N       N/A
NFS Server on rhsqa4.lab.eng.blr.redhat.com             N/A       N/A        N       N/A
Self-heal Daemon on rhsqa4.lab.eng.blr.redhat.com       N/A       N/A        N       N/A
NFS Server on rhsqa13.lab.eng.blr.redhat.com            N/A       N/A        N       N/A
Self-heal Daemon on rhsqa13.lab.eng.blr.redhat.com      N/A       N/A        N       N/A
 
Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: engine_vol
Gluster process                                         TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------------------
Brick rhsqa1.lab.eng.blr.redhat.com:/rhgs/engine/ev     N/A       N/A        N       N/A
Brick rhsqa13.lab.eng.blr.redhat.com:/rhgs/engine/ev    N/A       N/A        N       N/A
Brick rhsqa4.lab.eng.blr.redhat.com:/rhgs/engine/ev     N/A       N/A        N       N/A
NFS Server on localhost                                 N/A       N/A        N       N/A
Self-heal Daemon on localhost                           N/A       N/A        N       N/A
NFS Server on rhsqa4.lab.eng.blr.redhat.com             N/A       N/A        N       N/A
Self-heal Daemon on rhsqa4.lab.eng.blr.redhat.com       N/A       N/A        N       N/A
NFS Server on rhsqa13.lab.eng.blr.redhat.com            N/A       N/A        N       N/A
Self-heal Daemon on rhsqa13.lab.eng.blr.redhat.com      N/A       N/A        N       N/A
 
Task Status of Volume engine_vol
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vmstore
Gluster process                                         TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------------------
Brick rhsqa1.lab.eng.blr.redhat.com:/rhgs/vmstore/vms   N/A       N/A        Y       12215
Brick rhsqa13.lab.eng.blr.redhat.com:/rhgs/vmstore/vms  N/A       N/A        Y       16131
Brick rhsqa4.lab.eng.blr.redhat.com:/rhgs/vmstore/vms   N/A       N/A        Y       18868
NFS Server on localhost                                 N/A       N/A        N       N/A
Self-heal Daemon on localhost                           N/A       N/A        N       N/A
NFS Server on rhsqa4.lab.eng.blr.redhat.com             N/A       N/A        N       N/A
Self-heal Daemon on rhsqa4.lab.eng.blr.redhat.com       N/A       N/A        N       N/A
NFS Server on rhsqa13.lab.eng.blr.redhat.com            N/A       N/A        N       N/A
Self-heal Daemon on rhsqa13.lab.eng.blr.redhat.com      N/A       N/A        N       N/A
 
Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks

volume creation conf file:
=========================
[volume1]
action=create
volname=engine_vol
transport=tcp,rdma
replica=yes
replica_count=3
force=yes
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm
value=virt,36,36,on,512MB,32,full
brick_dirs=/rhgs/engine/ev

[volume2]
action=create
volname=vmstore
transport=tcp,rdma
replica=yes
replica_count=3
force=yes
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm
value=virt,36,36,on,512MB,32,full
brick_dirs=/rhgs/vmstore/vms

[volume3]
action=create
volname=data
transport=tcp,rdma
replica=yes
replica_count=3
force=yes
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm
value=virt,36,36,on,512MB,32,full
brick_dirs=/rhgs/data/data
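For reference, comment 2 below identifies transport=tcp,rdma as the trigger of the failed volume start. A tcp-only variant of the first section, sketching the suggested workaround (the same one-line change applies to [volume2] and [volume3]):

```ini
[volume1]
action=create
volname=engine_vol
# Workaround: tcp only, dropping rdma until the underlying gluster issue is fixed
transport=tcp
replica=yes
replica_count=3
force=yes
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm
value=virt,36,36,on,512MB,32,full
brick_dirs=/rhgs/engine/ev
```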


Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gdeploy-2.0-8

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Sachidananda Urs 2016-05-03 10:17:16 UTC
Bhaskar, this is due to a bug in gluster. In the config file you've set
transport=tcp,rdma, which causes the volume start to fail:

volume start: foo: failed: Commit failed on localhost. Please check log file for details.

For now, remove rdma from the transport option until the gluster issue is
root-caused and fixed. Also, Kaushal suggested installing the glusterfs-rdma
package and checking.

Can you please close this bug?

Comment 3 Bhaskarakiran 2016-05-04 10:17:32 UTC
I will check that. Closing this for now.

