Bug 987092 - [RFE] core: volume status does not display brick ports
Status: CLOSED CURRENTRELEASE
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
Sub Component: glusterd
Keywords: FutureFeature
Depends On:
Blocks:
Reported: 2013-07-22 13:04 EDT by Saurabh
Modified: 2016-01-19 01:15 EST
CC List: 5 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-24 04:32:30 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Saurabh 2013-07-22 13:04:25 EDT
Description of problem:
gluster volume status does not display the ports for all of the bricks.
The issue is seen after bringing the already-down bricks back online using
gluster volume start <vol-name> force


Version-Release number of selected component (if applicable):
[root@nfs1 ~]# rpm -qa | grep glusterfs
glusterfs-3.4.0.12rhs.beta4-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.12rhs.beta4-1.el6rhs.x86_64
glusterfs-server-3.4.0.12rhs.beta4-1.el6rhs.x86_64


How reproducible:
always

Steps to Reproduce:
Setup: 4 RHS nodes [node1, node2, node3, node4]
1. Create a volume and start it.
2. Mount the volume over NFS.
3. Start I/O on the mount point.
4. While I/O is in progress, kill all gluster-related processes on node2 and node3.
5. After some time, restart glusterd on node2 and node3 (/etc/init.d/glusterd start),
   then bring the bricks back online with
   gluster volume start <vol-name> force
6. Run gluster volume status <vol-name> (a shell sketch of these steps follows the list).
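
A minimal shell sketch of the above steps, for illustration only. The volume
name quota-dist-rep and the brick paths are taken from the status output below;
the hostnames node1..node4, the client mount point, and the dd I/O are
placeholders and not part of the original report.

# on node1: create and start a 6x2 distribute-replicate volume (layout illustrative)
gluster volume create quota-dist-rep replica 2 \
    node1:/rhs/bricks/quota-d1r1 node2:/rhs/bricks/quota-d1r2 \
    node3:/rhs/bricks/quota-d2r1 node4:/rhs/bricks/quota-d2r2 \
    node1:/rhs/bricks/quota-d3r1 node2:/rhs/bricks/quota-d3r2 \
    node3:/rhs/bricks/quota-d4r1 node4:/rhs/bricks/quota-d4r2 \
    node1:/rhs/bricks/quota-d5r1 node2:/rhs/bricks/quota-d5r2 \
    node3:/rhs/bricks/quota-d6r1 node4:/rhs/bricks/quota-d6r2
gluster volume start quota-dist-rep

# on a client: mount over NFS (gluster NFS is v3) and generate I/O
mount -t nfs -o vers=3 node1:/quota-dist-rep /mnt/quota-dist-rep
dd if=/dev/zero of=/mnt/quota-dist-rep/testfile bs=1M count=10240 &

# on node2 and node3, while the I/O is running: kill all gluster processes
killall glusterd glusterfsd glusterfs

# after some time, on node2 and node3: restart glusterd first
/etc/init.d/glusterd start

# then, on any node: bring the killed bricks back and check the status
gluster volume start quota-dist-rep force
gluster volume status quota-dist-rep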

Actual results:

Status of volume: quota-dist-rep
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.37.180:/rhs/bricks/quota-d1r1		49172	Y	23303
Brick 10.70.37.80:/rhs/bricks/quota-d1r2		N/A	Y	18615
Brick 10.70.37.216:/rhs/bricks/quota-d2r1		N/A	Y	5388
Brick 10.70.37.139:/rhs/bricks/quota-d2r2		49172	Y	17100
Brick 10.70.37.180:/rhs/bricks/quota-d3r1		49173	Y	23314
Brick 10.70.37.80:/rhs/bricks/quota-d3r2		N/A	Y	18623
Brick 10.70.37.216:/rhs/bricks/quota-d4r1		N/A	Y	5394
Brick 10.70.37.139:/rhs/bricks/quota-d4r2		49173	Y	17111
Brick 10.70.37.180:/rhs/bricks/quota-d5r1		49174	Y	23325
Brick 10.70.37.80:/rhs/bricks/quota-d5r2		N/A	Y	18627
Brick 10.70.37.216:/rhs/bricks/quota-d6r1		N/A	Y	5402
Brick 10.70.37.139:/rhs/bricks/quota-d6r2		49174	Y	17122
NFS Server on localhost					2049	Y	25673
Self-heal Daemon on localhost				N/A	Y	25680
NFS Server on 10.70.37.139				2049	Y	18714
Self-heal Daemon on 10.70.37.139			N/A	Y	18721
NFS Server on 10.70.37.80				2049	Y	20101
Self-heal Daemon on 10.70.37.80				N/A	Y	20114
NFS Server on 10.70.37.216				2049	Y	6885
Self-heal Daemon on 10.70.37.216			N/A	Y	6892
 
           Task                                      ID         Status
           ----                                      --         ------
      Rebalance    9e281276-6e32-43d6-8028-d06c80dc3b18              3
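
The bricks whose Port column shows N/A are exactly the ones on node2
(10.70.37.80) and node3 (10.70.37.216) that were brought back with
"start force". One way to confirm such a brick process is up and listening
despite the N/A, using a PID from the table above (the ss/netstat check is
a generic Linux command, not part of the original report):

# on 10.70.37.80, using the brick PID 18615 reported above
ss -tlnp | grep 18615        # or: netstat -tlnp | grep 18615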

Expected results:
Ports should be displayed for all bricks, including bricks brought back online with
gluster volume start <vol-name> force

Additional info:
Comment 2 Atin Mukherjee 2015-12-24 04:32:30 EST
This is working with the current release.
