Bug 1223715 - Though brick daemon is not running, gluster vol status command shows the pid
Summary: Though brick daemon is not running, gluster vol status command shows the pid
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Gaurav Kumar Garg
QA Contact: SATHEESARAN
URL:
Whiteboard: GlusterD
Depends On:
Blocks: 1202842 1223772 1228065
 
Reported: 2015-05-21 10:11 UTC by RajeshReddy
Modified: 2016-07-13 22:34 UTC
CC: 11 users

Fixed In Version: glusterfs-3.7.1-1
Doc Type: Bug Fix
Doc Text:
Previously, when the gluster volume status command was executed, glusterd showed the brick pid even when the brick daemon was offline. With this fix, the brick pid is not displayed if the brick daemon is offline.
Clone Of:
: 1223772 (view as bug list)
Environment:
Last Closed: 2015-07-29 04:44:26 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description RajeshReddy 2015-05-21 10:11:09 UTC
Description of problem:
========================
Though the brick daemon is not running, the gluster vol status command shows the pid.

Version-Release number of selected component (if applicable):
==================================
Though the brick daemon is not running, the gluster vol status command shows the pid.


How reproducible:


Steps to Reproduce:
=======================
1. Create a volume with two bricks on two different nodes, then kill the brick process on one of the nodes.
2. Observe gluster vol status: it still shows the pid even though no brick daemon is running on that node.

Actual results:


Expected results:
===============
Status should show N/A in the PID column for the down brick.

Additional info:
====================

[root@rhs-client38 ~]# gluster vol status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.33.229:/rajesh2/brick4          49185     0          Y       4291 
Brick 10.70.33.235:/rajesh2/brick4          N/A       N/A        N       17125
NFS Server on localhost                     2049      0          Y       17145
Bitrot Daemon on localhost                  N/A       N/A        Y       17150
Scrubber Daemon on localhost                N/A       N/A        Y       17162
NFS Server on 10.70.33.229                  2049      0          Y       13601
Bitrot Daemon on 10.70.33.229               N/A       N/A        Y       13609
Scrubber Daemon on 10.70.33.229             N/A       N/A        Y       13621
 
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks



[root@rhs-client38 ~]# ps -aux | grep 17125
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
root     17245  0.0  0.0 103252   864 pts/0    S+   02:10   0:00 grep 17125
[root@rhs-client38 ~]#
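The output above shows the core symptom: brick pid 17125 is reported by gluster vol status even though ps finds no such process. The upstream patch makes glusterd verify brick liveness instead of blindly printing the recorded pid. As an illustration only (the real glusterd fix is in C and uses glusterd's own brick state, not this hypothetical helper), a minimal pid liveness check can be sketched like this:

```python
import os

def pid_or_na(pid):
    """Return str(pid) if a process with that pid is alive, else "N/A".

    Sketch of the fixed behaviour: a status report should print N/A
    rather than a stale pid recorded for a dead brick process.
    """
    try:
        os.kill(pid, 0)          # signal 0: existence check, sends no signal
    except ProcessLookupError:
        return "N/A"             # no such process -> brick is down
    except PermissionError:
        pass                     # process exists but belongs to another user
    return str(pid)
```

With a check like this, a pid left behind by a killed brick (such as 17125 above) would be reported as N/A, matching the expected results.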

Comment 3 Gaurav Kumar Garg 2015-05-21 12:36:23 UTC
The upstream patch http://review.gluster.org/#/c/10877/ is available. Once it is merged, I will clone it to RHGS.

Comment 6 RajeshReddy 2015-06-09 14:17:05 UTC
Tested with glusterfs-api-3.7.1-1; vol status no longer shows the PID of a non-running brick, so marking this bug as verified.

Comment 7 Atin Mukherjee 2015-06-09 18:09:56 UTC
Shouldn't this be marked as Verified?

Comment 8 Bhavana 2015-07-13 06:45:49 UTC
Hi Gaurav,

The doc text is updated. Please review the same and share your technical review comments. If it looks ok, then sign-off on the same.

Regards,
Bhavana

Comment 9 errata-xmlrpc 2015-07-29 04:44:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

