Bug 1223715

Summary: Though brick daemon is not running, gluster vol status command shows the pid
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: RajeshReddy <rmekala>
Component: glusterd
Assignee: Gaurav Kumar Garg <ggarg>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: medium
Docs Contact:
Priority: medium
Version: rhgs-3.1
CC: amukherj, asrivast, bmohanra, ggarg, kparthas, mzywusko, nlevinki, rmekala, sasundar, smohan, vbellur
Target Milestone: ---
Keywords: Triaged
Target Release: RHGS 3.1.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: GlusterD
Fixed In Version: glusterfs-3.7.1-1
Doc Type: Bug Fix
Doc Text:
Previously, when the gluster volume status command was executed, glusterd showed the brick pid even when the brick daemon was offline. With this fix, the brick pid is not displayed if the brick daemon is offline.
Story Points: ---
Clone Of:
Clones: 1223772 (view as bug list)
Environment:
Last Closed: 2015-07-29 04:44:26 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1202842, 1223772, 1228065    

Description RajeshReddy 2015-05-21 10:11:09 UTC
Description of problem:
========================
Though the brick daemon is not running, the gluster vol status command shows the pid.

Version-Release number of selected component (if applicable):
==================================


How reproducible:


Steps to Reproduce:
=======================
1. Create a volume with two bricks on two different nodes, then kill the brick process on one of the nodes.
2. Run gluster vol status: it still shows a pid for the brick even though no brick daemon is running (see the sketch after this list).
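A minimal reproduction sketch, assuming a two-node trusted pool; the hostnames (node1, node2) and brick paths are illustrative, not taken from this report:

# On either node, create and start a plain two-brick volume:
gluster volume create vol1 node1:/bricks/brick1 node2:/bricks/brick1
gluster volume start vol1

# On node2, kill the glusterfsd process serving its brick:
kill -9 $(pgrep -f '/bricks/brick1')

# The Online column now shows N for the dead brick, but before the
# fix the Pid column still showed the stale pid:
gluster volume status vol1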

Actual results:
===============
gluster vol status still shows a pid (17125) for the killed brick even though its Online column is N; see Additional info.

Expected results:
===============
Status should show N/A in the Pid column for the down brick.

Additional info:
====================

[root@rhs-client38 ~]# gluster vol status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.33.229:/rajesh2/brick4          49185     0          Y       4291 
Brick 10.70.33.235:/rajesh2/brick4          N/A       N/A        N       17125
NFS Server on localhost                     2049      0          Y       17145
Bitrot Daemon on localhost                  N/A       N/A        Y       17150
Scrubber Daemon on localhost                N/A       N/A        Y       17162
NFS Server on 10.70.33.229                  2049      0          Y       13601
Bitrot Daemon on 10.70.33.229               N/A       N/A        Y       13609
Scrubber Daemon on 10.70.33.229             N/A       N/A        Y       13621
 
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks



[root@rhs-client38 ~]# ps -aux | grep 17125
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
root     17245  0.0  0.0 103252   864 pts/0    S+   02:10   0:00 grep 17125
[root@rhs-client38 ~]#
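As an aside, the grep above matches only its own process, which is what confirms that pid 17125 is gone. A sketch of two checks that avoid both the procps syntax warning and the grep self-match:

# Query the pid directly; prints nothing and exits non-zero when 17125 is dead:
ps -p 17125 -o pid,cmd

# Or probe with signal 0, which tests for existence without sending anything:
kill -0 17125 2>/dev/null && echo alive || echo dead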

Comment 3 Gaurav Kumar Garg 2015-05-21 12:36:23 UTC
The upstream patch http://review.gluster.org/#/c/10877/ is available. Once it is merged, I will clone it to RHGS.

Comment 6 RajeshReddy 2015-06-09 14:17:05 UTC
Tested with glusterfs-api-3.7.1-1; vol status no longer shows the PID of a brick that is not running, so marking this bug as verified.
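For reference, a sketch of the verification flow on the fixed build; the expected output line is illustrative, shaped after the status table in the description:

# Kill one brick daemon, then re-check the status table:
kill -9 $(pgrep -f '/rajesh2/brick4')
gluster volume status vol1
# With the fix, the down brick's row should read N/A in the Pid column, e.g.:
#   Brick 10.70.33.235:/rajesh2/brick4   N/A   N/A   N   N/A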

Comment 7 Atin Mukherjee 2015-06-09 18:09:56 UTC
Shouldn't this be marked as Verified?

Comment 8 Bhavana 2015-07-13 06:45:49 UTC
Hi Gaurav,

The doc text is updated. Please review it and share your technical review comments. If it looks OK, sign off on it.

Regards,
Bhavana

Comment 9 errata-xmlrpc 2015-07-29 04:44:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html