Bug 1434448

Summary: Brick Multiplexing: Volume status still shows the PID even after killing the process
Product: [Community] GlusterFS
Reporter: Nag Pavan Chilakam <nchilaka>
Component: core
Assignee: Jeff Darcy <jeff>
Status: CLOSED EOL
Severity: medium
Priority: unspecified
Version: 3.10
CC: bugs, jeff
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Story Points: ---
Last Closed: 2018-06-20 18:25:26 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Bug Blocks: 1437494, 1438051

Description Nag Pavan Chilakam 2017-03-21 14:11:18 UTC
Description of problem:
==================
After enabling brick multiplexing, I killed the brick process (which is shared by all bricks of all volumes on that node) on one of the nodes.
The process gets killed, and for all bricks the volume status shows the online status as N and the port number as N/A.
However, it still shows the old PID of the killed process.
This PID should also be shown as N/A.

[root@dhcp35-215 bricks]# gluster v status|grep 215
(before killing the brick process; grep'ing only for bricks on this local node)

Brick 10.70.35.215:/rhs/brick3/cross3       49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick4/cross3       49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick1/ecvol        49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick2/ecvol        49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick3/ecvol        49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick4/ecvol        49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick1/ecx          49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick2/ecx          49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick3/ecx          49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick4/ecx          49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick3/rep2         49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick4/rep2         49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick3/rep3         49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick4/rep3         49152     0          Y       13072
[root@dhcp35-215 bricks]# kill -9 13072
[root@dhcp35-215 bricks]# gluster v status|grep 215
(after killing the brick process)
Brick 10.70.35.215:/rhs/brick3/cross3       N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick4/cross3       N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick1/ecvol        N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick2/ecvol        N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick3/ecvol        N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick4/ecvol        N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick1/ecx          N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick2/ecx          N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick3/ecx          N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick4/ecx          N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick3/rep2         N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick4/rep2         N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick3/rep3         N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick4/rep3         N/A       N/A        N       13072



[root@dhcp35-215 bricks]# ps -ef|grep 13072
root      2258 21234  0 19:35 pts/0    00:00:00 grep --color=auto 13072
[root@dhcp35-215 bricks]# 


Version-Release number of selected component (if applicable):
============
glusterfs-libs-3.10.0-1.el7.x86_64
glusterfs-api-3.10.0-1.el7.x86_64
glusterfs-rdma-3.10.0-1.el7.x86_64
glusterfs-3.10.0-1.el7.x86_64
python2-gluster-3.10.0-1.el7.x86_64
glusterfs-fuse-3.10.0-1.el7.x86_64
glusterfs-server-3.10.0-1.el7.x86_64
glusterfs-geo-replication-3.10.0-1.el7.x86_64
glusterfs-extra-xlators-3.10.0-1.el7.x86_64
glusterfs-client-xlators-3.10.0-1.el7.x86_64
glusterfs-cli-3.10.0-1.el7.x86_64



How reproducible:
=======
always

Steps to Reproduce:
1. Enable the brick multiplexing feature.
2. Create one or more volumes and start them.
3. Notice that all bricks hosted on the same node have the same PID.
4. Select a node and kill that PID.
5. Issue volume status (see the command sketch below).
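
A minimal command sketch of these steps, assuming brick multiplexing is enabled via the cluster.brick-multiplex option; the node names, volume names, and brick paths below are illustrative placeholders, not taken from this report:

# enable brick multiplexing cluster-wide
gluster volume set all cluster.brick-multiplex on

# create and start two volumes whose bricks land on the same nodes
gluster volume create vol1 node1:/rhs/brick1/vol1 node2:/rhs/brick1/vol1
gluster volume create vol2 node1:/rhs/brick2/vol2 node2:/rhs/brick2/vol2
gluster volume start vol1
gluster volume start vol2

# on one node, all bricks should report the same glusterfsd PID
gluster volume status | grep <node-ip>
pgrep glusterfsd

# kill that single brick process and re-check the status
kill -9 <pid>
gluster volume status | grep <node-ip>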

Actual results:
====
Volume status still shows the old PID against each brick even though the process has been killed.

Expected results:
================
The PID must be shown as N/A.
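
For illustration, a brick line in the status output after the kill would then be expected to look like this (hypothetical output, mirroring the columns shown above):

Brick 10.70.35.215:/rhs/brick3/cross3       N/A       N/A        N       N/A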

Comment 1 Jeff Darcy 2017-03-21 15:16:58 UTC
I would say that killing a process is an invalid test, but this probably needs to be fixed anyway.

Comment 2 Shyamsundar 2018-06-20 18:25:26 UTC
This bug is reported against a version of Gluster that is no longer maintained (or has been EOL'd). See https://www.gluster.org/release-schedule/ for the versions currently maintained.

As a result this bug is being closed.

If the bug persists on a maintained version of gluster or against the mainline gluster repository, request that it be reopened and the Version field be marked appropriately.