Bug 1434448 - Brick Multiplexing: Volume status still shows the PID even after killing the process
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 3.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Jeff Darcy
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1437494 1438051
 
Reported: 2017-03-21 14:11 UTC by Nag Pavan Chilakam
Modified: 2018-06-20 18:25 UTC
CC: 2 users

Fixed In Version:
Clone Of:
Clones: 1437494
Environment:
Last Closed: 2018-06-20 18:25:26 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Nag Pavan Chilakam 2017-03-21 14:11:18 UTC
Description of problem:
==================
After enabling brick multiplexing, I killed the brick process (which is shared by all bricks of all volumes on that node) on one of the nodes.
The process gets killed, and all bricks show the online status as N and the port number as N/A.
However, volume status still shows the old PID of the killed process.
This PID should also be shown as N/A.

Before killing the brick process (grep'ing only for bricks on this local node):

[root@dhcp35-215 bricks]# gluster v status|grep 215

Brick 10.70.35.215:/rhs/brick3/cross3       49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick4/cross3       49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick1/ecvol        49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick2/ecvol        49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick3/ecvol        49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick4/ecvol        49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick1/ecx          49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick2/ecx          49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick3/ecx          49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick4/ecx          49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick3/rep2         49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick4/rep2         49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick3/rep3         49152     0          Y       13072
Brick 10.70.35.215:/rhs/brick4/rep3         49152     0          Y       13072
[root@dhcp35-215 bricks]# kill -9 13072
[root@dhcp35-215 bricks]# gluster v status|grep 215
(after killing the brick process)
Brick 10.70.35.215:/rhs/brick3/cross3       N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick4/cross3       N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick1/ecvol        N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick2/ecvol        N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick3/ecvol        N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick4/ecvol        N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick1/ecx          N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick2/ecx          N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick3/ecx          N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick4/ecx          N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick3/rep2         N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick4/rep2         N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick3/rep3         N/A       N/A        N       13072
Brick 10.70.35.215:/rhs/brick4/rep3         N/A       N/A        N       13072



[root@dhcp35-215 bricks]# ps -ef|grep 13072
root      2258 21234  0 19:35 pts/0    00:00:00 grep --color=auto 13072
[root@dhcp35-215 bricks]# 
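
For context, the check that volume status is missing here is a plain process-liveness test. The sketch below only illustrates that check from the shell, using the stale PID from the output above; it is not how glusterd itself tracks brick processes:

# does the reported PID still correspond to a live process?
PID=13072
if kill -0 "$PID" 2>/dev/null; then
    echo "process $PID is alive"
else
    echo "process $PID is gone; status should print N/A"
fi

Note that kill -0 only tests for the existence of the PID, so a recycled PID (a new, unrelated process that happens to reuse 13072) would still pass; a robust check would also match the process name.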


Version-Release number of selected component (if applicable):
============
glusterfs-libs-3.10.0-1.el7.x86_64
glusterfs-api-3.10.0-1.el7.x86_64
glusterfs-rdma-3.10.0-1.el7.x86_64
glusterfs-3.10.0-1.el7.x86_64
python2-gluster-3.10.0-1.el7.x86_64
glusterfs-fuse-3.10.0-1.el7.x86_64
glusterfs-server-3.10.0-1.el7.x86_64
glusterfs-geo-replication-3.10.0-1.el7.x86_64
glusterfs-extra-xlators-3.10.0-1.el7.x86_64
glusterfs-client-xlators-3.10.0-1.el7.x86_64
glusterfs-cli-3.10.0-1.el7.x86_64



How reproducible:
=======
always

Steps to Reproduce:
1. Enable the brick multiplexing feature.
2. Create one or more volumes and start them.
3. Notice that all bricks hosted on the same node have the same PID.
4. Select a node and kill that PID.
5. Issue gluster volume status. (A scripted version of these steps is sketched below.)
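
For reference, a scripted version of the steps above; the volume name, host names, and brick paths are placeholders, and the pgrep step assumes the only glusterfsd on the node is the multiplexed brick process:

# 1. enable brick multiplexing cluster-wide
gluster volume set all cluster.brick-multiplex on

# 2. create and start one or more volumes (layout/paths are examples)
gluster volume create rep3 replica 3 node1:/rhs/brick1/rep3 node2:/rhs/brick1/rep3 node3:/rhs/brick1/rep3
gluster volume start rep3

# 3. all bricks on a node now share one glusterfsd PID
gluster volume status

# 4. on one node, kill the shared brick process
kill -9 "$(pgrep -x glusterfsd | head -n1)"

# 5. the stale PID is still printed against every brick on that node
gluster volume status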

Actual results:
====
Volume status still shows the old PID against each brick even though the process has been killed.

Expected results:
================
The PID must be shown as N/A.

Comment 1 Jeff Darcy 2017-03-21 15:16:58 UTC
I would say that killing a process is an invalid test, but this probably needs to be fixed anyway.

Comment 2 Shyamsundar 2018-06-20 18:25:26 UTC
This bug is reported against a version of Gluster that is no longer maintained (or has been EOL'd). See https://www.gluster.org/release-schedule/ for the versions currently maintained.

As a result this bug is being closed.

If the bug persists on a maintained version of Gluster, or against the mainline Gluster repository, please request that it be reopened and mark the Version field appropriately.

