Bug 1443972 - [Brick Multiplexing] : Bricks for multiple volumes going down after glusterd restart and not coming back up after volume start force
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Mohit Agrawal
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard: brick-multiplexing
Depends On: 1444596
Blocks: 1417151
 
Reported: 2017-04-20 11:16 UTC by surabhi
Modified: 2017-09-21 04:39 UTC
CC: 4 users

Fixed In Version: glusterfs-3.8.4-25
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1444596 1449003
Environment:
Last Closed: 2017-09-21 04:39:40 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1442787 0 unspecified CLOSED Brick Multiplexing: During Remove brick when glusterd of a node is stopped, the brick process gets disconnected from glu... 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1443991 0 unspecified CLOSED [Brick Multiplexing] Brick process on a node didn't come up after glusterd stop/start 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHBA-2017:2774 0 normal SHIPPED_LIVE glusterfs bug fix and enhancement update 2017-09-21 08:16:29 UTC

Internal Links: 1442787 1443991

Description surabhi 2017-04-20 11:16:33 UTC
Description of problem:
*********************************

On an existing nfs-ganesha cluster with one volume, I disabled nfs-ganesha and shared_storage, then enabled brick multiplexing on the cluster. After enabling brick multiplexing I created multiple volumes. The bricks of the new volumes shared the same PID, while the pre-existing volume kept its own (which is expected as per devel). A minimal command sketch of this setup follows below.
When I then tried to enable shared storage, the operation failed with the error: Another transaction in progress.
After that I enabled shared_storage by editing the vol file and restarted glusterd.

This caused all the volume bricks to go offline. I then ran gluster vol start force, but that did not bring the bricks up.
I also disabled brick multiplexing and enabled it again, then restarted glusterd and ran volume start force, but the bricks still did not come up.
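
A minimal sketch of the setup sequence described above, with illustrative volume names and brick paths (not the exact layout of this cluster; the real one is in the status output below):

# disable nfs-ganesha and the shared storage volume on the cluster
gluster nfs-ganesha disable
gluster volume set all cluster.enable-shared-storage disable

# enable brick multiplexing cluster-wide
gluster volume set all cluster.brick-multiplex on

# volumes created from this point on should multiplex their bricks
# into one glusterfsd process per node (same PID in volume status)
gluster volume create vol2 <host1>:/gluster/brick2/b1 <host2>:/gluster/brick2/b2
gluster volume start vol2
gluster volume status vol2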

***********************************************************************


Status of volume: vol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.132:/gluster/brick2/b1       N/A       N/A        N       N/A  
Brick 10.70.46.128:/gluster/brick2/b2       N/A       N/A        N       N/A  
Brick 10.70.46.138:/gluster/brick2/b3       N/A       N/A        N       N/A  
Brick 10.70.46.140:/gluster/brick2/b4       N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       5324 
Self-heal Daemon on dhcp46-140.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       3096 
Self-heal Daemon on dhcp46-128.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       2740 
Self-heal Daemon on dhcp46-138.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       2576 
 
Task Status of Volume vol2
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.132:/gluster/brick3/b1       N/A       N/A        N       N/A  
Brick 10.70.46.128:/gluster/brick3/b2       N/A       N/A        N       N/A  
Brick 10.70.46.138:/gluster/brick3/b3       N/A       N/A        N       N/A  
Brick 10.70.46.140:/gluster/brick3/b4       N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       5324 
Self-heal Daemon on dhcp46-128.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       2740 
Self-heal Daemon on dhcp46-140.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       3096 
Self-heal Daemon on dhcp46-138.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       2576 
 
Task Status of Volume vol3
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol4
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.132:/gluster/brick4/b1       N/A       N/A        N       N/A  
Brick 10.70.46.128:/gluster/brick4/b2       N/A       N/A        N       N/A  
Brick 10.70.46.138:/gluster/brick4/b3       N/A       N/A        N       N/A  
Brick 10.70.46.140:/gluster/brick4/b4       N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       5324 
Self-heal Daemon on dhcp46-140.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       3096 
Self-heal Daemon on dhcp46-128.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       2740 
Self-heal Daemon on dhcp46-138.lab.eng.blr.
redhat.com                                  N/A       N/A        Y       2576 
 
Task Status of Volume vol4
------------------------------------------------------------------------------
There are no active volume tasks



Version-Release number of selected component (if applicable):
glusterfs-3.8.4-22.el7rhgs.x86_64

How reproducible:
Tried once

Steps to Reproduce:
1. On a 4-node ganesha cluster, create a volume.
2. Disable ganesha, then disable shared storage.
3. Enable brick multiplexing and create multiple volumes.
4. Enable shared storage.
5. Issue seen ("Another transaction in progress"); restart glusterd.
6. Run gluster vol start force (see the command sketch after this list).
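
A hedged sketch of the failing sequence (steps 4-6), using an illustrative volume name:

gluster volume set all cluster.enable-shared-storage enable  # step 4: fails with "Another transaction in progress"
systemctl restart glusterd                                   # step 5
gluster volume start vol2 force                              # step 6
gluster volume status vol2                                   # bricks remain offline (the bug)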

Actual results:
************************
After the glusterd restart, all the bricks of the volumes created after enabling brick multiplexing went down and never came back up.


Expected results:
***************************
A glusterd restart should not cause volume bricks to go down.
gluster vol start force should bring the bricks back up.


Additional info:
Sosreports to follow

Comment 2 surabhi 2017-04-20 11:25:37 UTC
Sosreports available @ http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1443972/

Comment 3 Atin Mukherjee 2017-04-20 15:25:31 UTC
Looks similar to BZ 1442787

Comment 4 Atin Mukherjee 2017-04-21 06:37:45 UTC
Refer https://bugzilla.redhat.com/show_bug.cgi?id=1443991#c6 for the initial analysis

Comment 5 Atin Mukherjee 2017-04-24 03:52:50 UTC
upstream patch : https://review.gluster.org/#/c/17101/

Comment 10 Nag Pavan Chilakam 2017-07-03 13:04:36 UTC
Have retried the same scenario on 3.8.4-32; the issue is no longer seen. Hence marking as verified.
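
For reference, a minimal sketch of the verification flow on the fixed build (assumes the multiplexed volumes from the report already exist; run on each node):

systemctl restart glusterd
gluster volume status    # all bricks should show Online = Y with valid PIDs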

Comment 12 errata-xmlrpc 2017-09-21 04:39:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774

