Bug 1637968

Summary: [RHGS] [Glusterd] Bricks fail to come online after node reboot on a scaled setup
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Rachael <rgeorge>
Component: rhgs-server-container
Assignee: Raghavendra Talur <rtalur>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Rachael <rgeorge>
Severity: high
Docs Contact:
Priority: unspecified
Version: ocs-3.11
CC: amukherj, hchiramm, jmulligan, kramdoss, madam, moagrawa, rgeorge, rhs-bugs, rtalur
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1638192 (view as bug list)
Environment:
Last Closed: 2019-11-11 20:22:14 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1638192
Bug Blocks:

Comment 9 Yaniv Kaul 2019-02-27 11:34:31 UTC
Any updates?

Comment 10 Atin Mukherjee 2019-02-27 12:19:13 UTC
This should already be fixed by the following commit:

commit afcb244f1264af8b0df42b5c79905fd52f01b924
Author: Mohammed Rafi KC <rkavunga>
Date:   Thu Nov 15 13:18:36 2018 +0530

    glusterd/mux: Optimize brick disconnect handler code
    
    Removed unnecessary iteration during brick disconnect
    handler when multiplex is enabled.
    
            >Change-Id: I62dd3337b7e7da085da5d76aaae206e0b0edff9f
            >fixes: bz#1650115
            >Signed-off-by: Mohammed Rafi KC <rkavunga>
    upstream patch : https://review.gluster.org/#/c/glusterfs/+/21651/
    
    Change-Id: I62dd3337b7e7da085da5d76aaae206e0b0edff9f
    BUG: 1649651
    Signed-off-by: Mohammed Rafi KC <rkavunga>
    Reviewed-on: https://code.engineering.redhat.com/gerrit/156327
    Tested-by: RHGS Build Bot <nigelb>
    Reviewed-by: Atin Mukherjee <amukherj>
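
For context, the commit message above describes removing unnecessary iteration from the brick-disconnect handler when brick multiplexing is enabled. The C sketch below only illustrates that general kind of optimization; the struct and function names are hypothetical and are not taken from the glusterd sources.

    /*
     * Hypothetical sketch -- not glusterd source. It illustrates a
     * disconnect handler that touches just the bricks multiplexed onto
     * the dropped connection instead of walking every brick on the node.
     */
    #include <stdio.h>

    struct brick {
        char          name[64];
        int           online;
        struct brick *next_in_conn;   /* links bricks sharing one connection */
    };

    struct connection {
        int           id;
        struct brick *bricks;         /* bricks multiplexed onto this connection */
    };

    /* Mark offline only the bricks attached to the connection that dropped,
     * rather than iterating over all bricks and comparing connection ids. */
    static void on_disconnect(struct connection *conn)
    {
        for (struct brick *b = conn->bricks; b != NULL; b = b->next_in_conn) {
            b->online = 0;
            printf("marking brick %s offline (conn %d)\n", b->name, conn->id);
        }
    }

    static void attach_brick(struct connection *conn, struct brick *b)
    {
        b->next_in_conn = conn->bricks;   /* push onto the connection's brick list */
        conn->bricks = b;
    }

    int main(void)
    {
        struct connection conn = { .id = 1, .bricks = NULL };
        struct brick b1 = { .name = "vol0-brick0", .online = 1 };
        struct brick b2 = { .name = "vol1-brick0", .online = 1 };

        attach_brick(&conn, &b1);
        attach_brick(&conn, &b2);

        /* Simulate the RPC connection to the multiplexed brick process dropping. */
        on_disconnect(&conn);
        return 0;
    }

When many bricks are multiplexed into one process, keeping the handler's work proportional to the bricks on the affected connection avoids a cluster-wide walk on every disconnect, which is the kind of cost that shows up on a scaled setup like the one reported here.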

Mohit - what do you think?

If that's the case, this is already addressed in OCS 3.11.1, since the above fix was shipped as part of RHGS 3.4.2, and hence this bug can be closed as CURRENTRELEASE.

Comment 11 Mohit Agrawal 2019-02-27 12:21:02 UTC
Yes, Atin, you are right; the issue is already fixed in RHGS 3.4.2.

Thanks
Mohit Agrawal