Bug 1541932

Summary: A down brick is incorrectly considered online, causing the volume to start without any brick available
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: replicate
Version: rhgs-3.4
Target Release: RHGS 3.4.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Reporter: Xavi Hernandez <jahernan>
Assignee: Xavi Hernandez <jahernan>
QA Contact: Vijay Avuthu <vavuthu>
CC: bugs, jahernan, ksubrahm, ravishankar, rhinduja, rhs-bugs, sheggodu, storage-qa-internal
Fixed In Version: glusterfs-3.12.2-5
Doc Type: If docs needed, set a value
Type: Bug
Clone Of: 1541038
Bug Depends On: 1541038
Bug Blocks: 1503137
Last Closed: 2018-09-04 06:42:04 UTC

Description Xavi Hernandez 2018-02-05 09:00:36 UTC
+++ This bug was initially created as a clone of Bug #1541038 +++

Description of problem:

In a replica 2 volume, if one of the bricks is down and it reports its state before the online one, AFR tries to find another online brick in find_best_down_child(). Since the priv->child_up array is initialized to -1 and this function only checks whether an entry is 0, it treats the other brick, whose state is still unknown, as alive and sends a CHILD_UP notification.
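The race can be modeled with a small standalone program. This is only an illustrative sketch, not the real AFR code: the function name pick_online_child_buggy and the program structure are invented here; only the tri-state semantics of priv->child_up (-1 = state not yet reported, 0 = down, 1 = up) come from the description above.

/* Standalone model of the described check; NOT the real
 * find_best_down_child(). child_up semantics assumed here:
 * -1 = state not yet reported, 0 = down, 1 = up. */
#include <stdio.h>

#define CHILD_COUNT 2 /* replica 2 */

/* Buggy selection: a brick is rejected only when its state is exactly 0,
 * so a brick still at -1 (no state reported yet) is treated as online. */
static int
pick_online_child_buggy(const int *child_up, int child_count)
{
    int i;

    for (i = 0; i < child_count; i++) {
        if (child_up[i] != 0)
            return i;
    }
    return -1;
}

int
main(void)
{
    /* Brick 0 reported DOWN first; brick 1 has not reported anything yet. */
    int child_up[CHILD_COUNT] = { 0, -1 };
    int child = pick_online_child_buggy(child_up, CHILD_COUNT);

    if (child >= 0)
        printf("CHILD_UP would be sent based on brick %d, which never confirmed it is up\n", child);
    else
        printf("no confirmed online brick, CHILD_UP withheld\n");
    return 0;
}

With brick 0 down and brick 1 still unreported, the buggy check picks brick 1 and a spurious CHILD_UP is propagated, even though no brick is actually reachable.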

At this point the other xlators start sending requests, which fail with ENOTCONN when they reach afr. This can cause several unexpected errors.

Version-Release number of selected component (if applicable): mainline


How reproducible:

It happens randomly, depending on the order in which bricks are started.

Comment 2 Karthik U S 2018-02-09 06:52:16 UTC
Upstream patch: https://review.gluster.org/#/c/19440/
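
The actual change is in the Gerrit review above; as a hedged sketch of the direction of the fix (illustrative only, reusing the hypothetical model from the description, not the real patch), the selection must treat only an explicit 1 as "up", so that both 0 (down) and -1 (state not yet reported) are excluded:

/* Sketch of the corrected check (illustrative, not the actual patch).
 * A brick qualifies only after it has explicitly reported CHILD_UP. */
static int
pick_online_child_fixed(const int *child_up, int child_count)
{
    int i;

    for (i = 0; i < child_count; i++) {
        if (child_up[i] == 1)
            return i;
    }
    return -1; /* nothing confirmed up; do not send CHILD_UP */
}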

Comment 9 Vijay Avuthu 2018-04-18 06:41:52 UTC
Update:
=======

Build Used: glusterfs-3.12.2-7.el7rhgs.x86_64

Scenario:

1) create a 1 x 2 replicate volume and start it
2) kill the 1st brick
3) bring down the network on the 2nd node, so that the mount process cannot connect to the 2nd brick
4) mount the volume
5) after a few seconds (before the connection timeout), bring the 2nd node's network back up

Tried the above scenario several times; mounting the volume was always successful.

Comment 11 errata-xmlrpc 2018-09-04 06:42:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607