Bug 1541932 - A down brick is incorrectly considered to be online, causing the volume to be started without any brick available
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Xavi Hernandez
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On: 1541038
Blocks: 1503137
 
Reported: 2018-02-05 09:00 UTC by Xavi Hernandez
Modified: 2018-09-18 06:44 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.12.2-5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1541038
Environment:
Last Closed: 2018-09-04 06:42:04 UTC
Embargoed:


Attachments: None


Links
System: Red Hat Product Errata   ID: RHSA-2018:2607   Private: 0   Priority: None   Status: None   Summary: None   Last Updated: 2018-09-04 06:43:39 UTC

Description Xavi Hernandez 2018-02-05 09:00:36 UTC
+++ This bug was initially created as a clone of Bug #1541038 +++

Description of problem:

In a replica 2 volume, if one of the bricks is down and it reports its state before the online one, AFR tries to find another online brick in find_best_down_child(). Since the priv->child_up array is initialized to -1 and this function only checks whether the value is 0, it considers the other brick (whose state has not been reported yet) to be alive and sends a CHILD_UP notification.

At this point the other xlators start sending requests, which fail with ENOTCONN when they reach AFR. This can cause several unexpected errors.
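
To illustrate the race, here is a minimal, self-contained C sketch of the faulty check. The names child_up and pick_other_up_child_buggy are simplified stand-ins and are not the actual AFR code; the assumption, as described above, is that each child's state is -1 until its first event arrives, 0 when down and 1 when up.

#include <stdio.h>

#define CHILD_COUNT 2

/* -1 = state not yet reported, 0 = brick down, 1 = brick up */
static int child_up[CHILD_COUNT] = { -1, -1 };

/* Buggy selection: it only tests for 0, so an unreported child (-1)
 * is treated the same as a confirmed up child (1). */
static int
pick_other_up_child_buggy (int down_child)
{
        int i;

        for (i = 0; i < CHILD_COUNT; i++) {
                if (i != down_child && child_up[i] != 0)
                        return i;   /* -1 slips through here */
        }
        return -1;
}

int
main (void)
{
        /* Brick 0 reports CHILD_DOWN before brick 1 has reported anything. */
        child_up[0] = 0;

        /* Prints 1: the unreported brick is wrongly chosen as online,
         * so a CHILD_UP notification would be sent upwards. */
        printf ("chosen child: %d\n", pick_other_up_child_buggy (0));
        return 0;
}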

Version-Release number of selected component (if applicable): mainline


How reproducible:

It happens randomly, depending on the order in which bricks are started.

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:

Comment 2 Karthik U S 2018-02-09 06:52:16 UTC
Upstream patch: https://review.gluster.org/#/c/19440/
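
For context, the direction of the fix is to treat only an explicitly reported "up" state as online. The sketch below is illustrative only (same stand-in names as in the description above), not the content of the patch linked here:

#include <stdio.h>

#define CHILD_COUNT 2

/* -1 = state not yet reported, 0 = brick down, 1 = brick up */
static int child_up[CHILD_COUNT] = { 0, -1 };   /* brick 0 down, brick 1 unknown */

/* Corrected selection: only an explicit "up" (1) qualifies, so an
 * unreported child can no longer be picked as online. */
static int
pick_other_up_child_fixed (int down_child)
{
        int i;

        for (i = 0; i < CHILD_COUNT; i++) {
                if (i != down_child && child_up[i] == 1)
                        return i;
        }
        return -1;      /* no online child: stay in CHILD_DOWN */
}

int
main (void)
{
        /* Prints -1: with no brick confirmed up, no CHILD_UP is sent. */
        printf ("chosen child: %d\n", pick_other_up_child_fixed (0));
        return 0;
}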

Comment 9 Vijay Avuthu 2018-04-18 06:41:52 UTC
Update:
=======

Build Used: glusterfs-3.12.2-7.el7rhgs.x86_64

Scenario:

1) create a 1 x 2 replicate volume and start it
2) kill the 1st brick
3) bring down the network on the 2nd node, so that the mount process cannot connect to the 2nd brick
4) mount the volume
5) after a few seconds (before the connection timeout), bring the 2nd node's network back up

Tried the above scenario several times; mounting the volume was successful every time.

Comment 11 errata-xmlrpc 2018-09-04 06:42:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

