Bug 1541932 - A down brick is incorrectly considered to be online, causing the volume to be started without any available brick
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Xavi Hernandez
QA Contact: Vijay Avuthu
Depends On: 1541038
Blocks: 1503137
Reported: 2018-02-05 04:00 EST by Xavi Hernandez
Modified: 2018-09-18 02:44 EDT
CC List: 8 users

See Also:
Fixed In Version: glusterfs-3.12.2-5
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1541038
Environment:
Last Closed: 2018-09-04 02:42:04 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker: Red Hat Product Errata | ID: RHSA-2018:2607 | Priority: None | Status: None | Summary: None | Last Updated: 2018-09-04 02:43 EDT

Description Xavi Hernandez 2018-02-05 04:00:36 EST
+++ This bug was initially created as a clone of Bug #1541038 +++

Description of problem:

In a replica 2 volume, if one of the bricks is down and it reports its state before the online one, AFR tries to find another online brick in find_best_down_child(). Since the priv->child_up array is initialized with -1 and this function only checks whether the value is 0, it considers the other brick to be alive and sends a CHILD_UP notification.

At this point the other xlators start sending requests, which fail with ENOTCONN when they reach afr. This can cause several unexpected errors.
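
To illustrate the failure mode described above, here is a minimal standalone C sketch. It is not the actual AFR code: the function names are hypothetical, and only the -1/0/1 convention for priv->child_up (unknown/down/up) is taken from this report. The buggy variant only rejects an explicit 0, so a brick whose state has not been reported yet (-1) is treated as online; the fixed variant requires an explicit 1.

    #include <stdio.h>

    /* Hypothetical illustration only -- not GlusterFS source.
     * child_up[i] follows the convention described in the report:
     *   -1 = state not yet reported, 0 = brick down, 1 = brick up. */

    static int find_other_online_child_buggy(const int *child_up, int n, int skip)
    {
        for (int i = 0; i < n; i++) {
            /* Buggy check: anything that is not an explicit 0 counts as
             * online, so the "unknown" value -1 also passes. */
            if (i != skip && child_up[i] != 0)
                return i;
        }
        return -1;
    }

    static int find_other_online_child_fixed(const int *child_up, int n, int skip)
    {
        for (int i = 0; i < n; i++) {
            /* Fixed check: only a confirmed "up" brick counts as online. */
            if (i != skip && child_up[i] == 1)
                return i;
        }
        return -1;
    }

    int main(void)
    {
        /* Replica 2: brick 0 has reported DOWN, brick 1 has not reported yet. */
        int child_up[2] = { 0, -1 };

        printf("buggy: %d\n", find_other_online_child_buggy(child_up, 2, 0)); /* 1  */
        printf("fixed: %d\n", find_other_online_child_fixed(child_up, 2, 0)); /* -1 */
        return 0;
    }

With the buggy check the function returns brick 1 even though its state is unknown, which is what leads to the spurious CHILD_UP notification and the subsequent ENOTCONN failures.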

Version-Release number of selected component (if applicable): mainline


How reproducible:

It happens randomly, depending on the order in which bricks are started.

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:
Comment 2 Karthik U S 2018-02-09 01:52:16 EST
Upstream patch: https://review.gluster.org/#/c/19440/
Comment 9 Vijay Avuthu 2018-04-18 02:41:52 EDT
Update:
=======

Build Used: glusterfs-3.12.2-7.el7rhgs.x86_64

Scenario:

1) Create a 1x2 replicate volume and start it
2) Kill the 1st brick
3) Bring down the 2nd node's network so that the mount process cannot connect to the 2nd brick
4) Mount the volume
5) After a few seconds (before the connection timeout), bring the 2nd node's network back up

Tried the above scenario several times; mounting the volume was always successful (see the command sketch below).
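
For reference, the scenario above corresponds roughly to the following command sketch. The volume name, node names, brick paths and mount point are illustrative assumptions, and the exact way of taking the 2nd node's network down is left open.

    # Assumed names: volume "testvol", servers node1/node2, client mount at /mnt/testvol.

    # 1) create a 1x2 replicate volume and start it
    gluster volume create testvol replica 2 node1:/bricks/b1 node2:/bricks/b2
    gluster volume start testvol

    # 2) kill the 1st brick (its PID is shown by "gluster volume status testvol")
    gluster volume status testvol
    kill -9 <pid-of-node1-brick>

    # 3) bring down node2's network so the client cannot reach the 2nd brick
    #    (for example, by downing its interface on node2)

    # 4) mount the volume on the client
    mount -t glusterfs node1:/testvol /mnt/testvol

    # 5) before the client's connection timeout expires, restore node2's network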
Comment 11 errata-xmlrpc 2018-09-04 02:42:04 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
