+++ This bug was initially created as a clone of Bug #1541038 +++

Description of problem:
In a replica 2 volume, if one of the bricks is down and it reports its state before the online one, AFR tries to find another online brick in find_best_down_child(). Since the priv->child_up array is initialized with -1 and this function only checks whether an entry is 0, it considers the other brick alive and sends a CHILD_UP notification. At this point the other xlators start sending requests, which fail with ENOTCONN when they reach AFR. This can cause several unexpected errors.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Happens randomly, depending on the order in which the bricks are started.
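The faulty check can be illustrated with a minimal sketch. The tri-state child_up convention (-1 = not yet reported, 0 = down, 1 = up) and the function name come from the description above; the surrounding structure and both function bodies are simplified assumptions for illustration, not the actual AFR source:

/* Sketch only -- simplified stand-in for the AFR private structure. */
#define CHILD_COUNT 2

typedef struct {
    int child_up[CHILD_COUNT];  /* -1 = unknown, 0 = down, 1 = up */
} afr_private_t;

/* Buggy behavior described in this bug: a child whose state has not
 * been reported yet (-1) passes the "!= 0" test, so it is treated as
 * alive and a CHILD_UP notification is sent prematurely. */
static int
find_best_down_child_buggy (afr_private_t *priv, int down_child)
{
    for (int i = 0; i < CHILD_COUNT; i++) {
        if (i != down_child && priv->child_up[i] != 0)
            return i;  /* may return a child in unknown state */
    }
    return -1;
}

/* Corrected check: only a child that has positively reported itself
 * up (== 1) qualifies, so an unknown child never triggers CHILD_UP. */
static int
find_best_down_child_fixed (afr_private_t *priv, int down_child)
{
    for (int i = 0; i < CHILD_COUNT; i++) {
        if (i != down_child && priv->child_up[i] == 1)
            return i;
    }
    return -1;
}

With the stricter check, a brick whose connection attempt has not completed stays at -1 and cannot be mistaken for an online brick, so the CHILD_UP notification is deferred until a child has actually reported up.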
Upstream patch: https://review.gluster.org/#/c/19440/
Update:
=======
Build Used: glusterfs-3.12.2-7.el7rhgs.x86_64

Scenario:
1) Create a 1 x 2 replicate volume and start it.
2) Kill the 1st brick.
3) Bring down the network on the 2nd node, so that the mount process cannot connect to the 2nd brick.
4) Mount the volume.
5) After a few seconds (before the connection timeout), bring the 2nd node's network back up.

Tried the above scenario several times; mounting the volume was always successful.
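The timing in step 5 matters because, with the fix, AFR defers its verdict while any child is still in the unknown state. A minimal sketch of that decision logic, assuming hypothetical names (afr_decide_notify and the EVENT_* constants are illustrative, not the real AFR API):

enum { EVENT_NONE, EVENT_CHILD_UP, EVENT_CHILD_DOWN };

/* Decide what to notify upward, given the tri-state child_up array:
 * emit CHILD_UP once at least one child has confirmed it is up, emit
 * CHILD_DOWN only once every child has confirmed it is down, and stay
 * silent while any child is still unknown (-1). */
static int
afr_decide_notify (const int *child_up, int child_count)
{
    int up = 0, down = 0;

    for (int i = 0; i < child_count; i++) {
        if (child_up[i] == 1)
            up++;
        else if (child_up[i] == 0)
            down++;
        /* entries still at -1 count as neither up nor down */
    }

    if (up > 0)
        return EVENT_CHILD_UP;    /* at least one brick is usable */
    if (down == child_count)
        return EVENT_CHILD_DOWN;  /* every brick confirmed down */
    return EVENT_NONE;            /* some state unknown: keep waiting */
}

In the scenario above, while the 1st brick is confirmed down (0) and the 2nd is unreachable and still unknown (-1), this returns EVENT_NONE, so the mount waits instead of failing with ENOTCONN; once the 2nd node's network is restored and its brick reports up, CHILD_UP fires and the mount succeeds.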
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607