Bug 1529055

Summary: Test case ./tests/bugs/bug-1371806_1.t is failing
Product: [Community] GlusterFS Reporter: Nithya Balachandran <nbalacha>
Component: tests Assignee: bugs <bugs>
Status: CLOSED CURRENTRELEASE QA Contact:
Severity: medium Docs Contact:
Priority: medium    
Version: 3.13 CC: bugs, moagrawa, nbalacha
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: glusterfs-3.13.2 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1498966
Blocks: 1529058 Environment:
Last Closed: 2018-01-23 21:37:19 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1498966, 1532112    
Bug Blocks: 1529058    

Description Nithya Balachandran 2017-12-26 07:45:27 UTC
+++ This bug was initially created as a clone of Bug #1498966 +++

Description of problem:

Test case ./tests/bugs/bug-1371806_1.t is failing.

Version-Release number of selected component (if applicable):


How reproducible:
The test case ./tests/bugs/bug-1371806_1.t sometimes fails on CentOS.

Steps to Reproduce:
1. Run ./tests/bugs/bug-1371806_1.t on CentOS; it fails intermittently.

Actual results:

Test case ./tests/bugs/bug-1371806_1.t is failing.

Expected results:
Test case ./tests/bugs/bug-1371806_1.t should not fail.

Additional info:

--- Additional comment from Worker Ant on 2017-10-05 12:38:01 EDT ---

REVIEW: https://review.gluster.org/18436 (Test: Test case ./tests/bugs/bug-1371806_1.t is failing) posted (#1) for review on master by MOHIT AGRAWAL (moagrawa)

--- Additional comment from Worker Ant on 2017-10-06 03:35:20 EDT ---

REVIEW: https://review.gluster.org/18436 (Test: Test case ./tests/bugs/bug-1371806_1.t is failing) posted (#2) for review on master by MOHIT AGRAWAL (moagrawa)

--- Additional comment from Worker Ant on 2017-10-06 06:05:00 EDT ---

REVIEW: https://review.gluster.org/18436 (Test: Test case ./tests/bugs/bug-1371806_1.t is failing) posted (#3) for review on master by MOHIT AGRAWAL (moagrawa)

--- Additional comment from Worker Ant on 2017-10-06 22:32:26 EDT ---

REVIEW: https://review.gluster.org/18436 (Test: Test case ./tests/bugs/bug-1371806_1.t is failing) posted (#4) for review on master by MOHIT AGRAWAL (moagrawa)

--- Additional comment from Worker Ant on 2017-10-09 08:21:16 EDT ---

REVIEW: https://review.gluster.org/18436 (Test: Test case ./tests/bugs/bug-1371806_1.t is failing) posted (#5) for review on master by MOHIT AGRAWAL (moagrawa)

--- Additional comment from Worker Ant on 2017-10-12 03:31:03 EDT ---

REVIEW: https://review.gluster.org/18436 (cluster/dht: Serialize mds update code path with lookup unwind in selfheal) posted (#6) for review on master by MOHIT AGRAWAL (moagrawa)

--- Additional comment from Worker Ant on 2017-10-12 05:06:36 EDT ---

REVIEW: https://review.gluster.org/18436 (cluster/dht: Serialize mds update code path with lookup unwind in selfheal) posted (#7) for review on master by MOHIT AGRAWAL (moagrawa)

--- Additional comment from Worker Ant on 2017-10-16 02:28:02 EDT ---

REVIEW: https://review.gluster.org/18436 (cluster/dht: Serialize mds update code path with lookup unwind in selfheal) posted (#8) for review on master by MOHIT AGRAWAL (moagrawa)

--- Additional comment from Worker Ant on 2017-10-16 02:30:03 EDT ---

REVIEW: https://review.gluster.org/18436 (cluster/dht: Serialize mds update code path with lookup unwind in selfheal) posted (#9) for review on master by MOHIT AGRAWAL (moagrawa)

--- Additional comment from Worker Ant on 2017-11-22 12:19:20 EST ---

COMMIT: https://review.gluster.org/18436 committed in master by "MOHIT AGRAWAL" <moagrawa> with a commit message- cluster/dht: Serialize mds update code path with lookup unwind in selfheal

Problem: The test case ./tests/bugs/bug-1371806_1.t sometimes fails on
         CentOS due to a race condition between the fresh lookup and the
         setxattr fop.

Solution: In the selfheal code path the mds is saved on the inode_ctx, but
          this was not serialized with the lookup unwind. As a result, if
          the lookup unwound before the mds had been saved on the inode_ctx,
          any subsequent setxattr fop failed with ENOENT because no mds was
          found in the inode ctx. To resolve this, saving the mds on the
          inode ctx is now serialized with the lookup unwind.

BUG: 1498966
Change-Id: I8d4bb40a6cbf0cec35d181ec0095cc7142b02e29
Signed-off-by: Mohit Agrawal <moagrawa>
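
For reference, a minimal self-contained C sketch of the ordering the commit message describes. All names here (inode_ctx, mds, do_setxattr, selfheal_save_mds_then_unwind, "patchy-client-0") are hypothetical and only illustrate the pattern; this is not the actual DHT selfheal code. The point it shows: the mds must be stored in the inode ctx before the lookup unwinds, otherwise a setxattr racing in right after the lookup reply finds no mds and fails with ENOENT.

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in for the per-inode context kept by DHT. */
struct inode_ctx {
    const char *mds;    /* metadata subvolume; NULL until selfheal saves it */
};

/* A later fop (setxattr here) needs the mds to route the operation. */
static int do_setxattr(struct inode_ctx *ctx)
{
    if (ctx->mds == NULL)
        return -ENOENT;         /* the failure the test case was hitting */
    printf("setxattr routed to %s\n", ctx->mds);
    return 0;
}

/* Fixed order: save the mds on the inode ctx *before* unwinding the lookup,
 * so any setxattr issued after the lookup reply always finds the mds. */
static void selfheal_save_mds_then_unwind(struct inode_ctx *ctx, const char *mds)
{
    ctx->mds = mds;             /* serialized with (done before) the unwind */
    /* lookup would be unwound to the client here */
}

int main(void)
{
    struct inode_ctx before = { NULL }, after = { NULL };

    /* Old order: lookup already unwound, mds not yet saved -> ENOENT. */
    printf("before fix: setxattr returns %d\n", do_setxattr(&before));

    /* New order: mds saved before the unwind -> setxattr succeeds. */
    selfheal_save_mds_then_unwind(&after, "patchy-client-0");
    printf("after fix:  setxattr returns %d\n", do_setxattr(&after));
    return 0;
}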

Comment 1 Worker Ant 2017-12-26 07:58:23 UTC
REVIEW: https://review.gluster.org/19078 (cluster/dht: Serialize mds update code path with lookup unwind in selfheal) posted (#1) for review on release-3.13 by N Balachandran

Comment 2 Worker Ant 2018-01-02 18:42:14 UTC
COMMIT: https://review.gluster.org/19078 committed in release-3.13 by "N Balachandran" <nbalacha> with a commit message- cluster/dht: Serialize mds update code path with lookup unwind in selfheal

Problem: The test case ./tests/bugs/bug-1371806_1.t sometimes fails on
         CentOS due to a race condition between the fresh lookup and the
         setxattr fop.

Solution: In the selfheal code path the mds is saved on the inode_ctx, but
          this was not serialized with the lookup unwind. As a result, if
          the lookup unwound before the mds had been saved on the inode_ctx,
          any subsequent setxattr fop failed with ENOENT because no mds was
          found in the inode ctx. To resolve this, saving the mds on the
          inode ctx is now serialized with the lookup unwind.

> BUG: 1498966
> Signed-off-by: Mohit Agrawal <moagrawa>
Change-Id: I8d4bb40a6cbf0cec35d181ec0095cc7142b02e29
BUG: 1529055
Signed-off-by: Mohit Agrawal <moagrawa>
Signed-off-by: N Balachandran <nbalacha>

Comment 3 Shyamsundar 2018-01-23 21:37:19 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.2, please open a new bug report.

glusterfs-3.13.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-January/000089.html
[2] https://www.gluster.org/pipermail/gluster-users/