Bug 1498966

Summary: Test case ./tests/bugs/bug-1371806_1.t is failing
Product: [Community] GlusterFS
Component: tests
Version: mainline
Hardware: Unspecified
OS: Unspecified
Severity: medium
Priority: medium
Status: CLOSED CURRENTRELEASE
Reporter: Mohit Agrawal <moagrawa>
Assignee: Mohit Agrawal <moagrawa>
CC: bugs, nbalacha
Fixed In Version: glusterfs-4.0.0
Type: Bug
Clones: 1529055, 1532112
Bug Blocks: 1529055, 1529058, 1532112
Last Closed: 2018-03-15 11:17:56 UTC

Description Mohit Agrawal 2017-10-05 16:30:54 UTC
Description of problem:

Test case ./tests/bugs/bug-1371806_1.t is failing.

How reproducible:
The test case ./tests/bugs/bug-1371806_1.t sometimes fails on CentOS.

Actual results:

Test case ./tests/bugs/bug-1371806_1.t fails intermittently.

Expected results:

Test case ./tests/bugs/bug-1371806_1.t passes consistently.

Comment 1 Worker Ant 2017-10-05 16:38:01 UTC
REVIEW: https://review.gluster.org/18436 (Test: Test case ./tests/bugs/bug-1371806_1.t is failing) posted (#1) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 2 Worker Ant 2017-10-06 07:35:20 UTC
REVIEW: https://review.gluster.org/18436 (Test: Test case ./tests/bugs/bug-1371806_1.t is failing) posted (#2) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 3 Worker Ant 2017-10-06 10:05:00 UTC
REVIEW: https://review.gluster.org/18436 (Test: Test case ./tests/bugs/bug-1371806_1.t is failing) posted (#3) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 4 Worker Ant 2017-10-07 02:32:26 UTC
REVIEW: https://review.gluster.org/18436 (Test: Test case ./tests/bugs/bug-1371806_1.t is failing) posted (#4) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 5 Worker Ant 2017-10-09 12:21:16 UTC
REVIEW: https://review.gluster.org/18436 (Test: Test case ./tests/bugs/bug-1371806_1.t is failing) posted (#5) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 6 Worker Ant 2017-10-12 07:31:03 UTC
REVIEW: https://review.gluster.org/18436 (cluster/dht: Serialize mds update code path with lookup unwind in selfheal) posted (#6) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 7 Worker Ant 2017-10-12 09:06:36 UTC
REVIEW: https://review.gluster.org/18436 (cluster/dht: Serialize mds update code path with lookup unwind in selfheal) posted (#7) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 8 Worker Ant 2017-10-16 06:28:02 UTC
REVIEW: https://review.gluster.org/18436 (cluster/dht: Serialize mds update code path with lookup unwind in selfheal) posted (#8) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 9 Worker Ant 2017-10-16 06:30:03 UTC
REVIEW: https://review.gluster.org/18436 (cluster/dht: Serialize mds update code path with lookup unwind in selfheal) posted (#9) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 10 Worker Ant 2017-11-22 17:19:20 UTC
COMMIT: https://review.gluster.org/18436 committed in master by "MOHIT AGRAWAL" <moagrawa> with a commit message: cluster/dht: Serialize mds update code path with lookup unwind in selfheal

Problem: The test case ./tests/bugs/bug-1371806_1.t sometimes fails on
         CentOS due to a race condition between a fresh lookup and a
         setxattr fop.

Solution: In the selfheal code path the mds (metadata subvolume) is saved
          on the inode ctx, but this update was not serialized with the
          lookup unwind. As a result, if the lookup unwound before the
          mds was saved on the inode ctx, any subsequent setxattr fop
          failed with ENOENT because no mds was found on the inode ctx.
          To fix this, saving the mds on the inode ctx is now serialized
          with the lookup unwind.

BUG: 1498966
Change-Id: I8d4bb40a6cbf0cec35d181ec0095cc7142b02e29
Signed-off-by: Mohit Agrawal <moagrawa>
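
To make the race and the fix concrete, here is a minimal stand-alone sketch in plain C with pthreads. It is not GlusterFS code; all names (inode_ctx_t, dht_lookup, dht_setxattr, the mds field) are hypothetical stand-ins. It models the idea in the commit: the lookup path records the mds in the per-inode context before it unwinds, so a setxattr issued immediately after the lookup completes always finds an mds instead of failing with ENOENT.

/*
 * Minimal illustrative sketch (not GlusterFS code).  A "lookup" thread
 * heals a directory and records the mds (metadata subvolume) in a
 * per-inode context; a "setxattr" thread runs as soon as the lookup has
 * unwound and needs that mds.  The fix modelled here: the context is
 * updated BEFORE the lookup unwinds, so setxattr can never observe an
 * empty context.
 */
#include <pthread.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    int             mds_set;   /* has an mds been recorded yet?    */
    int             mds;       /* index of the metadata subvolume  */
} inode_ctx_t;

static inode_ctx_t ctx = { PTHREAD_MUTEX_INITIALIZER, 0, -1 };

static pthread_mutex_t unwind_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  unwind_cond = PTHREAD_COND_INITIALIZER;
static int             lookup_unwound;

/* Fresh lookup + selfheal: save the mds on the inode ctx, then unwind. */
static void *dht_lookup(void *arg)
{
    (void)arg;

    /* Serialize the mds update with the unwind: do it first, under lock. */
    pthread_mutex_lock(&ctx.lock);
    ctx.mds     = 0;
    ctx.mds_set = 1;
    pthread_mutex_unlock(&ctx.lock);

    /* "Unwind": signal the application that the lookup has completed. */
    pthread_mutex_lock(&unwind_lock);
    lookup_unwound = 1;
    pthread_cond_signal(&unwind_cond);
    pthread_mutex_unlock(&unwind_lock);
    return NULL;
}

/* setxattr issued right after the lookup returns: needs the mds. */
static void *dht_setxattr(void *arg)
{
    (void)arg;

    /* Wait for the lookup to unwind, as the application would. */
    pthread_mutex_lock(&unwind_lock);
    while (!lookup_unwound)
        pthread_cond_wait(&unwind_cond, &unwind_lock);
    pthread_mutex_unlock(&unwind_lock);

    pthread_mutex_lock(&ctx.lock);
    if (ctx.mds_set)
        printf("setxattr: routed to mds subvolume %d\n", ctx.mds);
    else
        printf("setxattr: no mds on inode ctx -> ENOENT (the old race)\n");
    pthread_mutex_unlock(&ctx.lock);
    return NULL;
}

int main(void)
{
    pthread_t lookup_thr, setxattr_thr;

    pthread_create(&lookup_thr, NULL, dht_lookup, NULL);
    pthread_create(&setxattr_thr, NULL, dht_setxattr, NULL);
    pthread_join(lookup_thr, NULL);
    pthread_join(setxattr_thr, NULL);
    return 0;
}

Compiled with "gcc -pthread sketch.c", the setxattr thread always reports a valid mds because the context update happens before the unwind; moving the context update into a callback that can run after the unwind would reintroduce the ENOENT window the test case was hitting.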

Comment 11 Shyamsundar 2018-03-15 11:17:56 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still present in glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/