Bug 1421653

Summary: dht_setxattr returns EINVAL when a file is deleted during the FOP
Product: [Community] GlusterFS
Component: distribute
Version: mainline
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Nithya Balachandran <nbalacha>
Assignee: Nithya Balachandran <nbalacha>
CC: bugs
Type: Bug
Fixed In Version: glusterfs-3.11.0
Clones: 1424915, 1424921, 1424925, 1425697
Bug Blocks: 1424915, 1424921, 1424925, 1425697
Last Closed: 2017-05-30 18:42:26 UTC

Description Nithya Balachandran 2017-02-13 11:14:35 UTC
Description of problem:

dht_setxattr returns EINVAL, instead of ENOENT, when the file is deleted after the FOP has already entered dht_setxattr.



Version-Release number of selected component (if applicable):


How reproducible:
Consistently


Steps to Reproduce:
1. Create a dist-rep volume (I used a 4x2)
2. Fuse mount the volume on 2 different mount points (/mnt/g1 and /mnt/g2)
3. gdb into the mount process for /mnt/g2 and set a breakpoint on dht_setxattr
4. From /mnt/g1, touch file1
5. From /mnt/g2, set an xattr on file1. This will hit the breakpoint set in (3)
6. From /mnt/g1, rm file1
7. Continue in gdb (a rough transcript of the gdb/shell interaction is sketched below)
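
For reference, the interactive part of steps 3-7 looks roughly like the session below. The process lookup and prompts are illustrative only; the exact PID, client process name, and command line of the fuse mount will differ on your system:

# terminal 1: attach gdb to the fuse client serving /mnt/g2
gdb -p $(pgrep -f 'glusterfs.*/mnt/g2')
(gdb) break dht_setxattr
(gdb) continue

# terminal 2, from /mnt/g2: this blocks on the breakpoint
setfattr -n trusted.test.yahh -v "testing 1 2 3..." file1

# terminal 3, from /mnt/g1: delete the file while the FOP is paused
rm file1

# back in terminal 1: resume; the setfattr above then fails
(gdb) continue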


Actual results:
[root@rhgs313-6 g2]# setfattr -n trusted.test.yahh -v "testing 1 2 3..." file1 
setfattr: file1: Invalid argument



Expected results:
[root@rhgs313-6 g2]# setfattr -n trusted.test.yahh -v "testing 1 2 3..." file1
setfattr: file1: No such file or directory


Additional info:

The same issue shows up in dht_removexattr as well. (Repeat the test steps with the breakpoint set on dht_removexattr and "setfattr -x".)

Comment 1 Nithya Balachandran 2017-02-13 11:18:29 UTC
RCA:

op_errno is not set to local->op_errno in dht_setxattr2 immediately after the frame check, so on an error path the function unwinds with op_errno still at its default value of EINVAL.

In dht_removexattr2, the NULL subvol check has also been moved to after op_errno is set from local->op_errno.
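
To make the ordering concrete, here is a simplified C sketch of the corrected pattern. It is an illustration based on the description above, not the verbatim patch: the function signature is abridged, and the types and macros (xlator_t, call_frame_t, dht_local_t, DHT_STACK_UNWIND) come from the glusterfs dht headers.

/* Simplified sketch of the corrected dht_setxattr2 pattern; abridged,
 * not the verbatim patch. Assumes the glusterfs dht headers for
 * xlator_t, call_frame_t, dht_local_t and DHT_STACK_UNWIND. */
static int
dht_setxattr2 (xlator_t *this, xlator_t *subvol, call_frame_t *frame)
{
        dht_local_t *local    = NULL;
        int          op_errno = EINVAL;  /* default; this leaked out before the fix */

        if (!frame || !frame->local)
                goto err;

        local = frame->local;

        /* The fix: copy the real error immediately after the frame check,
         * so error paths unwind with local->op_errno (e.g. ENOENT when the
         * file was deleted mid-FOP) rather than the EINVAL default. */
        op_errno = local->op_errno;

        /* In dht_removexattr2 the NULL-subvol check was likewise moved to
         * after op_errno has been picked up from local->op_errno. */
        if (!subvol)
                goto err;

        /* ... resume the setxattr FOP on the correct subvolume ... */
        return 0;

err:
        DHT_STACK_UNWIND (setxattr, frame, -1, op_errno, NULL);
        return 0;
}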

Comment 2 Worker Ant 2017-02-13 11:22:04 UTC
REVIEW: https://review.gluster.org/16610 (cluster/dht Correct error assignment in *xattr2 functions) posted (#1) for review on master by N Balachandran (nbalacha)

Comment 3 Worker Ant 2017-02-13 11:22:49 UTC
REVIEW: https://review.gluster.org/16610 (cluster/dht Fix error assignment in dht_*xattr2 functions) posted (#2) for review on master by N Balachandran (nbalacha)

Comment 4 Worker Ant 2017-02-16 01:53:52 UTC
COMMIT: https://review.gluster.org/16610 committed in master by Shyamsundar Ranganathan (srangana) 
------
commit 028626a86ea409f908783b9007c02877f20be43e
Author: N Balachandran <nbalacha>
Date:   Mon Feb 13 16:49:06 2017 +0530

    cluster/dht Fix error assignment in dht_*xattr2 functions
    
    Corrected the op_errno assignments and NULL checks in
    the dht_setxattr2 and dht_removexattr2 functions. Earlier,
    they unwound with the default EINVAL op_errno if the
    file had been deleted.
    
    Change-Id: Iaf837a473d769cea40132487a966c7f452990071
    BUG: 1421653
    Signed-off-by: N Balachandran <nbalacha>
    Reviewed-on: https://review.gluster.org/16610
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: MOHIT AGRAWAL <moagrawa>
    Reviewed-by: Shyamsundar Ranganathan <srangana>

Comment 5 Shyamsundar 2017-05-30 18:42:26 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed in glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/