Bug 1371806 - DHT: inconsistent 'custom extended attributes', uid and gid, and access permissions (for directories) if the user sets/modifies them after bringing one or more sub-volumes down
Summary: DHT :- inconsistent 'custom extended attributes',uid and gid, Access permis...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: mainline
Hardware: x86_64
OS: Linux
high
medium
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact:
URL:
Whiteboard: dht-dir-attr-xattr-heal
Depends On: 1286036 1532109
Blocks: 1064147
 
Reported: 2016-08-31 07:22 UTC by Mohit Agrawal
Modified: 2018-01-08 02:27 UTC (History)
12 users

Fixed In Version: glusterfs-3.13.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1286036
Environment:
Last Closed: 2017-12-08 17:32:42 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Mohit Agrawal 2016-08-31 07:22:44 UTC
+++ This bug was initially created as a clone of Bug #1286036 +++

+++ This bug was initially created as a clone of Bug #863100 +++

Description of problem:
DHT: inconsistent 'custom extended attributes', uid and gid, and access permissions (for directories) if the user sets/modifies them after bringing one or more sub-volumes down

Version-Release number of selected component (if applicable):
3.3.0.3rhs-32.el6rhs.x86_64	

How reproducible:
always


Steps to Reproduce:
1. Create a distributed volume with 3 or more sub-volumes across multiple servers and start the volume.


2. FUSE-mount the volume from client-1 using "mount -t glusterfs server:/<volume> <client-1_mount_point>".

3. From the mount point, create some directories and files.

4. Bring one sub-volume down:
[root@Rhs3 t1]# gluster volume status test
Status of volume: test
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.35.81:/home/t1				24009	Y	18564
Brick 10.70.35.85:/home/t1				24211	Y	16174
Brick 10.70.35.86:/home/t1				24212	Y	2360
NFS Server on localhost					38467	Y	2366
NFS Server on 10.70.35.81				38467	Y	12929
NFS Server on 10.70.35.85				38467	Y	10226
 
[root@Rhs3 t1]# kill -9 2360


5. Custom attribute: from the mount point, set a custom extended attribute on a directory and verify it on all servers.

client
[root@client test]# setfattr -n user.foo -v bar2 d1
[root@client test]# getfattr -n user.foo d1
# file: d1
user.foo="bar2"

server1:-
[root@Rhs1 t1]# getfattr -n user.foo d1
# file: d1
user.foo="bar2"

server2:-
[root@Rhs2 t1]# getfattr -n user.foo d1
# file: d1
user.foo="bar2"

server3:-
[root@Rhs3 t1]# getfattr -n user.foo d1
d1: user.foo: No such attribute

6. From the mount point, verify the owner and group of the directory, then modify them:

[root@client test]# stat d1
  File: `d1'
  Size: 12        	Blocks: 2          IO Block: 131072 directory
Device: 15h/21d	Inode: 10442536925251715313  Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-10-04 12:10:57.006871636 +0530
Modify: 2012-10-04 12:10:57.006871636 +0530
Change: 2012-10-04 12:10:57.007864913 +0530

[root@client test]# chown u1 d1
[root@client test]# chgrp t1 d1
[root@client test]# stat d1
  File: `d1'
  Size: 12        	Blocks: 2          IO Block: 131072 directory
Device: 15h/21d	Inode: 10442536925251715313  Links: 2
Access: (0755/drwxr-xr-x)  Uid: (  500/      u1)   Gid: (  500/      t1)
Access: 2012-10-04 12:10:57.006871636 +0530
Modify: 2012-10-04 12:10:57.006871636 +0530
Change: 2012-10-04 12:13:05.168865621 +0530

7. Verify that the change is reflected on all sub-volumes except the one that is down:

server1:-
[root@Rhs1 t1]# stat d1
  File: `d1'
  Size: 6         	Blocks: 8          IO Block: 4096   directory
Device: fc05h/64517d	Inode: 403116740   Links: 2
Access: (0755/drwxr-xr-x)  Uid: (  500/ UNKNOWN)   Gid: (  500/ UNKNOWN)
Access: 2012-10-04 12:10:57.006871636 +0530
Modify: 2012-10-04 12:10:57.006871636 +0530
Change: 2012-10-04 12:13:05.168865621 +0530

server2:-
[root@Rhs2 t1]# stat d1
  File: `d1'
  Size: 6         	Blocks: 8          IO Block: 4096   directory
Device: fc05h/64517d	Inode: 134423062   Links: 2
Access: (0755/drwxr-xr-x)  Uid: (  500/ UNKNOWN)   Gid: (  500/ UNKNOWN)
Access: 2012-10-04 12:10:56.807630951 +0530
Modify: 2012-10-04 12:10:56.807630951 +0530
Change: 2012-10-04 12:13:04.970323409 +0530

server3:-
[root@Rhs3 t1]# stat d1
  File: `d1'
  Size: 6         	Blocks: 8          IO Block: 4096   directory
Device: fc05h/64517d	Inode: 402655089   Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-10-04 12:10:51.424531338 +0530
Modify: 2012-10-04 12:10:51.424531338 +0530
Change: 2012-10-04 12:10:51.610007436 +0530

8. From the mount point, verify the directory permissions and modify them:

client
[root@client test]# stat d2
  File: `d2'
  Size: 12        	Blocks: 2          IO Block: 131072 directory
Device: 15h/21d	Inode: 9860248238918728119  Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-10-04 12:10:57.281276686 +0530
Modify: 2012-10-04 12:10:57.281276686 +0530
Change: 2012-10-04 12:10:57.282865329 +0530

[root@client test]# chmod 444 d2

[root@client test]# stat d2
  File: `d2'
  Size: 12        	Blocks: 2          IO Block: 131072 directory
Device: 15h/21d	Inode: 9860248238918728119  Links: 2
Access: (0444/dr--r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-10-04 12:10:57.281276686 +0530
Modify: 2012-10-04 12:10:57.281276686 +0530
Change: 2012-10-04 15:39:42.509942805 +0530

9. Verify that the change is reflected on all sub-volumes except the one that is down:
server1
[root@Rhs1 t1]# stat d2
  File: `d2'
  Size: 6         	Blocks: 8          IO Block: 4096   directory
Device: fc05h/64517d	Inode: 134694359   Links: 2
Access: (0444/dr--r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-10-04 12:10:57.281276686 +0530
Modify: 2012-10-04 12:10:57.281276686 +0530
Change: 2012-10-04 15:39:42.509942805 +0530

server2
[root@Rhs1 t1]# stat d2
  File: `d2'
  Size: 6         	Blocks: 8          IO Block: 4096   directory
Device: fc05h/64517d	Inode: 134694359   Links: 2
Access: (0444/dr--r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-10-04 12:10:57.281276686 +0530
Modify: 2012-10-04 12:10:57.281276686 +0530
Change: 2012-10-04 15:39:42.509942805 +0530

server3:-
[root@Rhs3 t1]# stat d1
  File: `d1'
  Size: 6         	Blocks: 8          IO Block: 4096   directory
Device: fc05h/64517d	Inode: 402655089   Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-10-04 12:10:51.424531338 +0530
Modify: 2012-10-04 12:10:51.424531338 +0530
Change: 2012-10-04 12:10:51.610007436 +0530

10. Now bring all sub-volumes up and perform a lookup from the client.

Verify the updated custom extended attributes, uid and gid, and access permissions from the client.

11. Verify the access permissions on the sub-volume that was previously down:
[root@Rhs3 t1]# stat d2
  File: `d2'
  Size: 6         	Blocks: 8          IO Block: 4096   directory
Device: fc05h/64517d	Inode: 134219638   Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-10-04 12:10:51.700461206 +0530
Modify: 2012-10-04 12:10:51.700461206 +0530
Change: 2012-10-04 12:10:51.759989493 +0530

Verify the uid and gid on the sub-volume that was previously down:
[root@Rhs3 t1]# stat d1
  File: `d1'
  Size: 6         	Blocks: 8          IO Block: 4096   directory
Device: fc05h/64517d	Inode: 402655089   Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-10-04 12:10:51.424531338 +0530
Modify: 2012-10-04 12:10:51.424531338 +0530
Change: 2012-10-04 12:10:51.610007436 +0530

Verify the custom extended attributes for that directory:


server3:-
[root@Rhs3 t1]# getfattr -n user.foo d1
d1: user.foo: No such attribute


  
Actual results:
The mount point shows the modified values, but the values on the sub-volumes are inconsistent.

Expected results:
Once a sub-volume comes back up, it should update the values for custom extended attributes, uid and gid, and access permissions (for directories).

Additional info:

--- Additional comment from RHEL Product and Program Management on 2012-10-04 17:55:42 MVT ---

Since this issue was entered in bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

--- Additional comment from shishir gowda on 2012-10-19 09:21:57 MVT ---

There are multiple issues in this bug.
1. For user xattrs, DHT cannot handle healing because it would not be able to identify the correct copy. The workaround is a subsequent setxattr for the same key, which fixes the xattr mismatch.

2. UID/GID: a fix is in progress (bug 862967).
3. Mismatching permissions: will investigate and respond.
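The workaround in point 1 can be sketched as a toy model. This is an illustration only, not GlusterFS code: the brick names and dict layout are invented, and the only assumption taken from the comment is that a setxattr fans out to every sub-volume that is up.

```python
# Toy model: a repeated setxattr while all bricks are up overwrites the
# stale copy on the brick that missed the original call, clearing the
# mismatch. (Invented names; not GlusterFS code.)
bricks = {
    "brick1": {"user.foo": "bar2"},
    "brick2": {"user.foo": "bar2"},
    "brick3": {},  # was down when the original setxattr ran
}

def setxattr_all(bricks, key, value):
    """Apply the xattr on every (now up) sub-volume."""
    for xattrs in bricks.values():
        xattrs[key] = value

setxattr_all(bricks, "user.foo", "bar2")
print(all(x.get("user.foo") == "bar2" for x in bricks.values()))  # True
```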

--- Additional comment from shishir gowda on 2012-10-23 14:07:48 MVT ---

Fix @ https://code.engineering.redhat.com/gerrit/#/c/150/

--- Additional comment from Amar Tumballi on 2013-02-15 17:11:59 MVT ---

https://code.engineering.redhat.com/gerrit/#/c/1895/

--- Additional comment from Rachana Patel on 2013-03-19 18:09:46 MVT ---

(In reply to comment #2)
> There are multiple issues in this bug.
> 1. For user related xattrs, we can not handle healing as in dht, we would
> not be able to identify the correct copy. The work around for this is a
> subsequent setxattr for the same key, which will fix the xattr mis-match
> 
> 2. UID/GID: A fix is in progress (862967)
> 3. mis-matching permission: Will investigate it respond back

1. If that is the case, it should be documented.

2. It depends on bug 862967, whose fixed-in version is glusterfs-3.4.0qa5. Is the same fix available in the latest build?

3. Could you please give an update on the third issue? What is the decision?

--- Additional comment from Scott Haines on 2013-09-27 22:07:27 MVT ---

Targeting for 3.0.0 (Denali) release.

--- Additional comment from errata-xmlrpc on 2014-04-10 05:20:34 MVT ---

This bug has been dropped from advisory RHEA-2014:17485 by Scott Haines (shaines@redhat.com)

--- Additional comment from Susant Kumar Palai on 2014-05-22 11:04:45 MVT ---

Here is an observation on a side effect of the upstream patch http://review.gluster.org/#/c/6983/. The patch works in all cases except one corner case.

Currently we take the "permission info" for healing from a brick only if it has a layout xattr. Say we add a new brick to a volume (the newly added brick will not have a layout xattr) and every brick except the newly added one goes down. Then we change the permission of the root directory, so only the new brick witnesses the new permission. If we then bring all the bricks up, the older permission is healed across all bricks, because we do not take permission info from a brick that has no layout xattr.

Here is the demo:

                     brick1                  brick2 (newly added)

permission (initial) 755                     755

t0                   UP                      ADDED BRICK

t1                   CHILD_DOWN

t2                                           CHANGE PERMISSION, say
                                             777, on the mount point

t3                   CHILD_UP

t4                   Heals 755 to
                     all bricks,
                     as only this brick
                     has a layout xattr

Final permission after healing:
t5                   755                     755  -----> should be 777
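The failure mode above can be sketched as a toy model. The only rule taken from this comment is that the heal source is the brick with the newest ctime among bricks that carry a layout xattr; the function name, fields, and values are invented for illustration, not GlusterFS code.

```python
# Toy model of heal-source selection: only bricks with a layout xattr are
# candidates, and the newest ctime wins among them. (Invented names/fields.)
def pick_heal_source(bricks):
    candidates = [b for b in bricks if b["has_layout"]]
    return max(candidates, key=lambda b: b["ctime"])

# brick1 is an original brick (has layout). brick2 was just added (no
# layout xattr yet) and was the only brick up when the permission was
# changed to 777, so only brick2 carries the new permission.
brick1 = {"name": "brick1", "perm": 0o755, "ctime": 10, "has_layout": True}
brick2 = {"name": "brick2", "perm": 0o777, "ctime": 20, "has_layout": False}

source = pick_heal_source([brick1, brick2])
# brick2 has the newer ctime and the correct permission, but it is skipped
# for lacking a layout xattr - so 755 is healed everywhere.
print(source["name"], oct(source["perm"]))  # brick1 0o755
```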



$ Why does not having a layout xattr on the root of a newly added brick help?

# If we assigned a layout on the root of the newly added brick, then, because it would have the latest ctime, it could corrupt the permissions for the volume.

example:

                brick1              brick2               brick3
t0              777 (perm*)         777 (perm*)          Added as a brick;
                                                         will have perm*
                                                         755 by default

t1                                                       If it has a layout
                                                         and a higher ctime, 755
                                                         will be healed to all bricks
Final permission will be 755 instead of 777 (bad):
t2              755                 755                  755

Hence, creating a zeroed layout for the root would create the above problem.
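The counterpart scenario can be sketched under the same toy selection rule (newest ctime among bricks that have a layout xattr; names and fields invented for illustration, not GlusterFS code): if the newly added brick did get a layout, its fresher ctime would make it the heal source and its default 755 would clobber the volume's 777.

```python
# Toy model: giving the new brick a (zeroed) layout makes it a candidate,
# and its newer ctime makes it win - healing the wrong permission.
def pick_heal_source(bricks):
    candidates = [b for b in bricks if b["has_layout"]]
    return max(candidates, key=lambda b: b["ctime"])

brick1 = {"name": "brick1", "perm": 0o777, "ctime": 10, "has_layout": True}
brick2 = {"name": "brick2", "perm": 0o777, "ctime": 11, "has_layout": True}
# Just-added brick: default 755 permission, freshest ctime, and (in this
# hypothetical) a layout xattr.
brick3 = {"name": "brick3", "perm": 0o755, "ctime": 99, "has_layout": True}

source = pick_heal_source([brick1, brick2, brick3])
print(source["name"], oct(source["perm"]))  # brick3 0o755 - wrong source
```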

$ Why is healing on the revalidate path chosen?

# We do not do metadata healing on the fresh-lookup path because directory self-heal is carried out on the lookup path. Once a directory has been self-healed on fresh lookup, subsequent lookups follow revalidate_cbk, and we would not be able to heal permissions for that directory. Hence healing on the revalidate path was chosen.
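The lookup sequencing described above can be sketched as a toy event trace. This is an illustration only: the event names (apart from revalidate_cbk, which appears in the comment) and the caching flag are invented, and the real DHT code paths are considerably more involved.

```python
# Toy trace: a fresh lookup runs directory self-heal and then every later
# lookup takes the revalidate path, so a metadata-heal hook on the
# revalidate path is the one that actually fires. (Invented event names.)
events = []

def lookup(cached):
    if not cached:
        events.append("fresh_lookup")
        events.append("dir_selfheal")   # directory structure healed here
        return True                     # inode is cached from now on
    events.append("revalidate_cbk")
    events.append("metadata_heal")      # hook that still gets a chance to run
    return True

cached = False
cached = lookup(cached)  # first lookup: fresh path
lookup(cached)           # second lookup: revalidate path
print(events)
# ['fresh_lookup', 'dir_selfheal', 'revalidate_cbk', 'metadata_heal']
```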

--- Additional comment from RHEL Product and Program Management on 2014-05-26 14:01:14 MVT ---

This bug report previously had all acks and release flag approved.
However since at least one of its acks has been changed, the
release flag has been reset to ? by the bugbot (pm-rhel).  The
ack needs to become approved before the release flag can become
approved again.

--- Additional comment from Scott Haines on 2014-06-08 23:35:12 MVT ---

Per engineering management on 06/06/2014, moving back to rhs-future backlog.

--- Additional comment from errata-xmlrpc on 2014-06-20 14:20:27 MVT ---

This bug has been dropped from advisory RHEA-2014:17485 by Vivek Agarwal (vagarwal@redhat.com)

--- Additional comment from Nagaprasad Sathyanarayana on 2015-03-26 17:32:49 MVT ---

After having triaged, it was agreed by all leads that this BZ can not be fixed for 3.1.0 release.

--- Additional comment from John Skeoch on 2015-04-20 05:22:54 MVT ---

User racpatel@redhat.com's account has been closed

--- Additional comment from John Skeoch on 2015-04-20 05:25:23 MVT ---

User racpatel@redhat.com's account has been closed

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-11-27 05:15:34 EST ---

This bug is automatically being proposed for the current z-stream release of Red Hat Gluster Storage 3 by setting the release flag 'rhgs-3.1.z' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

Comment 1 Worker Ant 2016-08-31 07:56:48 UTC
REVIEW: http://review.gluster.org/15369 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#1) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 2 Worker Ant 2016-09-01 13:28:00 UTC
REVIEW: http://review.gluster.org/15369 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#2) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 3 Worker Ant 2016-09-01 15:28:38 UTC
REVIEW: http://review.gluster.org/15369 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#3) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 4 Worker Ant 2016-09-02 03:29:19 UTC
REVIEW: http://review.gluster.org/15369 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#4) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 5 Worker Ant 2016-09-10 13:05:44 UTC
REVIEW: http://review.gluster.org/15456 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#1) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 6 Worker Ant 2016-09-10 14:41:33 UTC
REVIEW: http://review.gluster.org/15456 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#2) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 7 Worker Ant 2016-09-12 11:59:54 UTC
REVIEW: http://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#1) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 8 Worker Ant 2016-10-06 09:14:50 UTC
REVIEW: http://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#2) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 9 Worker Ant 2016-10-06 09:18:27 UTC
REVIEW: http://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#3) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 10 Worker Ant 2016-10-06 09:33:30 UTC
REVIEW: http://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#4) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 11 Worker Ant 2016-10-06 10:18:57 UTC
REVIEW: http://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#5) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 12 Worker Ant 2016-10-10 05:40:01 UTC
REVIEW: http://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#6) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 13 Worker Ant 2016-10-13 06:52:25 UTC
REVIEW: http://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#7) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 14 Worker Ant 2016-10-14 04:45:53 UTC
REVIEW: http://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#8) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 15 Worker Ant 2016-10-14 07:44:38 UTC
REVIEW: http://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#9) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 16 Worker Ant 2016-10-14 12:38:18 UTC
REVIEW: http://review.gluster.org/15468 (WIP cluster/dht: User xattrs value is not correct after brick stop/start) posted (#10) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 17 Worker Ant 2016-10-15 02:28:49 UTC
REVIEW: http://review.gluster.org/15468 (WIP cluster/dht: User xattrs value is not correct after brick stop/start) posted (#11) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 18 Worker Ant 2016-10-17 06:53:46 UTC
REVIEW: http://review.gluster.org/15468 (WIP cluster/dht: User xattrs value is not correct after brick stop/start) posted (#12) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 19 Worker Ant 2016-10-17 12:40:56 UTC
REVIEW: http://review.gluster.org/15468 (WIP cluster/dht: User xattrs value is not correct after brick stop/start) posted (#13) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 20 Worker Ant 2016-10-18 13:06:55 UTC
REVIEW: http://review.gluster.org/15468 (WIP cluster/dht: User xattrs value is not correct after brick stop/start) posted (#14) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 21 Worker Ant 2016-10-19 03:36:14 UTC
REVIEW: http://review.gluster.org/15468 (WIP cluster/dht: User xattrs value is not correct after brick stop/start) posted (#15) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 22 Worker Ant 2016-10-20 07:50:51 UTC
REVIEW: http://review.gluster.org/15468 (WIP cluster/dht: User xattrs value is not correct after brick stop/start) posted (#16) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 23 Worker Ant 2016-10-20 08:31:54 UTC
REVIEW: http://review.gluster.org/15468 (WIP cluster/dht: User xattrs value is not correct after brick stop/start) posted (#17) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 24 Worker Ant 2016-10-20 10:30:52 UTC
REVIEW: http://review.gluster.org/15468 (WIP cluster/dht: User xattrs value is not correct after brick stop/start) posted (#18) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 25 Worker Ant 2016-10-21 02:59:39 UTC
REVIEW: http://review.gluster.org/15468 (WIP cluster/dht: User xattrs value is not correct after brick stop/start) posted (#19) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 26 Worker Ant 2016-10-24 04:03:57 UTC
REVIEW: http://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#20) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 27 Worker Ant 2016-11-08 08:46:27 UTC
REVIEW: http://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#21) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 28 Worker Ant 2016-11-08 13:24:26 UTC
REVIEW: http://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#22) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 29 Worker Ant 2017-02-28 09:50:19 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#23) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 30 Worker Ant 2017-02-28 10:33:59 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#24) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 31 Worker Ant 2017-03-07 10:19:39 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#25) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 32 Worker Ant 2017-04-04 11:22:35 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#26) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 33 Worker Ant 2017-04-04 11:25:42 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#27) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 34 Worker Ant 2017-04-07 09:58:55 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#28) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 35 Worker Ant 2017-04-07 11:00:40 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#29) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 36 Worker Ant 2017-05-12 16:19:45 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#30) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 37 Worker Ant 2017-05-13 02:58:01 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#31) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 38 Worker Ant 2017-05-16 10:03:00 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#32) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 39 Worker Ant 2017-05-17 08:43:49 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#33) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 40 Worker Ant 2017-06-06 06:11:45 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP): User xattrs value is not correct after brick stop/start) posted (#34) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 41 Worker Ant 2017-06-06 06:20:34 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP): User xattrs value is not correct after brick stop/start) posted (#35) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 42 Worker Ant 2017-06-06 07:06:39 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP): User xattrs value is not correct after brick stop/start) posted (#36) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 43 Worker Ant 2017-06-06 09:43:09 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP): User xattrs value is not correct after brick stop/start) posted (#37) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 44 Worker Ant 2017-06-06 10:25:14 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP): User xattrs value is not correct after brick stop/start) posted (#38) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 45 Worker Ant 2017-06-06 10:30:32 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP): User xattrs value is not correct after brick stop/start) posted (#39) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 46 Worker Ant 2017-06-06 12:32:50 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP): User xattrs value is not correct after brick stop/start) posted (#40) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 47 Worker Ant 2017-06-06 17:04:18 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP): User xattrs value is not correct after brick stop/start) posted (#41) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 48 Worker Ant 2017-06-07 07:35:58 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#42) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 49 Worker Ant 2017-06-07 09:54:32 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#43) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 50 Worker Ant 2017-06-14 11:53:28 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#44) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 51 Worker Ant 2017-06-16 03:57:56 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#45) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 52 Worker Ant 2017-06-16 06:31:42 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#46) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 53 Worker Ant 2017-06-19 08:29:35 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#47) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 54 Worker Ant 2017-06-19 08:35:52 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht: User xattrs value is not correct after brick stop/start) posted (#48) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 55 Worker Ant 2017-06-21 13:17:03 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP) : User xattrs value is not correct after brick stop/start) posted (#49) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 56 Worker Ant 2017-06-27 06:11:59 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP) : User xattrs value is not correct after brick stop/start) posted (#50) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 57 Worker Ant 2017-06-27 06:41:55 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP) : User xattrs value is not correct after brick stop/start) posted (#51) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 58 Worker Ant 2017-06-27 11:54:40 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP) : User xattrs value is not correct after brick stop/start) posted (#52) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 59 Worker Ant 2017-06-27 12:56:03 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP) : User xattrs value is not correct after brick stop/start) posted (#53) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 60 Worker Ant 2017-06-28 09:56:18 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs value is not correct after brick stop/start) posted (#54) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 61 Worker Ant 2017-06-28 10:59:24 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs value is not correct after brick stop/start) posted (#55) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 62 Worker Ant 2017-07-06 06:22:33 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs value is not correct after brick stop/start) posted (#56) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 63 Worker Ant 2017-07-06 13:26:18 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs value is not correct after brick stop/start) posted (#57) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 64 Worker Ant 2017-07-07 01:47:36 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs value is not correct after brick stop/start) posted (#58) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 65 Worker Ant 2017-07-12 11:00:57 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP) : User xattrs value is not correct after brick stop/start) posted (#59) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 66 Worker Ant 2017-07-12 12:17:20 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP) : User xattrs value is not correct after brick stop/start) posted (#60) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 67 Worker Ant 2017-07-12 12:35:24 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP) : User xattrs value is not correct after brick stop/start) posted (#61) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 68 Worker Ant 2017-07-13 01:44:00 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP) : User xattrs value is not correct after brick stop/start) posted (#62) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 69 Worker Ant 2017-07-13 09:50:49 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP) : User xattrs value is not correct after brick stop/start) posted (#63) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 70 Worker Ant 2017-07-19 16:30:58 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht(WIP) : User xattrs value is not correct after brick stop/start) posted (#64) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 71 Worker Ant 2017-08-19 02:32:00 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs value is not correct after brick stop/start) posted (#65) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 72 Worker Ant 2017-08-21 06:21:42 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs value is not correct after brick stop/start) posted (#66) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 73 Worker Ant 2017-08-21 15:45:29 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs value is not correct after brick stop/start) posted (#67) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 74 Worker Ant 2017-08-23 02:32:53 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs value is not correct after brick stop/start) posted (#68) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 75 Worker Ant 2017-08-23 08:31:06 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs value is not correct after brick stop/start) posted (#69) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 76 Worker Ant 2017-09-07 03:50:26 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs value is not correct after brick stop/start) posted (#70) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 77 Worker Ant 2017-09-12 08:33:10 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs are not healed after brick stop/start) posted (#71) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 78 Worker Ant 2017-09-14 07:57:30 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs are not healed after brick stop/start) posted (#72) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 79 Worker Ant 2017-09-20 05:45:36 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs are not healed after brick stop/start) posted (#73) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 80 Worker Ant 2017-09-20 13:02:39 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs are not healed after brick stop/start) posted (#74) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 81 Worker Ant 2017-09-21 11:51:05 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs are not healed after brick stop/start) posted (#75) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 82 Worker Ant 2017-09-23 03:45:05 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs are not healed after brick stop/start) posted (#76) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 83 Worker Ant 2017-09-25 09:30:19 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs are not healed after brick stop/start) posted (#77) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 84 Worker Ant 2017-09-26 13:38:51 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs are not healed after brick stop/start) posted (#78) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 85 Worker Ant 2017-09-29 08:40:21 UTC
REVIEW: https://review.gluster.org/15468 (cluster/dht : User xattrs are not healed after brick stop/start) posted (#79) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)

Comment 86 Worker Ant 2017-10-04 09:55:39 UTC
COMMIT: https://review.gluster.org/15468 committed in master by Raghavendra G (rgowdapp@redhat.com) 
------
commit 9b4de61a136b8e5ba7bf0e48690cdb1292d0dee8
Author: Mohit Agrawal <moagrawa@redhat.com>
Date:   Fri May 12 21:12:47 2017 +0530

    cluster/dht : User xattrs are not healed after brick stop/start
    
    Problem: In a distributed volume, the custom extended attribute value for a
             directory is not displayed correctly after a brick stop/start or
             after a brick is newly added. If any extended attribute
             (user|acl|quota) is set on a directory while a brick is stopped,
             or after a brick is added, the value is not updated on that brick
             once the brick is started again.
    
    Solution: First, store the hashed subvol (or the subvol that holds the
              internal xattr) in the inode ctx and treat it as the MDS subvol.
              When updating a custom xattr (user, quota, acl, selinux) on a
              directory, first check for the MDS in the inode ctx; if no MDS
              is present in the inode ctx, return EINVAL to the application.
              Otherwise, set the xattr on the MDS subvol with an internal
              xattr value of -1 and then try to update the attribute on the
              other (non-MDS) subvols as well. If the MDS subvol is down,
              return the error "Transport endpoint is not connected". In
              dht_dir_lookup_cbk, dht_revalidate_cbk and dht_discover_complete,
              call dht_call_dir_xattr_heal to heal the custom extended
              attributes. In the gNFS server case, if no hashed subvol can be
              found from the loc, wind the call to all subvols to update the
              xattr.
    
    Fix:    1) Save the MDS subvol in the inode ctx
            2) Check whether the MDS subvol is present in the inode ctx
            3) If the MDS subvol is down, unwind with the error ENOTCONN; if
               it is up, set the new internal xattr "GF_DHT_XATTR_MDS" to -1
               and wind the call to the other subvols
            4) If the setxattr fop succeeds on the non-MDS subvols, increment
               the internal xattr value by 1
            5) At directory lookup time, check the value of the new xattr
               GF_DHT_XATTR_MDS
            6) If the value is not 0 in dht_lookup_dir_cbk (and the other cbk
               functions), call the heal function to heal the user xattrs
            7) Call syncop_setxattr on the hashed subvol to reset the xattr
               value to 0 once the heal succeeds on all subvols
    
    Test : To reproduce the issue, follow the steps below:
           1) Create a distributed volume and a mount point
           2) Create some directories from the mount point: mkdir tmp{1..5}
           3) Kill any one brick of the volume
           4) Set an extended attribute on the directories from the mount point:
              setfattr -n user.foo -v "abc" ./tmp{1..5}
              This throws the error "Transport endpoint is not connected"
              for those directories whose hashed subvol is down
           5) Start the volume with the force option to restart the brick process
           6) Run getfattr on the directories from the mount point
           7) Check the extended attribute on the brick:
              getfattr -n user.foo <volume-location>/tmp{1..5}
              It shows the correct value for those directories on which the
              xattr fop executed successfully.
    
    Note: This patch resolves the xattr healing problem only for FUSE
          mounts, not for NFS mounts.
    
    BUG: 1371806
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    
    Change-Id: I4eb137eace24a8cb796712b742f1d177a65343d5

Comment 87 Niels de Vos 2017-10-09 08:50:38 UTC
This bug was used to introduce a test case that sporadically(?) fails. Please have a look at https://build.gluster.org/job/centos6-regression/6753/console and improve the test.

Also note that tests should be placed under ./tests/bugs/<component>/; consider moving the test into a subdirectory. Having the tests in subdirectories makes it much easier to run all tests for a selected component.

Comment 88 Mohit Agrawal 2017-10-09 08:57:40 UTC
Niels,

I ran the test case multiple times on a CentOS VM but could not reproduce the failure, so I had not changed it. After taking a CentOS machine from Nigel and trying again, I did hit the failure, and I have now resolved it in the patch below:

https://review.gluster.org/#/c/18436/

With the above patch we are seeing a crash in one NFS test case; I have run that case in a loop on the CentOS machine but could not reproduce the crash. I have also sent you a mail about it.

Please check and respond on the same.

Thanks & Regards
Mohit Agrawal

Comment 89 Mohit Agrawal 2017-10-09 08:59:09 UTC
Regarding moving the test cases to the recommended location, I will do that in the next patch.

Comment 90 Shyamsundar 2017-12-08 17:32:42 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/

