Created attachment 1300826 [details]
repro script

[root@dhcp35-100 repros]# sh -x /tmp/3
+ gluster v create v1 self:/export/sdb/x1 force
volume create: v1: success: please start the volume to access data
+ gluster v start v1
volume start: v1: success
+ gluster v quota v1 enable
volume quota : success
+ gluster v quota v1 limit-usage / 100MB
volume quota : success
+ gluster v create v2 self:/export/sdb/y1 force
volume create: v2: success: please start the volume to access data
+ gluster v start v2
volume start: v2: success
+ gluster v stop v1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: v1: success
+ gluster v stop v2
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: v2: success
+ gluster v add-brick v1 self:/export/sdb/x2 force
volume add-brick: success
+ ls -l /var/lib/glusterd/hooks/1/start/post/
total 12
lrwxrwxrwx. 1 root root   74 Jul 18 20:00 S28Quota-root-xattr-heal.sh -> /var/lib/glusterd/hooks/1/add-brick/post/disabled-quota-root-xattr-heal.sh
-rwxr-xr-x. 1 root root 1797 Jun 29 17:07 S29CTDBsetup.sh
-rwxr-xr-x. 1 root root 3293 Jun 29 17:07 S30samba-start.sh
+ gluster v start v2
volume start: v2: success
+ sleep 2
+ ls -l /var/lib/glusterd/hooks/1/start/post/
total 8
-rwxr-xr-x. 1 root root 1797 Jun 29 17:07 S29CTDBsetup.sh
-rwxr-xr-x. 1 root root 3293 Jun 29 17:07 S30samba-start.sh
+ gluster v start v1
volume start: v1: success
+ getfattr -d -m. -e hex /export/sdb/x1
getfattr: Removing leading '/' from absolute path names
# file: export/sdb/x1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a7573725f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.quota.limit-set.1=0x0000000006400000ffffffffffffffff
trusted.glusterfs.volume-id=0x36c59571fba544c7b807fba925749bb7
+ getfattr -d -m. -e hex /export/sdb/x2
getfattr: Removing leading '/' from absolute path names
# file: export/sdb/x2
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a7573725f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.volume-id=0x36c59571fba544c7b807fba925749bb7
+ gluster v quota v1 list
/          N/A       N/A       N/A       N/A       N/A       N/A
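For reference, until the fix lands, the missing quota root xattr on the newly added brick can be copied by hand. This is only a minimal sketch based on the transcript above (brick paths and the trusted.glusterfs.quota.limit-set.1 name/value are taken from the getfattr output there); it is not a supported procedure:

  # Read the limit-set xattr from the original brick root and apply it to the
  # new brick root added by add-brick. setfattr accepts the 0x-prefixed hex
  # value printed by getfattr -e hex.
  VAL=$(getfattr --absolute-names -n trusted.glusterfs.quota.limit-set.1 -e hex \
        /export/sdb/x1 | awk -F= '/limit-set/ {print $2}')
  setfattr -n trusted.glusterfs.quota.limit-set.1 -v "$VAL" /export/sdb/x2
  # Verify: the new brick root should now carry the same limit-set xattr.
  getfattr -d -m. -e hex /export/sdb/x2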
REVIEW: https://review.gluster.org/17824 (Heal root xattr correctly upon an add-brick operation) posted (#1) for review on master by sanoj-unnikrishnan (sunnikri)
Created attachment 1301493 [details]
repro script
COMMIT: https://review.gluster.org/17824 committed in master by Raghavendra G (rgowdapp)
------
commit 777ad8f6a17d11e4582cf11d332a1a4d4c0c706f
Author: Sanoj Unnikrishnan <sunnikri>
Date:   Wed Jul 19 18:40:38 2017 +0530

    Heal root xattr correctly upon an add-brick operation

    When an add-brick is performed, the root path xattr is healed using a
    hook script. For a volume in the stopped state, the hook script is
    triggered in the post op of add-brick; if the volume is in the started
    state, the hook script instead runs on a subsequent volume start. The
    script unlinks itself after execution.

    The issue is that the current hook script does not work when multiple
    volumes are in the stopped state: a hook script meant for volume1 can
    get triggered during the start of volume2.

    Fix: create separate hook script links for individual volumes.

    Bug: 1472609
    Change-Id: If5f056509505fdbbbf73d3363e9966047ae6a3d3
    Signed-off-by: Sanoj Unnikrishnan <sunnikri>
    Reviewed-on: https://review.gluster.org/17824
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
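To illustrate the idea of the fix (not the exact glusterd implementation; the per-volume link naming scheme below is a hypothetical example), the shared link shown in the repro's ls output would be replaced by one link per volume, so starting v2 cannot consume the heal script staged for v1:

  # Hypothetical sketch of per-volume hook links. The hook directory and the
  # disabled-quota-root-xattr-heal.sh target are taken from the transcript;
  # the ".$vol" suffix is an assumed naming scheme for illustration only.
  HOOKDIR=/var/lib/glusterd/hooks/1
  for vol in v1 v2; do
      ln -s "$HOOKDIR/add-brick/post/disabled-quota-root-xattr-heal.sh" \
            "$HOOKDIR/start/post/S28Quota-root-xattr-heal.sh.$vol"
  done

With per-volume links, the self-unlinking behavior of the script only removes the link belonging to the volume that was actually started.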
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/