Bug 1472609 - Root path xattr does not heal correctly in certain cases when volume is in stopped state
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: quota
Version: mainline
Hardware: Unspecified
OS: All
Priority: low
Severity: high
Assigned To: Sanoj Unnikrishnan
Reported: 2017-07-19 02:10 EDT by Sanoj Unnikrishnan
Modified: 2017-12-08 12:34 EST
CC: 2 users

Fixed In Version: glusterfs-3.13.0
Type: Bug
Last Closed: 2017-12-08 12:34:10 EST


Attachments
repro script (521 bytes, text/plain), 2017-07-19 02:10 EDT, Sanoj Unnikrishnan
repro script (1.31 KB, text/plain), 2017-07-20 00:50 EDT, Sanoj Unnikrishnan

Description Sanoj Unnikrishnan 2017-07-19 02:10:31 EDT
Created attachment 1300826 [details]
repro script

[root@dhcp35-100 repros]# sh -x /tmp/3
+ gluster v create v1 self:/export/sdb/x1 force
volume create: v1: success: please start the volume to access data
+ gluster v start v1
volume start: v1: success
+ gluster v quota v1 enable
volume quota : success
+ gluster v quota v1 limit-usage / 100MB
volume quota : success
+ gluster v create v2 self:/export/sdb/y1 force
volume create: v2: success: please start the volume to access data
+ gluster v start v2
volume start: v2: success
+ gluster v stop v1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: v1: success
+ gluster v stop v2
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: v2: success
+ gluster v add-brick v1 self:/export/sdb/x2 force
volume add-brick: success
+ ls -l /var/lib/glusterd/hooks/1/start/post/
total 12
lrwxrwxrwx. 1 root root   74 Jul 18 20:00 S28Quota-root-xattr-heal.sh -> /var/lib/glusterd/hooks/1/add-brick/post/disabled-quota-root-xattr-heal.sh
-rwxr-xr-x. 1 root root 1797 Jun 29 17:07 S29CTDBsetup.sh
-rwxr-xr-x. 1 root root 3293 Jun 29 17:07 S30samba-start.sh
+ gluster v start v2
volume start: v2: success
+ sleep 2
+ ls -l /var/lib/glusterd/hooks/1/start/post/
total 8
-rwxr-xr-x. 1 root root 1797 Jun 29 17:07 S29CTDBsetup.sh
-rwxr-xr-x. 1 root root 3293 Jun 29 17:07 S30samba-start.sh
+ gluster v start v1
volume start: v1: success
+ getfattr -d -m. -e hex /export/sdb/x1
getfattr: Removing leading '/' from absolute path names
# file: export/sdb/x1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a7573725f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.quota.limit-set.1=0x0000000006400000ffffffffffffffff
trusted.glusterfs.volume-id=0x36c59571fba544c7b807fba925749bb7

+ getfattr -d -m. -e hex /export/sdb/x2
getfattr: Removing leading '/' from absolute path names
# file: export/sdb/x2
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a7573725f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.volume-id=0x36c59571fba544c7b807fba925749bb7

+ gluster v quota v1 list
/                                            N/A       N/A        N/A     N/A             N/A                  N/A
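The transcript above shows the failure: add-brick on stopped volume v1 installs the self-deleting heal hook link S28Quota-root-xattr-heal.sh, but starting v2 consumes it, so when v1 finally starts the new brick x2 never receives the trusted.glusterfs.quota.limit-set xattr and quota list reports N/A. A minimal sketch of that failure mode, using plain /tmp paths in place of /var/lib/glusterd (a hypothetical demo, not gluster code):

```shell
# A single shared hook link serves every stopped volume, so whichever
# volume starts first consumes it; the other volume is never healed.
HOOKS=/tmp/hooks-bug-demo/start/post
mkdir -p "$HOOKS"
# add-brick on stopped v1 creates the self-deleting heal link
ln -sf /bin/true "$HOOKS/S28Quota-root-xattr-heal.sh"

simulate_start() {
    # the hook script unlinks itself after execution
    s="$HOOKS/S28Quota-root-xattr-heal.sh"
    if [ -e "$s" ]; then "$s"; rm "$s"; fi
}

simulate_start   # 'gluster v start v2' consumes v1's heal link
simulate_start   # 'gluster v start v1' finds no link left to run
```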
Comment 1 Worker Ant 2017-07-19 09:15:38 EDT
REVIEW: https://review.gluster.org/17824 (Heal root xattr correctly upon an add-brick operation) posted (#1) for review on master by sanoj-unnikrishnan (sunnikri@redhat.com)
Comment 2 Sanoj Unnikrishnan 2017-07-20 00:50 EDT
Created attachment 1301493 [details]
repro script
Comment 3 Worker Ant 2017-09-14 13:55:34 EDT
COMMIT: https://review.gluster.org/17824 committed in master by Raghavendra G (rgowdapp@redhat.com) 
------
commit 777ad8f6a17d11e4582cf11d332a1a4d4c0c706f
Author: Sanoj Unnikrishnan <sunnikri@redhat.com>
Date:   Wed Jul 19 18:40:38 2017 +0530

    Heal root xattr correctly upon an add-brick operation
    
    When an add-brick is performed, the root path xattr is healed using a hook
    script. For a volume in the stopped state, the hook script is triggered in
    the post-op of add-brick; otherwise, if the volume is in the started state,
    the hook script runs on a subsequent volume start. The script unlinks
    itself after execution.
    The issue is that the current hook script does not work when multiple
    volumes are in the stopped state: a hook script meant for volume1 can get
    triggered during the start of volume2.
    
    Fix: create separate hook script links for individual volumes.
    
    Bug: 1472609
    Change-Id: If5f056509505fdbbbf73d3363e9966047ae6a3d3
    Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
    Reviewed-on: https://review.gluster.org/17824
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
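The fix described in the commit can be sketched as follows, again with /tmp standing in for /var/lib/glusterd and an assumed per-volume link naming scheme (this is an illustration of the idea, not the actual glusterd code):

```shell
# Give each volume its own heal link so one volume's start cannot
# consume another volume's pending heal.
HOOKS=/tmp/hooks-fix-demo
mkdir -p "$HOOKS/add-brick/post" "$HOOKS/start/post"
ln -sf /bin/true "$HOOKS/add-brick/post/disabled-quota-root-xattr-heal.sh"

link_heal_script() {
    # per-volume link name, e.g. S28Quota-root-xattr-heal-v1.sh (assumed naming)
    ln -sf "$HOOKS/add-brick/post/disabled-quota-root-xattr-heal.sh" \
           "$HOOKS/start/post/S28Quota-root-xattr-heal-$1.sh"
}

link_heal_script v1      # add-brick on stopped v1
link_heal_script v2      # add-brick on stopped v2
# starting v2 runs and removes only its own link; v1's link survives
rm "$HOOKS/start/post/S28Quota-root-xattr-heal-v2.sh"
```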
Comment 4 Shyamsundar 2017-12-08 12:34:10 EST
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/
