Bug 1286191 - dist-rep + quota : directory selfheal is not healing xattr 'trusted.glusterfs.quota.limit-set'; If you bring a replica pair down
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Version: 3.1
Hardware: x86_64 Linux
Priority: medium, Severity: high
Target Release: RHGS 3.1.3
Assigned To: Manikandan
QA Contact: krishnaram Karthick
Whiteboard: triaged, dht-quota
Keywords: Reopened, ZStream
Depends On: 1020713 1294478
Blocks: 1020127 1299184
Reported: 2015-11-27 07:21 EST by Susant Kumar Palai
Modified: 2016-06-23 00:57 EDT
Fixed In Version: glusterfs-3.7.9-1
Doc Type: Bug Fix
Clone Of: 1020713
Last Closed: 2016-06-23 00:57:58 EDT
Type: Bug

Comment 2 Vijaikumar Mallikarjuna 2015-12-30 06:10:27 EST

*** This bug has been marked as a duplicate of bug 1294478 ***
Comment 3 Vijaikumar Mallikarjuna 2015-12-30 06:15:56 EST
Though the problem is the same as bug 1294478, the test cases are different, so changing the bug status.

upstream patch: http://review.gluster.org/#/c/13100/
release-3.7 patch: http://review.gluster.org/13108
downstream patch: https://code.engineering.redhat.com/gerrit/#/c/64638/
Comment 6 krishnaram Karthick 2016-04-13 01:38:42 EDT
Verified the bug in glusterfs-server-3.7.9-1.el7rhgs.x86_64. The issue is no longer seen.

Steps followed to verify the bug:

1) Created a dist-rep volume and set a quota limit on a sub-directory.
2) Killed all but one brick process of the volume.
3) Created a new sub-directory and set a quota limit on it from the fuse client.
4) From the backend, checked all bricks for the newly created directory - it was present only on the node whose brick process was still running.
5) Started all brick processes - gluster v start <vol> force.
6) Performed a lookup from the client; the newly created sub-directory was now visible on all nodes.
7) The quota limit-set xattr was present on both bricks (the one that had been down and the one that had stayed up).

Attributes on the node whose brick process had been down:

[root@dhcp47-90 ~]# getfattr -d -m . -e hex /bricks/brick1/ct/dht-test/test1/
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/ct/dht-test/test1/
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x38bc6708ee924ce1b651b10c155582f4
trusted.glusterfs.dht=0x00000000000000000000000000000000
trusted.glusterfs.quota.b400068c-7a2c-4103-945c-137831e09f2d.contri.1=0x000000000000000000000000000000000000000000000010
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set.1=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000010

Attributes on the node whose brick process had stayed up:
[root@dhcp46-94 ~]# getfattr -d -m . -e hex /bricks/brick1/ct/dht-test/test1/
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/ct/dht-test/test1/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.testvol-client-6=0x000000000000000000000000
trusted.gfid=0x38bc6708ee924ce1b651b10c155582f4
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
trusted.glusterfs.quota.b400068c-7a2c-4103-945c-137831e09f2d.contri.1=0x000000000000000000000000000000000000000000000010
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set.1=0x0000000040000000ffffffffffffffff ---> limit set 
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000010
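
For reference, the 16-byte limit-set value appears to pack two big-endian 64-bit integers: the hard limit in bytes, followed by a soft-limit field where all 0xff (i.e. -1) means no explicit soft limit was set and the volume default applies. This interpretation is inferred from the values above rather than stated in this report; a minimal bash sketch to decode the hard limit:

# decode the hard limit from the hex value shown above (leading 0x stripped)
VAL=0000000040000000ffffffffffffffff
HARD=$((16#${VAL:0:16}))                       # first 8 bytes, big-endian
printf 'hard limit: %d bytes (%d GiB)\n' "$HARD" $((HARD / 1024 / 1024 / 1024))
# prints: hard limit: 1073741824 bytes (1 GiB)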

8) From the client, tried writing data beyond the configured limit. Writes failed once the limit was crossed.
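
A minimal command-line sketch of the verification flow above, for reference. The volume name (testvol) and brick path (/bricks/brick1/ct) match the getfattr output; the node names, mount point, and limit values are illustrative assumptions:

# 1) dist-rep volume with quota enabled and a limit on a sub-directory
gluster volume create testvol replica 2 node{1..4}:/bricks/brick1/ct force
gluster volume start testvol
gluster volume quota testvol enable
mount -t glusterfs node1:/testvol /mnt/testvol
mkdir -p /mnt/testvol/dht-test
gluster volume quota testvol limit-usage /dht-test 2GB

# 2-4) kill all but one brick process (pids from 'gluster volume status testvol'),
#      then create a new sub-directory and set a limit on it from the client;
#      the directory and its limit-set xattr appear only on the surviving brick
mkdir /mnt/testvol/dht-test/test1
gluster volume quota testvol limit-usage /dht-test/test1 1GB

# 5-7) restart the killed bricks and trigger directory self-heal with a lookup;
#      the limit-set xattr should then also appear on the previously-down bricks
gluster volume start testvol force
stat /mnt/testvol/dht-test/test1
getfattr -d -m . -e hex /bricks/brick1/ct/dht-test/test1/    # run on each node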

Marking this bug as verified.
Comment 10 errata-xmlrpc 2016-06-23 00:57:58 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240
