Bug 1286191 - dist-rep + quota : directory selfheal is not healing xattr 'trusted.glusterfs.quota.limit-set'; If you bring a replica pair down
Summary: dist-rep + quota : directory selfheal is not healing xattr 'trusted.glusterfs...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Manikandan
QA Contact: krishnaram Karthick
URL:
Whiteboard: triaged, dht-quota
Depends On: 1020713 1294478
Blocks: 1020127 1299184
 
Reported: 2015-11-27 12:21 UTC by Susant Kumar Palai
Modified: 2016-06-23 04:57 UTC (History)
CC List: 17 users

Fixed In Version: glusterfs-3.7.9-1
Doc Type: Bug Fix
Doc Text:
Clone Of: 1020713
Environment:
Last Closed: 2016-06-23 04:57:58 UTC
Embargoed:


Attachments


Links
System ID: Red Hat Product Errata RHBA-2016:1240
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Gluster Storage 3.1 Update 3
Last Updated: 2016-06-23 08:51:28 UTC

Comment 2 Vijaikumar Mallikarjuna 2015-12-30 11:10:27 UTC

*** This bug has been marked as a duplicate of bug 1294478 ***

Comment 3 Vijaikumar Mallikarjuna 2015-12-30 11:15:56 UTC
Though the problem is the same as in bug 1294478, the test cases are different, so the bug status is being changed.

upstream patch: http://review.gluster.org/#/c/13100/
release-3.7 patch: http://review.gluster.org/13108
downstream patch: https://code.engineering.redhat.com/gerrit/#/c/64638/

Comment 6 krishnaram Karthick 2016-04-13 05:38:42 UTC
Verified the bug on glusterfs-server-3.7.9-1.el7rhgs.x86_64. The issue is no longer seen.

Steps followed to verify the bug (an illustrative command sketch is appended at the end of this comment):

1) Created a dist-rep volume and set a quota usage limit on a sub-directory.
2) Killed all but one brick process of the volume.
3) Created a new sub-directory and set a quota limit on it from the fuse client.
4) From the backend, checked on all bricks whether the newly created directory was present; it existed only on the node whose brick process was still running.
5) Started all brick processes again: gluster v start <vol> force
6) Performed a lookup from the client; the newly created sub-directory was now visible on all nodes.
7) The quota limit xattr was set on both bricks (the one that had been down and the one that had stayed up); a decoding sketch of the limit-set value follows step 8.
Attributes on the node whose brick process had been down:

[root@dhcp47-90 ~]# getfattr -d -m . -e hex /bricks/brick1/ct/dht-test/test1/
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/ct/dht-test/test1/
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x38bc6708ee924ce1b651b10c155582f4
trusted.glusterfs.dht=0x00000000000000000000000000000000
trusted.glusterfs.quota.b400068c-7a2c-4103-945c-137831e09f2d.contri.1=0x000000000000000000000000000000000000000000000010
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set.1=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000010

Attributes on the node whose brick process had remained up:
[root@dhcp46-94 ~]# getfattr -d -m . -e hex /bricks/brick1/ct/dht-test/test1/
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/ct/dht-test/test1/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.testvol-client-6=0x000000000000000000000000
trusted.gfid=0x38bc6708ee924ce1b651b10c155582f4
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
trusted.glusterfs.quota.b400068c-7a2c-4103-945c-137831e09f2d.contri.1=0x000000000000000000000000000000000000000000000010
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set.1=0x0000000040000000ffffffffffffffff ---> limit set 
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000010

8) From the client, tried writing data beyond the set limit. Writes failed once the limit was crossed.
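
For reference, the healed limit-set value shown above can be decoded by hand. This is a minimal sketch, assuming the usual glusterfs quota layout in which trusted.glusterfs.quota.limit-set.1 packs two big-endian signed 64-bit integers: the hard limit in bytes followed by the soft-limit field (all 0xff, i.e. -1, when no explicit soft limit is stored and the volume default percentage applies).

# Hard limit = first 8 bytes of the value shown above (assumed big-endian).
# bash's printf accepts a 0x-prefixed argument and prints the decimal byte count:
printf '%d\n' 0x0000000040000000   # -> 1073741824 bytes (1 GiB)
# Soft-limit field = last 8 bytes; 0xffffffffffffffff is -1 as a signed
# 64-bit value, taken here to mean no explicit soft limit was stored.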

Marking this bug as verified.
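
For completeness, the verification steps above map roughly onto the command sequence below. This is only an illustrative sketch: the mount point /mnt/testvol, the 1GB limit value and the <node>/<brick-pid> placeholders are assumptions, not taken verbatim from the original run; only the volume name testvol and the brick path /bricks/brick1/ct appear in the xattr output above.

# 1) Create and start a dist-rep volume, enable quota, set a limit on a sub-dir
gluster volume create testvol replica 2 <node>:/bricks/brick1/ct ...
gluster volume start testvol
gluster volume quota testvol enable
mount -t glusterfs <node>:/testvol /mnt/testvol
mkdir /mnt/testvol/dht-test
gluster volume quota testvol limit-usage /dht-test 1GB

# 2) Kill all but one brick process (PIDs from `gluster volume status testvol`)
kill <brick-pid>

# 3) Create a new sub-directory from the fuse client and set a limit on it
mkdir /mnt/testvol/dht-test/test1
gluster volume quota testvol limit-usage /dht-test/test1 1GB

# 4) Check on each node whether the new directory reached the brick
ls -ld /bricks/brick1/ct/dht-test/test1

# 5) Bring the killed bricks back up
gluster volume start testvol force

# 6) Trigger directory self-heal with a lookup from the client
stat /mnt/testvol/dht-test/test1

# 7) Confirm the limit-set xattr is present on every brick copy
getfattr -d -m . -e hex /bricks/brick1/ct/dht-test/test1

# 8) Write past the limit from the client; writes should fail with EDQUOT
dd if=/dev/zero of=/mnt/testvol/dht-test/test1/file bs=1M count=1200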

Comment 10 errata-xmlrpc 2016-06-23 04:57:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

