*** This bug has been marked as a duplicate of bug 1294478 ***
Though the problem is the same as bug 1294478, the test cases are different, so changing the bug status.
upstream patch: http://review.gluster.org/#/c/13100/
release-3.7 patch: http://review.gluster.org/13108
downstream patch: https://code.engineering.redhat.com/gerrit/#/c/64638/
Verified the bug on glusterfs-server-3.7.9-1.el7rhgs.x86_64. The issue is no longer seen.

Steps followed to verify the bug:
1) Created a dist-rep volume and set a quota limit-usage on a sub-directory.
2) Killed all but one brick process of the volume.
3) Created a new sub-directory and set a quota limit on it from the fuse client.
4) From the backend, checked on all bricks whether the newly created directory was present - it was present only on the node whose brick process was running.
5) Started all brick processes: gluster v start <vol> force
6) Performed a lookup from the client; the newly created sub-directory was now seen on all nodes.
7) The quota limit was set on both bricks (the one that had been down and the one that had stayed up).

Attributes on the node whose brick process had been down:

[root@dhcp47-90 ~]# getfattr -d -m . -e hex /bricks/brick1/ct/dht-test/test1/
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/ct/dht-test/test1/
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x38bc6708ee924ce1b651b10c155582f4
trusted.glusterfs.dht=0x00000000000000000000000000000000
trusted.glusterfs.quota.b400068c-7a2c-4103-945c-137831e09f2d.contri.1=0x000000000000000000000000000000000000000000000010
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set.1=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000010

Attributes on the node whose brick process had stayed up:

[root@dhcp46-94 ~]# getfattr -d -m . -e hex /bricks/brick1/ct/dht-test/test1/
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/ct/dht-test/test1/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.testvol-client-6=0x000000000000000000000000
trusted.gfid=0x38bc6708ee924ce1b651b10c155582f4
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
trusted.glusterfs.quota.b400068c-7a2c-4103-945c-137831e09f2d.contri.1=0x000000000000000000000000000000000000000000000010
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set.1=0x0000000040000000ffffffffffffffff  ---> limit set
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000010

8) From the client, tried writing data above the limit set; writes failed once the actual limit was crossed.

Marking this bug as verified.
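For reference, the trusted.glusterfs.quota.limit-set.1 value above can be decoded by hand. This is a minimal sketch, assuming the value packs two signed 64-bit big-endian integers (hard limit in bytes, then soft limit, with -1 meaning the soft limit was not set explicitly); that layout is an assumption here, not taken from this report.

```python
import struct

# Value of trusted.glusterfs.quota.limit-set.1 from the getfattr output above.
raw = bytes.fromhex("0000000040000000ffffffffffffffff")

# Assumed layout: two signed 64-bit big-endian integers,
# hard limit in bytes followed by soft limit (-1 = not set).
hard, soft = struct.unpack(">qq", raw)

print(hard)           # 1073741824 -> a 1 GiB hard limit on the sub-directory
print(hard // 2**30)  # 1 (GiB)
print(soft)           # -1 -> soft limit left at its default
```

Under that assumption, the identical limit-set value on both bricks confirms the quota limit was healed onto the brick that had been down.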
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1240