Description of problem: Setting a quota limit fails on a volume that has server.root-squash enabled; seen on the latest build.

Version-Release number of selected component (if applicable):
glusterfs-3.6.0.11-1.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a volume of 6x2 type and start it.
2. Set server.root-squash on, and set server.anonuid and server.anongid.
3. Enable quota.
4. Try to set a quota limit.

Actual results:
Step 4 fails:

[root@nfs1 ~]# gluster volume quota dist-rep limit-usage / 10GB
quota command failed : setxattr of 'trusted.glusterfs.quota.limit-set' failed on /var/run/gluster/dist-rep/. Reason : Permission denied

[root@nfs1 ~]# gluster volume info dist-rep

Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 479f93d9-ed9b-4097-8d95-7a0657ee912f
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.62:/bricks/d1r1
Brick2: 10.70.37.215:/bricks/d1r2
Brick3: 10.70.37.44:/bricks/d2r1
Brick4: 10.70.37.201:/bricks/d2r2
Brick5: 10.70.37.62:/bricks/d3r1
Brick6: 10.70.37.215:/bricks/d3r2
Brick7: 10.70.37.44:/bricks/d4r1
Brick8: 10.70.37.201:/bricks/d4r2
Brick9: 10.70.37.62:/bricks/d5r1
Brick10: 10.70.37.215:/bricks/d5r2
Brick11: 10.70.37.44:/bricks/d6r1
Brick12: 10.70.37.201:/bricks/d6r2
Options Reconfigured:
server.root-squash: on
server.anonuid: 502
server.anongid: 501
features.quota: on

Expected results:
Setting the quota limit should succeed.

Additional info:
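For reference, the reproduction steps above correspond roughly to the following command sequence (volume name, uid/gid values and the limit are taken from this report; the brick list is shortened here and only illustrative):

# 1. create the 6x2 distributed-replicate volume and start it (brick list abbreviated)
gluster volume create dist-rep replica 2 10.70.37.62:/bricks/d1r1 10.70.37.215:/bricks/d1r2 ... 10.70.37.201:/bricks/d6r2
gluster volume start dist-rep
# 2. enable root-squash and set the anonymous uid/gid
gluster volume set dist-rep server.root-squash on
gluster volume set dist-rep server.anonuid 502
gluster volume set dist-rep server.anongid 501
# 3. enable quota
gluster volume quota dist-rep enable
# 4. set a limit -- this is the step that fails with "Permission denied"
gluster volume quota dist-rep limit-usage / 10GB

The failure looks like a side effect of server.root-squash: the limit is written as the trusted.glusterfs.quota.limit-set xattr through the internal auxiliary mount under /var/run/gluster/dist-rep/, and with root-squash enabled that root-issued setxattr is apparently mapped to the anonymous uid and denied.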
Marking it as a blocker, after discussing with Alok and Vivek.
The patch was submitted at https://code.engineering.redhat.com/gerrit/27014 and has been merged.
[root@nfs1 ~]# gluster volume set dist-rep root-squash on
volume set: success
[root@nfs1 ~]# gluster volume set dist-rep server.anonuid 502
volume set: success
[root@nfs1 ~]# gluster volume set dist-rep server.anongid 501
volume set: success
[root@nfs1 ~]# gluster volume info dist-rep

Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 98fb382d-a5ca-4cb6-bde1-579608485527
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.62:/bricks/d1r1
Brick2: 10.70.37.215:/bricks/d1r2
Brick3: 10.70.37.44:/bricks/d2r1
Brick4: 10.70.37.201:/bricks/d2r2
Brick5: 10.70.37.62:/bricks/d3r1
Brick6: 10.70.37.215:/bricks/d3r2
Brick7: 10.70.37.44:/bricks/d4r1
Brick8: 10.70.37.201:/bricks/d4r2
Brick9: 10.70.37.62:/bricks/d5r1
Brick10: 10.70.37.215:/bricks/d5r2
Brick11: 10.70.37.44:/bricks/d6r1
Brick12: 10.70.37.201:/bricks/d6r2
Options Reconfigured:
server.anongid: 501
server.anonuid: 502
server.root-squash: on
nfs.addr-namelookup: on
nfs.rpc-auth-reject: 10.70.35.33
features.quota-deem-statfs: on
features.quota: on
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable

[root@nfs1 ~]# gluster volume quota dist-rep list
quota: No quota configured on volume dist-rep
[root@nfs1 ~]# gluster volume quota dist-rep limit-usage / 250GB
volume quota : success
[root@nfs1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit  Soft-limit   Used    Available  Soft-limit exceeded?  Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/                                          250.0GB      80%       5.0GB    245.0GB           No                    No

Hence, moving the BZ to Verified.
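As an additional, optional cross-check (not part of the verification above): the configured limit is stored as the trusted.glusterfs.quota.limit-set extended attribute on the directory, so after a successful limit-usage it should also be visible directly on a brick root. A minimal sketch, assuming access to one of the brick hosts listed above:

# hypothetical check on a brick host; prints the raw limit xattr in hex
getfattr -n trusted.glusterfs.quota.limit-set -e hex /bricks/d1r1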
Hi Varun, can you please review the edited doc text for technical accuracy and sign off?
Pavithra, the doc text seems perfect.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html