Description of problem:
When the disk quota is exceeded, the following error message is seen in the brick logs:

E [quota.c:1197:quota_check_limit] 0-ecvol-quota: Failed to check quota size limit

Version-Release number of selected component (if applicable):
[root@darkknight bricks]# rpm -qa | grep glusterfs
glusterfs-3.7.0-3.el6rhs.x86_64
glusterfs-server-3.7.0-3.el6rhs.x86_64
glusterfs-api-3.7.0-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.0-3.el6rhs.x86_64
glusterfs-libs-3.7.0-3.el6rhs.x86_64
glusterfs-fuse-3.7.0-3.el6rhs.x86_64
glusterfs-cli-3.7.0-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-3.el6rhs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume
2. Mount the volume as a FUSE or NFS mount
3. Enable quota on the volume and set limit-usage on the root of the volume
4. Create files from the mount until the disk quota is exceeded (see the command sketch below)

Actual results:
brick logs:
===============================
[2015-06-09 08:39:13.015008] E [quota.c:1197:quota_check_limit] 0-vol0-quota: Failed to check quota size limit
[2015-06-09 08:39:13.015102] I [server-rpc-fops.c:1378:server_writev_cbk] 0-vol0-server: 16503: WRITEV 0 (2c3e9e68-1dfb-4c52-b67d-9a36309bfee4) ==> (Disk quota exceeded)
[2015-06-09 08:39:13.028128] E [quota.c:1197:quota_check_limit] 0-vol0-quota: Failed to check quota size limit
[2015-06-09 08:39:13.028193] I [server-rpc-fops.c:1378:server_writev_cbk] 0-vol0-server: 16504: WRITEV 0 (2c3e9e68-1dfb-4c52-b67d-9a36309bfee4) ==> (Disk quota exceeded)
[2015-06-09 08:39:13.042611] E [quota.c:1197:quota_check_limit] 0-vol0-quota: Failed to check quota size limit
[2015-06-09 08:39:13.042673] I [server-rpc-fops.c:1378:server_writev_cbk] 0-vol0-server: 16506: WRITEV 0 (2c3e9e68-1dfb-4c52-b67d-9a36309bfee4) ==> (Disk quota exceeded)
[2015-06-09 08:39:13.056257] E [quota.c:1197:quota_check_limit] 0-vol0-quota: Failed to check quota size limit

Expected results:
The "Failed to check quota size limit" message should not be logged at error level in the brick logs when a write fails only because the disk quota is exceeded; hitting the limit (EDQUOT) is an expected condition, not a failure of the quota check itself.

Additional info:
[root@darkknight bricks]# gluster v info vol0

Volume Name: vol0
Type: Distributed-Replicate
Volume ID: fd5c2240-113e-42a6-9271-8016d5b48c3f
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.33.214:/rhs/brick1/b1
Brick2: 10.70.33.219:/rhs/brick1/b2
Brick3: 10.70.33.225:/rhs/brick1/b3
Brick4: 10.70.44.13:/rhs/brick1/b4
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
features.uss: enable
performance.readdir-ahead: on
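For reference, a minimal command sketch of the reproduction steps above. The hostnames, mount point, quota limit, and file sizes are placeholders chosen for illustration, not values taken from this report:

# 1. Create a 2x2 distributed-replicate volume (hostnames/brick paths are placeholders)
gluster volume create vol0 replica 2 \
    host1:/rhs/brick1/b1 host2:/rhs/brick1/b2 \
    host3:/rhs/brick1/b3 host4:/rhs/brick1/b4
gluster volume start vol0

# 2. FUSE-mount the volume
mount -t glusterfs host1:/vol0 /mnt/vol0

# 3. Enable quota and set limit-usage on the volume root (100MB is an arbitrary choice)
gluster volume quota vol0 enable
gluster volume quota vol0 limit-usage / 100MB

# 4. Write past the limit; writes start failing with "Disk quota exceeded" (EDQUOT)
dd if=/dev/zero of=/mnt/vol0/bigfile bs=1M count=200

# Check a brick log for the spurious error (log file name is derived from the brick path)
grep "Failed to check quota size limit" /var/log/glusterfs/bricks/rhs-brick1-b1.log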
Patch submitted upstream: http://review.gluster.org/11135
The errors reported in this bug are no longer seen. Marking this bug as verified on build glusterfs-cli-3.7.1-12.el7rhgs.x86_64.
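For completeness, a sketch of one way to re-check this on the fixed build; the mount point and brick log file name are placeholders, and the grep pattern is the message quoted in this bug:

# Confirm the installed build
rpm -q glusterfs-server

# Exceed the quota again from the mount, then count occurrences of the old error
dd if=/dev/zero of=/mnt/vol0/bigfile bs=1M count=200
grep -c "Failed to check quota size limit" /var/log/glusterfs/bricks/rhs-brick1-b1.log
# Expect 0 on the fixed build; the informational WRITEV "Disk quota exceeded" messages
# may still appear, since they reflect the EDQUOT legitimately returned to the client.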
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1845.html