Description of problem:
Quota fails to generate alerts on the slave in a geo-rep setup even after the slave has crossed the soft-limit. Following is the status of the quota:

[root@redmoon ~]# gluster v quota slave list
                  Path                   Hard-limit  Soft-limit     Used   Available
--------------------------------------------------------------------------------
/                                           300.0MB         80%  272.5MB      27.5MB

which means it has crossed the 80% quota limit, and the bricks should have logged quota alerts.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[root@redmoon ~]# less /var/log/glusterfs/bricks/bricks-brick1.log | grep "\sA\s"
[2013-10-16 07:16:46.186448] A [quota.c:3535:quota_log_usage] 0-slave-quota: Usage is above soft limit: 890.9MB used by /
[root@redmoon ~]# date -u
Wed Oct 16 10:43:09 UTC 2013

[root@redeye ~]# less /var/log/glusterfs/bricks/bricks-brick2.log | grep "\sA\s" ; date -u
[2013-10-16 07:16:45.538336] A [quota.c:3535:quota_log_usage] 0-slave-quota: Usage is above soft limit: 890.9MB used by /
Wed Oct 16 10:43:32 UTC 2013
[root@redeye ~]#

[root@redlemon ~]# less /var/log/glusterfs/bricks/bricks-brick3.log | grep "\sA\s"; date -u
[2013-10-16 07:16:42.275822] A [quota.c:3535:quota_log_usage] 0-slave-quota: Usage is above soft limit: 890.2MB used by /
Wed Oct 16 10:43:50 UTC 2013

[root@redcloud ~]# less /var/log/glusterfs/bricks/bricks-brick4.log | grep "\sA\s" ; date -u
[2013-10-16 07:16:41.430274] A [quota.c:3535:quota_log_usage] 0-slave-quota: Usage is above soft limit: 890.2MB used by /
Wed Oct 16 10:44:01 UTC 2013
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

The alerts above are from a previous run; they are not current.

Version-Release number of selected component (if applicable): 3.4.0.34rhs-1.el6rhs.x86_64

How reproducible: Tried only once

Steps to Reproduce:
1. Create and start a geo-rep relationship between the master and the slave.
2. Set the quota limit-usage on the slave.
3. Create data on the master and sync it to the slave, such that the 80% limit is crossed on the slave.
4. Check for the alert messages in the brick log files (a command sketch follows below).

Actual results:
No alerts in the brick log files.

Expected results:
An alert should be logged in the brick log files.

Additional info:
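For reference, a minimal sketch of the verification commands used above, assuming the slave volume is named "slave" and the brick logs live under /var/log/glusterfs/bricks/ on each brick host (names and paths taken from the excerpts above):

# Show quota usage on the slave volume (assumption: volume name is "slave")
gluster volume quota slave list

# Scan every brick log on this host for alert-level ("A") quota messages
grep "\sA\s" /var/log/glusterfs/bricks/*.log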
It is consistently reproducible. Steps I followed:
1. Create and start a geo-rep relationship between the master and the slave.
2. Set the quota limit-usage on the slave to 110M.
3. Create data on the master using the command "./crefi.py -n 10 --multi -d 10 -d 10 --size=100K /mnt/master/", which creates 1000 files of 100KB each, i.e. roughly 98M of data on the master.
4. Let it sync to the slave.
5. The soft-limit set on the slave by default is 80%; 80% of 110M is 88M, so reaching 98M should have logged an alert.
6. No alert was logged.
A sketch of the corresponding commands is given below.
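A minimal sketch of these steps as commands, assuming quota is already enabled on a slave volume named "slave", the geo-rep session from step 1 is already running, and the master volume is mounted at /mnt/master (the crefi.py invocation is quoted verbatim from the report):

# Step 2: set a 110 MB limit on the slave volume root
gluster volume quota slave limit-usage / 110MB

# Step 3: generate ~98 MB of data on the master mount
./crefi.py -n 10 --multi -d 10 -d 10 --size=100K /mnt/master/

# Steps 5-6: after the data has synced, look for alert-level quota
# messages in the slave brick logs (run on each slave brick host)
grep "\sA\s" /var/log/glusterfs/bricks/*.log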
Per discussion with Shanks/Saurabh, moving it to Corbett
Per dev bug triage, moving it to future
This needs to be documented: when the quota hard-timeout is set to its default value of 30, the quota limit is checked only once every 30 seconds, and within that 30-second window it is possible for the quota hard-limit to be exceeded. To attain stricter quota enforcement, it is recommended to set the quota soft-timeout and hard-timeout to lower values so that the quota limit is checked more frequently and the possibility of the hard-limit being exceeded is reduced (see the sketch below).
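A minimal sketch of how this recommendation could be applied, assuming the slave volume is named "slave"; the value of 0 seconds (check on every operation) is illustrative, not mandated by the comment above:

# Lower the quota timeouts so usage is re-checked more often instead of
# being cached for the default timeout window (0 = check every operation)
gluster volume quota slave soft-timeout 0
gluster volume quota slave hard-timeout 0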