Bug 1019752 - [Quota] quota fails to generate alerts on the slave in a geo-rep setup even after the slave has crossed the soft-limit
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: quota
2.1
x86_64 Linux
medium Severity high
: ---
: ---
Assigned To: krishnan parthasarathi
storage-qa-internal@redhat.com
:
Depends On: 1182890 1182921
Blocks:
 
Reported: 2013-10-16 07:01 EDT by Vijaykumar Koppad
Modified: 2016-09-17 08:36 EDT (History)
8 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
Expected behavior of quota: if the rate of I/O is high relative to the hard-timeout and soft-timeout values, there is a possibility of the quota limit being exceeded. For example, with an I/O rate of 1GB/sec, a hard-timeout of 5sec (the default) and a soft-timeout of 60sec (the default), the quota limit may be exceeded by ~30GB - 60GB. To attain strict checking of the quota limit, lower the value of the hard-timeout and soft-timeout. Commands to set the timeouts: gluster volume quota <volume-name> soft-timeout 0 and gluster volume quota <volume-name> hard-timeout 0
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-01-16 03:09:06 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Vijaykumar Koppad 2013-10-16 07:01:28 EDT
Description of problem: quota fails to generate alerts on the slave in a geo-rep setup even after the slave has crossed the soft-limit.

Following is the status of the quota, 
[root@redmoon ~]# gluster v quota slave list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                        300.0MB       80%     272.5MB  27.5MB


which means it has crossed the 80% soft-limit, so the brick logs should contain quota alerts:


>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[root@redmoon ~]# less /var/log/glusterfs/bricks/bricks-brick1.log | grep "\sA\s"
[2013-10-16 07:16:46.186448] A [quota.c:3535:quota_log_usage] 0-slave-quota: Usage is above soft limit: 890.9MB used by /
[root@redmoon ~]# date -u
Wed Oct 16 10:43:09 UTC 2013

[root@redeye ~]# less /var/log/glusterfs/bricks/bricks-brick2.log | grep "\sA\s" ; date -u
[2013-10-16 07:16:45.538336] A [quota.c:3535:quota_log_usage] 0-slave-quota: Usage is above soft limit: 890.9MB used by /
Wed Oct 16 10:43:32 UTC 2013
[root@redeye ~]# 


[root@redlemon ~]# less /var/log/glusterfs/bricks/bricks-brick3.log | grep "\sA\s"; date -u 
[2013-10-16 07:16:42.275822] A [quota.c:3535:quota_log_usage] 0-slave-quota: Usage is above soft limit: 890.2MB used by /
Wed Oct 16 10:43:50 UTC 2013

[root@redcloud ~]# less /var/log/glusterfs/bricks/bricks-brick4.log | grep "\sA\s" ; date -u 
[2013-10-16 07:16:41.430274] A [quota.c:3535:quota_log_usage] 0-slave-quota: Usage is above soft limit: 890.2MB used by /
Wed Oct 16 10:44:01 UTC 2013

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

The alert shown above is a previous alert, not a current one.



Version-Release number of selected component (if applicable): 3.4.0.34rhs-1.el6rhs.x86_64


How reproducible: Tried only once


Steps to Reproduce:
1. Create and start a geo-rep relationship between the master and slave.
2. Set the quota limit-usage on the slave.
3. Create data on the master and sync it to the slave, such that the 80% limit is crossed on the slave.
4. Check for alert messages in the brick log files.
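Step 4 can be scripted by grepping the brick logs for alert-level ("A") lines, as the sessions below do. This is a minimal, self-contained sketch: the sample log content is inlined so it runs anywhere; in practice point LOG at a real brick log such as /var/log/glusterfs/bricks/bricks-brick1.log.

```shell
# Create a sample brick log with one alert-level ("A") line and one
# informational ("I") line; the alert line is copied from this report.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[2013-10-16 07:16:46.186448] A [quota.c:3535:quota_log_usage] 0-slave-quota: Usage is above soft limit: 890.9MB used by /
[2013-10-16 07:16:47.000000] I [some.c:1:fn] 0-slave: informational noise
EOF

# Alert lines carry the log level "A" between whitespace; this prints
# only the quota alert, not the informational line.
grep "\sA\s" "$LOG"
```

A missing match here (exit status 1, no output) is exactly the failure this bug describes.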

Actual results: No alerts in the brick log files.  


Expected results: Alerts should be logged in the brick log files.


Additional info:
Comment 2 Vijaykumar Koppad 2013-10-16 09:06:36 EDT
It is consistently reproducible. 

Steps I followed:

1. Create and start a geo-rep relationship between the master and slave.
2. Set the quota limit-usage on the slave to 110M.
3. Create data on the master using the command "./crefi.py -n 10 --multi -d 10 -d 10 --size=100K /mnt/master/", which creates 1000 files of 100KB each, i.e. roughly 98M of data on the master.
4. Let it sync to the slave.
5. The soft-limit set on the slave by default is 80%; 80% of 110 is 88, so 98M of data should have triggered an alert.
6. No alert was logged.
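The arithmetic behind steps 2-5 can be sketched as a small check: an alert is expected once usage crosses the soft-limit percentage of the configured hard limit. The helper name and values are illustrative (110MB limit, default 80% soft-limit, ~98MB synced), not part of the gluster codebase.

```python
def alert_expected(used_mb, hard_limit_mb, soft_limit_pct=80):
    """Return True once usage has crossed the soft limit."""
    soft_limit_mb = hard_limit_mb * soft_limit_pct / 100.0
    return used_mb >= soft_limit_mb

# 80% of 110MB is 88MB; 98MB of synced data is past that, so the
# brick logs should contain an alert.
print(alert_expected(98, 110))  # -> True
```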
Comment 3 Vivek Agarwal 2013-10-17 02:55:50 EDT
Per discussion with Shanks/Saurabh, moving it to Corbett
Comment 4 Vivek Agarwal 2013-11-26 02:01:12 EST
Per dev bug triage, moving it to future
Comment 6 Vijaikumar Mallikarjuna 2015-01-12 08:39:48 EST
This needs to be documented:

When the quota hard-timeout is set to the default value of 30,
the quota limit is checked once every 30 seconds, and during
that 30-second window there is a possibility of the quota
hard-limit being exceeded. To attain strict checking of the
quota limit, it is recommended to set the quota soft-timeout
and hard-timeout to lower values, so that the quota limit is
checked more frequently and the possibility of the quota
hard-limit being exceeded is reduced.
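The worst case described above amounts to simple arithmetic: with a hard-timeout of T seconds, usage is only re-checked every T seconds, so at an I/O rate of R the hard-limit can be overshot by roughly R * T before the next check. A sketch, with illustrative numbers (1GB/sec rate, the 30-second default timeout mentioned in this comment):

```python
def max_overshoot_gb(io_rate_gb_per_s, hard_timeout_s):
    """Rough upper bound on how far the quota hard-limit can be
    exceeded between two consecutive quota checks."""
    return io_rate_gb_per_s * hard_timeout_s

print(max_overshoot_gb(1, 30))  # -> 30 (GB), the 30-second default window
print(max_overshoot_gb(1, 0))   # -> 0, timeout 0 gives strict checking
```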
