Description of problem:
I have more than one volume and am creating data on both, but I am getting the quota alert messages for only one of the volumes, not for the other.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.59rhs-1.el6rhs.x86_64

How reproducible:
Happened to be seen this time.

Steps to Reproduce:
1. Create two volumes and start them.
2. Enable quota on both volumes.
3. Set limits on the root/directory/subdirectory, then mount the volumes over NFS and create directories and data (see the command sketch after the actual results below).
4. Check the alert messages in the brick logs.

Actual results:

Volumes under consideration:

[root@quota6 ~]# gluster volume info

Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 7277b955-5dea-4352-bf6e-c7c3476a6714
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: quota5:/rhs/brick1/d1r1
Brick2: quota6:/rhs/brick1/d1r2
Brick3: quota7:/rhs/brick1/d2r1
Brick4: quota8:/rhs/brick1/d2r2
Brick5: quota5:/rhs/brick1/d3r1
Brick6: quota6:/rhs/brick1/d3r2
Brick7: quota7:/rhs/brick1/d4r1
Brick8: quota8:/rhs/brick1/d4r2
Brick9: quota5:/rhs/brick1/d5r1
Brick10: quota6:/rhs/brick1/d5r2
Brick11: quota7:/rhs/brick1/d6r1
Brick12: quota8:/rhs/brick1/d6r2
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on

Volume Name: dist-rep2
Type: Distributed-Replicate
Volume ID: ce40a89b-a850-4d77-af70-5a30d714b749
Status: Started
Number of Bricks: 7 x 2 = 14
Transport-type: tcp
Bricks:
Brick1: quota5:/rhs/brick1/d1r12
Brick2: quota6:/rhs/brick1/d1r22
Brick3: quota7:/rhs/brick1/d2r12
Brick4: quota8:/rhs/brick1/d2r22
Brick5: quota5:/rhs/brick1/d3r12
Brick6: quota6:/rhs/brick1/d3r22
Brick7: quota7:/rhs/brick1/d4r12
Brick8: quota8:/rhs/brick1/d4r22
Brick9: quota5:/rhs/brick1/d5r12
Brick10: quota6:/rhs/brick1/d5r22
Brick11: quota7:/rhs/brick1/d6r12-add
Brick12: quota8:/rhs/brick1/d6r22-add
Brick13: quota7:/rhs/brick1/d6r12-add1
Brick14: quota8:/rhs/brick1/d6r22-add1
Options Reconfigured:
features.hard-timeout: 10s
features.soft-timeout: 60s
features.default-soft-limit: 75%
features.quota-deem-statfs: on
features.quota: on

Quota list stats for volume dist-rep:

[root@quota6 ~]# gluster vol quota dist-rep list /
                  Path                   Hard-limit Soft-limit   Used  Available  Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/                                          8.0GB       80%       8.0GB  0Bytes            Yes                  Yes

Quota list stats for volume dist-rep2:

[root@quota6 ~]# gluster vol quota dist-rep2 list
                  Path                   Hard-limit Soft-limit   Used  Available  Soft-limit exceeded? Hard-limit exceeded?
---------------------------------------------------------------------------------------------------------------------------
/                                         60.0GB       75%      46.8GB  13.2GB            Yes                   No
/dir5                                      5.0GB       75%       5.0GB  0Bytes            Yes                  Yes
/dir6                                     10.0GB       50%      10.0GB  0Bytes            Yes                  Yes
/dir7                                      5.0GB       75%       5.0GB  0Bytes            Yes                  Yes
/dir8                                     10.0GB       65%      10.0GB  0Bytes            Yes                  Yes

On volume dist-rep I was creating data in the root of the volume, and the brick logs do contain the "A" (alert) messages once the 80% soft limit is crossed, as can be seen here:

[root@quota5 ~]# less /var/log/glusterfs/bricks/rhs-brick1-d[1-5]r1.log | grep "\sA\s"
[2014-02-12 10:56:59.461592] A [quota.c:3670:quota_log_usage] 0-dist-rep-quota: Usage is above soft limit: 6.4GB used by /
[2014-02-12 10:56:50.100547] A [quota.c:3670:quota_log_usage] 0-dist-rep-quota: Usage is above soft limit: 6.4GB used by /
[2014-02-12 10:56:58.356114] A [quota.c:3670:quota_log_usage] 0-dist-rep-quota: Usage is above soft limit: 6.4GB used by

On volume dist-rep2 I was creating data in the directory /dir8; the soft limit for this directory was changed to 65% and the soft limit on the root of the volume was changed to 75%. The hard limit set on the root of dist-rep2 is 60GB, so there should be an alert message at 45GB, but it is not there. Similarly, for the directory in question, /dir8, the limit set is 10GB, and no "A" message is seen after its soft limit is crossed, as can be seen here:

[root@quota5 ~]# less /var/log/glusterfs/bricks/rhs-brick1-d[1-5]r12.log | grep "\sA\s"
[2014-02-12 06:40:11.196986] A [quota.c:3664:quota_log_usage] 0-dist-rep2-quota: Usage crossed soft limit: 16.0GB used by /
[2014-02-12 08:29:15.727665] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 4.9GB used by /dir5/
[2014-02-12 08:54:00.100066] A [quota.c:3664:quota_log_usage] 0-dist-rep2-quota: Usage crossed soft limit: 2.5GB used by /dir6/
[2014-02-12 10:00:01.769619] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 3.8GB used by /dir7/
[2014-02-12 10:22:06.988049] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 3.7GB used by /dir8/
[2014-02-12 06:40:28.005361] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 16.1GB used by /
[2014-02-12 08:29:11.603155] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 4.9GB used by /dir5
[2014-02-12 08:55:39.821630] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 3.0GB used by /dir6/
[2014-02-12 10:00:05.750921] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 3.8GB used by /dir7/
[2014-02-12 10:21:44.384454] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 3.5GB used by /dir8/
[2014-02-12 06:40:22.364693] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 16.1GB used by /
[2014-02-12 08:29:29.732604] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 4.9GB used by /dir5/
[2014-02-12 08:55:31.642108] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 3.0GB used by /dir6/
[2014-02-12 10:00:14.605694] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 3.9GB used by /dir7/
[2014-02-12 10:21:23.235106] A [quota.c:3670:quota_log_usage] 0-dist-rep2-quota: Usage is above soft limit: 3.4GB used by /dir8/

Similar output is present on the other nodes of the cluster.

NOTE: data creation was happening on both volumes at the same time.
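For reference, a minimal sketch of the quota setup implied by the steps above. The exact command history was not captured in this report: the volume names, limits and percentages are taken from the quota list output, the mount points are hypothetical, and the limit-usage form shown (path, hard limit, optional soft-limit percentage) is the standard syntax for this release, so treat this as an approximation rather than the exact commands used.

# gluster volume quota dist-rep enable
# gluster volume quota dist-rep limit-usage / 8GB 80%
# gluster volume quota dist-rep2 enable
# gluster volume quota dist-rep2 limit-usage / 60GB 75%
# gluster volume quota dist-rep2 limit-usage /dir8 10GB 65%
# mount -t nfs -o vers=3 quota5:/dist-rep /mnt/dist-rep       (mount point hypothetical)
# mount -t nfs -o vers=3 quota5:/dist-rep2 /mnt/dist-rep2     (mount point hypothetical)

The remaining directories (/dir5, /dir6, /dir7) were given limits in the same way, per the quota list output above.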
Expected results:
"A" alert messages should be logged for both volumes whenever a configured soft limit is crossed.

Additional info:
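The thresholds at which alerts were expected on dist-rep2, derived from the list output above: 75% of 60GB = 45GB for /, and 65% of 10GB = 6.5GB for /dir8. A quick way to check every brick log for alert lines (same log location and grep pattern as in the actual results; looping over all four nodes via ssh is just an illustration, not the command actually used):

# for h in quota5 quota6 quota7 quota8; do ssh $h 'grep "\sA\s" /var/log/glusterfs/bricks/*.log'; done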
As 2.1 is EOL'ed, closing this bug; the corresponding 3.1 bug has been filed as bug #1282725.