Bug 1030765 - Quota: Falsely reports `Disk quota exceeded'
Status: CLOSED WORKSFORME
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: quota
Version: 2.1
Hardware: x86_64 Linux
Priority: medium
Severity: high
Assigned To: Vijaikumar Mallikarjuna
QA Contact: storage-qa-internal@redhat.com
Keywords: ZStream
Depends On:
Blocks:
Reported: 2013-11-15 00:22 EST by Sachidananda Urs
Modified: 2016-09-17 08:36 EDT (History)
CC List: 8 users

Doc Type: Bug Fix
Last Closed: 2015-11-17 03:56:22 EST
Type: Bug

Attachments: None
Description Sachidananda Urs 2013-11-15 00:22:54 EST
Description of problem:

Quota reports EDQUOT despite having enough quota space.


Version-Release number of selected component (if applicable):
glusterfs 3.4.0.44rhs

Steps to Reproduce:
1. Create a distributed-replicate volume (2x2) and enable quota.
2. In a loop, create a couple of thousand directories, set a quota limit on each, and write around 100MB of data into each directory (see the sketch after this list).
3. Shut down (force off) one of the virtual machines.
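
A minimal sketch of these steps, assuming the volume is named master and the data is written under a top-level quota directory on the mount (the volume name and loop follow the exact command in comment 2; /mnt/foo is the native mount shown in the df output below):

# Enable quota on the volume, then create directories in a loop,
# set a 99GB limit on each, and write ~100MB into each one.
gluster volume quota master enable
mkdir -p /mnt/foo/quota
cd /mnt/foo/quota
for i in {1..2000}; do
    mkdir dir_$i
    gluster volume quota master limit-usage /quota/dir_$i 99GB
    dd if=/dev/zero of=dir_$i/big.file bs=10M count=10
done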


Actual results:
Writing data reports EDQUOT

volume quota : success
dd: writing `dir_1260/big.file': Disk quota exceeded
dd: closing output file `dir_1260/big.file': Disk quota exceeded

Expected results:
We should be able to write data, as we are well within the limit.

Additional info:

Output of quota list (columns: Path, Hard-limit, Soft-limit, Used, Available):
/quota/dir_1073                           99.0GB       80%      0Bytes  99.0GB
/quota/dir_1074                           99.0GB       80%     100.0MB  98.9GB
/quota/dir_1075                           99.0GB       80%      0Bytes  99.0GB
/quota/dir_1076                           99.0GB       80%      0Bytes  99.0GB
/quota/dir_1077                           99.0GB       80%      0Bytes  99.0GB
/quota/dir_1078                           99.0GB       80%      0Bytes  99.0GB
/quota/dir_1079                           99.0GB       80%      0Bytes  99.0GB
/quota/dir_1080                           99.0GB       80%     100.0MB  98.9GB
/quota/dir_1081                           99.0GB       80%      0Bytes  99.0GB
/quota/dir_1082                           99.0GB       80%     100.0MB  98.9GB
/quota/dir_1083                           99.0GB       80%     100.0MB  98.9GB
/quota/dir_1084                           99.0GB       80%      0Bytes  99.0GB
/quota/dir_1085                           99.0GB       80%     100.0MB  98.9GB

[root@upgrade-1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_upgrade1-lv_root
                       42G  2.1G   37G   6% /
tmpfs                 7.9G     0  7.9G   0% /dev/shm
/dev/vda1             485M   32M  428M   7% /boot
/dev/mapper/RHS_vg1-RHS_lv1
                      100G   48G   53G  48% /rhs/brick1
localhost:master      200G  105G   96G  53% /mnt/foo
localhost:master      200G  105G   96G  53% /var/run/gluster/master
localhost:master      200G  105G   96G  53% /tmp/mntoYuaNa
localhost:master      200G  105G   96G  53% /tmp/mntRNnlJH

=====================

Some writes succeed while others fail:

volume quota : success
dd: writing `dir_1273/big.file': Disk quota exceeded
dd: closing output file `dir_1273/big.file': Disk quota exceeded
volume quota : success
dd: writing `dir_1274/big.file': Disk quota exceeded
dd: closing output file `dir_1274/big.file': Disk quota exceeded
volume quota : success
dd: writing `dir_1275/big.file': Disk quota exceeded
dd: closing output file `dir_1275/big.file': Disk quota exceeded
volume quota : success
10+0 records in
10+0 records out
104857600 bytes (105 MB) copied, 4.27988 s, 24.5 MB/s
volume quota : success
dd: writing `dir_1277/big.file': Disk quota exceeded
dd: closing output file `dir_1277/big.file': Disk quota exceeded
volume quota : success
10+0 records in
10+0 records out
104857600 bytes (105 MB) copied, 7.15759 s, 14.6 MB/s
volume quota : success
10+0 records in
10+0 records out
104857600 bytes (105 MB) copied, 5.43451 s, 19.3 MB/s


sosreport updated.
Comment 2 Sachidananda Urs 2013-11-15 01:36:22 EST
The following command was run on one of the servers:

cd /var/run/gluster/master/quota
for i in {1..2000}; do mkdir dir_$i; gluster volume quota master limit-usage /quota/dir_$i 99GB; dd if=/dev/zero of=dir_$i/big.file bs=10M count=10; done

The auxiliary mount created by quota (/var/run/gluster/master/quota) is used for the I/O.
Comment 3 Sachidananda Urs 2013-11-15 01:41:15 EST
sosreports: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1030765/
Comment 4 krishnan parthasarathi 2013-11-15 01:53:58 EST
Sac,
Could you check whether this behaviour is also seen on a 'normal' mount? The auxiliary mount is not meant to be used for data-related activities. It is used internally for control operations such as setting and removing quota limits.
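
For comparison, a 'normal' client mount, which is subject to quota enforcement, can be created as in this sketch (the mount point /mnt/normal is a placeholder):

# Mount the volume with a regular GlusterFS native client and
# run the same dd writes against it.
mkdir -p /mnt/normal
mount -t glusterfs localhost:master /mnt/normal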
Comment 5 Sachidananda Urs 2013-11-15 06:19:23 EST
Sure, I will try that. As you can see from the output above, this issue was quite intermittent even on the auxiliary mount: writes would succeed and fail unpredictably. But how is this mount different from a 'normal' mount?
Comment 6 krishnan parthasarathi 2013-11-18 01:57:42 EST
Sac,

We don't enforce quota on 'special clients', which are identified by a negative --client-pid argument supplied to the GlusterFS native mount process. This ensures that internal processes such as the self-heal daemon and the rebalance process don't fail their internal operations because of quota enforcement.

Hope that helps.
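
One way to check whether a given client mount runs as a special client is to inspect the glusterfs process arguments for the client-pid value; a sketch, assuming the option appears on the command line:

# Show the client-pid (if any) of running glusterfs client processes.
# A negative value marks a special client that bypasses quota enforcement.
ps axo command | grep '[g]lusterfs' | grep -o 'client-pid[= ]*[-0-9]*'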
Comment 7 Sachidananda Urs 2013-11-18 03:57:55 EST
Sure, that helped, thanks. I'm trying on a normal mount.
One question, though: in our case the mount reports EDQUOT despite having enough space (and the directory usage is well within the quota limits).
As you can see, it fails intermittently, not always. Is the behavior unpredictable when the auxiliary mount is used? If so, I will raise a doc bug to have this noted.
Comment 9 Vijaikumar Mallikarjuna 2015-11-17 03:56:22 EST
Please file a new bug if this issue is still seen with 3.1.x.
