Bug 848242

Summary: Problem implementing Gluster Quotas
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Vidya Sakar <vinaraya>
Component: glusterfs
Assignee: vpshastry <vshastry>
Status: CLOSED WORKSFORME
QA Contact: Sudhir D <sdharane>
Severity: high
Priority: medium
Version: unspecified
CC: admin, amarts, crlindiadc, gluster-bugs, nsathyan, rfortier, vbellur
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Clone Of: GLUSTER-3535
Last Closed: 2013-02-04 10:37:06 UTC
Bug Depends On: 765267

Description Vidya Sakar 2012-08-15 01:34:50 UTC
+++ This bug was initially created as a clone of Bug #765267 +++

Hi Ramana,

To set the limit on the directory "dheeraj", which is under the mount point "/testgluster", run

gluster volume quota vol-name limit-usage /dheeraj 5GB

The point is that for the Gluster quota, directory paths are given relative to the mount point. Hope this helps.

Junaid
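
[Editorial note] For reference, a minimal sequence for setting up a directory quota, as a sketch using the hypothetical volume name vol-name (the path passed to limit-usage is relative to the volume root, not the client mount point):

# gluster volume quota vol-name enable
# gluster volume quota vol-name limit-usage /dheeraj 5GB
# gluster volume quota vol-name list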

--- Additional comment from crlindiadc on 2011-09-12 03:39:59 EDT ---

Hi Junaid, 

   Thank you for the immediate reply. 

As suggested, we set the usage limit using the following command:

gluster volume quota testfs01 limit-usage /dheeraj 5GB

The volume info command output also lists it, as attached:
~]# gluster volume info

Volume Name: testfs01
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gs02-ib:/data/gluster/brick-2
Brick2: gs01-ib:/data/gluster/brick-1
Options Reconfigured:
features.quota: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
features.limit-usage: /testgluster/dheeraj:5GB

We were able to list the gluster volume usage 
# gluster volume quota testfs01 list
        path              limit_set          size
--------------------------------------------------------------------------------
/dheeraj             5368709120           6091304960

After that, we copied some data to the specified directory, but the quota does not restrict the writes.

The above output shows the same.

Regards,
Ramana Kasaraneni.

--- Additional comment from crlindiadc on 2011-09-12 03:44:37 EDT ---

Hi, 

We are having a strange problem while using quotas in GlusterFS. We
are using Gluster 3.2 and have configured Gluster on the storage nodes.

# rpm -qa |grep glust
glusterfs-fuse-3.2.0-1
glusterfs-core-3.2.0-1
glusterfs-rdma-3.2.0-1

The glusterFS configuration looks as follows:

[root@gs01 ~]# gluster volume info

Volume Name: testfs01
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gs02-ib:/data/gluster/brick-2
Brick2: gs01-ib:/data/gluster/brick-1
Options Reconfigured:
features.quota: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
features.limit-usage: /testgluster/dheeraj:5GB

We have configured the Gluster quota at the directory level, but it does not list
the usage for the quota-enabled directory.

# gluster volume quota testfs01 list
        path              limit_set          size
---------------------------------------------------------------------------
/testgluster/dheeraj        5GB

We tried copying more than 5 GB of data to the specified folder, and we were
still able to write more data to the file system.

Request you to provide some input on this.

On the client nodes, we have mounted the Gluster file system:

[root@n0 testgluster]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p5      39G   12G   25G  32% /
tmpfs                 7.9G  380K  7.9G   1% /dev/shm
/dev/cciss/c0d0p1     281M   44M  223M  17% /boot
/dev/cciss/c0d0p2      21G   14G  6.6G  67% /var
10.1.32.33@o2ib0:/lfs1
                      135G  112G   17G  88% /lustre
10.1.60.1@o2ib0:/lfs01
                       68G   22G   43G  35% /testlustre
/etc/glusterfs/testfs01.vol
                      135G   28G  101G  22% /testgluster


We want to implement quotas on the Gluster file system, so we request your help
with this.

Regards,
Ramana Kasaraneni.

--- Additional comment from junaid on 2011-09-12 03:56:05 EDT ---

Hi Ramana,

The problem might be caused by the way the client is mounted:

/etc/glusterfs/testfs01.vol
                      135G   28G  101G  22% /testgluster

Please use 

mount -t glusterfs server-name:/vol-name mount-point

If you mount directly from the volfile, any further changes to the volume options made through the gluster CLI after the client mount will not be visible to the client, so I suspect the changes are not reaching the client. Also, the output of gluster volume info is inconsistent for the option "features.limit-usage".

It should have been 
 
features.limit-usage: /dheeraj:5GB

> features.quota: on
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> features.limit-usage: /testgluster/dheeraj:5GB

Can you please check the volume info output again and report it here?

Junaid
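
[Editorial note] If the stale mount-relative entry needs to be cleared and re-set with a volume-root-relative path, a sketch (assuming the quota remove subcommand behaves as documented for this release) would be:

# gluster volume quota testfs01 remove /testgluster/dheeraj
# gluster volume quota testfs01 limit-usage /dheeraj 5GB
# gluster volume info testfs01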

--- Additional comment from crlindiadc on 2011-09-12 04:34:50 EDT ---

Hi Junaid, 

As suggested, we unmounted the file system and remounted it using the following command:

]# mount -t glusterfs gs01-ib:/testfs01 /testgluster/

and the mounted file system looks like this:

]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p5      39G   12G   25G  32% /
tmpfs                 7.9G  380K  7.9G   1% /dev/shm
/dev/cciss/c0d0p1     281M   44M  223M  17% /boot
/dev/cciss/c0d0p2      21G   14G  6.6G  67% /var
10.1.32.33@o2ib0:/lfs1
                      135G  112G   17G  88% /lustre
10.1.60.1@o2ib0:/lfs01
                       68G   22G   43G  35% /testlustre
gs01-ib:/testfs01     135G     0  135G   0% /testgluster

With the above mount, the usage information shown is wrong; the actual used space is not reflected.

# cd /testgluster/
[root@n0 testgluster]# ls
a  createperl  dheeraj  iyard01
[root@n0 testgluster]# du -sh *
4.0K    a
4.0K    createperl
5.8G    dheeraj
22G     iyard01

The actual usage on this file system is close to 29 GB, but it shows 0% used.

Even after changing the mounting mechanism, the quota does not work.

# gluster volume quota testfs01 list
        path              limit_set          size
--------------------------------------------------------------------------------
/dheeraj             5368709120           6132764672

The volume info also shows the correct information:

# gluster volume info

Volume Name: testfs01
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gs02-ib:/data/gluster/brick-2
Brick2: gs01-ib:/data/gluster/brick-1
Options Reconfigured:
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
features.quota: on
features.limit-usage: /dheeraj:5GB

We suspect the issue is something else.

Regards,
Ramana Kasaraneni.

--- Additional comment from junaid on 2011-09-12 05:03:20 EDT ---

> [root@n0 testgluster]# du -sh *
> 4.0K    a
> 4.0K    createperl
> 5.8G    dheeraj
> 22G     iyard01
> 
> The actual usage on this file system is near to 29 GB, but it shows has 0%
> used.
> 
> Even after changing the mounting mechanism, the quota does not work.
> 
> # gluster volume quota testfs01 list
>         path              limit_set          size
> --------------------------------------------------------------------------------
> /dheeraj             5368709120           6132764672
> 

Can you please check the behavior using 3.2.3 (the latest release)? Some quota bugs have been fixed in it. Both client and server must be updated to the new release.
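
[Editorial note] To confirm that client and server nodes actually end up on the same release after the upgrade, something like the following (assuming the RPM-based installation shown earlier) can be run on every node:

# rpm -qa | grep glusterfs
# glusterfs --version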

--- Additional comment from amarts on 2011-09-19 23:24:43 EDT ---

Need to get information on whether upgrading to the new version fixed the issues.

--- Additional comment from admin.edu on 2012-04-05 08:12:37 EDT ---

Quotas still don't work in 3.2.5; I haven't tried 3.2.6. The volume pirdist is a 5-brick distributed cluster and pirstripe is a 5-brick striped cluster.

I set the limit for myself to 10GB. First it indicated the size was 0, when there were already 1.2GB of files in the dir. Then I was able to create a single 20GB file in it without the write being stopped at 20GB.

gluster> volume quota pirdist list
path limit_set size
----------------------------------------------------------------------------------
/user 10GB 20.0GB

I thought maybe there was just an issue with me writing a single huge file and expecting it to catch it, so I started writing another 5GB file and it didn't prevent that write either :

gluster> volume quota pirdist list
path limit_set size
----------------------------------------------------------------------------------
/user 10GB 23.6GB

Same issue on pirstripe :

gluster> volume quota pirstripe list
path limit_set size
----------------------------------------------------------------------------------
/user 5GB 20.0GB

...except here I actually wrote a 10GB file, so it's reporting the size incorrectly, but not in the same way as it was for a "du -sh" where it was reporting the total size as size * 5 (# of storage bricks). The incorrect size report is mentioned in another bug.

--- Additional comment from admin.edu on 2012-04-05 18:33:39 EDT ---

OK, it looks like the reason the quotas weren't working was that one of the bricks wasn't properly connected to the cluster. However, I have another strange issue: the quota thinks there is actually 20GB of data in the directory even though there is ~2MB, so it won't let me do any writes. How does the quota determine how much storage a user is actually using? How do I reset this value? I tried removing the quota and adding it again; that didn't help. I also tried disabling and re-enabling the quotas, etc.

gluster> volume quota pirstripe list
        path              limit_set          size
----------------------------------------------------------------------------------
/user                    2GB               20.0GB

du -sh from the fuse mount :

% du -sh user
2.0M    user

^ I also did a du -sh on the directory at the file-system layer across all the bricks, and it adds up to approximately 2M.

again from the fuse mount :

% du -sh --apparent-size user
192K    user
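
[Editorial note] For what it's worth, the quota size shown by "list" comes from accounting that GlusterFS keeps in extended attributes on the brick directories, not from a fresh du, so a stale value can linger there. One way to inspect those attributes directly on a brick (a sketch; /path/to/brick is a placeholder for the actual brick directory, and the exact attribute names depend on the release):

# getfattr -d -m 'trusted.glusterfs.quota' -e hex /path/to/brick/user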

Comment 2 Amar Tumballi 2013-02-04 10:37:06 UTC
The source bug was closed as WORKSFORME. It would be great to test this again with the 3.4.0qa builds and reopen if the issue persists.