Bug 1172348 - new installation of glusterfs3.5.3-1; quota not displayed to client.
Summary: new installation of glusterfs3.5.3-1; quota not displayed to client.
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: 3.5.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Manikandan
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-12-09 21:40 UTC by Khoi Mai
Modified: 2016-09-20 04:29 UTC (History)
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-17 16:24:25 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments (Terms of Use)
glusterfs3.5.3 server log (520.00 KB, application/x-tar)
2014-12-09 21:40 UTC, Khoi Mai
fuse-client glusterfs3.5.3 log file (16.94 KB, text/plain)
2014-12-09 21:42 UTC, Khoi Mai

Description Khoi Mai 2014-12-09 21:40:06 UTC
Created attachment 966508 [details]
glusterfs3.5.3 server log

Description of problem:  Fresh installation of glusterfs 3.5.3-1 on a 2-node replicated setup.  Quota is enabled, but the configured limit does not appear to be presented to the fuse-client.

On the storage node where I enabled the quota, I tried to list it in the CLI and the prompt never returns.  That node (the one I initiated the command on) has a local mount of the volume; the other server does not have this mount.

localhost:test       1073217536  33152 1073184384   1% /var/run/gluster/test


I've tried restarting glusterd with no change.  When I killed the glusterfs process that was still writing to the locally mounted filesystem, the mount was left in a transport-disconnected state, and now I cannot remove it even though nobody is using it.

[root@omhq1436 glusterfs]# ll /var/run/gluster
ls: cannot access /var/run/gluster/test: Transport endpoint is not connected
total 4
d????????? ? ?    ?    ?            ? test
-rw-r--r-- 1 root root 6 Dec  9 15:01 test.pid
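(Editor's aside, not verified on these hosts: a stale FUSE mount in this state can often be cleared without a reboot. A minimal sketch, assuming the mount is genuinely unused:)

```shell
# "Transport endpoint is not connected" usually means the glusterfs
# client process backing the mount is gone. A lazy unmount detaches
# the mount point immediately and cleans up once it is no longer busy.
umount -l /var/run/gluster/test

# The directory should then list normally again.
ls -l /var/run/gluster
```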

My fuse-client/gNFS mounts still show the size of the brick rather than the quota hard limit.


I also ran a dd in my fuse-client mount, and the quota limit was enforced there:

[root@vx1ac9 test]# dd if=/dev/zero of=filename bs=1024 count=2GB
dd: writing `filename': Disk quota exceeded
dd: closing output file `filename': Disk quota exceeded

[root@omdx1445 test]# gluster volume info

Volume Name: test
Type: Replicate
Volume ID: 24f22448-cf5b-4941-9c89-6159ea847352
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: omhq1436:/static/test
Brick2: omdx1445:/static/test
Options Reconfigured:
features.quota: on
[root@omdx1445 test]# gluster volume quota test list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%       1.0GB  0Bytes


Version-Release number of selected component (if applicable):
storage:
[root@omdx1445 test]# uname -a
Linux omdx1445 2.6.32-504.1.3.el6.x86_64 #1 SMP Fri Oct 31 11:37:10 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@omdx1445 test]# rpm -qa --last|grep glusterfs
glusterfs-server-3.5.3-1.el6.x86_64           Tue 09 Dec 2014 01:51:29 PM CST
glusterfs-fuse-3.5.3-1.el6.x86_64             Tue 09 Dec 2014 01:51:29 PM CST
glusterfs-cli-3.5.3-1.el6.x86_64              Tue 09 Dec 2014 01:51:29 PM CST
glusterfs-api-3.5.3-1.el6.x86_64              Tue 09 Dec 2014 01:51:29 PM CST
glusterfs-libs-3.5.3-1.el6.x86_64             Tue 09 Dec 2014 01:51:28 PM CST
glusterfs-3.5.3-1.el6.x86_64                  Tue 09 Dec 2014 01:51:28 PM CST


fuse-client:
[root@vx1ac9 test]# uname -a
Linux vx1ac9 2.6.32-504.1.3.el6.x86_64 #1 SMP Fri Oct 31 11:37:10 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@vx1ac9 test]# rpm -qa --last|grep glusterfs
glusterfs-fuse-3.5.3-1.el6.x86_64             Fri 05 Dec 2014 06:12:51 PM CST
glusterfs-3.5.3-1.el6.x86_64                  Fri 05 Dec 2014 06:12:50 PM CST
glusterfs-libs-3.5.3-1.el6.x86_64             Fri 05 Dec 2014 06:12:49 PM CST



How reproducible:
Every time: a df on the client never shows the quota limit set for that volume.

Steps to Reproduce:
1. Create a gluster volume, enable quota, set a quota hard limit, and mount the volume on a fuse-client.
2. Run df on the client.
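As a sketch, the steps above as CLI commands (hostnames and brick paths taken from the volume info in this report; the client mount point is illustrative):

```shell
# On the storage nodes: create, start, and quota-limit the volume.
gluster volume create test replica 2 omhq1436:/static/test omdx1445:/static/test
gluster volume start test
gluster volume quota test enable
gluster volume quota test limit-usage / 1GB

# On the client: mount over FUSE and check the reported size.
mount -t glusterfs omhq1436:/test /mnt/test
df -h /mnt/test   # shows the brick size (~1TB), not the 1GB quota
```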


Actual results:
the client df displays 1TB rather than 1GB

Expected results:
1GB

Additional info:
I've attached the /var/log/glusterfs logs to this ticket.

Comment 1 Khoi Mai 2014-12-09 21:42:04 UTC
Created attachment 966511 [details]
fuse-client glusterfs3.5.3 log file

Comment 2 Khoi Mai 2014-12-09 21:51:55 UTC
I just found this: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Displaying_Quota_Limit_Information.html

I needed to set quota-deem-statfs on the volume to see the appropriate df output. BUT the local auxiliary mount was left behind, and now I cannot remove /var/run/gluster/VOLUMENAME because I killed the glusterfs process.

In addition, in my 2-node storage, after I killed the glusterfs process on the server I originally enabled the quota on, I am unable to run "quota list" on that node, but I am able to on the other node.

[omhq1436]# gluster volume quota test list
quota: Could not start quota auxiliary mount


[omdx1445 ~]# gluster volume quota test list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%       1.0GB  0Bytes
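The fix from the linked documentation comes down to a single volume option; a sketch using the volume name from this report:

```shell
# With quota-deem-statfs on, df on clients reports sizes against the
# quota hard limit rather than the raw brick size.
gluster volume set test quota-deem-statfs on
```

After setting it, df on the fuse-client should report the 1.0G quota for this volume.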

Comment 3 Khoi Mai 2014-12-09 21:56:05 UTC
Rebooting omhq1436 got rid of the corrupted directory, and the CLI listing command now works.

omhq1436 gluster]# ls -lrt
total 4
drwxr-xr-x 2 root root 4096 Dec  9 15:01 test
[root@omhq1436 gluster]#

[omhq1436 gluster]# gluster volume quota test list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                          1.0GB       80%       1.0GB  0Bytes

Comment 4 Joe Julian 2014-12-09 23:29:32 UTC
Needing to set "quota-deem-statfs=on" is a change in the default behavior. The default value of "quota-deem-statfs" should be "on" to maintain a consistent user experience with prior versions.

Comment 5 Khoi Mai 2014-12-11 00:47:04 UTC
It appears that my upgrade from glusterfs-3.4.3 to glusterfs-3.5.3 reset all the quota limits on all my volumes.

I say that because when I listed the quota limits I previously had, all the storage nodes (4 in total) replied with "no quota limit for this volume".

I had 12 volumes for which I had to recreate the quota limit-usage.
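Recreating limits across many volumes can be scripted; a hypothetical sketch — the volume list and the 5GB figure are placeholders, not the limits actually lost in the upgrade:

```shell
# Hypothetical: re-apply a root-level hard limit to each affected volume.
# Substitute the real volume names and the limits you had before upgrading.
for vol in dyn_job dyn_avr dyn_ctl; do
    gluster volume quota "$vol" limit-usage / 5GB
done
```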

After recreating the limits, there was a localhost mount for all my volumes:

localhost:dyn_job     5.0G   11M  5.0G   1% /var/run/gluster/dyn_job
localhost:dyn_avr     5.0G     0  5.0G   0% /var/run/gluster/dyn_avr
localhost:dyn_ctl     5.0G  1.5G  3.6G  29% /var/run/gluster/dyn_ctl
localhost:dyn_ert      10G  5.0G  5.1G  50% /var/run/gluster/dyn_ert
localhost:dyn_wls     5.0G  2.7G  2.4G  53% /var/run/gluster/dyn_wls
localhost:dyn_mech    5.0G   69M  5.0G   2% /var/run/gluster/dyn_mech
localhost:dyn_admin   5.0G  672M  4.4G  14% /var/run/gluster/dyn_admin
localhost:dyn_cfu     5.0G  3.0G  2.1G  60% /var/run/gluster/dyn_cfu
localhost:devstatic   1.5T  818G  719G  54% /var/run/gluster/devstatic
localhost:dyn_eng     5.0G  896K  5.0G   1% /var/run/gluster/dyn_eng

I know that if I restart glusterd, nothing changes.  If I kill all the gluster processes, the mounts error with "Transport endpoint is not connected".

So I decided to reboot this server, the only one that had the local mounts, and after the reboot they were all gone.

I would expect in an upgrade my quota limits would be preserved.

Comment 6 Niels de Vos 2014-12-16 12:35:11 UTC
Two issues have been noted here:
1. change in default behaviour of "df" output related to quota-deem-statfs
2. upgrade from 3.4 to 3.5 caused loss of quota configuration

I think (2) should have been resolved with this:
- https://github.com/gluster/glusterfs/blob/v3.5.2/doc/upgrade/quota-upgrade-steps.md

So, asking the quota developers to reconsider the default of quota-deem-statfs. Changing the default might be difficult now, but the value used in the previous version should surely be kept?

Comment 7 Niels de Vos 2016-06-17 16:24:25 UTC
This bug is being closed because the 3.5 release is marked End-Of-Life. There will be no further updates for this version. If you are still facing this issue on a more current release, please open a new bug against a version that still receives bugfixes.

