Bug 800649
Summary: | volume quota not working | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Amaya Rosa Gil Pippino <amaya> |
Component: | glusterfs | Assignee: | Junaid <junaid> |
Status: | CLOSED DUPLICATE | QA Contact: | Saurabh <saujain> |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | 1.0 | CC: | admin, amarts, amaya, gluster-bugs, junaid, mzywusko, pablo.iranzo, sdharane, sghai, vagarwal, vbellur, vinaraya |
Target Milestone: | Beta | ||
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2012-06-13 08:28:54 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Amaya Rosa Gil Pippino
2012-03-06 21:00:32 UTC
There is a typo in the description: the quota is set to 200MB, not 500MB:

```
[root@gluster2 ~]# gluster volume quota vol_gl_shared list
        path              limit_set          size
----------------------------------------------------------------------------------
/                          200MB             4.0KB
```

Also tried with a regular user, to rule out that only root is affected:

```
[root@gluster3 vol_geo]# setfacl -m u:amaya:rwx /gl_vol
[root@gluster3 vol_geo]# getfacl /gl_vol
getfacl: Removing leading '/' from absolute path names
# file: gl_vol
# owner: root
# group: root
user::rwx
user:amaya:rwx
group::r-x
mask::rwx
other::rw-
[root@gluster3 vol_geo]# su - amaya
[amaya@gluster3 ~]$ dd if=/dev/urandom of=/gl_vol/kk bs=8192 count=262144
^C35963+0 records in
35963+0 records out
294608896 bytes (295 MB) copied, 53.887 s, 5.5 MB/s
```

I stopped dd once the quota had been exceeded:

```
[root@gluster1 ~]# gluster volume info vol_gl_shared

Volume Name: vol_gl_shared
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/gl_brick
Brick2: gluster2:/gl_brick
Options Reconfigured:
features.limit-usage: /:200MB
features.quota-timeout: 5
features.quota: on
server.allow-insecure: off
cluster.min-free-disk: 15
auth.allow: 10.*
```

---

There were multiple quota-related fixes through the Community GlusterFS bugs; this should mostly be fixed by the qa29 release tarball.

---

Not sure if this is the right place or if I should open a new bug, but the behaviour has now changed: when trying to create a volume with bricks on different servers but with the same directory name (just like the ones used in this bug's data), an error appears. This is no longer possible; directory names must now be different.

```
[root@gluster-srv1 ~]# gluster volume create vol_gl_shared gluster-srv1:/gl_brick
Creation of volume vol_gl_shared has been successful. Please start the volume to access data.
```
```
[root@gluster-srv1 ~]# gluster volume start vol_gl_shared
Starting volume vol_gl_shared has been successful
[root@gluster-srv1 ~]# gluster peer probe gluster-srv2
Probe successful
[root@gluster-srv1 ~]# gluster volume add-brick vol_gl_shared gluster-srv2:/gl_brick
Brick: gluster-srv2:/gl_brick already in use
```

Is this a bug, or the new way to configure RHSSA?

---

Verified this issue with the following latest RPMs from brew:

```
glusterfs-fuse-3.3.0qa45-1.el6.x86_64
glusterfs-3.3.0qa45-1.el6.x86_64
glusterfs-server-3.3.0qa45-1.el6.x86_64
glusterfs-rdma-3.3.0qa45-1.el6.x86_64
org.apache.hadoop.fs.glusterfs-glusterfs-0.20.2_0.1-1.noarch
glusterfs-geo-replication-3.3.0qa45-1.el6.x86_64
```

The reported issue is reproducible: I can write to the specified volume even after exceeding the quota limit.

Enabled quota on sghai_vol:

```
[root@dhcp201-154 glusterfs]# gluster volume quota sghai_vol enable
Enabling quota has been successful
```

Set the quota limit to 512MB:

```
[root@dhcp201-154 glusterfs]# gluster volume quota sghai_vol limit-usage /home 512MB
limit set on /home
```

Volume info:

```
[root@dhcp201-154 brick2]# gluster volume info sghai_vol

Volume Name: sghai_vol
Type: Distribute
Volume ID: d87e0965-cb29-4ccb-891f-59e65d768944
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: dhcp201-154.englab.pnq.redhat.com:/home/brick1
Brick2: dhcp201-154.englab.pnq.redhat.com:/home/brick2
Options Reconfigured:
features.limit-usage: /home:512MB
features.quota: on

[root@dhcp201-154 glusterfs]# gluster volume quota sghai_vol list
        path              limit_set          size
----------------------------------------------------------------------------------
/home                      512MB
```

On the client, mounted the volume:

```
[root@dhcp201-208 ~]# mount -t glusterfs 10.65.201.154:/sghai_vol /mnt
```

Created a file exceeding the quota limit of 512MB:

```
[root@dhcp201-208 mnt]# dd if=/dev/urandom of=/mnt/kk bs=8192 count=262144
^C74091+1 records in
74091+0 records
```
```
out 606953472 bytes (607 MB) copied, 213.222 s, 2.8 MB/s
[root@dhcp201-208 mnt]# ll -h
total 536M
-rw-r--r--. 1 root root 536M May 31  2012 kk
[root@dhcp201-208 mnt]#
```

No error appears on either the server or the client.

Mounted glusterfs on the client:

```
[root@dhcp201-208 mnt]# mount
/dev/mapper/vg_dhcp201208-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/vda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
10.65.201.154:/sghai_vol on /mnt type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)   >>>> glusterfs, mounted sghai_vol on /mnt
[root@dhcp201-208 mnt]#
```

---

While trying to verify this bug with 3.3.0qa45 (to check that the limits do not get crossed), I mounted the xfs bricks with the option allocsize=4096 on all the servers, set a quota limit of 1GB on the volume root (/), and then tried to create a file of size 1GB from the glusterfs mount. The issue is that the file that gets created is only around 500MB, yet the "disk quota exceeded" error is reported. At that time "quota list" displays the size field as "1GB", though after some time it displays the correct size. To me this is an issue: the disk quota limit gets hit roughly halfway to the limit actually set.

Logs:

```
[root@QA-31 glfs]# dd if=/dev/zero of=f.1 bs=1024 count=1048576
dd: writing `f.1': Disk quota exceeded
dd: closing output file `f.1': Disk quota exceeded
[root@QA-31 glfs]# ls -l
total 524456
-rw-r--r--.
```
```
1 root root 537038848 May 31 05:37 f.1
[root@QA-31 glfs]#

[root@RHS-71 ~]# gluster volume quota dist-rep list
        path              limit_set          size
----------------------------------------------------------------------------------
/                          1GB               1.0GB
```

After a delay:

```
[root@RHS-71 ~]# gluster volume quota dist-rep list
        path              limit_set          size
----------------------------------------------------------------------------------
/                          1GB               512.2MB
```

---

Sachin, about comment #6: when you set the quota on '/home', you need to create files under '/mnt/home' if glusterfs is mounted on /mnt. Can you verify the bug with that in mind?

---

Moving to ON_QA, to be verified with Amar's comment #8 in mind.

---

Creating a 2GB file from the fuse mount, with a quota limit of 1GB set on the root of the volume: the file gets created using dd, but its size is only around 512MB, and quota still returns the "quota limit reached" message, whereas afterwards it starts showing the correct value for the size occupied. The full file not getting created is an issue over the fuse mount.

```
[root@QA-31 glfs]# rm -rf *
[root@QA-31 glfs]# dd if=/dev/zero of=f.1 bs=1048576 count=2048
dd: writing `f.1': Disk quota exceeded
dd: closing output file `f.1': Disk quota exceeded
[root@QA-31 glfs]# du -h .
513M    .

[root@RHS-71 ~]# gluster volume quota dist-rep1 list
        path              limit_set          size
----------------------------------------------------------------------------------
/                          1GB               0Bytes
[root@RHS-71 ~]# gluster volume quota dist-rep1 list
        path              limit_set          size
----------------------------------------------------------------------------------
/                          1GB               1.0GB
[root@RHS-71 ~]# gluster volume quota dist-rep1 list
        path              limit_set          size
----------------------------------------------------------------------------------
/                          1GB               512.6MB
```

Creating a file from the nfs mount: on nfs the same issue is not seen. As above for fuse, a file of size 2GB while 1GB is the quota limit set on the volume:
```
[root@QA-31 nfs-test2]# rm -rf *
[root@QA-31 nfs-test2]# dd if=/dev/zero of=f.1 bs=1048576 count=2048
dd: writing `f.1': Input/output error
706+0 records in
705+0 records out
739246080 bytes (739 MB) copied, 9.69904 s, 76.2 MB/s
[root@QA-31 nfs-test2]# du -h .
1.2G    .
[root@QA-31 nfs-test2]#

[root@RHS-71 ~]# gluster volume quota dist-rep1 list
        path              limit_set          size
----------------------------------------------------------------------------------
/                          1GB               1.2GB
```

The same is the case for the fuse mount even if the xfs mount is done with allocsize=4096.

---

I ran the dd command as mentioned by Saurabh. With ext4 as the backend, this problem is not seen. But on xfs, I see that at some point in time the marker translator on the server side got 2098184 blocks allocated to the file, even though the file size from stat is 537522176 bytes. This can be seen below, in ia_blocks and ia_size respectively:

```
(gdb) p *buf
$3 = {ia_ino = 12404037568965143247, ia_gfid = "\342%\227t5%IÓŹ#\376s\026\304j", <incomplete sequence \317>,
  ia_dev = 64513, ia_type = IA_IFREG, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000',
    owner = {read = 1 '\001', write = 1 '\001', exec = 0 '\000'},
    group = {read = 1 '\001', write = 0 '\000', exec = 0 '\000'},
    other = {read = 1 '\001', write = 0 '\000', exec = 0 '\000'}},
  ia_nlink = 1, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 537522176, ia_blksize = 4096,
  ia_blocks = 2098184, ia_atime = 1339502522, ia_atime_nsec = 26682256, ia_mtime = 1339502525,
  ia_mtime_nsec = 876682676, ia_ctime = 1339502525, ia_ctime_nsec = 876682676}
```

So, what I can conclude for now is that XFS is pre-allocating the blocks, and quota uses blocks*512 to calculate the size of the file. Hence the error.

*** This bug has been marked as a duplicate of bug 765478 ***
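The numbers in the gdb dump can be checked directly. A minimal sketch (Python, not part of the original report) recomputing the quota-accounted size from the ia_size and ia_blocks values above, under the stated assumption that quota accounts usage as blocks * 512 bytes; it shows why the 1GB limit trips while the file's apparent size is only about 512MB:

```python
# Values taken from the iatt struct in the gdb dump above.
ia_size = 537522176    # apparent file size in bytes, as reported by stat
ia_blocks = 2098184    # 512-byte blocks allocated by XFS (preallocation inflates this)

apparent_mib = ia_size / 2**20   # what `ls`/`du` and the later `quota list` show
accounted = ia_blocks * 512      # what the quota/marker translator charges against the limit
limit = 1 * 2**30                # the 1GB quota limit set on the volume root

print(round(apparent_mib, 1))    # 512.6  (matches the "512.6MB" in the quota list output)
print(accounted)                 # 1074270208, i.e. just over 1 GiB
print(accounted > limit)         # True: the limit trips at roughly half the apparent size
```

Once XFS trims the speculative preallocation, ia_blocks drops back and the accounted size converges on the apparent size, which is consistent with `quota list` showing "1GB" at first and the correct value after a delay.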