Description of problem:
When the quota limit is about to be reached or has been reached, writes are not handled properly: I am seeing I/O errors, zero-byte files, and truncated files even after a "Disk quota exceeded" message is received.

Version-Release number of selected component (if applicable):
glusterfs-server-3.4.0.20rhsquota1-1.el6.x86_64
glusterfs-fuse-3.4.0.20rhsquota1-1.el6.x86_64
glusterfs-3.4.0.20rhsquota1-1.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a volume of 6x2 type and start it.
2. Enable quota on the volume.
3. Mount the volume over NFS.
4. Create a directory inside the volume.
5. Set a limit on the directory, let's say 1GB.
6. Start creating data inside the directory in a for loop, using the dd command to write files of 1MB each.

Actual results:
When the quota is about to be reached:

1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.354007 s, 3.0 MB/s
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.351123 s, 3.0 MB/s
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.29281 s, 3.6 MB/s
dd: closing output file `1020.1376981546': Input/output error
dd: closing output file `1021.1376981547': Input/output error
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.285709 s, 3.7 MB/s
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.305432 s, 3.4 MB/s
dd: opening `1024.1376981548': Disk quota exceeded
dd: closing output file `1025.1376981548': Disk quota exceeded
dd: opening `1026.1376981548': Disk quota exceeded
dd: opening `1027.1376981548': Disk quota exceeded
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.357487 s, 2.9 MB/s
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.36181 s, 2.9 MB/s
dd: closing output file `1030.1376981549': Input/output error

Further:

[root@rhsauto036 dir3]# ls -l 1022*
-rw-r--r--. 1 root root 1048576 Aug 20 02:52 1022.1376981547
[root@rhsauto036 dir3]# ls -l 1023*
-rw-r--r--. 1 root root 1048576 Aug 20 02:52 1023.1376981547
[root@rhsauto036 dir3]# ls -l 1024*
ls: cannot access 1024*: No such file or directory
[root@rhsauto036 dir3]# ls -l 1025*
-rw-r--r--. 1 root root 0 Aug 20 02:52 1025.1376981548
[root@rhsauto036 dir3]# ls -l 1026*
ls: cannot access 1026*: No such file or directory
[root@rhsauto036 dir3]# ls -l 1027*
ls: cannot access 1027*: No such file or directory
[root@rhsauto036 dir3]# ls -l 1028*
-rw-r--r--. 1 root root 1048576 Aug 20 02:52 1028.1376981548
[root@rhsauto036 dir3]# ls -l 1029*
-rw-r--r--. 1 root root 1048576 Aug 20 02:52 1029.1376981548
[root@rhsauto036 dir3]# ls -l 1030*
-rw-r--r--. 1 root root 458752 Aug 20 02:52 1030.1376981549
[root@rhsauto036 dir3]# ls -l 1021*
-rw-r--r--. 1 root root 458752 Aug 20 02:52 1021.1376981547
[root@rhsauto036 dir3]# ls -l 1020*
-rw-r--r--. 1 root root 0 Aug 20 02:52 1020.1376981546
[root@rhsauto036 dir3]# ls -l 1019*
-rw-r--r--. 1 root root 1048576 Aug 20 02:52 1019.1376981546

Expected results:
If the disk quota is exceeded, it should not result in zero-byte files and truncated files.

Additional info:
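For reference, the write loop from step 6 can be sketched as below. This is a minimal sketch; the `write_files` helper name, the mount path, and the file count are illustrative placeholders, not taken from the original report.

```shell
#!/bin/sh
# write_files DIR COUNT: write COUNT files of 1 MB each into DIR with dd,
# mirroring step 6 of the reproduction. File names follow the
# <index>.<epoch> pattern seen in the dd output above.
write_files() {
    dir=$1
    count=$2
    i=0
    while [ "$i" -lt "$count" ]; do
        # 1024 blocks of 1 KB = 1 MB per file
        dd if=/dev/zero of="$dir/$i.$(date +%s)" bs=1k count=1024 2>/dev/null
        i=$((i + 1))
    done
}

# In the report, DIR would be the quota-limited directory on the NFS mount
# (e.g. /mnt/nfs-test/dir3) and COUNT large enough (e.g. 1100) to cross
# the 1 GB limit.
```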
Correcting the typo in Expected results: if the disk quota is exceeded, it should not result in zero-byte files and truncated files.
Saurabh, I am not able to get the sos reports; I am getting permission denied errors:

You don't have permission to access /sosreports/998914/sosreport-rhsauto032-20130821013955-eca0.tar.xz on this server.

Can you please tell me what the volume configuration is?

regards,
Raghavendra.
Please ignore my previous comments. Those were for a different bug (bug #998914).
No longer seen in v3.4.0.30rhs. Please confirm.
When the size of the directory is equal to the limit set on it, any further writes will fail. However, for operations that create new dentries - create, symlink, mknod - we consider delta (which will be added to the directory size) to be zero. This allows the fop to pass quota checks. In the cbk, marker updates the size of the directory to account for iabuf.ia_blocks. Had ia_blocks been non-zero, subsequent create operations wouldn't pass quota limit checks in the enforcer (since size of directory + delta > limit). However, because of a regression introduced in the workaround for xfs pre-allocation (to not affect quota wildly), iabuf.ia_blocks in the create cbk is set to zero. This results in any number of create operations passing quota limit checks.
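The enforcer/marker interaction described above can be modeled with a small sketch. The names (`dir_quota`, `quota_check`, `marker_account`) are illustrative, not the actual glusterfs symbols; the point is that with delta zero for creates and ia_blocks zeroed in the cbk, the accounted size never grows, so creates keep passing.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the quota state kept per directory. */
struct dir_quota {
    uint64_t size;  /* accounted size of the directory, in bytes */
    uint64_t limit; /* hard limit, in bytes */
};

/* Enforcer check: a fop passes if size + delta stays within the limit.
 * For dentry-creating fops (create/symlink/mknod) delta is 0, so at the
 * boundary (size == limit) the check reduces to size <= limit and passes. */
int quota_check(const struct dir_quota *q, uint64_t delta)
{
    return q->size + delta <= q->limit;
}

/* Marker cbk: account the blocks reported for the new inode. The
 * regression set ia_blocks to 0 in the create cbk, making this update a
 * no-op, so the directory size never grew from creates. */
void marker_account(struct dir_quota *q, uint64_t ia_blocks)
{
    q->size += ia_blocks * 512; /* ia_blocks counts 512-byte blocks */
}
```

With a correct non-zero ia_blocks, the first create at the boundary still passes, but the accounting pushes size past the limit and the next create is rejected; with the buggy zero ia_blocks, every subsequent create passes.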
Is there a resolution or a patch available for this one upstream yet?
*** Bug 989753 has been marked as a duplicate of this bug. ***
Downstream patches submitted at:
rhs-2.1: https://code.engineering.redhat.com/gerrit/#/c/13654/
rhs-2.1-u1: https://code.engineering.redhat.com/gerrit/#/c/13667/
upstream patch: http://review.gluster.org/6035

regards,
Raghavendra
bug 1016419 tracks accounting space consumed for storing directory entries.
Used a similar scenario as mentioned in the Description section and found a file of zero size this time:

-rw-rw-r--. 1 qa1 qa1 1048576 Oct 24 2013 file1050
-rw-rw-r--. 1 qa1 qa1       0 Oct 24 2013 file1053
-rw-rw-r--. 1 qa1 qa1 1048576 Oct 24 2013 file106

Quota list info after I/O finished:

[root@quota1 ~]# gluster volume quota dist-rep3 list /qa1/dir1-data
        Path         Hard-limit  Soft-limit   Used    Available
--------------------------------------------------------------------------------
/qa1/dir1-data          1.0GB       80%       1.0GB    0Bytes

Other info:

[qa1@rhsauto005 dir1-data]$ du -sh .
1.1G    .
[qa1@rhsauto005 dir1-data]$ pwd
/mnt/nfs-test/qa1/dir1-data
[qa1@rhsauto005 dir1-data]$ mount | grep dist-rep3
10.70.42.186:/dist-rep3 on /mnt/nfs-test type nfs (rw,addr=10.70.42.186)

Found on glusterfs-3.4.0.36rhs.
Hi Du,

In yesterday's call it was noted that this bug would not be fixed for Big Bend U1. Could you please review the doc text I have entered? Once you approve, I'll add it as a known issue in the Release Notes.
Moving the known issue to the Doc team, to be documented in the release notes for U1.
I've documented this as a known issue in the BB U1 Release Notes. Here is the link: http://documentation-devel.engineering.redhat.com/docs/en-US/Red_Hat_Storage/2.1/html/2.1_Update_1_Release_Notes/chap-Documentation-2.1_Update_1_Release_Notes-Known_Issues.html
This issue is fixed in RHS-3.0. In 2.1 it will remain a known issue. Closing the bug as fixed in a future release.