Bug 981553 - quota: Bad file descriptor
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Assigned To: vpshastry
QA Contact: Saurabh
Depends On:
Blocks: 987415
Reported: 2013-07-05 02:57 EDT by Saurabh
Modified: 2016-01-19 01:12 EST
CC List: 6 users

See Also:
Fixed In Version: glusterfs-3.4.0.12rhs.beta6-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 987415
Environment:
Last Closed: 2013-09-23 18:39:53 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Saurabh 2013-07-05 02:57:16 EDT
Description of problem:
After updating to the latest beta release, I tried to create 1MB files in the root of a volume that has a 1GB quota set on it.

After creating 1024 files, writes start failing with "Bad file descriptor".

Mount type: glusterfs
[root@rhsauto030 glusterfs-test]# ps -eaf | grep glusterfs
root     12627     1 19 04:53 ?        00:08:08 /usr/sbin/glusterfs --volfile-id=/dist-rep --volfile-server=10.70.37.98 /mnt/glusterfs-test/
root     14805 10447  0 05:35 pts/0    00:00:00 grep glusterfs
[root@rhsauto030 glusterfs-test]# 


volume type: 6x2
[root@quota1 ~]# gluster volume info dist-rep 
 
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: b1b80b68-b98b-4aab-a563-3c386c39b842
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.98:/rhs/bricks/d1r1
Brick2: 10.70.37.174:/rhs/bricks/d1r2
Brick3: 10.70.37.136:/rhs/bricks/d2r1
Brick4: 10.70.37.168:/rhs/bricks/d2r2
Brick5: 10.70.37.98:/rhs/bricks/d3r1
Brick6: 10.70.37.174:/rhs/bricks/d3r2
Brick7: 10.70.37.136:/rhs/bricks/d4r1
Brick8: 10.70.37.168:/rhs/bricks/d4r2
Brick9: 10.70.37.98:/rhs/bricks/d5r1
Brick10: 10.70.37.174:/rhs/bricks/d5r2
Brick11: 10.70.37.136:/rhs/bricks/d6r1
Brick12: 10.70.37.168:/rhs/bricks/d6r2
Options Reconfigured:
features.quota: on


[root@quota1 ~]# gluster volume quota dist-rep list
                  Path                   Hard-limit Soft-limit   Used  Available
--------------------------------------------------------------------------------
/                                            1GB       90%       1.0GB  0Bytes
[root@quota1 ~]# 


Version-Release number of selected component (if applicable):
server:
-------
[root@quota1 ~]# rpm -qa | grep glusterfs
glusterfs-fuse-3.4.0.12rhs.beta2-1.el6rhs.x86_64
glusterfs-server-3.4.0.12rhs.beta2-1.el6rhs.x86_64
glusterfs-3.4.0.12rhs.beta2-1.el6rhs.x86_64
[root@quota1 ~]# 

client:
-------
[root@rhsauto030 glusterfs-test]# rpm -qa | grep glusterfs
glusterfs-fuse-3.4.0.12rhs.beta2-1.el6rhs.x86_64
glusterfs-3.4.0.12rhs.beta2-1.el6rhs.x86_64
[root@rhsauto030 glusterfs-test]# 


How reproducible:
Seen twice so far.

Steps to Reproduce:
1. Create a volume and start it.
2. Set a quota of 1GB on the volume root.
3. Mount it over glusterfs (FUSE) and start creating files of 1MB each; a consolidated sketch of these steps follows this list.
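
A consolidated sketch of the above, assuming the 6x2 brick layout from the volume info and the client mount point /mnt/glusterfs-test. Only the dd loop is recorded verbatim in this report; the rest is reconstructed from the standard gluster CLI:

# on a server node: create and start the volume, then enable quota and set the limit
gluster volume create dist-rep replica 2 10.70.37.98:/rhs/bricks/d1r1 10.70.37.174:/rhs/bricks/d1r2 ...
gluster volume start dist-rep
gluster volume quota dist-rep enable
gluster volume quota dist-rep limit-usage / 1GB

# on the client: mount over FUSE and create 1MB files until the limit is crossed
mount -t glusterfs 10.70.37.98:/dist-rep /mnt/glusterfs-test
cd /mnt/glusterfs-test
for i in {1..1030}; do dd if=/dev/urandom of=$i.$(date +%s) bs=1024 count=1024; done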

Actual results:
Once the quota limit is reached, writes at the mount point fail with
"Bad file descriptor".

Result of the second (latest) trial:
1048576 bytes (1.0 MB) copied, 0.440683 s, 2.4 MB/s
dd: writing `1025.1372980863': Bad file descriptor
dd: closing output file `1025.1372980863': Bad file descriptor
dd: opening `1026.1372980863': Disk quota exceeded
dd: opening `1027.1372980863': Disk quota exceeded
dd: opening `1028.1372980864': Disk quota exceeded
dd: opening `1029.1372980864': Disk quota exceeded
dd: opening `1030.1372980864': Disk quota exceeded


Result of the first trial of the same test:
dd: writing `1025.1372979790': Bad file descriptor
dd: closing output file `1025.1372979790': Bad file descriptor
dd: writing `1026.1372979791': Bad file descriptor
dd: closing output file `1026.1372979791': Bad file descriptor
dd: writing `1027.1372979791': Bad file descriptor
dd: closing output file `1027.1372979791': Bad file descriptor
dd: writing `1028.1372979791': Bad file descriptor
dd: closing output file `1028.1372979791': Bad file descriptor
dd: writing `1029.1372979791': Bad file descriptor
dd: closing output file `1029.1372979791': Bad file descriptor
dd: writing `1030.1372979791': Bad file descriptor
dd: closing output file `1030.1372979791': Bad file descriptor
[root@rhsauto030 glusterfs-test]# 
[root@rhsauto030 glusterfs-test]# 
[root@rhsauto030 glusterfs-test]# #dd if=/dev/urandom of=10.$(date +%s) bs=1024 count=1024
[root@rhsauto030 glusterfs-test]# ls | wc 
   1030    1030   15373
[root@rhsauto030 glusterfs-test]# dd if=/dev/urandom of=1031.$(date +%s) bs=1024 count=1024
dd: opening `1031.1372979857': Disk quota exceeded
[root@rhsauto030 glusterfs-test]# 
[root@rhsauto030 glusterfs-test]# dd if=/dev/urandom of=1032.$(date +%s) bs=1024 count=1024
dd: opening `1032.1372979871': Disk quota exceeded
[root@rhsauto030 glusterfs-test]# 


Expected results:

"Disk quota exceeded" (EDQUOT) is expected, not "Bad file descriptor" (EBADF).
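
To confirm which errno the client actually returns once the limit is hit, one option (a suggested diagnostic, not something run for this report) is to trace a dd that crosses the limit; the write() calls should fail with EDQUOT rather than EBADF:

# hypothetical check: trace the file-descriptor syscalls of a dd that exceeds the quota
strace -e trace=open,write,close \
    dd if=/dev/urandom of=/mnt/glusterfs-test/over.$(date +%s) bs=1024 count=1024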

Additional info:
The script used for creating the data:

[root@rhsauto030 glusterfs-test]# for i in {1..1030}; do dd if=/dev/urandom of=$i.$(date +%s) bs=1024 count=1024; done
Comment 4 Amar Tumballi 2013-07-06 13:52:39 EDT
The fix should mostly be http://review.gluster.org/#/c/5296/. It still needs a test.
Comment 8 Scott Haines 2013-09-23 18:39:53 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html