Bug 787368 - nfs: space consumption is high
Summary: nfs: space consumption is high
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: nfs
Version: pre-release
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Amar Tumballi
QA Contact: Saurabh
URL:
Whiteboard:
Depends On:
Blocks: 817967
Reported: 2012-02-04 14:19 UTC by Saurabh
Modified: 2016-01-19 06:09 UTC
4 users

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-24 17:44:23 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: 3.3.0qa42
Embargoed:



Description Saurabh 2012-02-04 14:19:29 UTC
Description of problem:
[root@RHSSA1 export-xfs]# gluster volume info 
 
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: e7b55c76-31f7-4035-863a-d68cd824687c
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.1.11.157:/export-xfs/dr
Brick2: 10.1.11.158:/export-xfs/drr
Brick3: 10.1.11.157:/export-xfs/ddr
Brick4: 10.1.11.158:/export-xfs/ddrr


[root@RHSSA1 export-xfs]# df -h
/dev/sdb1              10G   10G   20K 100% /export-xfs


[root@RHSSA1 export-xfs]# ls -l dr/.glusterfs | wc
    257    2306   12043
[root@RHSSA1 export-xfs]# ls -l ddr/.glusterfs | wc
    257    2306   12043


now on the mount point,
[root@RHSSA1 nfs-test]# ls -lia 
total 12
 1 drwxr-xr-x. 3 root root   46 Feb  4 13:56 .
12 drwxr-xr-x. 5 root root 4096 Feb  3 10:56 ..
[root@RHSSA1 nfs-test]# 
[root@RHSSA1 nfs-test]# mount
10.1.11.157:/dist-rep on /mnt/nfs-test type nfs (rw,vers=3,nolock,addr=10.1.11.157)
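The mismatch above (an essentially empty mount while the brick filesystem reports 100% full) can be quantified by comparing `du` on the mount-visible tree against `du` on a brick. A minimal helper sketch (the example paths come from this report; the function itself is a generic comparison, not GlusterFS tooling):

```shell
# Print how many more KiB the brick tree consumes than the
# mount-visible tree. Usage (paths from this report; adjust to
# your layout):
#   space_delta_kib /mnt/nfs-test /export-xfs/dr
space_delta_kib() {
    mount_kib=$(du -sk "$1" | awk '{print $1}')
    brick_kib=$(du -sk "$2" | awk '{print $1}')
    echo $((brick_kib - mount_kib))
}
```

A large positive delta that persists after files are deleted points at data being retained on the brick, for example under `.glusterfs`.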


Actually I was executing iozone, and before that I had run the sanity tools and removed the intended directory. The issue is that iozone failed, and the reason reported in "nfs.log" is this:

[root@RHSSA1 export-xfs]# tail -f /root/330/inst/var/log/glusterfs/nfs.log 
[2012-02-04 13:57:13.521221] W [client3_1-fops.c:690:client3_1_writev_cbk] 0-dist-rep-client-0: remote operation failed: No space left on device
[2012-02-04 13:57:13.521868] W [client3_1-fops.c:690:client3_1_writev_cbk] 0-dist-rep-client-1: remote operation failed: No space left on device
[2012-02-04 13:57:13.648665] W [client3_1-fops.c:690:client3_1_writev_cbk] 0-dist-rep-client-0: remote operation failed: No space left on device
[2012-02-04 13:57:13.649937] W [client3_1-fops.c:690:client3_1_writev_cbk] 0-dist-rep-client-1: remote operation failed: No space left on device
[2012-02-04 13:57:13.773949] W [client3_1-fops.c:690:client3_1_writev_cbk] 0-dist-rep-client-1: remote operation failed: No space left on device
[2012-02-04 13:57:13.838390] W [client3_1-fops.c:690:client3_1_writev_cbk] 0-dist-rep-client-0: remote operation failed: No space left on device
[2012-02-04 13:57:13.838645] W [nfs3.c:4904:nfs3svc_commit_cbk] 0-nfs: ad648c8f: /iozone.tmp => -1 (No space left on device)
[2012-02-04 13:57:13.838672] W [nfs3-helpers.c:3516:nfs3_log_commit_res] 0-nfs-nfsv3: XID: ad648c8f, COMMIT: NFS: 28(No space left on device), POSIX: 28(No space left on device), wverf: 1328362142
[2012-02-04 13:57:13.840228] W [client-lk.c:379:delete_granted_locks_owner] 0-dist-rep-client-0: fdctx not valid
[2012-02-04 13:57:13.840269] W [client-lk.c:379:delete_granted_locks_owner] 0-dist-rep-client-1: fdctx not valid

Now to me it seems the hard links created under .glusterfs have not been removed, which is why this happened.
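Given that hypothesis, one way to look for leftover gfid hard links is to list regular files under `.glusterfs` whose link count has dropped to 1. This sketch assumes each user-visible file on a brick shares its inode with a gfid hard link, so a link count of 1 there suggests the visible file was unlinked but the gfid link still pins the blocks (the example brick path is from this report; this is not official GlusterFS tooling):

```shell
# List suspected orphaned gfid hard links on a brick: regular files
# under .glusterfs that no longer share their inode with any
# user-visible file (link count == 1).
find_orphaned_gfids() {
    brick="$1"
    find "$brick/.glusterfs" -type f -links 1
}

# Example (brick path from this report):
#   find_orphaned_gfids /export-xfs/dr
```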


Version-Release number of selected component (if applicable):
330qa21

How reproducible:


Steps to Reproduce:
1.
2.
3.
  
Actual results:
no space left

Expected results:
Removed files should not continue to consume space on the bricks.

Additional info:

Comment 1 Krishna Srinivas 2012-02-22 12:24:48 UTC
Saurabh no longer sees this behavior in the recent qa releases, but he suggested keeping this open for one more week.

Comment 2 Anand Avati 2012-03-07 18:39:56 UTC
CHANGE: http://review.gluster.com/2892 (libglusterfs/fd: fixed fd_anonymous() leak) merged in master by Vijay Bellur (vijay)

