Bug 623803 - File system free space leaks with NFS lock, rm, and signal
Summary: File system free space leaks with NFS lock, rm, and signal
Status: CLOSED DUPLICATE of bug 636926
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.0
Hardware: All Linux
Target Milestone: rc
Assignee: Jeff Layton
QA Contact: Red Hat Kernel QE team
Depends On: 613736 712054
Reported: 2010-08-12 19:45 UTC by Marc Milgram
Modified: 2011-06-09 11:40 UTC (History)
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 613736
Last Closed: 2010-10-21 16:56:20 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments

Comment 1 Jeff Layton 2010-10-19 18:05:40 UTC
Have you confirmed that this is indeed a problem on RHEL6?

If so, what kernels are the client and server running? What sort of filesystem is being exported?

Comment 2 Marc Milgram 2010-10-19 20:15:25 UTC

I tested this in loopback on RHEL 6 using kernel-2.6.32-59.1.el6.x86_64.  I tried exporting several filesystem types including ext3, ext4, and btrfs.

I just reran the test using kernel-2.6.32-71.el6.x86_64 on both NFS client and server.  I used btrfs as the underlying filesystem, but I believe that the type of filesystem is irrelevant.

Needless to say, this reproduced right away for me between two RHEL 6 boxes (the client is a VM, the server bare metal).

Comment 3 Jeff Layton 2010-10-21 13:40:55 UTC
I'm afraid I've not been able to reproduce it:

...created an 8G file on an exported ext4 filesystem:

server$ dd if=/dev/zero of=/export/filler bs=1M count=8000
server$ df -k /export
Filesystem           1K-blocks      Used Available Use% Mounted on
                      20642428   9479488  10114364  49% /export

...mounted up the filesystem on the client using NFSv3. Here's the fstab entry:

dantu.rdu.redhat.com:/export	/mnt/dantu	nfs	noauto	0 0
client# mount /mnt/dantu -o vers=3

...then acquired the lock:

client# ./locktest /mnt/dantu/filler
Press <Enter> to try to get lock: 
Press <Enter> to release lock: 

...removed the file on server -- free space is unchanged since lockd holds it open for now:

server$ rm /export/filler
server$ df -k /export
Filesystem           1K-blocks      Used Available Use% Mounted on
                      20642428   9479488  10114364  49% /export

...hit ^c on the client to kill locktest. df on the server shows the space as immediately freed:

server$ df -k /export
Filesystem           1K-blocks      Used Available Use% Mounted on
                      20642428   1287484  18306368   7% /export

...so apparently you and I are doing something different. Perhaps you can clarify what that is?
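(The locktest utility used in the transcript above isn't attached to this report. Assuming from its prompts that it simply takes a path, acquires a blocking exclusive POSIX byte-range lock via fcntl, and holds it until Enter is pressed, a rough Python stand-in might look like the sketch below; the function names are hypothetical, not from the original tool.)

```python
#!/usr/bin/env python3
# Hypothetical stand-in for the locktest tool from the transcript:
# take a path, grab an exclusive whole-file POSIX lock, hold it until
# the user presses Enter, then release it.
import fcntl
import sys

def acquire_lock(f):
    """Blocking exclusive byte-range lock on the whole file (fcntl F_SETLKW)."""
    fcntl.lockf(f.fileno(), fcntl.LOCK_EX)

def release_lock(f):
    """Drop the lock taken by acquire_lock()."""
    fcntl.lockf(f.fileno(), fcntl.LOCK_UN)

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1], "r+b") as f:
        input("Press <Enter> to try to get lock: ")
        acquire_lock(f)
        input("Press <Enter> to release lock: ")
        release_lock(f)
```

On an NFS mount, the LOCK_EX request is what causes lockd (or, for NFSv4, the server's state machinery) to hold a reference to the file on the server side.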

Comment 4 Marc Milgram 2010-10-21 15:09:58 UTC

I just retested. In RHEL 6, the problem is with NFSv4. With NFSv2 and NFSv3, the disk space is reported as freed promptly after killing locktest.


Comment 5 Jeff Layton 2010-10-21 15:21:16 UTC
Ok, that helps. I can reproduce it with that too. Interestingly, even after I shut down NFS serving on the server, the free space still wasn't shown. I had to stop the service, then unmount and remount the exported filesystem before the free space became visible.

I'll have to do some analysis to figure out what's happening.

Comment 6 Jeff Layton 2010-10-21 15:40:13 UTC
Some more info: locking isn't necessary here. This happens when you simply hold the file open and then delete it. Josef said it sounds like the inode is ending up on the unused list instead of being cleaned up properly when nfsd's filp is closed.

That seems likely -- when I do this:

    # echo 2 > /proc/sys/vm/drop_caches

...the space magically reappears.
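(The hold-open-then-delete pattern described above can be shown locally with a short sketch; the function name is hypothetical. On a local filesystem the kernel reclaims the blocks as soon as the last descriptor is closed, which is exactly the step that the buggy server-side path here fails to complete until caches are dropped.)

```python
import os

def hold_open_and_delete(path, size=1 << 20):
    """Write a file, keep an open fd, unlink the name, and show the
    data is still reachable through the fd until close()."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    os.write(fd, b"x" * size)
    os.unlink(path)                     # name is gone; inode pinned by the fd
    still_named = os.path.exists(path)  # False: directory entry removed
    os.lseek(fd, 0, os.SEEK_SET)
    data = os.read(fd, 4)               # reads via the fd still succeed
    os.close(fd)                        # last reference dropped; space reclaimed
    return still_named, data
```

In the bug, the close on the server side (of nfsd's filp) happens, but the inode lingers on the unused list, so the space only reappears after `echo 2 > /proc/sys/vm/drop_caches`.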

Comment 7 J. Bruce Fields 2010-10-21 15:52:30 UTC
See also bug 636926, which describes the same RHEL6/NFSv4 problem.  I can't find any leak of NFSv4 data structures--as far as NFSv4 is concerned I think it's done with the inode in question--but the inode isn't freed.  I think we must be misusing the VFS reference counting somehow.  I don't know whether you're seeing a symptom of the same problem.

Comment 8 Jeff Layton 2010-10-21 16:56:20 UTC
Thanks, Bruce...

Yeah, looks like the same problem. I'll close this one in favor of that one as it seems to have more info.

*** This bug has been marked as a duplicate of bug 636926 ***
