Bug 623803 - File system free space leaks with NFS lock, rm, and signal
Summary: File system free space leaks with NFS lock, rm, and signal
Keywords:
Status: CLOSED DUPLICATE of bug 636926
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
: ---
Assignee: Jeff Layton
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On: 613736 712054
Blocks:
 
Reported: 2010-08-12 19:45 UTC by Marc Milgram
Modified: 2011-06-09 11:40 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 613736
Environment:
Last Closed: 2010-10-21 16:56:20 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Comment 1 Jeff Layton 2010-10-19 18:05:40 UTC
Have you confirmed that this is indeed a problem on RHEL6?

If so, what kernels are the client and server running? What sort of filesystem is being exported?

Comment 2 Marc Milgram 2010-10-19 20:15:25 UTC
Jeff,

I tested this in loopback on RHEL 6 using kernel-2.6.32-59.1.el6.x86_64.  I tried exporting several filesystem types including ext3, ext4, and btrfs.

I just reran the test using kernel-2.6.32-71.el6.x86_64 on both NFS client and server.  I used btrfs as the underlying filesystem, but I believe that the type of filesystem is irrelevant.

Needless to say, this reproduced right away for me between two RHEL 6 boxes (the client is a VM, the server bare metal).

Comment 3 Jeff Layton 2010-10-21 13:40:55 UTC
I'm afraid I've not been able to reproduce it:

...created an 8G file on an exported ext4 filesystem:

server$ dd if=/dev/zero of=/export/filler bs=1M count=8000
server$ df -k /export
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_data-export
                      20642428   9479488  10114364  49% /export

...mounted up the filesystem on the client using NFSv3. Here's the fstab entry:

dantu.rdu.redhat.com:/export	/mnt/dantu	nfs	noauto	0 0
client# mount /mnt/dantu -o vers=3

...then acquired the lock:

client# ./locktest /mnt/dantu/filler
Press <Enter> to try to get lock: 
waiting...Locked.
Press <Enter> to release lock: 

...removed the file on server -- free space is unchanged since lockd holds it open for now:

server$ rm /export/filler
server$ df -k /export
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_data-export
                      20642428   9479488  10114364  49% /export

...hit ^c on the client to kill locktest. df on the server shows the space as immediately freed:

server$ df -k /export
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_data-export
                      20642428   1287484  18306368   7% /export

...so apparently you and I are doing something different. Perhaps you can clarify what that is?
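The ./locktest helper used in the reproduction above is not attached to this bug, so its exact behavior is an assumption; a minimal Python sketch of what it presumably does is:

```python
# lock_hold.py -- rough approximation of the ./locktest helper used in the
# reproduction above (the real tool is not attached to this bug; the names
# and exact behavior here are assumptions).
import fcntl
import os

def hold_lock(path):
    """Open path, block until a whole-file POSIX write lock is granted,
    and return the open fd.  Keeping the fd open keeps the lock; closing
    it (or killing the process, as with ^C) releases both."""
    fd = os.open(path, os.O_RDWR)
    fcntl.lockf(fd, fcntl.LOCK_EX)  # blocks, like "waiting...Locked."
    return fd

def release_lock(fd):
    """Drop the lock and the open-file reference in one step."""
    os.close(fd)
```

Over NFS, taking the lock exercises lockd (v3) or the v4 locking path on the server, which is what gives the server a long-lived reference to the (soon to be unlinked) file.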

Comment 4 Marc Milgram 2010-10-21 15:09:58 UTC
Jeff,

I just retested...  In RHEL 6, the problem is with NFSv4.  NFSv2 and NFSv3 report that the disk space is freed promptly after killing locktest.
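For completeness, the NFSv4 case only differs from comment 3 in the mount step; by analogy with the v3 transcript it would be something like this (server path reused from the transcript; the exact options are an assumption, not taken from the bug):

```shell
# same fstab entry as in comment 3; only the version option changes:
mount /mnt/dantu -o vers=4
```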

-Marc

Comment 5 Jeff Layton 2010-10-21 15:21:16 UTC
Ok, that helps. I can reproduce it with that too. Interestingly, when I shut down NFS serving on the server, the free space still isn't reclaimed. I had to shut it down, then unmount and remount the filesystem before the free space was visible.

I'll have to do some analysis to figure out what's happening.

Comment 6 Jeff Layton 2010-10-21 15:40:13 UTC
Some more info. Locking isn't necessary here. This happens when you just hold the file open and then delete it. Josef said it sounds like the inode is ending up on the unused list instead of being cleaned up properly when nfsd's filp is closed.

That seems likely -- when I do this:

    # echo 2 > /proc/sys/vm/drop_caches

...the space magically reappears.
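The semantics comment 6 relies on can be demonstrated locally with a small Python sketch (standard POSIX behavior on a local filesystem, not specific to this bug): an unlinked file's data stays allocated only as long as a descriptor is open on it.

```python
# Local demonstration of the semantics at issue: after unlink, the data
# lives on only while an open file descriptor references the inode.  In
# this bug, the NFS server keeps an equivalent reference alive too long,
# so the space is not returned when nfsd's filp is closed.
import os
import tempfile

def unlinked_file_demo():
    fd, path = tempfile.mkstemp()
    os.write(fd, b"x" * 4096)
    os.unlink(path)                         # directory entry gone...
    size_while_open = os.fstat(fd).st_size  # ...but the inode survives
    os.close(fd)                            # last reference dropped: space freed
    return size_while_open
```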

Comment 7 J. Bruce Fields 2010-10-21 15:52:30 UTC
See also bug 636926, describing the same RHEL6/NFSv4 problem.  I can't find any leak of NFSv4 data structures--as far as NFSv4 is concerned I think it's done with the inode in question--but the inode isn't freed.  I think we must be misusing the VFS reference counting somehow.  I don't know whether you're seeing a symptom of the same problem.

Comment 8 Jeff Layton 2010-10-21 16:56:20 UTC
Thanks, Bruce...

Yeah, looks like the same problem. I'll close this one in favor of that one as it seems to have more info.

*** This bug has been marked as a duplicate of bug 636926 ***

