Bug 5333

Summary: "find /u2" on NFS file system exhausts all kernel memory
Product: [Retired] Red Hat Linux
Reporter: jim
Component: nfs-server
Assignee: Michael K. Johnson <johnsonm>
Severity: medium
Priority: medium
Version: 6.0
Hardware: i386
OS: Linux
Doc Type: Bug Fix
Last Closed: 2002-12-14 01:12:09 UTC

Description jim 1999-09-23 17:45:45 UTC
On a 400 MHz Pentium running kernel 2.2.5-22, with /u2 mounted as
an NFS file system served by Windows NT running a slow NFS server
with InterGraph NFS software, the command:
  find /u2 -name core -exec rm {} \;
produces the messages:
  Unable to load interpreter
  Out of memory for find
  Out of memory for sendmail [etc.]
  NFS: server intrt33, readdir reply truncated
which all indicate an out-of-memory condition.  The size of /u2
is 3 GB.

The same command run on a local SCSI (larger) drive shows no
problems.

Running "top" alongside the above command shows that "find" is
the top CPU and memory user, but several minutes pass with only
modest CPU and memory usage; then usage of both increases rapidly
until memory is exhausted.

My theory is that the memory is being consumed by the NFS
filesystem code (note: the client, not the NFS server), because
swap space is available but is not being used.  The kernel source
shows that the NFS code allocates with kmalloc().  I have no
source for find.c, but find should run in user space and thus
use swap.
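The swap theory above can be checked from a second shell while the
find runs (a sketch; it assumes the MemFree and SwapFree fields of
/proc/meminfo, which these kernels expose):

```shell
# Sample free memory and swap a few times while "find" runs.
# If the leak is in kernel (kmalloc) memory, MemFree should fall
# steadily while SwapFree stays untouched.
for i in 1 2 3; do
    grep -E 'MemFree|SwapFree' /proc/meminfo
    sleep 2
done
```

If instead SwapFree drops along with MemFree, the memory is being
paged normally and the culprit is more likely a user-space process.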

This problem will probably not show up except on large
NFS-mounted drives.  Perhaps a slow NFS server must be serving
the drive, and perhaps the NFS error is a cause rather than a
symptom.

This is a serious problem.

Jim Ahlstrom

------- Additional Comments From   09/23/99 14:33 -------
More info:
The same "find" command on a different but similar NFS drive shows no
problems.

An "ls" command in a certain directory gives the message "memory
exhausted".  This directory has 2000 files; the longest name is 126
characters.

It is starting to look like the NFS client has a problem with large
directories.  BTW, a SunOS 4 system has no problem with this file
system.

Jim Ahlstrom

------- Additional Comments From   09/23/99 15:16 -------
More info:
Removing five zero-length files, three of which had 126-character
names, has fixed the problem on the /u2 drive.  BUT:

It is clear that the NFS client implementation, specifically
nfs2xdr.c, has problems with large readdir responses and/or buffer
overruns.  It must be made more robust, or others, especially
large sites, will have problems in the future.

Jim Ahlstrom

------- Additional Comments From   09/23/99 15:21 -------
The problem directory on /u2 has almost 2000 files.  The command
  ls | wc
in this directory shows a size of 17273 characters.
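To pin down which entries inflate the readdir reply, a sketch like
the following could be run in the problem directory (the
126-character threshold is the longest name reported above):

```shell
# Print any directory entries whose names are 126 characters or
# longer -- the length of the files whose removal cured the problem.
ls -a | awk 'length($0) >= 126'

# Total bytes across all names (one newline per name included),
# comparable to the 17273-character figure from "ls | wc".
ls | awk '{ total += length($0) + 1 } END { print total }'
```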

Jim Ahlstrom

Comment 1 Bill Nottingham 1999-09-23 20:32:59 UTC
I can't reproduce this here running kernel-2.2.12.
Can you try grabbing the latest kernel from Raw Hide and
seeing if that solves the problem?