Bug 3560 - rpc.mountd leaks memory
Summary: rpc.mountd leaks memory
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: glibc
Version: 6.0
Hardware: i386
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Cristian Gafton
QA Contact:
URL:
Whiteboard:
Duplicates: 2706 3129 4089 (view as bug list)
Depends On:
Blocks:
 
Reported: 1999-06-18 08:52 UTC by florian.xhumari
Modified: 2008-05-01 15:37 UTC (History)
3 users (show)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 1999-07-02 20:23:11 UTC
Embargoed:



Description florian.xhumari 1999-06-18 08:52:15 UTC
My NFS server machine under RH6.0 serves some 20 clients.
There are lots of mount/unmount requests because of
automounters, and lots of lines in /etc/exports (three
exports to each of the 20 machines).

rpc.mountd's VSZ and RSS (as reported by ps) grow steadily,
by about 48 kB per mount/unmount request.

This makes me reboot my server very often :-(

Comment 1 Jeff Johnson 1999-06-18 16:38:59 UTC
*** Bug 2706 has been marked as a duplicate of this bug. ***

Known memory leak which causes rpc.mountd to "grow" dramatically
with usage

------- Additional Comments From jbj  06/03/99 09:51 -------
Can you verify this with playpen knfsd-1.3.3-1? Thanks ...

Comment 2 Jeff Johnson 1999-06-18 16:41:59 UTC
The memory leak will be fixed in a glibc errata which will be released
Real Soon Now. I'm changing the component to glibc ...

Comment 3 Cristian Gafton 1999-07-02 20:23:59 UTC
Fixed by the glibc 2.1.2 release. The bug was in the nss_nisplus
module.

Package available in rawhide shortly.

Comment 4 Cristian Gafton 1999-07-02 20:42:59 UTC
*** Bug 3129 has been marked as a duplicate of this bug. ***

After installing Red Hat 6.0 on a production server, I
noticed a problem with syslogd: syslogd was growing in
memory size in proportion to the amount of data received.
This is a box that is doing lots and lots of syslogging from
lots of hosts.

Well after some investigation, and examining of the source
code, I have isolated the bug.  And I am including a sample
program that shows the bug.

It appears that if you call gethostbyaddr, and if the
address is not in /etc/hosts, the gethostbyaddr call leaks
memory.  If the address is in /etc/hosts (which is what we
did to solve the problem for now), it does not leak.  The
program below will call gethostbyaddr 100,000 times, and
then pause so you can take a look at the size of the
process.  Under 5.2 the process does not grow in size.
Under 6.0 the process grows rapidly.

test.c
----------------------------
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
int main(void)
{
        unsigned long n = 0x24b445cf;   /* IPv4 address, network byte order */
        int i;
        struct hostent *hp = 0, *ohp = 0;

        for (i = 0; i < 100000; i++) {
                /* Reverse-resolve the address; under the buggy glibc,
                   each call that misses /etc/hosts leaks memory. */
                hp = gethostbyaddr((char *) &n, 4, AF_INET);
                if (ohp != hp)
                        printf("HP CHANGED (%p != %p)\n",
                               (void *) ohp, (void *) hp);
                ohp = hp;
                if (!(i % 1000))
                        printf(".");
                fflush(stdout);
        }
        printf("Press <RETURN>... ");
        getc(stdin);
        return 0;
}

Comment 5 Jeff Johnson 1999-07-23 08:54:59 UTC
*** Bug 4089 has been marked as a duplicate of this bug. ***

The rpc.mountd in Red Hat 6.0 appears to have a memory
leak.  I have a file server and 6 diskless clients we
use for kernel development.  The clients mount all their
file systems off of the file server.  The server has a
fresh Red Hat 6.0 installation.  All machines are running
the 2.2.5 kernel; only the clients boot with modified
kernels, while the server runs a stock 2.2.5.  After
several days, the rpc.mountd process on the server can
grow to 30 MB.  Note that the clients are often rebooted,
and each time they remount 5 file systems off of the
server.  I've emailed the author listed in the mountd
sources, but haven't received a response.

