Bug 65635 - lockd core when client starts the apache
Status: CLOSED WORKSFORME
Product: Red Hat Linux
Classification: Retired
Component: nfs-utils
Version: 7.2
Platform: i686 Linux
Priority: medium  Severity: high
Assigned To: Pete Zaitcev
Ben Levenson
Reported: 2002-05-29 02:10 EDT by Need Real Name
Modified: 2007-04-18 12:42 EDT

Doc Type: Bug Fix
Last Closed: 2005-05-13 13:24:10 EDT

Attachments: None
Description Need Real Name 2002-05-29 02:10:40 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)

Description of problem:
I am using nfs-utils 0.3.1 to provide NFS service, but when the client machine
starts apache frequently, the server's lockd core dumps and leaves a defunct
[lockd] process behind. There is no way to restart the NFS service short of
rebooting the whole machine.
It seems the problem is that apache uses flock to lock files on the
NFS-mounted filesystem, and the lock calls eventually reach the server's
lockd, which causes the crash.
I also notice that, running the NFS service on a machine with 1 GB of memory,
free memory keeps falling; once free memory runs out, lockd core dumps very
easily. I have to restart my machine twice a day to avoid this problem.
So there are two possible causes of this problem:
1. the flock calls
2. memory exhaustion on the server
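The flock path suspected above can be sketched as a minimal repro. This is an illustrative sketch, not from the original report: the NFSDIR variable, the lock file name, and the use of flock(1) from util-linux are all assumptions (it defaults to /tmp so it runs anywhere; point NFSDIR at the real NFS mount to actually exercise the server's lockd).

```shell
#!/bin/sh
# Hypothetical sketch of the suspected trigger: take an exclusive
# flock(2)-style lock on a file that lives on the NFS mount, as apache
# does with its lock file. Over NFS, each lock/unlock request is
# forwarded to the server's lockd.
# NFSDIR is an assumed variable; it defaults to /tmp here.
NFSDIR="${NFSDIR:-/tmp}"
LOCKFILE="$NFSDIR/lockd-test.lock"
touch "$LOCKFILE"
# flock(1) from util-linux holds the lock while the given command runs,
# then releases it on exit.
flock -x "$LOCKFILE" -c 'echo lock acquired and released'
```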
Currently I am running these programs on my machine:
ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 12:32 ?        00:00:04 init [3] 
root         2     1  0 12:32 ?        00:00:00 [keventd]
root         3     0  0 12:32 ?        00:00:00 [ksoftirqd_CPU0]
root         4     0  0 12:32 ?        00:00:00 [kswapd]
root         5     0  0 12:32 ?        00:00:00 [kreclaimd]
root         6     0  0 12:32 ?        00:00:00 [bdflush]
root         7     0  0 12:32 ?        00:00:00 [kupdated]
root         8     1  0 12:32 ?        00:00:00 [mdrecoveryd]
root        14     1  0 12:32 ?        00:00:00 [AIFd]
root        18     1  0 12:32 ?        00:00:00 [kjournald]
root        93     1  0 12:32 ?        00:00:00 [khubd]
root       163     1  0 12:32 ?        00:00:00 minilogd
rpc        536     1  0 12:32 ?        00:00:00 portmap
rpcuser    564     1  0 12:32 ?        00:00:00 rpc.statd
root       715     1  0 12:32 ?        00:00:00 /usr/sbin/sshd
root       748     1  0 12:32 ?        00:00:00 xinetd -stayalive -reuse -pidfile /var/run/xinetd.pid
root       778     1  0 12:32 ?        00:00:00 rpc.rquotad
root       783     1  0 12:32 ?        00:00:00 rpc.mountd
root       788     1  0 12:32 ?        00:00:02 [nfsd]
root       789     1  0 12:32 ?        00:00:02 [nfsd]
root       790     1  0 12:32 ?        00:00:02 [nfsd]
root       791   788  0 12:32 ?        00:00:00 [lockd]
root       792   791  0 12:32 ?        00:00:00 [rpciod]
root       793     1  0 12:32 ?        00:00:02 [nfsd]
root       794     1  0 12:32 ?        00:00:02 [nfsd]
root       795     1  0 12:32 ?        00:00:02 [nfsd]
root       796     1  0 12:32 ?        00:00:02 [nfsd]
root       797     1  0 12:32 ?        00:00:02 [nfsd]
wnn        817     1  0 12:32 ?        00:00:00 /usr/bin/cserver
root       835     1  0 12:32 ?        00:00:00 crond
xfs        889     1  0 12:32 ?        00:00:00 xfs -droppriv -daemon
root       914     1  0 12:32 tty1     00:00:00 /sbin/mingetty tty1
root       915     1  0 12:32 tty2     00:00:00 /sbin/mingetty tty2
root       916     1  0 12:32 tty3     00:00:00 /sbin/mingetty tty3
root       917     1  0 12:32 tty4     00:00:00 /sbin/mingetty tty4
root       918     1  0 12:32 tty5     00:00:00 /sbin/mingetty tty5
root       919     1  0 12:32 tty6     00:00:00 /sbin/mingetty tty6

Version-Release number of selected component (if applicable):


How reproducible:
Sometimes

Steps to Reproduce:
1. Install apache in the NFS-mounted directory
2. Start and stop apache repeatedly
3. Continue until apache hangs; that is when NFS hangs
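The start/stop cycling in the steps above can be driven with a small loop. This is a hypothetical driver, not from the report: the APACHECTL path and the default iteration count are assumptions.

```shell
#!/bin/sh
# Hypothetical driver for the repro steps: start and stop apache in a
# loop until it (and the server-side lockd) wedges.
# APACHECTL is an assumed path; override it to match your install.
cycle_apache() {
    apachectl="${APACHECTL:-/usr/sbin/apachectl}"
    n="${1:-100}"            # number of start/stop cycles, assumed default
    i=0
    while [ "$i" -lt "$n" ]; do
        "$apachectl" start
        sleep 1
        "$apachectl" stop
        sleep 1
        i=$((i + 1))
    done
}
```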

	

Actual Results:  lockd core dumps, but since it is another process's child, it
just hangs there in the defunct state

Expected Results:  lockd should keep running indefinitely

Additional info:
Comment 1 Bob Matthews 2002-05-29 14:37:59 EDT
What kernel version are you running?
Comment 2 Need Real Name 2002-05-29 21:32:43 EDT
My kernel is 2.4.7-10 (Red Hat Enigma). Actually, I notice this: if I create a
new file in the NFS-mounted directory on the client machine, the used memory
on the server machine increases by exactly the file's size. If I delete that
new file, the used memory decreases by the same amount, just as if the kernel
were caching the file in memory. But deleting an already existing file does
not decrease the used memory. That means only files created and deleted within
the same NFS session cause the used memory to increase and decrease.
Also, if I cat /proc/meminfo, I can see that Inact_dirty increases by the same
amount as the used memory.
Is there an option in NFS that sets this "memory cache"?

P.S. I tried kernel 2.4.18 as well, but it was no good; same result.
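The create/delete observation in this comment can be checked with a sketch like the following. The DIR variable and the 100 MB file size are assumptions; note that the Inact_dirty field is specific to 2.4-era kernels and is absent from /proc/meminfo on later kernels, hence the `|| true`.

```shell
#!/bin/sh
# Sketch of the check described in this comment: create a file on the
# export, compare the kernel's Inact_dirty counter before and after,
# then delete the file again. DIR and the size are assumptions.
DIR="${DIR:-/tmp}"
grep Inact_dirty /proc/meminfo || true   # before (2.4-only field)
dd if=/dev/zero of="$DIR/memtest" bs=1M count=100 2>/dev/null
grep Inact_dirty /proc/meminfo || true   # after create
rm -f "$DIR/memtest"
grep Inact_dirty /proc/meminfo || true   # after delete
```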
Comment 3 Pete Zaitcev 2005-05-13 13:24:10 EDT
Stale out, closing. Sorry...
