Red Hat Bugzilla – Bug 218231
NFSd shutdown is very slow due to script bug
Last modified: 2007-11-16 20:14:54 EST
Description of problem:
The /etc/init.d/nfs script tries to shut down nfsd using "killproc nfsd".
Killproc first sends a TERM signal to all nfsd instances, then waits for 4
seconds and finally sends a KILL signal. Since nfsd ignores the TERM signal, the
script will always have to wait 4 seconds, slowing down the shutdown by 4
seconds each time.
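The 4-second floor is easy to demonstrate without nfsd itself. The sketch below (plain POSIX sh, no assumptions beyond a standard userland) mimics killproc's TERM-wait-KILL sequence against a toy background process that ignores TERM, just as nfsd does:

```shell
#!/bin/sh
# Toy reproduction of the delay (no nfsd needed): a background process
# that ignores SIGTERM, shut down the way killproc shuts down nfsd.
sh -c "trap '' TERM; sleep 30" &
pid=$!
sleep 1                        # give the child time to install its trap

start=$(date +%s)
kill -TERM "$pid"              # ignored, just as nfsd ignores TERM
sleep 4                        # killproc's fixed grace period
kill -KILL "$pid" 2>/dev/null  # only this actually stops it
wait "$pid" 2>/dev/null

elapsed=$(( $(date +%s) - start ))
echo "stop took ${elapsed}s"
```

Because the TERM is a no-op, the full grace period is always consumed before the KILL lands, so the measured stop time is never under 4 seconds.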
(I'm not sure about the rationale behind ignoring TERM, but based on some
comments in the equivalent code in FreeBSD, it seems that the designers wanted
to make sure nfsd is the last daemon to be killed, so that NFS loopback mounts
can still be unmounted.)
As the comments in the kernel's fs/nfsd/nfssvc.c explain, the virtual nfsd
kernel processes should be killed by sending a KILL, HUP, INT or QUIT signal to
them (with HUP being the fastest as it will forgo cleaning the exports table).
An even more future-proof and robust solution is to call "rpc.nfsd -- 0", which
sets the number of nfsd processes to 0, thus killing all current instances.
(Internally it is equivalent to sending a HUP to all processes, but much faster
and more robust because it's only a single kernel call.)
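A stop path along these lines could look like the following. This is a hypothetical sketch, not the actual nfs-utils fix; the stop_nfsd function name and the fallback logic are illustrative:

```shell
#!/bin/sh
# Hypothetical replacement for "killproc nfsd" in the init script's stop
# path. stop_nfsd and its fallback are illustrative, not the shipped fix.
stop_nfsd() {
    # Preferred: ask the kernel for zero nfsd threads -- a single call,
    # no signal delivery, no fixed grace period.
    if [ -x /usr/sbin/rpc.nfsd ]; then
        /usr/sbin/rpc.nfsd 0 && return 0
    fi
    # Fallback: HUP the kernel threads directly; per fs/nfsd/nfssvc.c,
    # HUP also skips flushing the export table.
    pids=$(pidof nfsd) || return 0   # no threads running: nothing to do
    kill -HUP $pids
}

stop_nfsd
```

Either branch returns immediately instead of waiting out killproc's grace period, which is what brings the stop time back under a second.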
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Execute "/etc/init.d/nfs stop"
Actual results:
It takes just over 4 seconds for the script to complete.

Expected results:
It should take less than 1 second for the script to complete.
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release. Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products. This request is not yet committed for inclusion in an Update
release.
Fixed in nfs-utils-1.0.6-76
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.