Red Hat Bugzilla – Bug 548967
Bad interactions between NIS and NFS, rpc.mountd dies
Last modified: 2010-12-03 20:28:48 EST
Description of problem: NIS server wants to mount NFS directories from other hosts. Order of services startup runs netfs to mount those directories before ypserv and ypxfr are started. On the host with the drive, the /etc/exports entries are based on definitions in the NIS netgroup map.
SO: During a reboot of the NIS server, the NFS server is unable to resolve the exports list because the NIS service is still unavailable when the mount request comes in. rpc.mountd dies.
Version-Release number of selected component (if applicable):
How reproducible: Quite reproducible, given the NIS and NFS arrangement described above.
Steps to Reproduce:
1. Set up NIS service with ypbind on clients and ypserv and ypxfr on the server, with netgroup definitions referenced in /etc/exports
2. Set up NFS shares to NIS server based on NIS netgroups map
3. Reboot NIS server
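For illustration, the exports arrangement in step 1 might look like the following sketch (the paths and the netgroup name are hypothetical; only the @netgroup syntax is the point):

```shell
# /etc/exports on the NFS server.
# @trusted is resolved through the NIS "netgroup" map, so rpc.mountd
# needs a working NIS lookup to expand it at mount time.
/export/home     @trusted(rw,sync)
/export/scratch  @trusted(rw,sync)
```

If ypserv is not yet running when a client mounts, the @trusted lookup cannot be satisfied, which is the situation this bug describes.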
Actual results: Error messages during NIS server reboot; failure to mount NFS shares; "mount" cannot mount the shares until the remedy is applied; "showmount" against the NFS server gives an error message; ps shows that rpc.mountd is dead.
% showmount adrenaline
rpc mount dump: RPC: Unable to receive; errno = Connection refused
To remedy: restart the NFS service on the NFS host with /etc/init.d/nfs restart, then issue the mount command on the NIS server (= NFS client).
Expected results: NFS directories mounted on NIS server
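The remedy described above can be sketched as follows (the exact mount invocation on the client is an assumption; any form that retries the fstab NFS entries would do):

```shell
# On the NFS server, once NIS is up again: restart NFS so that a fresh
# rpc.mountd starts and can resolve the netgroup-based exports.
/etc/init.d/nfs restart

# On the NIS server (the NFS client): retry the NFS mounts from fstab.
mount -a -t nfs
```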
Additional info: I see two problems:
1) The order of service startup on the NIS server, with S25netfs coming before S26ypserv;
2) Lack of robustness in rpc.mountd: dying is not an appropriate reaction to a failed netgroup lookup.
From /var/log/messages on the NFS server:
Dec 19 12:43:24 yardline kernel: rpc.mountd general protection ip:7f88dc3$
Dec 19 12:43:24 yardline abrtd: Directory 'ccpp-1261244604-1520' creation detec$
Dec 19 12:43:24 yardline abrtd: Lock file '/var/cache/abrt/ccpp-1261244604-1520$
Dec 19 12:43:24 yardline abrt: saved core dump of pid 1520 to /var/cache/abrt/c$
Dec 19 12:43:25 yardline abrtd: Getting local universal unique identification...
Dec 19 12:43:25 yardline abrtd: Crash is in database already
Dec 19 12:43:25 yardline abrtd: Already saved crash, just sending dbus signal
The one NFS share which consistently gets successfully mounted is from a Fedora 9 host with
All my other NFS hosts have been updated to Fedora 12; so this appears to be a new weakness in rpc.mountd.
Would it be possible to make that core available?
Or possibly a stack backtrace?
This message is a reminder that Fedora 12 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 12. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora
'version' of '12'.
Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version prior to Fedora 12's end of life.
Bug Reporter: Thank you for reporting this issue and we are sorry that
we may not be able to fix it before Fedora 12 is end of life. If you
would still like to see this bug fixed and are able to reproduce it
against a later version of Fedora please change the 'version' of this
bug to the applicable version. If you are unable to change the version,
please add a comment here and someone will do it for you.
Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.
The process we are following is described here:
Fedora 12 changed to end-of-life (EOL) status on 2010-12-02. Fedora 12 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.
If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version.
Thank you for reporting this bug and we are sorry it could not be fixed.