Red Hat Bugzilla – Bug 144897
autofs service starts successfully even if autofs mountpoint already in use
Last modified: 2007-11-30 17:07:05 EST
Description of problem:
If one of autofs' mountpoints is already in use (i.e., something is already
mounted there), the init script still reports a successful start. This is
because the daemon function doesn't wait around to check status. It's not
necessarily broken that the daemon starts in spite of one failed mountpoint
acquisition, but the service startup should indicate the failure either way.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. mount any filesystem on a mountpoint autofs is configured to occupy
2. start autofs
Actual results:
Autofs service startup indicates success, but the previously occupied mountpoint
is not acquired. The failure is indicated in the syslog, but not on the console.
Expected results:
Either the service fails to start, or the failure is displayed as a warning when
the service starts.
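For reference, the console-side check boils down to scanning /proc/mounts for the mountpoint. This is only a sketch; the is_mounted helper name is mine, not part of autofs:

```shell
#!/bin/sh
# Sketch: detect whether a path is already occupied by a mount by
# matching the mountpoint column of /proc/mounts exactly.
is_mounted() {
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/mounts
}

# On Linux, / is always listed in /proc/mounts.
if is_mounted /; then
    echo "/ is mounted"
fi
```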
How would you feel if we itemized the failures? It would look
something like this:
# service autofs start
Starting automount: /share - already mounted [WARNING]
/bar - already mounted [WARNING]
[ OK ]
The script would print [ OK ] if at least one mountpoint started up
successfully. If they all failed, it would of course print [FAILED].
When you start automount using the init script, it spawns one copy of the
automount daemon for each mount point. We definitely need to make sure that
each automount process does not return status until it is up and ready to
service requests (or has failed). Once you have that, there is the issue of
what status to return from the script itself when some of the daemons were
started successfully and others have failed. It is this point that I would like
clarification on. I put forth a proposal, and now I'm looking for feedback.
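As a concrete sketch of the proposal (the mountpoints are illustrative, and start_automount is a stand-in for spawning the real automount daemon, here pretending /share and /bar are already occupied):

```shell
#!/bin/sh
# Sketch of the proposed per-mountpoint reporting in the init script.
start_automount() {
    case "$1" in
        /share|/bar) return 1 ;;   # pretend these mountpoints are in use
        *)           return 0 ;;
    esac
}

report_status() {
    ok=0
    for mp in "$@"; do
        if start_automount "$mp"; then
            ok=$((ok + 1))
        else
            echo "$mp - already mounted [WARNING]"
        fi
    done
    # [ OK ] if at least one mountpoint came up, [FAILED] if none did
    if [ "$ok" -gt 0 ]; then
        echo "[ OK ]"
    else
        echo "[FAILED]"
    fi
}

# Prints two WARNING lines, then [ OK ] since /home succeeded.
report_status /share /bar /home
```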
Created attachment 115061 [details]
Keep parent process around until the daemon is ready to service requests.
This patch implements the daemon portion of the code necessary for fixing this
bug. The parent process waits for the daemon to write a status message over a
pipe, and only then does the parent exit. Moreover, the parent will now exit
with a proper status code, indicating whether automount startup was successful.
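The handoff the patch describes can be illustrated in shell with a named pipe (the real daemon uses a pipe between parent and child in C; this is only an analogy):

```shell
#!/bin/sh
# Analogy for the readiness handoff: the backgrounded "daemon" performs
# its setup, then writes a status code over a pipe; the parent blocks
# on the read and returns that status instead of exiting immediately.
wait_for_daemon() {
    fifo="${TMPDIR:-/tmp}/autofs-status.$$"
    mkfifo "$fifo"
    (
        # daemon side: do startup work, then report the result
        sleep 1            # stand-in for mount setup
        echo 0 > "$fifo"   # 0 = up and ready to service requests
    ) &
    read status < "$fifo"  # parent waits here until the daemon reports
    rm -f "$fifo"
    echo "daemon reported status $status"
    return "$status"
}

wait_for_daemon
```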
Created attachment 115062 [details]
Init script changes for reporting the status of the automount daemon at startup.
This patch, along with the previous one, will report the status code returned
by the automount daemon to the console. If any mount was unsuccessful, the
printed status will be WARNING. If all are successful, it's OK, and if all
failed it is FAILED. Any failed mounts will be printed to the console, so the
user doesn't have to look in the logs.
I have proposed the preceding patches to the autofs mailing list and am
waiting for feedback.
I tested these by putting together master maps that had no entries, all
valid entries, all invalid entries, and a mix of valid and invalid
entries. I verified the output in every case.
To test the exit paths after the become_daemon call in autofs, I provided
a master map entry with an invalid map type ("foo") in /etc/auto.master.
The daemon will fail in open_lookup, when trying to dlopen
/usr/lib/autofs/lookup_foo.so. I verified that this did in fact fail,
and that the failure was reported by the init script.
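The failure path comes from how a map type resolves to a lookup module: a bogus type like "foo" yields a module path that dlopen cannot find. A small shell sketch of that resolution (lookup_module_for is my own illustrative name):

```shell
#!/bin/sh
# Sketch: a master-map type "X" resolves to /usr/lib/autofs/lookup_X.so,
# so an invalid type produces a module path that does not exist and
# open_lookup's dlopen fails.
lookup_module_for() {
    echo "/usr/lib/autofs/lookup_$1.so"
}

module=$(lookup_module_for foo)
[ -f "$module" ] || echo "failed to load module $module"
```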
Here is sample output for the following master map:
/dne /etc/auto.dne # auto.dne does not exist
# service autofs start
failed to load map auto.dne [WARNING]
Since auto.net started successfully, the init script printed out a
warning. If no maps load, FAILED is printed. If everything works fine,
then it prints OK as before.
I would appreciate testing results from the customer.
Reminder: I'm still awaiting feedback on the proposed fix. If there is no
testing feedback from the customer, this change may not be incorporated.
This change was built into autofs-4.1.3-150.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.