Bug 144897 - autofs service starts successfully even if autofs mountpoint already in use
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: autofs
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Jeff Moyer
QA Contact: Brock Organ
Blocks: 156320
Reported: 2005-01-12 11:16 EST by David Lehman
Modified: 2007-11-30 17:07 EST
CC List: 3 users

Fixed In Version: RHBA-2005-654
Doc Type: Bug Fix
Last Closed: 2005-09-28 15:09:50 EDT

Attachments
Keep parent process around until the daemon is ready to service requests. (5.27 KB, patch)
2005-06-01 16:20 EDT, Jeff Moyer
Init script changes for reporting the status of the automount daemon at startup. (1.31 KB, patch)
2005-06-01 16:22 EDT, Jeff Moyer

Description David Lehman 2005-01-12 11:16:33 EST
Description of problem:
If one of autofs's mountpoints is already in use (i.e., something is already mounted
there), the init script still reports a successful start. This is because the daemon
function does not wait around to check status. It is not necessarily wrong for the
daemon to start in spite of one failed mountpoint acquisition, but the service
startup should indicate the failure either way.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. mount any filesystem on a mountpoint autofs is configured to occupy
2. start autofs
Actual results:
Autofs service startup indicates success, but the previously occupied mountpoint
is not acquired. This failure is indicated in the syslog, but not on the console.

Expected results:
Either the service fails to start or the failure is displayed as a warning when
the service starts.

Additional info:
Comment 4 Jeff Moyer 2005-01-27 13:21:06 EST
How would you feel if we itemized the failures?  It would look
something like this:

# service autofs start
Starting automount:   /share - already mounted   [WARNING]
                      /bar - already mounted     [WARNING]
                                                 [  OK  ]

The script would print OK if any one mountpoint started up
successfully.  If they all failed, it would of course print out [FAILED].

Comment 6 Jeff Moyer 2005-01-27 15:39:28 EST
When you start automount using the init script, it spawns one copy of the
automount daemon for each mount point.  We definitely need to make sure that
each automount process does not return status until it is up and ready to
service requests (or has failed).  Once you have that, there is the issue of
what status to return from the script itself when some of the daemons were
started successfully and others have failed.  It is this point that I would like
clarification on.  I put forth a proposal, and now I'm looking for feedback.
Comment 17 Jeff Moyer 2005-06-01 16:20:10 EDT
Created attachment 115061 [details]
Keep parent process around until the daemon is ready to service requests.

This patch implements the daemon portion of the code necessary for fixing this
bug.  The parent process waits for the daemon to write a status message over a
pipe, and only then does the parent exit.  Moreover, the parent will now exit
with a proper status code, indicating whether automount startup was successful.
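The handshake described above (the parent blocks until the daemon reports its startup status over a pipe, then exits with that status) can be sketched in shell. This is a minimal illustration, not the actual C code from the patch; the function name, the fifo handling, and the stand-in "daemon" child are all hypothetical:

```shell
#!/bin/sh
# Sketch: parent stays around until the child reports readiness over a FIFO,
# then propagates the child's status as its own return code.
start_daemon() {
    fifo=$(mktemp -u) || return 1
    mkfifo "$fifo" || return 1

    # Stand-in for the forked automount daemon: perform setup, then
    # report a status code (0 = ready) back to the waiting parent.
    (
        # ... daemon initialization (mount setup) would happen here ...
        echo 0 > "$fifo"
    ) &

    # Parent blocks on the pipe until the child writes its status.
    read status < "$fifo"
    rm -f "$fifo"
    return "$status"
}
```

The key property is that the parent cannot return a bogus success code before the daemon has actually finished (or failed) its setup, which is exactly what the original init script got wrong.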
Comment 18 Jeff Moyer 2005-06-01 16:22:42 EDT
Created attachment 115062 [details]
Init script changes for reporting the status of the automount daemon at startup.

This patch, along with the previous one, will report the status code returned
by the automount daemon to the console.  If any mount was unsuccessful, the
printed status will be WARNING.  If all are successful, it's OK, and if all
failed it is FAILED.  Any failed mounts will be printed to the console, so the
user doesn't have to look in the logs.
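The OK/WARNING/FAILED decision described above can be sketched as a small shell function, assuming the per-mountpoint exit codes have already been collected. The function name and argument convention are illustrative, not the init script's actual variables:

```shell
#!/bin/sh
# Sketch: summarize a list of per-mountpoint exit codes (0 = success)
# into the overall status word printed by the init script.
summarize() {
    ok=0; failed=0
    for rc in "$@"; do
        if [ "$rc" -eq 0 ]; then
            ok=$((ok + 1))
        else
            failed=$((failed + 1))
        fi
    done
    if [ "$failed" -eq 0 ]; then
        echo "OK"                # every mount succeeded
    elif [ "$ok" -gt 0 ]; then
        echo "WARNING"           # some, but not all, mounts failed
    else
        echo "FAILED"            # every mount failed
    fi
}
```

For example, `summarize 0 1` would print WARNING, matching the mixed valid/invalid map case tested in the next comment.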
Comment 19 Jeff Moyer 2005-06-01 16:26:55 EDT
I have proposed the patches above to the autofs mailing list and am
waiting for feedback.

I tested these by putting together master maps that had no entries, all
valid entries, all invalid entries, and a mix of valid and invalid
entries.  I verified the output in every case.

To test the exit paths after the become_daemon call in autofs, I provided
a map with type "invalid", by putting a line like so in /etc/auto.master:

  /foo	 invalid:/etc/auto.foo

The daemon will fail in open_lookup, when trying to dlopen
/usr/lib/autofs/lookup_invalid.so.  I verified that this did in fact fail,
and that the failure was reported by the init script.
Here is sample output for the following master map:
/dne    /etc/auto.dne  # auto.dne does not exist
/net    /etc/auto.net

# service autofs start
Starting automount: 
failed to load map auto.dne                                [WARNING]

Since auto.net started successfully, the init script printed out a
warning.  If no maps load, FAILED is printed.  If everything works fine,
then it prints OK as before.

I would appreciate testing results from the customer.
Comment 24 Jeff Moyer 2005-07-07 10:54:39 EDT
Reminder: I'm still awaiting feedback on the proposed fix.  If there is no
testing feedback from the customer, this change may not be incorporated into the
release.
Comment 29 Jeff Moyer 2005-07-28 10:35:38 EDT
This change was built into autofs-4.1.3-150.
Comment 33 Red Hat Bugzilla 2005-09-28 15:09:50 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.