Bug 1120852 - 'systemctl start nfs-lock' fails to detect active rpc.statd instances
Summary: 'systemctl start nfs-lock' fails to detect active rpc.statd instances
Keywords:
Status: CLOSED DUPLICATE of bug 1144440
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: nfs-utils
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Steve Dickson
QA Contact: Filesystem QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-17 21:15 UTC by David Vossel
Modified: 2019-04-16 14:14 UTC (History)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-21 17:24:34 UTC
Target Upstream Version:
Embargoed:



Description David Vossel 2014-07-17 21:15:04 UTC
Description of problem:

If you have nfs-server and nfs-lock disabled at boot (required for a cluster environment) and you then mount an NFS share as a client, rpc.statd magically gets started for us.  This is great, because otherwise the NFSv3 client couldn't perform locking.  The problem is that this conflicts with the nfs-lock systemd unit file.
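
A quick way to see the conflict from outside systemd is to check the registration that statd holds with rpcbind (a minimal check, assuming rpcbind is running; statd registers as RPC program 100024, shown as "status"):

rpcinfo -p | grep -w status
ps aux | grep '[r]pc.statd'

If "status" is registered and an rpc.statd process is up while nfs-lock reports inactive, you are in the state reproduced below.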

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Make sure both nfs-server and nfs-lock are stopped:

systemctl stop nfs-server
systemctl stop nfs-lock

2. Mount an NFS export as an NFSv3 client:
mount -v -o "vers=3" rhel7-alt1:/root/testnfs /root/testmount

3. rpc.statd magically appears, hooray!

ps aux | grep [r]pc.statd
rpcuser   2075  0.0  0.1  44544  1952 ?        Ss   16:36   0:00 rpc.statd --no-notify
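
For reference, the "--no-notify" instance above is what mount.nfs starts via the start-statd helper shipped by nfs-utils, not something the nfs-lock unit launched. Assuming the stock nfs-utils layout, you can confirm that:

cat /usr/sbin/start-statd        # the helper mount.nfs runs when an NFSv3 mount needs statd
rpm -qf /usr/sbin/start-statd    # owned by nfs-utils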

4. Now run 'systemctl status rpc.statd"

systemctl status nfs-lock
nfs-lock.service - NFS file locking service.
   Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; disabled)
   Active: inactive (dead)

Status says nfs-lock is down, but we know rpc.statd is actually up.

Now try to start nfs-lock; it fails.

systemctl start nfs-lock
Job for nfs-lock.service failed. See 'systemctl status nfs-lock.service' and 'journalctl -xn' for details.


Looking at the status, we see that statd detects there's already a statd instance up, so it fails.

systemctl status nfs-lock
nfs-lock.service - NFS file locking service.
   Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; disabled)
   Active: failed (Result: exit-code) since Thu 2014-07-17 17:02:01 EDT; 5min ago
  Process: 2147 ExecStart=/sbin/rpc.statd $STATDARG (code=exited, status=1/FAILURE)
  Process: 2145 ExecStartPre=/usr/libexec/nfs-utils/scripts/nfs-lock.preconfig (code=exited, status=0/SUCCESS)

Jul 17 17:02:01 rhel7-alt2 rpc.statd[2147]: Statd service already running!
Jul 17 17:02:01 rhel7-alt2 systemd[1]: nfs-lock.service: control process exited, code=exited status=1
Jul 17 17:02:01 rhel7-alt2 systemd[1]: Failed to start NFS file locking service..
Jul 17 17:02:01 rhel7-alt2 systemd[1]: Unit nfs-lock.service entered failed state.


Actual results:

The nfs-lock unit file cannot reliably manage the rpc.statd daemon because it is unable to detect that rpc.statd is already running (and handle that situation gracefully).

Expected results:

nfs-lock should be able to manage rpc.statd regardless of whether the daemon was started outside of the systemd unit file.
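
Until the unit (or statd itself) copes with an already-running instance, one possible stopgap is a drop-in that clears any stray statd before ExecStart runs. This is purely an illustrative sketch, not the fix that eventually landed, and the drop-in path and file name are hypothetical; note also that killing a statd that is monitoring live NFSv3 locks throws away its monitor state, which is exactly why a proper fix is needed:

# /etc/systemd/system/nfs-lock.service.d/statd-cleanup.conf  (hypothetical drop-in)
[Service]
# the leading "-" makes a non-zero exit from pkill (no stray statd found) non-fatal
ExecStartPre=-/usr/bin/pkill -x rpc.statd

systemctl daemon-reload
systemctl start nfs-lock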

Additional info:

This is a big deal for us in managing HA NFS with pacemaker. The nfs-lock unit file needs to work reliably.  As simple as this failure is, it results in an unrecoverable situation where the HA NFS server cannot start.  The HA NFS server depends on this unit file to start the locking daemons.
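
For context, the affected HA setups look roughly like this (a minimal sketch; the resource name, group, and shared-info path are illustrative, not taken from this report):

pcs resource create nfs-daemon ocf:heartbeat:nfsserver nfs_shared_infodir=/nfsshare/nfsinfo --group nfsgroup

The nfsserver resource agent relies on the nfs-lock unit to bring up the locking daemons, so a stray rpc.statd that prevents nfs-lock from starting leaves the whole resource group unable to come up.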

Comment 2 Steve Dickson 2014-10-21 17:24:34 UTC
This now works due to bz 1144440
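
For anyone re-verifying on a build with that fix, the reproducer above should now end with nfs-lock starting cleanly (expected behaviour, not output captured from this bug):

mount -v -o "vers=3" rhel7-alt1:/root/testnfs /root/testmount
systemctl start nfs-lock
systemctl is-active nfs-lock    # expected: active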

*** This bug has been marked as a duplicate of bug 1144440 ***

