Bug 1175005 - Cannot mount NFS v3 partition until after reboot
Summary: Cannot mount NFS v3 partition until after reboot
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: nfs-utils
Version: 7.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Steve Dickson
QA Contact: Filesystem QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-12-16 23:22 UTC by Andrew Beekhof
Modified: 2020-12-15 07:32 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-15 07:32:25 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Bugzilla 1224756 (last updated: 2021-01-20 06:05:38 UTC)

Internal Links: 1224756

Description Andrew Beekhof 2014-12-16 23:22:06 UTC
Description of problem:

After installing nfs-utils, NFS v3 partitions cannot be mounted until the machine is rebooted. Mounting with the default options (which negotiates NFSv4) seems to be unaffected.

Version-Release number of selected component (if applicable):

nfs-utils-1.3.0-0.5.el7.x86_64

How reproducible:

100%

Steps to Reproduce:
1. install nfs-utils
2. try to mount an NFS v3 partition

Actual results:

[root@rhos5-swift1 ~]# rpm -qa nfs-utils
[root@rhos5-swift1 ~]# yum install -y nfs-utils
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
 
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.x86_64 1:1.3.0-0.5.el7 will be installed
--> Processing Dependency: gssproxy >= 0.3.0-0 for package: 1:nfs-utils-1.3.0-0.5.el7.x86_64

...snip...

[root@rhos5-swift1 ~]# rpm -qa nfs-utils
nfs-utils-1.3.0-0.5.el7.x86_64
[root@rhos5-swift1 ~]# mount -v -t nfs 192.168.124.1:/srv /srv
mount.nfs: timeout set for Wed Dec 17 15:13:55 2014
mount.nfs: trying text-based options 'vers=4,addr=192.168.124.1,clientaddr=192.168.124.79'
[root@rhos5-swift1 ~]# mount | grep nfs
192.168.124.1:/srv on /srv type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.124.79,local_lock=none,addr=192.168.124.1)
[root@rhos5-swift1 ~]# mount -v -t nfs 192.168.124.1:/srv /srv -o v3
mount.nfs: timeout set for Wed Dec 17 15:16:24 2014
Job for rpc-statd.service failed. See 'systemctl status rpc-statd.service' and 'journalctl -xn' for details.
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified

mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified

[root@rhos5-swift1 ~]# reboot
Connection to rhos5-swift1 closed by remote host.
Connection to rhos5-swift1 closed.

[waiting waiting]

[root@rhos5-lb1 ~]# ssh rhos5-swift1
Last login: Wed Dec 17 15:14:02 2014 from rhos5-lb1.vmnet.lab.bos.redhat.com
[root@rhos5-swift1 ~]# mount -v -t nfs 192.168.124.1:/srv /srv -o v3
mount.nfs: timeout set for Wed Dec 17 15:24:02 2014
mount.nfs: trying text-based options 'v3,addr=192.168.124.1'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.124.1 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.124.1 prog 100005 vers 3 prot UDP port 38902
[root@rhos5-swift1 ~]# uptime
 15:23:02 up 2 min,  1 user,  load average: 0.37, 0.36, 0.15
[root@rhos5-swift1 ~]# mount | grep nfs
192.168.124.1:/srv on /srv type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.124.1,mountvers=3,mountport=38902,mountproto=udp,local_lock=none,addr=192.168.124.1)


Expected results:

File system is mounted in both cases prior to the reboot
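
The mount.nfs errors above hint at a possible stop-gap while statd is broken: keep locks local so rpc.statd is not needed at mount time. A sketch of such an /etc/fstab entry, using the server and export from the transcript (the option names are standard mount.nfs options; this works around the symptom, it does not fix rpc.statd):

```
# /etc/fstab (sketch): mount the export as NFSv3 with local locking,
# so the mount does not depend on rpc.statd being up
192.168.124.1:/srv  /srv  nfs  vers=3,nolock  0 0
```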

Additional info:

[root@rhos5-swift1 ~]# systemctl status rpc-statd.service
rpc-statd.service - NFS status monitor for NFSv2/3 locking.
   Loaded: loaded (/usr/lib/systemd/system/rpc-statd.service; static)
   Active: failed (Result: exit-code) since Wed 2014-12-17 15:14:24 AEDT; 3min 27s ago
  Process: 25551 ExecStart=/usr/sbin/rpc.statd --no-notify $STATDARG (code=exited, status=1/FAILURE)

Dec 17 15:14:24 rhos5-swift1 systemd[1]: Starting NFS status monitor for NFSv2/3 locking....
Dec 17 15:14:24 rhos5-swift1 rpc.statd[25552]: Version 1.3.0 starting
Dec 17 15:14:24 rhos5-swift1 rpc.statd[25552]: Flags: TI-RPC
Dec 17 15:14:24 rhos5-swift1 rpc.statd[25552]: Initializing NSM state
Dec 17 15:14:24 rhos5-swift1 systemd[1]: rpc-statd.service: control process exited, code=exited status=1
Dec 17 15:14:24 rhos5-swift1 systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
Dec 17 15:14:24 rhos5-swift1 systemd[1]: Unit rpc-statd.service entered failed state.

Comment 2 Steve Dickson 2015-01-05 12:19:54 UTC
(In reply to Andrew Beekhof from comment #0)
> Additional info:
> 
> [root@rhos5-swift1 ~]# systemctl status rpc-statd.service
> rpc-statd.service - NFS status monitor for NFSv2/3 locking.
>    Loaded: loaded (/usr/lib/systemd/system/rpc-statd.service; static)
>    Active: failed (Result: exit-code) since Wed 2014-12-17 15:14:24 AEDT;
> 3min 27s ago
>   Process: 25551 ExecStart=/usr/sbin/rpc.statd --no-notify $STATDARG
> (code=exited, status=1/FAILURE)
> 
> Dec 17 15:14:24 rhos5-swift1 systemd[1]: Starting NFS status monitor for
> NFSv2/3 locking....
> Dec 17 15:14:24 rhos5-swift1 rpc.statd[25552]: Version 1.3.0 starting
> Dec 17 15:14:24 rhos5-swift1 rpc.statd[25552]: Flags: TI-RPC
> Dec 17 15:14:24 rhos5-swift1 rpc.statd[25552]: Initializing NSM state
> Dec 17 15:14:24 rhos5-swift1 systemd[1]: rpc-statd.service: control process
> exited, code=exited status=1
> Dec 17 15:14:24 rhos5-swift1 systemd[1]: Failed to start NFS status monitor
> for NFSv2/3 locking..
> Dec 17 15:14:24 rhos5-swift1 systemd[1]: Unit rpc-statd.service entered
> failed state.

Were there any more details in /var/log/messages as to why rpc.statd
did not start up after the install?

Comment 3 Andrew Beekhof 2015-01-06 06:21:55 UTC
nada

Comment 4 Andrew Beekhof 2015-01-09 11:29:57 UTC
I checked again and there was a little more. Hope it helps:

Jan 10 07:31:40 localhost systemd: Starting NFS status monitor for NFSv2/3 locking....
Jan 10 07:31:40 localhost rpc.statd[24956]: Version 1.3.0 starting
Jan 10 07:31:40 localhost rpc.statd[24956]: Flags: TI-RPC
Jan 10 07:31:40 localhost rpc.statd[24956]: failed to create RPC listeners, exiting
Jan 10 07:31:40 localhost systemd: rpc-statd.service: control process exited, code=exited status=1
Jan 10 07:31:40 localhost systemd: Failed to start NFS status monitor for NFSv2/3 locking..
Jan 10 07:31:40 localhost systemd: Unit rpc-statd.service entered failed state.
Jan 10 07:31:40 localhost rpc.statd[24958]: Version 1.3.0 starting
Jan 10 07:31:40 localhost rpc.statd[24958]: Flags: TI-RPC
Jan 10 07:31:40 localhost rpc.statd[24958]: failed to create RPC listeners, exiting

Comment 5 Steve Dickson 2015-01-10 18:29:43 UTC
(In reply to Andrew Beekhof from comment #4)
> I checked again and there was a little more. Hope it helps:
> 
> Jan 10 07:31:40 localhost systemd: Starting NFS status monitor for NFSv2/3
> locking....
> Jan 10 07:31:40 localhost rpc.statd[24956]: Version 1.3.0 starting
> Jan 10 07:31:40 localhost rpc.statd[24956]: Flags: TI-RPC
> Jan 10 07:31:40 localhost rpc.statd[24956]: failed to create RPC listeners,
> exiting
> Jan 10 07:31:40 localhost systemd: rpc-statd.service: control process
> exited, code=exited status=1
> Jan 10 07:31:40 localhost systemd: Failed to start NFS status monitor for
> NFSv2/3 locking..
> Jan 10 07:31:40 localhost systemd: Unit rpc-statd.service entered failed
> state.
> Jan 10 07:31:40 localhost rpc.statd[24958]: Version 1.3.0 starting
> Jan 10 07:31:40 localhost rpc.statd[24958]: Flags: TI-RPC
> Jan 10 07:31:40 localhost rpc.statd[24958]: failed to create RPC listeners,
> exiting
This means statd could not create a UDP or TCP socket or it could not
register with rpcbind. 

Looking at the rpc.statd systemd service file, the following After= exists:
    After=network.target nss-lookup.target rpcbind.target

I wonder if this After= should have rpcbind.service instead:
    After=network.target nss-lookup.target rpcbind.service
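
The hypothesis above can be tried without editing the packaged unit file, via a systemd drop-in. A sketch (the drop-in directory and file name are illustrative; only the After= line comes from the comment above):

```
# /etc/systemd/system/rpc-statd.service.d/order.conf (illustrative path)
# Override the packaged ordering: wait for the rpcbind *service*,
# not rpcbind.target, before starting rpc.statd.
[Unit]
After=network.target nss-lookup.target rpcbind.service
```

After creating the drop-in, run `systemctl daemon-reload` and retry the v3 mount.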

Comment 6 Andrew Beekhof 2015-01-12 04:29:49 UTC
Doesn't seem to help.
I experimented by starting things by hand:

[root@rhos6-cinder1 ~]# systemctl start rpcbind.service
[root@rhos6-cinder1 ~]# systemctl status rpcbind.service
rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; static)
   Active: active (running) since Tue 2015-01-13 00:25:13 AEDT; 23s ago
  Process: 2673 ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS} (code=exited, status=0/SUCCESS)
 Main PID: 2674 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─2674 /sbin/rpcbind -w

Jan 13 00:25:13 rhos6-cinder1.vmnet.lab.bos.redhat.com systemd[1]: Started RPC bind service.
Jan 13 00:25:13 rhos6-cinder1.vmnet.lab.bos.redhat.com rpcbind[2674]: Cannot open '/var/lib/rpcbind/rpcbind.xdr' file for reading, errno 2 (No such file or directory)
Jan 13 00:25:13 rhos6-cinder1.vmnet.lab.bos.redhat.com rpcbind[2674]: Cannot open '/var/lib/rpcbind/portmap.xdr' file for reading, errno 2 (No such file or directory)

this looks suspicious, yet still allows rpc-statd to start

[root@rhos6-cinder1 ~]# systemctl start rpc-statd.service
[root@rhos6-cinder1 ~]# systemctl status rpc-statd.service
rpc-statd.service - NFS status monitor for NFSv2/3 locking.
   Loaded: loaded (/usr/lib/systemd/system/rpc-statd.service; static)
   Active: active (running) since Tue 2015-01-13 00:25:26 AEDT; 26s ago
  Process: 2677 ExecStart=/usr/sbin/rpc.statd --no-notify $STATDARGS (code=exited, status=0/SUCCESS)
 Main PID: 2678 (rpc.statd)
   CGroup: /system.slice/rpc-statd.service
           └─2678 /usr/sbin/rpc.statd --no-notify

Jan 13 00:25:26 rhos6-cinder1.vmnet.lab.bos.redhat.com rpc.statd[2678]: Version 1.3.0 starting
Jan 13 00:25:26 rhos6-cinder1.vmnet.lab.bos.redhat.com rpc.statd[2678]: Flags: TI-RPC
Jan 13 00:25:26 rhos6-cinder1.vmnet.lab.bos.redhat.com systemd[1]: Started NFS status monitor for NFSv2/3 locking..


and now magically it works (or fails but only due to a config error on my part):

2015-01-13 00:30:40.294 1861 ERROR cinder.volume.drivers.remotefs [-] Exception during mounting NFS mount failed for share 192.168.124.1:/srv/rhos-6.0/cinder. Error - {'nfs': u"Unexpected error while running command.\nCommand: sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t nfs -o v3 192.168.124.1:/srv/rhos-6.0/cinder /var/lib/cinder/mnt/61f52c57a53dbf069df27650ab88800c\nExit code: 32\nStdout: u''\nStderr: u'mount.nfs: mounting 192.168.124.1:/srv/rhos-6.0/cinder failed, reason given by server: No such file or directory\\n'"}

I wonder if `systemctl daemon-reload` wasn't enough for the new option to take effect

Comment 8 Steve Dickson 2015-07-29 10:02:52 UTC
Could you please retest with the latest nfs-utils bits?

The systemd scripts have been reworked, so I'm thinking
this is no longer a problem.

Comment 9 Andrew Beekhof 2015-08-11 00:26:53 UTC
Isn't that what reproducers are for?

Comment 10 Fomalhaut 2015-09-29 18:27:58 UTC
I hit the same problem. However, it works if, after
# systemctl start rpcbind.service
I then run
# systemctl restart rpcbind.service
After that, mounting proceeds without problems.

Comment 11 Fomalhaut 2015-09-29 18:29:51 UTC
I forgot to mention that I hit this problem on Fedora 22 x86_64:
nfs-utils-1.3.2-9.fc22.x86_64

Comment 12 Chris Routh 2017-10-04 23:15:21 UTC
I'm seeing this is still a problem on CentOS 7.4 as of today. I've spun up a fresh VM and the problem is consistent. It only goes away after reboot.

Manually starting the services in order, rpc-statd always fails, even with rpcbind started and even after restarting it.

For some reason only a full system reboot fixes the issue.

Comment 14 RHEL Program Management 2020-12-15 07:32:25 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

