Bug 799990 - Systemd doesn't start nfs-mounts at startup
Status: CLOSED DUPLICATE of bug 786050
Product: Fedora
Classification: Fedora
Component: systemd
Version: 16
Hardware: x86_64 Linux
Priority: unspecified
Severity: unspecified
Assigned To: systemd-maint
QA Contact: Fedora Extras Quality Assurance
Reported: 2012-03-05 09:53 EST by aiscape
Modified: 2012-03-07 08:51 EST
CC: 6 users

Last Closed: 2012-03-07 08:51:44 EST

Attachments
systemd mount file from /lib/systemd/system(/sysinit.target.wants) (507 bytes, text/plain) - 2012-03-05 09:53 EST, aiscape
output of dmesg after startup (122.31 KB, text/plain) - 2012-03-06 11:45 EST, aiscape

Description aiscape 2012-03-05 09:53:27 EST
Created attachment 567653 [details]
systemd mount file from /lib/systemd/system(/sysinit.target.wants)

Description of problem:
Systemd doesn't start nfs-mounts at startup. After investigating a bit, the problem could be that rpcbind.service doesn't run even though it's an enabled service (status 'dead' after first login).

Version-Release number of selected component (if applicable):
37-13.fc16

How reproducible:
I really don't know

Steps to Reproduce:
1.
2.
3.
  
Actual results:
[root@tracker ~]# systemctl status data.mount 
data.mount - Mount Dynardo /data Directory
	  Loaded: loaded (/lib/systemd/system/data.mount; static)
	  Active: failed since Mon, 05 Mar 2012 15:41:50 +0100; 5s ago
	   Where: /data
	    What: 192.168.1.1:/data
	 Process: 1090 ExecMount=/bin/mount 192.168.1.1:/data /data -t nfs -o defaults,rw,soft,intr,bg (code=exited, status=32)
	  CGroup: name=systemd:/system/data.mount

[root@tracker ~]# /bin/mount 192.168.1.1:/data /data -t nfs -o defaults,rw,soft,intr,bg
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified

Expected results:
The NFS shares are mounted correctly.

Additional info:
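For reference, a minimal sketch of what the attached data.mount unit presumably looks like, reconstructed from the systemctl status output above (the real file is attachment 567653):

# reconstructed sketch, not the attached file itself
[Unit]
Description=Mount Dynardo /data Directory

[Mount]
What=192.168.1.1:/data
Where=/data
Type=nfs
Options=defaults,rw,soft,intr,bg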
Comment 1 Michal Schmidt 2012-03-05 10:35:17 EST
Does it work any better if you use a classical /etc/fstab line for the mount as opposed to the unit file?
Comment 2 aiscape 2012-03-05 11:57:32 EST
(In reply to comment #1)
> Does it work any better if you use a classical /etc/fstab line for the mount as
> opposed to the unit file?

It does, but only if it is one single nfs-mount. In real life I need to mount 5 nfs exports, which is obviously done in parallel then, with the effect that only one of them is effectively mounted (same nfs-server, centos5-64). That's why I switched to systemd, where I can apply an iterative approach.
Comment 3 aiscape 2012-03-06 04:20:11 EST
Forgot to mention that in systemd-36-3.fc16.x86_64 everything works like a charm. As far as I remember, the problems began with release systemd-37-3.fc16.x86_64.
Comment 4 Michal Schmidt 2012-03-06 06:54:05 EST
(In reply to comment #0)
> After investigating a bit, the problem could be that rpcbind.service doesn't
> run even though it's an enabled service (status 'dead' after first login)

Let's focus on this part.
Please paste the full output from "systemctl status rpcbind.service".
Additionally, boot with "systemd.log_level=debug systemd.log_target=kmsg log_buf_len=1M" and attach the output of the "dmesg" command.
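One way to pass these parameters, as a sketch assuming the GRUB 2 setup F16 ships with, is to press 'e' at the boot menu and append them to the line starting with "linux" for a single boot, or to make them persistent:

# hypothetical persistent variant: add the parameters to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the config
grub2-mkconfig -o /boot/grub2/grub.cfg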
Comment 5 aiscape 2012-03-06 11:43:40 EST
The output of "systemctl status rpcbind.service" is

rpcbind.service - RPC bind service
	  Loaded: loaded (/lib/systemd/system/rpcbind.service; enabled)
	  Active: inactive (dead)
	  CGroup: name=systemd:/system/rpcbind.service

Find the output of dmesg attached.
Comment 6 aiscape 2012-03-06 11:45:18 EST
Created attachment 568005 [details]
output of dmesg after startup

Attachment mentioned in comment #5:
https://bugzilla.redhat.com/show_bug.cgi?id=799990#c5
Comment 7 Michal Schmidt 2012-03-07 07:00:51 EST
There is an ordering dependency loop:

[    7.297344] systemd[1]: Found ordering cycle on rpcbind.service/start
[    7.297349] systemd[1]: Walked on cycle path to rpcbind.socket/start
[    7.297353] systemd[1]: Walked on cycle path to sysinit.target/start
[    7.297357] systemd[1]: Walked on cycle path to data.mount/start
[    7.297360] systemd[1]: Walked on cycle path to rpcbind.service/start
[    7.297365] systemd[1]: Breaking ordering cycle by deleting job rpcbind.socket/start
[    7.297370] systemd[1]: Deleting job rpcbind.service/start as dependency of job rpcbind.socket/start
[    7.297375] systemd[1]: Deleting job nfs-lock.service/start as dependency of job rpcbind.service/start

I don't see where the dependency between sysinit.target and data.mount comes from. Did you link to data.mount from sysinit.target.wants/?

I recommend getting rid of the native data.mount unit and using /etc/fstab instead, at least for now. Since fstab is the usual way of specifying mount points, we need to make sure it works.
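A minimal sketch of the fstab line that would replace data.mount, assuming the same source, mount point and options as in the failed mount command above:

# sketch only; options copied from the data.mount status output
192.168.1.1:/data   /data   nfs   defaults,rw,soft,intr,bg   0 0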

I do not understand what you meant by this (in comment #2):
> In real life I need to mount 5 nfs exports, which is obviously done in
> parallel then, with the effect that only one of them is effectively mounted
> (same nfs-server, centos5-64). That's why I switched to systemd, where I can
> apply an iterative approach.

Could you paste the actual fstab and describe what behaviour you're observing? What gets mounted and what does not?
Comment 8 aiscape 2012-03-07 08:01:30 EST
(In reply to comment #7)
> I don't see where the dependency between sysinit.target and data.mount comes
> from. Did you link to data.mount from sysinit.target.wants/?

Yes, I did. Do you consider this a misuse?

> I recommend getting rid of the native data.mount unit and using /etc/fstab
> instead, at least for now. Since fstab is the usual way of specifying mount
> points, we need to make sure it works

Well

> Could you paste the actual fstab and describe what behaviour you're observing?
> What gets mounted and what does not?

The content of the fstab on a regular f16 workstation:
192.168.1.1:/users     /users                 nfs     defaults,bg 0 0
192.168.1.1:/data      /data                  nfs     defaults,bg 0 0
192.168.1.1:/vmware    /home/vmware           nfs     defaults,bg 0 0
192.168.1.1:/video     /home/video            nfs     defaults,bg 0 0

It never happens that all mounts are usable after start-up, and it's completely random which of them are. This behavior drove me into the arms of systemd.
Comment 9 Michal Schmidt 2012-03-07 08:51:44 EST
(In reply to comment #8)
> Yes, I do. Do you consider this a misuse?

It's wrong because it causes the ordering cycle.
When systemd parses NFS mounts in fstab, it makes them get pulled in by remote-fs.target.
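As a minimal sketch, assuming a native mount unit is kept at all, the unit could instead be pulled in from remote-fs.target, which avoids the cycle through sysinit.target:

# hypothetical: want data.mount from remote-fs.target instead of sysinit.target
ln -s /lib/systemd/system/data.mount /etc/systemd/system/remote-fs.target.wants/data.mount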

> The content of the fstab on a regular f16 workstation
> 192.168.1.1:/users     /users                 nfs     defaults,bg 0 0
> 192.168.1.1:/data      /data                  nfs     defaults,bg 0 0
> 192.168.1.1:/vmware    /home/vmware           nfs     defaults,bg 0 0
> 192.168.1.1:/video     /home/video            nfs     defaults,bg 0 0
> 
> It never happens that all mounts are usable after start-up, and it's completely
> random which of them are. This behavior drove me into the arms of systemd.

I think it's caused by the problems described here:
https://bugzilla.redhat.com/show_bug.cgi?id=786050#c11

*** This bug has been marked as a duplicate of bug 786050 ***
