Created attachment 567653 [details]
systemd mount file from /lib/systemd/system(/sysinit.target.wants)

Description of problem:
systemd doesn't start NFS mounts at startup. After investigating a bit, the
problem could be that rpcbind.service doesn't run even though it is an
enabled service (status "dead" after first login).

Version-Release number of selected component (if applicable):
37-13.fc16

How reproducible:
I really don't know.

Steps to Reproduce:
1.
2.
3.

Actual results:
[root@tracker ~]# systemctl status data.mount
data.mount - Mount Dynardo /data Directory
          Loaded: loaded (/lib/systemd/system/data.mount; static)
          Active: failed since Mon, 05 Mar 2012 15:41:50 +0100; 5s ago
           Where: /data
            What: 192.168.1.1:/data
         Process: 1090 ExecMount=/bin/mount 192.168.1.1:/data /data -t nfs -o defaults,rw,soft,intr,bg (code=exited, status=32)
          CGroup: name=systemd:/system/data.mount

[root@tracker ~]# /bin/mount 192.168.1.1:/data /data -t nfs -o defaults,rw,soft,intr,bg
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified

Expected results:
The NFS exports are mounted correctly.

Additional info:
Does it work any better if you use a classical /etc/fstab line for the mount as opposed to the unit file?
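For reference, such a line would just reuse the server, mount point, and
options already shown in comment #0 — a minimal sketch, with nothing assumed
beyond those values:

    192.168.1.1:/data  /data  nfs  defaults,rw,soft,intr,bg  0 0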
(In reply to comment #1)
> Does it work any better if you use a classical /etc/fstab line for the
> mount as opposed to the unit file?

It does, but only if it is one single NFS mount. In real life I need to
mount 5 NFS exports, which is obviously done in parallel then, with the
effect that only one of them is effectively mounted (same NFS server,
CentOS 5 64-bit). That is why I switched to systemd, where I can apply an
iterative approach.
Forgot to mention that with systemd-36-3.fc16.x86_64 everything works like a
charm. As far as I remember, the problems began with release
systemd-37-3.fc16.x86_64.
(In reply to comment #0)
> After investigating a bit, the problem could be that rpcbind.service
> doesn't run even though it is an enabled service (status "dead" after
> first login).

Let's focus on this part. Please paste the full output of
"systemctl status rpcbind.service".

Additionally, boot with
"systemd.log_level=debug systemd.log_target=kmsg log_buf_len=1M" and attach
the output of the "dmesg" command.
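In case it helps: those debug switches go on the kernel command line. A
one-off way to apply them (assuming the stock F16 GRUB 2 setup; the menu
layout may differ on your machine):

    # At the GRUB menu, press 'e' and append to the line starting with "linux":
    #   systemd.log_level=debug systemd.log_target=kmsg log_buf_len=1M
    # then boot with Ctrl-x. After logging in, capture the log:
    dmesg > dmesg.txt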
The output of "systemctl status rpcbind.service" is:

rpcbind.service - RPC bind service
          Loaded: loaded (/lib/systemd/system/rpcbind.service; enabled)
          Active: inactive (dead)
          CGroup: name=systemd:/system/rpcbind.service

Find the output of dmesg attached.
Created attachment 568005 [details]
output of dmesg after startup

Attachment mentioned in comment #5:
https://bugzilla.redhat.com/show_bug.cgi?id=799990#c5
There is an ordering dependency loop:

[    7.297344] systemd[1]: Found ordering cycle on rpcbind.service/start
[    7.297349] systemd[1]: Walked on cycle path to rpcbind.socket/start
[    7.297353] systemd[1]: Walked on cycle path to sysinit.target/start
[    7.297357] systemd[1]: Walked on cycle path to data.mount/start
[    7.297360] systemd[1]: Walked on cycle path to rpcbind.service/start
[    7.297365] systemd[1]: Breaking ordering cycle by deleting job rpcbind.socket/start
[    7.297370] systemd[1]: Deleting job rpcbind.service/start as dependency of job rpcbind.socket/start
[    7.297375] systemd[1]: Deleting job nfs-lock.service/start as dependency of job rpcbind.service/start

I don't see where the dependency between sysinit.target and data.mount comes
from. Did you link to data.mount from sysinit.target.wants/ ?

I recommend getting rid of the native data.mount unit and using /etc/fstab
instead, at least for now. Since fstab is the usual way of specifying mount
points, we need to make sure it works.

I do not understand what you meant by this (in comment #2):
> In real life I need to mount 5 NFS exports, which is obviously done in
> parallel then, with the effect that only one of them is effectively
> mounted (same NFS server, CentOS 5 64-bit). That is why I switched to
> systemd, where I can apply an iterative approach.

Could you paste the actual fstab and describe what behaviour you're
observing? What gets mounted and what does not?
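If you do want a native unit later, the cycle would be avoided by pulling
the mount in from remote-fs.target instead of sysinit.target. A hypothetical
sketch, reconstructed only from the values visible in comment #0 (the
ordering dependencies and the install target are my suggestion, not the
content of the attached file):

    # /etc/systemd/system/data.mount (must be named after Where=, i.e. /data)
    [Unit]
    Description=Mount Dynardo /data Directory
    After=network.target rpcbind.service

    [Mount]
    What=192.168.1.1:/data
    Where=/data
    Type=nfs
    Options=defaults,rw,soft,intr,bg

    [Install]
    WantedBy=remote-fs.target

Enabling it with "systemctl enable data.mount" would then create the link
under remote-fs.target.wants/ rather than sysinit.target.wants/.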
(In reply to comment #7)
> I don't see where the dependency between sysinit.target and data.mount
> comes from. Did you link to data.mount from sysinit.target.wants/ ?

Yes, I did. Do you consider this a misuse?

> I recommend getting rid of the native data.mount unit and using /etc/fstab
> instead, at least for now. Since fstab is the usual way of specifying
> mount points, we need to make sure it works.

Well.

> Could you paste the actual fstab and describe what behaviour you're
> observing? What gets mounted and what does not?

The content of the fstab on a regular F16 workstation:

192.168.1.1:/users   /users        nfs  defaults,bg  0 0
192.168.1.1:/data    /data         nfs  defaults,bg  0 0
192.168.1.1:/vmware  /home/vmware  nfs  defaults,bg  0 0
192.168.1.1:/video   /home/video   nfs  defaults,bg  0 0

It never happens that all mounts are usable after start-up, and it is
completely random which of them are. This behavior drove me into the arms of
systemd.
(In reply to comment #8)
> Yes, I did. Do you consider this a misuse?

It is wrong because it causes the ordering cycle. When systemd parses NFS
mounts in fstab, it pulls them in from remote-fs.target.

> The content of the fstab on a regular F16 workstation:
> 192.168.1.1:/users   /users        nfs  defaults,bg  0 0
> 192.168.1.1:/data    /data         nfs  defaults,bg  0 0
> 192.168.1.1:/vmware  /home/vmware  nfs  defaults,bg  0 0
> 192.168.1.1:/video   /home/video   nfs  defaults,bg  0 0
>
> It never happens that all mounts are usable after start-up, and it is
> completely random which of them are. This behavior drove me into the arms
> of systemd.

I think it is caused by the problems described here:
https://bugzilla.redhat.com/show_bug.cgi?id=786050#c11

*** This bug has been marked as a duplicate of bug 786050 ***
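For later readers: you can check how the units generated from fstab are wired
with systemctl — a small sketch (the property names are standard systemctl
output, but the exact formatting depends on the systemd version):

    # Confirm the generated unit is ordered and pulled in via remote-fs.target:
    systemctl show -p After -p WantedBy data.mount
    systemctl show -p Wants remote-fs.target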