| Summary: | F15, diskless node, systemctl start tmpfs.device timeout | ||
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | IBM Bug Proxy <bugproxy> |
| Component: | systemd | Assignee: | Lennart Poettering <lpoetter> |
| Status: | CLOSED NOTABUG | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 15 | CC: | harald, jkachuck, johannbg, kay, lpoetter, metherid, mschmidt, notting, plautrba, wgomerin |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | All | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2011-10-14 14:29:33 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
There should be no such thing as 'tmpfs.device'. What's in /etc/fstab?

------- Comment From clnperez.com 2011-10-14 10:02 EDT -------

It was caused by an incorrect setting in fstab. It has been fixed. Closing this bug. Thanks!
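The offending /etc/fstab entry is never quoted in this bug, so the fragment below is only an illustrative sketch of the kind of line that makes systemd wait for a block device literally named "tmpfs" (and hence synthesize a tmpfs.device job). The /var/tmp mount point is taken from the failing var-tmp.mount job in the log output below; the non-zero fsck pass number as the trigger is an assumption, not something confirmed in the report.

```
# HYPOTHETICAL /etc/fstab fragment -- the actual entry is not quoted in this bug.
# A line like the first one asks for an fsck pass on the pseudo-device "tmpfs",
# which makes systemd wait for a block device called "tmpfs" that never appears,
# so the generated tmpfs.device job times out:

tmpfs   /var/tmp   tmpfs   defaults   1 2    # problematic: non-zero fsck pass

# A tmpfs entry should not request dump or fsck; the last two fields stay 0 0:

tmpfs   /var/tmp   tmpfs   defaults   0 0    # correct
```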
I made a diskless image against Fedora 15. During boot it displayed the following error messages and dropped into emergency mode.

================================
The error messages:

Starting Relabel all filesystems, if necessary aborted because a dependency failed.
[ 107.607155] systemd[1]: Job fedora-autorelabel.service/start failed with result 'dependency'.
Starting Mark the need to relabel after reboot aborted because a dependency failed.
[ 107.625156] systemd[1]: Job fedora-autorelabel-mark.service/start failed with result 'dependency'.
[ 107.634580] systemd[1]: Job local-fs.target/start failed with result 'dependency'.
[ 107.642615] systemd[1]: Triggering OnFailure= dependencies of local-fs.target.
[ 107.650738] systemd[1]: Job var-tmp.mount/start failed with result 'dependency'.
[ 107.658580] systemd[1]: Job fsck/start failed with result 'dependency'.
[ 107.666876] systemd[1]: Job tmpfs.device/start failed with result 'timeout'.

Welcome to emergency mode. Use "systemctl default" or ^D to activate default mode.
================================

I could then log on to the node, but when I ran 'service sshd start' to start sshd, it displayed similar error messages:

================================
[ 8545.884404] udev[2872]: starting version 167
[ 8635.560123] systemd[1]: Job tmpfs.device/start timed out.
[ 8635.565867] systemd[1]: Job fedora-autorelabel.service/start failed with result 'dependency'.
[ 8635.574812] systemd[1]: Job fedora-autorelabel-mark.service/start failed with result 'dependency'.
[ 8635.584217] systemd[1]: Job local-fs.target/start failed with result 'dependency'.
[ 8635.592203] systemd[1]: Triggering OnFailure= dependencies of local-fs.target.
[ 8635.600311] systemd[1]: Job var-tmp.mount/start failed with result 'dependency'.
[ 8635.608102] systemd[1]: Job fsck/start failed with result 'dependency'.
[ 8635.616355] systemd[1]: Job tmpfs.device/start failed with result 'timeout'.

Welcome to emergency mode. Use "systemctl default" or ^D to activate default mode.
================================

Then I tried to start tmpfs.device directly:

================================
# systemctl start tmpfs.device
[432039.543362] systemd[1]: Job tmpfs.device/start timed out.
[432039.549335] systemd: Job timed out.
================================

But according to the output of 'systemctl status tmpfs.device', the unit is loaded:

================================
# systemctl status tmpfs.device
tmpfs.device
          Loaded: loaded
          Active: inactive (dead)
================================

I also tried to start tmpfs.device on a Fedora 15 node with a local disk; it timed out there as well:

================================
# systemctl start tmpfs.device
Job timed out.
================================

Even so, the tmpfs filesystems themselves appeared to work fine.

On the diskless node I found a workaround for starting sshd: I commented out the line '. /etc/rc.d/init.d/functions' in the sshd init script /etc/init.d/sshd. After that, sshd started successfully.
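To tie the maintainer's question and the reporter's workaround together, here is a small sketch of the checks and the edit involved. The sed invocation is only one way to apply the edit the reporter describes making by hand, and it addresses only the sshd symptom, not the underlying fstab entry; none of these commands are quoted from the report itself.

```
# Check what /etc/fstab says about tmpfs mounts (the misconfigured entry the
# maintainer asked about) and what state systemd thinks the affected units are in:
grep tmpfs /etc/fstab
systemctl status var-tmp.mount
systemctl status tmpfs.device

# Reporter's workaround for starting sshd on the diskless node: comment out the
# sourcing of the SysV init functions in the sshd init script. This sed command is
# just one way to apply that edit (the report describes editing the file by hand):
sed -i 's|^\. /etc/rc.d/init.d/functions|#&|' /etc/init.d/sshd
```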