Bug 785418
| Summary: | systemd drops into emergency mode if booted inside libvirt LXC container | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | Robin Green <greenrd> |
| Component: | systemd | Assignee: | systemd-maint |
| Status: | CLOSED NOTABUG | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 17 | CC: | johannbg, lpoetter, metherid, mschmidt, notting, plautrba, systemd-maint |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-09-14 08:45:45 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (Robin Green, 2012-01-28 20:48:00 UTC)
Can you boot your LXC tree with "systemd-nspawn"? Do you get any logging output when you boot like this? What if you add systemd.log_level=debug to the command line of init?

systemd-nspawn didn't work either. The problem was that step 1 (install onto disk image) had set up /etc/fstab with some partition UUIDs in it, and systemd couldn't find the disks mentioned in /etc/fstab when I tried to boot it in a container, because by that point I had moved the root filesystem from the disk image to a directory in the host's /home (I did not mention that above, sorry). After I fixed this problem by emptying /etc/fstab, killing systemd, and retrying systemd-nspawn, it booted up. A clearer error message would have been nice; I had no clue that the problem was due to /etc/fstab. But basically it was my fault.

Hmm, there should have been some message that some device unit never became available. Can you paste the output that was generated? To improve what is printed, I'd first like to see what is currently being printed for you.

These are the relevant lines:

    (src/job.c:730) Job dev-disk-by\x2duuid-5944a1ed\x2dbfdd\x2d4d67\x2d8de4\x2d963291a9db1b.device/start timed out.
    (src/job.c:556) Job dev-disk-by\x2duuid-5944a1ed\x2dbfdd\x2d4d67\x2d8de4\x2d963291a9db1b.device/start finished, result=timeout
    (src/job.c:556) Job boot.mount/start finished, result=dependency
    Dependency failed. Aborted start of /boot [ ABORT]
    (src/job.c:556) Job local-fs.target/start finished, result=dependency
    (src/job.c:556) Job fedora-autorelabel-mark.service/start finished, result=dependency
    Dependency failed. Aborted start of Mark the need to...el after reboot [ ABORT]
    (src/job.c:623) Job fedora-autorelabel-mark.service/start failed with result 'dependency'.
    (src/job.c:556) Job fedora-autorelabel.service/start finished, result=dependency
    Dependency failed. Aborted start of Relabel all file...s, if necessary [ ABORT]
    (src/job.c:623) Job fedora-autorelabel.service/start failed with result 'dependency'.
    (src/job.c:623) Job local-fs.target/start failed with result 'dependency'.
    (src/unit.c:1161) Triggering OnFailure= dependencies of local-fs.target.

It might be behaving differently now, though, than when I first reported this bug, because I've upgraded since then.

systemd-44-12.fc17 fixed many bugs related to containers. Are you still seeing the bug with this version?

We don't start udev in the container, hence this will necessarily fail to start. Please remove fstab in your container, as we do not support virtualized devices in the container.
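The resolution above boils down to emptying /etc/fstab inside the container tree before booting it, so that systemd in the container never waits for host disk UUIDs it cannot see. A minimal sketch of that cleanup step; the helper name and the container root path are illustrative, not from the report:

```shell
#!/bin/sh
# Empty the container's /etc/fstab, keeping a backup, so systemd
# inside the container stops timing out on host-only device units.
clear_container_fstab() {
    root="$1"
    [ -f "$root/etc/fstab" ] || return 0       # no fstab, nothing to do
    cp "$root/etc/fstab" "$root/etc/fstab.bak" # keep the old entries around
    : > "$root/etc/fstab"                      # truncate to zero length
}

# Typical use (hypothetical path; adjust to your layout), then boot the
# tree with verbose logging to surface any remaining unit failures:
#   clear_container_fstab /var/lib/container/fedora
#   systemd-nspawn -D /var/lib/container/fedora -b systemd.log_level=debug
```

Truncating rather than deleting the file keeps the path present, while the `.bak` copy preserves the UUID entries in case the tree is ever moved back onto a real disk image.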