Bug 785418 - systemd drops into emergency mode if booted inside libvirt LXC container
Summary: systemd drops into emergency mode if booted inside libvirt LXC container
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: systemd
Version: 17
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: systemd-maint
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-01-28 20:48 UTC by Robin Green
Modified: 2012-09-14 08:45 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-09-14 08:45:45 UTC
Type: ---
Embargoed:



Description Robin Green 2012-01-28 20:48:00 UTC
Description of problem:
I can't get systemd to boot inside an LXC container.

Version-Release number of selected component (if applicable):
systemd-38-6.git9fa2f41.fc17.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Install Rawhide in a disk image
2. Mount the root partition from the disk image on the (Fedora 16) host
3. In (Fedora 16) virt-manager, create an LXC VM pointing at that mounted root partition, and "install" it
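
Equivalently, from the command line, something along these lines should set up the container (the guest name, memory size and mount path are just placeholders, not necessarily what I used):

  # define and start an LXC guest whose root is the mounted tree
  virt-install --connect lxc:/// --name f17-container --ram 512 \
      --filesystem /mnt/f17root,/ --init /sbin/init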
  
Actual results:
[after a while, in the guest console]
Welcome to emergency mode...
Give root password for maintenance

Expected results:
Normal login prompt

Additional info:
I tried using the workarounds here: http://berrange.com/posts/2011/09/27/getting-started-with-lxc-using-libvirt/
and here: https://raw.github.com/gist/1142202/341dcda53059644bc48ec58cf7ec539ed782e06b/setup_lxc_rootfs_fedora15.sh
but it still goes into emergency mode. I suspect I need to disable udev completely.

Comment 1 Lennart Poettering 2012-02-09 14:14:15 UTC
Can you boot your LXC tree with "systemd-nspawn"?

Do you get any logging output when you boot like this? What if you add systemd.log_level=debug to the command line of init?
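
For example, assuming the tree is mounted somewhere like /mnt/f17root (adjust the path), something like:

  # boot the tree's own systemd under nspawn, with debug logging passed to init
  systemd-nspawn -D /mnt/f17root /usr/lib/systemd/systemd systemd.log_level=debug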

Comment 2 Robin Green 2012-02-18 17:12:24 UTC
systemd-nspawn didn't work either.

The problem was that step 1 (the install onto a disk image) had set up /etc/fstab with some partition UUIDs in it, and systemd couldn't find the disks mentioned in /etc/fstab when I tried to boot it in a container, because by that point I had moved the root filesystem from the disk image to a directory in the host's /home (I did not mention that above, sorry). After I emptied /etc/fstab, killed systemd, and retried systemd-nspawn, it booted up.

A clearer error message would have been nice. I had no clue that the problem was due to /etc/fstab. But basically it was my fault.
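
For reference, the leftover entries looked roughly like this (the UUID matches the one in the debug output below; the fs type, options and path here are guesses), and truncating the file was enough:

  # device-backed entry left behind by the disk-image install:
  #   UUID=5944a1ed-bfdd-4d67-8de4-963291a9db1b  /boot  ext4  defaults  1 2
  # empty the container's fstab so systemd stops waiting for devices it can never see
  : > /path/to/container-root/etc/fstab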

Comment 3 Lennart Poettering 2012-03-12 23:20:43 UTC
Hmm, there should have been some message that some device unit never became available. Can you paste the output that was generated? To improve what is printed I'd first like to see what is currently being printed for you.

Comment 4 Robin Green 2012-03-18 14:57:08 UTC
These are the relevant lines:

(src/job.c:730) Job dev-disk-by\x2duuid-5944a1ed\x2dbfdd\x2d4d67\x2d8de4\x2d963291a9db1b.device/start timed out.
(src/job.c:556) Job dev-disk-by\x2duuid-5944a1ed\x2dbfdd\x2d4d67\x2d8de4\x2d963291a9db1b.device/start finished, result=timeout
(src/job.c:556) Job boot.mount/start finished, result=dependency
Dependency failed. Aborted start of /boot                              [ ABORT]
(src/job.c:556) Job local-fs.target/start finished, result=dependency
(src/job.c:556) Job fedora-autorelabel-mark.service/start finished, result=dependency
Dependency failed. Aborted start of Mark the need to...el after reboot [ ABORT]
(src/job.c:623) Job fedora-autorelabel-mark.service/start failed with result 'dependency'.
(src/job.c:556) Job fedora-autorelabel.service/start finished, result=dependency
Dependency failed. Aborted start of Relabel all file...s, if necessary [ ABORT]
(src/job.c:623) Job fedora-autorelabel.service/start failed with result 'dependency'.
(src/job.c:623) Job local-fs.target/start failed with result 'dependency'.
(src/unit.c:1161) Triggering OnFailure= dependencies of local-fs.target.

It might be behaving differently now than when I first reported this bug, though, because I've upgraded since then.

Comment 5 Michal Schmidt 2012-06-07 15:52:22 UTC
systemd-44-12.fc17 fixed many bugs related to containers. Are you still seeing the bug with this version?

Comment 6 Lennart Poettering 2012-09-14 08:45:45 UTC
We don't start udev in the container, hence these device units will necessarily fail to start. Please remove the fstab in your container, as we do not support virtualized devices in containers.

