Bug 1754471

Summary: Cannot boot Fedora-Cloud-Base-Rawhide-20190922.n.1.x86_64.qcow2
Product: Fedora
Reporter: Miroslav Vadkerti <mvadkert>
Component: selinux-policy
Assignee: Lukas Vrabec <lvrabec>
Status: CLOSED CURRENTRELEASE
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: high
Priority: unspecified
Version: rawhide
CC: admiller, awilliam, bgalvani, bgoncalv, bruno, dcbw, dh.herrmann, dustymabe, dwalsh, fgiudici, gnome-sig, john.j5live, kevin, lkundrak, lrintel, lvrabec, mclasen, mgrepl, msrb, pbrezina, plautrba, pwhalen, rhughes, robatino, rstrode, sandmann, santiago, tgunders, vanmeeuwen+fedora, vpavlin, yaneti, zpytela
Target Milestone: ---   
Target Release: ---   
Hardware: All   
OS: Linux   
Whiteboard: openqa AcceptedBlocker
Fixed In Version: selinux-policy-3.14.5-16.fc32
Last Closed: 2019-11-27 18:03:22 UTC
Type: Bug
Bug Blocks: 245418, 1705303    
Attachments:
- journal log dumped to console
- disabled_selinux
- installed_packages_Fedora-Rawhide-20191115.n.0
- audit.log

Description Miroslav Vadkerti 2019-09-23 11:13:14 UTC
Description of problem:
NetworkManager does not start with the latest Rawhide cloud base image.

Console log can be found here:
curl -Lk https://artifacts-testing-farm.apps.ci.centos.org/2732e9d0-0b74-4bad-9ab8-59721b4e8fa2/testsets-docs/provision/Fedora-Cloud-Base-Rawhide-20190921.n.0.x86_64.qcow2.guest.log

Version-Release number of selected component (if applicable):
NetworkManager-1.20.2-3.fc32

How reproducible:
100%

Steps to Reproduce:
1. Boot cloud image

Actual results:
NetworkManager does not start

Expected results:
NetworkManager starts

Additional info:

Comment 1 Bruno Goncalves 2019-10-04 07:36:36 UTC
I see the same issue with Fedora-Cloud-Base-Rawhide-20191001.n.1.x86_64.qcow2

[    9.647891] fbcon: bochs-drmdrmfb (fb0) is primary device
[    9.661105] Console: switching to colour frame buffer device 128x48
[    9.667067] bochs-drm 0000:00:02.0: fb0: bochs-drmdrmfb frame buffer device
[  OK  ] Started D-Bus System Message Bus.
[    9.710636] cloud-init[659]: /usr/lib/python3.8/site-packages/cloudinit/handlers/upstart_job.py:95: SyntaxWarning: "is" with a literal. Did you mean "=="?
[    9.712943] cloud-init[659]:   if e.exit_code is 1:
[    9.714082] cloud-init[659]: /usr/lib/python3.8/site-packages/cloudinit/dhclient_hook.py:32: SyntaxWarning: "is" with a literal. Did you mean "=="?
[    9.716183] cloud-init[659]:   if self.net_action is 'down' and os.path.exists(self.hook_file):
[   10.284244] ppdev: user-space parallel port driver
         Starting D-Bus System Message Bus...
[  OK  ] Started D-Bus System Message Bus.
[  100.389242] cloud-init[659]: Cloud-init v. 17.1 running 'init-local' at Tue, 01 Oct 2019 20:22:05 +0000. Up 9.66 seconds.
[  100.391896] cloud-init[659]: 2019-10-01 20:23:35,836 - stages.py[ERROR]: Unable to render networking. Network config is likely broken: No available network renderers found. Searched through list: ['eni', 'sysconfig', 'netplan']
[FAILED] Failed to start Login Service.
See 'systemctl status systemd-logind.service' for details.
[  OK  ] Started Initial cloud-init job (pre-networking).
[  OK  ] Reached target Network (Pre).
         Starting Initial cloud-ini… (metadata service crawler)...
         Starting D-Bus System Message Bus...
[  OK  ] Stopped Login Service.
         Starting Login Service...
[  OK  ] Started D-Bus System Message Bus.
         Starting Network Manager...
         Starting D-Bus System Message Bus...
[  OK  ] Started D-Bus System Message Bus.
[  280.569075] cloud-init[759]: Cloud-init v. 17.1 running 'init' at Tue, 01 Oct 2019 20:25:05 +0000. Up 190.31 seconds.
[  280.571516] cloud-init[759]: ci-info: +++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++
[  280.573455] cloud-init[759]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[  280.575470] cloud-init[759]: ci-info: | Device |   Up  |  Address  |    Mask   | Scope |     Hw-Address    |
[  280.577994] cloud-init[759]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[  280.579273] cloud-init[759]: ci-info: | eth0:  | False |     .     |     .     |   .   | 52:54:00:12:34:56 |
[  280.580633] cloud-init[759]: ci-info: |  lo:   |  True | 127.0.0.1 | 255.0.0.0 |   .   |         .         |
[  280.582760] cloud-init[759]: ci-info: |  lo:   |  True |     .     |     .     |   d   |         .         |
[  280.584153] cloud-init[759]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[  281.848192] EXT4-fs (vda1): resizing filesystem from 3669472 to 3669755 blocks
[  281.851184] EXT4-fs (vda1): resized filesystem to 3669755
         Starting D-Bus System Message Bus...
[FAILED] Failed to start Network Manager.
See 'systemctl status NetworkManager.service' for details.
[DEPEND] Dependency failed for Network Manager Wait Online.
[  OK  ] Reached target Network.
[  OK  ] Started D-Bus System Message Bus.
[  371.728573] cloud-init[759]: /usr/lib/python3.8/site-packages/cloudinit/config/cc_rsyslog.py:204: FutureWarning: Possible nested set at position 23
[  371.734071] cloud-init[759]:   HOST_PORT_RE = re.compile(
[  371.734993] cloud-init[759]: 2019-10-01 20:28:07,177 - util.py[WARNING]: Failed to set the hostname to localhost (localhost)
[  371.736682] cloud-init[759]: 2019-10-01 20:28:07,187 - util.py[WARNING]: Running module set_hostname (<module 'cloudinit.config.cc_set_hostname' from '/usr/lib/python3.8/site-packages/cloudinit/config/cc_set_hostname.py'>) failed
[FAILED] Failed to start Login Service.
See 'systemctl status systemd-logind.service' for details.
[  OK  ] Stopped Network Manager.
         Starting D-Bus System Message Bus...
[  OK  ] Stopped Login Service.
         Starting Login Service...
[  OK  ] Started D-Bus System Message Bus.
         Starting Network Manager...
         Starting D-Bus System Message Bus...
[  OK  ] Started D-Bus System Message Bus.
         Starting D-Bus System Message Bus...
[  OK  ] Started D-Bus System Message Bus.
         Starting D-Bus System Message Bus...
[FAILED] Failed to start Network Manager.
See 'systemctl status NetworkManager.service' for details.

Comment 2 Bruno Goncalves 2019-11-19 08:30:48 UTC
Is there any update to this issue? We still see it on Fedora-Cloud-Base-Rawhide-20191115.n.0.x86_64.qcow2

Comment 3 Lubomir Rintel 2019-11-19 12:28:32 UTC
Not sure why you're reporting this against NM. Virtually everything is broken in that system:

[lkundrak@belphegor libnma]$ curl -s https://jenkins-continuous-infra.apps.ci.centos.org/view/Fedora%20All%20Packages%20Pipeline/job/fedora-rawhide-image-test/112932/artifact/artifacts/Fedora-Rawhide.qcow2.guest.log |grep 'for details'
See 'systemctl status sssd.service' for details.
See 'systemctl status sssd.service' for details.
See 'systemctl status systemd-logind.service' for details.
See 'systemctl status NetworkManager.service' for details.
See 'systemctl status sssd.service' for details.
See 'systemctl status sssd.service' for details.
See 'systemctl status systemd-logind.service' for details.
See 'systemctl status NetworkManager.service' for details.
See 'systemctl status sssd.service' for details.
See 'systemctl status cloud-init.service' for details.
See 'systemctl status systemd-logind.service' for details.
See 'systemctl status NetworkManager.service' for details.
See 'systemctl status sssd.service' for details.

Could be anything though -- can't tell without the journal output.

Please attach the full journal dump. If you can't log in, then booting the system with systemd.journald.forward_to_console=1 may help.
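(As a rough sketch of one way to do that: boot the qcow2 directly with qemu and append the parameter to the kernel command line. The kernel/initramfs paths below are placeholders for files extracted from the image; editing the kernel command line at the GRUB prompt works just as well.)

  qemu-system-x86_64 -m 2048 -nographic \
      -drive file=Fedora-Cloud-Base-Rawhide.qcow2,if=virtio \
      -kernel ./vmlinuz -initrd ./initramfs.img \
      -append "root=/dev/vda1 ro console=ttyS0 systemd.journald.forward_to_console=1"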

Comment 4 Bruno Goncalves 2019-11-19 13:21:55 UTC
Created attachment 1637651 [details]
journal log dumped to console

Comment 5 Lubomir Rintel 2019-11-19 16:05:50 UTC
Thank you. Apparently, it's the dbus-broker that blows up here:

[  OK  ] Started D-Bus System Message Bus.
[    7.071174] systemd[1]: Started D-Bus System Message Bus.
[    7.073300] dbus-broker-launch[550]: ERROR launcher_run_child @ ../src/launch/launcher.c +324: No medium found
[    7.075093] dbus-broker-launch[550]: ERROR launcher_run @ ../src/launch/launcher.c +1381: Broken pipe
[    7.076548] dbus-broker-launch[550]:       run @ ../src/launch/main.c +153
[    7.077688] dbus-broker-launch[550]:       main @ ../src/launch/main.c +181
[    7.078645] dbus-broker-launch[550]: Exiting due to fatal error: -32

Reassigning.

Comment 6 Bruno Goncalves 2019-11-19 16:17:17 UTC
Created attachment 1637775 [details]
disabled_selinux

Booting the VM with enforcing=0 works.
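(For anyone else reproducing this, a minimal sketch of adding the parameter at the GRUB menu on the serial console; the vmlinuz name below is a placeholder:)

  # press 'e' on the highlighted entry, append enforcing=0 to the end of the
  # line starting with 'linux', then press Ctrl-x to boot
  linux ($root)/boot/vmlinuz-... root=/dev/vda1 ro console=ttyS0 enforcing=0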

Comment 7 Bruno Goncalves 2019-11-19 16:19:21 UTC
Created attachment 1637776 [details]
installed_packages_Fedora-Rawhide-20191115.n.0

Comment 8 Lubomir Rintel 2019-11-20 10:24:33 UTC
(In reply to Bruno Goncalves from comment #6)
> Created attachment 1637775 [details]
> disabled_selinux
> 
> Booting the VM with enforcing=0 works.

Thanks, that's good to know. Reassigning to selinux-policy then.

I'm wondering whether you're able to find out what the denied AVC was. Can you log in and look into /var/log/audit.log?
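(A minimal sketch of pulling the denials from a permissive boot, assuming the audit daemon is running; ausearch comes from the audit package:)

  sudo ausearch -m avc -ts boot
  # or, without ausearch:
  sudo grep 'avc:.*denied' /var/log/audit/audit.log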

Comment 9 Bruno Goncalves 2019-11-20 12:24:10 UTC
Created attachment 1638124 [details]
audit.log

audit.log when booting with enforcing=0

Comment 10 Petr Lautrbach 2019-11-20 12:48:09 UTC
[    6.507067] audit: type=1400 audit(1573880110.739:27): avc:  denied  { mounton } for  pid=1 comm="systemd" path="/etc/machine-id" dev="vda1" ino=134316 scontext=system_u:system_r:init_t:s0 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0

I'm not sure how important machine-id is but it's suspicious.
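(For what it's worth, feeding that denial through audit2why is a quick way to see whether a missing policy rule or a wrong file label is at fault; a sketch, assuming policycoreutils-python-utils is installed:)

  grep machine-id /var/log/audit/audit.log | audit2why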

Comment 11 Lubomir Rintel 2019-11-20 13:44:52 UTC
Thanks for the log Bruno.

(In reply to Petr Lautrbach from comment #10)
> [    6.507067] audit: type=1400 audit(1573880110.739:27): avc:  denied  {
> mounton } for  pid=1 comm="systemd" path="/etc/machine-id" dev="vda1"
> ino=134316 scontext=system_u:system_r:init_t:s0
> tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
> 
> I'm not sure how important machine-id is but it's suspicious.

I guess the machine-id shouldn't be labeled with etc_t. On my Fedora 31 machine it seems to be machineid_t.

  [root@belphegor lkundrak]# semanage fcontext -l |grep machine-id
  /etc/machine-id                                    regular file       system_u:object_r:machineid_t:s0 
  /var/run/systemd/machine-id                        regular file       system_u:object_r:machineid_t:s0 
  [root@belphegor lkundrak]# ls -Z /etc/machine-id 
  unconfined_u:object_r:machineid_t:s0 /etc/machine-id
  [root@belphegor lkundrak]# 

I'm wondering where /etc/machine-id comes from -- is it shipped in the image, is it generated in the initramfs and copied around, or is it generated later on startup...
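(A quick way to compare the on-disk label with what the policy expects, and to see whether a relabel fixes it; a sketch to run inside the affected image:)

  matchpathcon /etc/machine-id     # what the policy says the label should be
  ls -Z /etc/machine-id            # what it actually is
  restorecon -v /etc/machine-id    # relabel it in place; -v prints the change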

Comment 12 Lubomir Rintel 2019-11-20 14:30:51 UTC
The policy looks okay in Rawhide, and once the machine-id file is removed and recreated, it gets the correct label.

I suspect the messed up label has something to do with these lines in kickstart [1]:

  # Remove machine-id on pre generated images
  rm -f /etc/machine-id
  touch /etc/machine-id

[1] https://kojipkgs.fedoraproject.org//work/tasks/2020/39122020/koji-f32-build-39122020-base.ks

Reassigning once more.

Comment 13 Kevin Fenzi 2019-11-20 17:14:14 UTC
So, it should be removed, but not touched?

Comment 14 Bruno Goncalves 2019-11-20 18:14:36 UTC
Note that this patch in the kickstart has been there for a few years...
https://pagure.io/fedora-kickstarts/c/30c3f7e721709b354d2a98b48d533c359d57bb87

Comment 15 Petr Lautrbach 2019-11-21 13:09:45 UTC
FTR, the selinux-policy team is investigating whether this problem is related to some recent change in selinux-policy.

Do you have an idea when this problem started? I've got Fedora-Cloud-Base-Rawhide-20190805.n.0.x86_64.qcow2, which contains /etc/machine-id correctly labeled with system_u:object_r:machineid_t:s0. I'm not sure where to get an image from the time between that one and the broken one.

Comment 16 Lubomir Rintel 2019-11-21 13:13:33 UTC
(In reply to Kevin Fenzi from comment #13)
> So, it should be removed, but not touched?

If the file is removed, nothing seems to recreate it on the next boot. I suppose it should be there; it's just not clear why it started being created with a wrong label. I'm not familiar enough with how the cloud images are built to be able to tell.

The label was still correct in "Fedora-Cloud-Base-31-1.9.x86_64.raw".

I'm wondering if a "restorecon /etc/machine-id" after the touch would restore the correct label.
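Something like this in the kickstart would test that theory (just a sketch of the idea, not a tested change):

  # Remove machine-id on pre generated images
  rm -f /etc/machine-id
  touch /etc/machine-id
  restorecon /etc/machine-id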

Comment 17 Lubomir Rintel 2019-11-21 13:16:45 UTC
Note that the kickstart [1] (from the createImage task [2]) seems to include these lines:

  # FIXME: is this still needed?
  echo "Fixing SELinux contexts."
  touch /var/log/cron
  touch /var/log/boot.log
  # ignore return code because UEFI systems with vfat filesystems
  # that don't support selinux will give us errors
  /usr/sbin/fixfiles -R -a restore || true

[1] https://kojipkgs.fedoraproject.org//work/tasks/2998/39122998/fedora-cloud-base.ks
[2] https://koji.fedoraproject.org/koji/taskinfo?taskID=39122993

This pretty much suggests that messed-up labels are a known issue. Perhaps a quick fix would be to just move these fixup lines to after the machine-id hack.

Comment 18 Bruno Goncalves 2019-11-21 13:53:49 UTC
The latest image we have working is Fedora-Rawhide-20190920.n.1; we then failed to boot Fedora-Cloud-Base-Rawhide-20190922.n.1. Unfortunately that image seems to have been removed from Koji, so I can't confirm it is the same issue we see right now...

Comment 19 Dusty Mabe 2019-11-21 14:35:01 UTC
While it doesn't fully answer the question, it is at least worth looking at a package diff between those two versions. We no longer have Atomic Host running against Rawhide, but we do have Silverblue. Here is what I see:

```
[dustymabe@media fedora-ostree-repo-mirror]$ ostree show 2a1f1486e5f5937f3cb96ba7959519599bd5e65a215ef21e8f084b166ca4fe6e
commit 2a1f1486e5f5937f3cb96ba7959519599bd5e65a215ef21e8f084b166ca4fe6e
ContentChecksum:  4a8217237facd275a8a1d398aefef568af3678339f6356aee3d9802ac35c9dac
Date:  2019-09-22 08:22:04 +0000
Version: Rawhide.20190922.n.1
(no subject)

Found 1 signature:

  Signature made Sun 22 Sep 2019 04:22:14 AM EDT using RSA key ID 6C13026D12C944D0
  Can't check signature: public key not found
[dustymabe@media fedora-ostree-repo-mirror]$ 
[dustymabe@media fedora-ostree-repo-mirror]$ rpm-ostree --repo=./ db diff 0bdb672dd337031c063c00f45d29c64511d321afbecf8a019c205578fe11e008 2a1f1486e5f5937f3cb96ba7959519599bd5e65a215ef21e8f084b166ca4fe6e
ostree diff commit old: 0bdb672dd337031c063c00f45d29c64511d321afbecf8a019c205578fe11e008
ostree diff commit new: 2a1f1486e5f5937f3cb96ba7959519599bd5e65a215ef21e8f084b166ca4fe6e
Upgraded:
  bluez 5.50-9.fc31 -> 5.51-1.fc32
  bluez-cups 5.50-9.fc31 -> 5.51-1.fc32
  bluez-libs 5.50-9.fc31 -> 5.51-1.fc32
  bluez-obexd 5.50-9.fc31 -> 5.51-1.fc32
  buildah 1.12.0-0.18.dev.git04150e0.fc32 -> 1.12.0-0.19.dev.gitc3b1ec6.fc32
  gnome-shell 3.34.0-1.fc32 -> 3.34.0-2.fc32
  kernel 5.4.0-0.rc0.git2.1.fc32 -> 5.4.0-0.rc0.git3.1.fc32
  kernel-core 5.4.0-0.rc0.git2.1.fc32 -> 5.4.0-0.rc0.git3.1.fc32
  kernel-devel 5.4.0-0.rc0.git2.1.fc32 -> 5.4.0-0.rc0.git3.1.fc32
  kernel-modules 5.4.0-0.rc0.git2.1.fc32 -> 5.4.0-0.rc0.git3.1.fc32
  kernel-modules-extra 5.4.0-0.rc0.git2.1.fc32 -> 5.4.0-0.rc0.git3.1.fc32
  osinfo-db 20190905-1.fc32 -> 20190920-1.fc32
  python3-gobject 3.34.0-1.fc32 -> 3.34.0-2.fc32
  python3-gobject-base 3.34.0-1.fc32 -> 3.34.0-2.fc32
  selinux-policy 3.14.5-3.fc32 -> 3.14.5-5.fc32
  selinux-policy-targeted 3.14.5-3.fc32 -> 3.14.5-5.fc32
  shared-mime-info 1.13.1-1.fc32 -> 1.14-1.fc32
  systemd 243-1.fc32 -> 243-2.gitfab6f01.fc32
  systemd-libs 243-1.fc32 -> 243-2.gitfab6f01.fc32
  systemd-pam 243-1.fc32 -> 243-2.gitfab6f01.fc32
  systemd-rpm-macros 243-1.fc32 -> 243-2.gitfab6f01.fc32
  systemd-udev 243-1.fc32 -> 243-2.gitfab6f01.fc32
  podman 2:1.5.2-0.69.dev.git997c4b5.fc32 -> 2:1.5.2-0.20.dev.git18f2328.fc32
  podman-manpages 2:1.5.2-0.69.dev.git997c4b5.fc32 -> 2:1.5.2-0.20.dev.git18f2328.fc32
Removed:
  crun-0.9.1-1.fc32.x86_64
  timedatex-0.6-2.fc31.x86_64
  yajl-2.1.0-13.fc31.x86_64

```

Comment 20 Dusty Mabe 2019-11-21 14:36:12 UTC
forgot to include the `show` output for the `Rawhide.20190920.n.1` one:

```
[dustymabe@media fedora-ostree-repo-mirror]$ ostree show 0bdb672dd337031c063c00f45d29c64511d321afbecf8a019c205578fe11e008
commit 0bdb672dd337031c063c00f45d29c64511d321afbecf8a019c205578fe11e008
ContentChecksum:  73ebe2c031d98f8bdd40c5fdbb086cdaa40701f24f2dff7cba0d30b8bfda96a4
Date:  2019-09-20 15:27:03 +0000
Version: Rawhide.20190920.n.1
(no subject)

Found 1 signature:

  Signature made Fri 20 Sep 2019 11:27:12 AM EDT using RSA key ID 6C13026D12C944D0
  Can't check signature: public key not found

```

Comment 21 Lukas Vrabec 2019-11-21 15:15:54 UTC
Hi Guys, 

I'm almost sure that this is a regression in the selinux-policy package. I made a fix and it should be part of the next build (which is already in progress).

This commit fixes the issue:
commit a9839a5cc8e1f2cefd1dba7c2fcdf39529e905fb (HEAD -> rawhide, origin/rawhide)
Author: Lukas Vrabec <lvrabec>
Date:   Thu Nov 21 13:43:47 2019 +0100

    Make unconfined domains part of domain_named_attribute
    
    Allow unconfined domains (types using unconfined_domain macro) to be
    part of domain_named_attribute to allow these domains create several
    system wide files on the system with correct label.
    
    For more info see all file name transition macros used by
    domain_named_attribute.
    
    This issue regression from #279 and fixes rhbz#1754471
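
(Once a compose picks up the fixed build, a minimal check that the fix is in place might look like this:)

  rpm -q selinux-policy     # expect selinux-policy-3.14.5-16.fc32 or newer
  ls -Z /etc/machine-id     # expect system_u:object_r:machineid_t:s0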

Comment 22 Zdenek Pytela 2019-11-21 16:53:59 UTC
*** Bug 1772211 has been marked as a duplicate of this bug. ***

Comment 23 Adam Williamson 2019-11-22 16:54:34 UTC
Looks like the fix is in selinux-policy-3.14.5-16.fc32, and it just missed the Fedora-Rawhide-20191122.n.0 compose; that compose still has -15.fc32. The next compose should have the new selinux-policy and we can see if it's fixed there.

Note I just hooked up Cloud image testing in openQA (to replace autocloud) and it is hitting this by the looks of things:

https://openqa.fedoraproject.org/tests/487429

Also marking this as an automatic Beta blocker per the https://fedoraproject.org/wiki/QA:SOP_blocker_bug_process#Automatic_blockers entry "Complete failure of any release-blocking TC/RC image to boot at all under any circumstance - "DOA" image (conditional failure is not an automatic blocker)", since the Cloud_Base qcow2 is a release-blocking image.

Comment 24 Dusty Mabe 2019-11-23 18:09:44 UTC
I was able to boot the vagrant box from last night's compose:

https://kojipkgs.fedoraproject.org/compose/rawhide/Fedora-Rawhide-20191123.n.0/compose/Cloud/x86_64/images/

...but it did take a long time to come up (entropy?) and has some failed units:



```
$ SSH_AUTH_SOCK= ssh vagrant.121.29
vagrant.121.29's password: 
[systemd]
Failed Units: 2
  systemd-hostnamed.service
  systemd-logind.service
[vagrant@localhost ~]$ 
[vagrant@localhost ~]$ 
[vagrant@localhost ~]$ systemctl status systemd-logind.service 
● systemd-logind.service - Login Service
     Loaded: loaded (/usr/lib/systemd/system/systemd-logind.service; static; vendor preset: disabled)
     Active: failed (Result: exit-code) since Sat 2019-11-23 17:29:21 UTC; 20min ago
       Docs: man:systemd-logind.service(8)
             man:logind.conf(5)
             https://www.freedesktop.org/wiki/Software/systemd/logind
             https://www.freedesktop.org/wiki/Software/systemd/multiseat
    Process: 698 ExecStartPre=/sbin/modprobe -abq drm (code=exited, status=226/NAMESPACE)
    Process: 699 ExecStart=/usr/lib/systemd/systemd-logind (code=exited, status=226/NAMESPACE)
   Main PID: 699 (code=exited, status=226/NAMESPACE)
        CPU: 17ms
[vagrant@localhost ~]$ sudo ls
[vagrant@localhost ~]$ 
[vagrant@localhost ~]$ systemctl status systemd-hostnamed.service
● systemd-hostnamed.service - Hostname Service
     Loaded: loaded (/usr/lib/systemd/system/systemd-hostnamed.service; static; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/systemd-hostnamed.service.d
             └─disable-privatedevices.conf
     Active: failed (Result: exit-code) since Sat 2019-11-23 17:28:50 UTC; 38min ago
       Docs: man:systemd-hostnamed.service(8)
             man:hostname(5)
             man:machine-info(5)
             https://www.freedesktop.org/wiki/Software/systemd/hostnamed
    Process: 582 ExecStart=/usr/lib/systemd/systemd-hostnamed (code=exited, status=226/NAMESPACE)
   Main PID: 582 (code=exited, status=226/NAMESPACE)
        CPU: 10ms
[vagrant@localhost ~]$ 

```
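
A quick way to check whether those remaining NAMESPACE failures are also SELinux denials (a sketch, assuming the audit tools are installed; it mirrors the enforcing=0 check above but at runtime):

```
sudo ausearch -m avc -ts boot
sudo setenforce 0
sudo systemctl restart systemd-logind.service systemd-hostnamed.service
systemctl is-active systemd-logind.service systemd-hostnamed.service
```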

Comment 25 Bruno Goncalves 2019-11-27 08:09:59 UTC
I can confirm the Fedora-Rawhide-20191126.n.1 image boots up for us, and we also experience the issue of it taking a long time to be ready.

Should we close this BZ and open a new one to track the boot performance?

A snippet of what I could see:

[    9.308375] NetworkManager[747]: <info>  [1574841831.6590] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
[   13.076363] chronyd[626]: Selected source 193.1.31.66
[   13.079273] chronyd[626]: System clock TAI offset set to 37 seconds
[   19.089315] systemd[1]: NetworkManager-dispatcher.service: Succeeded.
[   19.400029] cloud-init[761]: Cloud-init v. 17.1 running 'init' at Wed, 27 Nov 2019 08:03:50 +0000. Up 8.04 seconds.
[   19.402049] cloud-init[761]: ci-info: +++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++
[   19.403902] cloud-init[761]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[   19.405475] cloud-init[761]: ci-info: | Device |   Up  |  Address  |    Mask   | Scope |     Hw-Address    |
[   19.407071] cloud-init[761]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[   19.408604] cloud-init[761]: ci-info: | eth0:  | False |     .     |     .     |   .   | 52:54:00:12:34:56 |
[   19.409759] cloud-init[761]: ci-info: |  lo:   |  True | 127.0.0.1 | 255.0.0.0 |   .   |         .         |
[   19.410883] cloud-init[761]: ci-info: |  lo:   |  True |     .     |     .     |   d   |         .         |
[   19.412053] cloud-init[761]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[   19.413134] cloud-init[761]: 2019-11-27 08:04:01,750 - util.py[WARNING]: did not find either path /sys/class/dmi/id or dmidecode command
[   38.521595] systemd[1]: systemd-hostnamed.service: Succeeded.
[   69.477934] cloud-init[761]: 2019-11-27 08:04:51,828 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f1b2473b4f0>, 'Connection to 169.254.169.254 timed out. (connect timeout=50.0)'))]
[   79.152248] chronyd[626]: Selected source 193.1.219.116
[  120.535141] cloud-init[761]: 2019-11-27 08:05:42,886 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f1b2473bc70>, 'Connection to 169.254.169.254 timed out. (connect timeout=50.0)'))]
[  138.561087] cloud-init[761]: 2019-11-27 08:06:00,911 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f1b24757670>, 'Connection to 169.254.169.254 timed out. (connect timeout=17.0)'))]
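
If the slow boot turns out to be mostly these metadata probes, one option worth trying is limiting cloud-init's datasource list so it doesn't spend 120 seconds probing the EC2 endpoint in environments with no metadata service (a sketch; the drop-in file name is arbitrary):

  # /etc/cloud/cloud.cfg.d/99-local-datasource.cfg
  datasource_list: [ NoCloud, None ]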

Comment 26 Ben Cotton 2019-11-27 14:21:02 UTC
Fedora 29 changed to end-of-life (EOL) status on 2019-11-26. Fedora 29 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

Comment 27 Bruno Goncalves 2019-11-27 14:25:58 UTC
Was this BZ closed by mistake? It is for Rawhide; it has nothing to do with Fedora 29...

Comment 28 Ben Cotton 2019-11-27 14:45:14 UTC
Yes, it was a mistake. I did not properly filter the input to the EOL closing script. Reopening.

Comment 29 Adam Williamson 2019-11-27 16:03:11 UTC
Bruno, Dusty - check for SELinux denials and see if it boots normally with enforcing=0. If so, you're probably hitting https://bugzilla.redhat.com/show_bug.cgi?id=1775882 .

Comment 30 Lukas Vrabec 2019-11-27 18:03:22 UTC
Yes, let's close this as CURRENTRELEASE and continue in #1775882.

Comment 31 Dusty Mabe 2019-12-02 15:33:37 UTC
(In reply to Adam Williamson from comment #29)
> Bruno, Dusty - check for SELinux denials and see if it boots normally with
> enforcing=0 . If so you're probably hitting
> https://bugzilla.redhat.com/show_bug.cgi?id=1775882 .

Yep, it seems to boot fine with enforcing=0. I also subsequently grabbed the latest cloud Vagrant box from 12/02/19 and it boots fine. Thanks all!