Bug 2333848 - selinux policy blocked systemd-resolved start on Fedora rawhide [NEEDINFO]
Summary: selinux policy blocked systemd-resolved start on Fedora rawhide
Keywords:
Status: CLOSED DUPLICATE of bug 2334015
Alias: None
Product: Fedora
Classification: Fedora
Component: selinux-policy
Version: 42
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Zdenek Pytela
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-12-23 13:51 UTC by Xiaofeng Wang
Modified: 2025-09-02 17:57 UTC
CC List: 9 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2025-09-02 17:57:36 UTC
Type: ---
Embargoed:
zpytela: needinfo? (xiaofwan)


Attachments: none

Description Xiaofeng Wang 2024-12-23 13:51:07 UTC
systemd-resolved fails to start. I reported this issue to systemd upstream (https://github.com/systemd/systemd/issues/35731) and was told: "Please report that to the provider of your selinux policy, as it looks like it needs an update".

selinux policy version:
selinux-policy-41.27-1.fc42.noarch
selinux-policy-targeted-41.27-1.fc42.noarch

systemd version:
systemd-257.1-1.fc42.x86_64

# journalctl -u systemd-resolved.service
Dec 23 04:42:12 localhost systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 23 04:42:12 localhost (resolved)[711]: Failed to create destination mount point node '/run/systemd/mount-rootfs/var/tmp', ignoring: Permission denied
Dec 23 04:42:12 localhost (resolved)[711]: Failed to mount /run/systemd/unit-private-tmp/var-tmp to /run/systemd/mount-rootfs/var/tmp: No such file or directory
Dec 23 04:42:12 localhost (resolved)[711]: systemd-resolved.service: Failed to set up mount namespacing: /var/tmp: No such file or directory
Dec 23 04:42:12 localhost (resolved)[711]: systemd-resolved.service: Failed at step NAMESPACE spawning /usr/lib/systemd/systemd-resolved: No such file or directory
Dec 23 04:42:12 localhost systemd[1]: systemd-resolved.service: Main process exited, code=exited, status=226/NAMESPACE
Dec 23 04:42:12 localhost systemd[1]: systemd-resolved.service: Failed with result 'exit-code'.
Dec 23 04:42:12 localhost systemd[1]: Failed to start systemd-resolved.service - Network Name Resolution.
Dec 23 04:42:12 localhost systemd[1]: systemd-resolved.service: Scheduled restart job, restart counter is at 1.

Reproducible: Always

Steps to Reproduce:
1. Boot a Fedora rawhide system with the image https://dl.fedoraproject.org/pub/fedora/linux/development/rawhide/Cloud/x86_64/images/Fedora-Cloud-Base-Generic-Rawhide-20241222.n.0.x86_64.qcow2
    
2. ping www.cisco.com fails with an error:

# ping www.cisco.com
ping: www.cisco.com: Temporary failure in name resolution

3. Check the systemd-resolved.service log:

# journalctl -u systemd-resolved.service
Dec 23 04:42:12 localhost systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 23 04:42:12 localhost (resolved)[711]: Failed to create destination mount point node '/run/systemd/mount-rootfs/var/tmp', ignoring: Permission denied
Dec 23 04:42:12 localhost (resolved)[711]: Failed to mount /run/systemd/unit-private-tmp/var-tmp to /run/systemd/mount-rootfs/var/tmp: No such file or directory
Dec 23 04:42:12 localhost (resolved)[711]: systemd-resolved.service: Failed to set up mount namespacing: /var/tmp: No such file or directory
Dec 23 04:42:12 localhost (resolved)[711]: systemd-resolved.service: Failed at step NAMESPACE spawning /usr/lib/systemd/systemd-resolved: No such file or directory
Dec 23 04:42:12 localhost systemd[1]: systemd-resolved.service: Main process exited, code=exited, status=226/NAMESPACE
Dec 23 04:42:12 localhost systemd[1]: systemd-resolved.service: Failed with result 'exit-code'.
Dec 23 04:42:12 localhost systemd[1]: Failed to start systemd-resolved.service - Network Name Resolution.
Dec 23 04:42:12 localhost systemd[1]: systemd-resolved.service: Scheduled restart job, restart counter is at 1.

Actual Results:  
Can't start systemd-resolved

Expected Results:  
Start systemd-resolved without error

Comment 1 Zdenek Pytela 2025-01-02 21:57:37 UTC
It does not reproduce on a clean system; are you aware of any related changes?
Please upload denials with full auditing enabled:
https://fedoraproject.org/wiki/SELinux/Debugging#Enable_full_auditing

If no denial appears, remove dontaudit rules and try again:
semodule -DB
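
For reference, the steps behind that wiki link boil down to roughly the following (a sketch; adjust to your system):

# in /etc/audit/rules.d/audit.rules: remove or comment out "-a task,never"
# and add "-w /etc/shadow -p w" at the end of the file, then reload the rules:
sudo augenrules --load        # or: sudo service auditd restart
# reproduce the failure and collect the denials:
sudo ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today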

Comment 2 Fabien Boucher 2025-01-06 14:38:15 UTC
Hi,

I get a similar issue on a rawhide cloud image based on Fedora-Cloud-Base-Generic-Rawhide-20250105.n.0.x86_64.qcow2. The image is booted on an OpenStack cloud provider.
At startup some services are down:

[systemd]                                               
Failed Units: 3                    
  systemd-oomd.service               
  systemd-resolved.service  
  systemd-oomd.socket


[zuul-worker@np0005073882 ~]$ ping -c1 mirrors.fedoraproject.org
ping: mirrors.fedoraproject.org: Temporary failure in name resolution
[zuul-worker@np0005073882 ~]$ systemctl status systemd-resolved
× systemd-resolved.service - Network Name Resolution
     Loaded: loaded (/usr/lib/systemd/system/systemd-resolved.service; enabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf, 50-keep-warm.conf
     Active: failed (Result: exit-code) since Mon 2025-01-06 14:24:56 UTC; 1min 40s ago
 Invocation: 93ab06a195a14d628458381b2e38fd71
       Docs: man:systemd-resolved.service(8)
             man:org.freedesktop.resolve1(5)
             https://systemd.io/WRITING_NETWORK_CONFIGURATION_MANAGERS
             https://systemd.io/WRITING_RESOLVER_CLIENTS
    Process: 810 ExecStart=/usr/lib/systemd/systemd-resolved (code=exited, status=226/NAMESPACE)
   Main PID: 810 (code=exited, status=226/NAMESPACE)


Restarting the service makes it start as expected:


[zuul-worker@np0005073882 ~]$ sudo systemctl restart systemd-resolved
[zuul-worker@np0005073882 ~]$ systemctl status systemd-resolved
● systemd-resolved.service - Network Name Resolution
     Loaded: loaded (/usr/lib/systemd/system/systemd-resolved.service; enabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf, 50-keep-warm.conf
     Active: active (running) since Mon 2025-01-06 14:26:46 UTC; 5s ago
 Invocation: 47ebb1db2eeb4bb681186a3f4e24a002
       Docs: man:systemd-resolved.service(8)
             man:org.freedesktop.resolve1(5)
             https://systemd.io/WRITING_NETWORK_CONFIGURATION_MANAGERS
             https://systemd.io/WRITING_RESOLVER_CLIENTS
   Main PID: 1222 (systemd-resolve)
     Status: "Processing requests..."
      Tasks: 1 (limit: 9457)
     Memory: 4.6M (peak: 5.2M)
        CPU: 127ms
     CGroup: /system.slice/systemd-resolved.service
             └─1222 /usr/lib/systemd/systemd-resolved

[zuul-worker@np0005073882 ~]$ ping -c1 mirrors.fedoraproject.org
PING wildcard.fedoraproject.org (8.43.85.73) 56(84) bytes of data.
64 bytes from proxy03.fedoraproject.org (8.43.85.73): icmp_seq=1 ttl=53 time=22.6 ms

--- wildcard.fedoraproject.org ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 22.601/22.601/22.601/0.000 ms


So I enabled full auditing (https://fedoraproject.org/wiki/SELinux/Debugging#Enable_full_auditing) and then rebooted the machine. systemd-resolved is down as expected, and I ran ausearch:

[zuul-worker@np0005073882 ~]$ sudo ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today
----                                               
type=PROCTITLE msg=audit(01/06/2025 14:18:56.347:141) : proctitle=/sbin/ldconfig -X 
type=PATH msg=audit(01/06/2025 14:18:56.347:141) : item=3 name=/var/cache/ldconfig/aux-cache inode=2429 dev=00:20 mode=file,600 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=DELETE cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0 
type=PATH msg=audit(01/06/2025 14:18:56.347:141) : item=2 name=/var/cache/ldconfig/aux-cache~ inode=2441 dev=00:20 mode=file,600 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:ldconfig_cache_t:s0 nametype=DELETE cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0 
type=PATH msg=audit(01/06/2025 14:18:56.347:141) : item=1 name=/var/cache/ldconfig/ inode=279 dev=00:20 mode=dir,700 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:ldconfig_cache_t:s0 nametype=PARENT cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0 
type=PATH msg=audit(01/06/2025 14:18:56.347:141) : item=0 name=/var/cache/ldconfig/ inode=279 dev=00:20 mode=dir,700 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:ldconfig_cache_t:s0 nametype=PARENT cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 cap_frootid=0 
type=CWD msg=audit(01/06/2025 14:18:56.347:141) : cwd=/ 
type=SYSCALL msg=audit(01/06/2025 14:18:56.347:141) : arch=x86_64 syscall=rename success=no exit=EACCES(Permission denied) a0=0x5555700ee350 a1=0x7f37cf65a765 a2=0x0 a3=0x0 items=4 ppid=1 pid=918 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=ldconfig exe=/usr/sbin/ldconfig subj=system_u:system_r:ldconfig_t:s0 key=(null) 
type=AVC msg=audit(01/06/2025 14:18:56.347:141) : avc:  denied  { unlink } for  pid=918 comm=ldconfig name=aux-cache dev="vda4" ino=2429 scontext=system_u:system_r:ldconfig_t:s0 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=0 

[zuul-worker@np0005073882 ~]$ sudo journalctl -u systemd-resolved
Jan 06 14:18:54 localhost systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 06 14:18:54 localhost (resolved)[790]: Failed to create destination mount point node '/run/systemd/mount-rootfs/var/tmp', ignoring: Permission denied
Jan 06 14:18:54 localhost (resolved)[790]: Failed to mount /run/systemd/unit-private-tmp/var-tmp to /run/systemd/mount-rootfs/var/tmp: No such file or directory
Jan 06 14:18:54 localhost (resolved)[790]: systemd-resolved.service: Failed to set up mount namespacing: /var/tmp: No such file or directory
Jan 06 14:18:54 localhost (resolved)[790]: systemd-resolved.service: Failed at step NAMESPACE spawning /usr/lib/systemd/systemd-resolved: No such file or directory
Jan 06 14:18:54 localhost systemd[1]: systemd-resolved.service: Main process exited, code=exited, status=226/NAMESPACE
Jan 06 14:18:54 localhost systemd[1]: systemd-resolved.service: Failed with result 'exit-code'.
Jan 06 14:18:54 localhost systemd[1]: Failed to start systemd-resolved.service - Network Name Resolution.

Comment 3 Zdenek Pytela 2025-01-06 15:25:26 UTC
Thanks, Fabien.

The unlabeled_t label is displayed when a file was created while SELinux was disabled, or when the label stored on the file no longer exists in the currently loaded policy.

Do you know which service created the /var/cache/ldconfig/aux-cache file before ldconfig ran, and whether SELinux was enabled at that time?
Other labels in this report seem to be fine.
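
A generic way to inspect and repair such a label (a sketch using standard SELinux tooling, not something specific to this report):

ls -Z /var/cache/ldconfig/aux-cache          # current SELinux context on the file
matchpathcon /var/cache/ldconfig/aux-cache   # context the loaded policy expects
sudo restorecon -v /var/cache/ldconfig/aux-cache   # relabel to the expected context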

Comment 4 Fabien Boucher 2025-01-07 10:09:55 UTC
My rawhide image is "customized" via the virt-customize command, so to remove any doubt I booted the latest bare (uncustomized) rawhide image: https://dl.fedoraproject.org/pub/fedora/linux/development/rawhide/Cloud/x86_64/images/Fedora-Cloud-Base-Generic-Rawhide-20250107.n.0.x86_64.qcow2
The same issue is present. However, now I don't get any match with ausearch:

[root@test-rawhide3 ~]# cat /etc/audit/rules.d/audit.rules 
## This set of rules is to suppress the performance effects of the
## audit system. The result is that you only get hardwired events.
-D

## This suppresses syscall auditing for all tasks started
## with this rule in effect.  Remove it if you need syscall
## auditing.
# -a task,never
-w /etc/shadow -p w

[root@test-rawhide3 ~]# reboot

The system will reboot now!


$ ssh fedora@test-rawhide3
[systemd]
Failed Units: 3
  systemd-oomd.service
  systemd-resolved.service
  systemd-oomd.socket
[fedora@test-rawhide3 ~]$ uptime
 09:56:08 up 0 min,  2 users,  load average: 0.24, 0.06, 0.02
[fedora@test-rawhide3 ~]$ sudo ausearch -i -m avc,user_avc,selinux_err,user_selinux_err -ts today
<no matches>


[root@test-rawhide3 ~]# sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33

[root@test-rawhide3 ~]# sed s/SELINUX=enforcing/SELINUX=permissive/ /etc/selinux/config
[root@test-rawhide3 ~]# reboot


$ ssh fedora@test-rawhide3
[fedora@test-rawhide3 ~]$ sudo systemctl list-units --state failed 
  UNIT LOAD ACTIVE SUB DESCRIPTION

0 loaded units listed.
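
As an aside, the mode can also be switched without a reboot for a quick test (standard SELinux tooling; note that the sed invocation above would need -i to actually modify /etc/selinux/config in place):

sudo setenforce 0     # permissive until the next boot
getenforce            # should now print "Permissive"
# persistent change across reboots:
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config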

> Do you know which service created the /var/cache/ldconfig/aux-cache file before running ldconfig and if SELinux was enabled at that time?
No, I don't.

Let me know how I could help.

Comment 5 Martin Pitt 2025-01-07 10:23:25 UTC
Also note bug 2333743, which is a race condition in systemd. I didn't dive into this deeply enough to know whether these are actual duplicates.

Comment 6 Zdenek Pytela 2025-01-07 10:36:19 UTC
(In reply to Fabien Boucher from comment #4)
> My rawhide image is "customized" via the virt-customize command so in order
This may be the cause of the unlabeled_t labels. I am not familiar with these tools, but I found the following in the virt-builder documentation:

   SELINUX
       Guests which use SELinux (such as Fedora and Red Hat Enterprise Linux) require
       that each file has a correct SELinux label.

       Virt-builder does not know how to give new files a label,  so  there  are  two
       possible strategies it can use to ensure correct labelling:

       Automatic relabeling
           This runs setfiles(8) just before finalizing the guest, which sets SELinux
           labels correctly in the disk image.

           This is the recommended method.

       Using --no-selinux-relabel --touch /.autorelabel
           Guest templates may already contain a file called /.autorelabel or you may
           touch it.

           For  guests  that  use  SELinux, this causes restorecon(8) to run at first
           boot.  Guests will reboot themselves once the first  time  you  use  them,
           which is normal and harmless.
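
In practice this presumably maps to something along these lines for virt-customize (a sketch with a placeholder image name; recent libguestfs releases relabel automatically when the guest supports it):

# relabel as part of customization:
virt-customize -a image.qcow2 --selinux-relabel [other options]
# or schedule a relabel on first boot instead:
virt-customize -a image.qcow2 --no-selinux-relabel --touch /.autorelabel [other options]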


> to remove any doubt, I attempted to boot the latest bare (without
> customization) rawhide
> https://dl.fedoraproject.org/pub/fedora/linux/development/rawhide/Cloud/
> x86_64/images/Fedora-Cloud-Base-Generic-Rawhide-20250107.n.0.x86_64.qcow2
> And the same issue is present. However now I don't get any match with
> ausearch:
If there are no AVC denials now, what do you mean by "the same issue"?

Comment 7 Fabien Boucher 2025-01-07 10:49:08 UTC
(In reply to Zdenek Pytela from comment #6)
> > My rawhide image is "customized" via the virt-customize command so in order
> This may be the cause of unlabeled_t labels. I am not familiar with these
> tools, but I've found
> 
>    SELINUX
>        Guests which use SELinux (such as Fedora and Red Hat Enterprise
> Linux) require
>        that each file has a correct SELinux label.
> 
>        Virt-builder does not know how to give new files a label,  so  there 
> are  two
>        possible strategies it can use to ensure correct labelling:
> 
>        Automatic relabeling
>            This runs setfiles(8) just before finalizing the guest, which
> sets SELinux
>            labels correctly in the disk image.
> 
>            This is the recommended method.
> 
>        Using --no-selinux-relabel --touch /.autorelabel
>            Guest templates may already contain a file called /.autorelabel
> or you may
>            touch it.
> 
>            For  guests  that  use  SELinux, this causes restorecon(8) to run
> at first
>            boot.  Guests will reboot themselves once the first  time  you 
> use  them,
>            which is normal and harmless.
> 
> 

Yes, that's why I did the last test with the bare rawhide image, as virt-customize could have introduced the issue.

> > And the same issue is present. However now I don't get any match with
> > ausearch:
> If there are no AVC denials now, what do you mean by "the same issue"?

I mean systemd-resolved and systemd-oomd as failed units at startup.

Comment 8 Zdenek Pytela 2025-01-07 10:54:02 UTC
(In reply to Fabien Boucher from comment #7)
> > > And the same issue is present. However now I don't get any match with
> > > ausearch:
> > If there are no AVC denials now, what do you mean by "the same issue"?
> 
> I mean systemd-resolved and systemd-oomd as failed units at startup.
Do they also fail in SELinux permissive mode? Is there any related data in the journal?

Comment 9 Fabien Boucher 2025-01-07 11:27:29 UTC
(In reply to Zdenek Pytela from comment #8)
> (In reply to Fabien Boucher from comment #7)
> > > > And the same issue is present. However now I don't get any match with
> > > > ausearch:
> > > If there are no AVC denials now, what do you mean by "the same issue"?
> > 
> > I mean systemd-resolved and systemd-oomd as failed units at startup.
> Do they fail also in selinux permissive mode? Are there any related data in
> the journal?

In permissive mode, those services start fine.
In enforcing mode, those services are in the failed state.

The only issue I saw is the one I reported earlier:

Jan 06 14:18:54 localhost systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 06 14:18:54 localhost (resolved)[790]: Failed to create destination mount point node '/run/systemd/mount-rootfs/var/tmp', ignoring: Permission denied
Jan 06 14:18:54 localhost (resolved)[790]: Failed to mount /run/systemd/unit-private-tmp/var-tmp to /run/systemd/mount-rootfs/var/tmp: No such file or directory
Jan 06 14:18:54 localhost (resolved)[790]: systemd-resolved.service: Failed to set up mount namespacing: /var/tmp: No such file or directory
Jan 06 14:18:54 localhost (resolved)[790]: systemd-resolved.service: Failed at step NAMESPACE spawning /usr/lib/systemd/systemd-resolved: No such file or directory
Jan 06 14:18:54 localhost systemd[1]: systemd-resolved.service: Main process exited, code=exited, status=226/NAMESPACE
Jan 06 14:18:54 localhost systemd[1]: systemd-resolved.service: Failed with result 'exit-code'.
Jan 06 14:18:54 localhost systemd[1]: Failed to start systemd-resolved.service - Network Name Resolution.

Comment 10 Zdenek Pytela 2025-01-07 12:12:31 UTC
Understood. However, I still cannot reproduce it. Is everything at defaults?

f42# rpm -q systemd selinux-policy
systemd-257.1-1.fc42.x86_64
selinux-policy-41.27-1.fc42.noarch
f42# systemctl cat systemd-resolved
# /usr/lib/systemd/system/systemd-resolved.service
#  SPDX-License-Identifier: LGPL-2.1-or-later
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Unit]
Description=Network Name Resolution
Documentation=man:systemd-resolved.service(8)
Documentation=man:org.freedesktop.resolve1(5)
Documentation=https://systemd.io/WRITING_NETWORK_CONFIGURATION_MANAGERS
Documentation=https://systemd.io/WRITING_RESOLVER_CLIENTS

DefaultDependencies=no
After=systemd-sysctl.service systemd-sysusers.service
Before=sysinit.target network.target nss-lookup.target shutdown.target initrd-switch-ro>
Conflicts=shutdown.target initrd-switch-root.target
Wants=nss-lookup.target

[Service]
AmbientCapabilities=CAP_SETPCAP CAP_NET_RAW CAP_NET_BIND_SERVICE
BusName=org.freedesktop.resolve1
CapabilityBoundingSet=CAP_SETPCAP CAP_NET_RAW CAP_NET_BIND_SERVICE
ExecStart=!!/usr/lib/systemd/systemd-resolved
LockPersonality=yes
MemoryDenyWriteExecute=yes
NoNewPrivileges=yes
PrivateDevices=yes
PrivateTmp=disconnected
ProtectClock=yes
ProtectControlGroups=yes
ProtectHome=yes
ProtectKernelLogs=yes
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectSystem=strict
Restart=always
RestartSec=0
RestrictAddressFamilies=AF_UNIX AF_NETLINK AF_INET AF_INET6
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes
RuntimeDirectory=systemd/resolve
RuntimeDirectoryPreserve=yes
SystemCallArchitectures=native
SystemCallErrorNumber=EPERM
SystemCallFilter=@system-service
Type=notify-reload
User=systemd-resolve
ImportCredential=network.dns
ImportCredential=network.search_domains

[Install]
WantedBy=sysinit.target
Alias=dbus-org.freedesktop.resolve1.service

# /usr/lib/systemd/system/service.d/10-timeout-abort.conf
# This file is part of the systemd package.
# See https://fedoraproject.org/wiki/Changes/Shorter_Shutdown_Timer.
#
# To facilitate debugging when a service fails to stop cleanly,
# TimeoutStopFailureMode=abort is set to "crash" services that fail to stop in
# the time allotted. This will cause the service to be terminated with SIGABRT
# and a coredump to be generated.
#
# To undo this configuration change, create a mask file:
#   sudo mkdir -p /etc/systemd/system/service.d
#   sudo ln -sv /dev/null /etc/systemd/system/service.d/10-timeout-abort.conf

[Service]
TimeoutStopFailureMode=abort

# /usr/lib/systemd/system/service.d/50-keep-warm.conf
# Disable freezing of user sessions to work around kernel bugs.
# See https://bugzilla.redhat.com/show_bug.cgi?id=2321268

[Service]
Environment=SYSTEMD_SLEEP_FREEZE_USER_SESSIONS=0
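
Purely as an illustration (not a fix proposed in this report): the PrivateTmp=disconnected line above appears to be what triggers the /var/tmp mount setup that fails in the logs. It can be inspected and, for debugging only, overridden with a drop-in:

systemctl show systemd-resolved -p PrivateTmp
sudo systemctl edit systemd-resolved      # in the editor, add the two lines below:
# [Service]
# PrivateTmp=yes
sudo systemctl restart systemd-resolved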

Comment 11 Fabien Boucher 2025-01-07 12:52:30 UTC
Yes, everything was default.

However, I'm now unable to connect to the fresh nodes I'm spawning, and I wonder whether cloud-init is failing to reach the cloud provider to fetch my public key, which would explain why I cannot get a shell. Perhaps this is related to the race condition mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=2333743

Using the reproducer from Martin, I get the same issue as I saw when deploying on OpenStack.
qemu-system-x86_64 -cpu host -enable-kvm -nographic -m 2048 -drive file=Fedora-Cloud-Base-Generic-Rawhide-20250107.n.0.x86_64.qcow2,if=virtio -snapshot -cdrom cloud-init.iso

Fedora Linux 42 (Cloud Edition Prerelease)
Kernel 6.13.0-0.rc6.48.fc42.x86_64 on an x86_64 (ttyS0)

ens3: 10.0.2.15 fec0::5054:ff:fe12:3456
localhost login: root 
Password: 
[systemd]
Failed Units: 1
  systemd-resolved.service

Jan 07 12:46:09 localhost systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 07 12:46:09 localhost (resolved)[665]: Failed to create destination mount point node '/run/systemd/mount-rootfs/var/tmp', ignoring: Permission denied
Jan 07 12:46:09 localhost (resolved)[665]: Failed to mount /run/systemd/unit-private-tmp/var-tmp to /run/systemd/mount-rootfs/var/tmp: No such file or directory
Jan 07 12:46:09 localhost (resolved)[665]: systemd-resolved.service: Failed to set up mount namespacing: /var/tmp: No such file or directory
Jan 07 12:46:09 localhost (resolved)[665]: systemd-resolved.service: Failed at step NAMESPACE spawning /usr/lib/systemd/systemd-resolved: No such file or directory
Jan 07 12:46:09 localhost systemd[1]: systemd-resolved.service: Main process exited, code=exited, status=226/NAMESPACE
Jan 07 12:46:09 localhost systemd[1]: systemd-resolved.service: Failed with result 'exit-code'.

The reproducer seems to use cloud-init too, so perhaps the issue is related to cloud-init?

Note, however, that this time only systemd-resolved.service is failing, not systemd-oomd.service.
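
For reference, the cloud-init.iso passed to qemu above is presumably a NoCloud seed image (an assumption about the reproducer setup; user-data and meta-data are placeholder files you provide, e.g. containing your ssh key and an instance-id):

# build a NoCloud seed ISO (the volume label must be "cidata"):
genisoimage -output cloud-init.iso -volid cidata -joliet -rock user-data meta-data
# or, with cloud-utils installed:
cloud-localds cloud-init.iso user-data meta-data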

Comment 12 Martin Pitt 2025-01-07 13:00:08 UTC
After the recent comments this *really* looks like a duplicate of #2333743. Note that this is a race condition -- after booting, systemctl reset-failed and systemctl start systemd-resolved works, which would be strange for an SELinux policy issue.
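
For anyone retrying this, the commands referred to above are simply:

sudo systemctl reset-failed systemd-resolved.service
sudo systemctl start systemd-resolved.service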

Comment 13 Aoife Moloney 2025-02-26 13:20:34 UTC
This bug appears to have been reported against 'rawhide' during the Fedora Linux 42 development cycle.
Changing version to 42.

Comment 14 Zdenek Pytela 2025-09-02 17:57:36 UTC

*** This bug has been marked as a duplicate of bug 2334015 ***

