Bug 2297712 - SELinux is preventing check from 'mmap_zero' accesses on the memprotect labeled spc_t.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: container-selinux
Version: 41
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Daniel Walsh
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: abrt_hash:5c7c89f548c9328cf9181267c10...
Duplicates: 2311085
Depends On:
Blocks:
 
Reported: 2024-07-13 17:36 UTC by Steve
Modified: 2024-11-16 04:34 UTC (History)
CC List: 27 users

Fixed In Version: container-selinux-2.234.2-1.fc41
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-11-16 02:14:44 UTC
Type: ---
Embargoed:


Attachments
File: os_info (709 bytes, text/plain)
2024-07-13 17:36 UTC, Steve
File: description (2.30 KB, text/plain)
2024-07-13 17:36 UTC, Steve
journalctl after rebooting and starting docker (234.66 KB, text/plain)
2024-07-14 15:37 UTC, Steve
strace output for dockerd child process showing 'execve("/check", ["/check"], ...)' and EACCES (26.57 KB, text/plain)
2024-07-15 08:13 UTC, Steve

Description Steve 2024-07-13 17:36:09 UTC
Description of problem:
Reboot.
$ sudo docker

moby-engine-24.0.5-1.fc39.x86_64
kernel 6.9.8-100.fc39.x86_64

In a VM.
SELinux is preventing check from 'mmap_zero' accesses on the memprotect labeled spc_t.

*****  Plugin mmap_zero (53.1 confidence) suggests   *************************

If you do not think check should need to mmap low memory in the kernel.
Then you may be under attack by a hacker, this is a very dangerous access.
Do
contact your security administrator and report this issue.

*****  Plugin catchall_boolean (42.6 confidence) suggests   ******************

If you want to allow mmap to low allowed
Then you must tell SELinux about this by enabling the 'mmap_low_allowed' boolean.

Do
setsebool -P mmap_low_allowed 1

*****  Plugin catchall (5.76 confidence) suggests   **************************

If you believe that check should be allowed mmap_zero access on memprotect labeled spc_t by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'check' --raw | audit2allow -M my-check
# semodule -X 300 -i my-check.pp

Additional Information:
Source Context                system_u:system_r:spc_t:s0
Target Context                system_u:system_r:spc_t:s0
Target Objects                Unknown [ memprotect ]
Source                        check
Source Path                   check
Port                          <Unknown>
Host                          (removed)
Source RPM Packages           
Target RPM Packages           
SELinux Policy RPM            selinux-policy-targeted-39.7-1.fc39.noarch
Local Policy RPM              container-selinux-2.232.1-1.fc39.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     (removed)
Platform                      Linux (removed) 6.9.8-100.fc39.x86_64 #1 SMP
                              PREEMPT_DYNAMIC Fri Jul 5 16:07:15 UTC 2024 x86_64
Alert Count                   2
First Seen                    2024-07-13 09:59:28 PDT
Last Seen                     2024-07-13 10:18:50 PDT
Local ID                      2abc3bd3-a8fb-4a24-9819-dac4dc9b0bdb

Raw Audit Messages
type=AVC msg=audit(1720891130.208:223): avc:  denied  { mmap_zero } for  pid=2216 comm="check" scontext=system_u:system_r:spc_t:s0 tcontext=system_u:system_r:spc_t:s0 tclass=memprotect permissive=0


Hash: check,spc_t,spc_t,memprotect,mmap_zero

Version-Release number of selected component:
selinux-policy-targeted-39.7-1.fc39.noarch

Additional info:
reporter:       libreport-2.17.11
reason:         SELinux is preventing check from 'mmap_zero' accesses on the memprotect labeled spc_t.
package:        selinux-policy-targeted-39.7-1.fc39.noarch
type:           libreport
component:      container-selinux
hashmarkername: setroubleshoot
kernel:         6.9.8-100.fc39.x86_64
component:      container-selinux



Potential duplicate: bug 2169154

Comment 1 Steve 2024-07-13 17:36:11 UTC
Created attachment 2039518 [details]
File: os_info

Comment 2 Steve 2024-07-13 17:36:12 UTC
Created attachment 2039519 [details]
File: description

Comment 3 Daniel Walsh 2024-07-14 11:02:45 UTC
The alert tells you what to do. mmap_zero is a fairly dangerous operation; I am not sure what the container is doing, but most applications should not require this access, which is why it is off by default.

*****  Plugin mmap_zero (53.1 confidence) suggests   *************************

If you do not think check should need to mmap low memory in the kernel.
Then you may be under attack by a hacker, this is a very dangerous access.
Do
contact your security administrator and report this issue.

*****  Plugin catchall_boolean (42.6 confidence) suggests   ******************

If you want to allow mmap to low allowed
Then you must tell SELinux about this by enabling the 'mmap_low_allowed' boolean.

Do
setsebool -P mmap_low_allowed 1

Comment 5 Steve 2024-07-14 14:56:09 UTC
(In reply to Daniel Walsh from comment #3)
> The alert tells you what to do.

SELinux is an internal technology that users should never have to encounter. They certainly shouldn't have to use an arcane command to "fix" the problem.

>  mmap_zero is a fairly dangerous thing to do, I am not sure what the container is doing,

There was no container involved. The reproducer is:

Reboot.
$ sudo docker

In the default configuration:

$ systemctl list-unit-files -a docker\*
UNIT FILE      STATE    PRESET  
docker.service disabled disabled
docker.socket  enabled  enabled 

> but most applications should not require this access, which is why it is off by default.

Simply starting docker should not require this access.

This appears to be a docker bug, so reopening and changing the component to moby-engine.

Comment 6 Steve 2024-07-14 15:13:39 UTC
Here is the reproducer with moby-engine-24.0.5-1.fc39.x86_64 (in an F39 MATE VM):

Reboot.

$ systemctl -q list-units -a \*docker\*
  docker.service loaded inactive dead      Docker Application Container Engine
  docker.socket  loaded active   listening Docker Socket for the API

$ sudo docker
[sudo] password for [redacted]: 

Usage:  docker [OPTIONS] COMMAND
...

$ systemctl -q list-units -a \*docker\*
  sys-devices-virtual-net-docker0.device   loaded active plugged /sys/devices/virtual/net/docker0
  sys-subsystem-net-devices-docker0.device loaded active plugged /sys/subsystem/net/devices/docker0 
  docker.service                           loaded active running Docker Application Container Engine
  docker.socket                            loaded active running Docker Socket for the API

There are no containers:

$ sudo docker container ls -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Comment 7 Steve 2024-07-14 15:37:17 UTC
Created attachment 2039547 [details]
journalctl after rebooting and starting docker

Comment 8 Brad Smith 2024-07-14 17:01:49 UTC
I created a fresh F39 vm using vagrant (fedora cloud image), updated via dnf and restarted. Installed moby-engine and restarted. I followed the steps outlined by @Steve: (1) reboot, (2) systemctl -q list-units -a \*docker\*, (3) sudo docker, (4) systemctl -q list-units -a \*docker\*, (5) sudo docker container ls -a.

I did not see any alerts from selinux. 

[vagrant@localhost ~]$ sudo sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33

[vagrant@localhost ~]$ sudo ausearch -m AVC,USER_AVC -ts recent
<no matches>
[vagrant@localhost ~]$ sudo journalctl -t setroubleshoot
-- No entries --
[vagrant@localhost ~]$ sudo docker container ls -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Comment 9 Steve 2024-07-14 18:37:15 UTC
Brad, thanks for looking at this.

I attached a journalctl log that covers booting and starting docker.

Could you verify that docker.service is socket-activated?

$ systemctl list-unit-files -a \*docker\*
UNIT FILE      STATE    PRESET  
docker.service disabled disabled
docker.socket  enabled  enabled 

2 unit files listed.

--
> a fresh F39 vm using vagrant (fedora cloud image)

My VM install was with Fedora-MATE_Compiz-Live-x86_64-39-1.5.iso, which was then fully updated.

Sometimes there are different packages on different spins.

Comment 10 Brad Smith 2024-07-14 19:18:59 UTC
I left the vm running just in case ...

I have:

[vagrant@localhost ~]$ systemctl list-unit-files -a \*docker\*
UNIT FILE      STATE    PRESET  
docker.service disabled disabled
docker.socket  enabled  enabled 

2 unit files listed.

I am sure you are correct that there are differences in rpm composition, especially since the vagrant box is derived from the cloud image, which targets VMs or physical devices in data-center environments, while the MATE iso has a desktop environment.

Comment 11 Brad Smith 2024-07-14 19:36:50 UTC
In the same vagrant vm, I installed the mate-desktop group (dnf groupinstall mate-desktop). There are likely still differences from the MATE ISO, as well as differences due to the libvirt-based vagrant runtime.

In any case, I rebooted after all 728 rpms were installed and did not get an SELinux alert.

After the reboot:


[bgsmith@pico f39-docker]$ vagrant ssh
Last login: Sun Jul 14 16:02:48 2024 from 192.168.121.1
[vagrant@localhost ~]$ systemctl list-unit-files -a docker\*
UNIT FILE      STATE    PRESET  
docker.service disabled disabled
docker.socket  enabled  enabled 

2 unit files listed.
[vagrant@localhost ~]$ sudo docker

Usage:  docker [OPTIONS] COMMAND

A self-sufficient runtime for containers

Common Commands:
  run         Create and run a new container from an image
  exec        Execute a command in a running container

...

and
[vagrant@localhost ~]$ sudo journalctl -t setroubleshoot
-- No entries --
[vagrant@localhost ~]$ sudo ausearch -m AVC,USER_AVC -ts recent
<no matches>

I regret I am not being very helpful.

Comment 12 Steve 2024-07-14 19:57:03 UTC
Thanks, Brad. You are being very helpful in confirming that this problem is not easily reproducible.

And thanks for confirming that docker.service is socket-activated.

I installed and updated a second VM (fedora-mate-docker-test-2) with Fedora-MATE_Compiz-Live-x86_64-39-1.5.iso. Rebooted.

$ sudo dnf install moby-engine

Rebooted.

$ sudo docker

There are no AVCs:

$ sudo ausearch -i -ts boot -m avc,user_avc
<no matches>

There must be something unique about the first F39 MATE VM (fedora-mate-docker-test-1).

Comment 13 Brad Smith 2024-07-14 20:03:20 UTC
Steve - I am glad to see that you have a functioning VM. But, as you imply, there is an unanswered curiosity with the first vm. Perhaps you can reopen this if you see the problem again?

Comment 14 Steve 2024-07-14 20:08:58 UTC
I reproduced it:

$ sudo dnf install x11docker toolbox

Reboot.

$ sudo docker

$ sudo ausearch -i -ts boot -m avc,user_avc
----
type=AVC msg=audit(07/14/2024 13:04:56.862:231) : avc:  denied  { mmap_zero } for  pid=2180 comm=check scontext=system_u:system_r:spc_t:s0 tcontext=system_u:system_r:spc_t:s0 tclass=memprotect permissive=0 

Thank goodness for "dnf history". :-)

Comment 15 Steve 2024-07-14 21:03:05 UTC
(In reply to Steve from comment #14)
> $ sudo dnf install x11docker toolbox

I tried each one separately, and it appears that installing toolbox is sufficient as a reproducer.

$ dnf -Cq repoquery --requires toolbox
containers-common
flatpak-session-helper
libc.so.6(GLIBC_2.34)(64bit)
libresolv.so.2()(64bit)
podman >= 1.4.0
podman >= 1.6.4
rtld(GNU_HASH)

$ rpm -q toolbox containers-common flatpak-session-helper podman
toolbox-0.0.99.5-4.fc39.x86_64
containers-common-1-99.fc39.noarch
flatpak-session-helper-1.15.8-1.fc39.x86_64
podman-4.9.4-1.fc39.x86_64

Comment 16 Brad Smith 2024-07-14 21:40:58 UTC
I now see the same alert. In the discussion thread Dan provided, it seems that many applications that need mmap_zero access are GUI-related, and this is also well outside my area of (pseudo) expertise. Also, as in that thread, I have not been able to find 'check' or determine which process is actually responsible for the alert.

Doing some more checks but not likely to be helpful.

So, a question to ask: is it toolbox that is needed? And if so, would using podman (already installed) suffice, i.e., could moby-engine be removed?

Comment 17 Brad Smith 2024-07-14 21:53:46 UTC
So some possibly useless notes:

I can reproduce on a fresh, updated vagrant box (f39) with moby-engine and toolbox installed. Mate is irrelevant. 

The alert only fires on the first use of 'sudo docker'; subsequent runs do not trigger it. Perhaps this is expected? A question comes to mind: is this alert actually preventing any work from being done?

Comment 18 Steve 2024-07-14 22:39:55 UTC
Thanks for confirming that you can now reproduce the alert and that a desktop environment is not needed to reproduce it.

Investigating the origin of the "check" process sounds like a good next step.

Unfortunately, AVCs are often vague about critical information.

Zdenek sometimes recommends enabling full auditing in an attempt to get more information about the cause of an AVC:
https://fedoraproject.org/wiki/SELinux/Debugging#Enable_full_auditing

I did that and got this:

$ sudo ausearch -i -ts boot -m avc,user_avc
----
type=PROCTITLE msg=audit(07/14/2024 15:35:24.324:229) : proctitle=/usr/bin/qemu-arm-static /check 
type=SYSCALL msg=audit(07/14/2024 15:35:24.324:229) : arch=x86_64 syscall=mmap success=no exit=EACCES(Permission denied) a0=0x1000 a1=0xfffff000 a2=PROT_NONE a3=MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE|MAP_FIXED_NOREPLACE items=0 ppid=1998 pid=2171 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=check exe=/usr/bin/qemu-arm-static subj=system_u:system_r:spc_t:s0 key=(null) 
type=AVC msg=audit(07/14/2024 15:35:24.324:229) : avc:  denied  { mmap_zero } for  pid=2171 comm=check scontext=system_u:system_r:spc_t:s0 tcontext=system_u:system_r:spc_t:s0 tclass=memprotect permissive=0 

$ rpm -qf /usr/bin/qemu-arm-static
qemu-user-static-arm-8.1.3-5.fc39.x86_64

$ dnf -Cq repoquery --whatrequires qemu-user-static-arm
containers-common-extra-4:1-95.fc39.noarch
containers-common-extra-4:1-99.fc39.noarch
qemu-user-static-2:8.1.0-1.fc39.i686
qemu-user-static-2:8.1.0-1.fc39.x86_64
qemu-user-static-2:8.1.3-5.fc39.i686
qemu-user-static-2:8.1.3-5.fc39.x86_64

Comment 19 Steve 2024-07-14 22:50:59 UTC
Those packages were indeed installed with toolbox:

$ dnf -C history info last | egrep 'toolbox|qemu-user-static-arm|containers-common-extra'
Command Line   : install toolbox
    Install containers-common-extra-4:1-99.fc39.noarch         @updates
    Install qemu-user-static-arm-2:8.1.3-5.fc39.x86_64         @updates
    Install toolbox-0.0.99.5-4.fc39.x86_64                     @updates

And qemu-user-static-arm is under the qemu component:

$ rpm -qi qemu-user-static-arm | fgrep Source
Source RPM  : qemu-8.1.3-5.fc39.src.rpm

So why are qemu packages needed to run containers?

Comment 20 Steve 2024-07-15 00:17:05 UTC
Dan: This looks like a separate bug:

A highly informative "proctitle" field in the PROCTITLE record:

"proctitle=/usr/bin/qemu-arm-static /check"

is reduced to a completely misleading "comm" field in the AVC record:

"comm=check".

Comment 21 Steve 2024-07-15 01:26:55 UTC
Reproduced with F41 Server in a VM.

Full auditing is enabled in rawhide (F41) by default, so the PROCTITLE record (Comment 18) is logged without having to modify any auditing rules.

$ rpm -q moby-engine toolbox selinux-policy container-selinux qemu-user-static-arm containers-common-extra | sort
containers-common-extra-0.59.2-1.fc41.noarch
container-selinux-2.232.1-1.fc41.noarch
moby-engine-24.0.5-4.fc40.x86_64
qemu-user-static-arm-9.0.0-1.fc41.x86_64
selinux-policy-41.8-4.fc41.noarch
toolbox-0.0.99.5-11.fc41.x86_64

$ uname -a
Linux fedora-rawhide-1 6.10.0-0.rc7.20240712git43db1e03c086.62.fc41.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jul 12 22:31:14 UTC 2024 x86_64 GNU/Linux

$ fgrep PRETTY_NAME /etc/os-release 
PRETTY_NAME="Fedora Linux 41 (Server Edition Prerelease)"

Comment 22 Steve 2024-07-15 08:13:20 UTC
Created attachment 2039610 [details]
strace output for dockerd child process showing 'execve("/check", ["/check"], ...)' and EACCES

Starting dockerd is sufficient to trigger the AVC:

$ sudo strace -ff -s 1024 -o dockerd-5.strace dockerd

$ sudo ausearch -i -ts 00:38:51 -m avc,user_avc
----
type=AVC msg=audit(07/15/2024 00:38:51.354:568) : avc:  denied  { mmap_zero } for  pid=4242 comm=check scontext=unconfined_u:unconfined_r:spc_t:s0-s0:c0.c1023 tcontext=unconfined_u:unconfined_r:spc_t:s0-s0:c0.c1023 tclass=memprotect permissive=0 

The attachment is the strace output for pid=4242. Note the "chroot", "execve", and "EACCES":

$ less -N dockerd-5.strace.4242
...
     56 chroot("/var/lib/docker/tmp/qemu-check1548882429") = 0
     57 dup3(23, 0, 0)                          = 0
     58 dup3(24, 1, 0)                          = 1
     59 dup3(25, 2, 0)                          = 2
     60 setrlimit(RLIMIT_NOFILE, {rlim_cur=1024, rlim_max=512*1024}) = 0
     61 execve("/check", ["/check"], 0xc0004c4500 /* 19 vars */) = 0
...
    106 mmap(0x1000, 4294963200, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE|MAP_FIXED_NOREPLACE, -1, 0) = -1 EACCES (Permission denied)
...
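The EACCES above is the kernel enforcing the low-memory floor: qemu-arm-static tries to reserve the 32-bit guest address space starting at 0x1000, which falls below vm.mmap_min_addr, and mapping below that floor additionally requires SELinux's memprotect:mmap_zero permission. The floor can be inspected directly (a generic check; the value varies by distro):

```shell
# Lowest address userspace may normally mmap; Fedora's default is 65536
# (0x10000), so a fixed mapping at 0x1000 is refused without extra privilege.
cat /proc/sys/vm/mmap_min_addr
```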

Comment 23 Daniel Walsh 2024-07-15 08:47:12 UTC
spc_t is only going to happen when dockerd execs a program without setting an SELinux label on it. So this looks like dockerd is executing a program that triggers the qemu-user-static-arm application?

Can you remove this package from your system and see if the problem goes away?

Comment 24 Daniel Walsh 2024-07-15 08:54:51 UTC
Lokesh is there a reason for a hard requirement from containers-common-extra to qemu-user-static-arm?

Comment 25 Daniel Walsh 2024-07-15 09:10:54 UTC
OK, I was able to generate the AVC. It turns out I had mmap_low_allowed turned on; when I turn it off and restart dockerd, I see it.

This seems to be something to do with running code in emulation mode via /usr/bin/qemu-arm-static. I have no idea why dockerd is executing this program.

Comment 26 Steve 2024-07-15 09:23:22 UTC
(In reply to Daniel Walsh from comment #23)
> spc_t is only going to happen when dockerd execs a program without setting an SELinux label on it.
> So this looks like dockerd is executing a program that triggers a qemu-user-static-arm  application?

The toolbox package pulls in the qemu-user-static* packages, but they don't seem to be required dependencies, since they can be removed without removing anything else.

> Can you remove this package from your system and see if the problem goes away?

After removing all the qemu-user-static* packages, the AVC does not occur:

$ sudo dnf remove qemu-user-static\*

$ sudo dockerd # This is sufficient for the reproducer.

After qemu-user-static-arm is installed alone, the AVC occurs:

$ sudo dnf install qemu-user-static-arm

$ sudo dockerd

$ dnf -Cq list --installed qemu-user-static\*
Installed Packages
qemu-user-static-arm.x86_64                                                2:8.1.3-5.fc39                                                @updates

Comment 27 Daniel Walsh 2024-07-15 09:33:50 UTC
My guess is dockerd is checking whether it can support cross arch builds and qemu-user-static-arm execution is causing the AVC.

Comment 28 Brad Smith 2024-07-15 14:40:47 UTC
Thanks Steve for the thorough follow through and analysis.

Comment 29 strasharo2000 2024-09-10 10:22:49 UTC
*** Bug 2311085 has been marked as a duplicate of this bug. ***

Comment 30 strasharo2000 2024-09-10 10:23:52 UTC
I'm also hitting this on the latest Fedora 40. I have docker installed, but no containers are currently running. Please let me know if I can somehow assist with the troubleshooting.

Comment 31 Daniel Walsh 2024-09-12 16:48:00 UTC
Does this actually cause anything to break, or does it just report the AVC?

I am thinking of just dontauditing the message.
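For reference, a dontaudit of this denial as a local policy module would look roughly like the following .te sketch (an illustration of the idea, not the actual container-selinux change; the module name is made up):

```
module my_dockerd_check 1.0;

require {
        type spc_t;
        class memprotect mmap_zero;
}

# Silence (but still deny) the low-memory probe from super-privileged containers
dontaudit spc_t spc_t:memprotect mmap_zero;
```

Such a module would be compiled and loaded with checkmodule -M -m, semodule_package, and semodule -i, the same flow that audit2allow automates.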

Comment 32 Lokesh Mandvekar 2024-09-18 14:14:52 UTC
(In reply to Daniel Walsh from comment #24)
> Lokesh is there a reason for a hard requirement from containers-common-extra
> to qemu-user-static-arm?

Sorry about the late response on this. The qemu-user-static-* packages are hard requirements on Fedora server installations because fedora-server ignores `Recommends`.

Comment 33 Lokesh Mandvekar 2024-09-18 14:34:08 UTC
Dan has an upstream PR at https://github.com/containers/container-selinux/pull/328 . There are test rpms generated via packit, which you can try in an F40 env using the instructions at https://dashboard.packit.dev/jobs/copr/1880882 . CAUTION: the test rpm will break the upgrade path from the official Fedora repos; this can be fixed by uninstalling it and reinstalling from the official repo, but it's best to try it in a disposable env.

Comment 34 Alexey 2024-11-09 09:57:17 UTC
So, what is the bug status now?

Comment 35 Lokesh Mandvekar 2024-11-11 11:10:58 UTC
I'll cut a new release in maybe an hour. Sorry about the delay.

Comment 36 Fedora Update System 2024-11-11 13:05:09 UTC
FEDORA-2024-ca940914ba (container-selinux-2.234.1-1.fc41) has been submitted as an update to Fedora 41.
https://bodhi.fedoraproject.org/updates/FEDORA-2024-ca940914ba

Comment 37 Fedora Update System 2024-11-11 13:05:31 UTC
FEDORA-2024-e1f129aec4 (container-selinux-2.234.1-1.fc40) has been submitted as an update to Fedora 40.
https://bodhi.fedoraproject.org/updates/FEDORA-2024-e1f129aec4

Comment 38 Fedora Update System 2024-11-12 02:06:41 UTC
FEDORA-2024-925d56702d has been pushed to the Fedora 41 testing repository.
Soon you'll be able to install the update with the following command:
`sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2024-925d56702d`
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2024-925d56702d

See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.

Comment 39 Fedora Update System 2024-11-12 03:22:16 UTC
FEDORA-2024-c3e23922c8 has been pushed to the Fedora 40 testing repository.
Soon you'll be able to install the update with the following command:
`sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2024-c3e23922c8`
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2024-c3e23922c8

See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.

Comment 40 Aoife Moloney 2024-11-13 12:34:33 UTC
This message is a reminder that Fedora Linux 39 is nearing its end of life.
Fedora will stop maintaining and issuing updates for Fedora Linux 39 on 2024-11-26.
It is Fedora's policy to close all bug reports from releases that are no longer
maintained. At that time this bug will be closed as EOL if it remains open with a
'version' of '39'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, change the 'version' 
to a later Fedora Linux version. Note that the version field may be hidden.
Click the "Show advanced fields" button if you do not see it.

Thank you for reporting this issue and we are sorry that we were not 
able to fix it before Fedora Linux 39 is end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora Linux, you are encouraged to change the 'version' to a later version
prior to this bug being closed.

Comment 41 Alexey 2024-11-13 12:57:09 UTC
I don't understand: this issue is still present in Fedora 41! I encountered it after upgrading from version 40 to 41.

Comment 42 Fedora Update System 2024-11-16 02:14:44 UTC
FEDORA-2024-925d56702d (container-selinux-2.234.2-1.fc41) has been pushed to the Fedora 41 stable repository.
If problem still persists, please make note of it in this bug report.

Comment 43 Alexey 2024-11-16 04:34:43 UTC
The bug is fixed on my Fedora 41 system after the latest update.

