Bug 1425278

Summary: "SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)" error message in logs
Product: Red Hat Enterprise Linux 7
Component: container-selinux
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CANTFIX
Severity: high
Priority: medium
Reporter: Sam Ghods <ceptorial>
Assignee: Daniel Walsh <dwalsh>
QA Contact: atomic-bugs <atomic-bugs>
CC: amurdaca, andcosta, ansverma, arun.ghanta, astupnik, bfahr, carl, cfillekes, christoph.karl, development-K9RvgheM1OmXW9pm, dominik.mierzejewski, dornelas, dustymabe, dwalsh, egegunes, erich, fshaikh, gchakkar, hendrik, jkaur, jnordell, jnovy, jrosenta, jsantos, knakayam, lars+bugzilla.redhat.com, lsm5, marcandre.lureau, mheon, miabbott, michael.morello, mrhodes, pasik, plarsen, stwalter, tcarlin, tsweeney, william.caban
Target Milestone: rc
Keywords: Extras, PrioBumpGSS
Type: Bug
Last Closed: 2020-01-07 21:26:23 UTC
Bug Blocks: 1186913, 1420851

Description Sam Ghods 2017-02-21 04:23:54 UTC
Description of problem:

When I run:

sudo docker run -ti nginx bash

...with the latest docker and container-selinux RPMs, I get the following error message in journalctl:

SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)

It *seems* to be harmless, in that I can't find any adverse effects from the presence of this message, but I'd love to figure out what's going on in case something is actually wrong and will bite us later. Here is the full journalctl -x output when I execute the above command:


Feb 20 20:21:28 centos7-server dockerd-current[21204]: time="2017-02-20T20:21:28.315183595-08:00" level=info msg="{Action=create, Username=sam, LoginUID=1002, PID=30858}"
Feb 20 20:21:28 centos7-server kernel: XFS (dm-15): Mounting V5 Filesystem
Feb 20 20:21:28 centos7-server kernel: XFS (dm-15): Ending clean mount
Feb 20 20:21:28 centos7-server kernel: XFS (dm-15): Unmounting Filesystem
Feb 20 20:21:28 centos7-server kernel: XFS (dm-15): Mounting V5 Filesystem
Feb 20 20:21:28 centos7-server kernel: XFS (dm-15): Ending clean mount
Feb 20 20:21:28 centos7-server kernel: XFS (dm-15): Unmounting Filesystem
Feb 20 20:21:28 centos7-server dockerd-current[21204]: time="2017-02-20T20:21:28.502340471-08:00" level=info msg="{Action=attach, Username=sam, LoginUID=1002, PID=30858}"
Feb 20 20:21:28 centos7-server dockerd-current[21204]: time="2017-02-20T20:21:28.503627371-08:00" level=info msg="{Action=start, Username=sam, LoginUID=1002, PID=30858}"
Feb 20 20:21:28 centos7-server kernel: XFS (dm-15): Mounting V5 Filesystem
Feb 20 20:21:28 centos7-server kernel: XFS (dm-15): Ending clean mount
Feb 20 20:21:28 centos7-server kernel: device veth0286b89 entered promiscuous mode
Feb 20 20:21:28 centos7-server kernel: IPv6: ADDRCONF(NETDEV_UP): veth0286b89: link is not ready
Feb 20 20:21:28 centos7-server systemd[1]: Scope libcontainer-30924-systemd-test-default-dependencies.scope has no PIDs. Refusing.
Feb 20 20:21:28 centos7-server systemd[1]: Scope libcontainer-30924-systemd-test-default-dependencies.scope has no PIDs. Refusing.
Feb 20 20:21:28 centos7-server systemd[1]: Started docker container 4ec670e5a4773c509e569a8d5c11e73f3ce47f13e070965f127e4cd261932654.
-- Subject: Unit docker-4ec670e5a4773c509e569a8d5c11e73f3ce47f13e070965f127e4cd261932654.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker-4ec670e5a4773c509e569a8d5c11e73f3ce47f13e070965f127e4cd261932654.scope has finished starting up.
--
-- The start-up result is done.
Feb 20 20:21:28 centos7-server systemd[1]: Starting docker container 4ec670e5a4773c509e569a8d5c11e73f3ce47f13e070965f127e4cd261932654.
-- Subject: Unit docker-4ec670e5a4773c509e569a8d5c11e73f3ce47f13e070965f127e4cd261932654.scope has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker-4ec670e5a4773c509e569a8d5c11e73f3ce47f13e070965f127e4cd261932654.scope has begun starting up.
Feb 20 20:21:28 centos7-server kernel: SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
Feb 20 20:21:28 centos7-server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth0286b89: link becomes ready
Feb 20 20:21:28 centos7-server kernel: docker0: port 5(veth0286b89) entered forwarding state
Feb 20 20:21:28 centos7-server kernel: docker0: port 5(veth0286b89) entered forwarding state
Feb 20 20:21:28 centos7-server systemd[1]: Scope libcontainer-30958-systemd-test-default-dependencies.scope has no PIDs. Refusing.
Feb 20 20:21:28 centos7-server systemd[1]: Scope libcontainer-30958-systemd-test-default-dependencies.scope has no PIDs. Refusing.
Feb 20 20:21:28 centos7-server dockerd-current[21204]: time="2017-02-20T20:21:28.684434380-08:00" level=info msg="{Action=resize, Username=sam, LoginUID=1002, PID=30858}"


I'm seeing this with the latest docker and container-selinux builds for both 1.12 and 1.10.

Version-Release number of selected component (if applicable):

docker-1.12.5-14.el7.centos.x86_64
container-selinux-1.12.5-14.el7.centos.x86_64
(also with 1.10.3)
CentOS Linux release 7.3.1611 (Core)
Kernel: 3.10.0-514.6.1.el7.x86_64


How reproducible:

Every time I run the docker run command.

Comment 2 Sam Ghods 2017-03-02 00:59:55 UTC
Hi... any thoughts on this?

Comment 5 Sam Ghods 2017-03-14 05:42:15 UTC
Has anyone else been able to reproduce this? Is it a known issue? Any timeline for a fix?

Comment 6 Joel Rosental R. 2017-04-11 12:00:53 UTC
any updates on this?

Comment 8 Joel Rosental R. 2017-05-08 10:36:22 UTC
any updates on this?

Comment 9 Peter Larsen 2017-06-14 14:52:01 UTC
(In reply to Joel Rosental R. from comment #8)
> any updates on this?

I'm seeing this too - I found this BZ that has details on a fix for F25:
https://bugzilla.redhat.com/show_bug.cgi?id=1312665

Comment 10 Dominic Robinson 2017-08-07 17:44:39 UTC
I'm also seeing this.

Comment 12 Daniel Walsh 2017-10-11 16:50:42 UTC
What is going on here is that the container runtime is attempting to mount the mqueue device inside of the container with a different label than the host, but /dev/mqueue is shared with the host. Since there is an existing label, the kernel is complaining.
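For what it's worth, the kernel check itself can be demonstrated without any container runtime. The following is a hypothetical sketch only, not taken from this bug: the mount point and MCS categories are made up, and it assumes a root shell on an SELinux-enabled host where /dev/mqueue is already mounted with default labeling. The mqueue filesystem has one superblock per IPC namespace, so asking for a different context= on that same superblock is rejected and logs exactly this message.

# Hypothetical reproduction sketch (run as root on an SELinux-enabled host):
mkdir -p /tmp/mq
# /dev/mqueue already holds the superblock's security settings, so a second
# mount of the same superblock with a different context= option is refused:
mount -t mqueue -o context=system_u:object_r:container_file_t:s0:c1,c2 mqueue /tmp/mq
dmesg | tail -n 1   # SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)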

Comment 13 Daniel Walsh 2017-10-11 16:52:04 UTC
What needs to happen is that docker stops mounting /dev/mqueue in the container from the host with a different label, or only does so after the switch of namespaces.

This is not a container-selinux issue but a docker issue.

Comment 16 Daniel Walsh 2017-11-29 13:31:23 UTC
There are no adverse effects to this.  It is a potential kernel issue, but it should simply be ignored by the customer.  Nothing is going to break.

Comment 18 Daniel Walsh 2018-03-16 18:47:12 UTC
This is noise and should be ignored.
There is no real issue other than the splatter in the logs.

Comment 19 William Caban 2018-09-25 13:44:15 UTC
To add to the bug report: this is not isolated to docker. I'm also seeing this when using podman to run containers on CentOS Linux release 7.5.1804.

[  227.660181] cni0: port 1(veth967ed9b1) entered blocking state
[  227.660184] cni0: port 1(veth967ed9b1) entered disabled state
[  227.660237] device veth967ed9b1 entered promiscuous mode
[  227.660283] cni0: port 1(veth967ed9b1) entered blocking state
[  227.660285] cni0: port 1(veth967ed9b1) entered forwarding state
[  227.939653] SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)

Comment 20 Ege Güneş 2018-09-30 18:24:16 UTC
I see this when I try to use vagrant from a container using podman on Fedora 29 Beta.

Podman version: 0.8.4

Command to run container:

sudo podman run -it --rm -v /run/libvirt:/run/libvirt:Z -v $(pwd):/root:Z localhost/vagrant vagrant up

Logs:

Sep 30 21:08:39 Home systemd[1]: Started libpod-conmon-4bcfd7439fc3b45abc61cdb6de4dea4457691b6859394cff3c27ab8ddaa0a120.scope.
Sep 30 21:08:39 Home systemd[1]: libcontainer-21658-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Sep 30 21:08:39 Home systemd[1]: libcontainer-21658-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Sep 30 21:08:39 Home systemd[1]: Created slice libcontainer_21658_systemd_test_default.slice.
Sep 30 21:08:39 Home systemd[1]: Removed slice libcontainer_21658_systemd_test_default.slice.
Sep 30 21:08:39 Home systemd[1]: Started libcontainer container 4bcfd7439fc3b45abc61cdb6de4dea4457691b6859394cff3c27ab8ddaa0a120.
Sep 30 21:08:39 Home kernel: SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
Sep 30 21:17:25 Home audit[22760]: AVC avc:  denied  { connectto } for  pid=22760 comm="batch_action.r*" path="/run/libvirt/libvirt-sock" scontext=system_u:system_r:container_t:s0:c57,c527 tcontext=system_u:system_r:virtd_t:s0-s0:c0.c1023 tclass=unix_stream_socket permissive=0
Sep 30 21:08:40 Home systemd[1]: libpod-4bcfd7439fc3b45abc61cdb6de4dea4457691b6859394cff3c27ab8ddaa0a120.scope: Consumed 719ms CPU time

Comment 21 Daniel Walsh 2018-10-02 09:06:28 UTC
This is SELinux doing what it is supposed to do.  It is blocking the container from interacting with libvirt.

You should disable SELinux separation within the container to make this work:

sudo podman run -it --rm -v /run/libvirt:/run/libvirt -v $(pwd):/root --security-opt label=disable localhost/vagrant vagrant up

Comment 22 Matthew Heon 2018-11-13 16:18:30 UTC
We've seen this problem with Podman as well, so this is clearly not just a Docker issue. Moving to runc, where the issue likely lies.

Comment 23 Hendrik M Halkow 2019-03-26 17:56:19 UTC
I see this issue when I try to run Buildah or Podman within a Podman container. Steps to reproduce:

# Build an image that contains podman or buildah
dnf install --installroot ... podman buildah
...
buildah commit ...

# Run a container using that image
podman run --rm -it -v /mnt/containers/$(whoami):/var/lib/containers:rw,Z --cap-add=SYS_ADMIN "${IMAGE}"

# Running podman inside that container causes the issue.
podman run --rm -it alpine

# Some more potentially useful info
xfs_info /mnt/containers
meta-data=/dev/mapper/vg0-containers isize=512    agcount=4, agsize=4194048 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=16776192, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=8191, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


ls -laZ /mnt/containers
total 3
drwxr-xr-x. 5 root    root    system_u:object_r:unlabeled_t:s0                  47 Mar 26 07:19 .
drwxr-xr-x. 4 root    root    system_u:object_r:mnt_t:s0                      4096 Mar 20 00:55 ..
drwx------. 3 hendrik hendrik system_u:object_r:container_file_t:s0:c335,c541   21 Mar 20 01:34 hendrik
drwx------. 3 root    root    system_u:object_r:container_file_t:s0:c465,c650   21 Mar 26 07:19 root

Comment 24 Daniel Walsh 2019-03-27 11:01:57 UTC
You will have to disable SELinux to run podman within a container.  It is going to do things that SELinux will block.

If you want to run buildah within a locked-down container, this should work as long as you use

--isolation=chroot on `buildah bud` and `buildah run` commands

SELinux will block the use of runc inside of a container.
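For example, inside the outer container a build could look like this (a sketch only; the image tag and build-context path below are placeholders):

# Chroot isolation keeps buildah from invoking runc inside the container.
# BUILDAH_ISOLATION sets the default; the --isolation flag does the same per command.
export BUILDAH_ISOLATION=chroot
buildah bud --isolation=chroot -t example/image ./build-context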

Comment 25 Andre Costa 2020-01-05 11:37:14 UTC
Hi,

This is also happening on the latest RHCOS 4.2 OS:

# dmesg -HP
[Jan 5 11:33] SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
[  +0.516625] SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
[  +0.948788] SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
[  +1.009263] SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
[  +0.267986] SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)

# rpm-ostree status
State: idle
AutomaticUpdates: disabled
Deployments:
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b27d94ab2fb60005be6ee12300508fbd7d23c717d0332045a64eb925ddbf4a4
              CustomOrigin: Managed by machine-config-operator
                   Version: 42.81.20191210.1 (2019-12-10T19:52:52Z)

  pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc71fbd134f063d9fc0ccc78933b89c8dd2b1418b7a7b85bb70de87bc80486d7
              CustomOrigin: Image generated via coreos-assembler
                   Version: 42.80.20191002.0 (2019-10-02T13:31:28Z)
# uname -a
Linux ocp4-master2 4.18.0-147.0.3.el8_1.x86_64 #1 SMP Mon Nov 11 12:58:36 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Comment 26 Tom Sweeney 2020-01-06 22:19:00 UTC
I think this might be a separate problem.  Regardless, I'm changing the Component to container-selinux and assigning to Dan Walsh.

Comment 27 Daniel Walsh 2020-01-07 21:26:23 UTC
This is a kernel issue and can be safely ignored.  It happens buried deep in the runc code and is not likely to be fixed soon.

Comment 28 Cheryl A Fillekes 2020-01-27 22:52:06 UTC
*** Bug 1775711 has been marked as a duplicate of this bug. ***

Comment 29 Arun Ghanta 2021-03-17 16:01:33 UTC
(In reply to Daniel Walsh from comment #27)
> This is a kernel issue and can be safely ignored.  It happens buried deep
> in the runc code and is not likely to be fixed soon.

I'm attempting to run podman rootless, under a service account named podman, on a RHEL 8.3 (latest stable release) virtual server.
I've applied all of the settings below, as specified in Red Hat's documentation.

echo "user.max_user_namespaces=28633" > /etc/sysctl.d/userns.conf
sysctl -p /etc/sysctl.d/userns.conf
echo "podman:165537:65536" >> /etc/subuid
echo "podman:165537:65536" >> /etc/subgid

I'm running into several issues.
If I try to bring up a container running /sbin/init so that systemctl is available, that doesn't work:
I get an error that connecting to D-Bus failed.
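For reference, a minimal sketch of that case (the image reference is a placeholder, and it assumes the image actually ships systemd):

# --systemd=true (the default) sets up the tmpfs mounts systemd expects
# (/run, /sys/fs/cgroup, ...) when the command is /sbin/init;
# --systemd=always forces that setup regardless of the command.
podman run -d --name inittest --systemd=always <ImageID> /sbin/init
podman exec -it inittest systemctl status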

If I try to map a container volume onto the host, I end up with this error:
kernel: SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
I don't mind ignoring it. The problem is, the volume isn't being mounted.

podman run -dit --rm -v marketing:/var/www:z -p 5001:8080 <ImageID>

The worst part is, it doesn't even run. It exits.