Bug 1425278
| Summary: | "SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)" error message in logs | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Sam Ghods <ceptorial> |
| Component: | container-selinux | Assignee: | Daniel Walsh <dwalsh> |
| Status: | CLOSED CANTFIX | QA Contact: | atomic-bugs <atomic-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.3 | CC: | amurdaca, andcosta, ansverma, arun.ghanta, astupnik, bfahr, carl, cfillekes, christoph.karl, development-K9RvgheM1OmXW9pm, dominik.mierzejewski, dornelas, dustymabe, dwalsh, egegunes, erich, fshaikh, gchakkar, hendrik, jkaur, jnordell, jnovy, jrosenta, jsantos, knakayam, lars+bugzilla.redhat.com, lsm5, marcandre.lureau, mheon, miabbott, michael.morello, mrhodes, pasik, plarsen, stwalter, tcarlin, tsweeney, william.caban |
| Target Milestone: | rc | Keywords: | Extras, PrioBumpGSS |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-01-07 21:26:23 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1186913, 1420851 | | |
Description — Sam Ghods, 2017-02-21 04:23:54 UTC
Hi... any thoughts on this? Has anyone else been able to reproduce this? Is it a known issue? Any timeline for a fix?

any updates on this?

any updates on this?

(In reply to Joel Rosental R. from comment #8)
> any updates on this?

I'm seeing this too - I found this BZ that has details on a fix for F25: https://bugzilla.redhat.com/show_bug.cgi?id=1312665

I'm also seeing this.

What is going on here is that SELinux is attempting to mount the mqueue device inside of the container with a different label than the host, but /dev/mqueue is shared with the host. Since there is an existing label, the kernel complains. What needs to happen is that docker stops mounting /dev/mqueue in the container from the host with a different label, or only does so after the switch of namespaces. This is not a container-selinux issue but a docker issue.

There are no adverse effects to this. It is a potential kernel issue, but it should just be ignored by the customer. Nothing is going to break. This is noise and should be ignored; there is no real issue other than the splatter in the logs.

To add to the bug report: this is not isolated to docker. I'm also seeing this when using podman to run containers on CentOS Linux release 7.5.1804:

```
[  227.660181] cni0: port 1(veth967ed9b1) entered blocking state
[  227.660184] cni0: port 1(veth967ed9b1) entered disabled state
[  227.660237] device veth967ed9b1 entered promiscuous mode
[  227.660283] cni0: port 1(veth967ed9b1) entered blocking state
[  227.660285] cni0: port 1(veth967ed9b1) entered forwarding state
[  227.939653] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
```

I see this when I try to use vagrant from a container using podman on Fedora 29 Beta.
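Picking up the maintainer's point above that the message is harmless log noise: until runc stops remounting mqueue with a different label, one pragmatic mitigation is simply to filter the line out when reviewing kernel logs. A minimal sketch; `filter_benign` is a hypothetical helper name, not part of any tool mentioned in this bug:

```shell
# Hypothetical helper: drop the known-benign mqueue message from log output.
filter_benign() {
  grep -v 'Same superblock, different security settings'
}

# Demonstration with sample lines (in practice: dmesg | filter_benign):
printf '%s\n' \
  'SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)' \
  'device veth967ed9b1 entered promiscuous mode' \
  | filter_benign
```

Real AVC denials (see below) should of course not be filtered this way; the pattern matches only the benign superblock message.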
Podman version: 0.8.4

Command to run container:

```
sudo podman run -it --rm -v /run/libvirt:/run/libvirt:Z -v $(pwd):/root:Z localhost/vagrant vagrant up
```

Logs:

```
Sep 30 21:08:39 Home systemd[1]: Started libpod-conmon-4bcfd7439fc3b45abc61cdb6de4dea4457691b6859394cff3c27ab8ddaa0a120.scope.
Sep 30 21:08:39 Home systemd[1]: libcontainer-21658-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Sep 30 21:08:39 Home systemd[1]: libcontainer-21658-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Sep 30 21:08:39 Home systemd[1]: Created slice libcontainer_21658_systemd_test_default.slice.
Sep 30 21:08:39 Home systemd[1]: Removed slice libcontainer_21658_systemd_test_default.slice.
Sep 30 21:08:39 Home systemd[1]: Started libcontainer container 4bcfd7439fc3b45abc61cdb6de4dea4457691b6859394cff3c27ab8ddaa0a120.
Sep 30 21:08:39 Home kernel: SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
Sep 30 21:17:25 Home audit[22760]: AVC avc: denied { connectto } for pid=22760 comm="batch_action.r*" path="/run/libvirt/libvirt-sock" scontext=system_u:system_r:container_t:s0:c57,c527 tcontext=system_u:system_r:virtd_t:s0-s0:c0.c1023 tclass=unix_stream_socket permissive=0
Sep 30 21:08:40 Home systemd[1]: libpod-4bcfd7439fc3b45abc61cdb6de4dea4457691b6859394cff3c27ab8ddaa0a120.scope: Consumed 719ms CPU time
```

This is SELinux doing what it is supposed to do. It is blocking the container from interacting with libvirt. You should disable SELinux separation within the container to make this work:

```
sudo podman run -it --rm -v /run/libvirt:/run/libvirt -v $(pwd):/root --security-opt label=disable localhost/vagrant vagrant up
```

We've seen this problem on Podman as well, so this is clearly not just a Docker issue. Moving to runc, where the issue likely lies.

I see this issue when I try to run Buildah or Podman within a Podman container.
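A side note on the AVC record in the logs above: unlike the mqueue message, that denial is a real enforcement event, and its `scontext`/`tcontext` fields show exactly which domains collided (the container's `container_t` versus libvirt's `virtd_t`), which is why `--security-opt label=disable` resolves it. A small sketch pulling the two contexts out of such a record; the `avc` variable holds a shortened, illustrative copy of the quoted line:

```shell
# Shortened copy of the AVC record quoted above (illustrative only).
avc='avc: denied { connectto } scontext=system_u:system_r:container_t:s0:c57,c527 tcontext=system_u:system_r:virtd_t:s0-s0:c0.c1023 tclass=unix_stream_socket'

# Extract the source (process) and target (socket) security contexts.
echo "$avc" | grep -o 'scontext=[^ ]*'
echo "$avc" | grep -o 'tcontext=[^ ]*'
```

The source context's type field (third colon-separated component) is what a policy module or a `label=disable` decision hinges on.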
Steps to reproduce:

```
# Build an image that contains podman or buildah
dnf install --installroot ... podman buildah ...
buildah commit ...

# Run a container using that image
podman run --rm -it -v /mnt/containers/$(whoami):/var/lib/containers:rw,Z --cap-add=SYS_ADMIN "${IMAGE}"

# Running podman inside that container causes the issue.
podman run --rm -it alpine
```

Some more potentially useful info:

```
xfs_info /mnt/containers
meta-data=/dev/mapper/vg0-containers isize=512  agcount=4, agsize=4194048 blks
         =                           sectsz=512 attr=2, projid32bit=1
         =                           crc=1      finobt=1, sparse=1, rmapbt=0
         =                           reflink=0
data     =                           bsize=4096 blocks=16776192, imaxpct=25
         =                           sunit=0    swidth=0 blks
naming   =version 2                  bsize=4096 ascii-ci=0, ftype=1
log      =internal log               bsize=4096 blocks=8191, version=2
         =                           sectsz=512 sunit=0 blks, lazy-count=1
realtime =none                       extsz=4096 blocks=0, rtextents=0

ls -laZ /mnt/containers
total 3
drwxr-xr-x. 5 root    root    system_u:object_r:unlabeled_t:s0                  47 Mar 26 07:19 .
drwxr-xr-x. 4 root    root    system_u:object_r:mnt_t:s0                      4096 Mar 20 00:55 ..
drwx------. 3 hendrik hendrik system_u:object_r:container_file_t:s0:c335,c541   21 Mar 20 01:34 hendrik
drwx------. 3 root    root    system_u:object_r:container_file_t:s0:c465,c650   21 Mar 26 07:19 root
```

You will have to disable SELinux to run podman within a container. It is going to do stuff that SELinux will block. If you want to run buildah within a locked-down container, this should work as long as you use --isolation=chroot on `buildah bud` and `buildah run` commands. SELinux will block the use of runc inside of a container.

Hi, this is also happening on the latest RHCOS 4.2 OS:

```
# dmesg -HP
[Jan 5 11:33]  SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[  +0.516625] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[  +0.948788] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[  +1.009263] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
[  +0.267986] SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)

# rpm-ostree status
State: idle
AutomaticUpdates: disabled
Deployments:
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b27d94ab2fb60005be6ee12300508fbd7d23c717d0332045a64eb925ddbf4a4
              CustomOrigin: Managed by machine-config-operator
                   Version: 42.81.20191210.1 (2019-12-10T19:52:52Z)
  pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc71fbd134f063d9fc0ccc78933b89c8dd2b1418b7a7b85bb70de87bc80486d7
              CustomOrigin: Image generated via coreos-assembler
                   Version: 42.80.20191002.0 (2019-10-02T13:31:28Z)

# uname -a
Linux ocp4-master2 4.18.0-147.0.3.el8_1.x86_64 #1 SMP Mon Nov 11 12:58:36 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```

I think this might be a separate problem. Regardless, I'm changing the Component to container-selinux and assigning to Dan Walsh.

This is a kernel issue and can be safely ignored. It happens buried down in runc code and is not likely to be fixed soon.

*** Bug 1775711 has been marked as a duplicate of this bug. ***

(In reply to Daniel Walsh from comment #27)
> This is a kernel issue and can be safely ignored. It happens buried down
> in runc code and is not likely to be fixed soon.

I'm attempting to run podman rootless, with username "podman" created as a service account on a RHEL 8.3 (latest stable release) virtual server, with all the setup specified in Red Hat's documentation:

```
echo "user.max_user_namespaces=28633" > /etc/sysctl.d/userns.conf
sysctl -p /etc/sysctl.d/userns.conf
echo "podman:165537:65536" >> /etc/subuid
echo "podman:165537:65536" >> /etc/subgid
```

I have too many issues. If I try to bring up a container running /sbin/init for systemctl availability, that doesn't work; I get an error that connecting to DBus failed. If I try to map a containers drive on the host, I end up with this error.
```
kernel: SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
```

I don't mind ignoring it. The problem is, it's not mounting the volume:

```
podman run -dit --rm -v marketing:/var/www:z -p 5001:8080 <ImageID>
```

The worst part is, it doesn't even run. It exits.
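On the rootless setup quoted above: the `/etc/subuid` and `/etc/subgid` entries must follow the `user:start:count` format, and a malformed entry makes rootless podman fail well before any SELinux labeling is involved. A quick sanity check, run here against the literal string from the report rather than the live file:

```shell
# Validate a subuid/subgid entry of the form user:start:count.
# This checks the literal string from the report, not /etc/subuid itself.
entry='podman:165537:65536'
if echo "$entry" | grep -Eq '^[A-Za-z_][A-Za-z0-9_.-]*:[0-9]+:[0-9]+$'; then
  echo 'format ok'
else
  echo 'malformed entry'
fi
```

After editing `/etc/subuid`/`/etc/subgid`, the new ranges only take effect for the user after `podman system migrate` (or a fresh login session), which is a separate common stumbling block.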