Bug 1867892
| Summary: | running containerized buildah leads to error | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Suhaas Bhat <subhat> |
| Component: | podman | Assignee: | Tom Sweeney <tsweeney> |
| Status: | CLOSED ERRATA | QA Contact: | Alex Jia <ajia> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 8.2 | CC: | ajia, bbaude, dornelas, dwalsh, fratto, gscrivan, jligon, jnovy, leiwang, lsm5, mheon, pthomas, ronald.van.zantvoort, smccarty, tsweeney, umohnani, van.zantvoort, ypu |
| Target Milestone: | rc | Flags: | pm-rhel: mirror+ |
| Target Release: | 8.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | podman-2.2 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-05-18 15:32:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1186913, 1823899 | | |
Description
Suhaas Bhat
2020-08-11 07:13:51 UTC
The quay.io/buildah/stable image is the unsupported upstream version. However, I can reproduce this with RHEL 8 podman using our registry.redhat.io/rhel8/buildah image
# rpm -q podman
podman-1.9.3-2.module+el8.2.1+6867+366c07d6.x86_64
# podman run -ti --device /dev/fuse --rm registry.redhat.io/rhel8/buildah bash
[root@df5a32ec5f5d /]# env
LANG=C.utf8
HOSTNAME=df5a32ec5f5d
container=oci
PWD=/
HOME=/root
BUILDAH_ISOLATION=chroot
TERM=xterm
_BUILDAH_STARTED_IN_USERNS=
SHLVL=1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
_=/usr/bin/env
[root@df5a32ec5f5d /]# rpm -q buildah fuse-overlayfs
buildah-1.14.9-1.module+el8.2.1+6689+748e6520.x86_64
fuse-overlayfs-1.0.0-2.module+el8.2.1+6465+1a51e8b6.x86_64
[root@df5a32ec5f5d /]# buildah info
{
"host": {
"CgroupVersion": "v1",
"Distribution": {
"distribution": "\"rhel\"",
"version": "8.2"
},
"MemTotal": 2012942336,
"MemFree": 471195648,
"OCIRuntime": "runc",
"SwapFree": 2148470784,
"SwapTotal": 2151673856,
"arch": "amd64",
"cpus": 2,
"hostname": "df5a32ec5f5d",
"kernel": "4.18.0-193.13.2.el8_2.x86_64",
"os": "linux",
"rootless": true,
"uptime": "362h 17m 52.62s (Approximately 15.08 days)"
},
"store": {
"ContainerStore": {
"number": 0
},
"GraphDriverName": "overlay",
"GraphOptions": [
"overlay.imagestore=/var/lib/shared",
"overlay.mount_program=/usr/bin/fuse-overlayfs",
"overlay.mountopt=nodev,metacopy=on"
],
"GraphRoot": "/var/lib/containers/storage",
"GraphStatus": {
"Backing Filesystem": "overlayfs",
"Native Overlay Diff": "false",
"Supports d_type": "true",
"Using metacopy": "false"
},
"ImageStore": {
"number": 0
},
"RunRoot": "/var/run/containers/storage"
}
}
[root@df5a32ec5f5d /]# buildah --storage-opt=overlay.mount_program=/usr/bin/fuse-overlayfs from registry.access.redhat.com/ubi8
Getting image source signatures
Copying blob 77c58f19bd6e done
Copying blob 47db82df7f3f done
Copying config a1f8c96997 done
Writing manifest to image destination
Storing signatures
ubi8-working-container
[root@df5a32ec5f5d /]# buildah --storage-opt=overlay.mount_program=/usr/bin/fuse-overlayfs run --isolation=chroot ubi8-working-container ls /
ERRO error unmounting /var/lib/containers/storage/overlay/30c058cdadc888177361dd14a7ed7edab441c58525b341df321f07bc11440e68/merged: invalid argument
error mounting container "1ae176ca72b3da7c70af31db7434bcf6f94b07dbc0328bc7e4e8fc9579d0dc2e": error mounting build container "1ae176ca72b3da7c70af31db7434bcf6f94b07dbc0328bc7e4e8fc9579d0dc2e": error creating overlay mount to /var/lib/containers/storage/overlay/30c058cdadc888177361dd14a7ed7edab441c58525b341df321f07bc11440e68/merged: using mount program /usr/bin/fuse-overlayfs: fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such device
: exit status 1
ERRO exit status 1
I'm not sure if this is an issue with buildah itself or with the way that the image is configured, so I'll leave the component as buildah.
If you load the fuse kernel module, does it work?

(In reply to Daniel Walsh from comment #5)
> If you load the fuse kernel module, does it work?

Yes, this seems to solve it, thanks. I had incorrectly assumed the module was loaded because /dev/fuse existed.

```
# ls -l /dev/fuse
crw-rw-rw-. 1 root root 10, 229 Aug 18 16:49 /dev/fuse
# lsmod | grep fuse | wc -l
0
# modprobe fuse
# ls -l /dev/fuse
crw-rw-rw-. 1 root root 10, 229 Aug 18 16:53 /dev/fuse
# lsmod | grep fuse
fuse 131072 1
# podman run -ti --device /dev/fuse --rm registry.redhat.io/rhel8/buildah bash
[root@04ecab79def1 /]# buildah from registry.access.redhat.com/ubi8
Getting image source signatures
Copying blob 47db82df7f3f done
Copying blob 77c58f19bd6e done
Copying config a1f8c96997 done
Writing manifest to image destination
Storing signatures
ubi8-working-container
[root@04ecab79def1 /]# buildah ubi8-working-container ls /
unknown command "ubi8-working-container" for "buildah"
[root@04ecab79def1 /]# buildah run ubi8-working-container ls /
bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var
```

What loads the fuse module so that it's available for rootless podman? Is it possible to have the module loaded automatically when '--device /dev/fuse' is passed to podman in non-rootless mode, or should users be told that they need to load it manually when they want to use the buildah image in this way?

Strange: use of the fuse device should trigger the load, but this is blocked by SELinux inside of the container. I would have thought that fuse would have been loaded automatically when udev created /dev/fuse, too. Running podman or buildah rootless must cause the module to load when it mounts the fuse file system.

```
echo fuse > /etc/modules-load.d/fuse.conf
```

will cause the fuse module to be loaded at boot time.

Hi Daniel, I've uploaded a test script to outline the issue we're talking about here.
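The check-then-load step described above can be sketched as a small host-side helper. This is a hypothetical script, not part of podman or buildah; the function name is my own invention:

```shell
#!/bin/sh
# Hypothetical helper: verify the fuse kernel module is loaded on the HOST
# before launching the containerized buildah. As the reporter discovered,
# the mere existence of /dev/fuse does not prove the module is loaded.

# fuse_loaded: reads lsmod-style output on stdin, succeeds if a line
# starts with "fuse" followed by whitespace (so "fuseblk" does not match).
fuse_loaded() {
    grep -q '^fuse[[:space:]]'
}

# On a real host (requires root) one would run:
#   lsmod | fuse_loaded || modprobe fuse
# and, to persist across reboots, as suggested in the comment above:
#   echo fuse > /etc/modules-load.d/fuse.conf
```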
As there are some non-publishable details, I have to put them in our closed support issue leading up to this bugzilla. The essence: RHEL 7 with Docker 1.13, or sudo podman, running a containerized buildah leads to:

```
DEBU [graphdriver] trying provided driver "overlay"
DEBU overlay: mount_program=/usr/bin/fuse-overlayfs
error building at STEP "COPY --chown=root:root include /": error resolving symlinks for copy destination /: lstat /var/lib/containers/storage/overlay/3144fff1d319d3694222a588fb68fcf3e8acfa7a61b86d31c48e7ab328a3a7e2/merged: invalid argument
```

(This can happen at any STEP, not specifically COPY.)

As a side note, I've been unable to reproduce the aforementioned fuse error, so it might either be operator malfunction or specific to rootless podman (which we don't use). See the log and script in https://access.redhat.com/support/cases/#/case/02704514

I've also looked at the missing executables in the container image. The problem there, by design or by oversight, is that the Quay buildah image is based on Fedora, and Fedora's buildah RPM has a dependency on the fuse3 packages. On the UBI8 images, the buildah package does not depend on fuse3, the package containing those executables. That's why they're never included in the Red Hat version.

Adding Scott McCarty to the cc list as he'll probably have an opinion.

Ronald, there are several container images floating around; which were you looking at? FWIW, the container images quay.io/buildah/stable:latest (and upstream:latest and testing:latest) were all intentionally built on Fedora for use in the open-source community. These images are not fully supported by Red Hat. The UBI8 images are a separate beast that Scott was involved with.

Hi Tom, both the Quay/Fedora and Red Hat/UBI8 buildah containers report this lstat error on a 'merged' path (see the logs). Only the Red Hat/UBI8 container logs two additional errors regarding missing FUSE fusermount(3)? executables.
Suhaas reported "the errors are pointing towards the non-existence of the fuse binary, which tells us that the fuse driver is not enabled." This led me to look into the missing executables in the UBI8 container. It turns out both images (for relevant intents and purposes) simply call "yum install buildah fuse-overlayfs". Fedora's buildah RPM pulls in fuse3 as a dependency; the UBI8 buildah RPM does not. Hence the missing binaries in the UBI8 version.

Dan/Scott, anything further we can do for these images to lessen the fuse errors? Also, Dan, is there a way you could tweak the blog to include the `modprobe fuse` config step?

Derrick, FWIW, I've created https://github.com/containers/buildah/pull/2570 and https://github.com/containers/podman/pull/7453 to help this problem with the quay.io buildah/podman container images.

I don't quite understand what is being asked of me :-) It sounds like the modprobe error with the fuse module is part of the container host. Perhaps we could have this loaded by default in RHEL? As for the lstat error, I don't fully understand what is happening.

Scott, honestly, I'm not completely sure what needs to be done. I'm not sure if the UBI8 container image needs to have any adjustments made to it or not; my questioning came from comment 14 (https://bugzilla.redhat.com/show_bug.cgi?id=1867892#c14). It looks like the quay.io/buildah/stable:latest image installs fuse-overlayfs but apparently the UBI8 one does not. I'm not sure if it's as simple as adding fuse-overlayfs to the UBI8 image, and if that's possible, or even if that's something you'd do or if we'd have to loop someone else in to do the changes.

There is some effort to get podman to load the fuse module when giving the container the /dev/fuse device: https://github.com/containers/podman/pull/7456. This would solve the modprobe fuse issue. I tried to explain in the PR what is going on.

> Is there a way we can trigger this automatically in Podman?
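The dependency gap discussed above could in principle be closed by layering the missing packages onto the UBI8 image. A minimal, unofficial sketch (the Containerfile name and image tag are my own; whether such a change belongs in the official image is exactly the open question in this thread):

```shell
#!/bin/sh
# Hypothetical remediation sketch: generate a Containerfile that layers
# fuse-overlayfs and fuse3 on top of the rhel8/buildah image, since (unlike
# the Fedora buildah RPM) the UBI8 buildah RPM does not pull them in.
cat > Containerfile.fuse <<'EOF'
FROM registry.redhat.io/rhel8/buildah
RUN yum -y install fuse-overlayfs fuse3 && yum clean all
EOF

# Then, assuming access to registry.redhat.io:
#   podman build -t buildah-with-fuse -f Containerfile.fuse .
```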
Currently the kernel loads some kernel modules automatically on first use of a device. SELinux is blocking this for the container.

> I wonder if podman just opened and closed the /dev/fuse device, if it would trigger the load and this would work in rootless and rootfull mode. It also would be less invasive than this change.

> I believe the issue here is that the confined container is the first one on the system to use /dev/fuse, so a container_t process triggers a kernel module load, which is blocked.

> If podman triggers the load then everything is happy.

Podman now will trigger the loading of the fuse module when it starts: https://github.com/containers/podman/pull/7456. This should be in the podman 2.2 release, or perhaps the podman 2.1.2 release.

Assigning to Jindrich for any packaging needs.

Yes, this (the inability to remove the fuse module) is probably not related to podman. Are you sure there are no fuse mounts left over? Alex, thoughts?

My thinking too is that this isn't a Podman problem, but a fuse problem at this point. Do we have a filesystems contact?

(In reply to Daniel Walsh from comment #37)
> Yes this, the inability to remove the fuse modules, is probably not related
> to podman. Are you sure there are no fuse mounts left over?

There is no related mount point left on the host.

Moving this bug to VERIFIED status per comment 33, comment 37 and comment 38.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1796
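The mechanism being discussed can be illustrated with a rough host-side sketch. This is my own illustration of the idea, not podman's actual implementation (which is Go code in containers/podman#7456): an open() of /dev/fuse by an unconfined process is enough to trigger the kernel's on-demand module load, which a confined container_t process is not allowed to do.

```shell
#!/bin/sh
# Rough illustration (NOT podman's implementation) of the idea in
# containers/podman#7456: touch /dev/fuse once from the host so the kernel
# autoloads the fuse module before a confined container needs it.

# has_fuse: reads /proc/filesystems-style input on stdin, succeeds if the
# fuse filesystem is registered ("fuseblk" alone does not count).
has_fuse() {
    grep -qw fuse
}

# On a real host one would do something like:
#   : < /dev/fuse 2>/dev/null || true    # the open() triggers the autoload
#   has_fuse < /proc/filesystems && echo "fuse available"
```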