Bug 1867892 - running containerized buildah leads to error
Summary: running containerized buildah leads to error
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
: 8.0
Assignee: Tom Sweeney
QA Contact: Alex Jia
Depends On:
Blocks: 1186913 1823899
Reported: 2020-08-11 07:13 UTC by Suhaas Bhat
Modified: 2023-12-15 18:47 UTC
CC List: 18 users

Fixed In Version: podman-2.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2021-05-18 15:32:55 UTC
Type: Bug
Target Upstream Version:

Attachments (Terms of Use)

Description Suhaas Bhat 2020-08-11 07:13:51 UTC
Description of problem:
Following the contents of https://developers.redhat.com/blog/2019/08/14/best-practices-for-running-buildah-in-a-container/

Running it without user namespaces enabled.

Followed the above article, but every time I get the error below:

ERRO error unmounting /var/lib/containers/storage/overlay/b9df41f4abfda569b2dfbe6d4cf4c0f52453cbbda5c3a8be4ec4d44356bf631d/merged: invalid argument 
error mounting new container: error mounting build container "e6bebd0a39dba54e13fc6381d7342f4b20f8e17b82b13bed7acf30e130385586": error creating overlay mount to /var/lib/containers/storage/overlay/b9df41f4abfda569b2dfbe6d4cf4c0f52453cbbda5c3a8be4ec4d44356bf631d/merged: using mount program /usr/bin/fuse-overlayfs: fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such device
: exit status 1
ERRO exit status 125 

Commands used :

1. podman run -it --device /dev/fuse  quay.io/buildah/stable bash
2. podman run -it --device /dev/fuse -v /var/lib/containers1:/var/lib/containers:Z quay.io/buildah/stable bash

Expected results: Images build successfully using containerized buildah

Additional info:

Comment 2 Derrick Ornelas 2020-08-12 20:18:36 UTC
The quay.io/buildah/stable image is the unsupported upstream version.  However, I can reproduce this with RHEL 8 podman using our registry.redhat.io/rhel8/buildah image

# rpm -q podman

# podman run -ti --device /dev/fuse --rm registry.redhat.io/rhel8/buildah bash

[root@df5a32ec5f5d /]# env

[root@df5a32ec5f5d /]# rpm -q buildah fuse-overlayfs

[root@df5a32ec5f5d /]# buildah info
{
    "host": {
        "CgroupVersion": "v1",
        "Distribution": {
            "distribution": "\"rhel\"",
            "version": "8.2"
        },
        "MemTotal": 2012942336,
        "MemFree": 471195648,
        "OCIRuntime": "runc",
        "SwapFree": 2148470784,
        "SwapTotal": 2151673856,
        "arch": "amd64",
        "cpus": 2,
        "hostname": "df5a32ec5f5d",
        "kernel": "4.18.0-193.13.2.el8_2.x86_64",
        "os": "linux",
        "rootless": true,
        "uptime": "362h 17m 52.62s (Approximately 15.08 days)"
    },
    "store": {
        "ContainerStore": {
            "number": 0
        },
        "GraphDriverName": "overlay",
        "GraphOptions": [],
        "GraphRoot": "/var/lib/containers/storage",
        "GraphStatus": {
            "Backing Filesystem": "overlayfs",
            "Native Overlay Diff": "false",
            "Supports d_type": "true",
            "Using metacopy": "false"
        },
        "ImageStore": {
            "number": 0
        },
        "RunRoot": "/var/run/containers/storage"
    }
}

[root@df5a32ec5f5d /]# buildah --storage-opt=overlay.mount_program=/usr/bin/fuse-overlayfs from registry.access.redhat.com/ubi8
Getting image source signatures
Copying blob 77c58f19bd6e done  
Copying blob 47db82df7f3f done  
Copying config a1f8c96997 done  
Writing manifest to image destination
Storing signatures

[root@df5a32ec5f5d /]# buildah --storage-opt=overlay.mount_program=/usr/bin/fuse-overlayfs run --isolation=chroot ubi8-working-container ls /
ERRO error unmounting /var/lib/containers/storage/overlay/30c058cdadc888177361dd14a7ed7edab441c58525b341df321f07bc11440e68/merged: invalid argument 
error mounting container "1ae176ca72b3da7c70af31db7434bcf6f94b07dbc0328bc7e4e8fc9579d0dc2e": error mounting build container "1ae176ca72b3da7c70af31db7434bcf6f94b07dbc0328bc7e4e8fc9579d0dc2e": error creating overlay mount to /var/lib/containers/storage/overlay/30c058cdadc888177361dd14a7ed7edab441c58525b341df321f07bc11440e68/merged: using mount program /usr/bin/fuse-overlayfs: fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such device
: exit status 1
ERRO exit status 1                                

I'm not sure if this is an issue with buildah itself or with the way that the image is configured, so I'll leave the component as buildah.

Comment 5 Daniel Walsh 2020-08-17 13:26:54 UTC
If you load the fuse kernel module, does it work?

Comment 6 Derrick Ornelas 2020-08-18 21:32:01 UTC
(In reply to Daniel Walsh from comment #5)
> If you load the fuse kernel module, does it work?

Yes, this seems to solve it, thanks.  I had incorrectly assumed the module was loaded because /dev/fuse existed.  

# ls -l /dev/fuse 
crw-rw-rw-. 1 root root 10, 229 Aug 18 16:49 /dev/fuse

# lsmod | grep fuse | wc -l

# modprobe fuse

# ls -l /dev/fuse 
crw-rw-rw-. 1 root root 10, 229 Aug 18 16:53 /dev/fuse

# lsmod | grep fuse
fuse                  131072  1

# podman run -ti --device /dev/fuse --rm registry.redhat.io/rhel8/buildah bash

[root@04ecab79def1 /]# buildah from registry.access.redhat.com/ubi8
Getting image source signatures
Copying blob 47db82df7f3f done  
Copying blob 77c58f19bd6e done  
Copying config a1f8c96997 done  
Writing manifest to image destination
Storing signatures

[root@04ecab79def1 /]# buildah ubi8-working-container ls /
unknown command "ubi8-working-container" for "buildah"

[root@04ecab79def1 /]# buildah run ubi8-working-container ls /
bin  boot  dev	etc  home  lib	lib64  lost+found  media  mnt  opt  proc  root	run  sbin  srv	sys  tmp  usr  var

What loads the fuse module so that it's available for rootless podman?  Is it possible to have the module loaded automatically when '--device /dev/fuse' is passed to podman in non-rootless mode, or should users be told that they need to load it manually when they want to use the buildah image in this way?
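The diagnosis above suggests a simple host-side precheck before running the buildah image. A minimal sketch, assuming a Linux host with /proc mounted (the messages are illustrative, not from this report):

```shell
# Precheck sketch: verify the fuse filesystem is registered on the
# host before starting the containerized buildah.
if grep -qw fuse /proc/filesystems; then
    echo "fuse available"
else
    echo "fuse missing: load it with modprobe fuse (as root) first"
fi
```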

Comment 7 Daniel Walsh 2020-08-19 10:17:07 UTC
Strange, use of the fuse device should trigger the load, but SELinux blocks this inside of the container.
I would also have thought that fuse would have been loaded automatically when udev created /dev/fuse.

Running podman or buildah rootless must cause the module to load when it mounts the fuse file system.

echo fuse > /etc/modules-load.d/fuse.conf

will cause the fuse module to be loaded at boot time.
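Combining the comment above into one root-shell sequence (a sketch of the standard RHEL 8 systemd-modules-load mechanism; run as root):

```shell
# Load the fuse module for the current boot...
modprobe fuse
# ...and persist it: systemd-modules-load reads /etc/modules-load.d/*.conf
# at boot and loads each module listed there.
echo fuse > /etc/modules-load.d/fuse.conf
```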

Comment 8 Ronald van Zantvoort 2020-08-21 15:50:04 UTC
Hi Daniel, 

I've uploaded a test script to outline the issue we're talking about here.
There are some non-publishable details in it, so I have to put them in our closed support case leading up to this Bugzilla.

The essence:
RHEL7 with Docker 1.13 or sudo podman running a containerized buildah leads to:
DEBU [graphdriver] trying provided driver "overlay"
DEBU overlay: mount_program=/usr/bin/fuse-overlayfs
error building at STEP "COPY --chown=root:root include /": error resolving symlinks for copy destination /: lstat /var/lib/containers/storage/overlay/3144fff1d319d3694222a588fb68fcf3e8acfa7a61b86d31c48e7ab328a3a7e2/merged: invalid argument

(this can happen at any STEP, not specifically COPY) 

As a side note, I've been unable to reproduce the aforementioned fuse error so it might either be operator malfunction or specific to rootless podman (which we don't use)
See the log and script in https://access.redhat.com/support/cases/#/case/02704514

Comment 9 Ronald van Zantvoort 2020-08-24 09:44:20 UTC
I've also looked at the missing executables in the container image.

The problem there, by design or by oversight, is that the Quay buildah image is based on Fedora, and Fedora's buildah rpm has a dependency on the fuse3 packages. On the UBI8 images, the buildah package does not depend on fuse3, so the package containing those executables is never pulled in. That's why they're missing from the Red Hat version.

Comment 10 Tom Sweeney 2020-08-24 18:56:32 UTC
Adding Scott McCarty to the cc list as he'll probably have an opinion.

Ronald, there are several container images floating around, which were you looking at?

FWIW, the container images on quay.io/buildah/stable:latest (and upstream:latest and testing:latest) were all intentionally built on Fedora for use in the OpenSource community.  These images are not fully supported by Red Hat.  The UBI8 images are a separate beast that Scott was involved with.

Comment 14 Ronald van Zantvoort 2020-08-25 21:41:34 UTC
Hi Tom,

Both the Quay/Fedora and Red Hat/UBI8 buildah containers report this lstat error on a 'merged' (see the logs)
Only the Red Hat/UBI8 container logs two additional errors regarding missing FUSE fusermount(3)? executables.

Suhaas reported "the errors are pointing towards the non existence of fuse binary which tells us that fuse driver is not enabled."
This led me to look into the missing executables in the UBI8 container.

Turns out both (for relevant intents & purposes) simply call "yum install buildah fuse-overlayfs"
Fedora's buildah rpm pulls in fuse3 as dependency, UBI8 buildah rpm does not. Hence the missing binaries in the UBI8 version.
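A hedged workaround sketch based on that diagnosis (hypothetical Containerfile, untested; assumes fuse3 and fuse-overlayfs are available in the configured UBI8 repos):

```dockerfile
# On UBI8 the buildah rpm does not pull in fuse3, so install it
# (and fuse-overlayfs) explicitly alongside buildah.
FROM registry.access.redhat.com/ubi8
RUN yum -y install buildah fuse-overlayfs fuse3 && \
    yum clean all
```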

Comment 15 Tom Sweeney 2020-08-25 22:04:04 UTC
Dan/Scott anything further we can do for these images to lessen the fuse errors?

Comment 16 Tom Sweeney 2020-08-25 22:05:39 UTC
Also, Dan is there a way you could tweak the blog to include the `modprobe fuse` config step?

Comment 17 Tom Sweeney 2020-08-25 22:27:41 UTC

FWIW, I've created https://github.com/containers/buildah/pull/2570 and https://github.com/containers/podman/pull/7453 to help this problem with the quay.io buildah/podman container images.

Comment 18 Scott McCarty 2020-08-26 15:22:31 UTC
I don't quite understand what is being asked of me :-) It sounds like the modprobe error with the fuse module is part of the container host. Perhaps we could have this loaded by default in RHEL?

As for the lstat error, I don't fully understand what is happening?

Comment 19 Tom Sweeney 2020-08-26 15:44:38 UTC

Honestly, I'm not completely sure what needs to be done.  I'm not sure if the UBI8 container image needs to have any adjustments made to it or not, my questioning came from comment 14 https://bugzilla.redhat.com/show_bug.cgi?id=1867892#c14.  It looks like the quay.io/buildah/stable:latest image installs fuse-overlayfs but apparently the UBI8 does not.  I'm not sure if it's just as simple as adding fuse-overlayfs into the UBI8 image, and if that's possible.  Or even if that's something you'd do or if we'd have to loop someone else in to do the changes.

Comment 20 Daniel Walsh 2020-08-31 11:51:57 UTC
There is some effort to get podman to load the fuse module when giving the container the /dev/fuse device.


This would solve the modprobe fuse issue.

I tried to explain in the PR what is going on.

> Is there a way we can trigger this automatically in Podman. Currently the kernel loads some kernel modules automatically on first use of a device. SELinux is blocking this for the container.

> I wonder if podman just opened and closed the /dev/fuse device, if it would trigger the load and this would work in rootless and rootfull mode. It also would be less invasive than this change.

> I believe the issue here is that the confined container is the first one on the system to use /dev/fuse, so the container_t process triggers a kernel module load, which is blocked.

> If podman triggers the load then everything is happy.

Comment 23 Daniel Walsh 2020-10-15 12:41:53 UTC
Podman now will trigger the loading of the fuse module, when it starts.


This should be in the podman 2.2 release or perhaps the podman 2.1.2 release.

Comment 24 Tom Sweeney 2020-10-15 17:46:00 UTC
Assigning to Jindrich for any packaging needs.

Comment 37 Daniel Walsh 2021-01-22 09:51:56 UTC
Yes, this (the inability to remove the fuse module) is probably not related to podman.  Are you sure there are no fuse mounts left over?

Comment 38 Tom Sweeney 2021-01-22 18:23:52 UTC
Alex, thoughts?  My thinking too is that this isn't a Podman problem but a fuse problem at this point.  Do we have a filesystems contact?

Comment 39 Alex Jia 2021-02-01 13:21:52 UTC
(In reply to Daniel Walsh from comment #37)
> Yes this, the inability to remove the fuse modules, is probably not related
> to podman.  Are you sure there are no fuse mounts left over?

There is no related mount point left on the host.

Comment 41 Alex Jia 2021-02-01 13:31:50 UTC
Moving this bug to VERIFIED status per Comment 33, Comment 37, and Comment 38.

Comment 43 errata-xmlrpc 2021-05-18 15:32:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

