Bug 1872240 - Try to automatically load fuse module if users forget to do it when starting a nested container
Summary: Try to automatically load fuse module if users forget to do it when starting a nested container
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Giuseppe Scrivano
QA Contact: Alex Jia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-25 09:17 UTC by Alex Jia
Modified: 2022-01-26 10:51 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-01-26 10:51:42 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none

Description Alex Jia 2020-08-25 09:17:34 UTC
Description of problem:
At present, when running a podman command with the '--device /dev/fuse' option for a nested container such as a buildah container, the nested container fails to start if the user has not loaded the fuse module on the host beforehand. It would be friendlier if podman, when run with '--device /dev/fuse', tried to load the fuse module itself when users forget to do this.

In addition, I haven't found an official document (within my search area) telling users that running 'modprobe fuse' is necessary before starting a nested container.
https://bugzilla.redhat.com/show_bug.cgi?id=1818701#c11
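
For reference, the manual workaround looks roughly like this (quay.io/buildah/stable is only a stand-in for whatever nested-build image is being used; the point is the explicit modprobe on the host first):

modprobe fuse                 # load the fuse module on the host
lsmod | grep fuse             # confirm it is loaded
podman run --rm -it --device /dev/fuse quay.io/buildah/stable /bin/bash

Without the modprobe step, the nested container fails as described above.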

Currently, some users are hitting the same issue I ran into before - https://bugzilla.redhat.com/show_bug.cgi?id=1867892#c0
https://bugzilla.redhat.com/show_bug.cgi?id=1818701#c1

Version-Release number of selected component (if applicable):
please see the Description section

How reproducible:
always

Steps to Reproduce:
please see https://bugzilla.redhat.com/show_bug.cgi?id=1867892#c0 or
https://bugzilla.redhat.com/show_bug.cgi?id=1818701#c0


Actual results:
please see the Steps or Description sections

Expected results:


Additional info:

Comment 2 Giuseppe Scrivano 2020-08-25 09:52:32 UTC
I am fine with adding a better error message, but I am not sure we should try to automatically load the kernel module. Would that be an acceptable solution?

Comment 4 Tom Sweeney 2020-08-25 15:42:12 UTC
Parker, can you take a run at this one, please?

Comment 5 Daniel Walsh 2020-08-25 19:57:46 UTC
Let's keep Parker concentrating on Short Names.

Giuseppe, can you take care of this?

Comment 6 Giuseppe Scrivano 2020-08-26 10:53:45 UTC
PR here: https://github.com/containers/podman/pull/7456

Comment 7 Daniel Walsh 2020-08-27 11:17:11 UTC
Is there a way we can trigger this automatically in Podman? Currently the kernel loads some kernel modules automatically on first use of a device, but SELinux is blocking this for the container.

I wonder whether, if podman just opened and closed the /dev/fuse device, that would trigger the load and work in both rootless and rootful mode. It would also be less invasive than this change.

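A quick way to sanity-check that idea by hand (not what the PR implements, just a manual experiment; it assumes the static /dev/fuse node created from modules.devname is present, as on RHEL 8):

modprobe -r fuse             # unload fuse to start from a clean state
lsmod | grep fuse            # expect no output now
: < /dev/fuse                # open and immediately close the device node (bash)
lsmod | grep fuse            # if device-node autoload kicks in, fuse is listed again

If fuse shows up after the open/close from an unconfined root shell, the device-node autoload path works on the host.
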
I believe the issue here is that the confined container is the first one on the system to use /dev/fuse, so the container_t process triggers a kernel module load, which is blocked.

If podman triggers the load then everything is happy.
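
To confirm the SELinux side of this, the audit log can be checked for a module_request denial against container_t after reproducing the failure (standard audit/setroubleshoot tooling; the exact denial text may differ):

ausearch -m avc -ts recent | grep -i module_request
# or, with setroubleshoot-server installed:
sealert -a /var/log/audit/audit.log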

Comment 9 Alex Jia 2020-10-21 13:24:03 UTC
The patch https://github.com/containers/podman/pull/7456/files works for me.

[root@hp-dl360g9-03 libpod]# git rev-parse HEAD
b4a10538e1a094f407b581572f8cb3e55656d470

[root@hp-dl360g9-03 libpod]# lsmod|grep fuse
fuse                  131072  0
[root@hp-dl360g9-03 libpod]# modprobe -r fuse
[root@hp-dl360g9-03 libpod]# lsmod|grep fuse
[root@hp-dl360g9-03 libpod]# ./bin/podman run --rm --device /dev/fuse -it registry-proxy.engineering.redhat.com/rh-osbs/rhel8-buildah:8.3-13 /bin/bash
Trying to pull registry-proxy.engineering.redhat.com/rh-osbs/rhel8-buildah:8.3-13...
Getting image source signatures
Copying blob 9598d2bbd6ed done
Copying blob ccd0627b3ce2 done
Copying blob d02623442d02 done
Copying config f6b40a46c0 done
Writing manifest to image destination
Storing signatures
[root@c1962238e275 /]# rpm -q buildah fuse-overlayfs
buildah-1.15.1-2.module+el8.3.0+8221+97165c3f.x86_64
fuse-overlayfs-1.1.2-3.module+el8.3.0+8221+97165c3f.x86_64
[root@c1962238e275 /]# buildah info
{
    "host": {
        "CgroupVersion": "v1",
        "Distribution": {
            "distribution": "\"rhel\"",
            "version": "8.3"
        },
        "MemTotal": 16513163264,
        "MenFree": 1162301440,
        "OCIRuntime": "runc",
        "SwapFree": 8382836736,
        "SwapTotal": 8409575424,
        "arch": "amd64",
        "cpus": 12,
        "hostname": "c1962238e275",
        "kernel": "4.18.0-190.el8.x86_64",
        "os": "linux",
        "rootless": true,
        "uptime": "3246h 30m 44.68s (Approximately 135.25 days)"
    },
    "store": {
        "ContainerStore": {
            "number": 0
        },
        "GraphDriverName": "overlay",
        "GraphOptions": [
            "overlay.imagestore=/var/lib/shared",
            "overlay.mount_program=/usr/bin/fuse-overlayfs",
            "overlay.mountopt=nodev,metacopy=on"
        ],
        "GraphRoot": "/var/lib/containers/storage",
        "GraphStatus": {
            "Backing Filesystem": "overlayfs",
            "Native Overlay Diff": "false",
            "Supports d_type": "true",
            "Using metacopy": "false"
        },
        "ImageStore": {
            "number": 0
        },
        "RunRoot": "/var/run/containers/storage"
    }
}
[root@c1962238e275 /]# buildah --storage-opt=overlay.mount_program=/usr/bin/fuse-overlayfs from registry.access.redhat.com/ubi8
Getting image source signatures
Copying blob c4d668e229cd done
Copying blob ec1681b6a383 done
Copying config ecbc6f53bb done
Writing manifest to image destination
Storing signatures
ubi8-working-container
[root@c1962238e275 /]# buildah ps
CONTAINER ID  BUILDER  IMAGE ID     IMAGE NAME                       CONTAINER NAME
88ff17a5c051     *     ecbc6f53bba0 registry.access.redhat.com/ub... ubi8-working-container
[root@c1962238e275 /]# buildah --storage-opt=overlay.mount_program=/usr/bin/fuse-overlayfs run --isolation=chroot ubi8-working-container ls /
bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@c1962238e275 /]# exit
exit
[root@hp-dl360g9-03 libpod]# lsmod|grep fuse
fuse                  131072  0

