Bug 1838670 - non-root user: host subscription contents are not available in containers
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: subscription-manager
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.3
Assignee: Jiri Hnidek
QA Contact: Red Hat subscription-manager QE Team
URL:
Whiteboard:
Depends On:
Blocks: 1842946
 
Reported: 2020-05-21 14:29 UTC by Shwetha Kallesh
Modified: 2024-10-01 16:36 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-04 01:39:09 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
GitHub: candlepin/subscription-manager pull request 2299 ("set permissions on rhsm.conf") - closed, last updated 2021-02-18 10:39:19 UTC
Red Hat Product Errata RHBA-2020:4460 - last updated 2020-11-04 01:39:30 UTC

Comment 2 Jiri Hnidek 2020-07-08 15:27:14 UTC
Hi Shwetha,
I cannot reproduce this issue. Can you please confirm whether it works in the latest version of RHEL 8.3?

Thanks in advance,

Jiri

Comment 4 Jiri Hnidek 2020-07-10 08:13:07 UTC
When I run the container in debug mode, I can see the reason for this issue:

WARN[0000] error mounting secrets, skipping entry in /usr/share/containers/mounts.conf: getting host secret data failed: failed to read secrets from "/usr/share/rhel/secrets": open /usr/share/rhel/secrets/rhsm/rhsm.conf: permission denied
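
The skipped entry comes from /usr/share/containers/mounts.conf. On RHEL this file typically contains a single line mapping the host secrets directory into the container (shown for reference; the exact contents may vary by release):

[example@hpe-dl380pgen8-02-vm-2 ~]$ cat /usr/share/containers/mounts.conf
/usr/share/rhel/secrets:/run/secrets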

The testing user has access to /usr/share/rhel/secrets:

[example@hpe-dl380pgen8-02-vm-2 ~]$ ls -l /usr/share/rhel/secrets
total 0
lrwxrwxrwx. 1 root root 20 Jun 19 03:55 etc-pki-entitlement -> /etc/pki/entitlement
lrwxrwxrwx. 1 root root 28 Jun 19 03:55 redhat.repo -> /etc/yum.repos.d/redhat.repo
lrwxrwxrwx. 1 root root  9 Jun 19 03:55 rhsm -> /etc/rhsm

[example@hpe-dl380pgen8-02-vm-2 ~]$ ls -ld /usr/share/rhel/secrets
drwxr-xr-x. 2 root root 64 Jul 10 02:41 /usr/share/rhel/secrets

The user only lacks read access to rhsm.conf:

[root@hpe-dl380pgen8-02-vm-2 ~]# ls -l /usr/share/rhel/secrets/rhsm/rhsm.conf
-rw-------. 1 root root 2981 Jul 10 02:40 /usr/share/rhel/secrets/rhsm/rhsm.conf

[root@hpe-dl380pgen8-02-vm-2 ~]# ls -l /etc/rhsm/rhsm.conf 
-rw-------. 1 root root 2981 Jul 10 02:40 /etc/rhsm/rhsm.conf
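
The same mismatch shows up more compactly with stat, which prints the octal mode directly (given the listing above, the expected output is 600):

[root@hpe-dl380pgen8-02-vm-2 ~]# stat -c '%a %U:%G %n' /etc/rhsm/rhsm.conf
600 root:root /etc/rhsm/rhsm.conf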

When I add read permission for group and other users ('chmod go+r /etc/rhsm/rhsm.conf'), the issue is fixed and the secrets shared by the host become visible in the container.
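
Spelled out as a transcript (after 'chmod go+r' the mode should be 644; size and timestamp as in the listing above):

[root@hpe-dl380pgen8-02-vm-2 ~]# chmod go+r /etc/rhsm/rhsm.conf
[root@hpe-dl380pgen8-02-vm-2 ~]# ls -l /etc/rhsm/rhsm.conf
-rw-r--r--. 1 root root 2981 Jul 10 02:40 /etc/rhsm/rhsm.conf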

[example@hpe-dl380pgen8-02-vm-2 ~]$ podman run -t -i --rm registry.redhat.io/ubi8

[root@07d4673699fc /]# ls /run/secrets/
etc-pki-entitlement  redhat.repo  rhsm

[root@07d4673699fc /]# ls /etc/pki/entitlement-host/
3333824629696557075-key.pem  3333824629696557075.pem


Complete output of podman run when /etc/rhsm/rhsm.conf is readable only by root:

[example@hpe-dl380pgen8-02-vm-2 ~]$ podman run --log-level=debug -t -i --rm registry.redhat.io/ubi8
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called run.PersistentPreRunE(podman run --log-level=debug -t -i --rm registry.redhat.io/ubi8) 
DEBU[0000] Ignoring libpod.conf EventsLogger setting "/home/example/.config/containers/containers.conf". Use "journald" if you want to change this setting and remove libpod.conf files. 
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{{[] [] container-default [] host enabled [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] []  [] [] [] true [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] false false false  private k8s-file -1 slirp4netns false 2048 private /usr/share/containers/seccomp.json 65536k private host 65536} {false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /tmp/run-1003/libpod/tmp/events/events.log file [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm   false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing false   [] [crun runc] [crun] [kata kata-runtime kata-qemu kata-fc] {false false false false false false} /etc/containers/policy.json false 3 /home/example/.local/share/containers/storage/libpod 10 /tmp/run-1003/libpod/tmp /home/example/.local/share/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}} 
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
WARN[0000] For using systemd, you may need to login using an user session 
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1003` (possibly as root) 
WARN[0000] Falling back to --cgroup-manager=cgroupfs    
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/example/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/example/.local/share/containers/storage 
DEBU[0000] Using run root /tmp/run-1003/containers      
DEBU[0000] Using static dir /home/example/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /tmp/run-1003/libpod/tmp       
DEBU[0000] Using volume path /home/example/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Not configuring container store              
DEBU[0000] Initializing event backend file              
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] using runtime "/usr/bin/runc"                
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument 
INFO[0000] Setting parallel job count to 7              
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called run.PersistentPreRunE(podman run --log-level=debug -t -i --rm registry.redhat.io/ubi8) 
DEBU[0000] Ignoring libpod.conf EventsLogger setting "/home/example/.config/containers/containers.conf". Use "journald" if you want to change this setting and remove libpod.conf files. 
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{{[] [] container-default [] host enabled [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] []  [] [] [] true [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] false false false  private k8s-file -1 slirp4netns false 2048 private /usr/share/containers/seccomp.json 65536k private host 65536} {false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /tmp/run-1003/libpod/tmp/events/events.log file [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm   false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing false   [] [crun runc] [crun] [kata kata-runtime kata-qemu kata-fc] {false false false false false false} /etc/containers/policy.json false 3 /home/example/.local/share/containers/storage/libpod 10 /tmp/run-1003/libpod/tmp /home/example/.local/share/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}} 
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
WARN[0000] For using systemd, you may need to login using an user session 
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1003` (possibly as root) 
WARN[0000] Falling back to --cgroup-manager=cgroupfs    
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/example/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/example/.local/share/containers/storage 
DEBU[0000] Using run root /tmp/run-1003/containers      
DEBU[0000] Using static dir /home/example/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /tmp/run-1003/libpod/tmp       
DEBU[0000] Using volume path /home/example/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend file              
DEBU[0000] using runtime "/usr/bin/runc"                
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument 
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
INFO[0000] Setting parallel job count to 7              
DEBU[0000] parsed reference into "[overlay@/home/example/.local/share/containers/storage+/tmp/run-1003/containers:overlay.mount_program=/usr/bin/fuse-overlayfs]registry.redhat.io/ubi8:latest" 
DEBU[0000] parsed reference into "[overlay@/home/example/.local/share/containers/storage+/tmp/run-1003/containers:overlay.mount_program=/usr/bin/fuse-overlayfs]registry.redhat.io/ubi8:latest" 
DEBU[0000] parsed reference into "[overlay@/home/example/.local/share/containers/storage+/tmp/run-1003/containers:overlay.mount_program=/usr/bin/fuse-overlayfs]@a523835cfc89a97a23936b6efc60a629fa68265923d03492c5b400ac0280c68c" 
DEBU[0000] exporting opaque data as blob "sha256:a523835cfc89a97a23936b6efc60a629fa68265923d03492c5b400ac0280c68c" 
DEBU[0000] parsed reference into "[overlay@/home/example/.local/share/containers/storage+/tmp/run-1003/containers:overlay.mount_program=/usr/bin/fuse-overlayfs]registry.redhat.io/ubi8:latest" 
DEBU[0000] parsed reference into "[overlay@/home/example/.local/share/containers/storage+/tmp/run-1003/containers:overlay.mount_program=/usr/bin/fuse-overlayfs]@a523835cfc89a97a23936b6efc60a629fa68265923d03492c5b400ac0280c68c" 
DEBU[0000] exporting opaque data as blob "sha256:a523835cfc89a97a23936b6efc60a629fa68265923d03492c5b400ac0280c68c" 
DEBU[0000] No hostname set; container's hostname will default to runtime default 
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json" 
DEBU[0000] Allocated lock 0 for container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 
DEBU[0000] parsed reference into "[overlay@/home/example/.local/share/containers/storage+/tmp/run-1003/containers:overlay.mount_program=/usr/bin/fuse-overlayfs]@a523835cfc89a97a23936b6efc60a629fa68265923d03492c5b400ac0280c68c" 
DEBU[0000] exporting opaque data as blob "sha256:a523835cfc89a97a23936b6efc60a629fa68265923d03492c5b400ac0280c68c" 
DEBU[0000] created container "e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5" 
DEBU[0000] container "e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5" has work directory "/home/example/.local/share/containers/storage/overlay-containers/e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5/userdata" 
DEBU[0000] container "e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5" has run directory "/tmp/run-1003/containers/overlay-containers/e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5/userdata" 
DEBU[0000] container "e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5" has CgroupParent "/libpod_parent/libpod-e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5" 
DEBU[0000] Handling terminal attach                     
DEBU[0000] overlay: mount_data=lowerdir=/home/example/.local/share/containers/storage/overlay/l/QEJSG6SHHW32BZKKHXJSCRGWLD:/home/example/.local/share/containers/storage/overlay/l/7WGU6AISBZRNVSZ4RB637DIIQJ,upperdir=/home/example/.local/share/containers/storage/overlay/11c2a0fdd4e8a99c3988a7f87408ff48aeea73c8e3a8ddc9a210e394aa7dbf02/diff,workdir=/home/example/.local/share/containers/storage/overlay/11c2a0fdd4e8a99c3988a7f87408ff48aeea73c8e3a8ddc9a210e394aa7dbf02/work,context="system_u:object_r:container_file_t:s0:c763,c875" 
DEBU[0000] Made network namespace at /tmp/run-1003/netns/cni-5c134677-17c8-fb13-5e66-2ec0ed98fe0f for container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 
DEBU[0000] mounted container "e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5" at "/home/example/.local/share/containers/storage/overlay/11c2a0fdd4e8a99c3988a7f87408ff48aeea73c8e3a8ddc9a210e394aa7dbf02/merged" 
DEBU[0000] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 --netns-type=path /tmp/run-1003/netns/cni-5c134677-17c8-fb13-5e66-2ec0ed98fe0f tap0 
DEBU[0000] Created root filesystem for container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 at /home/example/.local/share/containers/storage/overlay/11c2a0fdd4e8a99c3988a7f87408ff48aeea73c8e3a8ddc9a210e394aa7dbf02/merged 
WARN[0000] error mounting secrets, skipping entry in /usr/share/containers/mounts.conf: getting host secret data failed: failed to read secrets from "/usr/share/rhel/secrets": open /usr/share/rhel/secrets/rhsm/rhsm.conf: permission denied 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Created OCI spec for container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 at /home/example/.local/share/containers/storage/overlay-containers/e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 -u e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 -r /usr/bin/runc -b /home/example/.local/share/containers/storage/overlay-containers/e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5/userdata -p /tmp/run-1003/containers/overlay-containers/e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5/userdata/pidfile -n elated_robinson --exit-dir /tmp/run-1003/libpod/tmp/exits --socket-dir-path /tmp/run-1003/libpod/tmp/socket -l k8s-file:/home/example/.local/share/containers/storage/overlay-containers/e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5/userdata/ctr.log --log-level debug --syslog -t --conmon-pidfile /tmp/run-1003/containers/overlay-containers/e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/example/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/run-1003/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/run-1003/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5]"
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied 
DEBU[0000] Received: 21092                              
INFO[0000] Got Conmon PID as 21082                      
DEBU[0000] Created container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 in OCI runtime 
DEBU[0000] Attaching to container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 
DEBU[0000] connecting to socket /tmp/run-1003/libpod/tmp/socket/e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5/attach 
DEBU[0000] Received a resize event: {Width:145 Height:45} 
DEBU[0000] Starting container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 with command [/bin/bash] 
DEBU[0000] Started container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 
DEBU[0000] Enabling signal proxying                     

[root@e490723df729 /]# exit
DEBU[0032] Removing container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 
DEBU[0032] Removing all exec sessions for container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 
DEBU[0032] Cleaning up container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 
DEBU[0032] Tearing down network namespace at /tmp/run-1003/netns/cni-5c134677-17c8-fb13-5e66-2ec0ed98fe0f for container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 
DEBU[0032] Successfully cleaned up container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 
DEBU[0032] Error unmounting /home/example/.local/share/containers/storage/overlay/11c2a0fdd4e8a99c3988a7f87408ff48aeea73c8e3a8ddc9a210e394aa7dbf02/merged with fusermount3 - exec: "fusermount3": executable file not found in $PATH 
DEBU[0032] Error unmounting /home/example/.local/share/containers/storage/overlay/11c2a0fdd4e8a99c3988a7f87408ff48aeea73c8e3a8ddc9a210e394aa7dbf02/merged with fusermount - exec: "fusermount": executable file not found in $PATH 
DEBU[0032] unmounted container "e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5" 
DEBU[0032] Container e490723df72905b480b9d38697ea3f425f069f1ccf9f7c03e6574ac7c5f318f5 storage is already unmounted, skipping... 
DEBU[0032] Called run.PersistentPostRunE(podman run --log-level=debug -t -i --rm registry.redhat.io/ubi8)

Comment 5 Jiri Hnidek 2020-07-10 08:59:52 UTC
When I build the RPM from the master branch using tito and install the subscription-manager RPM on the system (the old rhsm.conf was deleted before installation), I can observe that rhsm.conf has the correct access permissions:


[root@localhost subscription-manager]# ll /etc/rhsm/rhsm.conf
-rw-r--r--. 1 root root 2975 10. Jul 10.19 /etc/rhsm/rhsm.conf
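
For reference, the build and install steps were roughly the following (a sketch; the exact tito invocation and output path may differ):

[root@localhost subscription-manager]# tito build --test --rpm
[root@localhost subscription-manager]# rm -f /etc/rhsm/rhsm.conf
[root@localhost subscription-manager]# dnf install /tmp/tito/x86_64/subscription-manager-*.rpm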


When I do the following on the testing machine, it seems that the permissions in the RPM provided by the repository are also correct:


[root@hpe-dl380pgen8-02-vm-2 ~]# dnf download subscription-manager.x86_64

[root@hpe-dl380pgen8-02-vm-2 ~]# rpm -qp --dump ./subscription-manager-1.27.9-1.el8.x86_64.rpm | grep 'rhsm\.conf'
warning: ./subscription-manager-1.27.9-1.el8.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
/etc/rhsm/rhsm.conf 2975 1593402264 653aa6322198f577101e44d3e9aabda3ef8d6ea2db1023b2d2b7723b0a9df3c5 0100644 root root 1 0 0 X
/usr/share/man/man5/rhsm.conf.5.gz 3598 1593400818 532f87b80c4259071cbeddcf593db96b39aaf1bd23de8b438fd9dc51fc7cb2ed 0100644 root root 0 1 0 X
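
For reading the --dump output: the fields are path, size, mtime, digest, mode, owner, group, isconfig, isdoc, rdev, and symlink target, so mode 0100644 is a regular file with rw-r--r-- permissions. A quick way to compare the packaged mode against what is actually on disk (a sketch; on the affected machine the second command would still print 600 at this point):

[root@hpe-dl380pgen8-02-vm-2 ~]# rpm -qf --dump /etc/rhsm/rhsm.conf | awk '$1 == "/etc/rhsm/rhsm.conf" { print $5 }'
0100644
[root@hpe-dl380pgen8-02-vm-2 ~]# stat -c '%a' /etc/rhsm/rhsm.conf
600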


When I reinstall subscription-manager, the permissions also look correct:


[root@hpe-dl380pgen8-02-vm-2 ~]# dnf reinstall subscription-manager
Failed to set locale, defaulting to C.UTF-8
Updating Subscription Management repositories.
Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs)                                                                                             20 kB/s | 3.7 kB     00:00    
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)                                                                                          20 kB/s | 3.7 kB     00:00    
Dependencies resolved.
======================================================================================================================================================================================
 Package                                            Architecture                         Version                                    Repository                                   Size
======================================================================================================================================================================================
Reinstalling:
 subscription-manager                               x86_64                               1.27.9-1.el8                               beaker-BaseOS                               1.1 M

Transaction Summary
======================================================================================================================================================================================

Total download size: 1.1 M
Installed size: 4.3 M
Is this ok [y/N]: y
Downloading Packages:
subscription-manager-1.27.9-1.el8.x86_64.rpm                                                                                                           35 MB/s | 1.1 MB     00:00    
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                  34 MB/s | 1.1 MB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                              1/1 
  Running scriptlet: subscription-manager-1.27.9-1.el8.x86_64                                                                                                                     1/1 
  Reinstalling     : subscription-manager-1.27.9-1.el8.x86_64                                                                                                                     1/2 
  Running scriptlet: subscription-manager-1.27.9-1.el8.x86_64                                                                                                                     1/2 
  Running scriptlet: subscription-manager-1.27.9-1.el8.x86_64                                                                                                                     2/2 
  Cleanup          : subscription-manager-1.27.9-1.el8.x86_64                                                                                                                     2/2 
  Running scriptlet: subscription-manager-1.27.9-1.el8.x86_64                                                                                                                     2/2 
  Verifying        : subscription-manager-1.27.9-1.el8.x86_64                                                                                                                     1/2 
  Verifying        : subscription-manager-1.27.9-1.el8.x86_64                                                                                                                     2/2 
Installed products updated.

Reinstalled:
  subscription-manager-1.27.9-1.el8.x86_64                                                                                                                                            

Complete!
[root@hpe-dl380pgen8-02-vm-2 ~]# ll /etc/rhsm/
total 12
drwxr-xr-x. 2 root root   68 Jul 10 02:32 ca
drwxr-xr-x. 2 root root    6 Jun 28 23:44 facts
-rw-r--r--. 1 root root 1662 Jun 28 23:44 logging.conf
drwxr-xr-x. 2 root root    6 Jun 28 23:44 pluginconf.d
-rw-r--r--. 1 root root 2981 Jul 10 04:37 rhsm.conf
-rw-r--r--. 1 root root 2981 Jul 10 02:40 rhsm.conf.backup
drwxr-xr-x. 2 root root   54 Jul 10 02:39 syspurpose
[root@hpe-dl380pgen8-02-vm-2 ~]# date
Fri Jul 10 04:37:32 EDT 2020


I suspect that something else (e.g., some installation script) has altered rhsm.conf and also changed its default permissions.
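
One way to catch this kind of drift is rpm's verify mode, which flags "M" in its output when a file's mode differs from the package default (a sketch; on an affected system the output would look something like this, with S/5/T also flagged because the file contents were rewritten):

[root@hpe-dl380pgen8-02-vm-2 ~]# rpm -V subscription-manager
SM5....T.  c /etc/rhsm/rhsm.conf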

Comment 12 Jiri Hnidek 2020-07-23 11:38:07 UTC
This issue was caused by the --serverurl CLI option used during registration, which rewrote rhsm.conf with the wrong access permissions.

I can confirm that provided PR fixed the issue.

Comment 15 John Sefler 2020-08-21 21:04:14 UTC
Verifying Version...

[root@kvm-06-guest05 ~]# rpm -q --verify subscription-manager
[root@kvm-06-guest05 ~]# 
[root@kvm-06-guest05 ~]# ls -l /etc/rhsm/rhsm.conf
-rw-r--r--. 1 root root 2826 Aug 20 12:06 /etc/rhsm/rhsm.conf
[root@kvm-06-guest05 ~]# 
[root@kvm-06-guest05 ~]# subscription-manager register --serverurl subscription.rhsm.stage.redhat.com:443/subscription
Registering to: subscription.rhsm.stage.redhat.com:443/subscription
Username: stage_auto_syspurpose001
Password: 
The system has been registered with ID: d8aa2c4e-4544-411a-98c2-bc4e01084a40
The registered system name is: kvm-06-guest05.hv2.lab.eng.bos.redhat.com
[root@kvm-06-guest05 ~]# 
[root@kvm-06-guest05 ~]# ls -l /etc/rhsm/rhsm.conf
-rw-r--r--. 1 root root 2832 Aug 21 16:19 /etc/rhsm/rhsm.conf
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
VERIFIED: File permissions remain as desired despite updated file contents for the non-default server URL


[root@kvm-06-guest05 ~]# subscription-manager list --avail --matches 479 --pool-only | tail -1
8a99f9ae6e3a9cac016e3b6777a1009f
[root@kvm-06-guest05 ~]# subscription-manager list --avail --matches 486 --pool-only | tail -1
8a99f9ae6e3a9cac016e3b677d1b00c3
[root@kvm-06-guest05 ~]# subscription-manager list --avail --matches 230 --pool-only | tail -1
8a99f9ae6e3a9cac016e3b677ae500b4
[root@kvm-06-guest05 ~]# 
[root@kvm-06-guest05 ~]# subscription-manager attach --pool=8a99f9ae6e3a9cac016e3b6777a1009f --pool=8a99f9ae6e3a9cac016e3b677d1b00c3 --pool=8a99f9ae6e3a9cac016e3b677ae500b4
Successfully attached a subscription for: Red Hat Enterprise Linux, Self-Support (128 Sockets, NFR, Partner Only)
Successfully attached a subscription for: Red Hat Beta Access
Successfully attached a subscription for: Red Hat Enterprise Linux High Touch Beta
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
VERIFIED: root has attached entitlements for RHEL GA, Beta, and HTB, so the ubi8:latest container will be able to access whichever RHEL product is currently baked into the latest image.


[root@kvm-06-guest05 ~]# ls -l /usr/share/rhel/secrets
total 0
lrwxrwxrwx. 1 root root 20 Aug 20 04:32 etc-pki-entitlement -> /etc/pki/entitlement
lrwxrwxrwx. 1 root root 28 Aug 20 04:32 redhat.repo -> /etc/yum.repos.d/redhat.repo
lrwxrwxrwx. 1 root root  9 Aug 20 04:32 rhsm -> /etc/rhsm
[root@kvm-06-guest05 ~]# 
[root@kvm-06-guest05 ~]# ls -l /usr/share/rhel/secrets/rhsm/rhsm.conf
-rw-r--r--. 1 root root 2832 Aug 21 16:19 /usr/share/rhel/secrets/rhsm/rhsm.conf
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
VERIFIED: The shared rhsm.conf file that will be accessible from the container has non-root readable permissions 


[root@kvm-06-guest05 ~]# 
[root@kvm-06-guest05 ~]# useradd testUser
[root@kvm-06-guest05 ~]# su - testUser -c 'podman login -u **REDACTED** -p **REDACTED** registry.redhat.io'
Login Succeeded!
[root@kvm-06-guest05 ~]# su - testUser -c 'podman pull registry.redhat.io/ubi8:latest'
Trying to pull registry.redhat.io/ubi8:latest...
Getting image source signatures
Copying blob 47db82df7f3f done  
Copying blob 77c58f19bd6e done  
Copying config a1f8c96997 done  
Writing manifest to image destination
Storing signatures
a1f8c969978652a6d1b2dfb265ae0c6c346da69000160cd3ecd5f619e26fa9f3
[root@kvm-06-guest05 ~]# su - testUser -c 'podman images'
REPOSITORY               TAG     IMAGE ID      CREATED      SIZE
registry.redhat.io/ubi8  latest  a1f8c9699786  4 weeks ago  211 MB
[root@kvm-06-guest05 ~]# 
[root@kvm-06-guest05 ~]# su - testUser -c 'podman run --rm registry.redhat.io/ubi8 dnf repolist --disablerepo=ubi-*'
Updating Subscription Management repositories.
Unable to read consumer identity
Subscription Manager is operating in container mode.
repo id                          repo name
rhel-8-for-x86_64-appstream-rpms Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
rhel-8-for-x86_64-baseos-rpms    Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
VERIFIED: A non-root user (testUser) was created and used to successfully pull the latest ubi8 image from registry.redhat.io and successfully access an entitlement (for RHEL GA) that was attached by root to the host system.


[root@kvm-06-guest05 ~]# su - testUser -c 'podman run -t -i --rm registry.redhat.io/ubi8'
[root@a9d2c71cd0ba /]# ls -l  /etc/pki/entitlement/
total 0
[root@a9d2c71cd0ba /]# ls -l  /etc/pki/entitlement-host/
total 100
-rw-r--r--. 1 root root  3243 Aug 21 20:42 1893191342479235802-key.pem
-rw-r--r--. 1 root root 72027 Aug 21 20:42 1893191342479235802.pem
-rw-r--r--. 1 root root  3243 Aug 21 20:42 1971787791939613062-key.pem
-rw-r--r--. 1 root root  8139 Aug 21 20:42 1971787791939613062.pem
-rw-r--r--. 1 root root  3243 Aug 21 20:42 5825943239187071887-key.pem
-rw-r--r--. 1 root root  6328 Aug 21 20:42 5825943239187071887.pem
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
VERIFIED: The non-root user (testUser) can access entitlements from the host from within the ubi8 container because they are shared under /etc/pki/entitlement-host/



Moving to VERIFIED

Comment 16 John Sefler 2020-08-21 21:09:13 UTC
I should also add that the automated ClosedLoopTests.testCantPullUBIImagesRootless(), which was blocked by this bug, is now passing.

Comment 19 errata-xmlrpc 2020-11-04 01:39:09 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (subscription-manager bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4460

