Bug 1651228
| Summary: | Cannot start rootless containers when /home is mounted noexec | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Wouter Hummelink <wouter.hummelink> |
| Component: | podman | Assignee: | Daniel Walsh <dwalsh> |
| Status: | CLOSED UPSTREAM | QA Contact: | atomic-bugs <atomic-bugs> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 8.0 | CC: | bbaude, dwalsh, ebiederm, jligon, lsm5, mheon, tjay |
| Target Milestone: | rc | | |
| Target Release: | 8.0 | | |
| Hardware: | x86_64 | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-01-14 22:21:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
I don't think we can fix this on the Podman side - we put rootless containers in the user's home directory, and mounting said directory noexec means we can't run anything in those containers.

It is possible to work around this by manually specifying a container storage path that is not on a noexec mount. Copy /etc/containers/storage.conf to ~/.config/containers/ (creating the directory if necessary), then adjust the paths so that runroot points to a unique per-user temporary directory (/run/user/$UID/libpod/storage/ is generally what we use, I think?) and graphroot points to a directory that is not on a noexec mount, that your user has read/write access to, and that is not on a tmpfs.

These proposed solutions do make sense. Maybe add something in the documentation about these gotchas? (And possibly make the error message a bit more descriptive about the actual error.)

Agree on both counts - more descriptive errors and documentation on the workaround would help a lot here.

Yup, sorry, I misread the issue.

Wouter, I think the error message is the best we can do. We are telling you that exec is denied. I am not crazy about podman attempting to diagnose the issue, since there are many reasons to get permission denied. We can add some information to the troubleshooting guide on GitHub, but I am not sure where else we could put information that would help in this situation.

I opened this PR to solve this bugzilla: https://github.com/containers/libpod/pull/2137

Since the PR has been merged, I am going to close this bugzilla.
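For concreteness, a minimal sketch of the workaround described above. The paths below are examples, not mandated values: replace 1000 with your actual UID, and pick any graphroot directory your user can write to on an exec-capable, non-tmpfs filesystem.

```
# A per-user storage config overrides the system-wide /etc/containers/storage.conf
mkdir -p ~/.config/containers
cp /etc/containers/storage.conf ~/.config/containers/storage.conf

# Edit ~/.config/containers/storage.conf so the [storage] section points at
# exec-capable locations, for example (hypothetical paths):
#   runroot   = "/run/user/1000/libpod/storage"
#   graphroot = "/srv/containers/vagrant/storage"   # not noexec, not tmpfs
```

Note that after changing graphroot, images previously pulled under ~/.local/share/containers are no longer visible to podman and have to be pulled again.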
Description of problem:
When running podman, starting a rootless container fails with an error.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux release 8.0 Beta (Ootpa)
podman.x86_64 0.10.1.3-5.gitdb08685.el8+2131+7e3e9e07

How reproducible:

Steps to Reproduce:
1. dnf install @container-tools:1.0/default
2. mount -o remount,noexec /home
3. podman run centos:7

Actual results:
standard_init_linux.go:203: exec user process caused "permission denied"

Expected results:
A running container.

Additional info:
There are several warnings before the actual failure. Removing the mounts.conf share for secrets did not help.

```
DEBU[0095] Start untar layer
DEBU[0099] Untar time: 4.188800822s
DEBU[0099] setting image creation date to 2018-10-09 18:19:48.447478476 +0000 UTC
DEBU[0099] created new image ID "75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0099] set names of image "75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d" to [docker.io/library/centos:7]
DEBU[0099] saved image metadata "{}"
DEBU[0099] parsed reference into "[vfs@/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/centos:7"
DEBU[0099] parsed reference into "[vfs@/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0099] exporting opaque data as blob "sha256:75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0099] parsed reference into "[vfs@/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0099] exporting opaque data as blob "sha256:75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0099] parsed reference into "[vfs@/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
WARN[0099] AppArmor security is not available in rootless mode
DEBU[0099] Using bridge netmode
DEBU[0099] parsed reference into "[vfs@/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0099] exporting opaque data as blob "sha256:75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0099] Creating dest directory: /home/vagrant/.local/share/containers/storage/vfs/dir/c9d3e7aa42a0d175be34b27b4f2c9ade00563554fd982e9da332933ec7982676
DEBU[0099] Calling TarUntar(/home/vagrant/.local/share/containers/storage/vfs/dir/f972d139738dfcd1519fd2461815651336ee25a8b54c358834c50af094bb262f, /home/vagrant/.local/share/containers/storage/vfs/dir/c9d3e7aa42a0d175be34b27b4f2c9ade00563554fd982e9da332933ec7982676)
DEBU[0099] TarUntar(/home/vagrant/.local/share/containers/storage/vfs/dir/f972d139738dfcd1519fd2461815651336ee25a8b54c358834c50af094bb262f /home/vagrant/.local/share/containers/storage/vfs/dir/c9d3e7aa42a0d175be34b27b4f2c9ade00563554fd982e9da332933ec7982676)
DEBU[0102] created container "73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9"
DEBU[0102] container "73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9" has work directory "/home/vagrant/.local/share/containers/storage/vfs-containers/73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9/userdata"
DEBU[0102] container "73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9" has run directory "/run/user/1000/run/vfs-containers/73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9/userdata"
DEBU[0102] New container created "73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9"
DEBU[0102] container "73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9" has CgroupParent "/libpod_parent/libpod-73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9"
DEBU[0102] Not attaching to stdin
DEBU[0102] mounted container "73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9" at "/home/vagrant/.local/share/containers/storage/vfs/dir/c9d3e7aa42a0d175be34b27b4f2c9ade00563554fd982e9da332933ec7982676"
DEBU[0102] Created root filesystem for container 73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9 at /home/vagrant/.local/share/containers/storage/vfs/dir/c9d3e7aa42a0d175be34b27b4f2c9ade00563554fd982e9da332933ec7982676
WARN[0102] error mounting secrets, skipping: getting host secret data failed: failed to read secrets from "/usr/share/rhel/secrets": open /usr/share/rhel/secrets: permission denied
DEBU[0102] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0102] parsed reference into "[vfs@/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0102] parsed reference into "[vfs@/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0102] exporting opaque data as blob "sha256:75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0102] parsed reference into "[vfs@/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0102] exporting opaque data as blob "sha256:75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0102] parsed reference into "[vfs@/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@75835a67d1341bdc7f4cc4ed9fa1631a7d7b6998e9327272afea342d90c4ab6d"
DEBU[0102] Created OCI spec for container 73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9 at /home/vagrant/.local/share/containers/storage/vfs-containers/73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9/userdata/config.json
DEBU[0102] /usr/libexec/podman/conmon messages will be logged to syslog
DEBU[0102] running conmon: /usr/libexec/podman/conmon args=[-c 73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9 -u 73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9 -r /usr/bin/runc -b /home/vagrant/.local/share/containers/storage/vfs-containers/73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9/userdata -p /run/user/1000/run/vfs-containers/73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9/userdata/pidfile -l /home/vagrant/.local/share/containers/storage/vfs-containers/73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog]
WARN[0102] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied
DEBU[0102] Received container pid: 29445
DEBU[0102] Created container 73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9 in OCI runtime
DEBU[0102] Enabling signal proxying
DEBU[0102] Attaching to container 73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9
DEBU[0102] connecting to socket /run/user/1000/libpod/tmp/socket/73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9/attach
DEBU[0102] Started container 73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9
standard_init_linux.go:203: exec user process caused "permission denied"
DEBU[0102] Checking container 73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9 status...
DEBU[0102] Cleaning up container 73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9
DEBU[0102] Network is already cleaned up, skipping...
DEBU[0103] unmounted container "73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9"
DEBU[0103] Successfully cleaned up container 73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9
```
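Not part of the original report, but a quick sketch of how to confirm the diagnosis on a box set up as in the steps above: check the mount flags on /home, then re-run the container with exec restored.

```
# "noexec" in the mount options is what makes runc's execve() of the
# container entrypoint fail with EACCES ("permission denied")
findmnt -no OPTIONS /home

# Restoring exec on /home (as root) should let the same container start
mount -o remount,exec /home
podman run centos:7 echo ok
```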