Bug 2223598
| Summary: | OpenShift pod disks are sometimes unmounted when podman is invoked | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Alona Kaplan <alkaplan> |
| Component: | podman | Assignee: | Paul Holzinger <pholzing> |
| Status: | ASSIGNED | QA Contact: | atomic-bugs <atomic-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 8.6 | CC: | akalenyu, bbaude, bnemec, dwalsh, edwardh, gscrivan, jligon, jnovy, lpivarc, lsm5, mboddu, mheon, nalin, pholzing, phoracek, pthomas, tsweeney |
| Target Milestone: | rc | Flags: | pholzing: needinfo? (gscrivan) |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Alona Kaplan
2023-07-18 11:16:28 UTC
Hi Alex, could you please add your input to this as well?

Summary: We tried auditing unmount calls while this test runs; one interesting entry was a call by podman:

```
type=SYSCALL msg=audit(1687717442.714:9602): arch=c000003e syscall=166 success=yes exit=0 a0=c00061f0e0 a1=2 a2=0 a3=0 items=1 ppid=2604531 pid=2604626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="podman" exe="/usr/bin/podman" subj=system_u:system_r:container_runtime_t:s0 key=(null)ARCH=x86_64 SYSCALL=umount2 AUID="unset" UID="root" GID="root" EUID="root"
```

podman is apparently being used by some MCO flows. We tried to catch the process that invokes these podman calls:

```
sh-4.4# grep -rnw '/root/ps_logs' -e '200072'
/root/ps_logs/ps_2023-06-28_15:00:59.log:372:root 200072 0.0 0.0 23188 3108 ? Ss 15:00 0:00 /bin/bash /usr/local/bin/resolv-prepender.sh
/root/ps_logs/ps_2023-06-28_15:01:00.log:372:root 200072 0.0 0.0 23188 3108 ? Ss 15:00 0:00 /bin/bash /usr/local/bin/resolv-prepender.sh
sh-4.4# grep -rnw '/root/ps_logs' -e '200213'
/root/ps_logs/ps_2023-06-28_15:01:00.log:375:root 200213 0.0 0.0 145360 3788 ? Ssl 15:00 0:00 /usr/bin/conmon --api-version 1 <TRIMMED>
```

Suspecting the resolv-prepender.sh script and podman in general, we tried the following:

- Removing the podman binary from the worker nodes -> issue does not reproduce
- Commenting out the resolv-prepender.sh script on the worker nodes -> issue does not reproduce

Could there be some clash between podman and CRI-O? Any tips to isolate this further? @bnemec in case any of this adds up to issues you have seen before.

I have not seen anything like this before, and we do run tests on kubernetes-nmstate that create bridges; it's possible we do it with less frequency than CNV though. This seems like a podman or CRI-O bug, since I don't think their containers should be stepping on each other.

I am moving this BZ to Podman for further investigation. Unfortunately we don't have an isolated reproducer, but perhaps the mountinfo and syscalls may ring some bells.
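For reference, an audit rule along the following lines produces entries like the one above. This is only a sketch: the rule actually used during the investigation is not shown in this bug, and the key name `pod-umount-debug` is arbitrary.

```bash
# Log every umount2() call on the node (syscall 166 on x86_64) and tag it
# with a key so the events are easy to pull out of the audit log.
auditctl -a always,exit -F arch=b64 -S umount2 -k pod-umount-debug

# After reproducing, list the tagged events with interpreted fields.
ausearch -k pod-umount-debug -i
```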
Podman team, please feel free to move it back to OpenShift Virtualization if you consider this INSUFFICIENT_DATA or ask for additional details.
We have only been able to reproduce it in our e2e test suite, but we assume what happens underneath is:
1. We run a pod on OpenShift with an unusual container image: a VM container disk holding one big file.
2. Changes on the worker trigger a script that uses podman to fetch some information about the system [1].
3. The moment this podman container is started, our pod created in step 1 loses its mounted disk (see the monitoring sketch after the script below).
The comments above should provide additional details.
Podman: podman-4.2.0-6.1.rhaos4.12.el8.x86_64
CRI-O: cri-o-1.25.3-5.rhaos4.12.git44a2cb2.el8.x86_64
Reproducible: Sometimes
[1]

```bash
NAMESERVER_IP="$(/usr/bin/podman run --rm \
    --authfile /var/lib/kubelet/config.json \
    --net=host \
    quay.io/openshift-release-dev/ocp-v4.0-art-dev@... \
    node-ip \
    show \
    --retry-on-failure \
    "192.168.0.5" \
    "192.168.0.7" )"
```
If this started with 4.12.6 and works with 4.12.5, what are the working podman and cri-o versions in 4.12.5? That should help isolate the changes that went into these packages.

@nalin @gscrivan Do you know of any storage problem where podman could unmount cri-o containers?

I'm not aware of code paths which cause the libraries to unmount container roots outside of calls which explicitly unmount - or remove - the rootfs for a particular container. One case where this would be expected is when the container which is run in the example from comment #4 exits.

Looking at the diff between 4.12.5 and 4.12.6 [1]:

https://releases-rhcos-art.apps.ocp-virt.prod.psi.redhat.com/diff.html?arch=x86_64&first_release=412.86.202302170236-0&first_stream=prod%2Fstreams%2F4.12&second_release=412.86.202302282003-0&second_stream=prod%2Fstreams%2F4.12

podman was unchanged; cri-o was bumped from cri-o-0-1.25.2-6.rhaos4.12.git3c4e50c.el8-x86_64 to cri-o-0-1.25.2-10.rhaos4.12.git0a083f9.el8-x86_64.

[1] Linked from https://openshift-release.apps.ci.l2s4.p1.openshiftapps.com/releasestream/4-stable/release/4.12.6
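To confirm which package versions are actually installed on the nodes of each release, something like the following can be run against one worker per release. This is a sketch: the node name is a placeholder, and it assumes the usual "oc debug" host chroot access to an RHCOS node.

```bash
# Compare the relevant package versions on a 4.12.5 node and a 4.12.6 node.
oc debug node/<worker-node> -- chroot /host \
    rpm -q podman cri-o conmon containers-common runc
```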