Bug 2223598 - OpenShift pod disks are sometimes unmounted when podman is invoked [NEEDINFO]
Summary: OpenShift pod disks are sometimes unmounted when podman is invoked
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Paul Holzinger
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-07-18 11:16 UTC by Alona Kaplan
Modified: 2023-08-03 12:47 UTC
CC List: 17 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Type: Bug
Target Upstream Version:
Embargoed:
pholzing: needinfo? (gscrivan)




Links
Red Hat Issue Tracker CNV-31124 - Last Updated 2023-07-18 11:17:09 UTC
Red Hat Issue Tracker RHELPLAN-164216 - Last Updated 2023-08-02 14:38:27 UTC

Description Alona Kaplan 2023-07-18 11:16:28 UTC
Description of problem:


We frequently see a specific type of failure on the cnv-4.12-network-ovn lane:
https://main-jenkins-csb-cnvqe.apps.ocp-c1.prod.psi.redhat.com/job/test-kubevirt-cnv-4.12-network-ovn-ocs/
These usually present themselves as a test failure with the following error:

Unexpected Warning event received: testvmi-pnsj4,77452e3c-155a-44dd-bfa0-43ffacfe9bb5: failed to detect root mount point of containerDisk disk0 on the node: no mount containing / found in the mount namespace of pid 1
Expected <string>: Warning not to equal <string>: Warning

These failures are exclusive to the network 4.12 lane.

This bug is about investigating the root cause of these failures.

Current findings:

    /proc/1/mountinfo on the node doesn't have the containerdisk container mount in it (the mount doesn't exist?) - a quick check for this is sketched right after this list
    [test_id:676] seems to trigger it often (local cluster-sync/functest against external cluster)
    This happens from 4.12.6 onwards (4.12.5 doesn't get these errors)
    Diff between 4.12.5 and 4.12.6
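
A quick check for the first finding, from a debug shell on an affected node (a sketch; the grep patterns are illustrative and the mount may appear under a different name):

    # is the containerDisk mount visible in the mount namespace of pid 1?
    grep -i -e containerdisk -e disk0 /proc/1/mountinfo \
        || echo "no containerDisk mount visible to pid 1"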

It seems that podman is responsible for the unmount.
The lane passed successfully when podman was uninstalled from the node.

We suspect that the network tests create bridges on the node, which invokes `/usr/local/bin/resolv-prepender.sh`, and that script starts a podman container.
Since the node uses CRI-O for Kubernetes, there appears to be some conflict between podman and CRI-O.
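
If that suspicion is correct, the same code path can probably be exercised by hand by creating and removing a throwaway bridge on the node (a sketch; it assumes resolv-prepender.sh is triggered via NetworkManager dispatcher events, which this bug does not itself confirm, and the interface name is arbitrary):

    # create a scratch bridge to fire the NetworkManager dispatcher scripts
    nmcli connection add type bridge ifname testbr0 con-name testbr0
    nmcli connection up testbr0
    # ...watch /proc/1/mountinfo and ps output for resolv-prepender.sh / podman here...
    nmcli connection delete testbr0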



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Edward Haas 2023-07-18 12:37:55 UTC
Hi Alex,

Could you please add your input to this as well?

Comment 2 Alex Kalenyuk 2023-07-18 13:30:12 UTC
Summary:

We tried auditing unmount calls while this test runs;
one interesting entry was a call by podman:
type=SYSCALL msg=audit(1687717442.714:9602): arch=c000003e syscall=166 success=yes exit=0 a0=c00061f0e0 a1=2 a2=0 a3=0 items=1 ppid=2604531 pid=2604626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="podman" exe="/usr/bin/podman" subj=system_u:system_r:container_runtime_t:s0 key=(null)ARCH=x86_64 SYSCALL=umount2 AUID="unset" UID="root" GID="root" EUID="root"
podman is apparently being used for some MCO flows.
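
For reference, an audit rule of roughly this shape is enough to capture such umount2 calls (a sketch; the exact rule used during this investigation is not recorded in the bug):

    # log every umount2 syscall on the 64-bit ABI, then search the audit log for them
    auditctl -a always,exit -F arch=b64 -S umount2
    ausearch -sc umount2 -i
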
We tried to catch the process that invokes these podman calls:
sh-4.4# grep -rnw '/root/ps_logs' -e '200072'
/root/ps_logs/ps_2023-06-28_15:00:59.log:372:root      200072  0.0  0.0  23188  3108 ?        Ss   15:00   0:00 /bin/bash /usr/local/bin/resolv-prepender.sh
/root/ps_logs/ps_2023-06-28_15:01:00.log:372:root      200072  0.0  0.0  23188  3108 ?        Ss   15:00   0:00 /bin/bash /usr/local/bin/resolv-prepender.sh
sh-4.4# grep -rnw '/root/ps_logs' -e '200213'
/root/ps_logs/ps_2023-06-28_15:01:00.log:375:root      200213  0.0  0.0 145360  3788 ?        Ssl  15:00   0:00 /usr/bin/conmon --api-version 1 <TRIMMED>
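
(The ps snapshots above were collected with a simple polling loop on the node; roughly the following, though the actual collection script is not attached here:)

    # take a `ps aux` snapshot every second, named like ps_2023-06-28_15:00:59.log
    mkdir -p /root/ps_logs
    while true; do
        ps aux > "/root/ps_logs/ps_$(date +%F_%T).log"
        sleep 1
    done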

Suspecting the resolv-prepender.sh script and podman in general, we tried the following:
- Removing the podman binary from the worker nodes -> Issue does not reproduce
- Commenting out the resolv-prepender.sh script on the worker nodes -> Issue does not reproduce

Could there be some clash between podman and crio? Any tips to isolate this further?
@bnemec, tagging you in case any of this matches issues you have seen before.

Comment 3 Ben Nemec 2023-07-24 22:58:14 UTC
I have not seen anything like this before, even though we do run tests on kubernetes-nmstate that create bridges. It's possible we do it with less frequency than CNV, though.

This seems like a podman or crio bug since I don't think their containers should be stepping on each other.

Comment 4 Petr Horáček 2023-08-02 09:00:28 UTC
I am moving this BZ to Podman for further investigation. Unfortunately we don't have an isolated reproducer, but perhaps the mountinfo and syscall details above will ring some bells.

Podman team, please feel free to move it back to OpenShift Virtualization if you consider this INSUFFICIENT_DATA or ask for additional details.

We have only been able to reproduce it in our e2e test suite, but we assume what happens underneath is:
1. We run a pod on OpenShift with an unusual container image - a VM containerDisk image containing one big file.
2. Changes in the worker trigger a script that uses podman to fetch some information about the system [1].
3. The moment this podman container is started, our pod created in step 1 loses its mounted disk (a manual check along these lines is sketched at the end of this comment).

The comments above should provide additional details.

Podman: podman-4.2.0-6.1.rhaos4.12.el8.x86_64
CRI-O: cri-o-1.25.3-5.rhaos4.12.git44a2cb2.el8.x86_64
Reproducible: Sometimes

[1]
        NAMESERVER_IP="$(/usr/bin/podman run --rm \
            --authfile /var/lib/kubelet/config.json \
            --net=host \
            quay.io/openshift-release-dev/ocp-v4.0-art-dev@... \
            node-ip \
            show \
            --retry-on-failure \
            "192.168.0.5"  \
            "192.168.0.7" )"

Comment 7 Paul Holzinger 2023-08-02 16:57:32 UTC
If this started with 4.12.6 and works in 4.12.5, what are the podman and cri-o versions in 4.12.5? That should help isolate the changes that went into these packages.

@nalin @gscrivan Do you know of any storage problem where podman could unmount cri-o containers?

Comment 8 Nalin Dahyabhai 2023-08-02 22:07:01 UTC
I'm not aware of code paths which cause the libraries to unmount container roots outside of calls which explicitly unmount - or remove - the rootfs for a particular container.  One case where this would be expected is when the container run in the example from comment #4 exits.
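
For comparison with the audit entry in comment #2, those expected unmounts on exit of a --rm container can be observed directly (a sketch; the image is just a placeholder):

    # trace umount2 calls made by podman and its children while a --rm container exits
    strace -f -e trace=umount2 podman run --rm registry.access.redhat.com/ubi8/ubi true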

Comment 9 Petr Horáček 2023-08-03 08:29:27 UTC
Looking at the diff between 4.12.5 and 4.12.6 [1]: https://releases-rhcos-art.apps.ocp-virt.prod.psi.redhat.com/diff.html?arch=x86_64&first_release=412.86.202302170236-0&first_stream=prod%2Fstreams%2F4.12&second_release=412.86.202302282003-0&second_stream=prod%2Fstreams%2F4.12

podman was unchanged; cri-o was bumped from cri-o-0-1.25.2-6.rhaos4.12.git3c4e50c.el8-x86_64 to cri-o-0-1.25.2-10.rhaos4.12.git0a083f9.el8-x86_64.
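
To confirm which builds are actually installed on a given worker, a quick check from a debug shell should suffice (a sketch):

    # query the installed podman and cri-o packages on the node
    rpm -q podman cri-o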

[1] Linked from https://openshift-release.apps.ci.l2s4.p1.openshiftapps.com/releasestream/4-stable/release/4.12.6

