Description of problem:

CI jobs are failing because the "verify /run filesystem contents" test is failing:
https://search.ci.openshift.org/?search=+verify+%2Frun+filesystem+contents&maxAge=48h&context=1&type=all&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
`/run` on a UBI8 container now seems to contain a lot more stuff:

```
/run:
console  cryptsetup  faillock  lock  log  rhsm  secrets  sepermit  setrans  systemd  user

/run/console:

/run/cryptsetup:

/run/faillock:

/run/lock:
subsys

/run/lock/subsys:

/run/log:

/run/rhsm:

/run/secrets:
rhsm

/run/secrets/rhsm:
ca

/run/secrets/rhsm/ca:
redhat-entitlement-authority.pem  redhat-uep.pem

/run/sepermit:

/run/setrans:

/run/systemd:
ask-password  machines  seats  sessions  shutdown  users

/run/systemd/ask-password:

/run/systemd/machines:

/run/systemd/seats:

/run/systemd/sessions:

/run/systemd/shutdown:

/run/systemd/users:
```

See https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_oc/821/pull-ci-openshift-oc-master-e2e-aws/1391875953382133760
I don't think this is from a change to UBI:

```
[root@keith-dc2-crunchtools-com ~]# podman run -it ubi8 bash
[root@fc2074814725 /]# find /run/
/run/
/run/lock
/run/.containerenv
/run/secrets
/run/secrets/rhsm
/run/secrets/rhsm/syspurpose
/run/secrets/rhsm/syspurpose/valid_fields.json
/run/secrets/rhsm/syspurpose/syspurpose.json
/run/secrets/rhsm/rhsm.conf.kat-backup
/run/secrets/rhsm/rhsm.conf
/run/secrets/rhsm/logging.conf
/run/secrets/rhsm/facts
/run/secrets/rhsm/facts/uuid.facts
/run/secrets/rhsm/facts/katello.facts
/run/secrets/rhsm/ca
/run/secrets/rhsm/ca/redhat-uep.pem
/run/secrets/rhsm/ca/redhat-entitlement-authority.pem
/run/secrets/rhsm/ca/katello-server-ca.pem
/run/secrets/rhsm/ca/katello-default-ca.pem
/run/secrets/redhat.repo
/run/secrets/etc-pki-entitlement
/run/secrets/etc-pki-entitlement/6470214438861842971.pem
/run/secrets/etc-pki-entitlement/6470214438861842971-key.pem
```
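A quick way to pin down which image introduced the extra content is to diff the `/run` listings of two images. This is only a sketch: in practice each listing would come from something like `podman run --rm <image> ls -1 /run`, but here abbreviated sample listings are inlined (they are illustrative, not full captures) so the comparison itself runs standalone.

```shell
#!/bin/sh
# Print the /run entries present in the second listing but not the first.
# Each argument is a newline-separated directory listing.
diff_run_listings() {
  tmp1=$(mktemp); tmp2=$(mktemp)
  printf '%s\n' "$1" | sort > "$tmp1"
  printf '%s\n' "$2" | sort > "$tmp2"
  diff "$tmp1" "$tmp2" | sed -n 's/^> //p'   # keep only the added lines
  rm -f "$tmp1" "$tmp2"
}

# Abbreviated sample listings, standing in for ubi8 and origin-tools:
ubi8_run="lock
secrets"
tools_run="console
lock
secrets
systemd"

diff_run_listings "$ubi8_run" "$tools_run"   # prints: console, systemd
```

Running the same diff against the real listings would surface the systemd/console debris directly instead of eyeballing two recursive listings.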
@Scott I believe the node team is checking which changes to RHCOS or cri-o added the extra content to /run.
I asked Scott to check because I can't think of any CRI-O changes that would cause this. But so far I've just added another person who's confused about this :D
This looks like a bad image; could someone give the exact image this came from?
@Dan So it looks like the test runs a Dockerfile build with the origin-tools image [1]. That image appears to have been updated to include the `stress-ng` and `fio` packages; when I pull it from quay, the console, systemd, and other directories are present. We can safely rule out node/cri-o as the root cause of this issue.

Why those packages added the extra bits to /run, and why it took over two weeks for these changes to show up in CI, is a mystery to me [2]. I don't see these extra bits in the `oc` image, which leads me to believe they are specific to the tools image. Moving this back to the Build team so we can make our test more resilient.

[1] https://quay.io/repository/openshift/origin-tools?tag=latest&tab=tags
[2] https://github.com/openshift/oc/pull/771
Note that the cli image doesn't have this extra content in /run, so we can use that image in our tests instead:

```
$ podman run --rm -i -t quay.io/openshift/origin-cli:latest /bin/bash
[root@1fdfbfe41dcb /]# ls /run
lock  rhsm  secrets
[root@1fdfbfe41dcb /]#
```
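Another way to make the test more resilient is to assert against an allowlist of expected `/run` entries instead of an exact listing, so benign additions like `.containerenv` don't break CI. The allowlist and helper below are illustrative assumptions, not the actual test code:

```shell
#!/bin/sh
# Hypothetical allowlist of /run entries the test would tolerate.
allowlist="lock rhsm secrets .containerenv"

# Fail (non-zero) if any entry in the space-separated listing falls
# outside the allowlist, naming the offender.
check_run_contents() {
  for entry in $1; do
    case " $allowlist " in
      *" $entry "*) ;;                               # expected entry, ignore
      *) echo "unexpected /run entry: $entry"; return 1 ;;
    esac
  done
}

check_run_contents "lock rhsm secrets" && echo "origin-cli: ok"
check_run_contents "console systemd" || echo "origin-tools listing: flagged"
```

The first call passes (all entries are allowlisted); the second prints `unexpected /run entry: console` and is flagged, which is exactly the failure mode this bug hit.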
Verified on the PR; manually changing the bug status.
Additional changes are coming for this; moving back to POST.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438