Bug 2033311
| Summary: | FIO should not create aide.reinit under /etc/ | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Jakub Hrozek <jhrozek> |
| Component: | File Integrity Operator | Assignee: | Matt Rogers <mrogers> |
| Status: | CLOSED ERRATA | QA Contact: | Prashant Dhamdhere <pdhamdhe> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.10 | CC: | jhrozek, kmccarro, obockows, pdhamdhe, stevsmit |
| Target Milestone: | --- | | |
| Target Release: | 4.10.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Release Note |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-01-25 12:10:10 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |

Doc Text (Release Note):

Previously, a system with the File Integrity Operator installed might interrupt an OpenShift Container Platform update because of the `/etc/kubernetes/aide.reinit` file. This occurred if `/etc/kubernetes/aide.reinit` was present but was later removed before the `ostree` validation. With this update, `/etc/kubernetes/aide.reinit` is moved to the `/run` directory so that it does not conflict with the OpenShift Container Platform update.
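The release-note fix above comes down to a path choice: `/etc` content is validated by `ostree` during an OpenShift update, while `/run` is a tmpfs that is cleared on reboot and is not part of the validated deployment. A minimal sketch of that distinction (illustrative only, not the operator's actual code; the `classify` helper is hypothetical):

```shell
# Illustrative sketch: why moving the AIDE re-init marker from /etc to /run
# avoids conflicting with ostree validation during an OCP update.
old_marker=/etc/kubernetes/aide.reinit   # pre-fix location (could break updates)
new_marker=/run/aide.reinit              # post-fix location

# Hypothetical helper: classify a path by whether ostree validates it.
classify() {
  case "$1" in
    /run/*) echo "safe: outside ostree-managed content" ;;
    /etc/*) echo "risky: /etc is validated during the update" ;;
  esac
}

classify "$old_marker"   # risky: /etc is validated during the update
classify "$new_marker"   # safe: outside ostree-managed content
```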
Description
Jakub Hrozek 2021-12-16 13:40:58 UTC
[PR Pre-Merge Testing] This looks good now: the AIDE daemon looks for its temporary runtime files inside the /run directory instead of /etc.

```
$ gh pr checkout 212
remote: Enumerating objects: 14, done.
remote: Counting objects: 100% (14/14), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 14 (delta 10), reused 14 (delta 10), pack-reused 0
Unpacking objects: 100% (14/14), 2.93 KiB | 166.00 KiB/s, done.
From https://github.com/openshift/file-integrity-operator
 * [new ref]  refs/pull/212/head -> move_aide_reinit
Switched to branch 'move_aide_reinit'
A new release of gh is available: 2.3.0 → v2.4.0
https://github.com/cli/cli/releases/tag/v2.4.0

$ git branch
  master
* move_aide_reinit

$ make deploy-local
Ensuring 'openshift-file-integrity' namespace/project
namespace/openshift-file-integrity created
/home/pdhamdhe/go/bin/operator-sdk build quay.io/file-integrity-operator/file-integrity-operator:latest --image-builder podman
INFO[0023] Building OCI image quay.io/file-integrity-operator/file-integrity-operator:latest
STEP 1: FROM registry.access.redhat.com/ubi8/go-toolset AS builder
STEP 2: USER root
--> Using cache 86a3b78bf1c18fae4af4b48de5c3c365ca6e0dd2ab9e9752cc1525d14264a402
--> 86a3b78bf1c
STEP 3: WORKDIR /go/src/github.com/openshift/file-integrity-operator
--> Using cache 18cb87e0b950d19474dbe0f534e43433d7741e15fb222e85c92b2bf3b72f5d58
--> 18cb87e0b95
STEP 4: ENV GOFLAGS="-mod=vendor"
--> Using cache 85900c07e47343dec62129f0a4df308e8bf4cc1405c5e4788834c2b328b65185
--> 85900c07e47
STEP 5: COPY . .
--> 8a6dec84c4d
STEP 6: RUN make operator-bin
GOFLAGS=-mod=vendor GO111MODULE=auto go build -o /go/src/github.com/openshift/file-integrity-operator/build/_output/bin/file-integrity-operator github.com/openshift/file-integrity-operator/cmd/manager
--> 3046eff2e5a
STEP 7: FROM registry.fedoraproject.org/fedora-minimal:34
STEP 8: RUN microdnf -y install aide golang && microdnf clean all
--> Using cache b2932b488b31ba7c3c319b8d1ac743ec869e79e178e7744264c67f88b580d9d2
--> b2932b488b3
STEP 9: ENV OPERATOR=/usr/local/bin/file-integrity-operator USER_UID=1001 USER_NAME=file-integrity-operator
--> Using cache 56a651e02c00a1277ff751b47ce2392e3c9dc88b75a4acaad162f4dc8fa882f7
--> 56a651e02c0
STEP 10: COPY --from=builder /go/src/github.com/openshift/file-integrity-operator/build/_output/bin/file-integrity-operator ${OPERATOR}
--> 10232bb0e41
STEP 11: COPY build/bin /usr/local/bin
--> 8648d4c1d7a
STEP 12: RUN /usr/local/bin/user_setup
+ mkdir -p /root
+ chown 1001:0 /root
+ chmod ug+rwx /root
+ chmod g+rw /etc/passwd
+ rm /usr/local/bin/user_setup
--> 811e94adb38
STEP 13: ENTRYPOINT ["/usr/local/bin/entrypoint"]
--> f8183d3ecd5
STEP 14: USER ${USER_UID}
STEP 15: COMMIT quay.io/file-integrity-operator/file-integrity-operator:latest
--> ae655d0119e
ae655d0119ed7828f9ac36c03788b157809d4493df6f6fd79dffd287ca8da9df
INFO[0085] Operator build complete.
podman build -t quay.io/file-integrity-operator/file-integrity-operator-bundle:latest -f bundle.Dockerfile .
STEP 1: FROM scratch
STEP 2: LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
--> Using cache 26a8e91ab2e4f3354de988b8939b27aaeb178ca953f3de3216fc819239e3f191
--> 26a8e91ab2e
STEP 3: LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
--> Using cache ef3b2a97d47e6e3248dcc4d1502700bec2c42b3de3717e038d9157a341ec9a69
--> ef3b2a97d47
STEP 4: LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
--> Using cache ee9235ef331c45523adc4a9656f90324cef507779881a1ce87c2225a36502a21
--> ee9235ef331
STEP 5: LABEL operators.operatorframework.io.bundle.package.v1=file-integrity-operator
--> Using cache 28b04557efafe26b93f75483cf8eddb5f003424a3446c09433d1c9441172e3dc
--> 28b04557efa
STEP 6: LABEL operators.operatorframework.io.bundle.channels.v1=alpha
--> Using cache 733a2fce8e3ed670ab012670bbaeec7e8861783d1856d08cafad05050b446a2c
--> 733a2fce8e3
STEP 7: LABEL operators.operatorframework.io.bundle.channel.default.v1=alpha
--> Using cache 1f8b0c2a625387e11069fbb747f91d69bd518d6b3accf076cefc38fab52ad139
--> 1f8b0c2a625
STEP 8: COPY deploy/olm-catalog/file-integrity-operator/manifests /manifests/
--> 6bc70fe4f73
STEP 9: COPY deploy/olm-catalog/file-integrity-operator/metadata /metadata/
STEP 10: COMMIT quay.io/file-integrity-operator/file-integrity-operator-bundle:latest
--> 49577f6b02a
49577f6b02af87aedd22291190b767c3b2bf086df4eba6f05bd3ff6c13548bb0
IMAGE_FROM_CI variable missing. We're in local enviornment.
Temporarily exposing the default route to the image registry
config.imageregistry.operator.openshift.io/cluster patched
Pushing image quay.io/file-integrity-operator/file-integrity-operator:latest to the image registry
IMAGE_REGISTRY_HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'); \
podman login --tls-verify=false -u kubeadmin -p sha256~3OuYzgXdHhvRFfUyy60L6iMq6mz-LKGeglE0Ms_QA_I ${IMAGE_REGISTRY_HOST}; \
podman push --tls-verify=false quay.io/file-integrity-operator/file-integrity-operator:latest ${IMAGE_REGISTRY_HOST}/openshift-file-integrity/file-integrity-operator:latest
Login Succeeded!
Getting image source signatures
Copying blob c6fac5b8005e done
Copying blob 8a9c07290549 done
Copying blob dcfa66c5c215 done
Copying blob 8552877c830f done
Copying blob 0bdbb0192544 done
Copying config ae655d0119 done
Writing manifest to image destination
Storing signatures
Removing the route from the image registry
config.imageregistry.operator.openshift.io/cluster patched
customresourcedefinition.apiextensions.k8s.io/fileintegrities.fileintegrity.openshift.io created
customresourcedefinition.apiextensions.k8s.io/fileintegritynodestatuses.fileintegrity.openshift.io created
namespace/openshift-file-integrity unchanged
deployment.apps/file-integrity-operator created
role.rbac.authorization.k8s.io/file-integrity-operator created
role.rbac.authorization.k8s.io/file-integrity-daemon created
clusterrole.rbac.authorization.k8s.io/file-integrity-operator created
rolebinding.rbac.authorization.k8s.io/file-integrity-operator created
rolebinding.rbac.authorization.k8s.io/file-integrity-daemon created
clusterrolebinding.rbac.authorization.k8s.io/file-integrity-operator created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
serviceaccount/file-integrity-operator created
serviceaccount/file-integrity-daemon created
clusterrole.rbac.authorization.k8s.io/file-integrity-operator-metrics created
clusterrolebinding.rbac.authorization.k8s.io/file-integrity-operator-metrics created

[pdhamdhe@prashant-carbonX1 file-integrity-operator]$ oc project openshift-file-integrity
Now using project "openshift-file-integrity" on server "https://api.pdhamdhe12410.qe.devcluster.openshift.com:6443".

$ oc get pods
NAME                                      READY   STATUS    RESTARTS      AGE
file-integrity-operator-9bff4c47f-llqg6   1/1     Running   1 (11m ago)   11m

$ oc apply -f - <<EOF
> apiVersion: fileintegrity.openshift.io/v1alpha1
> kind: FileIntegrity
> metadata:
>   name: example-fileintegrity
>   namespace: openshift-file-integrity
> spec:
>   debug: true
>   config:
>     gracePeriod: 15
> EOF
fileintegrity.fileintegrity.openshift.io/example-fileintegrity created

$ oc get all
NAME                                          READY   STATUS    RESTARTS      AGE
pod/aide-example-fileintegrity-7gccq          1/1     Running   0             92s
pod/aide-example-fileintegrity-9rkx2          1/1     Running   0             92s
pod/aide-example-fileintegrity-9z828          1/1     Running   0             92s
pod/aide-example-fileintegrity-qjfrn          1/1     Running   0             92s
pod/aide-example-fileintegrity-vlwbc          1/1     Running   0             92s
pod/file-integrity-operator-9bff4c47f-llqg6   1/1     Running   1 (13m ago)   13m

NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/metrics   ClusterIP   172.30.202.97   <none>        8383/TCP,8686/TCP,8585/TCP   13m

NAME                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/aide-example-fileintegrity   5         5         5       5            5           <none>          93s

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/file-integrity-operator   1/1     1            1           14m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/file-integrity-operator-9bff4c47f   1         1         1       14m

NAME                                                     IMAGE REPOSITORY                                                                                    TAGS     UPDATED
imagestream.image.openshift.io/file-integrity-operator   image-registry.openshift-image-registry.svc:5000/openshift-file-integrity/file-integrity-operator   latest   14 minutes ago

$ oc get fileintegritynodestatuses
NAME                                                               NODE                                         STATUS
example-fileintegrity-ip-10-0-155-64.us-east-2.compute.internal    ip-10-0-155-64.us-east-2.compute.internal    Succeeded
example-fileintegrity-ip-10-0-159-138.us-east-2.compute.internal   ip-10-0-159-138.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-186-158.us-east-2.compute.internal   ip-10-0-186-158.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-188-93.us-east-2.compute.internal    ip-10-0-188-93.us-east-2.compute.internal    Succeeded
example-fileintegrity-ip-10-0-223-175.us-east-2.compute.internal   ip-10-0-223-175.us-east-2.compute.internal   Succeeded

$ oc get nodes
NAME                                         STATUS   ROLES          AGE   VERSION
ip-10-0-155-64.us-east-2.compute.internal    Ready    master         11h   v1.22.1+6859754
ip-10-0-159-138.us-east-2.compute.internal   Ready    worker,wscan   11h   v1.22.1+6859754
ip-10-0-186-158.us-east-2.compute.internal   Ready    master         11h   v1.22.1+6859754
ip-10-0-188-93.us-east-2.compute.internal    Ready    worker,wscan   11h   v1.22.1+6859754
ip-10-0-223-175.us-east-2.compute.internal   Ready    master         11h   v1.22.1+6859754

$ oc annotate fileintegrity example-fileintegrity file-integrity.openshift.io/re-init=
fileintegrity.fileintegrity.openshift.io/example-fileintegrity annotated

$ oc debug node/ip-10-0-155-64.us-east-2.compute.internal -- chroot /host ls -ltr /run | grep aide.reinit
Starting pod/ip-10-0-155-64us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
-rw-r--r--. 1 root root 0 Jan 12 14:31 aide.reinit   <<----------

$ oc debug node/ip-10-0-155-64.us-east-2.compute.internal -- chroot /host ls -ltr /etc | grep aide.reinit
Starting pod/ip-10-0-155-64us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...

$ oc debug node/ip-10-0-155-64.us-east-2.compute.internal -- chroot /host ls -ltr /etc/kubernetes | grep aide.reinit
Starting pod/ip-10-0-155-64us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...

$ oc get fileintegritynodestatuses
NAME                                                               NODE                                         STATUS
example-fileintegrity-ip-10-0-155-64.us-east-2.compute.internal    ip-10-0-155-64.us-east-2.compute.internal    Succeeded
example-fileintegrity-ip-10-0-159-138.us-east-2.compute.internal   ip-10-0-159-138.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-186-158.us-east-2.compute.internal   ip-10-0-186-158.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-188-93.us-east-2.compute.internal    ip-10-0-188-93.us-east-2.compute.internal    Succeeded
example-fileintegrity-ip-10-0-223-175.us-east-2.compute.internal   ip-10-0-223-175.us-east-2.compute.internal   Succeeded
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (file-integrity-operator bug fix and/or enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0142
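The on-node checks above (`oc debug node/<node> -- chroot /host ls ...`) require a live cluster. As a local stand-in, the expected post-fix layout can be sketched like this (a simulation under a temporary directory, not a cluster check; the paths mirror those in the transcript):

```shell
# Simulate the post-fix layout locally: aide.reinit exists under /run,
# and nothing named aide.reinit remains under /etc/kubernetes.
tmp=$(mktemp -d)
mkdir -p "$tmp/run" "$tmp/etc/kubernetes"
touch "$tmp/run/aide.reinit"   # the daemon now creates the marker here

in_run=$([ -e "$tmp/run/aide.reinit" ] && echo yes || echo no)
in_etc=$([ -e "$tmp/etc/kubernetes/aide.reinit" ] && echo yes || echo no)
echo "under /run: $in_run, under /etc/kubernetes: $in_etc"
# prints: under /run: yes, under /etc/kubernetes: no
rm -rf "$tmp"
```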