Bug 1921692
| Summary: | Please report fileintegritynodestatus (active / failed / etc.) in column when running `oc get fileintegritynodestatus` | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Andreas Karis <akaris> |
| Component: | File Integrity Operator | Assignee: | Matt Rogers <mrogers> |
| Status: | CLOSED ERRATA | QA Contact: | xiyuan |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.6.z | CC: | jhrozek, josorior, pdhamdhe |
| Target Milestone: | --- | | |
| Target Release: | 4.7.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| | 1926033 (view as bug list) | Environment: | |
| Last Closed: | 2021-02-24 21:18:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1926033 | | |
Description
Andreas Karis
2021-01-28 12:35:58 UTC
Will be fixed upstream with https://github.com/openshift/file-integrity-operator/pull/134

[ Bug Verification ]
Looks good to me. The fileintegritynodestatus object now reports the latest status of an AIDE run and exposes it as Succeeded or Failed in a STATUS column.
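For context, a STATUS column in `oc get` output is typically driven by an additionalPrinterColumns entry on the CRD. A minimal sketch of the relevant fragment, assuming the column is read from the lastResult.condition field (the JSONPath and field names are assumptions for illustration, not copied from PR #134):

$ oc get crd fileintegritynodestatuses.fileintegrity.openshift.io -o yaml
...
spec:
  versions:
  - name: v1alpha1
    additionalPrinterColumns:
    - jsonPath: .nodeName               # node the AIDE result belongs to
      name: Node
      type: string
    - jsonPath: .lastResult.condition   # e.g. Succeeded / Failed / Errored
      name: Status
      type: string
...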
Verified on:
4.6.0-0.nightly-2021-01-30-211400
file-integrity-operator.v0.1.10
$ oc get csv
NAME DISPLAY VERSION REPLACES PHASE
elasticsearch-operator.4.6.0-202101300140.p0 OpenShift Elasticsearch Operator 4.6.0-202101300140.p0 Succeeded
file-integrity-operator.v0.1.10 File Integrity Operator 0.1.10 Succeeded
$ oc get pod
NAME READY STATUS RESTARTS AGE
file-integrity-operator-54fbb9f57d-gqj7g 1/1 Running 0 77s
$ oc get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-134-186.us-east-2.compute.internal Ready master 9h v1.19.0+e49167a
ip-10-0-150-230.us-east-2.compute.internal Ready worker 9h v1.19.0+e49167a
ip-10-0-169-137.us-east-2.compute.internal Ready master 9h v1.19.0+e49167a
ip-10-0-180-200.us-east-2.compute.internal Ready worker 9h v1.19.0+e49167a
ip-10-0-194-66.us-east-2.compute.internal Ready worker,wscan 9h v1.19.0+e49167a
ip-10-0-222-188.us-east-2.compute.internal Ready master 9h v1.19.0+e49167a
$ oc debug node/ip-10-0-194-66.us-east-2.compute.internal
Creating debug namespace/openshift-debug-node-96kpk ...
Starting pod/ip-10-0-194-66us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.194.66
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# touch /root/test
sh-4.4# exit
sh-4.4# exit
Removing debug pod ...
Removing debug namespace/openshift-debug-node-96kpk ...
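Creating a new file under /root is enough to make the next AIDE run diverge from the recorded database, assuming /root falls inside the operator's default AIDE watch list (the usual configuration). The same trigger can also be scripted non-interactively, for example:

$ oc debug node/ip-10-0-194-66.us-east-2.compute.internal -- chroot /host touch /root/test
# Any new, changed, or removed file under a monitored path should flip that
# node's fileintegritynodestatus to Failed on the next scan.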
$ oc apply -f - <<EOF
> apiVersion: fileintegrity.openshift.io/v1alpha1
> kind: FileIntegrity
> metadata:
> name: example-fileintegrity
> namespace: openshift-file-integrity
> spec:
> # Change to debug: true to enable more verbose logging from the logcollector
> # container in the aide pods
> debug: false
> config:
> gracePeriod: 15
> EOF
fileintegrity.fileintegrity.openshift.io/example-fileintegrity created
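Before the per-node statuses become meaningful, the FileIntegrity object itself has to finish initializing the AIDE database. A quick check is to read its phase, assuming the CR exposes status.phase with values such as Initializing/Active (not captured in this transcript):

$ oc get fileintegrity example-fileintegrity -n openshift-file-integrity \
    -o jsonpath='{.status.phase}{"\n"}'
# Expected to print "Active" once the aide-ds-* pods are running.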
$ oc get pods -w
NAME READY STATUS RESTARTS AGE
aide-ds-example-fileintegrity-5rkjr 1/1 Running 0 17s
aide-ds-example-fileintegrity-d9xqk 1/1 Running 0 17s
aide-ds-example-fileintegrity-dfs79 1/1 Running 0 17s
aide-ds-example-fileintegrity-q5kp4 1/1 Running 0 17s
aide-ds-example-fileintegrity-sdl8g 1/1 Running 0 17s
aide-ds-example-fileintegrity-w85hn 1/1 Running 0 17s
file-integrity-operator-54fbb9f57d-gqj7g 1/1 Running 0 14m
$ oc get fileintegritynodestatuses
NAME NODE STATUS
example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed
example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded
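The STATUS column only shows the latest condition; to see why ip-10-0-194-66 failed, the full object can be dumped. The per-result fields mentioned below (file counters, a reference to the AIDE report ConfigMap) are assumed field names rather than output captured during this verification:

$ oc get fileintegritynodestatus \
    example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal -o yaml
# The Failed entry under results/lastResult is expected to carry counters such
# as filesAdded/filesChanged/filesRemoved and (assumed) a resultConfigMapName
# pointing at the ConfigMap that holds the AIDE log for that node.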
$ oc get fileintegritynodestatuses -w
NAME NODE STATUS
example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed
example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed
example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded
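Once the unexpected change has been reviewed, the node can be brought back to Succeeded by re-initializing the AIDE database; the operator documents a re-init annotation for this. Shown here only as a follow-up sketch, not run as part of this verification:

$ oc annotate fileintegrities/example-fileintegrity -n openshift-file-integrity \
    file-integrity.openshift.io/re-init=
# The operator should rebuild the AIDE database and the affected node's
# fileintegritynodestatus should return to Succeeded on the next run.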
Awesome, thanks!

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7 file-integrity-operator image security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0100