Bug 1926033
| Summary: | [OCP v46] Please report fileintegritynodestatus (active/failed/etc) in column when running `oc get fileintegritynodestatus` | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Prashant Dhamdhere <pdhamdhe> |
| Component: | File Integrity Operator | Assignee: | Matt Rogers <mrogers> |
| Status: | CLOSED ERRATA | QA Contact: | xiyuan |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.6.z | CC: | akaris, jhrozek, josorior, mrogers, pdhamdhe, xiyuan |
| Target Milestone: | --- | | |
| Target Release: | 4.6.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1921692 | Environment: | |
| Last Closed: | 2021-02-16 09:18:42 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1921692 | | |
| Bug Blocks: | | | |
Description
Prashant Dhamdhere
2021-02-08 04:46:13 UTC
[ Bug Verification ]
Looks good to me. The fileintegritynodestatus object now reports the latest status of an AIDE
run and exposes it as Succeeded or Failed in the STATUS column.
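With the status surfaced as a plain column, failures become scriptable without inspecting each object. A minimal sketch, filtering the third (STATUS) column with awk; the rows are inlined from the verification output below, since the `oc` form (shown as a comment) needs a live cluster:

```shell
# Sketch: print the node name for every fileintegritynodestatus row whose
# STATUS column reads "Failed". On a live cluster this would be:
#   oc get fileintegritynodestatuses --no-headers | awk '$3 == "Failed" {print $2}'
# The two sample rows are copied from the verification output in this report.
awk '$3 == "Failed" {print $2}' <<'EOF'
example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed
EOF
```

This prints only `ip-10-0-194-66.us-east-2.compute.internal`, the node that was deliberately tampered with during verification.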
Verified on:
4.6.0-0.nightly-2021-01-30-211400
file-integrity-operator.v0.1.10
$ oc get csv
NAME DISPLAY VERSION REPLACES PHASE
elasticsearch-operator.4.6.0-202101300140.p0 OpenShift Elasticsearch Operator 4.6.0-202101300140.p0 Succeeded
file-integrity-operator.v0.1.10 File Integrity Operator 0.1.10 Succeeded
$ oc get pod
NAME READY STATUS RESTARTS AGE
file-integrity-operator-54fbb9f57d-gqj7g 1/1 Running 0 77s
$ oc get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-134-186.us-east-2.compute.internal Ready master 9h v1.19.0+e49167a
ip-10-0-150-230.us-east-2.compute.internal Ready worker 9h v1.19.0+e49167a
ip-10-0-169-137.us-east-2.compute.internal Ready master 9h v1.19.0+e49167a
ip-10-0-180-200.us-east-2.compute.internal Ready worker 9h v1.19.0+e49167a
ip-10-0-194-66.us-east-2.compute.internal Ready worker,wscan 9h v1.19.0+e49167a
ip-10-0-222-188.us-east-2.compute.internal Ready master 9h v1.19.0+e49167a
$ oc debug node/ip-10-0-194-66.us-east-2.compute.internal
Creating debug namespace/openshift-debug-node-96kpk ...
Starting pod/ip-10-0-194-66us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.194.66
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# touch /root/test
sh-4.4# exit
sh-4.4# exit
Removing debug pod ...
Removing debug namespace/openshift-debug-node-96kpk ...
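The `touch /root/test` above deliberately changes the node's filesystem so that the next AIDE run reports Failed. AIDE itself compares a database of stored file attributes and checksums against the current state; a toy shell illustration of that principle (not AIDE's actual mechanism) using `sha256sum` on a temporary file:

```shell
# Toy illustration (not AIDE itself): detect a file change by comparing a
# checksum taken before and after a modification, the same principle AIDE
# applies to its database of file attributes and hashes.
tmp=$(mktemp)
echo baseline > "$tmp"
before=$(sha256sum "$tmp" | awk '{print $1}')
echo tampered >> "$tmp"   # analogous to `touch /root/test` on the node
after=$(sha256sum "$tmp" | awk '{print $1}')
if [ "$before" != "$after" ]; then
  echo "integrity check: Failed"
else
  echo "integrity check: Succeeded"
fi
rm -f "$tmp"
```

Here the modified file yields a different checksum, so the sketch prints `integrity check: Failed`, mirroring the per-node status the operator reports.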
$ oc apply -f - <<EOF
> apiVersion: fileintegrity.openshift.io/v1alpha1
> kind: FileIntegrity
> metadata:
> name: example-fileintegrity
> namespace: openshift-file-integrity
> spec:
> # Change to debug: true to enable more verbose logging from the logcollector
> # container in the aide pods
> debug: false
> config:
> gracePeriod: 15
> EOF
fileintegrity.fileintegrity.openshift.io/example-fileintegrity created
$ oc get pods -w
NAME READY STATUS RESTARTS AGE
aide-ds-example-fileintegrity-5rkjr 1/1 Running 0 17s
aide-ds-example-fileintegrity-d9xqk 1/1 Running 0 17s
aide-ds-example-fileintegrity-dfs79 1/1 Running 0 17s
aide-ds-example-fileintegrity-q5kp4 1/1 Running 0 17s
aide-ds-example-fileintegrity-sdl8g 1/1 Running 0 17s
aide-ds-example-fileintegrity-w85hn 1/1 Running 0 17s
file-integrity-operator-54fbb9f57d-gqj7g 1/1 Running 0 14m
$ oc get fileintegritynodestatuses
NAME NODE STATUS
example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed
example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded
$ oc get fileintegritynodestatuses -w
NAME NODE STATUS
example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed
example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed
example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded
example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded
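The watch output shows each node re-reporting on every scan interval, so a quick way to read the steady state is to count rows per STATUS value. A small sketch over placeholder rows (on a cluster, the heredoc would be replaced by an `oc get fileintegritynodestatuses --no-headers` pipe, shown as a comment):

```shell
# Sketch: summarize fileintegritynodestatus rows as a count per STATUS value.
# Assumed live-cluster form:
#   oc get fileintegritynodestatuses --no-headers \
#     | awk '{c[$3]++} END {for (s in c) print s, c[s]}' | sort
# The four rows below are placeholders standing in for real status rows.
awk '{c[$3]++} END {for (s in c) print s, c[s]}' <<'EOF' | sort
status-a node-a Succeeded
status-b node-b Succeeded
status-c node-c Failed
status-d node-d Succeeded
EOF
```

For these rows the sorted summary is `Failed 1` and `Succeeded 3`, matching the one tampered node among healthy ones seen in the verification above.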
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.6 file-integrity-operator image security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0568