Bug 1921692 - Please report fileintegritynodestatus (active / failed / etc.) in a column when running `oc get fileintegritynodestatus`
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: File Integrity Operator
Version: 4.6.z
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.7.0
Assignee: Matt Rogers
QA Contact: xiyuan
Blocks: 1926033
 
Reported: 2021-01-28 12:35 UTC by Andreas Karis
Modified: 2024-03-25 18:02 UTC
3 users

Doc Type: If docs needed, set a value
Clones: 1926033
Last Closed: 2021-02-24 21:18:51 UTC




Links
- GitHub: openshift/file-integrity-operator pull 134 (closed) — Bug 1921692: Add status columns to fileintegritynodestatus output (last updated 2021-02-18 02:29:19 UTC)
- Red Hat Product Errata: RHSA-2021:0100 (last updated 2021-02-24 21:19:19 UTC)

Description Andreas Karis 2021-01-28 12:35:58 UTC
Description of problem:
Please report the latest status of an AIDE run in a status field and expose that status (such as Failed) in the output of `oc get fileintegritynodestatus`.

It's not a complicated change in your Operator's code, but it will make admins' lives so much better :-)

Thanks!

Andreas


Comment 1 Matt Rogers 2021-01-28 15:46:30 UTC
Will be fixed upstream with https://github.com/openshift/file-integrity-operator/pull/134
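
For context, status columns in `oc get` output for a custom resource typically come from an `additionalPrinterColumns` entry in the CRD manifest. A minimal sketch of what such an entry could look like — the `jsonPath` values below are illustrative assumptions, not taken from the linked PR, which contains the actual change:

```yaml
# Hypothetical excerpt of the FileIntegrityNodeStatus CRD showing how
# NODE and STATUS columns can be surfaced in `oc get` output.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: fileintegritynodestatuses.fileintegrity.openshift.io
spec:
  group: fileintegrity.openshift.io
  names:
    kind: FileIntegrityNodeStatus
    plural: fileintegritynodestatuses
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      additionalPrinterColumns:
        - name: Node
          type: string
          jsonPath: .nodeName              # assumed field holding the node name
        - name: Status
          type: string
          jsonPath: .lastResult.condition  # assumed field holding Succeeded/Failed
```

With columns declared this way, the API server renders them for plain `oc get fileintegritynodestatuses` with no client-side changes.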

Comment 5 Prashant Dhamdhere 2021-02-03 14:56:17 UTC
[ Bug Verification ]

Looks good to me. The fileintegritynodestatus object now reports the latest status of an AIDE
run and exposes it as Failed or Succeeded in the STATUS column.


Verified on:
4.6.0-0.nightly-2021-01-30-211400
file-integrity-operator.v0.1.10


$ oc get csv
NAME                                           DISPLAY                            VERSION                 REPLACES   PHASE
elasticsearch-operator.4.6.0-202101300140.p0   OpenShift Elasticsearch Operator   4.6.0-202101300140.p0              Succeeded
file-integrity-operator.v0.1.10                File Integrity Operator            0.1.10                             Succeeded

$ oc get pod
NAME                                       READY   STATUS    RESTARTS   AGE
file-integrity-operator-54fbb9f57d-gqj7g   1/1     Running   0          77s

$ oc get nodes
NAME                                         STATUS   ROLES          AGE   VERSION
ip-10-0-134-186.us-east-2.compute.internal   Ready    master         9h    v1.19.0+e49167a
ip-10-0-150-230.us-east-2.compute.internal   Ready    worker         9h    v1.19.0+e49167a
ip-10-0-169-137.us-east-2.compute.internal   Ready    master         9h    v1.19.0+e49167a
ip-10-0-180-200.us-east-2.compute.internal   Ready    worker         9h    v1.19.0+e49167a
ip-10-0-194-66.us-east-2.compute.internal    Ready    worker,wscan   9h    v1.19.0+e49167a
ip-10-0-222-188.us-east-2.compute.internal   Ready    master         9h    v1.19.0+e49167a

$ oc debug node/ip-10-0-194-66.us-east-2.compute.internal
Creating debug namespace/openshift-debug-node-96kpk ...
Starting pod/ip-10-0-194-66us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.194.66
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# touch /root/test
sh-4.4# exit
sh-4.4# exit

Removing debug pod ...
Removing debug namespace/openshift-debug-node-96kpk ...

$ oc apply -f - <<EOF
> apiVersion: fileintegrity.openshift.io/v1alpha1
> kind: FileIntegrity
> metadata:
>   name: example-fileintegrity
>   namespace: openshift-file-integrity
> spec:
>   # Change to debug: true to enable more verbose logging from the logcollector
>   # container in the aide pods
>   debug: false
>   config: 
>     gracePeriod: 15
> EOF
fileintegrity.fileintegrity.openshift.io/example-fileintegrity created


$ oc get pods -w
NAME                                       READY   STATUS    RESTARTS   AGE
aide-ds-example-fileintegrity-5rkjr        1/1     Running   0          17s
aide-ds-example-fileintegrity-d9xqk        1/1     Running   0          17s
aide-ds-example-fileintegrity-dfs79        1/1     Running   0          17s
aide-ds-example-fileintegrity-q5kp4        1/1     Running   0          17s
aide-ds-example-fileintegrity-sdl8g        1/1     Running   0          17s
aide-ds-example-fileintegrity-w85hn        1/1     Running   0          17s
file-integrity-operator-54fbb9f57d-gqj7g   1/1     Running   0          14m

$ oc get fileintegritynodestatuses 
NAME                                                               NODE                                         STATUS
example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal   ip-10-0-134-186.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal   ip-10-0-150-230.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal   ip-10-0-169-137.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal   ip-10-0-180-200.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal    ip-10-0-194-66.us-east-2.compute.internal    Failed
example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal   ip-10-0-222-188.us-east-2.compute.internal   Succeeded

$ oc get fileintegritynodestatuses -w
NAME                                                               NODE                                         STATUS
example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal   ip-10-0-134-186.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal   ip-10-0-150-230.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal   ip-10-0-169-137.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal   ip-10-0-180-200.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal    ip-10-0-194-66.us-east-2.compute.internal    Failed
example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal   ip-10-0-222-188.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal   ip-10-0-134-186.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal   ip-10-0-222-188.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal    ip-10-0-194-66.us-east-2.compute.internal    Failed
example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal   ip-10-0-150-230.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal   ip-10-0-180-200.us-east-2.compute.internal   Succeeded
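
With the STATUS column in place, an admin can script against it. A small sketch that prints the node name for every Failed row, using the captured table above as stand-in data (in a live cluster, `rows` would come from `oc get fileintegritynodestatuses --no-headers` instead):

```shell
# $rows stands in for live output of:
#   rows=$(oc get fileintegritynodestatuses --no-headers)
rows='example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal   ip-10-0-134-186.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal    ip-10-0-194-66.us-east-2.compute.internal    Failed
example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal   ip-10-0-222-188.us-east-2.compute.internal   Succeeded'

# Column 2 is NODE, column 3 is STATUS; print nodes whose status is Failed.
printf '%s\n' "$rows" | awk '$3 == "Failed" {print $2}'
# → ip-10-0-194-66.us-east-2.compute.internal
```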

Comment 6 Andreas Karis 2021-02-03 16:13:42 UTC
Awesome, thanks!

Comment 10 errata-xmlrpc 2021-02-24 21:18:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7 file-integrity-operator image security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0100

