Bug 2010706 - The file-integrity-operator container panics with invalid memory address or nil pointer dereference during upgrade v0.1.16 > v0.1.19
Summary: The file-integrity-operator container panics with invalid memory address or nil pointer dereference during upgrade v0.1.16 > v0.1.19
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: File Integrity Operator
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: 4.10.0
Assignee: Matt Rogers
QA Contact: Prashant Dhamdhere
URL:
Whiteboard:
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2021-10-05 11:46 UTC by Prashant Dhamdhere
Modified: 2021-11-15 11:11 UTC
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-15 11:11:40 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift file-integrity-operator pull 203 0 None open Bug 2010706: Fix nil deref in daemonSet upgrade path 2021-10-05 15:12:19 UTC
Red Hat Product Errata RHBA-2021:4631 0 None None None 2021-11-15 11:11:45 UTC

Description Prashant Dhamdhere 2021-10-05 11:46:39 UTC
Description of problem:

The file-integrity-operator container panics with invalid memory address or nil pointer dereference
during upgrade v0.1.16 > v0.1.19

$ oc logs file-integrity-operator-65b844f87f-7p2qm -c file-integrity-operator |grep panic
E1005 10:38:36.551980       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
panic(0x15670e0, 0x2262050)
	/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/runtime/panic.go:965 +0x1b9
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
panic(0x15670e0, 0x2262050)
	/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/runtime/panic.go:965 +0x1b9

$ oc get csv
NAME                              DISPLAY                   VERSION   REPLACES   PHASE
file-integrity-operator.v0.1.16   File Integrity Operator   0.1.16               Succeeded

$ oc get packagemanifest file-integrity-operator -ojsonpath={.status.channels[0].currentCSV}
file-integrity-operator.v0.1.19

$ oc patch subscriptions file-integrity-operator -p '{"spec":{"source":"file-integrity-operator"}}' --type='merge'
subscription.operators.coreos.com/file-integrity-operator patched

$ oc get subscriptions file-integrity-operator -ojsonpath={.spec} | jq -r
{
  "channel": "release-0.1",
  "name": "file-integrity-operator",
  "source": "file-integrity-operator",
  "sourceNamespace": "openshift-marketplace"
}

$ oc get csv
NAME                              DISPLAY                   VERSION   REPLACES                          PHASE
file-integrity-operator.v0.1.19   File Integrity Operator   0.1.19    file-integrity-operator.v0.1.16   Installing

$ oc get pods
NAME                                       READY   STATUS             RESTARTS        AGE
aide-ds-example-fileintegrity-58msm        1/1     Running            4               5d2h
aide-ds-example-fileintegrity-5cj89        1/1     Running            3               5d2h
aide-ds-example-fileintegrity-5fvn7        1/1     Running            2               5d2h
aide-ds-example-fileintegrity-6jbnf        1/1     Running            2               5d2h
aide-ds-example-fileintegrity-dnc5x        1/1     Running            4               5d2h
aide-ds-example-fileintegrity-jd447        1/1     Running            2               5d2h
aide-ini-example-fileintegrity-8rb4t       1/1     Running            0               44m
aide-ini-example-fileintegrity-9db6g       1/1     Running            0               44m
aide-ini-example-fileintegrity-blc7w       1/1     Running            0               44m
aide-ini-example-fileintegrity-ngxlg       1/1     Running            0               44m
aide-ini-example-fileintegrity-t66jh       1/1     Running            0               44m
aide-ini-example-fileintegrity-zjbh4       1/1     Running            0               44m
file-integrity-operator-65b844f87f-7p2qm   0/1     CrashLoopBackOff   12 (5m2s ago)   45m   <<----


Version-Release number of selected component (if applicable):
OCP 4.8.13-x86_64 + file-integrity-operator.v0.1.19

How reproducible:
Always

Steps to Reproduce:
1. Install file-integrity-operator.v0.1.16 through redhat-operators
2. Create a CatalogSource using the latest available version index image

$ oc create -f - <<EOF
> apiVersion: operators.coreos.com/v1alpha1
> kind: CatalogSource
> metadata:
>   name: file-integrity-operator
>   namespace: openshift-marketplace
> spec:
>   displayName: openshift-file-integrity-operator
>   publisher: Red Hat
>   sourceType: grpc
>   image: quay.io/openshift-qe-optional-operators/file-integrity-operator-index-0.1:latest
> EOF
catalogsource.operators.coreos.com/file-integrity-operator created

3. Verify the latest operator version through packagemanifest

$ oc get catsrc file-integrity-operator -nopenshift-marketplace
$ oc get packagemanifest file-integrity-operator -ojsonpath={.status.channels[0].currentCSV}

4. Then patch the subscription with the above CatalogSource

$ oc patch subscriptions file-integrity-operator -p '{"spec":{"source":"file-integrity-operator"}}' --type='merge'

5. Monitor the file-integrity-operator InstallPlan, ClusterServiceVersion, and pod

$ oc get ip
$ oc get csv
$ oc get pod

Actual results:
The file-integrity-operator container panics with invalid memory address or nil pointer dereference during upgrade v0.1.16 > v0.1.19

Expected results:
The file-integrity-operator container should not panic with an invalid memory address or nil pointer dereference during the upgrade.

Additional info:

$ oc logs file-integrity-operator-65b844f87f-7p2qm -c file-integrity-operator |tail -75
{"level":"info","ts":1633430316.5514717,"logger":"controller","msg":"Starting Controller","controller":"status-controller"}
{"level":"info","ts":1633430316.5515082,"logger":"controller","msg":"Starting workers","controller":"status-controller","worker count":1}
{"level":"info","ts":1633430316.5515225,"logger":"controller_fileintegrity","msg":"reconciling FileIntegrity","Request.Namespace":"openshift-file-integrity","Request.Name":"example-fileintegrity"}
{"level":"info","ts":1633430316.5515792,"logger":"controller_status","msg":"reconciling FileIntegrityStatus","Request.Namespace":"openshift-file-integrity","Request.Name":"example-fileintegrity"}
{"level":"info","ts":1633430316.5517216,"logger":"controller_status","msg":"reconciling FileIntegrityStatus","Request.Namespace":"openshift-file-integrity","Request.Name":"example-fileintegrity"}
{"level":"info","ts":1633430316.5517936,"logger":"controller_status","msg":"reconciling FileIntegrityStatus","Request.Namespace":"openshift-file-integrity","Request.Name":"example-fileintegrity"}
E1005 10:38:36.551980       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 1459 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x15670e0, 0x2262050)
	/go/src/github.com/openshift/file-integrity-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x15670e0, 0x2262050)
	/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/runtime/panic.go:965 +0x1b9
sigs.k8s.io/controller-runtime/pkg/client.(*DeleteOptions).ApplyOptions(0xc0009d57c0, 0xc000673cb0, 0x1, 0x1, 0x0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/client/options.go:240 +0x50
sigs.k8s.io/controller-runtime/pkg/client.(*typedClient).Delete(0xc000b63a70, 0x190d528, 0xc000118008, 0x18e58c0, 0xc0008f8d80, 0xc000673cb0, 0x1, 0x1, 0x10, 0x10)
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/client/typed_client.go:77 +0xdb
sigs.k8s.io/controller-runtime/pkg/client.(*client).Delete(0xc000b63a70, 0x190d528, 0xc000118008, 0x18e58c0, 0xc0008f8d80, 0xc000673cb0, 0x1, 0x1, 0x1, 0xc000673cb0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/client/client.go:137 +0x125
github.com/openshift/file-integrity-operator/pkg/controller/fileintegrity.(*ReconcileFileIntegrity).deleteLegacyDaemonSets(0xc0000f75e0, 0xc0009c0680, 0x170e652, 0x8)
	/go/src/github.com/openshift/file-integrity-operator/pkg/controller/fileintegrity/fileintegrity_controller.go:484 +0x333
github.com/openshift/file-integrity-operator/pkg/controller/fileintegrity.(*ReconcileFileIntegrity).Reconcile(0xc0000f75e0, 0xc000046e70, 0x18, 0xc000046e58, 0x15, 0xc000673ba0, 0xc0005d5a70, 0xc000614d88, 0xc000614d80)
	/go/src/github.com/openshift/file-integrity-operator/pkg/controller/fileintegrity/fileintegrity_controller.go:424 +0xc05
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000c522d0, 0x15b3940, 0xc000834760, 0x0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235 +0x2a9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000c522d0, 0x203000)
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209 +0xb0
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(...)
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000673b60)
	/go/src/github.com/openshift/file-integrity-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000673b60, 0x18dcc00, 0xc0008a35c0, 0xc00099cb01, 0xc0004166c0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000673b60, 0x3b9aca00, 0x0, 0x17c6001, 0xc0004166c0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc000673b60, 0x3b9aca00, 0xc0004166c0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:170 +0x3ba
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1242a90]

goroutine 1459 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109
panic(0x15670e0, 0x2262050)
	/opt/rh/go-toolset-1.16/root/usr/lib/go-toolset-1.16-golang/src/runtime/panic.go:965 +0x1b9
sigs.k8s.io/controller-runtime/pkg/client.(*DeleteOptions).ApplyOptions(0xc0009d57c0, 0xc000673cb0, 0x1, 0x1, 0x0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/client/options.go:240 +0x50
sigs.k8s.io/controller-runtime/pkg/client.(*typedClient).Delete(0xc000b63a70, 0x190d528, 0xc000118008, 0x18e58c0, 0xc0008f8d80, 0xc000673cb0, 0x1, 0x1, 0x10, 0x10)
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/client/typed_client.go:77 +0xdb
sigs.k8s.io/controller-runtime/pkg/client.(*client).Delete(0xc000b63a70, 0x190d528, 0xc000118008, 0x18e58c0, 0xc0008f8d80, 0xc000673cb0, 0x1, 0x1, 0x1, 0xc000673cb0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/client/client.go:137 +0x125
github.com/openshift/file-integrity-operator/pkg/controller/fileintegrity.(*ReconcileFileIntegrity).deleteLegacyDaemonSets(0xc0000f75e0, 0xc0009c0680, 0x170e652, 0x8)
	/go/src/github.com/openshift/file-integrity-operator/pkg/controller/fileintegrity/fileintegrity_controller.go:484 +0x333
github.com/openshift/file-integrity-operator/pkg/controller/fileintegrity.(*ReconcileFileIntegrity).Reconcile(0xc0000f75e0, 0xc000046e70, 0x18, 0xc000046e58, 0x15, 0xc000673ba0, 0xc0005d5a70, 0xc000614d88, 0xc000614d80)
	/go/src/github.com/openshift/file-integrity-operator/pkg/controller/fileintegrity/fileintegrity_controller.go:424 +0xc05
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000c522d0, 0x15b3940, 0xc000834760, 0x0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235 +0x2a9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000c522d0, 0x203000)
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209 +0xb0
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(...)
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000673b60)
	/go/src/github.com/openshift/file-integrity-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000673b60, 0x18dcc00, 0xc0008a35c0, 0xc00099cb01, 0xc0004166c0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000673b60, 0x3b9aca00, 0x0, 0x17c6001, 0xc0004166c0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc000673b60, 0x3b9aca00, 0xc0004166c0)
	/go/src/github.com/openshift/file-integrity-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/src/github.com/openshift/file-integrity-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:170 +0x3ba
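
The trace shows the panic surfacing inside `(*DeleteOptions).ApplyOptions` when `deleteLegacyDaemonSets` calls `client.Delete` with delete options. A minimal, self-contained sketch (the types below are hypothetical stand-ins mirroring the shape of controller-runtime's option interfaces, not the operator's or library's actual code) shows how a nil entry in a variadic option list produces exactly this class of panic:

```go
package main

import "fmt"

// Hypothetical stand-ins for controller-runtime's delete-option types.
type DeleteOptions struct{ GracePeriodSeconds *int64 }

type DeleteOption interface {
	ApplyToDelete(o *DeleteOptions)
}

// GracePeriodSeconds is a well-formed option for comparison.
type GracePeriodSeconds int64

func (g GracePeriodSeconds) ApplyToDelete(o *DeleteOptions) {
	s := int64(g)
	o.GracePeriodSeconds = &s
}

// applyOptions mirrors the loop in the stack trace: invoking a method
// through a nil interface entry is a nil pointer dereference at runtime.
// recover() is used here only to report whether the loop panicked.
func applyOptions(o *DeleteOptions, opts ...DeleteOption) (panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	for _, opt := range opts {
		opt.ApplyToDelete(o)
	}
	return false
}

func main() {
	var o DeleteOptions
	fmt.Println("nil option panics:", applyOptions(&o, nil))                      // true
	fmt.Println("valid option panics:", applyOptions(&o, GracePeriodSeconds(30))) // false
}
```

In the real operator the panic is unrecovered, so the container crashes and enters CrashLoopBackOff, as seen in the pod listing above.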

Comment 1 Matt Rogers 2021-10-05 14:55:26 UTC
This has a quick fix.
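
One common shape for such a fix (a sketch under the same hypothetical types as the trace above, not the actual content of PR 203) is to guard against nil entries before invoking them, or equivalently to only append an option to the slice once it is known to be non-nil:

```go
package main

import "fmt"

// Hypothetical stand-ins mirroring controller-runtime's option shape.
type DeleteOptions struct{ GracePeriodSeconds *int64 }

type DeleteOption interface {
	ApplyToDelete(o *DeleteOptions)
}

// applyOptionsSafe skips nil entries instead of dereferencing them.
// Note: this catches untyped nil interface values; a typed-nil pointer
// stored in the interface would still need a check at the call site.
func applyOptionsSafe(o *DeleteOptions, opts ...DeleteOption) {
	for _, opt := range opts {
		if opt == nil {
			continue // guard: a nil entry would otherwise panic
		}
		opt.ApplyToDelete(o)
	}
}

func main() {
	var o DeleteOptions
	applyOptionsSafe(&o, nil) // no panic with the guard in place
	fmt.Println("ok")
}
```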

Comment 4 Prashant Dhamdhere 2021-10-19 05:41:49 UTC
[Bug_Verification]

Looks good. The file-integrity-operator container no longer panics with an invalid memory address,
and the scan completes successfully.


Verified On:

4.9.0-x86_64 + file-integrity-operator.v0.1.20
https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=1762715


$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0     True        False         91m     Cluster version is 4.9.0

$ oc project openshift-file-integrity
Now using project "openshift-file-integrity" on server "https://api.pdhamdhe-19.qe.devcluster.openshift.com:6443".

$ oc get csv
NAME                              DISPLAY                            VERSION    REPLACES   PHASE
elasticsearch-operator.5.2.2-33   OpenShift Elasticsearch Operator   5.2.2-33              Succeeded
file-integrity-operator.v0.1.16   File Integrity Operator            0.1.16                Succeeded

$ oc get pods
NAME                                       READY   STATUS    RESTARTS   AGE
file-integrity-operator-758c5cb4b7-wdfgx   1/1     Running   0          104s
 
$ oc create -f - <<EOF  
> apiVersion: operators.coreos.com/v1alpha1
> kind: CatalogSource
> metadata:
>   name: file-integrity-operator
>   namespace: openshift-marketplace
> spec:
>   displayName: openshift-file-integrity-operator
>   publisher: Red Hat
>   sourceType: grpc
>   image: quay.io/openshift-qe-optional-operators/file-integrity-operator-index-0.1:latest
> EOF
catalogsource.operators.coreos.com/file-integrity-operator created

$ oc get pods -nopenshift-marketplace
NAME                                                              READY   STATUS      RESTARTS   AGE
3a96b092adcbc63283408026e6146d64f3fbecf0375b5102042d2a--1-j2rr9   0/1     Completed   0          44m
8023db5d693fe57d2d938248d4780c8607ca1611aa8b73d9df6495--1-922rw   0/1     Completed   0          133m
certified-operators-bvsn5                                         1/1     Running     0          156m
community-operators-gfnhp                                         1/1     Running     0          156m
f6c9dcab52c9c936c7d0fb0b62840d609ce63ea2210c4473e68fab--1-tz5ww   0/1     Completed   0          133m
file-integrity-operator-fhnrx                                     1/1     Running     0          41m
marketplace-operator-6dc6dd9896-qqdx5                             1/1     Running     0          158m
qe-app-registry-wv6s9                                             1/1     Running     0          133m
redhat-marketplace-xtm8q                                          1/1     Running     0          156m
redhat-operators-2brvg                                            1/1     Running     0          156m

$ oc create -f - <<EOF
> apiVersion: fileintegrity.openshift.io/v1alpha1
> kind: FileIntegrity
> metadata:
>   name: example-fileintegrity
>   namespace: openshift-file-integrity
> spec:
>   debug: false
>   config: 
>     gracePeriod: 15
> EOF
fileintegrity.fileintegrity.openshift.io/example-fileintegrity created


$ oc get pods
NAME                                       READY   STATUS    RESTARTS   AGE
aide-ds-example-fileintegrity-8rrk6        1/1     Running   0          34s
aide-ds-example-fileintegrity-d8qt8        1/1     Running   0          34s
aide-ds-example-fileintegrity-f79tm        1/1     Running   0          34s
aide-ds-example-fileintegrity-hzjnv        1/1     Running   0          34s
aide-ds-example-fileintegrity-l2jhn        1/1     Running   0          34s
aide-ds-example-fileintegrity-n8mf9        1/1     Running   0          34s
file-integrity-operator-758c5cb4b7-wdfgx   1/1     Running   0          4m55s

$ oc get fileintegritynodestatus
NAME                                                               NODE                                         STATUS
example-fileintegrity-ip-10-0-130-127.us-east-2.compute.internal   ip-10-0-130-127.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-153-16.us-east-2.compute.internal    ip-10-0-153-16.us-east-2.compute.internal    Succeeded
example-fileintegrity-ip-10-0-177-237.us-east-2.compute.internal   ip-10-0-177-237.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-184-30.us-east-2.compute.internal    ip-10-0-184-30.us-east-2.compute.internal    Succeeded
example-fileintegrity-ip-10-0-216-27.us-east-2.compute.internal    ip-10-0-216-27.us-east-2.compute.internal    Succeeded
example-fileintegrity-ip-10-0-219-91.us-east-2.compute.internal    ip-10-0-219-91.us-east-2.compute.internal    Succeeded

$ oc get packagemanifest file-integrity-operator -ojsonpath={.status.channels[0].currentCSV}
file-integrity-operator.v0.1.20

$ oc patch subscriptions file-integrity-operator -p '{"spec":{"source":"file-integrity-operator"}}' --type='merge'
subscription.operators.coreos.com/file-integrity-operator patched

$ oc get subscriptions file-integrity-operator -ojsonpath={.spec} | jq -r
{
  "channel": "release-0.1",
  "name": "file-integrity-operator",
  "source": "file-integrity-operator",
  "sourceNamespace": "openshift-marketplace"
}

$ oc get csv -w
NAME                              DISPLAY                            VERSION    REPLACES   PHASE
elasticsearch-operator.5.2.2-33   OpenShift Elasticsearch Operator   5.2.2-33              Succeeded
file-integrity-operator.v0.1.16   File Integrity Operator            0.1.16                Succeeded

$ oc get ip
NAME            CSV                               APPROVAL    APPROVED
install-2z8sb   file-integrity-operator.v0.1.16   Automatic   true
install-l6bdr   file-integrity-operator.v0.1.20   Automatic   true

$ oc get csv -w
NAME                              DISPLAY                            VERSION    REPLACES                          PHASE
elasticsearch-operator.5.2.2-33   OpenShift Elasticsearch Operator   5.2.2-33                                     Succeeded
file-integrity-operator.v0.1.16   File Integrity Operator            0.1.16                                       Replacing
file-integrity-operator.v0.1.20   File Integrity Operator            0.1.20     file-integrity-operator.v0.1.16   Installing
file-integrity-operator.v0.1.20   File Integrity Operator            0.1.20     file-integrity-operator.v0.1.16   Succeeded
file-integrity-operator.v0.1.16   File Integrity Operator            0.1.16                                       Deleting
file-integrity-operator.v0.1.16   File Integrity Operator            0.1.16                                       Deleting

$ oc get csv
NAME                              DISPLAY                            VERSION    REPLACES                          PHASE
elasticsearch-operator.5.2.2-33   OpenShift Elasticsearch Operator   5.2.2-33                                     Succeeded
file-integrity-operator.v0.1.20   File Integrity Operator            0.1.20     file-integrity-operator.v0.1.16   Installing

$ oc get pods -w
NAME                                      READY   STATUS    RESTARTS     AGE
aide-ds-example-fileintegrity-8rrk6       1/1     Running   0            40m
aide-ds-example-fileintegrity-d8qt8       1/1     Running   0            40m
aide-ds-example-fileintegrity-f79tm       1/1     Running   0            40m
aide-ds-example-fileintegrity-hzjnv       1/1     Running   0            40m
aide-ds-example-fileintegrity-l2jhn       1/1     Running   0            40m
aide-ds-example-fileintegrity-n8mf9       1/1     Running   0            40m
file-integrity-operator-f5c454df9-dbld8   1/1     Running   1 (8s ago)   33s
aide-ini-example-fileintegrity-kf828      0/1     Pending   0            0s
aide-ini-example-fileintegrity-kf828      0/1     Pending   0            0s

$ oc get csv
NAME                              DISPLAY                            VERSION    REPLACES                          PHASE
elasticsearch-operator.5.2.2-33   OpenShift Elasticsearch Operator   5.2.2-33                                     Succeeded
file-integrity-operator.v0.1.20   File Integrity Operator            0.1.20     file-integrity-operator.v0.1.16   Succeeded

$ oc get pods -w
NAME                                      READY   STATUS              RESTARTS      AGE
aide-ds-example-fileintegrity-hzjnv       1/1     Terminating         0             41m
aide-ds-example-fileintegrity-l2jhn       1/1     Terminating         0             41m
aide-ds-example-fileintegrity-n8mf9       1/1     Terminating         0             41m
aide-example-fileintegrity-fjtmq          1/1     Running             0             17s
aide-example-fileintegrity-gdjxd          0/1     ContainerCreating   0             17s
aide-example-fileintegrity-jmwmf          0/1     ContainerCreating   0             17s
aide-example-fileintegrity-l7vpf          1/1     Running             0             17s
aide-example-fileintegrity-mnxzn          0/1     ContainerCreating   0             17s
aide-example-fileintegrity-vnmt2          1/1     Running             0             17s
aide-ini-example-fileintegrity-5599x      0/1     Init:0/1            0             18s
aide-ini-example-fileintegrity-dclmk      0/1     Init:0/1            0             18s
aide-ini-example-fileintegrity-fnrpr      0/1     Init:0/1            0             18s
aide-ini-example-fileintegrity-kf828      0/1     Init:0/1            0             18s
aide-ini-example-fileintegrity-v8lld      1/1     Running             0             18s
aide-ini-example-fileintegrity-zc2zx      0/1     Init:0/1            0             18s
file-integrity-operator-f5c454df9-dbld8   1/1     Running             1 (33s ago)   58s
aide-ini-example-fileintegrity-fnrpr      0/1     PodInitializing     0             19s
aide-ds-example-fileintegrity-l2jhn       0/1     Terminating         0             41m
aide-ds-example-fileintegrity-l2jhn       0/1     Terminating         0             41m
aide-ds-example-fileintegrity-l2jhn       0/1     Terminating         0             41m
aide-example-fileintegrity-jmwmf          1/1     Running             0             18s
aide-ini-example-fileintegrity-kf828      0/1     Init:0/1            0             19s
aide-ini-example-fileintegrity-kf828      0/1     PodInitializing     0             20s
aide-ini-example-fileintegrity-fnrpr      1/1     Running             0             21s


$ oc get pods 
NAME                                      READY   STATUS    RESTARTS        AGE
aide-example-fileintegrity-fjtmq          1/1     Running   0               3m43s
aide-example-fileintegrity-gdjxd          1/1     Running   0               3m43s
aide-example-fileintegrity-jmwmf          1/1     Running   0               3m43s
aide-example-fileintegrity-l7vpf          1/1     Running   0               3m43s
aide-example-fileintegrity-mnxzn          1/1     Running   0               3m43s
aide-example-fileintegrity-vnmt2          1/1     Running   0               3m43s
aide-ini-example-fileintegrity-5599x      1/1     Running   0               3m44s
aide-ini-example-fileintegrity-dclmk      1/1     Running   0               3m44s
aide-ini-example-fileintegrity-fnrpr      1/1     Running   0               3m44s
aide-ini-example-fileintegrity-kf828      1/1     Running   0               3m44s
aide-ini-example-fileintegrity-v8lld      1/1     Running   0               3m44s
aide-ini-example-fileintegrity-zc2zx      1/1     Running   0               3m44s
file-integrity-operator-f5c454df9-dbld8   1/1     Running   1 (3m59s ago)   4m24s

$ oc describe pod file-integrity-operator-f5c454df9-dbld8 |grep -A1 "RELATED_IMAGE_OPERATOR"
      RELATED_IMAGE_OPERATOR:   registry.redhat.io/compliance/openshift-file-integrity-rhel8-operator@sha256:3a1d27c689a1283edbd809097e963d734a294a31226d6b984b86b9eb1226e77e
      OPERATOR_CONDITION_NAME:  file-integrity-operator.v0.1.20


$ oc get nodes
NAME                                         STATUS   ROLES    AGE     VERSION
ip-10-0-130-127.us-east-2.compute.internal   Ready    worker   3h4m    v1.22.0-rc.0+894a78b
ip-10-0-153-16.us-east-2.compute.internal    Ready    master   3h10m   v1.22.0-rc.0+894a78b
ip-10-0-177-237.us-east-2.compute.internal   Ready    worker   3h6m    v1.22.0-rc.0+894a78b
ip-10-0-184-30.us-east-2.compute.internal    Ready    master   3h12m   v1.22.0-rc.0+894a78b
ip-10-0-216-27.us-east-2.compute.internal    Ready    master   3h11m   v1.22.0-rc.0+894a78b
ip-10-0-219-91.us-east-2.compute.internal    Ready    worker   3h6m    v1.22.0-rc.0+894a78b

$ oc debug no/ip-10-0-219-91.us-east-2.compute.internal -- chroot /host mkdir -p /root/test
Starting pod/ip-10-0-219-91us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
 
$ oc get fileintegritynodestatus -w
NAME                                                               NODE                                         STATUS
example-fileintegrity-ip-10-0-130-127.us-east-2.compute.internal   ip-10-0-130-127.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-153-16.us-east-2.compute.internal    ip-10-0-153-16.us-east-2.compute.internal    Succeeded
example-fileintegrity-ip-10-0-177-237.us-east-2.compute.internal   ip-10-0-177-237.us-east-2.compute.internal   Succeeded
example-fileintegrity-ip-10-0-184-30.us-east-2.compute.internal    ip-10-0-184-30.us-east-2.compute.internal    Succeeded
example-fileintegrity-ip-10-0-216-27.us-east-2.compute.internal    ip-10-0-216-27.us-east-2.compute.internal    Succeeded
example-fileintegrity-ip-10-0-219-91.us-east-2.compute.internal    ip-10-0-219-91.us-east-2.compute.internal    Failed

$  oc get cm
NAME                                                                          DATA   AGE
aide-example-fileintegrity-ip-10-0-219-91.us-east-2.compute.internal-failed   1      37s
aide-pause                                                                    1      76m
aide-reinit                                                                   1      76m
example-fileintegrity                                                         1      76m
file-integrity-operator-lock                                                  0      35m
kube-root-ca.crt                                                              1      80m
openshift-service-ca.crt                                                      1      80m

$ oc extract cm/aide-example-fileintegrity-ip-10-0-219-91.us-east-2.compute.internal-failed --confirm
integritylog
 
$ cat integritylog 
Start timestamp: 2021-10-19 05:30:53 +0000 (AIDE 0.16)
AIDE found differences between database and filesystem!!

Summary:
  Total number of entries:	35145
  Added entries:		1
  Removed entries:		0
  Changed entries:		0

---------------------------------------------------
Added entries:
---------------------------------------------------

d++++++++++++++++: /hostroot/root/test

---------------------------------------------------
The attributes of the (uncompressed) database(s):
---------------------------------------------------

/hostroot/etc/kubernetes/aide.db.gz
  MD5      : Ohs3dVT1W05ErcBEHOCqHw==
  SHA1     : MPMUr6v2K/Qmu5X8jaJAzYwJJmI=
  RMD160   : g1AyMlBzpzEDvunYtytj4QxBue4=
  TIGER    : qR6Byvqpy/UVGnSWspq2kLn6yjJU+M32
  SHA256   : +i/aZSuz8Zc5S3tiOVl1yhqclGF+5ehn
             wuGqcx/BzfE=
  SHA512   : s25fLVTdvyh6HBSGnYJE7x+gZ8Omdxyf
             A8J1RZvnkeyTYWF9v1X6AtHVcwz9D+T2
             necQZaeWoTmyInNcqR/Rtg==


End timestamp: 2021-10-19 05:31:25 +0000 (run time: 0m 32s)

Comment 6 errata-xmlrpc 2021-11-15 11:11:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (File Integrity Operator version 0.1.21 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4631

