Created attachment 1754417 [details]
screencast with Activity Pane

Description of problem:
==========================
NooBaa and RGW related events from the openshift-storage namespace are also shown in the Activity Pane of the Persistent Storage dashboard. Ideally this dashboard should show only non-NooBaa and non-RGW events.

Version-Release number of selected component (if applicable):
================================================================
OCP = 4.7.0-0.nightly-2021-01-31-031653
OCS = ocs-operator.v4.7.0-241.ci

How reproducible:
=====================
Always

Steps to Reproduce:
===========================
1. Install OCS 4.7
2. Check the Object Service dashboard; the events listed there relate to NooBaa and RGW
3. Check the Activity Pane in the Persistent Storage dashboard

>> Working as expected - Object Service dashboard
PS: The Object Service dashboard lists only NooBaa and RGW related events in its Activity Pane (expected). Even if we respin, say, the MGR pod, no MGR-related events are shown in the Object Service dashboard (expected).

Actual results:
======================
The Persistent Storage dashboard shows NooBaa and RGW related events too.

Expected results:
======================
The Persistent Storage dashboard should not show NooBaa and RGW related events.
Additional info:
====================
All these events are shown in the Persistent Storage dashboard too
-------------------------------------------------------

$ oc get events |grep rgw
2m19s  Normal   SuccessfulCreate   replicaset/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8b4f   Created pod: rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8hpt8v
2m18s  Normal   Scheduled          pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8hpt8v        Successfully assigned openshift-storage/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8hpt8v to compute-1
2m17s  Normal   AddedInterface     pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8hpt8v        Add eth0 [10.128.2.16/23]
2m17s  Normal   Pulled             pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8hpt8v        Container image "quay.io/rhceph-dev/rhceph@sha256:f6089f1cddd42ab1e9624227c80b68fc441cf4acaa446574f388df447b444fd7" already present on machine
2m17s  Normal   Created            pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8hpt8v        Created container chown-container-data-dir
2m17s  Normal   Started            pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8hpt8v        Started container chown-container-data-dir
2m16s  Normal   Pulled             pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8hpt8v        Container image "quay.io/rhceph-dev/rhceph@sha256:f6089f1cddd42ab1e9624227c80b68fc441cf4acaa446574f388df447b444fd7" already present on machine
2m16s  Normal   Created            pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8hpt8v        Created container rgw
2m16s  Normal   Started            pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8hpt8v        Started container rgw
2m19s  Normal   Killing            pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8r7rv2        Stopping container rgw
2m15s  Warning  FailedToUpdateEndpointSlices  service/rook-ceph-rgw-ocs-storagecluster-cephobjectstore        Error updating Endpoint Slices for Service openshift-storage/rook-ceph-rgw-ocs-storagecluster-cephobjectstore: failed to update rook-ceph-rgw-ocs-storagecluster-cephobjectstore-2lzg8 EndpointSlice for Service openshift-storage/rook-ceph-rgw-ocs-storagecluster-cephobjectstore: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "rook-ceph-rgw-ocs-storagecluster-cephobjectstore-2lzg8": the object has been modified; please apply your changes to the latest version and try again

[nberry@localhost logs]$ oc get events |grep noobaa
4h18m  Normal   BackingStorePhaseReady     backingstore/noobaa-default-backing-store   Backing store mode: OPTIMAL
139m   Warning  BackingStorePhaseRejected  backingstore/noobaa-default-backing-store   Backing store mode: ALL_NODES_OFFLINE
77m    Normal   BackingStorePhaseReady     backingstore/noobaa-default-backing-store   Backing store mode: OPTIMAL
72m    Warning  BackingStorePhaseRejected  backingstore/noobaa-default-backing-store   Backing store mode: ALL_NODES_OFFLINE
61m    Warning  BackingStorePhaseRejected  backingstore/noobaa-default-backing-store   Backing store mode: ALL_NODES_OFFLINE
61m    Normal   BackingStorePhaseReady     backingstore/noobaa-default-backing-store   Backing store mode: OPTIMAL
4m42s  Normal   BackingStorePhaseReady     backingstore/noobaa-default-backing-store   Backing store mode: OPTIMAL
5m6s   Warning  BackingStorePhaseRejected  backingstore/noobaa-default-backing-store   Backing store mode: ALL_NODES_OFFLINE
139m   Warning  RejectedBackingStore       bucketclass/noobaa-default-bucket-class     NooBaa BackingStore "noobaa-default-backing-store" is in rejected phase
72m    Warning  RejectedBackingStore       bucketclass/noobaa-default-bucket-class     NooBaa BackingStore "noobaa-default-backing-store" is in rejected phase
61m    Warning  RejectedBackingStore       bucketclass/noobaa-default-bucket-class     NooBaa BackingStore "noobaa-default-backing-store" is in rejected phase
5m6s   Warning  RejectedBackingStore       bucketclass/noobaa-default-bucket-class     NooBaa BackingStore "noobaa-default-backing-store" is in rejected phase
131m   Warning  FailedGetScale             horizontalpodautoscaler/noobaa-endpoint     deployments/scale.apps "noobaa-endpoint" not found
71m    Warning  FailedGetScale             horizontalpodautoscaler/noobaa-endpoint     deployments/scale.apps "noobaa-endpoint" not found
59m    Normal   Pulled                     pod/noobaa-operator-6d66964f8c-9cjtc        Container image "quay.io/rhceph-dev/mcg-operator@sha256:c636dd4f39a1ffd78d790b10571a266c92aca0c84497a8986e3a42cc16147e5e" already present on machine
59m    Normal   Created                    pod/noobaa-operator-6d66964f8c-9cjtc        Created container noobaa-operator
59m    Normal   Started                    pod/noobaa-operator-6d66964f8c-9cjtc        Started container noobaa-operator
59m    Warning  BackOff                    pod/noobaa-operator-6d66964f8c-9cjtc        Back-off restarting failed container
131m   Normal   InstallWaiting             clusterserviceversion/ocs-operator.v4.7.0-241.ci   installing: waiting for deployment noobaa-operator to become ready: Waiting for rollout to finish: 0 of 1 updated replicas are available...
Segregating Object Storage related events from Persistent Storage Dashboard.
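Since both dashboards render events from the same openshift-storage namespace, the segregation amounts to filtering each event by the resource it refers to before it reaches a dashboard's Activity Pane. Below is a minimal sketch of that kind of predicate; the event shape, the marker list, and the function names here are simplified assumptions for illustration, not the console plugin's actual API:

```typescript
// Simplified shape of a Kubernetes event; the real console types carry
// many more fields (reason, type, message, timestamps, ...).
interface K8sEvent {
  involvedObject: { kind: string; name: string };
}

// Substrings that mark an event as belonging to Object Storage
// (NooBaa/MCG and the RGW object gateway). Hypothetical list.
const OBJECT_STORAGE_MARKERS = ['noobaa', 'rgw', 'cephobjectstore'];

// True when the event's involved object looks like an Object Storage resource.
const isObjectStorageEvent = (event: K8sEvent): boolean => {
  const name = event.involvedObject.name.toLowerCase();
  return OBJECT_STORAGE_MARKERS.some((marker) => name.includes(marker));
};

// The Persistent Storage dashboard keeps the complement: everything
// that is NOT an Object Storage event.
const isPersistentStorageEvent = (event: K8sEvent): boolean =>
  !isObjectStorageEvent(event);
```

With a split like this, an event on `pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-78585c8hpt8v` or `backingstore/noobaa-default-backing-store` is dropped from the Persistent Storage Activity Pane, while an event on an MGR pod still appears there, matching the expected behaviour described above.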
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438