Bug 2233027

Summary: [UI] Topology shows rook-ceph-operator on every node
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Daniel Osypenko <dosypenk>
Component: management-console
Assignee: Bipul Adhikari <badhikar>
Status: CLOSED ERRATA
QA Contact: Daniel Osypenko <dosypenk>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 4.14
CC: badhikar, kbg, odf-bz-bot, skatiyar, tdesala
Target Milestone: ---   
Target Release: ODF 4.14.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: 4.14.0-123
Doc Type: Bug Fix
Doc Text:
Previously, the topology view showed the Rook-Ceph operator deployment on every node, because the deployment was treated as the owner of multiple pods that are not actually related to it. With this fix, the deployment-to-node mapping mechanism in the topology view is changed, and as a result the Rook-Ceph operator deployment is shown only on the one node that runs its pod (a sketch of the revised mapping follows the metadata below).
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-11-08 18:54:15 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2244409    
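
The fix described in the Doc Text can be illustrated with a minimal, hypothetical TypeScript sketch (simplified resource shapes, not the actual management-console code): a deployment is mapped only to nodes running pods whose owner-reference chain (Pod -> ReplicaSet -> Deployment) leads back to that deployment, instead of attributing unrelated pods to it.

// Hypothetical, simplified resource shapes -- not the console's real data model.
interface OwnerRef { kind: string; name: string; }
interface Pod { name: string; nodeName: string; ownerRefs: OwnerRef[]; }
interface ReplicaSet { name: string; ownerRefs: OwnerRef[]; }
interface Deployment { name: string; }

// Map a deployment to the nodes that actually run its pods by following
// owner references only (Pod -> ReplicaSet -> Deployment).
function nodesForDeployment(
  deployment: Deployment,
  replicaSets: ReplicaSet[],
  pods: Pod[],
): Set<string> {
  const ownedReplicaSets = new Set(
    replicaSets
      .filter(rs => rs.ownerRefs.some(ref => ref.kind === 'Deployment' && ref.name === deployment.name))
      .map(rs => rs.name),
  );
  const nodes = new Set<string>();
  for (const pod of pods) {
    if (pod.ownerRefs.some(ref => ref.kind === 'ReplicaSet' && ownedReplicaSets.has(ref.name))) {
      nodes.add(pod.nodeName);
    }
  }
  return nodes;
}

// With a single rook-ceph-operator pod, the mapping yields exactly one node.
const nodes = nodesForDeployment(
  { name: 'rook-ceph-operator' },
  [{ name: 'rook-ceph-operator-abc123', ownerRefs: [{ kind: 'Deployment', name: 'rook-ceph-operator' }] }],
  [
    { name: 'rook-ceph-operator-abc123-xyz', nodeName: 'compute-0', ownerRefs: [{ kind: 'ReplicaSet', name: 'rook-ceph-operator-abc123' }] },
    { name: 'rook-ceph-osd-0-abc', nodeName: 'compute-1', ownerRefs: [{ kind: 'ReplicaSet', name: 'rook-ceph-osd-0-def456' }] },
  ],
);
console.log(Array.from(nodes)); // ["compute-0"]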

Description Daniel Osypenko 2023-08-21 07:35:05 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

As a continuation of BZ #2209251, rook-ceph-operator is now depicted on every worker node (https://drive.google.com/file/d/1ZwxWvnrluoCu6H40PjPpL7u47yHP3cTU/), even though it has only one replica and its pod is deployed on a single node.
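
The actual placement can be confirmed outside the UI with a short TypeScript sketch, assuming the positional 0.x API of @kubernetes/client-node and the same kubeconfig that oc uses (this is only an illustrative equivalent of the oc command in step 1 of the reproduce steps below):

import * as k8s from '@kubernetes/client-node';

async function main(): Promise<void> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault(); // reuse the credentials that `oc`/`kubectl` use

  const core = kc.makeApiClient(k8s.CoreV1Api);
  // List only the rook-ceph-operator pods in the openshift-storage namespace.
  const res = await core.listNamespacedPod(
    'openshift-storage',
    undefined, undefined, undefined, undefined,
    'app=rook-ceph-operator',
  );

  // With a single replica this prints exactly one pod on exactly one node.
  for (const pod of res.body.items) {
    console.log(pod.metadata?.name, '->', pod.spec?.nodeName);
  }
}

main().catch(err => {
  console.error(err);
  process.exit(1);
});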


Version of all relevant components (if applicable):

OC version:
Client Version: 4.13.4
Kustomize Version: v4.5.7
Server Version: 4.14.0-0.nightly-2023-08-11-055332
Kubernetes Version: v1.27.4+deb2c60

OCS version:
ocs-operator.v4.14.0-110.stable              OpenShift Container Storage   4.14.0-110.stable              Succeeded

Cluster version
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.14.0-0.nightly-2023-08-11-055332   True        False         4d11h   Cluster version is 4.14.0-0.nightly-2023-08-11-055332

Rook version:
rook: v4.14.0-0.2d8264501d13c4389310b7fe2bab06bf060916d2
go: go1.20.5

Ceph version:
ceph version 17.2.6-107.el9cp (4079b48a400e4d23864de0da6d093e200038d7fb) quincy (stable)


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
no

Is there any workaround available to the best of your knowledge?
no

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
yes, every time

Can this issue be reproduced from the UI?
yes

If this is a regression, please provide more details to justify this:
new feature

Steps to Reproduce:

1. oc get pod -n openshift-storage -l app=rook-ceph-operator -o custom-columns=NODE:.spec.nodeName --no-headers=true 
> compute-<num>
remember the node name
2. Login to management-console and navigate to Storage / Data Foundation / Topology
3. Navigate to any node which is not equal to the node from step 1
4. Find the rook-ceph-operator deployment, open its deployment resources, navigate to the deployment's node info, and compare the node shown there with the node from step 1


Actual results:
The rook-ceph-operator deployment is found on every node.

Expected results:
The rook-ceph-operator deployment should not be found on any node other than the node from step 1.

Additional info:
The described bug is also found on ODF 4.13.

Comment 6 Daniel Osypenko 2023-08-30 15:26:37 UTC
Verified. rook-ceph-operator is deployed on one node: https://url.corp.redhat.com/a4ae185

Comment 9 errata-xmlrpc 2023-11-08 18:54:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6832