Bug 2209251

Summary: [UI] ODF Topology rook-ceph-operator deployment shows wrong resources
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Daniel Osypenko <dosypenk>
Component: management-console
Assignee: Bipul Adhikari <badhikar>
Status: ON_QA
QA Contact: Daniel Osypenko <dosypenk>
Severity: medium
Priority: medium
Version: 4.13
CC: badhikar, ebenahar, muagarwa, nigoyal, odf-bz-bot, skatiyar, tdesala, tnielsen, uchapaga
Target Milestone: ---
Flags: dosypenk: needinfo? (badhikar)
Target Release: ODF 4.14.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: 4.14.0-110
Type: Bug
Regression: ---

Description Daniel Osypenko 2023-05-23 08:47:35 UTC
Created attachment 1966389 [details]
rook-ceph-operator__on_any_node

Description of problem (please be as detailed as possible and provide log
snippets):

When opening the deployments from ANY node, rook-ceph-operator is always present, even though the rook-ceph-operator deployment has only one pod and should therefore appear on only one node.
When opening the rook-ceph-operator resources in the side bar, it shows resources that do not belong to the rook-ceph-operator deployment.


Version of all relevant components (if applicable):

OC version:
Client Version: 4.12.0-202208031327
Kustomize Version: v4.5.4
Server Version: 4.13.0-0.nightly-2023-05-20-014943
Kubernetes Version: v1.26.3+b404935

OCS version:
ocs-operator.v4.13.0-203.stable              OpenShift Container Storage   4.13.0-203.stable              Succeeded

Cluster version:
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.13.0-0.nightly-2023-05-20-014943   True        False         6h30m   Cluster version is 4.13.0-0.nightly-2023-05-20-014943

Rook version:
rook: v4.13.0-0.e5648f0a2577b9bfd2aa256d4853dc3e8d94862a
go: go1.19.6

Ceph version:
ceph version 17.2.6-50.el9cp (c202ddb5589554af0ce43432ff07cd7ce8f35243) quincy (stable)

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
yes, 10/10

Can this issue be reproduced from the UI?
yes

If this is a regression, please provide more details to justify this:
new feature, not regression

Steps to Reproduce:
1. Deploy an OCP cluster with ODF and a storage system
2. Log in to the management console and navigate to Storage / Data Foundation / Topology tab
3. Navigate to any node and open rook-ceph-operator



Actual results:
rook-ceph-operator is shown on every node, and its Resources side bar lists resources that do not belong to the deployment

Expected results:
rook-ceph-operator should be present only on the node where its pod is deployed. When selecting rook-ceph-operator and opening Resources in the side bar, only the pod with the 'rook-ceph-operator' prefix should be listed, pointing to the deployment's single pod.
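
For illustration, a minimal sketch of the filtering this implies (assuming the standard Kubernetes ownerReferences chain; the helper names here are hypothetical, not the console's actual code):

// Minimal fields needed from the Kubernetes API objects.
interface OwnerRef {
  kind: string;
  name: string;
  controller?: boolean;
}

interface Resource {
  kind: string;
  name: string;
  ownerReferences?: OwnerRef[];
}

// True when `child` has a controller reference pointing at the given owner.
const isControlledBy = (child: Resource, ownerKind: string, ownerName: string): boolean =>
  (child.ownerReferences ?? []).some(
    (ref) => ref.controller === true && ref.kind === ownerKind && ref.name === ownerName,
  );

// Pods that actually belong to a Deployment are reached through its
// ReplicaSets (Deployment -> ReplicaSet -> Pod), not through every
// resource that merely carries an ownerReference to the Deployment.
function podsOfDeployment(deploymentName: string, all: Resource[]): Resource[] {
  const replicaSets = all.filter(
    (r) => r.kind === 'ReplicaSet' && isControlledBy(r, 'Deployment', deploymentName),
  );
  return all.filter(
    (pod) =>
      pod.kind === 'Pod' &&
      replicaSets.some((rs) => isControlledBy(pod, 'ReplicaSet', rs.name)),
  );
}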

Additional info:

Comment 3 Mudit Agarwal 2023-05-23 12:51:13 UTC
Not a 4.13 blocker, moving out

Comment 4 Bipul Adhikari 2023-05-24 10:26:39 UTC
The csi-rbdplugin and csi-cephfsplugin DaemonSets have owner references set to the `rook-ceph-operator` deployment. Because of this, when we list the pods that make up the `rook-ceph-operator` Deployment, these get listed there as well. I am not sure why the operator is doing so. Maybe this is something that we need to update in the operator. Adding a needinfo on the OCS engineers.
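
For reference, the reference in question looks roughly like this on the DaemonSet (these are the standard metav1.OwnerReference fields; the uid value below is illustrative):

// Standard Kubernetes owner reference fields (metav1.OwnerReference).
interface OwnerReference {
  apiVersion: string;
  kind: string;
  name: string;
  uid: string;
  controller?: boolean;
  blockOwnerDeletion?: boolean;
}

// Roughly what metadata.ownerReferences on the csi-rbdplugin DaemonSet
// contains: a reference pointing at the rook-ceph-operator Deployment.
// A naive "everything owned by this Deployment" lookup in the console
// therefore matches the CSI DaemonSets as well. (uid is illustrative.)
const csiRbdPluginOwnerRef: OwnerReference = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  name: 'rook-ceph-operator',
  uid: '11111111-2222-3333-4444-555555555555',
};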

Comment 5 Nitin Goyal 2023-05-29 04:43:29 UTC
Moving the needinfo to Travis

Comment 6 Travis Nielsen 2023-05-30 19:30:47 UTC
The owner references are an important design and implementation detail related to uninstall. If the Rook operator is uninstalled (more precisely, if the rook operator deployment is deleted), all resources that have owner references set to the rook operator are also deleted. Thus, if the Rook operator is removed, the CSI driver is also removed. This behavior is by design, so hopefully the UI can be fixed to not show the CSI driver as part of the same component as the rook operator.
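
To make the design concrete, here is a toy model of that cascade (assuming standard Kubernetes garbage-collection semantics; the real garbage collector matches owners by uid, this sketch matches by name for readability):

// Simplified cluster inventory.
interface ObjLite {
  name: string;
  owner?: string;
}

const objects: ObjLite[] = [
  { name: 'rook-ceph-operator' },                            // the operator Deployment
  { name: 'csi-rbdplugin', owner: 'rook-ceph-operator' },    // DaemonSet
  { name: 'csi-cephfsplugin', owner: 'rook-ceph-operator' }, // DaemonSet
];

// Toy model of the Kubernetes garbage collector: deleting an owner
// transitively deletes every object whose ownerReference points at it.
function afterDelete(all: ObjLite[], deletedName: string): ObjLite[] {
  const gone = new Set<string>([deletedName]);
  let changed = true;
  while (changed) {
    changed = false;
    for (const o of all) {
      if (o.owner !== undefined && gone.has(o.owner) && !gone.has(o.name)) {
        gone.add(o.name);
        changed = true;
      }
    }
  }
  return all.filter((o) => !gone.has(o.name));
}

// Deleting the operator Deployment removes the CSI DaemonSets too,
// which is the uninstall behavior described above:
console.log(afterDelete(objects, 'rook-ceph-operator')); // -> []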

Comment 11 Daniel Osypenko 2023-08-17 13:41:52 UTC
Even though we no longer see the pods that merely hold owner references to rook-ceph-operator in the Resources side bar, we still see the rook-ceph-operator deployment on every node.
The deployment has 1 pod, so it cannot be running on every node. See attachment: https://drive.google.com/file/d/1ZwxWvnrluoCu6H40PjPpL7u47yHP3cTU/view?usp=sharing
@badhikar
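
For illustration, a sketch of how the topology view could derive placement instead (assuming the standard pod field spec.nodeName; the names are illustrative, not the console's actual code):

// Minimal pod shape: spec.nodeName is where the scheduler placed the pod.
interface PodLite {
  name: string;
  nodeName?: string;
}

// A deployment should be drawn only on the nodes where its pods actually run.
function nodesForDeployment(pods: PodLite[]): Set<string> {
  const nodes = new Set<string>();
  for (const pod of pods) {
    if (pod.nodeName !== undefined) {
      nodes.add(pod.nodeName);
    }
  }
  return nodes;
}

// A single-replica deployment like rook-ceph-operator yields exactly one node:
// nodesForDeployment([{ name: 'rook-ceph-operator-abc12', nodeName: 'worker-1' }])
// -> Set { 'worker-1' }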