Bug 2209251 - [UI] ODF Topology rook-ceph-operator deployment shows wrong resources
Summary: [UI] ODF Topology rook-ceph-operator deployment shows wrong resources
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: management-console
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ODF 4.14.0
Assignee: Bipul Adhikari
QA Contact: Daniel Osypenko
URL:
Whiteboard:
Depends On:
Blocks: 2244409
 
Reported: 2023-05-23 08:47 UTC by Daniel Osypenko
Modified: 2023-11-08 18:50 UTC
CC List: 10 users

Fixed In Version: 4.14.0-110
Doc Type: Bug Fix
Doc Text:
. OpenShift Data Foundation Topology rook-ceph-operator deployment now shows the correct resources
Previously, the owner references for the CSI pods and some other pods were set to the rook-ceph-operator deployment, which caused the mapping to show those pods as part of the deployment as well. With this fix, pods are mapped top down instead of bottom up, ensuring that only the pods that actually belong to the deployment are shown.
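As an illustration of the top-down approach described above, here is a minimal TypeScript sketch; the simplified resource types and the getDeploymentPods helper are assumptions made for this example, not the actual odf-console implementation:

// Minimal Kubernetes metadata shapes, reduced to what the sketch needs.
type OwnerReference = { kind: string; name: string; uid: string };
type K8sResource = {
  kind: string;
  metadata: { name: string; uid: string; ownerReferences?: OwnerReference[] };
};

const ownedBy = (child: K8sResource, owner: K8sResource): boolean =>
  (child.metadata.ownerReferences ?? []).some((ref) => ref.uid === owner.metadata.uid);

// Top down: start from the Deployment, collect the ReplicaSets it owns, then the
// Pods owned by those ReplicaSets. DaemonSets (and their pods) that merely carry
// an owner reference to the Deployment are never reached, so the CSI pods drop out.
const getDeploymentPods = (
  deployment: K8sResource,
  replicaSets: K8sResource[],
  pods: K8sResource[],
): K8sResource[] => {
  const ownedReplicaSets = replicaSets.filter((rs) => ownedBy(rs, deployment));
  return pods.filter((pod) => ownedReplicaSets.some((rs) => ownedBy(pod, rs)));
};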
Clone Of:
Environment:
Last Closed: 2023-11-08 18:50:55 UTC
Embargoed:




Links
System: GitHub
ID: red-hat-storage/odf-console pull 938
Private: 0
Priority: None
Status: Merged
Summary: Bug 2209251: Fixes Topology view Deployment resources view
Last Updated: 2023-07-28 05:18:15 UTC

Description Daniel Osypenko 2023-05-23 08:47:35 UTC
Created attachment 1966389 [details]
rook-ceph-operator__on_any_node

Description of problem (please be as detailed as possible and provide log
snippets):

When opening the deployments of ANY node, rook-ceph-operator is always present, even though the rook-ceph-operator deployment has only one pod and should therefore appear on a single node.
When opening the rook-ceph-operator resources in the sidebar, it shows resources that do not belong to the rook-ceph-operator deployment.


Version of all relevant components (if applicable):

OC version:
Client Version: 4.12.0-202208031327
Kustomize Version: v4.5.4
Server Version: 4.13.0-0.nightly-2023-05-20-014943
Kubernetes Version: v1.26.3+b404935

OCS version:
ocs-operator.v4.13.0-203.stable              OpenShift Container Storage   4.13.0-203.stable              Succeeded

Cluster version
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.13.0-0.nightly-2023-05-20-014943   True        False         6h30m   Cluster version is 4.13.0-0.nightly-2023-05-20-014943

Rook version:
rook: v4.13.0-0.e5648f0a2577b9bfd2aa256d4853dc3e8d94862a
go: go1.19.6

Ceph version:
ceph version 17.2.6-50.el9cp (c202ddb5589554af0ce43432ff07cd7ce8f35243) quincy (stable)

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
yes, 10/10

Can this issue be reproduced from the UI?
yes

If this is a regression, please provide more details to justify this:
new feature, not regression

Steps to Reproduce:
1. Deploy the OCP cluster with ODF and storage system
2. Login to the management-console and navigate to Storage / Data Foundation / Topology Tab
3. Navigate to any node and open rook-ceph-operator



Actual results:
rook-ceph-operator is present on every node, and its Resources sidebar shows resources not related to the deployment

Expected results:
rook-ceph-operator is present only on the node where its pod is deployed. When selecting rook-ceph-operator and opening Resources in the sidebar, only the pod with the 'rook-ceph-operator' prefix should be listed, pointing to its single pod.

Additional info:

Comment 3 Mudit Agarwal 2023-05-23 12:51:13 UTC
Not a 4.13 blocker, moving out

Comment 4 Bipul Adhikari 2023-05-24 10:26:39 UTC
The csi-rbdplugin and csi-cephfsplugin DaemonSets have owner references set to the `rook-ceph-operator` deployment. Because of this, when we list the pods that make up the `rook-ceph-operator` Deployment, these pods get listed there as well. I am not sure why the operator does this; maybe it is something we need to update in the operator. Adding a needinfo on the OCS engineers.
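For contrast, a minimal sketch of the bottom-up walk that produces this mis-mapping; the types and the findOwningDeployment helper are illustrative assumptions, not the actual console code. Climbing each pod's owner chain until a Deployment is found passes through the CSI DaemonSets, whose owner reference points at the rook-ceph-operator Deployment, so the CSI pods resolve to it as well:

type OwnerReference = { kind: string; name: string; uid: string };
type K8sResource = {
  kind: string;
  metadata: { name: string; uid: string; ownerReferences?: OwnerReference[] };
};

// Bottom up: from a pod, follow owner references upward until a Deployment is hit.
// resourcesByUid is an assumed index of every watched resource, keyed by uid.
const findOwningDeployment = (
  resource: K8sResource,
  resourcesByUid: Map<string, K8sResource>,
): K8sResource | undefined => {
  for (const ref of resource.metadata.ownerReferences ?? []) {
    const owner = resourcesByUid.get(ref.uid);
    if (!owner) continue;
    if (owner.kind === 'Deployment') return owner;
    const viaOwner = findOwningDeployment(owner, resourcesByUid);
    if (viaOwner) return viaOwner;
  }
  return undefined;
};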

Comment 5 Nitin Goyal 2023-05-29 04:43:29 UTC
Moving the need info on Travis

Comment 6 Travis Nielsen 2023-05-30 19:30:47 UTC
The owner references are an important design and implementation detail related to uninstall. If the Rook operator is uninstalled (more precisely, if the rook operator deployment is deleted), all resources that have the owner references set to the rook operator will also be deleted. Thus, if the Rook operator is removed, the csi driver is also removed. This behavior is by design, so hopefully the UI can fix this to not show the CSI driver as being the same component as the rook operator.
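For reference, the owner reference the operator places on the CSI DaemonSets would look roughly like the following; this is an illustrative sketch in TypeScript object form, with a placeholder uid rather than values from a real cluster:

// Deleting the rook-ceph-operator Deployment lets the garbage collector remove any
// resource carrying an owner reference like this one, so the CSI driver goes too.
const csiDaemonSetOwnerRef = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  name: 'rook-ceph-operator',
  uid: '00000000-0000-0000-0000-000000000000', // placeholder, not a real uid
};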

Comment 11 Daniel Osypenko 2023-08-17 13:41:52 UTC
Even though we no longer see the pods that have references to rook-ceph-operator in the Resources sidebar, we still see the rook-ceph-operator deployment on every node.
The deployment has 1 pod, so it cannot be deployed on each node. See attachment: https://drive.google.com/file/d/1ZwxWvnrluoCu6H40PjPpL7u47yHP3cTU/view?usp=sharing
@badhikar

Comment 12 Bipul Adhikari 2023-08-21 06:36:51 UTC
(In reply to Daniel Osypenko from comment #11)
> Even though we no longer see the pods that have references to
> rook-ceph-operator in the Resources sidebar, we still see the
> rook-ceph-operator deployment on every node.
> The deployment has 1 pod, so it cannot be deployed on each node. See
> attachment:
> https://drive.google.com/file/d/1ZwxWvnrluoCu6H40PjPpL7u47yHP3cTU/view?usp=sharing
> @badhikar

This behavior should be tracked as a separate bug. Please open a new one.

Comment 13 Daniel Osypenko 2023-08-22 09:50:42 UTC
(In reply to Bipul Adhikari from comment #12)
> This behavior should be tracked as a separate bug. Please open a new one.

Created BZ #2233027.
Moving this bug to Verified.

Comment 16 errata-xmlrpc 2023-11-08 18:50:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6832

