Bug 2233027 - [UI] Topology shows rook-ceph-operator on every node
Summary: [UI] Topology shows rook-ceph-operator on every node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: management-console
Version: 4.14
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ODF 4.14.0
Assignee: Bipul Adhikari
QA Contact: Daniel Osypenko
URL:
Whiteboard:
Depends On:
Blocks: 2244409
 
Reported: 2023-08-21 07:35 UTC by Daniel Osypenko
Modified: 2023-11-08 18:55 UTC (History)
5 users

Fixed In Version: 4.14.0-123
Doc Type: Bug Fix
Doc Text:
Previously, the topology view showed the Rook-Ceph operator deployment on every node because the deployment was treated as the owner of multiple pods that are not actually related to it. With this fix, the deployment-to-node mapping mechanism in the topology view is changed, so the Rook-Ceph operator deployment is shown only on the node where its pod runs.
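The corrected mapping can be illustrated with a small sketch (TypeScript, as in odf-console; the types and helper names below are hypothetical and do not reflect the project's actual code): a deployment is placed only on nodes where pods it actually owns, via its ReplicaSet, are scheduled.

```typescript
// Illustrative sketch of deployment-to-node mapping for a topology view.
// All type and function names here are hypothetical, not odf-console APIs.

type Pod = {
  name: string;
  nodeName: string;  // pod .spec.nodeName
  ownerName: string; // controller owner reference (ReplicaSet name)
};

type Deployment = { name: string };

// A ReplicaSet name is its Deployment's name plus a hash suffix,
// e.g. "rook-ceph-operator-7b5d9c", so match on that prefix.
const ownedByDeployment = (p: Pod, d: Deployment): boolean =>
  p.ownerName.startsWith(d.name + "-");

// Fixed mapping: a deployment appears only on the node(s) where its own
// pods run, so a single-replica deployment maps to exactly one node.
const nodesForDeployment = (d: Deployment, pods: Pod[]): string[] => [
  ...new Set(
    pods.filter((p) => ownedByDeployment(p, d)).map((p) => p.nodeName),
  ),
];
```

With a single rook-ceph-operator pod scheduled on compute-1, this mapping yields only compute-1, instead of associating the deployment with every node through unrelated pods.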
Clone Of:
Environment:
Last Closed: 2023-11-08 18:54:15 UTC
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Github red-hat-storage odf-console pull 992 0 None Merged [release-4.14] Bug 2233027: Fixes Deployment node map 2023-08-25 11:33:39 UTC
Github red-hat-storage odf-console pull 993 0 None open [release-4.14-compatibility] Bug 2233027: Fixes Deployment node map 2023-08-29 12:02:42 UTC
Red Hat Product Errata RHSA-2023:6832 0 None None None 2023-11-08 18:55:04 UTC

Description Daniel Osypenko 2023-08-21 07:35:05 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

As a continuation of BZ #2209251, rook-ceph-operator is now depicted on every worker node (https://drive.google.com/file/d/1ZwxWvnrluoCu6H40PjPpL7u47yHP3cTU/), even though it has only one replica, with its pod deployed on a single node.


Version of all relevant components (if applicable):

OC version:
Client Version: 4.13.4
Kustomize Version: v4.5.7
Server Version: 4.14.0-0.nightly-2023-08-11-055332
Kubernetes Version: v1.27.4+deb2c60

OCS version:
ocs-operator.v4.14.0-110.stable              OpenShift Container Storage   4.14.0-110.stable              Succeeded

Cluster version
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.14.0-0.nightly-2023-08-11-055332   True        False         4d11h   Cluster version is 4.14.0-0.nightly-2023-08-11-055332

Rook version:
rook: v4.14.0-0.2d8264501d13c4389310b7fe2bab06bf060916d2
go: go1.20.5

Ceph version:
ceph version 17.2.6-107.el9cp (4079b48a400e4d23864de0da6d093e200038d7fb) quincy (stable)


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
no

Is there any workaround available to the best of your knowledge?
no

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Can this issue reproducible?
yes, every time

Can this issue reproduce from the UI?
yes

If this is a regression, please provide more details to justify this:
new feature

Steps to Reproduce:

1. oc get pod -n openshift-storage -l app=rook-ceph-operator -o custom-columns=NODE:.spec.nodeName --no-headers=true 
> compute-<num>
remember the node name
2. Login to management-console and navigate to Storage / Data Foundation / Topology
3. Navigate to any node which is not equal to the node from step 1
4. Find the rook-ceph-operator deployment, open its resource details, and check the Node field; compare it to the node name from step 1


Actual results:
the rook-ceph-operator deployment is found on every node

Expected results:
the rook-ceph-operator deployment should not be found on any node other than the one from step 1

Additional info:
described bug also found on ODF 4.13

Comment 6 Daniel Osypenko 2023-08-30 15:26:37 UTC
verified. rook-ceph-operator deployed on one node https://url.corp.redhat.com/a4ae185

Comment 9 errata-xmlrpc 2023-11-08 18:54:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6832

