Bug 2216707 - Disable topology view for external mode
Summary: Disable topology view for external mode
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: management-console
Version: 4.13
Hardware: ppc64le
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.14.0
Assignee: Bipul Adhikari
QA Contact: Prasad Desala
URL:
Whiteboard:
Depends On: 2213739
Blocks: 2154341 2244409
 
Reported: 2023-06-22 10:41 UTC by Bipul Adhikari
Modified: 2023-11-08 18:51 UTC
CC List: 11 users

Fixed In Version: 4.14.0-110
Doc Type: Bug Fix
Doc Text:
Previously, the topology view showed a blank screen for external mode because external mode is not supported in the topology view. With this fix, the topology view is disabled for external mode and a message appears instead of the blank screen.
Clone Of: 2213739
Environment:
Last Closed: 2023-11-08 18:51:26 UTC
Embargoed:




Links:
Github red-hat-storage/odf-console pull 902, Status: Merged, Summary: Bug 2216707: Fixes for topology view for external mode, Last Updated: 2023-06-26 13:58:11 UTC

Description Bipul Adhikari 2023-06-22 10:41:44 UTC
+++ This bug was initially created as a clone of Bug #2213739 +++

Description of problem (please be as detailed as possible and provide log snippets):
Topology is missing inside Storage system in an external mode ODF cluster.

Version of all relevant components (if applicable):
ODF: 4.13.0-207
OCP: 4.13

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Create an RHCS cluster.
2. Create an OCP cluster and install the ODF operator on it.
3. Create a storage system using the external RHCS cluster.
4. Navigate to Storage -> Data Foundation -> Topology (an external-mode check is sketched below).
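
Before checking the Topology tab, it can help to confirm the storage system is actually running in external mode. A minimal check (a sketch based on the storagecluster output later in this report; adjust the namespace and resource names for other environments):

# The spec.externalStorage.enable field should report "true" for an external mode cluster
oc get storagecluster -n openshift-storage \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.externalStorage.enable}{"\n"}{end}'
# For the cluster in this report this prints: ocs-external-storagecluster    true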


Actual results:
Topology does not appear in the Topology tab.

Expected results:
Topology should appear in the Topology tab.

Additional info:

--- Additional comment from RHEL Program Management on 2023-06-09 05:38:01 UTC ---

This bug previously had no release flag set. The release flag 'odf-4.13.0' has now been set to '?', so the bug is proposed to be fixed in the ODF 4.13.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset, since the Acks are to be set against a release flag.

--- Additional comment from Aaruni Aggarwal on 2023-06-09 05:40:01 UTC ---

Storagecluster and Cephcluster:


[root@rdr-cicd-odf-ef1f-bastion-0 ~]# oc get storagecluster -n openshift-storage
NAME                          AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-external-storagecluster   15m   Ready   true       2023-06-08T06:06:04Z   4.13.0

[root@rdr-cicd-odf-ef1f-bastion-0 ~]# oc get cephcluster -n openshift-storage
NAME                                      DATADIRHOSTPATH   MONCOUNT   AGE   PHASE       MESSAGE                          HEALTH      EXTERNAL   FSID
ocs-external-storagecluster-cephcluster                                15m   Connected   Cluster connected successfully   HEALTH_OK   true       46ab550e-ca6e-11ed-af21-00000a0b13a5

[root@rdr-cicd-odf-ef1f-bastion-0 ~]# oc describe storagecluster -n openshift-storage
Name:         ocs-external-storagecluster
Namespace:    openshift-storage
Labels:       <none>
Annotations:  uninstall.ocs.openshift.io/cleanup-policy: delete
              uninstall.ocs.openshift.io/mode: graceful
API Version:  ocs.openshift.io/v1
Kind:         StorageCluster
Metadata:
  Creation Timestamp:  2023-06-08T06:06:04Z
  Finalizers:
    storagecluster.ocs.openshift.io
  Generation:  2
  Managed Fields:
    API Version:  ocs.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:externalStorage:
          .:
          f:enable:
        f:network:
          .:
          f:connections:
            .:
            f:encryption:
    Manager:      Mozilla
    Operation:    Update
    Time:         2023-06-08T06:06:04Z
    API Version:  ocs.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:ownerReferences:
          .:
          k:{"uid":"712d0159-3df2-43b1-a611-eae1e9de5f0b"}:
    Manager:      manager
    Operation:    Update
    Time:         2023-06-08T06:06:04Z
    API Version:  ocs.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:uninstall.ocs.openshift.io/cleanup-policy:
          f:uninstall.ocs.openshift.io/mode:
        f:finalizers:
          .:
          v:"storagecluster.ocs.openshift.io":
      f:spec:
        f:arbiter:
        f:encryption:
          .:
          f:kms:
        f:labelSelector:
        f:managedResources:
          .:
          f:cephBlockPools:
          f:cephCluster:
          f:cephConfig:
          f:cephDashboard:
          f:cephFilesystems:
          f:cephNonResilientPools:
          f:cephObjectStoreUsers:
          f:cephObjectStores:
          f:cephRBDMirror:
          f:cephToolbox:
        f:mirroring:
        f:network:
          f:multiClusterService:
    Manager:      ocs-operator
    Operation:    Update
    Time:         2023-06-08T06:06:04Z
    API Version:  ocs.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:externalSecretHash:
        f:images:
          .:
          f:ceph:
            .:
            f:desiredImage:
          f:noobaaCore:
            .:
            f:actualImage:
            f:desiredImage:
          f:noobaaDB:
            .:
            f:actualImage:
            f:desiredImage:
        f:kmsServerConnection:
        f:phase:
        f:relatedObjects:
        f:version:
    Manager:      ocs-operator
    Operation:    Update
    Subresource:  status
    Time:         2023-06-08T06:25:06Z
  Owner References:
    API Version:     odf.openshift.io/v1alpha1
    Kind:            StorageSystem
    Name:            ocs-external-storagecluster-storagesystem
    UID:             712d0159-3df2-43b1-a611-eae1e9de5f0b
  Resource Version:  1402178
  UID:               dd67158d-dc54-4499-aa6c-38c0e70c75d4
Spec:
  Arbiter:
  Encryption:
    Kms:
  External Storage:
    Enable:  true
  Label Selector:
  Managed Resources:
    Ceph Block Pools:
    Ceph Cluster:
    Ceph Config:
    Ceph Dashboard:
    Ceph Filesystems:
    Ceph Non Resilient Pools:
    Ceph Object Store Users:
    Ceph Object Stores:
    Ceph RBD Mirror:
    Ceph Toolbox:
  Mirroring:
  Network:
    Connections:
      Encryption:
    Multi Cluster Service:
Status:
  Conditions:
    Last Heartbeat Time:   2023-06-08T06:06:06Z
    Last Transition Time:  2023-06-08T06:06:06Z
    Message:               Version check successful
    Reason:                VersionMatched
    Status:                False
    Type:                  VersionMismatch
    Last Heartbeat Time:   2023-06-08T06:25:06Z
    Last Transition Time:  2023-06-08T06:06:06Z
    Message:               Reconcile completed successfully
    Reason:                ReconcileCompleted
    Status:                True
    Type:                  ReconcileComplete
    Last Heartbeat Time:   2023-06-08T06:25:06Z
    Last Transition Time:  2023-06-08T06:08:28Z
    Message:               Reconcile completed successfully
    Reason:                ReconcileCompleted
    Status:                True
    Type:                  Available
    Last Heartbeat Time:   2023-06-08T06:25:06Z
    Last Transition Time:  2023-06-08T06:08:28Z
    Message:               Reconcile completed successfully
    Reason:                ReconcileCompleted
    Status:                False
    Type:                  Progressing
    Last Heartbeat Time:   2023-06-08T06:25:06Z
    Last Transition Time:  2023-06-08T06:06:06Z
    Message:               Reconcile completed successfully
    Reason:                ReconcileCompleted
    Status:                False
    Type:                  Degraded
    Last Heartbeat Time:   2023-06-08T06:25:06Z
    Last Transition Time:  2023-06-08T06:08:28Z
    Message:               Reconcile completed successfully
    Reason:                ReconcileCompleted
    Status:                True
    Type:                  Upgradeable
  External Secret Hash:    5fe48ad90d8bf08a712e2f5a67384c5ebf2348fca08124574afb4a77f0a5eb97e7841ad23c23ab7f180437ec01356789b94ca5aa6a73dca853a730cde0a63fc0
  Images:
    Ceph:
      Desired Image:  quay.io/rhceph-dev/rhceph@sha256:fa6d01cdef17bc32d2b95b8121b02f4d41adccc5ba8a9b95f38c97797ff6621f
    Noobaa Core:
      Actual Image:   quay.io/rhceph-dev/odf4-mcg-core-rhel9@sha256:574b6258ee4d7ac2532b9143390a130acf89b34b77abc6c446a329286bcd27a5
      Desired Image:  quay.io/rhceph-dev/odf4-mcg-core-rhel9@sha256:574b6258ee4d7ac2532b9143390a130acf89b34b77abc6c446a329286bcd27a5
    Noobaa DB:
      Actual Image:   quay.io/rhceph-dev/rhel8-postgresql-12@sha256:f7f678d44d5934ed3d95c83b4428fee4b616f37e8eadc5049778f133b4ce3713
      Desired Image:  quay.io/rhceph-dev/rhel8-postgresql-12@sha256:f7f678d44d5934ed3d95c83b4428fee4b616f37e8eadc5049778f133b4ce3713
  Kms Server Connection:
  Phase:  Ready
  Related Objects:
    API Version:       ceph.rook.io/v1
    Kind:              CephCluster
    Name:              ocs-external-storagecluster-cephcluster
    Namespace:         openshift-storage
    Resource Version:  1401943
    UID:               fe6a1e03-0a05-4ecf-9aae-c7b6565cd7b7
    API Version:       noobaa.io/v1alpha1
    Kind:              NooBaa
    Name:              noobaa
    Namespace:         openshift-storage
    Resource Version:  1402174
    UID:               39ed0313-cb11-4020-8bad-e8afd812a047
  Version:             4.13.0
Events:                <none>
 
[root@rdr-cicd-odf-ef1f-bastion-0 ~]# oc describe cephcluster -n openshift-storage
Name:         ocs-external-storagecluster-cephcluster
Namespace:    openshift-storage
Labels:       app=ocs-external-storagecluster
Annotations:  <none>
API Version:  ceph.rook.io/v1
Kind:         CephCluster
Metadata:
  Creation Timestamp:  2023-06-08T06:06:05Z
  Finalizers:
    cephcluster.ceph.rook.io
  Generation:  1
  Managed Fields:
    API Version:  ceph.rook.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .:
          f:app:
        f:ownerReferences:
          .:
          k:{"uid":"dd67158d-dc54-4499-aa6c-38c0e70c75d4"}:
      f:spec:
        .:
        f:cephVersion:
        f:cleanupPolicy:
          .:
          f:sanitizeDisks:
        f:crashCollector:
          .:
          f:disable:
        f:dashboard:
        f:disruptionManagement:
        f:external:
          .:
          f:enable:
        f:healthCheck:
          .:
          f:daemonHealth:
            .:
            f:mon:
            f:osd:
            f:status:
        f:labels:
          .:
          f:monitoring:
            .:
            f:rook.io/managedBy:
        f:logCollector:
        f:mgr:
        f:mon:
        f:monitoring:
          .:
          f:enabled:
          f:externalMgrEndpoints:
          f:externalMgrPrometheusPort:
        f:network:
          .:
          f:connections:
            .:
            f:encryption:
          f:multiClusterService:
        f:security:
          .:
          f:keyRotation:
            .:
            f:enabled:
          f:kms:
        f:storage:
    Manager:      ocs-operator
    Operation:    Update
    Time:         2023-06-08T06:06:05Z
    API Version:  ceph.rook.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"cephcluster.ceph.rook.io":
    Manager:      rook
    Operation:    Update
    Time:         2023-06-08T06:06:12Z
    API Version:  ceph.rook.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:ceph:
          .:
          f:capacity:
            .:
            f:bytesAvailable:
            f:bytesTotal:
            f:bytesUsed:
            f:lastUpdated:
          f:fsid:
          f:health:
          f:lastChecked:
          f:versions:
            .:
            f:mds:
              .:
              f:ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable):
            f:mgr:
              .:
              f:ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable):
            f:mon:
              .:
              f:ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable):
            f:osd:
              .:
              f:ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable):
            f:overall:
              .:
              f:ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable):
            f:rgw:
              .:
              f:ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable):
        f:conditions:
        f:message:
        f:phase:
        f:state:
        f:version:
          .:
          f:version:
    Manager:      rook
    Operation:    Update
    Subresource:  status
    Time:         2023-06-08T06:24:39Z
  Owner References:
    API Version:           ocs.openshift.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  StorageCluster
    Name:                  ocs-external-storagecluster
    UID:                   dd67158d-dc54-4499-aa6c-38c0e70c75d4
  Resource Version:        1401943
  UID:                     fe6a1e03-0a05-4ecf-9aae-c7b6565cd7b7
Spec:
  Ceph Version:
  Cleanup Policy:
    Sanitize Disks:
  Crash Collector:
    Disable:  true
  Dashboard:
  Disruption Management:
  External:
    Enable:  true
  Health Check:
    Daemon Health:
      Mon:
      Osd:
      Status:
  Labels:
    Monitoring:
      rook.io/managedBy:  ocs-external-storagecluster
  Log Collector:
  Mgr:
  Mon:
  Monitoring:
    Enabled:  true
    External Mgr Endpoints:
      Ip:                          9.46.254.57
    External Mgr Prometheus Port:  9283
  Network:
    Connections:
      Encryption:
    Multi Cluster Service:
  Security:
    Key Rotation:
      Enabled:  false
    Kms:
  Storage:
Status:
  Ceph:
    Capacity:
      Bytes Available:  3198717943808
      Bytes Total:      3221200306176
      Bytes Used:       22482362368
      Last Updated:     2023-06-08T06:24:37Z
    Fsid:               46ab550e-ca6e-11ed-af21-00000a0b13a5
    Health:             HEALTH_OK
    Last Checked:       2023-06-08T06:24:37Z
    Versions:
      Mds:
        ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable):  1
      Mgr:
        ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable):  2
      Mon:
        ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable):  5
      Osd:
        ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable):  6
      Overall:
        ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable):  16
      Rgw:
        ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable):  2
  Conditions:
    Last Heartbeat Time:   2023-06-08T06:06:12Z
    Last Transition Time:  2023-06-08T06:06:12Z
    Message:               Attempting to connect to an external Ceph cluster
    Reason:                ClusterConnecting
    Status:                True
    Type:                  Connecting
    Last Heartbeat Time:   2023-06-08T06:24:39Z
    Last Transition Time:  2023-06-08T06:06:31Z
    Message:               Cluster connected successfully
    Reason:                ClusterConnected
    Status:                True
    Type:                  Connected
  Message:                 Cluster connected successfully
  Phase:                   Connected
  State:                   Connected
  Version:
    Version:  16.2.10-138
Events:
  Type    Reason              Age   From                          Message
  ----    ------              ----  ----                          -------
  Normal  ReconcileSucceeded  18m   rook-ceph-cluster-controller  successfully configured CephCluster "openshift-storage/ocs-external-storagecluster-cephcluster"

--- Additional comment from Daniel Osypenko on 2023-06-14 08:36:55 UTC ---

A vSphere-based deployment in external mode has the same issue:

OC version:
Client Version: 4.12.0-202208031327
Kustomize Version: v4.5.4
Server Version: 4.13.0-0.nightly-2023-06-12-231643
Kubernetes Version: v1.26.5+7d22122

OCS version:
ocs-operator.v4.13.0-rhodf              OpenShift Container Storage   4.13.0-rhodf   ocs-operator.v4.12.4-rhodf              Succeeded

Cluster version:
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.13.0-0.nightly-2023-06-12-231643   True        False         16h     Cluster version is 4.13.0-0.nightly-2023-06-12-231643

Rook version:
rook: v4.12.4-0.bc1e9806c3281090b58872e303e947ff5437c078
go: go1.18.10

Ceph version:
ceph version 16.2.10-172.el8cp (00a157ecd158911ece116ae43095de793ed9f389) pacific (stable)
----------
Screen recording: https://drive.google.com/file/d/1MtVp4PoPcY-rX40G0fV9jvdNhI2Tea6C/view?usp=sharing
Web console output: http://pastebin.test.redhat.com/1102577

--- Additional comment from Eran Tamir on 2023-06-14 11:17:49 UTC ---

The information shown for external mode is not valuable to customers. We should disable it for external mode and document that the topology view is available only for internal mode.

Comment 7 Daniel Osypenko 2023-08-17 11:38:00 UTC
The content of the ODF Topology tab has been replaced with a notification that Topology is not supported for external mode deployments.
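
A minimal way to verify the fix on another cluster (a sketch; it assumes the usual openshift-storage install namespace and that the ODF UI is delivered as the odf-console dynamic plugin):

# Confirm the deployment is external mode (prints "true")
oc get storagecluster -n openshift-storage -o jsonpath='{.items[0].spec.externalStorage.enable}{"\n"}'
# Confirm the installed ODF build carries the fix (Fixed In Version: 4.14.0-110 or later)
oc get csv -n openshift-storage
# List the console plugins enabled on the cluster
oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}{"\n"}'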

Comment 10 errata-xmlrpc 2023-11-08 18:51:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6832

