Bug 2092217 - [External] UI for uploading JSON data for external cluster connection has some strict checks
Summary: [External] UI for uploading JSON data for external cluster connection has some...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.11.0
Assignee: gowtham
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-01 06:16 UTC by Parth Arora
Modified: 2023-08-09 17:00 UTC
CC List: 10 users

Fixed In Version: 4.11.0-96
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-24 13:54:12 UTC
Embargoed:


Links
Github red-hat-storage/odf-console pull 274 (open): Fix uploading JSON data for external cluster connection strict checking issue (last updated 2022-06-13 06:08:49 UTC)
Github red-hat-storage/odf-console pull 275 (open): Bug 2092217: [release-4.11] Fix uploading JSON data for external cluster connection strict checking issue (last updated 2022-06-13 06:14:53 UTC)
Github red-hat-storage/odf-console pull 276 (open): Bug 2092217: [release-4.11-compatibility] Fix uploading JSON data for external cluster connection strict checking issue (last updated 2022-06-13 06:29:20 UTC)
Red Hat Product Errata RHSA-2022:6156 (last updated 2022-08-24 13:54:26 UTC)

Description Parth Arora 2022-06-01 06:16:57 UTC
Description of problem
======================

The UI for uploading JSON data for an external cluster connection performs strict (exact-match) checks on the CSI client names (client.csi-cephfs-node, etc.). From now on, when restricted auth permissions are used, these values can be variable.

So instead of a strict check, we should check whether the data/string contains client.csi-cephfs-node.

For example, if the client is client.csi-cephfs-node-vavuthupr10278-cephfs, the check should be "client.csi-cephfs-node-vavuthupr10278-cephfs".contains("client.csi-cephfs-node").
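
A rough sketch of the relaxed check, in TypeScript since odf-console is a TypeScript project (the function name and prefix list below are illustrative, not the actual patch):

// Known CSI client name prefixes expected in the uploaded JSON.
// This list is an assumption for illustration.
const CSI_CLIENT_PREFIXES = [
  "client.csi-cephfs-node",
  "client.csi-cephfs-provisioner",
  "client.csi-rbd-node",
  "client.csi-rbd-provisioner",
];

// Accept any client name that contains a known prefix, so restricted-auth
// names like "client.csi-cephfs-node-vavuthupr10278-cephfs" pass validation
// instead of failing a strict equality comparison.
const isKnownCsiClient = (clientName: string): boolean =>
  CSI_CLIENT_PREFIXES.some((prefix) => clientName.includes(prefix));

isKnownCsiClient("client.csi-cephfs-node-vavuthupr10278-cephfs"); // true
isKnownCsiClient("client.csi-cephfs-node");                       // true
isKnownCsiClient("client.unknown");                               // false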

Version of all relevant components
==================================


Does this issue impact your ability to continue to work with the product?
=========================================================================


Is there any workaround available to the best of your knowledge?
================================================================

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
========================================

Is this issue reproducible?
===========================

Can this issue be reproduced from the UI?
=========================================


If this is a regression, please provide more details to justify this
====================================================================


Steps to Reproduce
==================

1.
2.
3.


Actual results
==============


Expected results
================

Additional info
===============

Comment 5 Parth Arora 2022-06-06 07:13:23 UTC
Okay, will do so. I believe Gowtham has already started working on it.

Comment 11 Martin Bukatovic 2022-06-07 14:07:56 UTC
Providing QA ack, based on comment 10. Testing should include an import of an external cluster.

Comment 16 Vijay Avuthu 2022-07-18 12:01:00 UTC
Verified with build: ocs-registry:4.11.0-113


Deployed with restricted auths enabled here: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/14657/consoleFull

> Go to the UI: Workloads -> Secrets -> Actions -> Edit Secret

and update the secret with the output of the script below:

# python3 /tmp/external-cluster-details-exporter-hdkjadkg.py --rbd-data-pool-name rbd --rgw-endpoint 10.x.xxx.xx7:8080 --cluster-name vavuthu2-1996829 --cephfs-filesystem-name cephfs


> No issue observed.

> Again, go to the UI: Workloads -> Secrets -> Actions -> Edit Secret

and update the secret with the output of the script below (this time with restricted auth enabled):

# python3 /tmp/external-cluster-details-exporter-hdkjadkg.py --rbd-data-pool-name rbd --rgw-endpoint 10.x.xxx.xx7:8080 --cluster-name vavuthu2-1996829 --cephfs-filesystem-name cephfs --restricted-auth-permission true

> No issues observed
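
For context, with --restricted-auth-permission the exporter suffixes the CSI client names with the cluster name, which is exactly the variable-name case the relaxed check has to accept. A hypothetical fragment of the uploaded JSON (field names and structure are assumptions for illustration, not the exact exporter schema):

// Hypothetical entry from the exporter's JSON output when restricted auth
// is enabled; the "-vavuthu2-1996829" suffix comes from --cluster-name.
// Field names are assumed for illustration.
const sampleEntry = {
  name: "rook-csi-cephfs-node-vavuthu2-1996829",
  kind: "Secret",
  data: { userID: "csi-cephfs-node-vavuthu2-1996829", userKey: "<key>" },
};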

> check the health

$ oc get cephobjectstore
NAME                                          PHASE
ocs-external-storagecluster-cephobjectstore   Connected
$ oc get storagecluster
NAME                          AGE    PHASE   EXTERNAL   CREATED AT             VERSION
ocs-external-storagecluster   3d4h   Ready   true       2022-07-15T06:54:35Z   4.11.0

$ oc get pods
NAME                                               READY   STATUS    RESTARTS       AGE
csi-addons-controller-manager-6bc4944bfb-pw466     2/2     Running   0              3d4h
csi-cephfsplugin-g7pq9                             3/3     Running   0              3d4h
csi-cephfsplugin-hhhf6                             3/3     Running   0              3d4h
csi-cephfsplugin-l7cfx                             3/3     Running   0              3d4h
csi-cephfsplugin-provisioner-85cb6589cc-2f6zr      6/6     Running   0              3d4h
csi-cephfsplugin-provisioner-85cb6589cc-jsglw      6/6     Running   1 (3d3h ago)   3d4h
csi-rbdplugin-6fxsw                                4/4     Running   0              3d4h
csi-rbdplugin-kk5vr                                4/4     Running   0              3d4h
csi-rbdplugin-provisioner-76d6c94989-p967j         7/7     Running   0              3d4h
csi-rbdplugin-provisioner-76d6c94989-vmjhm         7/7     Running   2 (3d3h ago)   3d4h
csi-rbdplugin-snk5j                                4/4     Running   0              3d4h
noobaa-core-0                                      1/1     Running   0              3d4h
noobaa-db-pg-0                                     1/1     Running   0              3d4h
noobaa-endpoint-85d9766c8-ssdtm                    1/1     Running   0              3d4h
noobaa-operator-6bd45d8bcb-m6pdl                   1/1     Running   1 (3d4h ago)   3d4h
ocs-metrics-exporter-778cdc4cb6-mh9w5              1/1     Running   0              3d4h
ocs-operator-f9f56c775-pf6xd                       1/1     Running   0              3d4h
odf-console-7788bdf946-4shbk                       1/1     Running   0              3d4h
odf-operator-controller-manager-6fc9794b76-ksznk   2/2     Running   0              3d4h
rook-ceph-operator-6c6879f9fb-76dlg                1/1     Running   0              3d4h
rook-ceph-tools-external-d69d7d79d-h49mx           1/1     Running   0              166m
$ oc rsh rook-ceph-tools-external-d69d7d79d-h49mx ceph health
HEALTH_OK
$ 

Moving to Verified.

Comment 18 errata-xmlrpc 2022-08-24 13:54:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156

