Description of problem
======================
The UI for uploading JSON data for an external cluster connection performs strict, exact-match checks on the CSI client names (client.csi-cephfs-node, etc.). With restricted auth permissions these names are now variable, so instead of an exact match the UI should check whether the string contains client.csi-cephfs-node. For example, if the client is client.csi-cephfs-node-vavuthupr10278-cephfs, the check should be client.csi-cephfs-node-vavuthupr10278-cephfs.Contains(client.csi-cephfs-node). See the sketch under Additional info below.

Version of all relevant components
==================================

Does this issue impact your ability to continue to work with the product?
=========================================================================

Is there any workaround available to the best of your knowledge?
================================================================

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
========================================

Is this issue reproducible?
============================

Can this issue be reproduced from the UI?
=====================================

If this is a regression, please provide more details to justify this
====================================================================

Steps to Reproduce
==================
1.
2.
3.

Actual results
==============

Expected results
================

Additional info
===============
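A minimal sketch of the relaxed check, assuming the client names arrive as plain strings from the uploaded JSON; the prefix list and the isSupportedCsiClient helper are illustrative, not the actual odf-console code:

```typescript
// Known CSI client name prefixes. With restricted auth permissions a
// cluster-specific suffix is appended, e.g.
// "client.csi-cephfs-node-vavuthupr10278-cephfs".
// (The exact set of prefixes here is illustrative.)
const CSI_CLIENT_PREFIXES: string[] = [
  'client.csi-cephfs-node',
  'client.csi-cephfs-provisioner',
  'client.csi-rbd-node',
  'client.csi-rbd-provisioner',
];

// Accept any client whose name contains a known prefix instead of
// requiring an exact match.
const isSupportedCsiClient = (clientName: string): boolean =>
  CSI_CLIENT_PREFIXES.some((prefix) => clientName.includes(prefix));

// Both the default and the restricted-auth names pass:
console.log(isSupportedCsiClient('client.csi-cephfs-node'));                       // true
console.log(isSupportedCsiClient('client.csi-cephfs-node-vavuthupr10278-cephfs')); // true
```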
Okay, will do so. I believe Gowtham has already started working on it.
Providing QA ack based on comment 10. Testing should include import of an external cluster.
Verified with build ocs-registry:4.11.0-113, deployed with restricted auths enabled here:
https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/14657/consoleFull

> Go to the UI, Workloads ---> Secrets ---> Actions ---> Edit Secret, and update the secret from the output of the script below:

# python3 /tmp/external-cluster-details-exporter-hdkjadkg.py --rbd-data-pool-name rbd --rgw-endpoint 10.x.xxx.xx7:8080 --cluster-name vavuthu2-1996829 --cephfs-filesystem-name cephfs

> No issue observed.

> Again go to the UI, Workloads ---> Secrets ---> Actions ---> Edit Secret, and update the secret from the output of the script below:

# python3 /tmp/external-cluster-details-exporter-hdkjadkg.py --rbd-data-pool-name rbd --rgw-endpoint 10.x.xxx.xx7:8080 --cluster-name vavuthu2-1996829 --cephfs-filesystem-name cephfs --restricted-auth-permission true

> No issues observed.

> Check the health:

$ oc get cephobjectstore
NAME                                          PHASE
ocs-external-storagecluster-cephobjectstore   Connected

$ oc get storagecluster
NAME                          AGE    PHASE   EXTERNAL   CREATED AT             VERSION
ocs-external-storagecluster   3d4h   Ready   true       2022-07-15T06:54:35Z   4.11.0

$ oc get pods
NAME                                               READY   STATUS    RESTARTS       AGE
csi-addons-controller-manager-6bc4944bfb-pw466     2/2     Running   0              3d4h
csi-cephfsplugin-g7pq9                             3/3     Running   0              3d4h
csi-cephfsplugin-hhhf6                             3/3     Running   0              3d4h
csi-cephfsplugin-l7cfx                             3/3     Running   0              3d4h
csi-cephfsplugin-provisioner-85cb6589cc-2f6zr      6/6     Running   0              3d4h
csi-cephfsplugin-provisioner-85cb6589cc-jsglw      6/6     Running   1 (3d3h ago)   3d4h
csi-rbdplugin-6fxsw                                4/4     Running   0              3d4h
csi-rbdplugin-kk5vr                                4/4     Running   0              3d4h
csi-rbdplugin-provisioner-76d6c94989-p967j         7/7     Running   0              3d4h
csi-rbdplugin-provisioner-76d6c94989-vmjhm         7/7     Running   2 (3d3h ago)   3d4h
csi-rbdplugin-snk5j                                4/4     Running   0              3d4h
noobaa-core-0                                      1/1     Running   0              3d4h
noobaa-db-pg-0                                     1/1     Running   0              3d4h
noobaa-endpoint-85d9766c8-ssdtm                    1/1     Running   0              3d4h
noobaa-operator-6bd45d8bcb-m6pdl                   1/1     Running   1 (3d4h ago)   3d4h
ocs-metrics-exporter-778cdc4cb6-mh9w5              1/1     Running   0              3d4h
ocs-operator-f9f56c775-pf6xd                       1/1     Running   0              3d4h
odf-console-7788bdf946-4shbk                       1/1     Running   0              3d4h
odf-operator-controller-manager-6fc9794b76-ksznk   2/2     Running   0              3d4h
rook-ceph-operator-6c6879f9fb-76dlg                1/1     Running   0              3d4h
rook-ceph-tools-external-d69d7d79d-h49mx           1/1     Running   0              166m

$ oc rsh rook-ceph-tools-external-d69d7d79d-h49mx ceph health
HEALTH_OK
$

Moving to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6156