Update:
=======

Tested with the scenario where the CSI users already existed with different caps.

Existing CSI users:

client.csi-cephfs-node
        key: AQDki4Nif+3xDxAACR4vbqMtCikyaeqIJs7itQ==
        caps: [mds] allow rw
        caps: [mgr] allow rw
        caps: [mon] allow r, allow command 'osd blocklist'
        caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
        key: AQDki4Nie2ZeEBAASYJjVjP3L7hKw4K8EWCfNg==
        caps: [mgr] allow rw
        caps: [mon] allow r, allow command 'osd blocklist'
        caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
        key: AQDki4Ni6rAcDxAAGCwzY+AIbOz8cUj3zID/wQ==
        caps: [mon] profile rbd, allow command 'osd blocklist'
        caps: [osd] profile rbd
client.csi-rbd-provisioner
        key: AQDki4NiH0KMDxAAuYMDwiUnbjIK7qnlPnPAcg==
        caps: [mgr] allow rw
        caps: [mon] profile rbd, allow command 'osd blocklist'
        caps: [osd] profile rbd

Then deploying ODF 4.9.7-2 fails with the error below:

2022-05-17 18:21:27  12:51:27 - MainThread - ocs_ci.deployment.helpers.external_cluster_helpers - ERROR - Failed to run /tmp/external-cluster-details-exporter-vzikiiya.py with parameters --rbd-data-pool-name rbd --rgw-endpoint 10.1.161.235:8080.
Error: Traceback (most recent call last):
2022-05-17 18:21:27   File "/tmp/external-cluster-details-exporter-vzikiiya.py", line 853, in <module>
2022-05-17 18:21:27     raise err
2022-05-17 18:21:27   File "/tmp/external-cluster-details-exporter-vzikiiya.py", line 850, in <module>
2022-05-17 18:21:27     rjObj.main()
2022-05-17 18:21:27   File "/tmp/external-cluster-details-exporter-vzikiiya.py", line 831, in main
2022-05-17 18:21:27     generated_output = self.gen_json_out()
2022-05-17 18:21:27   File "/tmp/external-cluster-details-exporter-vzikiiya.py", line 656, in gen_json_out
2022-05-17 18:21:27     self._gen_output_map()
2022-05-17 18:21:27   File "/tmp/external-cluster-details-exporter-vzikiiya.py", line 626, in _gen_output_map
2022-05-17 18:21:27     self.out_map['CSI_RBD_NODE_SECRET_SECRET'] = self.create_cephCSIKeyring_RBDNode()
2022-05-17 18:21:27   File "/tmp/external-cluster-details-exporter-vzikiiya.py", line 554, in create_cephCSIKeyring_RBDNode
2022-05-17 18:21:27     "Error: {}".format(err_msg if ret_val != 0 else self.EMPTY_OUTPUT_LIST))
2022-05-17 18:21:27 __main__.ExecutionFailureException: 'auth get-or-create client.csi-rbd-node' command failed
2022-05-17 18:21:27 Error: key for client.csi-rbd-node exists but cap mon does not match
2022-05-17 18:21:27  12:51:27 - MainThread - ocs_ci.deployment.deployment - ERROR -

It looks like the updated external script does not exist / was not updated in the 4.9.7-2 packagemanifest/CSV:

$ oc get csv -n openshift-storage | grep ocs-operator
ocs-operator.v4.9.7   OpenShift Container Storage   4.9.7   ocs-operator.v4.9.6   Succeeded
$

$ oc get csv $(oc get csv -n openshift-storage | grep ocs-operator | awk '{print $1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\.features\.ocs\.openshift\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py
$ grep -ir "check_user_exist" ceph-external-cluster-details.exporter.py
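For context on the "cap mon does not match" failure above: `ceph auth get-or-create` refuses to return a key when the entity already exists with caps that differ from the ones being requested. A minimal, illustrative Python sketch of that comparison (the function and variable names here are hypothetical, not taken from the exporter script):

```python
def caps_match(existing_caps, requested_caps):
    """True only if every requested cap string is identical to the stored one."""
    return all(existing_caps.get(svc) == cap for svc, cap in requested_caps.items())

# Caps already stored for client.csi-rbd-node (created with the 'osd blocklist' clause)
existing = {"mon": "profile rbd, allow command 'osd blocklist'", "osd": "profile rbd"}

# Caps an older exporter requests for the same user (no blocklist clause)
requested = {"mon": "profile rbd", "osd": "profile rbd"}

for svc in requested:
    if existing.get(svc) != requested[svc]:
        # Mirrors the Ceph error: "key for ... exists but cap mon does not match"
        print(f"key for client.csi-rbd-node exists but cap {svc} does not match")
```

This is why the fixed script checks for an existing user (and reuses its key) instead of unconditionally calling `auth get-or-create` with its own cap set.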
(In reply to Vijay Avuthu from comment #10)
> Update:
> =======
>
> looks like update external script is not existed/updated in 4.9.7-2 packagemanifest/csv
>
> $ oc get csv -n openshift-storage | grep ocs-operator
> ocs-operator.v4.9.7   OpenShift Container Storage   4.9.7   ocs-operator.v4.9.6   Succeeded
> $
>
> $ oc get csv $(oc get csv -n openshift-storage | grep ocs-operator | awk '{print $1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\.features\.ocs\.openshift\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py
> $ grep -ir "check_user_exist" ceph-external-cluster-details.exporter.py

That grep ran on the wrong file (note the typo in the filename). Grepping the correct file:

$ grep -ir "check_user_exist" ceph-external-cluster-details-exporter.py
    def check_user_exist(self,user):
        user_key = self.check_user_exist("client.csi-cephfs-provisioner")
        user_key = self.check_user_exist("client.csi-cephfs-node")
        user_key = self.check_user_exist("client.csi-rbd-provisioner")
        user_key = self.check_user_exist("client.csi-rbd-node")

So it failed with the correct/updated script.
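The grep shows the updated exporter gates keyring creation on a `check_user_exist` helper. Its body is not shown here; the sketch below is a hypothetical illustration of what such a helper plausibly does (query `ceph auth get` and return the existing key so the stored caps are left untouched). The `run_command` wrapper and `fake_runner` stub are invented for this example:

```python
import json

def check_user_exist(user, run_command):
    """Return the user's existing key, or "" if the entity does not exist.

    Hypothetical sketch: the real exporter runs Ceph commands through its
    own wrapper; run_command stands in for that wrapper here.
    """
    ret_val, out, _err = run_command(["ceph", "auth", "get", user, "-f", "json"])
    if ret_val != 0 or not out:
        return ""
    entries = json.loads(out)
    return entries[0].get("key", "") if entries else ""

# Stub runner simulating a cluster where only client.csi-rbd-node exists
def fake_runner(cmd):
    user = cmd[3]
    if user == "client.csi-rbd-node":
        return 0, '[{"entity": "client.csi-rbd-node", "key": "AQDFAKEKEY=="}]', ""
    return 2, "", "ENOENT"

print(check_user_exist("client.csi-rbd-node", fake_runner))        # existing key
print(check_user_exist("client.csi-rbd-provisioner", fake_runner))  # ""
```

With this pattern, a pre-existing user short-circuits the `auth get-or-create` call entirely, which is what avoids the cap-mismatch failure in the scenario tested above.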
(In reply to Vijay Avuthu from comment #10)
> Update:
> =======
> Tested with scenario of csi users already existed with different caps
>
> existing csi users:
>
> client.csi-cephfs-node
>         key: AQDki4Nif+3xDxAACR4vbqMtCikyaeqIJs7itQ==
>         caps: [mds] allow rw
>         caps: [mgr] allow rw
>         caps: [mon] allow r, allow command 'osd blocklist'
>         caps: [osd] allow rw tag cephfs *=*
> client.csi-cephfs-provisioner
>         key: AQDki4Nie2ZeEBAASYJjVjP3L7hKw4K8EWCfNg==
>         caps: [mgr] allow rw
>         caps: [mon] allow r, allow command 'osd blocklist'
>         caps: [osd] allow rw tag cephfs metadata=*
> client.csi-rbd-node
>         key: AQDki4Ni6rAcDxAAGCwzY+AIbOz8cUj3zID/wQ==
>         caps: [mon] profile rbd, allow command 'osd blocklist'
>         caps: [osd] profile rbd
> client.csi-rbd-provisioner
>         key: AQDki4NiH0KMDxAAuYMDwiUnbjIK7qnlPnPAcg==
>         caps: [mgr] allow rw
>         caps: [mon] profile rbd, allow command 'osd blocklist'
>         caps: [osd] profile rbd
>
> then deploy ODF 4.9.7-2 and it fails with below error
>
> 2022-05-17 18:21:27 12:51:27 - MainThread - ocs_ci.deployment.helpers.external_cluster_helpers - ERROR - Failed to run /tmp/external-cluster-details-exporter-vzikiiya.py with parameters --rbd-data-pool-name rbd --rgw-endpoint 10.1.161.235:8080.
> Error: Traceback (most recent call last):
> 2022-05-17 18:21:27   File "/tmp/external-cluster-details-exporter-vzikiiya.py", line 853, in <module>
> 2022-05-17 18:21:27     raise err
> 2022-05-17 18:21:27   File "/tmp/external-cluster-details-exporter-vzikiiya.py", line 850, in <module>
> 2022-05-17 18:21:27     rjObj.main()
> 2022-05-17 18:21:27   File "/tmp/external-cluster-details-exporter-vzikiiya.py", line 831, in main
> 2022-05-17 18:21:27     generated_output = self.gen_json_out()
> 2022-05-17 18:21:27   File "/tmp/external-cluster-details-exporter-vzikiiya.py", line 656, in gen_json_out
> 2022-05-17 18:21:27     self._gen_output_map()
> 2022-05-17 18:21:27   File "/tmp/external-cluster-details-exporter-vzikiiya.py", line 626, in _gen_output_map
> 2022-05-17 18:21:27     self.out_map['CSI_RBD_NODE_SECRET_SECRET'] = self.create_cephCSIKeyring_RBDNode()
> 2022-05-17 18:21:27   File "/tmp/external-cluster-details-exporter-vzikiiya.py", line 554, in create_cephCSIKeyring_RBDNode
> 2022-05-17 18:21:27     "Error: {}".format(err_msg if ret_val != 0 else self.EMPTY_OUTPUT_LIST))
> 2022-05-17 18:21:27 __main__.ExecutionFailureException: 'auth get-or-create client.csi-rbd-node' command failed
> 2022-05-17 18:21:27 Error: key for client.csi-rbd-node exists but cap mon does not match
>
> looks like update external script is not existed/updated in 4.9.7-2 packagemanifest/csv
>
> $ oc get csv -n openshift-storage | grep ocs-operator
> ocs-operator.v4.9.7   OpenShift Container Storage   4.9.7   ocs-operator.v4.9.6   Succeeded
> $
>
> $ oc get csv $(oc get csv -n openshift-storage | grep ocs-operator | awk '{print $1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\.features\.ocs\.openshift\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py
> $ grep -ir "check_user_exist" ceph-external-cluster-details.exporter.py

There is an issue in ocs-ci in getting the exporter script. Changing status back to ON_QA.
Update:
========

Verified the scenarios below (ocs-registry:4.9.7-2):

1. Direct deployment of ODF 4.9 (external cluster that doesn't have any CSI users).

Verified that the CSI users are created:

client.csi-cephfs-node
        key: AQBcO4Nia6OdORAApg57j2vtZcb7Otd2g1bA2Q==
        caps: [mds] allow rw
        caps: [mgr] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
        key: AQBcO4NikILzORAAH1KJW4h0krD4zTffY4jxyA==
        caps: [mgr] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
        key: AQBcO4Nii8bOOBAAWYM4o0eTsNLhymisVBjm8g==
        caps: [mon] profile rbd
        caps: [osd] profile rbd
client.csi-rbd-provisioner
        key: AQBcO4NiHWw+ORAA9p03lc9TZiT4pzcq0vy6BQ==
        caps: [mgr] allow rw
        caps: [mon] profile rbd
        caps: [osd] profile rbd

https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/12666/consoleFull

RESULT: SUCCESS

2. Created the users with caps first (from the ODF 4.10 external script):

client.csi-cephfs-node
        key: AQDki4Nif+3xDxAACR4vbqMtCikyaeqIJs7itQ==
        caps: [mds] allow rw
        caps: [mgr] allow rw
        caps: [mon] allow r, allow command 'osd blocklist'
        caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
        key: AQDki4Nie2ZeEBAASYJjVjP3L7hKw4K8EWCfNg==
        caps: [mgr] allow rw
        caps: [mon] allow r, allow command 'osd blocklist'
        caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
        key: AQDki4Ni6rAcDxAAGCwzY+AIbOz8cUj3zID/wQ==
        caps: [mon] profile rbd, allow command 'osd blocklist'
        caps: [osd] profile rbd
client.csi-rbd-provisioner
        key: AQDki4NiH0KMDxAAuYMDwiUnbjIK7qnlPnPAcg==
        caps: [mgr] allow rw
        caps: [mon] profile rbd, allow command 'osd blocklist'
        caps: [osd] profile rbd

then ran a normal ODF 4.9 deployment:

https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/12719/console

CSI users after deployment:

client.csi-cephfs-node
        key: AQDki4Nif+3xDxAACR4vbqMtCikyaeqIJs7itQ==
        caps: [mds] allow rw
        caps: [mgr] allow rw
        caps: [mon] allow r, allow command 'osd blocklist'
        caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
        key: AQDki4Nie2ZeEBAASYJjVjP3L7hKw4K8EWCfNg==
        caps: [mgr] allow rw
        caps: [mon] allow r, allow command 'osd blocklist'
        caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
        key: AQDki4Ni6rAcDxAAGCwzY+AIbOz8cUj3zID/wQ==
        caps: [mon] profile rbd, allow command 'osd blocklist'
        caps: [osd] profile rbd
client.csi-rbd-provisioner
        key: AQDki4NiH0KMDxAAuYMDwiUnbjIK7qnlPnPAcg==
        caps: [mgr] allow rw
        caps: [mon] profile rbd, allow command 'osd blocklist'
        caps: [osd] profile rbd

RESULT: SUCCESS

3. Upgrade from 4.9.7-2 to 4.10.2-3 (used the cluster from scenario 2).

https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/12740/console

RESULT: SUCCESS

4. Upgrade from 4.8.10 live to ocs-registry:4.9.7-2.

https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/12754/consoleFull

RESULT: SUCCESS

Marking as Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.9.7 Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:4710