Description of problem (please be as detailed as possible and provide log snippets):

# python3 /tmp/external-cluster-details-exporter-eu22lvu5.py --rbd-data-pool-name rbd --rgw-endpoint 10.1.115.107:8080
Traceback (most recent call last):
  File "/tmp/external-cluster-details-exporter-eu22lvu5.py", line 1705, in <module>
    rjObj.main()
  File "/tmp/external-cluster-details-exporter-eu22lvu5.py", line 1685, in main
    generated_output = self.gen_json_out()
  File "/tmp/external-cluster-details-exporter-eu22lvu5.py", line 1460, in gen_json_out
    self._gen_output_map()
  File "/tmp/external-cluster-details-exporter-eu22lvu5.py", line 1442, in _gen_output_map
    err = self.validate_rgw_endpoint(info_cap_supported)
  File "/tmp/external-cluster-details-exporter-eu22lvu5.py", line 1317, in validate_rgw_endpoint
    base_url, verify, err = self.endpoint_dial(rgw_endpoint, cert=cert)
ValueError: too many values to unpack (expected 3)

Version of all relevant components (if applicable):
quay.io/rhceph-dev/ocs-registry:4.11.8-1

# ceph --version
ceph version 16.2.8-85.el8cp (0bdc6db9a80af40dd496b05674a938d406a9f6f5) pacific (stable) - i.e. RHCS 5.2

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes, cannot deploy in external mode.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Haven't tried a fresh deployment, but rerunning the script on the node still shows the same failure.

Can this issue be reproduced from the UI?
Not relevant

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Deploy ODF 4.11 with the latest build mentioned above
2. Run the external cluster details exporter script on an RHCS 5.2 cluster
3. Observe the failure

Actual results:
ValueError: too many values to unpack (expected 3)

Expected results:
The external cluster details output is generated without error.

Additional info:
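For context, the traceback points at a tuple-arity mismatch: validate_rgw_endpoint unpacks exactly three values from endpoint_dial, while the helper in this build apparently returns a different number. Below is a minimal, self-contained sketch of that failure mode and a defensive unpacking pattern; the helper body and the extra return value are hypothetical and only illustrate the arity change, they are not the exporter's actual code.

# Hypothetical helper: a newer revision returns four values instead of three
# (the fourth field is made up purely to illustrate the arity change).
def endpoint_dial(endpoint, cert=None):
    base_url = "http://" + endpoint
    verify = cert is not None
    return base_url, verify, "", "v2"   # four values

# Old-style caller, as seen in the traceback: expects exactly three values.
# base_url, verify, err = endpoint_dial("10.1.115.107:8080")
# -> ValueError: too many values to unpack (expected 3)

# One defensive pattern: absorb any trailing values with a starred target.
base_url, verify, err, *_ = endpoint_dial("10.1.115.107:8080")
print(base_url, verify, err)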
I'm providing devel_ack per the discussion; we should get this BZ approved first before we re-spin the build.
WAIT, the BZ is approved for 4.13 but the fix is merged in 4.11? What's the intention here?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.11.8 Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:3293