Description of problem (please be as detailed as possible and provide log snippets):
exporter python script should support IPv6, part of https://bugzilla.redhat.com/show_bug.cgi?id=2064426#c2.

Version of all relevant components (if applicable):

Does this issue impact your ability to continue to work with the product (please explain in detail what the user impact is)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Need details about what is needed to support IPv6.
@mudit, need to understand the priority of this issue. Does it need to be in 4.12?
Moving out of 4.12 since there is no clear requirement.
We are doing a GA for IPv6 support in 4.12, so ideally we should support this as well. Eran, please share your thoughts.
The exporter script is working on an IPv6 Ceph cluster:

[root@argo016 ~]# python3.6 exporter.py --rbd-data-pool-name rbd
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "argo016=[2620:52:0:880:ae1f:6bff:fe0a:1844]:6789", "maxMonId": "0", "mapping": "{}"}},
 {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "495fb7f8-aae3-11ee-ab77-ac1f6b0a1844", "mon-secret": "mon-secret"}},
 {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "client.healthchecker", "userKey": "AQAsvZtljZbPOBAAW1F2B9AOIwVWQaIJlzSmng=="}},
 {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "2620:52:0:880:ae1f:6bff:fe0a:1844", "MonitoringPort": "9283"}},
 {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "AQAsvZtl5eQ8ORAAQR8MI5uHnXAhZwom+UWYLA=="}},
 {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "AQAsvZtlT9egORAARydZt6N7On8FKsfdDIB4Gw=="}},
 {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "https://[2620:52:0:880:ae1f:6bff:fe0a:1844]:8443/"}},
 {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "rbd", "csi.storage.k8s.io/provisioner-secret-name": "rook-csi-rbd-provisioner", "csi.storage.k8s.io/controller-expand-secret-name": "rook-csi-rbd-provisioner", "csi.storage.k8s.io/node-stage-secret-name": "rook-csi-rbd-node"}}]
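For context, the main change IPv6 support typically requires in a script like this is wrapping IPv6 address literals in brackets before appending a port, as seen in the rook-ceph-mon-endpoints and rook-ceph-dashboard-link values above. A minimal sketch using the standard library ipaddress module; format_mon_endpoint is a hypothetical helper for illustration, not a function from the actual exporter script:

import ipaddress

def format_mon_endpoint(address, port):
    # Hypothetical helper, for illustration only.
    # IPv6 literals must be bracket-wrapped so the trailing ":port" stays unambiguous.
    try:
        ip = ipaddress.ip_address(address)
    except ValueError:
        # Not an IP literal (e.g. a hostname); leave it unbracketed.
        return "{}:{}".format(address, port)
    if ip.version == 6:
        return "[{}]:{}".format(address, port)
    return "{}:{}".format(address, port)

print(format_mon_endpoint("2620:52:0:880:ae1f:6bff:fe0a:1844", 6789))
# -> [2620:52:0:880:ae1f:6bff:fe0a:1844]:6789
print(format_mon_endpoint("10.0.0.1", 6789))
# -> 10.0.0.1:6789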
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:1383