Bug 2112698

Summary: ODF web console under Storage is missing for IPv6 single-stack deployments
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation    Reporter: Kaspars Skels <kskels>
Component: management-console    Assignee: Bipul Adhikari <badhikar>
Status: CLOSED CURRENTRELEASE QA Contact: Vijay Avuthu <vavuthu>
Severity: medium Docs Contact:
Priority: unspecified    
Version: 4.10    CC: abose, amagrawa, badhikar, jefbrown, kramdoss, mbukatov, muagarwa, nthomas, ocs-bugs, odf-bz-bot, openshift-bugzilla-robot, rsussman, shberry
Target Milestone: ---   
Target Release: ODF 4.12.0   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: 4.12.0-74 Doc Type: Bug Fix
Doc Text:
IPv6 is now supported by ODF Console. Changes were made to the Nginx configuration to support IPv6 platforms.
Story Points: ---
Clone Of:
Clones: 2130900 (view as bug list)    Environment:
Last Closed: 2023-02-08 14:06:28 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
Embargoed:
Bug Depends On:    
Bug Blocks: 2120489, 2130900, 2130910    

Description Kaspars Skels 2022-07-31 17:58:19 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
Here are logs from the openshift-console pod:
(...)
E0731 15:40:58.587870       1 handlers.go:165] GET request for "odf-console" plugin failed: Get "https://odf-console-service.openshift-storage.svc.cluster.local:9001/plugin-manifest.json": dial tcp [fd02::d35a]:9001: connect: connection refused
(...)

Checking odf-console
[kskels@kskels ~]$ oc exec -it odf-console-69dffdffd5-278fb -- bash
bash-4.4$ curl https://localhost:9001
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.haxx.se/docs/sslcerts.html

bash-4.4$ curl https://[::1]:9001
curl: (7) Failed to connect to ::1 port 9001: Connection refused

As shown above, connections are not accepted on the IPv6 endpoint.
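The curl probes above can also be run programmatically. A minimal sketch (the function name and structure are illustrative, not part of the bug report) that checks whether a TCP port accepts IPv6 connections:

```python
import socket

def accepts_ipv6(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connect over an IPv6 socket succeeds.

    An AF_INET6 socket is requested explicitly so that an IPv4 fallback
    cannot mask an IPv6-only failure like the one reported here.
    """
    try:
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            s.connect((host, port))
            return True
    except OSError:
        # Covers "connection refused" as well as timeouts and
        # unreachable-network errors.
        return False
```

Against the pod above, `accepts_ipv6("::1", 9001)` would return False, matching the `curl: (7) Failed to connect` output.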

Version of all relevant components (if applicable):
OCP 4.10.22, ODF 4.10.5


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
ODF itself works well; the only issue is the missing UI in the console.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1


Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Not sure; it is part of the installation.

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Install OCP 4.10
2. Install ODF and configure for IPv6
3. Access web UI


Actual results:


Expected results:


Additional info:
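Per the Doc Text, the fix changed the Nginx configuration to support IPv6 platforms. A hypothetical sketch of what a dual-stack server block looks like (the certificate paths are illustrative assumptions, not taken from the actual odf-console image; only the port comes from the logs above):

```nginx
server {
    # Listening only on the IPv4 wildcard causes "connection refused"
    # on [::1], as seen in the curl output above. Adding an IPv6
    # listen directive makes nginx accept connections on both stacks.
    listen 9001 ssl;
    listen [::]:9001 ssl;

    # Illustrative serving-cert paths (assumption).
    ssl_certificate     /var/serving-cert/tls.crt;
    ssl_certificate_key /var/serving-cert/tls.key;
}
```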

Comment 2 Martin Bukatovic 2022-08-03 13:10:05 UTC
Status of this bug should be aligned with https://issues.redhat.com/browse/RHSTOR-3540

Comment 4 Mudit Agarwal 2022-08-23 04:53:03 UTC
*** Bug 2120489 has been marked as a duplicate of this bug. ***

Comment 8 Vijay Avuthu 2022-12-12 06:06:40 UTC
Update:
=========

Installed ocs-registry:4.12.0-125 and 4.12.0-0.nightly-2022-12-04-160656

used ocs-ci to install ODF

07:24:17 - MainThread - ocs_ci.ocs.utils - INFO  - Enabling console plugin
07:24:17 - MainThread - ocs_ci.utility.utils - INFO  - Executing command: oc patch console.operator cluster -n openshift-storage --type json -p '[{"op": "add", "path": "/spec/plugins", "value": ["odf-console"]}]'
07:24:17 - MainThread - ocs_ci.deployment.deployment - INFO  - Done creating rook resources, waiting for HEALTH_OK
07:24:17 - MainThread - ocs_ci.utility.utils - INFO  - Executing command: oc wait --for condition=ready pod -l app=rook-ceph-tools -n openshift-storage --timeout=300s
07:24:17 - MainThread - ocs_ci.utility.utils - INFO  - Executing command: oc -n openshift-storage get pod -l 'app=rook-ceph-tools' -o jsonpath='{.items[0].metadata.name}'
07:24:17 - MainThread - ocs_ci.utility.utils - INFO  - Executing command: oc -n openshift-storage exec rook-ceph-tools-775d56dd7-c6zqf -- ceph health
07:24:18 - MainThread - ocs_ci.utility.utils - INFO  - Ceph cluster health is HEALTH_OK.

> console pod is running

# oc get pods -o wide | grep -i console
odf-console-c889845c6-7gbmn                                       1/1     Running     0               5d22h   fd01:0:0:4::16    e26-hxx-7xxxd   <none>           <none>
#

> Under Operators ---> Installed Operators ---> OpenShift Data Foundation ---> Details, the Console plugin shows as Enabled

Comment 11 Vijay Avuthu 2022-12-12 09:05:08 UTC
Changing the status, as it is verified in comment #8.