Bug 2207775 - [GSS] OpenShift Data Foundation odf-console plugin stuck in CLBO
Summary: [GSS] OpenShift Data Foundation odf-console plugin stuck in CLBO
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: management-console
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Sanjal Katiyar
QA Contact: Prasad Desala
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-05-16 20:02 UTC by khover
Modified: 2023-08-09 16:46 UTC (History)
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-05-24 11:38:07 UTC
Embargoed:
khover: needinfo-



Description khover 2023-05-16 20:02:12 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

After upgrade from 4.9 to 4.10:

odf-console plugin stuck in CLBO

oc logs -f odf-console-68764c5799-t86xf
2023/05/12 15:32:15 [emerg] 1#0: socket() [::]:9001 failed (97: Address family not supported by protocol)
nginx: [emerg] socket() [::]:9001 failed (97: Address family not supported by protocol)
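For context on the error (my note, not from the case data): errno 97 on Linux is EAFNOSUPPORT, which the kernel returns when the IPv6 address family is unavailable, so nginx's socket() call fails before it can even bind [::]:9001. A minimal sketch that reproduces the same failure mode on any Linux host where IPv6 is fully disabled:

```python
import errno
import socket

# errno 97 on Linux is EAFNOSUPPORT ("Address family not supported by
# protocol"). nginx hits it while creating the IPv6 listen socket for
# [::]:9001 on a node where IPv6 is disabled in the kernel.
print(errno.EAFNOSUPPORT)  # 97 on Linux

try:
    # Same syscall nginx issues for a [::] listener: socket(AF_INET6, ...)
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.close()
    print("AF_INET6 available: IPv6 sockets work on this host")
except OSError as e:
    if e.errno == errno.EAFNOSUPPORT:
        print("AF_INET6 unavailable: same failure as the odf-console pod")
    else:
        raise
```

On a node with IPv6 enabled the socket is created successfully; on a node where IPv6 is disabled at the kernel level, the except branch fires with the same errno seen in the pod log.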

Verified that the odf-console pod was running and the plugin was enabled.

# oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}{"\n"}'
["odf-console"]

Already tried a hard refresh of the browser and clearing its cache as per [1].
[1] https://access.redhat.com/solutions/6824581

Also tried the following patch:

$ oc patch consoleplugins.console.openshift.io/odf-console -p '{"spec": {"service": {"basePath": "/compatibility/"}}}' --type merge

Found the following solution, but not sure if it is related:

https://access.redhat.com/solutions/4898101


ODF install timeout

Version of all relevant components (if applicable):

ODF 4.10.12

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

ODF upgrade never completes.

Is there any workaround available to the best of your knowledge?

Unknown

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

4

Is this issue reproducible?

NA

Can this issue be reproduced from the UI?

NA

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:

ODF console pod in Running state.

Additional info:

No known issues with IPv4 communication in the clusters.

IPv6 is set to "Disabled" on all worker and master nodes.

Comment 3 khover 2023-05-19 14:17:32 UTC
Hi Sanjal,

Would migrating from SDN to OVN (dual stack) solve the issue? Or, since the customer disabled IPv6, is there a way to re-enable IPv6 on the nodes via a MachineConfig?
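For reference, if IPv6 was disabled via sysctl on the nodes, one way to re-enable it through the Machine Config Operator might be a MachineConfig that drops a sysctl.d file. This is only a sketch: the object name is hypothetical, the sysctl keys are assumptions, and whether it applies at all depends on how IPv6 was disabled (sysctl vs. kernel argument):

```yaml
# Sketch only -- name and sysctl keys are assumptions, not from the case.
# If IPv6 was disabled with a kernel argument (e.g. ipv6.disable=1), that
# argument would need to be removed instead; this file has no effect then.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-enable-ipv6    # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/sysctl.d/99-enable-ipv6.conf
          mode: 420
          contents:
            # decodes to:
            #   net.ipv6.conf.all.disable_ipv6=0
            #   net.ipv6.conf.default.disable_ipv6=0
            source: data:,net.ipv6.conf.all.disable_ipv6%3D0%0Anet.ipv6.conf.default.disable_ipv6%3D0%0A
```

A matching MachineConfig with the `master` role label would be needed for the control-plane nodes; applying either triggers a rolling reboot of the affected machine pool.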

