Bug 2207775

Summary: [GSS] OpenShift Data Foundation odf-console plugin stuck in CLBO
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: khover
Component: management-console
Assignee: Sanjal Katiyar <skatiyar>
Status: CLOSED NOTABUG
QA Contact: Prasad Desala <tdesala>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.10
CC: hnallurv, ocs-bugs, odf-bz-bot, skatiyar
Target Milestone: ---
Flags: khover: needinfo-
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-05-24 11:38:07 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description khover 2023-05-16 20:02:12 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

After upgrade from 4.9 to 4.10:

odf-console plugin stuck in CLBO

oc logs -f odf-console-68764c5799-t86xf

2023/05/12 15:32:15 [emerg] 1#0: socket() [::]:9001 failed (97: Address family not supported by protocol)
nginx: [emerg] socket() [::]:9001 failed (97: Address family not supported by protocol)
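For context, errno 97 (EAFNOSUPPORT) is what socket(AF_INET6, ...) returns on a Linux host where IPv6 is unavailable, so the log points at nginx's [::]:9001 listen address rather than at the plugin code itself. A minimal Python sketch of the same probe (the function name and port are illustrative, not taken from the odf-console image):

```python
import socket


def ipv6_bind_supported(port: int = 0) -> bool:
    """Return True if this host can create and bind an IPv6 TCP socket.

    Mirrors the check nginx effectively performs for a [::]:<port>
    listen directive; on a node with IPv6 disabled, socket creation
    fails with EAFNOSUPPORT (errno 97), the error seen in the pod log.
    """
    try:
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.bind(("::", port))
        return True
    except OSError:
        # Treat any socket/bind failure as "unsupported" for this probe.
        return False


print(ipv6_bind_supported())
```

On the affected nodes this would print False, matching the reported nginx startup failure.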

Verified the odf console pod was running and plugin enabled.

# oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}{"\n"}'
["odf-console"]

Already tried a hard refresh of the browser and clearing the cache, as per [1].
[1] https://access.redhat.com/solutions/6824581

Also tried patching the console plugin:

$ oc patch consoleplugins.console.openshift.io/odf-console -p '{"spec": {"service": {"basePath": "/compatibility/"}}}' --type merge

Found the following solution, but not sure whether it is related.

https://access.redhat.com/solutions/4898101


ODF install timeout

Version of all relevant components (if applicable):

ODF 4.10.12

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

ODF upgrade never completes.

Is there any workaround available to the best of your knowledge?

Unknown

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

4

Is this issue reproducible?

NA

Can this issue be reproduced from the UI?

NA

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:

ODF console pod in the Running state.

Additional info:

No known issues with IPv4 communication in the clusters.

IPv6 is set to "Disabled" on all worker and master nodes.
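Since the nginx failure hinges on exactly this node setting, the state can be confirmed from the node itself. A sketch assuming the standard Linux sysctl layout (paths are the stock kernel ones, not ODF-specific):

```python
from pathlib import Path


def node_ipv6_state() -> str:
    """Report IPv6 availability as it matters to nginx's [::] bind.

    Checks the standard sysctl that setting IPv6 to "Disabled" on the
    worker/master nodes would flip. Returns one of:
    "not-present" (ipv6 module not loaded), "disabled", or "enabled".
    """
    sysctl = Path("/proc/sys/net/ipv6/conf/all/disable_ipv6")
    if not sysctl.exists():
        return "not-present"
    return "disabled" if sysctl.read_text().strip() == "1" else "enabled"


print(node_ipv6_state())
```

Anything other than "enabled" here would explain the CrashLoopBackOff, since the odf-console nginx config asks for an IPv6 listener.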

Comment 3 khover 2023-05-19 14:17:32 UTC
Hi Sanjal,

Would migrating from SDN to OVN (dual stack) solve the issue? Or, since the customer disabled IPv6, is there a way to re-enable IPv6 on the nodes via a MachineConfig (mc)?