Bug 2036592 - [GSS] ODF installation on ipv6 network isn't successful
Summary: [GSS] ODF installation on ipv6 network isn't successful
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.9
Hardware: All
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Rohan Gupta
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-01-03 10:23 UTC by Priya Pandey
Modified: 2023-08-09 17:00 UTC
CC List: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-21 09:15:50 UTC
Embargoed:


Attachments

Description Priya Pandey 2022-01-03 10:23:51 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

- ODF installation is not successful on the IPv6 network.


Version of all relevant components (if applicable):

v4.9

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

The ODF cluster is not healthy or operational.

Is there any workaround available to the best of your knowledge?

N/A

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?



Is this issue reproducible?

No

Can this issue be reproduced from the UI?
No

If this is a regression, please provide more details to justify this:

No

Steps to Reproduce:
N/A

Actual results:

The ODF cluster is not healthy or operational.

Expected results:

The ODF cluster should be running and healthy.

Additional info:
In the next comments

Comment 4 Sébastien Han 2022-01-04 15:14:04 UTC
Moving to ocs-op since the revert of the IpFamily happened on the StorageCluster object, which is managed by ocs-op.
Thanks!
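
For context, ocs-op derives the CephCluster it manages from the StorageCluster, so whatever ipFamily value is stored on the StorageCluster is what Rook ends up reconciling. A rough, illustrative Go sketch of that propagation is below; the type and function names are stand-ins, not the actual ocs-operator source.

    package main

    import "fmt"

    // Illustrative stand-ins only; the real types live in the ocs-operator and
    // Rook API packages.
    type NetworkSpec struct {
    	IPFamily string `json:"ipFamily,omitempty"`
    }

    type StorageClusterSpec struct {
    	Network *NetworkSpec `json:"network,omitempty"`
    }

    type CephClusterSpec struct {
    	Network NetworkSpec `json:"network"`
    }

    // newCephClusterSpec sketches the propagation step: the CephCluster spec is
    // derived from the StorageCluster, so an ipFamily reverted to IPv4 on the
    // StorageCluster would be copied straight into the CephCluster.
    func newCephClusterSpec(sc StorageClusterSpec) CephClusterSpec {
    	out := CephClusterSpec{}
    	if sc.Network != nil {
    		out.Network = *sc.Network
    	}
    	return out
    }

    func main() {
    	sc := StorageClusterSpec{Network: &NetworkSpec{IPFamily: "IPv6"}}
    	fmt.Println("CephCluster ipFamily:", newCephClusterSpec(sc).Network.IPFamily)
    }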

Comment 6 Sébastien Han 2022-01-11 15:11:51 UTC
We are not looking in the right direction; we should look at the ocs-operator logs, since the StorageCluster CR was updated (probably by the API server?) to IPv4.
The logs are missing from the latest must-gather attached (323270a6f16064cd1e8e3442dda286007bacee36c3ca24a100b61f3af00a01f6); I only see the YAML spec definition.

However, we need the logs to confirm this. I also looked at the types, and it's just an enum, so the API server should not change it.
I'm not super familiar with the ocs-op code, so I will mainly leave this to the ocs-op people and assist wherever I can.
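
For reference, the ipFamily value follows the usual Kubernetes string-enum pattern; a minimal, self-contained Go sketch is below (the type and constant names are assumed to resemble the Rook v1 API, not copied from it). The API server only validates such a value against its allowed constants, it never rewrites it, which is why a flip from IPv6 back to IPv4 points at a client or controller updating the object.

    package main

    import "fmt"

    // IPFamilyType mirrors the usual Kubernetes-style string enum; the real
    // definition is assumed to live in the Rook v1 API package.
    type IPFamilyType string

    const (
    	IPv4 IPFamilyType = "IPv4"
    	IPv6 IPFamilyType = "IPv6"
    )

    func main() {
    	// A valid value passes validation and is stored as-is, so a later change
    	// back to IPv4 would have to come from whoever updates the object.
    	requested := IPv6
    	fmt.Println("requested ipFamily:", requested)
    }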

Thanks.

Comment 8 Sébastien Han 2022-02-03 11:26:20 UTC
Hi Priya,

I don't know.

Comment 9 Sahina Bose 2022-02-04 06:10:57 UTC
@rohgupta Can you take a look and update bz?

Comment 10 Rohan Gupta 2022-02-04 07:42:19 UTC
@prpandey can you please share the must-gather logs?

Comment 11 Jose A. Rivera 2022-02-04 15:00:26 UTC
Everything you need is already attached to the customer case. While the latest must-gather is missing the rook-ceph-operator logs (HOW???), they followed up with a separate attachment:

https://access.redhat.com/support/cases/#/case/03017541/discussion?attachmentId=a092K00002xjvUMQAY

Please at least have a look to verify whether the Rook behavior is due to a misconfiguration in the StorageCluster.

Comment 15 Rohan Gupta 2022-02-07 15:14:06 UTC
I am not able to access the customer case link ("You are not authorized to see this case"). Gobinda downloaded and shared the must-gather logs, but I don't see OCS operator logs there and was not able to figure out the problem from the remaining logs.

Comment 18 Jose A. Rivera 2022-02-07 16:24:02 UTC
Oops, it's not actually accepted for GA. As such, pushing this out to ODF 4.11.

Comment 19 Rohan Gupta 2022-02-08 07:10:21 UTC
@ealcaniz I am looking for OCS operator logs. The attached logs are for the Rook operator.

