Bug 2000100 - [GSS][RFE] OCS installation Fails on ipv6 network
Summary: [GSS][RFE] OCS installation Fails on ipv6 network
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: unclassified
Version: 4.8
Hardware: All
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.12.0
Assignee: Mudit Agarwal
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-09-01 11:53 UTC by Priya Pandey
Modified: 2023-08-09 17:03 UTC
CC: 18 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-02-08 14:06:28 UTC
Embargoed:




Links:
Red Hat Knowledge Base (Article) 6429881 - Last Updated: 2021-11-01 12:53:35 UTC

Description Priya Pandey 2021-09-01 11:53:33 UTC
Description of problem (please be as detailed as possible and provide log snippets):


- Installing OCS v4.8 on an IPv6 network results in all OSDs being down.
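
A quick way to confirm the symptom (a sketch, assuming the default openshift-storage namespace; the app=rook-ceph-osd label and the osd container name are the usual rook defaults):

$ oc -n openshift-storage get pods -l app=rook-ceph-osd -o wide          # OSD pods and the nodes they were scheduled on
$ oc -n openshift-storage logs -l app=rook-ceph-osd -c osd --tail=50     # look for address/bind errors in the OSD logs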


Version of all relevant components (if applicable):

- v4.8


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


- All OSDs are down, which makes the cluster inoperative after installation.


Is there any workaround available to the best of your knowledge?

- N/A

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

- 2

Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?

No

If this is a regression, please provide more details to justify this:

No

Steps to Reproduce:
1. Install OCP v4.8 with an IPv6 network
2. Install OCS on the same cluster
3. Verify the Ceph cluster health (see the sketch below)
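
A sketch of step 3, assuming the rook-ceph toolbox is enabled in the openshift-storage namespace (the app=rook-ceph-tools label is the usual default):

$ TOOLS=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name)
$ oc -n openshift-storage exec $TOOLS -- ceph -s          # overall cluster health
$ oc -n openshift-storage exec $TOOLS -- ceph osd tree    # every OSD should be "up"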


Actual results:

All OSDs are down after installation, which makes the cluster inoperative.


Expected results:

OSDs should be up, and the Ceph cluster should be healthy and operational.

Additional info: provided in the comments below.

Comment 6 Rejy M Cyriac 2021-09-06 14:57:28 UTC
Based on a request from engineering, the 'installation' component has been deprecated.

Comment 16 Vijay Avuthu 2022-12-06 13:28:20 UTC
Update:
=========
> Installed IPv6 on bare metal (BM) using the StorageCluster below:

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  annotations:
    cluster.ocs.openshift.io/local-devices: 'true'
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  flexibleScaling: true
  manageNodes: false
  monDataDirHostPath: /var/lib/rook
  network:
    ipFamily: IPv6
  storageDeviceSets:
  - count: 3
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1490Gi
        storageClassName: localblock
        volumeMode: Block
    name: ocs-deviceset-localblock
    placement: {}
    portable: false
    replica: 1
    resources: {}
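
A follow-up sketch for applying and checking the manifest above (assuming it is saved as storagecluster.yaml and that the CephCluster CR keeps its usual name, ocs-storagecluster-cephcluster):

$ oc apply -f storagecluster.yaml
$ oc -n openshift-storage get cephcluster ocs-storagecluster-cephcluster \
    -o jsonpath='{.spec.network.ipFamily}{"\n"}'          # expected output: IPv6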

> # oc get nodes -o wide
NAME            STATUS   ROLES                  AGE   VERSION           INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                                        KERNEL-VERSION                 CONTAINER-RUNTIME
e26-h15-740xd   Ready    control-plane,master   10h   v1.25.2+5533733   fc00:xxxx::5   <none>        Red Hat Enterprise Linux CoreOS 412.86.202212030032-0 (Ootpa)   4.18.0-372.32.1.el8_6.x86_64   cri-o://1.25.1-5.rhaos4.12.git6005903.el8
e26-h17-740xd   Ready    control-plane,master   10h   v1.25.2+5533733   fc00:xxxx::6   <none>        Red Hat Enterprise Linux CoreOS 412.86.202212030032-0 (Ootpa)   4.18.0-372.32.1.el8_6.x86_64   cri-o://1.25.1-5.rhaos4.12.git6005903.el8
e26-h19-740xd   Ready    control-plane,master   10h   v1.25.2+5533733   fc00:xxxx::7   <none>        Red Hat Enterprise Linux CoreOS 412.86.202212030032-0 (Ootpa)   4.18.0-372.32.1.el8_6.x86_64   cri-o://1.25.1-5.rhaos4.12.git6005903.el8
e26-h21-740xd   Ready    worker                 10h   v1.25.2+5533733   fc00:xxxx::8   <none>        Red Hat Enterprise Linux CoreOS 412.86.202212030032-0 (Ootpa)   4.18.0-372.32.1.el8_6.x86_64   cri-o://1.25.1-5.rhaos4.12.git6005903.el8
e26-h23-740xd   Ready    worker                 10h   v1.25.2+5533733   fc00:xxxx::9   <none>        Red Hat Enterprise Linux CoreOS 412.86.202212030032-0 (Ootpa)   4.18.0-372.32.1.el8_6.x86_64   cri-o://1.25.1-5.rhaos4.12.git6005903.el8
e26-h25-740xd   Ready    worker                 10h   v1.25.2+5533733   fc00:xxxx::a   <none>        Red Hat Enterprise Linux CoreOS 412.86.202212030032-0 (Ootpa)   4.18.0-372.32.1.el8_6.x86_64   cri-o://1.25.1-5.rhaos4.12.git6005903.el8

# oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  26m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   20m
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  23m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   20m
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  17m
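
To confirm end-to-end provisioning over IPv6, a throwaway PVC against ocs-storagecluster-ceph-rbd could be created (a sketch; the PVC name ipv6-test-pvc is hypothetical):

$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ipv6-test-pvc              # hypothetical name, used only for this check
  namespace: openshift-storage
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
$ oc -n openshift-storage get pvc ipv6-test-pvc           # should reach Bound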

$ ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME               STATUS  REWEIGHT  PRI-AFF
-1         4.36647  root default                                     
-3         1.45549      host e26-xxx-740xd                           
 0    ssd  1.45549          osd.0               up   1.00000  1.00000
-5         1.45549      host e26-xxx-740xd                           
 2    ssd  1.45549          osd.2               up   1.00000  1.00000
-7         1.45549      host e26-xxx-740xd                           
 1    ssd  1.45549          osd.1               up   1.00000  1.00000


sh-4.4$ ceph -s
  cluster:
    id:     c1e2c726-238c-4ac1-8594-a07ea3a09165
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 21m)
    mgr: a(active, since 20m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 20m), 3 in (since 20m)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   12 pools, 353 pgs
    objects: 675 objects, 498 MiB
    usage:   806 MiB used, 4.4 TiB / 4.4 TiB avail
    pgs:     353 active+clean
 
  io:
    client:   852 B/s rd, 6.0 KiB/s wr, 1 op/s rd, 1 op/s wr
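
One additional check that could confirm the daemons are actually bound to IPv6 addresses (a sketch, run from the same toolbox shell):

sh-4.4$ ceph mon dump                  # mon endpoints should show [fc00:...]-style IPv6 addresses
sh-4.4$ ceph osd dump | grep "^osd"    # OSD public/cluster addresses should be IPv6 as well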

