Bug 2154522 - UI installation of ODF on IPv6 cluster doesn't succeed due to lack of IPv6 option in UI
Summary: UI installation of ODF on IPv6 cluster doesn't succeed due to lack of IPv6 option in UI
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: management-console
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Sanjal Katiyar
QA Contact: Prasad Desala
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2022-12-17 12:16 UTC by Vijay Avuthu
Modified: 2023-08-09 16:46 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-01-09 14:38:02 UTC
Embargoed:



Description Vijay Avuthu 2022-12-17 12:16:50 UTC
Description of problem (please be as detailed as possible and provide log snippets):

UI installation of ODF on an IPv6 cluster gets stuck while creating the rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a* pod. The StorageCluster is likely created without ipFamily set to IPv6.
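
For reference, a minimal sketch of a StorageCluster that requests IPv6 explicitly, assuming spec.network is passed through to the underlying CephCluster network spec (names match this cluster's defaults; this manifest was not applied here):

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  network:
    ipFamily: IPv6   # assumption: Rook NetworkSpec field selecting the IP family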


Version of all relevant components (if applicable):
odf-operator.v4.12.0-143


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?
NA

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Install OCP with IPv6 networking on bare metal
2. Install ODF using the UI
3. The UI does not offer any option to include IPv6 network details when creating the StorageCluster (see the verification commands after these steps)
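
To confirm that the UI-created StorageCluster carries no IPv6 network settings, the network spec can be inspected directly (a sketch, assuming the default resource name and namespace):

# oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.network}'
# oc get cephcluster -n openshift-storage -o jsonpath='{.items[0].spec.network}'

Empty output from either command would indicate that no ipFamily was set.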


Actual results:

# oc get pods -o wide | grep -i rgw
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-868444cdprxs   1/2     Running             212 (7m38s ago)   19h    fd01:0:0:4::31   e26-h21-740xd   <none>           <none>

# oc get storagecluster
NAME                 AGE   PHASE         EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   20h   Progressing              2022-12-16T15:28:49Z   4.12.0

# oc describe storagecluster
Name:         ocs-storagecluster
Namespace:    openshift-storage
Labels:       <none>
Annotations:  cluster.ocs.openshift.io/local-devices: true
              uninstall.ocs.openshift.io/cleanup-policy: delete
              uninstall.ocs.openshift.io/mode: graceful
API Version:  ocs.openshift.io/v1
Kind:         StorageCluster


Spec:
  Arbiter:
  Encryption:
    Kms:
  External Storage:
  Flexible Scaling:  true
  Managed Resources:
    Ceph Block Pools:
    Ceph Cluster:
    Ceph Config:
    Ceph Dashboard:
    Ceph Filesystems:
    Ceph Non Resilient Pools:
    Ceph Object Store Users:
    Ceph Object Stores:
    Ceph Toolbox:
  Mirroring:
  Mon Data Dir Host Path:  /var/lib/rook
  Node Topologies:
  Storage Device Sets:
    Config:
    Count:  3
    Data PVC Template:
      Metadata:
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:         1
        Storage Class Name:  localblock
        Volume Mode:         Block
      Status:
    Name:  ocs-deviceset-localblock
    Placement:
    Prepare Placement:
    Replica:  1
    Resources:
Status:
  Conditions:
    Last Heartbeat Time:   2022-12-17T12:04:14Z
    Last Transition Time:  2022-12-16T15:28:50Z
    Message:               Error while reconciling: some StorageClasses were skipped while waiting for pre-requisites to be met: [ocs-storagecluster-ceph-rbd]
    Reason:                ReconcileFailed
    Status:                False
    Type:                  ReconcileComplete
    Last Heartbeat Time:   2022-12-16T15:28:50Z
    Last Transition Time:  2022-12-16T15:28:50Z
    Message:               Initializing StorageCluster
    Reason:                Init
    Status:                False
    Type:                  Available
    Last Heartbeat Time:   2022-12-16T15:28:50Z
    Last Transition Time:  2022-12-16T15:28:50Z
    Message:               Initializing StorageCluster
    Reason:                Init
    Status:                True
    Type:                  Progressing
    Last Heartbeat Time:   2022-12-16T15:28:50Z
    Last Transition Time:  2022-12-16T15:28:50Z
    Message:               Initializing StorageCluster
    Reason:                Init
    Status:                False
    Type:                  Degraded
    Last Heartbeat Time:   2022-12-16T15:28:50Z
    Last Transition Time:  2022-12-16T15:28:50Z
    Message:               Initializing StorageCluster
    Reason:                Init
    Status:                Unknown
    Type:                  Upgradeable
  External Storage:
    Granted Capacity:  0
  Failure Domain:      host
  Failure Domain Key:  kubernetes.io/hostname
  Failure Domain Values:
    e26-h21-740xd
    e26-h23-740xd
    e26-h25-740xd
  Images:
    Ceph:
      Actual Image:   quay.io/rhceph-dev/rhceph@sha256:c6fe7e71ad1b13281d1d2399ceb98d3d6927df40e5d442a15fa0dee2976ccbcf
      Desired Image:  quay.io/rhceph-dev/rhceph@sha256:c6fe7e71ad1b13281d1d2399ceb98d3d6927df40e5d442a15fa0dee2976ccbcf
    Noobaa Core:
      Desired Image:  quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:b495b59219d78ab468d1e1faedacfda59cb4b9fe13b253157897ff6899811de5
    Noobaa DB:
      Desired Image:  quay.io/rhceph-dev/rhel8-postgresql-12@sha256:f4d8f5f165da493568802b4115f5e68af7cc11a3f14769e495de4a3f61a58238
  Kms Server Connection:
  Node Topologies:
    Labels:
      kubernetes.io/hostname:
        e26-h21-740xd
        e26-h23-740xd
        e26-h25-740xd
  Phase:  Progressing
  Related Objects:
    API Version:       ceph.rook.io/v1
    Kind:              CephCluster
    Name:              ocs-storagecluster-cephcluster
    Namespace:         openshift-storage
    Resource Version:  1960459
    UID:               95968b17-d5c4-4894-9283-fb124125a97f
  Version:             4.12.0
Events:                <none>



Expected results:
The UI should provide an option to enable IPv6 for ODF, or the cluster's IP family should be detected automatically.
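
One possible way for the console to auto-detect the IP family, assuming it can read the cluster Network config, would be to inspect the cluster network CIDRs:

# oc get network.config.openshift.io cluster -o jsonpath='{.spec.clusterNetwork[*].cidr}'

An IPv6-only CIDR list here (consistent with the fd01:... pod IP shown above) would indicate that the StorageCluster needs IPv6 networking.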


Additional info:

# oc exec rook-ceph-tools-6755cd4cdb-vklgm -- ceph status
  cluster:
    id:     efcaa4be-b458-44f9-9a3d-fe24e784d82b
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            Reduced data availability: 352 pgs inactive
 
  services:
    mon: 3 daemons, quorum a,b,c (age 20h)
    mgr: a(active, since 20h)
    mds: 1/1 daemons up, 1 standby
    osd: 3 osds: 0 up, 3 in (since 20h)
 
  data:
    volumes: 1/1 healthy
    pools:   11 pools, 352 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             352 unknown

# oc exec rook-ceph-tools-6755cd4cdb-vklgm -- ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME               STATUS  REWEIGHT  PRI-AFF
-1         4.36647  root default                                     
-5         1.45549      host e26-h21-740xd                           
 1    ssd  1.45549          osd.1             down   1.00000  1.00000
-3         1.45549      host e26-h23-740xd                           
 0    ssd  1.45549          osd.0             down   1.00000  1.00000
-7         1.45549      host e26-h25-740xd                           
 2    ssd  1.45549          osd.2             down   1.00000  1.00000

# oc get csv
NAME                                         DISPLAY                       VERSION             REPLACES   PHASE
mcg-operator.v4.12.0-143.stable              NooBaa Operator               4.12.0-143.stable              Succeeded
ocs-operator.v4.12.0-143.stable              OpenShift Container Storage   4.12.0-143.stable              Succeeded
odf-csi-addons-operator.v4.12.0-143.stable   CSI Addons                    4.12.0-143.stable              Succeeded
odf-operator.v4.12.0-143.stable              OpenShift Data Foundation     4.12.0-143.stable              Succeeded

Comment 2 Vijay Avuthu 2022-12-17 12:18:22 UTC
must gather: https://url.corp.redhat.com/02c691f

Comment 3 Nitin Goyal 2022-12-19 06:11:26 UTC
Moving this to the UI team.

