Bug 2081690 - [vSphere]: storagecluster is stuck in Progressing state - noobaa-ceph-objectstore-user is not ready
Summary: [vSphere]: storagecluster is stuck in Progressing state - noobaa-ceph-objectstore-user is not ready
Keywords:
Status: CLOSED DUPLICATE of bug 2075581
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Jose A. Rivera
QA Contact: Martin Bukatovic
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-05-04 11:40 UTC by Vijay Avuthu
Modified: 2023-08-09 17:00 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-05 05:26:59 UTC
Embargoed:



Description Vijay Avuthu 2022-05-04 11:40:53 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

The ocs-storagecluster StorageCluster is stuck in the Progressing state: NooBaa is waiting on the Ceph object store user "noobaa-ceph-objectstore-user", which never becomes ready.


Version of all relevant components (if applicable):

ocs-registry:4.11.0-61
openshift installer (4.11.0-0.nightly-2022-04-26-181148)
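
If needed, the installed versions can be confirmed on the cluster; a minimal sketch, assuming the default openshift-storage namespace:

$ oc get csv -n openshift-storage
$ oc get clusterversion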


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

Yes

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes, 2/2

Can this issue be reproduced from the UI?
Not tried

If this is a regression, please provide more details to justify this:
Yes

Steps to Reproduce:
1. Install ODF using ocs-ci
2. Check the storagecluster status (see the command sketch below)
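
A minimal sketch of step 2, assuming the default openshift-storage namespace used by the ocs-ci deployment:

$ oc get storagecluster -n openshift-storage
$ oc describe storagecluster ocs-storagecluster -n openshift-storage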


Actual results:

storagecluster in Progressing phase

Expected results:

storagecluster should be in Ready state


Additional info:

$ oc get storagecluster
NAME                 AGE    PHASE         EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   107m   Progressing              2022-05-04T09:33:48Z   4.11.0

$ oc describe storagecluster ocs-storagecluster

Status:
  Conditions:
    Last Heartbeat Time:   2022-05-04T11:22:45Z
    Last Transition Time:  2022-05-04T09:40:25Z
    Message:               Reconcile completed successfully
    Reason:                ReconcileCompleted
    Status:                True
    Type:                  ReconcileComplete
    Last Heartbeat Time:   2022-05-04T09:33:49Z
    Last Transition Time:  2022-05-04T09:33:49Z
    Message:               Initializing StorageCluster
    Reason:                Init
    Status:                False
    Type:                  Available
    Last Heartbeat Time:   2022-05-04T11:22:45Z
    Last Transition Time:  2022-05-04T09:33:49Z
    Message:               Waiting on Nooba instance to finish initialization
    Reason:                NoobaaInitializing
    Status:                True
    Type:                  Progressing
    Last Heartbeat Time:   2022-05-04T09:33:49Z
    Last Transition Time:  2022-05-04T09:33:49Z
    Message:               Initializing StorageCluster
    Reason:                Init
    Status:                False
    Type:                  Degraded
    Last Heartbeat Time:   2022-05-04T09:33:49Z
    Last Transition Time:  2022-05-04T09:33:49Z
    Message:               Initializing StorageCluster
    Reason:                Init
    Status:                Unknown
    Type:                  Upgradeable


> NooBaa is stuck in the Configuring phase

$ oc get noobaa
NAME     MGMT-ENDPOINTS                  S3-ENDPOINTS                    STS-ENDPOINTS                   IMAGE                                                                                                            PHASE         AGE
noobaa   ["https://10.1.112.53:31982"]   ["https://10.1.112.53:31961"]   ["https://10.1.112.53:30127"]   quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:f3e0c7882859f1a213cd15349ddee4d216d17d4585a0601a2a2ede277d993ca1   Configuring   104m

$ oc describe noobaa noobaa
Name:         noobaa
Namespace:    openshift-storage
Labels:       app=noobaa
Annotations:  <none>
API Version:  noobaa.io/v1alpha1
Kind:         NooBaa

  Conditions:
    Last Heartbeat Time:   2022-05-04T09:40:25Z
    Last Transition Time:  2022-05-04T09:40:25Z
    Message:               Ceph objectstore user "noobaa-ceph-objectstore-user" is not ready
    Reason:                TemporaryError
    Status:                False
    Type:                  Available
    Last Heartbeat Time:   2022-05-04T09:40:25Z
    Last Transition Time:  2022-05-04T09:40:25Z
    Message:               Ceph objectstore user "noobaa-ceph-objectstore-user" is not ready
    Reason:                TemporaryError
    Status:                True
    Type:                  Progressing
    Last Heartbeat Time:   2022-05-04T09:40:25Z
    Last Transition Time:  2022-05-04T09:40:25Z
    Message:               Ceph objectstore user "noobaa-ceph-objectstore-user" is not ready
    Reason:                TemporaryError
    Status:                False
    Type:                  Degraded
    Last Heartbeat Time:   2022-05-04T09:40:25Z
    Last Transition Time:  2022-05-04T09:40:25Z
    Message:               Ceph objectstore user "noobaa-ceph-objectstore-user" is not ready
    Reason:                TemporaryError
    Status:                False
    Type:                  Upgradeable
    Last Heartbeat Time:   2022-05-04T09:40:25Z
    Last Transition Time:  2022-05-04T09:40:25Z
    Status:                k8s
    Type:                  KMS-Type
    Last Heartbeat Time:   2022-05-04T09:40:25Z
    Last Transition Time:  2022-05-04T09:40:32Z
    Status:                Sync
    Type:                  KMS-Status
  Observed Generation:     1
  Phase:                   Configuring
  Readme:                  
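
The condition message points at the CephObjectStoreUser CR, so checking it directly is a natural next step; a sketch, assuming the default openshift-storage namespace:

$ oc get cephobjectstoreuser -n openshift-storage
$ oc describe cephobjectstoreuser noobaa-ceph-objectstore-user -n openshift-storage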

> The CephObjectStore is in the Failure phase

$ oc get cephobjectstore
NAME                                 PHASE
ocs-storagecluster-cephobjectstore   Failure

$ oc describe cephobjectstore ocs-storagecluster-cephobjectstore
Name:         ocs-storagecluster-cephobjectstore
Namespace:    openshift-storage
Labels:       <none>
Annotations:  <none>
API Version:  ceph.rook.io/v1
Kind:         CephObjectStore

Status:
  Bucket Status:
    Details:       failed to get details from ceph object user "rook-ceph-internal-s3-user-checker-ab073a5e-d1b8-4243-8b2e-57bfdcdaa968": Get "https://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc:443/admin/user?format=json&uid=rook-ceph-internal-s3-user-checker-ab073a5e-d1b8-4243-8b2e-57bfdcdaa968": dial tcp 172.30.206.46:443: connect: connection refused
    Health:        Failure
    Last Changed:  2022-05-04T11:23:50Z
    Last Checked:  2022-05-04T11:23:50Z
  Info:
    Endpoint:           http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc:80
    Secure Endpoint:    https://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc:443
  Observed Generation:  1
  Phase:                Failure
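
The bucket health checker fails with "connection refused" against the RGW service, so the RGW pod and service are worth inspecting as well; a sketch, assuming the standard Rook label app=rook-ceph-rgw:

$ oc get pods -n openshift-storage -l app=rook-ceph-rgw
$ oc logs -n openshift-storage -l app=rook-ceph-rgw --tail=50
$ oc get svc -n openshift-storage rook-ceph-rgw-ocs-storagecluster-cephobjectstore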

Job: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster-prod/4126/console

must gather: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-009vu1cs33-d/j-009vu1cs33-d_20220504T090326/logs/deployment_1651655279/

Comment 3 Nitin Goyal 2022-05-05 05:11:22 UTC
This seems to be a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=2075581#c3

Comment 4 Nitin Goyal 2022-05-05 05:26:59 UTC

*** This bug has been marked as a duplicate of bug 2075581 ***

