Bug 2262067
| Summary: | noobaa is in Configuring state in ODF cluster on IBM Power platform | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Aaruni Aggarwal <aaaggarw> |
| Component: | Multi-Cloud Object Gateway | Assignee: | Vinayak Hariharmath <vharihar> |
| Status: | CLOSED NOTABUG | QA Contact: | krishnaram Karthick <kramdoss> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.15 | CC: | bmclaren, dzaken, lmauda, muagarwa, ngowda, odf-bz-bot |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | ppc64le | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2024-07-24 09:01:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Aaruni Aggarwal
2024-01-31 08:58:56 UTC
There is no information on the BZ. Please provide information such as: what does NooBaa report? What do the NooBaa operator logs look like? Are all the pods up? If not, which aren't, and what do their logs say?
[root@rdr-rhcspin-bastion-0 ~]# oc get csv -A
NAMESPACE NAME DISPLAY VERSION REPLACES PHASE
openshift-operator-lifecycle-manager packageserver Package Server 0.0.1-snapshot Succeeded
openshift-storage mcg-operator.v4.15.0-126.stable NooBaa Operator 4.15.0-126.stable Succeeded
openshift-storage ocs-operator.v4.15.0-126.stable OpenShift Container Storage 4.15.0-126.stable Succeeded
openshift-storage odf-csi-addons-operator.v4.15.0-126.stable CSI Addons 4.15.0-126.stable Succeeded
openshift-storage odf-operator.v4.15.0-126.stable OpenShift Data Foundation 4.15.0-126.stable Succeeded
[root@rdr-rhcspin-bastion-0 ~]# oc get pods -n openshift-storage
NAME READY STATUS RESTARTS AGE
csi-addons-controller-manager-6f97cdbc9f-w22d7 2/2 Running 0 24m
csi-cephfsplugin-lgrn5 2/2 Running 0 5m5s
csi-cephfsplugin-provisioner-6c9755b765-glbfm 6/6 Running 1 (4m23s ago) 5m5s
csi-cephfsplugin-provisioner-6c9755b765-h6ks8 6/6 Running 0 5m5s
csi-cephfsplugin-vn5h6 2/2 Running 1 (4m23s ago) 5m5s
csi-cephfsplugin-xscq7 2/2 Running 0 5m5s
csi-rbdplugin-2f58j 3/3 Running 0 5m5s
csi-rbdplugin-g6tdw 3/3 Running 1 (4m23s ago) 5m5s
csi-rbdplugin-provisioner-6684c8d798-476hn 6/6 Running 0 5m5s
csi-rbdplugin-provisioner-6684c8d798-6xqvp 6/6 Running 0 5m5s
csi-rbdplugin-xnktr 3/3 Running 0 5m5s
noobaa-core-0 1/1 Running 0 3m19s
noobaa-db-pg-0 1/1 Running 0 5m
noobaa-operator-75d649856c-pmmc7 2/2 Running 0 24m
ocs-metrics-exporter-b94d575ff-d5lc5 1/1 Running 0 5m5s
ocs-operator-9fff9d744-tvlml 1/1 Running 0 24m
odf-console-9fccc4fc8-tjk6j 1/1 Running 0 24m
odf-operator-controller-manager-85468f98bf-n9qbj 2/2 Running 0 24m
rook-ceph-operator-666cf5c95b-zhvh8 1/1 Running 0 24m
rook-ceph-tools-6c854d5d84-x5bnz 1/1 Running 0 36s
ux-backend-server-649b5db99c-m9pz2 2/2 Running 0 24m
[root@rdr-rhcspin-bastion-0 ~]# oc get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ocs-external-storagecluster-ceph-rbd (default) openshift-storage.rbd.csi.ceph.com Delete Immediate true 5m18s
ocs-external-storagecluster-ceph-rgw openshift-storage.ceph.rook.io/bucket Delete Immediate false 5m18s
ocs-external-storagecluster-cephfs openshift-storage.cephfs.csi.ceph.com Delete Immediate true 5m18s
[root@rdr-rhcspin-bastion-0 ~]# oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-external-storagecluster 3h6m Ready true 2024-01-30T05:22:56Z 4.15.0
[root@rdr-rhcspin-bastion-0 ~]# oc get cephcluster
NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL FSID
ocs-external-storagecluster-cephcluster 3h6m Connected Cluster connected successfully HEALTH_OK true 1431c918-abab-11ee-8c54-00505684cf1e
[root@rdr-rhcspin-bastion-0 ~]# oc get noobaa
NAME S3-ENDPOINTS STS-ENDPOINTS IMAGE PHASE AGE
noobaa registry.redhat.io/odf4/mcg-core-rhel9@sha256:4cab0a9394b1cad836736d2f76bdffdd17850f111608e3c67dd167af82b2033e Configuring 3h6m
[root@rdr-rhcspin-bastion-0 ~]# oc get noobaa -n openshift-storage -o yaml
apiVersion: v1
items:
- apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
creationTimestamp: "2024-01-30T05:23:03Z"
finalizers:
- noobaa.io/graceful_finalizer
generation: 1
labels:
app: noobaa
name: noobaa
namespace: openshift-storage
ownerReferences:
- apiVersion: ocs.openshift.io/v1
blockOwnerDeletion: true
controller: true
kind: StorageCluster
name: ocs-external-storagecluster
uid: f264c5cb-58f6-404c-afaa-126202321a34
resourceVersion: "238635"
uid: 9ef0321e-7814-49cf-b9fd-6df356cbb75d
spec:
affinity: {}
autoscaler:
autoscalerType: hpav2
prometheusNamespace: openshift-monitoring
cleanupPolicy: {}
coreResources:
limits:
cpu: 999m
memory: 4Gi
requests:
cpu: 999m
memory: 4Gi
dbImage: registry.redhat.io/rhel9/postgresql-15@sha256:12afe2b0205a4aa24623f04d318d21f91393e4c70cf03a5f6720339e06d78293
dbResources:
limits:
cpu: 500m
memory: 4Gi
requests:
cpu: 500m
memory: 4Gi
dbStorageClass: ocs-external-storagecluster-ceph-rbd
dbType: postgres
dbVolumeResources:
requests:
storage: 50Gi
endpoints:
maxCount: 2
minCount: 1
resources:
limits:
cpu: 999m
memory: 2Gi
requests:
cpu: 999m
memory: 2Gi
image: registry.redhat.io/odf4/mcg-core-rhel9@sha256:4cab0a9394b1cad836736d2f76bdffdd17850f111608e3c67dd167af82b2033e
labels:
monitoring: {}
loadBalancerSourceSubnets: {}
pvPoolDefaultStorageClass: ocs-external-storagecluster-ceph-rbd
security:
kms: {}
tolerations:
- effect: NoSchedule
key: node.ocs.openshift.io/storage
operator: Equal
value: "true"
status:
accounts:
admin:
secretRef:
name: noobaa-admin
namespace: openshift-storage
actualImage: registry.redhat.io/odf4/mcg-core-rhel9@sha256:4cab0a9394b1cad836736d2f76bdffdd17850f111608e3c67dd167af82b2033e
conditions:
- lastHeartbeatTime: "2024-01-30T05:32:17Z"
lastTransitionTime: "2024-01-30T05:23:03Z"
message: 'cannot read admin account info, error: not anonymous method read_account'
reason: TemporaryError
status: "False"
type: Available
- lastHeartbeatTime: "2024-01-30T05:32:17Z"
lastTransitionTime: "2024-01-30T05:23:03Z"
message: 'cannot read admin account info, error: not anonymous method read_account'
reason: TemporaryError
status: "True"
type: Progressing
- lastHeartbeatTime: "2024-01-30T05:32:17Z"
lastTransitionTime: "2024-01-30T05:23:03Z"
message: 'cannot read admin account info, error: not anonymous method read_account'
reason: TemporaryError
status: "False"
type: Degraded
- lastHeartbeatTime: "2024-01-30T05:32:17Z"
lastTransitionTime: "2024-01-30T05:23:03Z"
message: 'cannot read admin account info, error: not anonymous method read_account'
reason: TemporaryError
status: "False"
type: Upgradeable
- lastHeartbeatTime: "2024-01-30T05:32:17Z"
lastTransitionTime: "2024-01-30T05:23:03Z"
status: k8s
type: KMS-Type
- lastHeartbeatTime: "2024-01-30T05:32:17Z"
lastTransitionTime: "2024-01-30T05:23:04Z"
status: Sync
type: KMS-Status
observedGeneration: 1
phase: Configuring
postgresUpdatePhase: NoUpgrade
readme: "\n\n\tNooBaa operator is still working to reconcile this system.\n\tCheck
out the system status.phase, status.conditions, and events with:\n\n\t\tkubectl
-n openshift-storage describe noobaa\n\t\tkubectl -n openshift-storage get noobaa
-o yaml\n\t\tkubectl -n openshift-storage get events --sort-by=metadata.creationTimestamp\n\n\tYou
can wait for a specific condition with:\n\n\t\tkubectl -n openshift-storage
wait noobaa/noobaa --for condition=available --timeout -1s\n\n\tNooBaa Core
Version: master-20230920\n\tNooBaa Operator Version: 5.15.0\n"
services:
serviceMgmt:
externalDNS:
- https://noobaa-mgmt-openshift-storage.apps.rdr-rhcspin.ibm.com:443
internalDNS:
- https://noobaa-mgmt.openshift-storage.svc:443
internalIP:
- https://172.30.32.28:443
nodePorts:
- https://10.20.176.218:0
podPorts:
- https://10.128.2.71:8443
serviceS3:
externalDNS:
- https://s3-openshift-storage.apps.rdr-rhcspin.ibm.com:443
internalDNS:
- https://s3.openshift-storage.svc:443
internalIP:
- https://172.30.152.111:443
serviceSts:
externalDNS:
- https://sts-openshift-storage.apps.rdr-rhcspin.ibm.com:443
internalDNS:
- https://sts.openshift-storage.svc:443
internalIP:
- https://172.30.183.9:443
upgradePhase: NoUpgrade
kind: List
metadata:
resourceVersion: ""
Will attach noobaa-operator logs.
Also Core if you can, thanks.
must-gather logs: https://drive.google.com/file/d/1zyqhswUgrt6HYVsRTgTvs6ktJV_Ih9CS/view?usp=sharing
Got the same error in an Internal mode cluster as well.
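For reference, a minimal set of commands that should capture the operator and core logs requested above. The deployment and pod names match the listings in this BZ; the must-gather image tag is an assumption and should be adjusted to the installed ODF release.

# NooBaa operator logs (deployment name as shown in the pod listings above)
oc logs -n openshift-storage deployment/noobaa-operator --all-containers > noobaa-operator.log
# NooBaa core and DB pod logs
oc logs -n openshift-storage noobaa-core-0 > noobaa-core.log
oc logs -n openshift-storage noobaa-db-pg-0 > noobaa-db-pg.log
# ODF must-gather (image tag assumed; use the tag matching the installed ODF version)
oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15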
[root@aaruni-d149-bastion-0 ~]# oc get csv -A
NAMESPACE NAME DISPLAY VERSION REPLACES PHASE
openshift-local-storage local-storage-operator.v4.15.0-202311280332 Local Storage 4.15.0-202311280332 Succeeded
openshift-operator-lifecycle-manager packageserver Package Server 0.0.1-snapshot Succeeded
openshift-storage mcg-operator.v4.15.0-132.stable NooBaa Operator 4.15.0-132.stable Succeeded
openshift-storage ocs-operator.v4.15.0-132.stable OpenShift Container Storage 4.15.0-132.stable Succeeded
openshift-storage odf-csi-addons-operator.v4.15.0-132.stable CSI Addons 4.15.0-132.stable Succeeded
openshift-storage odf-operator.v4.15.0-132.stable OpenShift Data Foundation 4.15.0-132.stable Succeeded
[root@aaruni-d149-bastion-0 ~]# oc get storagecluster -n openshift-storage
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 10h Progressing 2024-02-04T17:58:27Z 4.15.0
[root@aaruni-d149-bastion-0 ~]# oc get noobaa -n openshift-storage
NAME S3-ENDPOINTS STS-ENDPOINTS IMAGE PHASE AGE
noobaa registry.redhat.io/odf4/mcg-core-rhel9@sha256:1d79a2ac176ca6e69c3198d0e35537aaf29373440d214d324d0d433d1473d9a1 Configuring 10h
[root@aaruni-d149-bastion-0 ~]#
[root@aaruni-d149-bastion-0 ~]# oc get noobaa -n openshift-storage -o yaml
apiVersion: v1
items:
- apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
creationTimestamp: "2024-02-04T18:01:31Z"
finalizers:
- noobaa.io/graceful_finalizer
generation: 1
labels:
app: noobaa
name: noobaa
namespace: openshift-storage
ownerReferences:
- apiVersion: ocs.openshift.io/v1
blockOwnerDeletion: true
controller: true
kind: StorageCluster
name: ocs-storagecluster
uid: f23e0396-3d31-43c5-af57-c921f5c62ada
resourceVersion: "1880973"
uid: ad914f4d-c8e0-4bfd-ae6d-8199355efd23
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: cluster.ocs.openshift.io/openshift-storage
operator: Exists
autoscaler:
autoscalerType: hpav2
prometheusNamespace: openshift-monitoring
cleanupPolicy: {}
coreResources:
limits:
cpu: 999m
memory: 4Gi
requests:
cpu: 999m
memory: 4Gi
dbImage: registry.redhat.io/rhel9/postgresql-15@sha256:10e53e191e567248a514a7344c6d78432640aedbc1fa1f7b0364d3b88f8bde2c
dbResources:
limits:
cpu: 500m
memory: 4Gi
requests:
cpu: 500m
memory: 4Gi
dbStorageClass: ocs-storagecluster-ceph-rbd
dbType: postgres
dbVolumeResources:
requests:
storage: 50Gi
endpoints:
maxCount: 2
minCount: 1
resources:
limits:
cpu: 999m
memory: 2Gi
requests:
cpu: 999m
memory: 2Gi
image: registry.redhat.io/odf4/mcg-core-rhel9@sha256:1d79a2ac176ca6e69c3198d0e35537aaf29373440d214d324d0d433d1473d9a1
labels:
monitoring: {}
loadBalancerSourceSubnets: {}
pvPoolDefaultStorageClass: ocs-storagecluster-ceph-rbd
security:
kms: {}
tolerations:
- effect: NoSchedule
key: node.ocs.openshift.io/storage
operator: Equal
value: "true"
status:
accounts:
admin:
secretRef:
name: noobaa-admin
namespace: openshift-storage
actualImage: registry.redhat.io/odf4/mcg-core-rhel9@sha256:1d79a2ac176ca6e69c3198d0e35537aaf29373440d214d324d0d433d1473d9a1
conditions:
- lastHeartbeatTime: "2024-02-05T04:57:26Z"
lastTransitionTime: "2024-02-04T18:01:31Z"
message: 'cannot read admin account info, error: not anonymous method read_account'
reason: TemporaryError
status: "False"
type: Available
- lastHeartbeatTime: "2024-02-05T04:57:26Z"
lastTransitionTime: "2024-02-04T18:01:31Z"
message: 'cannot read admin account info, error: not anonymous method read_account'
reason: TemporaryError
status: "True"
type: Progressing
- lastHeartbeatTime: "2024-02-05T04:57:26Z"
lastTransitionTime: "2024-02-04T18:01:31Z"
message: 'cannot read admin account info, error: not anonymous method read_account'
reason: TemporaryError
status: "False"
type: Degraded
- lastHeartbeatTime: "2024-02-05T04:57:26Z"
lastTransitionTime: "2024-02-04T18:01:31Z"
message: 'cannot read admin account info, error: not anonymous method read_account'
reason: TemporaryError
status: "False"
type: Upgradeable
- lastHeartbeatTime: "2024-02-05T04:57:26Z"
lastTransitionTime: "2024-02-04T18:01:31Z"
status: k8s
type: KMS-Type
- lastHeartbeatTime: "2024-02-05T04:57:26Z"
lastTransitionTime: "2024-02-04T18:01:32Z"
status: Sync
type: KMS-Status
observedGeneration: 1
phase: Configuring
postgresUpdatePhase: NoUpgrade
readme: "\n\n\tNooBaa operator is still working to reconcile this system.\n\tCheck
out the system status.phase, status.conditions, and events with:\n\n\t\tkubectl
-n openshift-storage describe noobaa\n\t\tkubectl -n openshift-storage get noobaa
-o yaml\n\t\tkubectl -n openshift-storage get events --sort-by=metadata.creationTimestamp\n\n\tYou
can wait for a specific condition with:\n\n\t\tkubectl -n openshift-storage
wait noobaa/noobaa --for condition=available --timeout -1s\n\n\tNooBaa Core
Version: master-20230920\n\tNooBaa Operator Version: 5.15.0\n"
services:
serviceMgmt:
externalDNS:
- https://noobaa-mgmt-openshift-storage.apps.aaruni-d149.redhat.com:443
internalDNS:
- https://noobaa-mgmt.openshift-storage.svc:443
internalIP:
- https://172.30.127.138:443
nodePorts:
- https://10.20.182.46:0
podPorts:
- https://10.131.1.97:8443
serviceS3:
externalDNS:
- https://s3-openshift-storage.apps.aaruni-d149.redhat.com:443
internalDNS:
- https://s3.openshift-storage.svc:443
internalIP:
- https://172.30.82.12:443
serviceSts:
externalDNS:
- https://sts-openshift-storage.apps.aaruni-d149.redhat.com:443
internalDNS:
- https://sts.openshift-storage.svc:443
internalIP:
- https://172.30.41.219:443
upgradePhase: NoUpgrade
kind: List
metadata:
resourceVersion: ""
[root@aaruni-d149-bastion-0 ~]#
With ODF build 4.15.0-134, I was able to deploy an External ODF cluster and NooBaa was in the Ready state. But with the latest build, i.e. 4.15.0-144, while deploying an Internal ODF cluster, I encountered the same issue: NooBaa is in the Configuring state.
[root@rdr-fence15-bastion-0 ~]# oc get csv -A
NAMESPACE NAME DISPLAY VERSION REPLACES PHASE
openshift-local-storage local-storage-operator.v4.15.0-202311280332 Local Storage 4.15.0-202311280332 Succeeded
openshift-operator-lifecycle-manager packageserver Package Server 0.0.1-snapshot Succeeded
openshift-storage mcg-operator.v4.15.0-144.stable NooBaa Operator 4.15.0-144.stable Succeeded
openshift-storage ocs-operator.v4.15.0-144.stable OpenShift Container Storage 4.15.0-144.stable Succeeded
openshift-storage odf-csi-addons-operator.v4.15.0-144.stable CSI Addons 4.15.0-144.stable Succeeded
openshift-storage odf-operator.v4.15.0-144.stable OpenShift Data Foundation 4.15.0-144.stable Succeeded
[root@rdr-fence15-bastion-0 ~]# oc get noobaa -n openshift-storage
NAME S3-ENDPOINTS STS-ENDPOINTS IMAGE PHASE AGE
noobaa registry.redhat.io/odf4/mcg-core-rhel9@sha256:bfd4cb6d5c7526fdc8f35d4fedb1ff9ecb39ac7564c8f05c2d5e12aea41048e9 Configuring 4m28s
[root@rdr-fence15-bastion-0 ~]# oc get noobaa -n openshift-storage -o yaml
apiVersion: v1
items:
- apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
creationTimestamp: "2024-02-19T09:55:15Z"
finalizers:
- noobaa.io/graceful_finalizer
generation: 1
labels:
app: noobaa
name: noobaa
namespace: openshift-storage
ownerReferences:
- apiVersion: ocs.openshift.io/v1
blockOwnerDeletion: true
controller: true
kind: StorageCluster
name: ocs-storagecluster
uid: 4df0c13c-cfdf-4516-98a8-effd6b6f7921
resourceVersion: "319337"
uid: e5622f6b-19dc-4864-8c1d-e667362b291d
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: cluster.ocs.openshift.io/openshift-storage
operator: Exists
autoscaler:
autoscalerType: hpav2
prometheusNamespace: openshift-monitoring
cleanupPolicy: {}
coreResources:
limits:
cpu: 999m
memory: 4Gi
requests:
cpu: 999m
memory: 4Gi
dbImage: registry.redhat.io/rhel9/postgresql-15@sha256:10e53e191e567248a514a7344c6d78432640aedbc1fa1f7b0364d3b88f8bde2c
dbResources:
limits:
cpu: 500m
memory: 4Gi
requests:
cpu: 500m
memory: 4Gi
dbStorageClass: ocs-storagecluster-ceph-rbd
dbType: postgres
dbVolumeResources:
requests:
storage: 50Gi
endpoints:
maxCount: 2
minCount: 1
resources:
limits:
cpu: 999m
memory: 2Gi
requests:
cpu: 999m
memory: 2Gi
image: registry.redhat.io/odf4/mcg-core-rhel9@sha256:bfd4cb6d5c7526fdc8f35d4fedb1ff9ecb39ac7564c8f05c2d5e12aea41048e9
labels:
monitoring: {}
loadBalancerSourceSubnets: {}
pvPoolDefaultStorageClass: ocs-storagecluster-ceph-rbd
security:
kms: {}
tolerations:
- effect: NoSchedule
key: node.ocs.openshift.io/storage
operator: Equal
value: "true"
status:
accounts:
admin:
secretRef:
name: noobaa-admin
namespace: openshift-storage
actualImage: registry.redhat.io/odf4/mcg-core-rhel9@sha256:bfd4cb6d5c7526fdc8f35d4fedb1ff9ecb39ac7564c8f05c2d5e12aea41048e9
conditions:
- lastHeartbeatTime: "2024-02-19T09:59:46Z"
lastTransitionTime: "2024-02-19T09:55:15Z"
message: 'cannot read admin account info, error: not anonymous method read_account'
reason: TemporaryError
status: "False"
type: Available
- lastHeartbeatTime: "2024-02-19T09:59:46Z"
lastTransitionTime: "2024-02-19T09:55:15Z"
message: 'cannot read admin account info, error: not anonymous method read_account'
reason: TemporaryError
status: "True"
type: Progressing
- lastHeartbeatTime: "2024-02-19T09:59:46Z"
lastTransitionTime: "2024-02-19T09:55:15Z"
message: 'cannot read admin account info, error: not anonymous method read_account'
reason: TemporaryError
status: "False"
type: Degraded
- lastHeartbeatTime: "2024-02-19T09:59:46Z"
lastTransitionTime: "2024-02-19T09:55:15Z"
message: 'cannot read admin account info, error: not anonymous method read_account'
reason: TemporaryError
status: "False"
type: Upgradeable
- lastHeartbeatTime: "2024-02-19T09:59:46Z"
lastTransitionTime: "2024-02-19T09:55:16Z"
status: k8s
type: KMS-Type
- lastHeartbeatTime: "2024-02-19T09:59:46Z"
lastTransitionTime: "2024-02-19T09:55:17Z"
status: Sync
type: KMS-Status
observedGeneration: 1
phase: Configuring
postgresUpdatePhase: NoUpgrade
readme: "\n\n\tNooBaa operator is still working to reconcile this system.\n\tCheck
out the system status.phase, status.conditions, and events with:\n\n\t\tkubectl
-n openshift-storage describe noobaa\n\t\tkubectl -n openshift-storage get noobaa
-o yaml\n\t\tkubectl -n openshift-storage get events --sort-by=metadata.creationTimestamp\n\n\tYou
can wait for a specific condition with:\n\n\t\tkubectl -n openshift-storage
wait noobaa/noobaa --for condition=available --timeout -1s\n\n\tNooBaa Core
Version: master-20230920\n\tNooBaa Operator Version: 5.15.0\n"
services:
serviceMgmt:
externalDNS:
- https://noobaa-mgmt-openshift-storage.apps.rdr-fence15.ibm.com:443
internalDNS:
- https://noobaa-mgmt.openshift-storage.svc:443
internalIP:
- https://172.30.158.228:443
nodePorts:
- https://10.20.187.36:0
podPorts:
- https://10.131.0.101:8443
serviceS3:
externalDNS:
- https://s3-openshift-storage.apps.rdr-fence15.ibm.com:443
internalDNS:
- https://s3.openshift-storage.svc:443
internalIP:
- https://172.30.24.172:443
serviceSts:
externalDNS:
- https://sts-openshift-storage.apps.rdr-fence15.ibm.com:443
internalDNS:
- https://sts.openshift-storage.svc:443
internalIP:
- https://172.30.226.158:443
upgradePhase: NoUpgrade
kind: List
metadata:
resourceVersion: ""
pods:
[root@rdr-fence15-bastion-0 ~]#
[root@rdr-fence15-bastion-0 ~]# oc get pods -n openshift-storage
NAME READY STATUS RESTARTS AGE
csi-addons-controller-manager-86996bdb6d-kwbtf 2/2 Running 0 61m
csi-cephfsplugin-57r8r 2/2 Running 0 14m
csi-cephfsplugin-l2x76 2/2 Running 1 (13m ago) 14m
csi-cephfsplugin-provisioner-cfbbb7d5d-4vlhs 6/6 Running 0 14m
csi-cephfsplugin-provisioner-cfbbb7d5d-cthdv 6/6 Running 0 14m
csi-cephfsplugin-w2vnt 2/2 Running 0 14m
csi-rbdplugin-cvl4j 3/3 Running 0 14m
csi-rbdplugin-p5pw7 3/3 Running 0 14m
csi-rbdplugin-provisioner-699c8f84f-m4cv9 6/6 Running 0 14m
csi-rbdplugin-provisioner-699c8f84f-sds8q 6/6 Running 0 14m
csi-rbdplugin-zlxjx 3/3 Running 0 14m
noobaa-core-0 1/1 Running 0 10m
noobaa-db-pg-0 1/1 Running 0 11m
noobaa-operator-55cf9b4b44-4lfrf 1/1 Running 0 62m
ocs-metrics-exporter-5b98864f59-86lbv 1/1 Running 0 11m
ocs-operator-d74f464fc-fndsv 1/1 Running 0 62m
odf-console-78c95fd8b4-pwkrx 1/1 Running 0 62m
odf-operator-controller-manager-6c9f4fbd5b-r2hbd 2/2 Running 0 62m
rook-ceph-crashcollector-worker-0-77cff6b86c-j4hkk 1/1 Running 0 11m
rook-ceph-crashcollector-worker-1-7559dc47dd-7kfzv 1/1 Running 0 11m
rook-ceph-crashcollector-worker-2-68ccdcbdd5-bqcp2 1/1 Running 0 11m
rook-ceph-exporter-worker-0-544c48b7b8-jgcsg 1/1 Running 0 11m
rook-ceph-exporter-worker-1-79d446fcdc-f8ggj 1/1 Running 0 11m
rook-ceph-exporter-worker-2-58b9c4d978-f9q42 1/1 Running 0 11m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-658dcf47qzfcs 2/2 Running 0 11m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-cfdf99c92l2bl 2/2 Running 0 11m
rook-ceph-mgr-a-6968dc7f85-925jf 3/3 Running 0 12m
rook-ceph-mgr-b-b959ff946-z29w4 3/3 Running 0 12m
rook-ceph-mon-a-655548f6b9-t8qmx 2/2 Running 0 13m
rook-ceph-mon-b-59f9fdd64b-x2m2w 2/2 Running 0 13m
rook-ceph-mon-c-78d5f67c88-bt9r9 2/2 Running 0 13m
rook-ceph-operator-8646cd4f4c-bhzvf 1/1 Running 0 14m
rook-ceph-osd-0-67f885b6b8-f8hq9 2/2 Running 0 12m
rook-ceph-osd-1-8498bfcddf-6xs98 2/2 Running 0 12m
rook-ceph-osd-2-7c9b995d6b-tg7ls 2/2 Running 0 12m
rook-ceph-osd-prepare-2e196a7cf16e661b02e6e50600e519e0-n9dg6 0/1 Completed 0 12m
rook-ceph-osd-prepare-68d752defee58d629e583903f0295d83-rtg9x 0/1 Completed 0 12m
rook-ceph-osd-prepare-ea2586877c8f36693fe974884793e27e-tkpn6 0/1 Completed 0 12m
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-57bc6dc28dfb 2/2 Running 0 11m
ux-backend-server-68ff4bf87f-vkfvr 2/2 Running 0 62m
> @aaaggarw, we want to know how frequently this is occurring. Please share this data with us.
Thanks Vinayak for looking at the issue.
We are now consistently facing this issue. The latest build, 4.15.0-157, was not working for us. Even build 4.15.0-150 was not working for my colleague.
Hi Vinayak, is there a way for a Red Hatter to get access to the Slack channel? If not, can you provide a status in this BZ? Thanks! B.
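For a quick status update, the standard queries below (nothing cluster-specific assumed) show the current NooBaa phase, its conditions, and recent events:

# Current phase of the NooBaa CR
oc get noobaa noobaa -n openshift-storage -o jsonpath='{.status.phase}{"\n"}'
# Full conditions and recent events for the CR
oc describe noobaa noobaa -n openshift-storage
oc get events -n openshift-storage --sort-by=metadata.creationTimestamp | tail -n 20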