Bug 2043510 - noobaa core images are not upgraded to 4.10-113 images
Summary: noobaa core images are not upgraded to 4.10-113 images
Keywords:
Status: CLOSED DUPLICATE of bug 2043513
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Nimrod Becker
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-01-21 12:06 UTC by Vijay Avuthu
Modified: 2023-08-09 16:49 UTC (History)
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-01-26 15:28:06 UTC
Embargoed:



Description Vijay Avuthu 2022-01-21 12:06:43 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

noobaa core images are not upgraded to 4.10-113 images

Version of all relevant components (if applicable):

upgraded from ocs-registry:4.9.2-9 to ocs-registry:4.10.0-113

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
Yes

Is there any workaround available to the best of your knowledge?
NA

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
yes

Can this issue be reproduced from the UI?
Not tried

If this is a regression, please provide more details to justify this:
Yes

Steps to Reproduce:
1. upgrade odf from ocs-registry:4.9.2-9 to ocs-registry:4.10.0-113
2. check the images in the noobaa-core pod (see the example command after these steps)
3.
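
As a sketch of step 2 (assuming the default openshift-storage namespace used by ODF), the running image can be read straight from the pod spec:

$ oc get pod noobaa-core-0 -n openshift-storage -o jsonpath='{.spec.containers[*].image}{"\n"}'

and compared against the mcg-core digest expected for ocs-registry:4.10.0-113.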


Actual results:

2022-01-20 21:36:05  16:06:05 - MainThread - ocs_ci.ocs.resources.pod - WARNING - Images: {'quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:5507f2c1074bfb023415f0fef16ec42fbe6e90c540fc45f1111c8c929e477910'} weren't upgraded in: noobaa-core-0!


Expected results:

images should be upgraded to the associated version in 4.10.0-113

expected image is quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:112ada3e385189131064e39f196aa33e0ba43d1f586a5b6352967a87b7fdc792
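
One way to confirm which mcg-core image the installed operator actually expects (a sketch, assuming the default openshift-storage namespace; the NooBaa CR is assumed to have its usual name, noobaa):

$ oc get csv -n openshift-storage -o yaml | grep mcg-core
$ oc get noobaa noobaa -n openshift-storage -o jsonpath='{.spec.image}{"\n"}'

The first command shows the image reference carried by the CSV, the second the image the operator has set on the NooBaa CR.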

Additional info:

job: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster-prod/3022/consoleFull

must gather: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-128ai3c33-ua/j-128ai3c33-ua_20220120T114821/logs/failed_testcase_ocs_logs_1642682264/test_upgrade_ocs_logs/

Comment 5 Jose A. Rivera 2022-01-26 14:34:43 UTC
...did anyone look at the StorageCluster status?

    conditions:
    - lastHeartbeatTime: "2022-01-20T16:28:30Z"
      lastTransitionTime: "2022-01-20T15:51:44Z"
      message: Reconcile completed successfully
      reason: ReconcileCompleted
      status: "True"
      type: ReconcileComplete
    - lastHeartbeatTime: "2022-01-20T16:28:25Z"
      lastTransitionTime: "2022-01-20T15:52:39Z"
      message: 'CephCluster error: failed to create cluster: failed to start ceph monitors: failed to start mon pods: failed to wait for mon quorum: exceeded max retry count waiting for monitors to reach quorum'
      reason: ClusterStateError
      status: "False"
      type: Available
    - lastHeartbeatTime: "2022-01-20T16:28:30Z"
      lastTransitionTime: "2022-01-20T15:52:39Z"
      message: 'CephCluster is creating: Configuring Ceph Mons'
      reason: ClusterStateCreating
      status: "True"
      type: Progressing
    - lastHeartbeatTime: "2022-01-20T16:28:25Z"
      lastTransitionTime: "2022-01-20T15:52:39Z"
      message: 'CephCluster error: failed to create cluster: failed to start ceph monitors: failed to start mon pods: failed to wait for mon quorum: exceeded max retry count waiting for monitors to reach quorum'
      reason: ClusterStateError
      status: "True"
      type: Degraded
    - lastHeartbeatTime: "2022-01-20T16:28:30Z"
      lastTransitionTime: "2022-01-20T15:52:39Z"
      message: 'CephCluster is creating: Configuring Ceph Mons'
      reason: ClusterStateCreating
      status: "False"
      type: Upgradeable

And the operator logs have a lot of "Waiting on Ceph Cluster to initialize before starting Noobaa.", so that'd be why it hasn't touched the NooBaa CR.

Seems there's a mon Pod in CLBO:

rook-ceph-mon-b-5f6cfbd5d6-98hg2                                  1/2     CrashLoopBackOff       14 (101s ago)   48m     10.128.2.39    ip-10-0-139-255.us-east-2.compute.internal   <none>           <none>

I'm not sure what's going on with it; I can't see anything immediately obvious in the mon-b logs, so I'll need help analyzing that. Travis, Seb?
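
For anyone retracing this, roughly the same data can be pulled with the following (a sketch, assuming the default openshift-storage namespace; the mon pod name is the one listed above and will differ on other clusters):

$ oc get storagecluster -n openshift-storage -o yaml
$ oc logs deploy/ocs-operator -n openshift-storage | grep -i noobaa
$ oc describe pod rook-ceph-mon-b-5f6cfbd5d6-98hg2 -n openshift-storage
$ oc logs rook-ceph-mon-b-5f6cfbd5d6-98hg2 -c mon -n openshift-storage --previous

The first shows the StorageCluster conditions quoted above, the second the "Waiting on Ceph Cluster to initialize before starting Noobaa." messages, and the last two the restart reason and the log of the previously crashed mon container (the container is assumed to be named mon, as Rook usually names it).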

Comment 6 Sébastien Han 2022-01-26 15:23:29 UTC
It's a dup of another BZ that I need to track down. I'll close this one once I find it.

Comment 7 Sébastien Han 2022-01-26 15:28:06 UTC

*** This bug has been marked as a duplicate of bug 2043513 ***

