Bug 2189483 - After upgrade, the noobaa-db-pg-0 pod uses the old image in one of its containers
Summary: After upgrade, the noobaa-db-pg-0 pod uses the old image in one of its containers
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.13.0
Assignee: Utkarsh Srivastava
QA Contact: Shivam Durgbuns
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-04-25 12:05 UTC by Petr Balogh
Modified: 2023-08-09 16:49 UTC
CC List: 6 users

Fixed In Version: 4.13.0-176
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-06-21 15:25:08 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github noobaa noobaa-operator pull 1104 0 None Merged Fix init container cleanup, Rollback lib-bucket-provisioner, Fix autoscaler 2023-04-25 12:13:36 UTC
Github noobaa noobaa-operator pull 1107 0 None Merged [Backport to 5.13] Rollback lib-bucket-prov, fix DB STS cleanup and defaults 2023-04-25 12:13:36 UTC
Red Hat Product Errata RHBA-2023:3742 0 None None None 2023-06-21 15:25:52 UTC

Description Petr Balogh 2023-04-25 12:05:13 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
After the successful upgrade from 4.12 to 4.13.0-169, we see that the noobaa-db-pg pod is not upgraded to the expected image mentioned in the CSV.
NAME                                         DISPLAY                       VERSION             REPLACES                                PHASE
mcg-operator.v4.13.0-169.stable              NooBaa Operator               4.13.0-169.stable   mcg-operator.v4.12.2-rhodf              Succeeded
ocs-operator.v4.13.0-169.stable              OpenShift Container Storage   4.13.0-169.stable   ocs-operator.v4.12.2-rhodf              Succeeded
odf-csi-addons-operator.v4.13.0-169.stable   CSI Addons                    4.13.0-169.stable   odf-csi-addons-operator.v4.12.2-rhodf   Succeeded
odf-operator.v4.13.0-169.stable              OpenShift Data Foundation     4.13.0-169.stable   odf-operator.v4.12.2-rhodf              Succeeded

2023-04-21 01:23:36  23:23:36 - MainThread - ocs_ci.ocs.ocp - INFO  - All the images: {'core': 'quay.io/rhceph-dev/odf4-mcg-core-rhel9@sha256:8372d403846d1f1fd933a22bc1d0cba6adaccb0ca0b938b40e3db70ace089e41'} were successfully upgraded in: noobaa-core-0!
2023-04-21 01:23:36  23:23:36 - MainThread - ocs_ci.utility.utils - INFO  - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage -o yaml
2023-04-21 01:23:36  23:23:36 - MainThread - ocs_ci.ocs.resources.pod - WARNING  - Images: {'registry.redhat.io/odf4/mcg-core-rhel8@sha256:2f02bff3d69ac01a17e427eb78e05528fa270e8c92450ca603dfbf5ba8366824'} weren't upgraded in: noobaa-db-pg-0!



Specifically, this init container:
- containerID: cri-o://470b4522a7a0817487ae6ea3d2d9f0948fe746478f6e9b840c6579de92bd1920
    image: registry.redhat.io/odf4/mcg-core-rhel8@sha256:2f02bff3d69ac01a17e427eb78e05528fa270e8c92450ca603dfbf5ba8366824
    imageID: registry.redhat.io/odf4/mcg-core-rhel8@sha256:2f02bff3d69ac01a17e427eb78e05528fa270e8c92450ca603dfbf5ba8366824
    lastState: {}
    name: init
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: cri-o://470b4522a7a0817487ae6ea3d2d9f0948fe746478f6e9b840c6579de92bd1920
        exitCode: 0
        finishedAt: "2023-04-20T23:25:27Z"
        reason: Completed
        startedAt: "2023-04-20T23:25:27Z"

In the CSV, the NooBaa core image should be:
                - name: NOOBAA_CORE_IMAGE
                  value: quay.io/rhceph-dev/odf4-mcg-core-rhel9@sha256:8372d403846d1f1fd933a22bc1d0cba6adaccb0ca0b938b40e3db70ace089e41
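
The expected image can be read back from the CSV itself (a sketch, assuming the CSV name from the table above; NOOBAA_CORE_IMAGE sits under the operator deployment's container env in spec.install.spec.deployments):

# Sketch: show the NOOBAA_CORE_IMAGE env entry declared in the CSV
oc -n openshift-storage get csv mcg-operator.v4.13.0-169.stable -o yaml \
  | grep -A1 'name: NOOBAA_CORE_IMAGE'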




Version of all relevant components (if applicable):
ODF: v4.13.0-169

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes: upgrade 4.12 to v4.13.0-169.

Can this issue be reproduced from the UI?
Haven't tried, but probably yes.


If this is a regression, please provide more details to justify this:
Yes

Steps to Reproduce:
1. Install 4.12
2. Upgrade to 4.13 (see the sketch below)
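
A minimal sketch of step 2, assuming the operators were installed through OLM and that the subscription is named odf-operator (the subscription name in a given cluster may differ); switching the subscription channel triggers the upgrade:

# Sketch: move the ODF subscription to the 4.13 channel (subscription name is an assumption)
oc -n openshift-storage patch subscription odf-operator \
  --type merge -p '{"spec":{"channel":"stable-4.13"}}'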


Actual results:
The MCG noobaa-db-pg-0 init container uses the old image instead of the new one.

Expected results:
All containers use the new image mentioned in the CSV.

Additional info:

Comment 4 Mudit Agarwal 2023-04-26 09:54:32 UTC
Petr, QA ack please.

Comment 19 errata-xmlrpc 2023-06-21 15:25:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742

