Bug 1870453 - [Catalog updates] Should not compare the digest if cannot the new update pod's imageID
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 4.6.0
Assignee: Evan Cordell
QA Contact: Jian Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-20 07:02 UTC by Jian Zhang
Modified: 2020-10-27 16:30 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:29:46 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github operator-framework operator-lifecycle-manager pull 1729 0 None closed Bug 1870453: Should not compare the digest if cannot the new update pod's imageID 2021-01-14 07:39:53 UTC
Red Hat Product Errata RHBA-2020:4196 0 None None None 2020-10-27 16:30:01 UTC

Description Jian Zhang 2020-08-20 07:02:55 UTC
Description of problem:
During CatalogSource pod polling, OLM compares the image digest of the update pod against that of the currently served pod. However, OLM cannot obtain the imageID of the update pod right after it is created. OLM still performs the digest comparison in that case, which leads to unnecessary update operations. As follows:

time="2020-08-19T05:28:57Z" level=warning msg="pod status unknown" CatalogSource=qe-app-registry-v5xz4
time="2020-08-19T05:28:57Z" level=info msg="ImageID " CatalogSource=qe-app-registry-v5xz4
time="2020-08-19T05:28:57Z" level=info msg="Update Pod ImageID " CatalogSource=qe-app-registry-v5xz4


Version-Release number of selected component (if applicable):
OLM git commits: c3852d57c86707deb80c042c2155ad82c2d9628f

How reproducible:
always

Steps to Reproduce:
1. Install OCP 4.6.
2. After a while, check the OLM catalog-operator logs.

[root@preserve-olm-env data]# oc -n openshift-operator-lifecycle-manager logs catalog-operator-6944b55486-cdn7b > catalog.0820

Actual results:
OLM still runs the update operation even if the imageID of the update pod is empty.

Expected results:
When the imageID of the update pod is empty, OLM should not run the update operation.


Additional info:

Comment 1 Jian Zhang 2020-08-20 07:06:54 UTC
Submitted a PR for it: https://github.com/operator-framework/operator-lifecycle-manager/pull/1729

Comment 2 Kevin Rizza 2020-08-20 19:11:00 UTC
Marking this for upcoming sprint, as it needs to be rebased on top of some other changes that are currently in progress.

Comment 7 Jian Zhang 2020-09-21 09:31:40 UTC
LGTM, verified it. Details:
Cluster version is 4.6.0-0.nightly-2020-09-20-184226

[root@preserve-olm-env data]# oc exec catalog-operator-76484c8f8d-7c49f -- olm --version
OLM version: 0.16.1
git commit: 026fa7a609b57f740b4873522eb283f0a5f11d04

1, Install OCP 4.6
2, Subscribe to an Operator. (Regression test)
[root@preserve-olm-env data]# oc get sub -n openshift-kube-descheduler-operator
NAME                                PACKAGE                             SOURCE            CHANNEL
cluster-kube-descheduler-operator   cluster-kube-descheduler-operator   qe-app-registry   4.6
[root@preserve-olm-env data]# oc get csv -n openshift-kube-descheduler-operator
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.6.0-202009192030.p0   Kube Descheduler Operator   4.6.0-202009192030.p0              Succeeded

3, Update an Index image.
4, Check the OLM logs. We should no longer see the `pod status unknown` warning.

Comment 10 errata-xmlrpc 2020-10-27 16:29:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

