Bug 1878307

Summary: Catalog polling intervals only occur every OLM sync cycle
Product: OpenShift Container Platform
Reporter: OpenShift BugZilla Robot <openshift-bugzilla-robot>
Component: OLM
OLM sub component: OLM
Assignee: Daniel Sover <dsover>
QA Contact: Jian Zhang <jiazha>
Status: CLOSED ERRATA
Severity: medium
Priority: medium
CC: jiazha, krizza, nhale
Version: 4.5
Keywords: UpcomingSprint
Target Release: 4.5.z
Last Closed: 2020-11-05 12:46:56 UTC
Bug Depends On: 1867802

Comment 2 Jian Zhang 2020-09-27 08:41:28 UTC
[root@preserve-olm-env data]# oc -n openshift-operator-lifecycle-manager exec catalog-operator-76db8688d7-jz5ss -- olm --version
OLM version: 0.15.1
git commit: 756140c115052e883470cee61c925158e908a10f

1. Create a CatalogSource whose polling interval is 5 minutes.
[root@preserve-olm-env data]# cat cs-win.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: wmco
  namespace: openshift-marketplace
spec:
  displayName: Windows operators
  sourceType: grpc
  image: quay.io/sgaoshang/wmco-index:1.0.0
  updateStrategy:
    registryPoll:
      interval: 5m

[root@preserve-olm-env data]# oc create -f cs-win.yaml 
catalogsource.operators.coreos.com/wmco created

[root@preserve-olm-env data]# oc get catalogsource
NAME                  DISPLAY               TYPE   PUBLISHER   AGE
certified-operators   Certified Operators   grpc   Red Hat     55m
community-operators   Community Operators   grpc   Red Hat     55m
redhat-marketplace    Red Hat Marketplace   grpc   Red Hat     55m
redhat-operators      Red Hat Operators     grpc   Red Hat     55m
wmco                  Windows operators     grpc               26s

But it is not clear why there are two pods here running the same image.
[root@preserve-olm-env data]# oc get pods
NAME                                   READY   STATUS    RESTARTS   AGE
certified-operators-d7f49f94-9w8xd     1/1     Running   0          80m
community-operators-79585fdc64-m56bw   1/1     Running   0          80m
marketplace-operator-8f66c787b-hm7zk   1/1     Running   0          81m
redhat-marketplace-6958b45cf5-pdcmt    1/1     Running   0          80m
redhat-operators-5dcdd4fbcf-74p5s      1/1     Running   0          80m
wmco-nll4c                             1/1     Running   0          3m8s
wmco-qfdtd                             1/1     Running   0          3m8s
[root@preserve-olm-env data]# oc get pods wmco-nll4c -o yaml|grep image
            f:image: {}
            f:imagePullPolicy: {}
  - image: quay.io/sgaoshang/wmco-index:1.0.0
    imagePullPolicy: Always
  imagePullSecrets:
    image: quay.io/sgaoshang/wmco-index:1.0.0
    imageID: quay.io/sgaoshang/wmco-index@sha256:9dafa56eb4946f73cb8e8a9d4b3b44458af074ec80f3a898b3636d5c45569c73
[root@preserve-olm-env data]# oc get pods wmco-qfdtd -o yaml|grep image
            f:image: {}
            f:imagePullPolicy: {}
        f:imagePullSecrets:
            f:image: {}
            f:imagePullPolicy: {}
  - image: quay.io/sgaoshang/wmco-index:1.0.0
    imagePullPolicy: Always
  imagePullSecrets:
    image: quay.io/sgaoshang/wmco-index:1.0.0
    imageID: quay.io/sgaoshang/wmco-index@sha256:9dafa56eb4946f73cb8e8a9d4b3b44458af074ec80f3a898b3636d5c45569c73

After 5 minutes, they are both deleted and a new one is created.

[root@preserve-olm-env data]# oc get pods
NAME                                   READY   STATUS    RESTARTS   AGE
certified-operators-d7f49f94-9w8xd     1/1     Running   0          82m
community-operators-79585fdc64-m56bw   1/1     Running   0          82m
marketplace-operator-8f66c787b-hm7zk   1/1     Running   0          83m
redhat-marketplace-6958b45cf5-pdcmt    1/1     Running   0          82m
redhat-operators-5dcdd4fbcf-74p5s      1/1     Running   0          82m
wmco-7tm5z                             0/1     Running   0          7s

But this new pod has been running for 9 minutes:
[root@preserve-olm-env data]# oc get pods
NAME                                   READY   STATUS    RESTARTS   AGE
certified-operators-d7f49f94-9w8xd     1/1     Running   0          91m
community-operators-79585fdc64-m56bw   1/1     Running   0          91m
marketplace-operator-8f66c787b-hm7zk   1/1     Running   0          92m
redhat-marketplace-6958b45cf5-pdcmt    1/1     Running   0          91m
redhat-operators-5dcdd4fbcf-74p5s      1/1     Running   0          91m
wmco-7tm5z                             1/1     Running   0          9m11s

[root@preserve-olm-env data]# oc get pods wmco-7tm5z -o yaml|grep image 
            f:image: {}
            f:imagePullPolicy: {}
        f:imagePullSecrets:
            f:image: {}
            f:imagePullPolicy: {}
  - image: quay.io/sgaoshang/wmco-index:1.0.0
    imagePullPolicy: Always
  imagePullSecrets:
    image: quay.io/sgaoshang/wmco-index:1.0.0
    imageID: quay.io/sgaoshang/wmco-index@sha256:9dafa56eb4946f73cb8e8a9d4b3b44458af074ec80f3a898b3636d5c45569c73

Comment 3 Jian Zhang 2020-09-28 07:42:47 UTC
Added more details:

Cluster version is 4.5.0-0.nightly-2020-09-27-230429
[root@preserve-olm-env data]# oc -n openshift-operator-lifecycle-manager exec catalog-operator-77885bd5f-pnqcn -- olm --version
OLM version: 0.15.1
git commit: 756140c115052e883470cee61c925158e908a10f

1. Create a CatalogSource whose polling interval is 5 minutes.
[root@preserve-olm-env data]# cat cs-win.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: wmco
  namespace: openshift-marketplace
spec:
  displayName: Windows operators
  sourceType: grpc
  image: quay.io/sgaoshang/wmco-index:1.0.0
  updateStrategy:
    registryPoll:
      interval: 5m

[root@preserve-olm-env data]# oc create -f cs-win.yaml 
catalogsource.operators.coreos.com/wmco created

[root@preserve-olm-env data]# oc get pods
NAME                                    READY   STATUS    RESTARTS   AGE
certified-operators-54fdccdfbd-xfbz7    1/1     Running   0          18m
community-operators-6bd74db8c4-7gp9h    1/1     Running   0          18m
marketplace-operator-545dbff9b8-dsc5r   1/1     Running   0          35m
redhat-marketplace-76cb4579-2bfb8       1/1     Running   0          18m
redhat-operators-57777cdfc9-rwf6b       1/1     Running   0          18m
wmco-fd5wt                              1/1     Running   0          2m48s
[root@preserve-olm-env data]# date
Mon Sep 28 03:10:55 EDT 2020

[root@preserve-olm-env data]# oc get pods wmco-fd5wt -o=jsonpath='{.metadata.labels}'
{"olm.catalogSource":"wmco"}

[root@preserve-olm-env data]# oc get pods wmco-fd5wt -o=jsonpath='{.status.containerStatuses[0].imageID}'
quay.io/sgaoshang/wmco-index@sha256:9dafa56eb4946f73cb8e8a9d4b3b44458af074ec80f3a898b3636d5c45569c73

About 7 minutes later, the update pod was generated.
[root@preserve-olm-env data]# oc get pods -w
NAME                                    READY   STATUS    RESTARTS   AGE
certified-operators-54fdccdfbd-xfbz7    1/1     Running   0          22m
community-operators-6bd74db8c4-7gp9h    1/1     Running   0          22m
marketplace-operator-545dbff9b8-dsc5r   1/1     Running   0          39m
redhat-marketplace-76cb4579-2bfb8       1/1     Running   0          22m
redhat-operators-57777cdfc9-rwf6b       1/1     Running   0          22m
wmco-fd5wt                              1/1     Running   0          6m47s
wmco-mgm5m                              0/1     Pending   0          0s
wmco-mgm5m                              0/1     Pending   0          0s
wmco-mgm5m                              0/1     ContainerCreating   0          0s
wmco-mgm5m                              0/1     ContainerCreating   0          2s
wmco-mgm5m                              0/1     Running             0          4s
wmco-mgm5m                              1/1     Running             0          11s

[root@preserve-olm-env data]# oc get pods
NAME                                    READY   STATUS    RESTARTS   AGE
certified-operators-54fdccdfbd-xfbz7    1/1     Running   0          30m
community-operators-6bd74db8c4-7gp9h    1/1     Running   0          30m
marketplace-operator-545dbff9b8-dsc5r   1/1     Running   0          46m
redhat-marketplace-76cb4579-2bfb8       1/1     Running   0          30m
redhat-operators-57777cdfc9-rwf6b       1/1     Running   0          30m
wmco-fd5wt                              1/1     Running   0          14m
wmco-mgm5m                              1/1     Running   0          7m13s

But the update pod stays alive for a long time; it should be deleted immediately since its imageID is the same as the serving pod's.
[root@preserve-olm-env data]# oc get pods wmco-mgm5m -o=jsonpath='{.status.containerStatuses[0].imageID}'
quay.io/sgaoshang/wmco-index@sha256:9dafa56eb4946f73cb8e8a9d4b3b44458af074ec80f3a898b3636d5c45569c73

[root@preserve-olm-env data]# oc get pods wmco-mgm5m -o=jsonpath='{.metadata.labels}'
{"catalogsource.operators.coreos.com/update":"wmco","olm.catalogSource":""}

After a while, a new update pod was generated and the old update pod was deleted.
[root@preserve-olm-env data]# oc get pods -w
NAME                                    READY   STATUS    RESTARTS   AGE
certified-operators-54fdccdfbd-xfbz7    1/1     Running   0          30m
community-operators-6bd74db8c4-7gp9h    1/1     Running   0          30m
marketplace-operator-545dbff9b8-dsc5r   1/1     Running   0          46m
redhat-marketplace-76cb4579-2bfb8       1/1     Running   0          30m
redhat-operators-57777cdfc9-rwf6b       1/1     Running   0          30m
wmco-fd5wt                              1/1     Running   0          14m
wmco-mgm5m                              1/1     Running   0          7m13s
wmco-mgm5m                              1/1     Terminating   0          8m42s
wmco-mgm5m                              1/1     Terminating   0          8m42s
wmco-8hlr7                              0/1     Pending       0          0s
wmco-8hlr7                              0/1     Pending       0          0s
wmco-8hlr7                              0/1     ContainerCreating   0          0s
wmco-8hlr7                              0/1     ContainerCreating   0          2s
wmco-8hlr7                              0/1     Running             0          3s
wmco-8hlr7                              1/1     Running             0          13s

[root@preserve-olm-env data]#  oc get pods -l catalogsource.operators.coreos.com/update=wmco
NAME         READY   STATUS    RESTARTS   AGE
wmco-8hlr7   1/1     Running   0          117s
[root@preserve-olm-env data]#  oc get pods -l olm.catalogSource=wmco
NAME         READY   STATUS    RESTARTS   AGE
wmco-fd5wt   1/1     Running   0          18m
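
The expectation stated above (an update pod whose digest matches the serving pod's is redundant and should be removed) boils down to a digest comparison. Below is a minimal sketch of that check; the digest values are copied from the oc output in this comment, and on a live cluster they would come from the two label queries shown as comments:

```shell
# On a live cluster the digests would come from (not executed here):
#   oc get pods -l olm.catalogSource=wmco -o=jsonpath='{.items[0].status.containerStatuses[0].imageID}'
#   oc get pods -l catalogsource.operators.coreos.com/update=wmco -o=jsonpath='{.items[0].status.containerStatuses[0].imageID}'
serving_digest="quay.io/sgaoshang/wmco-index@sha256:9dafa56eb4946f73cb8e8a9d4b3b44458af074ec80f3a898b3636d5c45569c73"
update_digest="quay.io/sgaoshang/wmco-index@sha256:9dafa56eb4946f73cb8e8a9d4b3b44458af074ec80f3a898b3636d5c45569c73"

if [ "$serving_digest" = "$update_digest" ]; then
  # Identical digest: the update pod serves the same catalog content, so
  # the catalog-operator should delete it rather than leave it running.
  result="no-op update"
else
  result="new catalog content"
fi
echo "$result"
```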

After a while, the serving pod (wmco-fd5wt) was deleted too. But why? There were no updates to the image digest. See:
[root@preserve-olm-env data]# oc get pods -w
NAME                                    READY   STATUS    RESTARTS   AGE
certified-operators-54fdccdfbd-xfbz7    1/1     Running   0          36m
community-operators-6bd74db8c4-7gp9h    1/1     Running   0          36m
marketplace-operator-545dbff9b8-dsc5r   1/1     Running   0          52m
redhat-marketplace-76cb4579-2bfb8       1/1     Running   0          36m
redhat-operators-57777cdfc9-rwf6b       1/1     Running   0          36m
wmco-8hlr7                              1/1     Running   0          4m35s
wmco-fd5wt                              1/1     Running   0          20m
wmco-8hlr7                              1/1     Terminating   0          6m18s
wmco-8hlr7                              1/1     Terminating   0          6m18s
wmco-qbqnr                              0/1     Pending       0          0s
wmco-qbqnr                              0/1     Pending       0          0s
wmco-qbqnr                              0/1     ContainerCreating   0          0s
wmco-qbqnr                              0/1     ContainerCreating   0          0s
wmco-fd5wt                              1/1     Terminating         0          21m
wmco-fd5wt                              1/1     Terminating         0          21m
wmco-fsnzs                              0/1     Pending             0          0s
wmco-fsnzs                              0/1     Pending             0          0s
wmco-fsnzs                              0/1     ContainerCreating   0          0s
wmco-qbqnr                              0/1     ContainerCreating   0          2s
wmco-fsnzs                              0/1     ContainerCreating   0          2s
wmco-fsnzs                              0/1     Running             0          3s
wmco-qbqnr                              0/1     Running             0          4s
wmco-qbqnr                              1/1     Running             0          12s
wmco-fsnzs                              1/1     Running             0          16s


[root@preserve-olm-env data]# oc get pods
NAME                                    READY   STATUS    RESTARTS   AGE
certified-operators-54fdccdfbd-xfbz7    1/1     Running   0          39m
community-operators-6bd74db8c4-7gp9h    1/1     Running   0          39m
marketplace-operator-545dbff9b8-dsc5r   1/1     Running   0          55m
redhat-marketplace-76cb4579-2bfb8       1/1     Running   0          39m
redhat-operators-57777cdfc9-rwf6b       1/1     Running   0          39m
wmco-fsnzs                              1/1     Running   0          84s
wmco-qbqnr                              1/1     Running   0          85s


[root@preserve-olm-env data]# oc get pods
NAME                                    READY   STATUS    RESTARTS   AGE
certified-operators-54fdccdfbd-xfbz7    1/1     Running   0          40m
community-operators-6bd74db8c4-7gp9h    1/1     Running   0          40m
marketplace-operator-545dbff9b8-dsc5r   1/1     Running   0          56m
redhat-marketplace-76cb4579-2bfb8       1/1     Running   0          40m
redhat-operators-57777cdfc9-rwf6b       1/1     Running   0          40m
wmco-fsnzs                              1/1     Running   0          2m39s
wmco-qbqnr                              1/1     Running   0          2m40s

[root@preserve-olm-env data]# oc get pods wmco-fsnzs -o=jsonpath='{.status.containerStatuses[0].imageID}'
quay.io/sgaoshang/wmco-index@sha256:9dafa56eb4946f73cb8e8a9d4b3b44458af074ec80f3a898b3636d5c45569c73
[root@preserve-olm-env data]# 
[root@preserve-olm-env data]# oc get pods wmco-qbqnr -o=jsonpath='{.status.containerStatuses[0].imageID}'
quay.io/sgaoshang/wmco-index@sha256:9dafa56eb4946f73cb8e8a9d4b3b44458af074ec80f3a898b3636d5c45569c73

Comment 11 Jian Zhang 2020-10-26 03:31:25 UTC
Cluster version is 4.5.0-0.nightly-2020-10-25-174204

[root@preserve-olm-env data]# oc -n openshift-operator-lifecycle-manager  exec catalog-operator-6fcf845f79-47xjh -- olm --version
OLM version: 0.15.1
git commit: 8623c6f5d598aa7159d699bb900ca109b810db77

1. Create a CatalogSource whose polling interval is 5 minutes.
[root@preserve-olm-env data]# cat cs-win.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: wmco
  namespace: openshift-marketplace
spec:
  displayName: Windows operators
  sourceType: grpc
  image: quay.io/sgaoshang/wmco-index:1.0.0
  updateStrategy:
    registryPoll:
      interval: 5m

[root@preserve-olm-env data]# oc create -f cs-win.yaml 
catalogsource.operators.coreos.com/wmco created

As we can see, the polling pod was generated after about 7 minutes, which is past the 5-minute interval but well under 15 minutes. LGTM, verifying it.
[root@preserve-olm-env data]# oc get pods -w
NAME                                   READY   STATUS    RESTARTS   AGE
certified-operators-787b75b847-shp2z   1/1     Running   0          90m
community-operators-94cfd79fb-jdxtp    1/1     Running   0          90m
marketplace-operator-6667d6b5c-2b8wr   1/1     Running   0          91m
poll-test-pjrqh                        1/1     Running   0          21m
redhat-marketplace-6466dd47f9-sfxhv    1/1     Running   0          90m
redhat-operators-7469b76899-mk689      1/1     Running   0          90m
wmco-6fs4m                             1/1     Running   0          7m44s
wmco-kxr84                             0/1     Pending   0          0s
wmco-kxr84                             0/1     Pending   0          0s
wmco-kxr84                             0/1     ContainerCreating   0          0s
wmco-kxr84                             0/1     ContainerCreating   0          2s
wmco-kxr84                             0/1     ContainerCreating   0          3s
wmco-kxr84                             0/1     Running             0          4s
wmco-kxr84                             1/1     Running             0          12s
wmco-kxr84                             1/1     Terminating         0          12s
wmco-kxr84                             1/1     Terminating         0          12s
wmco-f2v8g                             0/1     Pending             0          0s
wmco-f2v8g                             0/1     Pending             0          0s
wmco-f2v8g                             0/1     ContainerCreating   0          0s
wmco-f2v8g                             0/1     ContainerCreating   0          2s
wmco-f2v8g                             0/1     Running             0          4s
wmco-f2v8g                             1/1     Running             0          12s
wmco-f2v8g                             1/1     Terminating         0          12s
wmco-f2v8g                             1/1     Terminating         0          12s
poll-test-lmd8k                        0/1     Pending             0          0s
poll-test-lmd8k                        0/1     Pending             0          0s
poll-test-lmd8k                        0/1     ContainerCreating   0          0s
poll-test-lmd8k                        0/1     ContainerCreating   0          2s
poll-test-lmd8k                        0/1     Running             0          4s
poll-test-lmd8k                        1/1     Running             0          8s
poll-test-lmd8k                        1/1     Terminating         0          8s
poll-test-lmd8k                        1/1     Terminating         0          8s
...
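
The pass criterion used in the verification above (the first poll fires after the configured 5-minute interval but well inside the sync window) can be sketched as a timing check. The observed delay is read from the wmco-6fs4m AGE column, and the 15-minute figure is an assumption about the catalog-operator's default resync period:

```python
# Minimal sketch of the verification criterion; the resync period is an
# assumption, not a value taken from this bug report.
from datetime import timedelta

poll_interval = timedelta(minutes=5)               # spec.updateStrategy.registryPoll.interval
resync_period = timedelta(minutes=15)              # assumed global OLM sync period
observed_delay = timedelta(minutes=7, seconds=44)  # AGE of wmco-6fs4m when the poll fired

# Before the fix, polls only fired on the sync-cycle boundary; after the fix
# the first poll should land after the interval but inside the sync window.
assert poll_interval <= observed_delay < resync_period
print("poll timing OK")
```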

Comment 14 errata-xmlrpc 2020-11-05 12:46:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.5.17 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4325