Bug 2021202
| Summary: | imagePullPolicy is "Always" for performance-addon-rhel8-operator image | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Vitaly Grinberg <vgrinber> |
| Component: | Performance Addon Operator | Assignee: | Martin Sivák <msivak> |
| Status: | CLOSED ERRATA | QA Contact: | Gowrishankar Rajaiyan <grajaiya> |
| Severity: | low | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 4.9 | CC: | aos-bugs, shajmakh |
| Target Milestone: | --- | ||
| Target Release: | 4.10.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | Cause: A single-node cluster, or any other edge node, loses its connection to the image registry, or that connection is bandwidth limited. Consequence: The operator cannot start (image pull timeout) even though the image is in the local CRI-O cache. Fix: Do not pull the image again when it is already available on the node. Result: The operator starts from the local image cache. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-03-10 19:34:25 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2052622, 2055019 | | |
Description Vitaly Grinberg 2021-11-08 14:49:01 UTC
Verification:
Version:
ocp: 4.10.0-rc.1
pao: registry-proxy.engineering.redhat.com/rh-osbs/openshift4-performance-addon-rhel8-operator@sha256:607b829f1ac58e2851d6188ccc2acdccb5f4e9ef4d092a2851fff05f20d74017 corresponding to v4.10.0-32
Steps:
- Install PAO using PPC and inspect the pod's ReplicaSet:
value: performance-addon-operator.v4.10.0
image: registry.redhat.io/openshift4/performance-addon-rhel8-operator@sha256:607b829f1ac58e2851d6188ccc2acdccb5f4e9ef4d092a2851fff05f20d74017
imagePullPolicy: IfNotPresent [1]
- Reboot the node on which the pod is currently running.
- Check the pod description/events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned openshift-performance-addon-operator/performance-operator-6bfbf5dd54-6bhml to ocp410-master-2.demo.lab.mniranja
Normal AddedInterface 15m multus Add eth0 [10.132.0.59/23] from openshift-sdn
Normal Pulling 15m kubelet Pulling image "registry.redhat.io/openshift4/performance-addon-rhel8-operator@sha256:607b829f1ac58e2851d6188ccc2acdccb5f4e9ef4d092a2851fff05f20d74017"
Normal Pulled 14m kubelet Successfully pulled image "registry.redhat.io/openshift4/performance-addon-rhel8-operator@sha256:607b829f1ac58e2851d6188ccc2acdccb5f4e9ef4d092a2851fff05f20d74017" in 32.747670506s
Normal Created 14m kubelet Created container performance-operator
Normal Started 14m kubelet Started container performance-operator
Warning NodeNotReady 3m47s node-controller Node is not ready [2]
Normal AddedInterface 2m27s multus Add eth0 [10.132.0.19/23] from openshift-sdn
Normal Pulled 2m27s kubelet Container image "registry.redhat.io/openshift4/performance-addon-rhel8-operator@sha256:607b829f1ac58e2851d6188ccc2acdccb5f4e9ef4d092a2851fff05f20d74017" already present on machine [3]
Normal Created 2m26s kubelet Created container performance-operator
Normal Started 2m25s kubelet Started container performance-operator
[1] The pull policy is now IfNotPresent.
[2] The node is rebooting.
[3] After the machine is up again, the kubelet tries to rerun the pod and pull the image; since the image already exists on that node, it uses the cached copy and performs no additional pull.
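The verified behavior corresponds to a pod template along these lines (a minimal sketch assembled from the image reference and container name shown in the events above, not the full Deployment manifest):

```yaml
# Sketch of the relevant part of the performance-operator pod template.
# With a digest-pinned image and imagePullPolicy: IfNotPresent, the kubelet
# reuses the image from the local CRI-O cache after a node reboot instead
# of re-pulling it from the registry.
containers:
  - name: performance-operator
    image: registry.redhat.io/openshift4/performance-addon-rhel8-operator@sha256:607b829f1ac58e2851d6188ccc2acdccb5f4e9ef4d092a2851fff05f20d74017
    imagePullPolicy: IfNotPresent
```

IfNotPresent is safe to combine with a digest reference because the digest uniquely identifies the image content, so the cached copy can never be stale.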
As a negative check, you may also delete the pull secret before restarting the node.
Deleting the pull secret:
oc delete secrets/pull-secret -n openshift-config
Verified successfully.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.10 low-latency extras update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHEA-2022:0640