Bug 2011977
| Summary: | SRO bundle references non-existent image | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | dagray |
| Component: | Special Resource Operator | Assignee: | Brett Thurber <bthurber> |
| Status: | CLOSED ERRATA | QA Contact: | liqcui |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.9 | CC: | aos-bugs |
| Target Milestone: | --- | | |
| Target Release: | 4.10.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-03-10 16:18:05 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2011985 | | |
|
Description
dagray
2021-10-07 20:20:55 UTC
Verified Result:
make deploy IMAGE="quay.io/openshift/origin-special-resource-rhel8-operator:4.9"
[mirroradmin@ec2-18-217-45-133 special-resource-operator]$ make deploy IMAGE="quay.io/openshift/origin-special-resource-rhel8-operator:4.9"
which: no controller-gen in (/home/mirroradmin/.local/bin:/home/mirroradmin/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/go/bin:.)
which: no golangci-lint in (/home/mirroradmin/.local/bin:/home/mirroradmin/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/go/bin:.)
which: no kube-linter in (/home/mirroradmin/.local/bin:/home/mirroradmin/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/go/bin:.)
cp .patches/options.patch.go vendor/github.com/google/go-containerregistry/pkg/crane/.
cp .patches/getter.patch.go vendor/helm.sh/helm/v3/pkg/getter/.
cp .patches/action.patch.go vendor/helm.sh/helm/v3/pkg/action/.
cp .patches/install.patch.go vendor/helm.sh/helm/v3/pkg/action/.
go: creating new go.mod: module tmp
go get: added sigs.k8s.io/controller-tools v0.5.0
/home/mirroradmin/go/bin/controller-gen "crd:crdVersions=v1,trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
# TODO kustomize cannot set name of namespace according to settings, hack TODO
cd config/namespace && sed -i 's/name: .*/name: openshift-special-resource-operator/g' namespace.yaml
cd config/namespace && /usr/bin/kustomize edit set namespace openshift-special-resource-operator
cd config/default && /usr/bin/kustomize edit set namespace openshift-special-resource-operator
cd config/manager && /usr/bin/kustomize edit set image controller=quay.io/openshift/origin-special-resource-rhel8-operator:4.9
cd manifests; rm -f *.yaml
cd manifests; ( /usr/bin/kustomize build ../config/namespace && echo "---" && /usr/bin/kustomize build ../config/default ) | csplit - --prefix="" --suppress-matched --suffix-format="%04d.yaml" /---/ '{*}' --silent
cd manifests; bash ../scripts/rename.sh
cd manifests; /usr/bin/kustomize build ../config/cr > 0016_specialresource_special-resource-preamble.yaml
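The csplit step in the recipe above slices the concatenated kustomize output into one numbered file per manifest, splitting on `---` document separators and dropping the separator lines. A minimal offline reproduction of that split (GNU csplit only, no cluster needed; `split_manifests` is a hypothetical wrapper name):

```shell
# Mirror the Makefile's manifest-splitting step: read a multi-document YAML
# stream on stdin and write 0000.yaml, 0001.yaml, ... into the current
# directory, one file per document, with the "---" separator lines dropped.
split_manifests() {
  csplit - --prefix="" --suppress-matched --suffix-format="%04d.yaml" /---/ '{*}' --silent
}

# Example (run in a scratch directory):
#   printf 'a: 1\n---\nb: 2\n' | split_manifests
#   ls    # 0000.yaml 0001.yaml
```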
/usr/bin/kustomize build config/namespace | kubectl apply -f -
namespace/openshift-special-resource-operator created
/usr/bin/kustomize build config/default | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/specialresources.sro.openshift.io created
role.rbac.authorization.k8s.io/special-resource-leader-election-role created
role.rbac.authorization.k8s.io/special-resource-prometheus-k8s created
clusterrole.rbac.authorization.k8s.io/special-resource-manager-role created
clusterrole.rbac.authorization.k8s.io/special-resource-metrics-reader created
clusterrole.rbac.authorization.k8s.io/special-resource-proxy-role created
rolebinding.rbac.authorization.k8s.io/special-resource-leader-election-rolebinding created
rolebinding.rbac.authorization.k8s.io/special-resource-prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/special-resource-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/special-resource-proxy-rolebinding created
configmap/special-resource-dependencies created
configmap/special-resource-lifecycle created
service/special-resource-controller-manager-metrics-service created
deployment.apps/special-resource-controller-manager created
clusteroperator.config.openshift.io/special-resource-operator created
servicemonitor.monitoring.coreos.com/special-resource-controller-manager-metrics-monitor created
/usr/bin/kustomize build config/cr | kubectl apply -f -
specialresource.sro.openshift.io/special-resource-preamble created
[mirroradmin@ec2-18-217-45-133 special-resource-operator]$ oc get pods -n openshift-special-resource-operator
NAME READY STATUS RESTARTS AGE
special-resource-controller-manager-5cd4b58c76-68957 2/2 Running 0 23s
[mirroradmin@ec2-18-217-45-133 special-resource-operator]$ oc describe pod special-resource-controller-manager-5cd4b58c76-68957 -n openshift-special-resource-operator |grep -i image
Image: registry.redhat.io/openshift4/ose-kube-rbac-proxy
Image ID: registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:00e42b2a0f0dad10c8e87e33f0d0854fe3e4d4c05532dcb3ad695956f909000c
Image: quay.io/openshift/origin-special-resource-rhel8-operator:4.9
Image ID: quay.io/openshift/origin-special-resource-rhel8-operator@sha256:65b2289f4375e02810cd220b8eccc0830665d0e2daf5e7dfdf5072757714659e
Normal Pulling 40s kubelet Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy"
Normal Pulled 38s kubelet Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy" in 1.4348371s
Normal Pulling 38s kubelet Pulling image "quay.io/openshift/origin-special-resource-rhel8-operator:4.9"
Normal Pulled 35s kubelet Successfully pulled image "quay.io/openshift/origin-special-resource-rhel8-operator:4.9" in 3.44575781s
---------------------------------------------------------------------
operator-sdk run bundle quay.io/openshift-psap-qe/special-resource-operator-bundle:4.9.0
[mirroradmin@ec2-18-217-45-133 ~]$ operator-sdk run bundle quay.io/openshift-psap-qe/special-resource-operator-bundle:4.9.0
INFO[0007] Successfully created registry pod: uay-io-openshift-psap-qe-special-resource-operator-bundle-4-9-0
INFO[0007] Created CatalogSource: openshift-special-resource-operator-catalog
INFO[0007] OperatorGroup "operator-sdk-og" created
INFO[0007] Created Subscription: openshift-special-resource-operator-v4-9-0-sub
INFO[0019] Approved InstallPlan install-lglvh for the Subscription: openshift-special-resource-operator-v4-9-0-sub
INFO[0019] Waiting for ClusterServiceVersion "default/openshift-special-resource-operator.v4.9.0" to reach 'Succeeded' phase
INFO[0019] Waiting for ClusterServiceVersion "default/openshift-special-resource-operator.v4.9.0" to appear
INFO[0036] Found ClusterServiceVersion "default/openshift-special-resource-operator.v4.9.0" phase: Pending
INFO[0038] Found ClusterServiceVersion "default/openshift-special-resource-operator.v4.9.0" phase: InstallReady
INFO[0039] Found ClusterServiceVersion "default/openshift-special-resource-operator.v4.9.0" phase: Installing
INFO[0044] Found ClusterServiceVersion "default/openshift-special-resource-operator.v4.9.0" phase: Succeeded
INFO[0044] OLM has successfully installed "openshift-special-resource-operator.v4.9.0"
[mirroradmin@ec2-18-217-45-133 ~]$ oc get pods
NAME READY STATUS RESTARTS AGE
dcba5785ed1c5156f1b8af0bda1d10c45702796b4ce072beb75b6e--1-qvmgk 0/1 Completed 0 10m
nfd-controller-manager-7d775b9976-rxpxv 2/2 Running 0 10m
special-resource-controller-manager-78c9564f6c-f5bph 2/2 Running 0 10m
uay-io-openshift-psap-qe-special-resource-operator-bundle-4-9-0 1/1 Running 0 10m
[mirroradmin@ec2-18-217-45-133 ~]$ oc describe pod special-resource-controller-manager-78c9564f6c-f5bph |grep Image
Image: registry.redhat.io/openshift4/ose-kube-rbac-proxy
Image ID: registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:00e42b2a0f0dad10c8e87e33f0d0854fe3e4d4c05532dcb3ad695956f909000c
Image: quay.io/openshift/origin-special-resource-rhel8-operator:4.9
Image ID: quay.io/openshift/origin-special-resource-rhel8-operator@sha256:65b2289f4375e02810cd220b8eccc0830665d0e2daf5e7dfdf5072757714659e
make bundle bundle-build bundle-push VERSION=4.9.0 IMAGE="quay.io/openshift/origin-special-resource-rhel8-operator:4.9"
which: no controller-gen in (/home/mirroradmin/.local/bin:/home/mirroradmin/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/go/bin:.)
which: no golangci-lint in (/home/mirroradmin/.local/bin:/home/mirroradmin/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/go/bin:.)
which: no kube-linter in (/home/mirroradmin/.local/bin:/home/mirroradmin/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/go/bin:.)
go: creating new go.mod: module tmp
go get: added sigs.k8s.io/controller-tools v0.5.0
/home/mirroradmin/go/bin/controller-gen "crd:crdVersions=v1,trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
operator-sdk generate kustomize manifests -q
cd config/manager && /usr/bin/kustomize edit set image controller=quay.io/openshift/origin-special-resource-rhel8-operator:4.9
/usr/bin/kustomize build config/manifests | operator-sdk generate bundle -q --overwrite --verbose --version 4.9.0 --channels="4.9" --default-channel="4.9"
DEBU[0000] Debug logging is set
INFO[0001] Creating bundle.Dockerfile
INFO[0001] Creating bundle/metadata/annotations.yaml
INFO[0001] Bundle metadata generated suceessfully
operator-sdk bundle validate ./bundle
INFO[0000] Found annotations file bundle-dir=bundle container-tool=docker
INFO[0000] Could not find optional dependencies file bundle-dir=bundle container-tool=docker
INFO[0000] All validation tests have completed successfully
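`operator-sdk bundle validate` checks the bundle's structure, but it does not verify that the images referenced by the CSV actually exist in their registries, which is the failure mode this bug describes. A hedged sketch of such a check (`list_csv_images` is a hypothetical helper; the skopeo step assumes skopeo is installed and has network access):

```shell
# Extract the unique image references from ClusterServiceVersion files.
# Pure text processing, so this part runs offline.
list_csv_images() {
  grep -hoE 'image:[[:space:]]*[^[:space:]"]+' "$@" | awk '{print $2}' | sort -u
}

# Probing each reference for existence could then look like:
#   list_csv_images bundle/manifests/*.clusterserviceversion.yaml |
#     while read -r img; do
#       skopeo inspect "docker://$img" >/dev/null 2>&1 || echo "MISSING: $img"
#     done
```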
podman build -f bundle.Dockerfile -t quay.io/openshift-psap-qe/special-resource-operator-bundle:4.9.0 .
STEP 1: FROM scratch
STEP 2: LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
--> Using cache c400bd1a61f9a996dae21328ad67f305f588b010c0f4426bb02c476122025a05
--> c400bd1a61f
STEP 3: LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
--> Using cache 06bbc406cb5916f8dba29bb666c00804a0eedb7b587ca545d8de1c80e3fe1866
--> 06bbc406cb5
STEP 4: LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
--> Using cache 5d04591bc06b77ac84954f3b5de9dfc6e9c435ed0c73b3b909384347aa22acbc
--> 5d04591bc06
STEP 5: LABEL operators.operatorframework.io.bundle.package.v1=openshift-special-resource-operator
--> Using cache 1aeb0151791f5de1868bd5fb38067a00dd0db8a4a3e7345ee997d69bd1321ce6
--> 1aeb0151791
STEP 6: LABEL operators.operatorframework.io.bundle.channels.v1=4.9
--> Using cache 55e937b9b5329edc2a273a0cc0c56587e0e01aafcb4bca547db404cf0c5c7b6a
--> 55e937b9b53
STEP 7: LABEL operators.operatorframework.io.bundle.channel.default.v1=4.9
--> Using cache 4eaece1f1524ed7ed2730a9b82d883a27ded245450a4f27b3a7babd8ebd4dd80
--> 4eaece1f152
STEP 8: LABEL operators.operatorframework.io.metrics.builder=operator-sdk-v1.8.0-ocp
--> Using cache 93b701ee37e2448cda112fde76199bd9e87cbc9ecb20342cebc663424ebf6789
--> 93b701ee37e
STEP 9: LABEL operators.operatorframework.io.metrics.mediatype.v1=metrics+v1
--> Using cache a8031ad6181392cf88056f7e6a69aafaea1a00624bfe84a937d17bfedf3befb8
--> a8031ad6181
STEP 10: LABEL operators.operatorframework.io.metrics.project_layout=go.kubebuilder.io/v2
--> Using cache e74932847f61d2478e197e49051b73d77aa651df5e6a71e0c71ee9ed85a7550a
--> e74932847f6
STEP 11: LABEL operators.operatorframework.io.test.mediatype.v1=scorecard+v1
--> Using cache e74939111060fd3e53a0fe9fa07ecacd200c8e436cdd036126f8adeb7f5a6ba9
--> e7493911106
STEP 12: LABEL operators.operatorframework.io.test.config.v1=tests/scorecard/
--> Using cache 275c25967bd809c37ccde39ff2fbb17f1897efca4899a584069d25c37062074e
--> 275c25967bd
STEP 13: COPY bundle/manifests /manifests/
--> 0c22e737119
STEP 14: COPY bundle/metadata /metadata/
--> 89e02d97a86
STEP 15: COPY bundle/tests/scorecard /tests/scorecard/
STEP 16: COMMIT quay.io/openshift-psap-qe/special-resource-operator-bundle:4.9.0
--> b0e98e3e3e0
b0e98e3e3e00d8aa88500256895be0134f9e217a98527b26430b7699301ea52c
[mirroradmin@ec2-18-217-45-133 ~]$ oc get pods
NAME READY STATUS RESTARTS AGE
dcba5785ed1c5156f1b8af0bda1d10c45702796b4ce072beb75b6e--1-qvmgk 0/1 Completed 0 10m
nfd-controller-manager-7d775b9976-rxpxv 2/2 Running 0 10m
special-resource-controller-manager-78c9564f6c-f5bph 2/2 Running 0 10m
uay-io-openshift-psap-qe-special-resource-operator-bundle-4-9-0 1/1 Running 0 10m
[mirroradmin@ec2-18-217-45-133 ~]$ oc describe pod special-resource-controller-manager-78c9564f6c-f5bph |grep Image
Image: registry.redhat.io/openshift4/ose-kube-rbac-proxy
Image ID: registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:00e42b2a0f0dad10c8e87e33f0d0854fe3e4d4c05532dcb3ad695956f909000c
Image: quay.io/openshift/origin-special-resource-rhel8-operator:4.9
Image ID: quay.io/openshift/origin-special-resource-rhel8-operator@sha256:65b2289f4375e02810cd220b8eccc0830665d0e2daf5e7dfdf5072757714659e
Expected Result:
To be consistent with other optional operators, we should be using the origin image quay.io/openshift/origin-special-resource-rhel8-operator:4.9.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056