Bug 1895367

Summary: Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1
Product: OpenShift Container Platform
Reporter: Andy Bartlett <andbartl>
Component: OLM
OLM sub component: OLM
Assignee: Ben Luddy <bluddy>
QA Contact: kuiwang
Status: CLOSED ERRATA
Severity: medium
Priority: medium
CC: bluddy, krizza, ocasalsa
Version: 4.6
Keywords: UpcomingSprint
Target Release: 4.7.0
Hardware: x86_64
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2021-02-24 15:31:26 UTC
Bug Blocks: 1904547

Description Andy Bartlett 2020-11-06 13:48:14 UTC
Description of problem:
My customer has been trying to install CLO (cluster-logging) in a disconnected cluster environment.

Version-Release number of selected component (if applicable):
Openshift 4.6.1

How reproducible:
100%

Steps to Reproduce:

export custom_mapping="cluster-logging-latest-mapping.txt"
export default_mapping="redhat-operator-index-manifests/mapping.txt"
export redhat_index="registry.redhat.io/redhat/redhat-operator-index:v4.6"
export registry_base="registry.example.com"
export catalog_path="$registry_base/redhat-operator-index:v4.6"
export policy="redhat-operator-index-manifests/imageContentSourcePolicy.yaml"
 
# Build & push custom catalog index
# Expects podman version >= 1.8

$ opm index prune -f $redhat_index -p cluster-logging -t $catalog_path && podman push $catalog_path

# Disable default catalog sources for current cluster

$ oc patch OperatorHub cluster --type json -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'

# Create and apply custom catalogsources for current cluster to point to local catalog
$ cat <<EOF | oc apply -f -
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: $catalog_path
  displayName: My Operator Catalog
  publisher: cps
  updateStrategy:
    registryPoll:
      interval: 30m
EOF
 
# Create new catalog DB and generate the manifests file ONLY from the local catalog
$ oc adm catalog mirror $catalog_path $registry_base -a pullsecret.json --manifests-only

# The catalog mirror command above extracted index.db into the newest directory under /tmp
$ export db_path="/tmp/$(ls -trh1 /tmp/ | tail -1)/index.db"
 
# Apply mirroring policy for current cluster generated by above catalog mirror command
$ oc apply -f $policy
 
# Get relevant images from DB and grep these from the manifests file
# THE IMAGE ose-cluster-logging-operator-bundle@sha256:f10296c32c23be6684e96a19f76153c1a1d25d18b70c1caf6d2224b5aedd64b6 IS MISSING

for sha in $(echo "select * from related_image where operatorbundle_name like 'clusterlogging%';" | sqlite3 -line $db_path | grep sha256 | cut -d ":" -f2); do grep $sha $default_mapping >> $custom_mapping; done
 
# Not all images are present in the DB, ose-cluster-logging-operator-bundle@sha256:f10296c32c23be6684e96a19f76153c1a1d25d18b70c1caf6d2224b5aedd64b6 IS MISSING
# so IGNORE THE ABOVE step and get all images containing "logging"
grep logging $default_mapping > $custom_mapping
# Double check custom mapping file
echo "Mirroring images using following custom mapping: "
cat $custom_mapping
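Since the whole problem is a digest silently absent from the mapping, a small sanity check before mirroring can catch it early. This is a hypothetical helper (`check_mapping` is a made-up name, not part of the original steps); the digest shown is the bundle digest reported missing above.

```shell
#!/usr/bin/env bash
# Hypothetical sanity check: before mirroring, confirm a given digest
# actually made it into the custom mapping file.
bundle_sha="f10296c32c23be6684e96a19f76153c1a1d25d18b70c1caf6d2224b5aedd64b6"

check_mapping() {  # $1 = mapping file, $2 = digest to look for
  if grep -q "$2" "$1"; then
    echo "bundle digest present"
  else
    echo "bundle digest MISSING" >&2
    return 1
  fi
}
```

Usage: `check_mapping "$custom_mapping" "$bundle_sha"` — a non-zero exit means the bundle image would not be mirrored.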
 
# skopeo WORKAROUND for image push still needed for 4.6.1 since "oc image mirror" still uploads wrong SHAs
sudo yum install skopeo -y
while IFS='=' read -r src dst; do
  echo "MIRRORING: $src -> ${dst%%:*}"
  skopeo copy --all --authfile pullsecret.json "docker://$src" "docker://${dst%%:*}"
done < "$custom_mapping"

Actual results:

CLO fails to install because it cannot find ose-cluster-logging-operator-bundle


Expected results:

CLO installs successfully

Additional info:

The workaround I used to get this working:
Start the process as normal (https://docs.openshift.com/container-platform/4.6/operators/admin/olm-restricted-networks.html)

Ensure that the file $custom_mapping has been deleted, then:

$ for sha in $(echo "select bundlepath from operatorbundle where name like '%clusterlogging%';" | sqlite3 -line $db_path | grep sha256 | cut -d ":" -f2); do grep $sha olm/$default_mapping >> $custom_mapping; done

$ for sha in $(echo "select * from related_image where operatorbundle_name like 'clusterlogging%';" | sqlite3 -line $db_path | grep sha256 | cut -d ":" -f2); do grep $sha olm/$default_mapping >> $custom_mapping; done

$ oc image mirror -a pull-secret-json -f $custom_mapping --continue-on-error=true

Then continue as normal. When you run your skopeo script, make sure it includes the following (skopeo is used because of the oc image mirror problem, BZ 1832968):

skopeo copy --all docker://registry.redhat.io/openshift4/ose-logging-curator5@sha256:904e21f0eb89c3e6b7aaab78dd6a052f215f96b1d62d70e6bec2516b04651e0c docker://registry.example.com/olm-openshift4-ose-logging-curator5

skopeo copy --all docker://registry.redhat.io/openshift4/ose-logging-fluentd@sha256:f262f333d80b3ca40cf0fad7a4b33f8d9748724167e462c7540421e2c46e72bd docker://registry.example.com/olm-openshift4-ose-logging-fluentd

skopeo copy --all docker://registry.redhat.io/openshift4/ose-cluster-logging-operator@sha256:c4300dd4a4d069203fb3d00822778ebc66401b88fa22523be967e717505a4020 docker://registry.example.com/olm-openshift4-ose-cluster-logging-operator
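The two sqlite queries in the workaround above can be folded into one helper. This is a sketch (`collect_shas` is a hypothetical name); it assumes the `operatorbundle`/`related_image` schema shown elsewhere in this report and that the sqlite3 CLI is installed.

```shell
#!/usr/bin/env bash
# Hypothetical helper: gather every sha256 digest referenced by a package,
# from both the bundle paths (operatorbundle.bundlepath) and the
# related_image table, so one grep pass over the default mapping also
# catches the bundle image itself.
collect_shas() {  # $1 = path to index.db, $2 = package name pattern
  sqlite3 "$1" \
    "SELECT bundlepath FROM operatorbundle WHERE name LIKE '%$2%'
     UNION
     SELECT image FROM related_image WHERE operatorbundle_name LIKE '%$2%';" \
    | grep -o 'sha256:[0-9a-f]*' | cut -d: -f2 | sort -u
}
```

Usage: `for sha in $(collect_shas "$db_path" clusterlogging); do grep "$sha" "$default_mapping"; done >> "$custom_mapping"`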

Comment 2 Ben Luddy 2020-12-01 21:35:40 UTC
The bundle image itself should be present in the related_image table. I've just written a patch to address this and made a note that it should be backported to 4.6.z.
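For anyone verifying an index.db against this fix, the check can be expressed as one query. This is a sketch (`missing_bundle_images` is a made-up name); it assumes the `operatorbundle`/`related_image` schema used by the queries in this bug and an installed sqlite3 CLI.

```shell
#!/usr/bin/env bash
# Hypothetical check: list bundles whose own image (bundlepath) is NOT
# recorded in the related_image table, i.e. bundle images a mirror driven
# by related_image would miss.
missing_bundle_images() {  # $1 = path to index.db
  sqlite3 "$1" \
    "SELECT ob.name, ob.bundlepath FROM operatorbundle ob
      WHERE NOT EXISTS (SELECT 1 FROM related_image ri
                         WHERE ri.image = ob.bundlepath);"
}
```

Usage: `missing_bundle_images "$db_path"` — an empty result means every bundle image is present.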

Comment 4 kuiwang 2020-12-04 06:46:33 UTC
Verified on 4.7. It fails because registry.redhat.io/openshift4/ose-cluster-logging-operator-bundle@sha256:61aca61840dcf1d50f4a17fc9b2e10b7855c563bd6680b4dc77e3e9283c81369 is still MISSING.

Could you please check it? Thanks.

--
[root@preserve-olm-env operator-registry]# git log -n 1
commit 8ec2adab5781598a4f2094dbec6e5947d288e428
Merge: 9e92474 5d7ef44
Author: OpenShift Merge Robot <openshift-merge-robot.github.com>
Date:   Thu Dec 3 16:28:26 2020 -0500

    Merge pull request #530 from benluddy/disable-broken-builds
    
    Disable broken ppc64le and s390x release builds.
[root@preserve-olm-env operator-registry]# make clean;make build
GOFLAGS="-mod=vendor" go build  -tags "json1" -o bin/appregistry-server ./cmd/appregistry-server
GOFLAGS="-mod=vendor" go build  -tags "json1" -o bin/configmap-server ./cmd/configmap-server
GOFLAGS="-mod=vendor" go build  -tags "json1" -o bin/initializer ./cmd/initializer
GOFLAGS="-mod=vendor" go build  -tags "json1" -o bin/registry-server ./cmd/registry-server
GOFLAGS="-mod=vendor" go build -ldflags "-X 'github.com/operator-framework/operator-registry/cmd/opm/version.gitCommit=8ec2ada' -X 'github.com/operator-framework/operator-registry/cmd/opm/version.opmVersion=v1.15.2-10-g8ec2ada' -X 'github.com/operator-framework/operator-registry/cmd/opm/version.buildDate=2020-12-04T05:53:22Z'"  -tags "json1" -o bin/opm ./cmd/opm
[root@preserve-olm-env operator-registry]# 
[root@preserve-olm-env operator-registry]# export custom_mapping="cluster-logging-latest-mapping.txt"
[root@preserve-olm-env operator-registry]# export default_mapping="redhat-operator-index-manifests/mapping.txt"
[root@preserve-olm-env operator-registry]# export redhat_index="registry.redhat.io/redhat/redhat-operator-index:v4.6"
[root@preserve-olm-env operator-registry]# export registry_base="quay.io/kuiwang"
[root@preserve-olm-env operator-registry]# export catalog_path="$registry_base/redhat-operator-index:v4.6"
[root@preserve-olm-env operator-registry]# export policy="redhat-operator-index-manifests/imageContentSourcePolicy.yaml"
[root@preserve-olm-env operator-registry]# cd 1895367
[root@preserve-olm-env 1895367]# docker pull quay.io/operator-framework/upstream-opm-builder:latest
latest: Pulling from operator-framework/upstream-opm-builder
188c0c94c7c5: Already exists 
0b204e20a8c1: Already exists 
e83dfb2924e7: Already exists 
e3628f8eb74c: Pull complete 
acdafdf2d92f: Pull complete 
Digest: sha256:49bd6c3700d1f7a5464ad197da694cb9e81fd46db50f4f798da654bae79ba6c6
Status: Downloaded newer image for quay.io/operator-framework/upstream-opm-builder:latest
quay.io/operator-framework/upstream-opm-builder:latest
[root@preserve-olm-env 1895367]# podman pull quay.io/operator-framework/upstream-opm-builder:latest
Trying to pull quay.io/operator-framework/upstream-opm-builder:latest...
Getting image source signatures
Copying blob 188c0c94c7c5 skipped: already exists  
Copying blob e3628f8eb74c done  
Copying blob e83dfb2924e7 done  
Copying blob 0b204e20a8c1 done  
Copying blob acdafdf2d92f done  
Copying config b0efbf75e2 done  
Writing manifest to image destination
Storing signatures
b0efbf75e2a099663f1c57a039ef36d26ebc45bde9cfea62c0db6b35ca92c466
[root@preserve-olm-env 1895367]# docker pull quay.io/operator-framework/upstream-registry-builder:latest
latest: Pulling from operator-framework/upstream-registry-builder
188c0c94c7c5: Already exists 
538011b74a72: Already exists 
f165f470e2ae: Already exists 
0d44b6e19c67: Pull complete 
Digest: sha256:4f2296b266ca362ff7752a10b486a58ef4c7ad35c22060c06adc3f7b4073bcab
Status: Downloaded newer image for quay.io/operator-framework/upstream-registry-builder:latest
quay.io/operator-framework/upstream-registry-builder:latest
[root@preserve-olm-env 1895367]# podman pull quay.io/operator-framework/upstream-registry-builder:latest
Trying to pull quay.io/operator-framework/upstream-registry-builder:latest...
Getting image source signatures
Copying blob 188c0c94c7c5 skipped: already exists  
Copying blob 538011b74a72 skipped: already exists  
Copying blob f165f470e2ae skipped: already exists  
Copying blob 0d44b6e19c67 done  
Copying config 2f3f94fb88 done  
Writing manifest to image destination
Storing signatures
2f3f94fb882afba110e1971b1dae1aa90458eabda85652b4a98fcc6b849c365b
[root@preserve-olm-env 1895367]# 
[root@preserve-olm-env 1895367]# ../bin/opm  index prune -f $redhat_index -p cluster-logging -t $catalog_path
INFO[0000] pruning the index                             packages="[cluster-logging]"
INFO[0000] Pulling previous image registry.redhat.io/redhat/redhat-operator-index:v4.6 to get metadata  packages="[cluster-logging]"
INFO[0000] running /usr/bin/podman pull registry.redhat.io/redhat/redhat-operator-index:v4.6  packages="[cluster-logging]"
INFO[0003] running /usr/bin/podman pull registry.redhat.io/redhat/redhat-operator-index:v4.6  packages="[cluster-logging]"
INFO[0005] Getting label data from previous image        packages="[cluster-logging]"
INFO[0005] running podman inspect                        packages="[cluster-logging]"
INFO[0005] running podman create                         packages="[cluster-logging]"
INFO[0006] running podman cp                             packages="[cluster-logging]"
INFO[0010] running podman rm                             packages="[cluster-logging]"
INFO[0011] deleting packages                             pkg=3scale-operator
INFO[0011] input has been sanitized                      pkg=3scale-operator
INFO[0011] packages: [3scale-operator]                   pkg=3scale-operator
INFO[0011] deleting packages                             pkg=advanced-cluster-management
INFO[0011] input has been sanitized                      pkg=advanced-cluster-management
INFO[0011] packages: [advanced-cluster-management]       pkg=advanced-cluster-management
INFO[0011] deleting packages                             pkg=amq-broker
INFO[0011] input has been sanitized                      pkg=amq-broker
INFO[0011] packages: [amq-broker]                        pkg=amq-broker
INFO[0011] deleting packages                             pkg=amq-broker-lts
INFO[0011] input has been sanitized                      pkg=amq-broker-lts
INFO[0011] packages: [amq-broker-lts]                    pkg=amq-broker-lts
INFO[0011] deleting packages                             pkg=amq-online
INFO[0011] input has been sanitized                      pkg=amq-online
INFO[0011] packages: [amq-online]                        pkg=amq-online
INFO[0012] deleting packages                             pkg=amq-streams
INFO[0012] input has been sanitized                      pkg=amq-streams
INFO[0012] packages: [amq-streams]                       pkg=amq-streams
INFO[0012] deleting packages                             pkg=amq7-interconnect-operator
INFO[0012] input has been sanitized                      pkg=amq7-interconnect-operator
INFO[0012] packages: [amq7-interconnect-operator]        pkg=amq7-interconnect-operator
INFO[0012] deleting packages                             pkg=apicast-operator
INFO[0012] input has been sanitized                      pkg=apicast-operator
INFO[0012] packages: [apicast-operator]                  pkg=apicast-operator
INFO[0012] deleting packages                             pkg=awx-resource-operator
INFO[0012] input has been sanitized                      pkg=awx-resource-operator
INFO[0012] packages: [awx-resource-operator]             pkg=awx-resource-operator
INFO[0012] deleting packages                             pkg=businessautomation-operator
INFO[0012] input has been sanitized                      pkg=businessautomation-operator
INFO[0012] packages: [businessautomation-operator]       pkg=businessautomation-operator
INFO[0012] deleting packages                             pkg=cluster-kube-descheduler-operator
INFO[0012] input has been sanitized                      pkg=cluster-kube-descheduler-operator
INFO[0012] packages: [cluster-kube-descheduler-operator]  pkg=cluster-kube-descheduler-operator
INFO[0012] deleting packages                             pkg=clusterresourceoverride
INFO[0012] input has been sanitized                      pkg=clusterresourceoverride
INFO[0012] packages: [clusterresourceoverride]           pkg=clusterresourceoverride
INFO[0012] deleting packages                             pkg=codeready-workspaces
INFO[0012] input has been sanitized                      pkg=codeready-workspaces
INFO[0012] packages: [codeready-workspaces]              pkg=codeready-workspaces
INFO[0012] deleting packages                             pkg=compliance-operator
INFO[0012] input has been sanitized                      pkg=compliance-operator
INFO[0012] packages: [compliance-operator]               pkg=compliance-operator
INFO[0012] deleting packages                             pkg=container-security-operator
INFO[0012] input has been sanitized                      pkg=container-security-operator
INFO[0012] packages: [container-security-operator]       pkg=container-security-operator
INFO[0012] deleting packages                             pkg=datagrid
INFO[0012] input has been sanitized                      pkg=datagrid
INFO[0012] packages: [datagrid]                          pkg=datagrid
INFO[0012] deleting packages                             pkg=eap
INFO[0012] input has been sanitized                      pkg=eap
INFO[0012] packages: [eap]                               pkg=eap
INFO[0012] deleting packages                             pkg=elasticsearch-operator
INFO[0012] input has been sanitized                      pkg=elasticsearch-operator
INFO[0012] packages: [elasticsearch-operator]            pkg=elasticsearch-operator
INFO[0012] deleting packages                             pkg=file-integrity-operator
INFO[0012] input has been sanitized                      pkg=file-integrity-operator
INFO[0012] packages: [file-integrity-operator]           pkg=file-integrity-operator
INFO[0012] deleting packages                             pkg=fuse-apicurito
INFO[0012] input has been sanitized                      pkg=fuse-apicurito
INFO[0012] packages: [fuse-apicurito]                    pkg=fuse-apicurito
INFO[0012] deleting packages                             pkg=fuse-console
INFO[0012] input has been sanitized                      pkg=fuse-console
INFO[0012] packages: [fuse-console]                      pkg=fuse-console
INFO[0012] deleting packages                             pkg=fuse-online
INFO[0012] input has been sanitized                      pkg=fuse-online
INFO[0012] packages: [fuse-online]                       pkg=fuse-online
INFO[0012] deleting packages                             pkg=jaeger-product
INFO[0012] input has been sanitized                      pkg=jaeger-product
INFO[0012] packages: [jaeger-product]                    pkg=jaeger-product
INFO[0012] deleting packages                             pkg=kiali-ossm
INFO[0012] input has been sanitized                      pkg=kiali-ossm
INFO[0012] packages: [kiali-ossm]                        pkg=kiali-ossm
INFO[0012] deleting packages                             pkg=kubevirt-hyperconverged
INFO[0012] input has been sanitized                      pkg=kubevirt-hyperconverged
INFO[0012] packages: [kubevirt-hyperconverged]           pkg=kubevirt-hyperconverged
INFO[0012] deleting packages                             pkg=local-storage-operator
INFO[0012] input has been sanitized                      pkg=local-storage-operator
INFO[0012] packages: [local-storage-operator]            pkg=local-storage-operator
INFO[0012] deleting packages                             pkg=metering-ocp
INFO[0012] input has been sanitized                      pkg=metering-ocp
INFO[0012] packages: [metering-ocp]                      pkg=metering-ocp
INFO[0012] deleting packages                             pkg=mtc-operator
INFO[0012] input has been sanitized                      pkg=mtc-operator
INFO[0012] packages: [mtc-operator]                      pkg=mtc-operator
INFO[0013] deleting packages                             pkg=nfd
INFO[0013] input has been sanitized                      pkg=nfd
INFO[0013] packages: [nfd]                               pkg=nfd
INFO[0013] deleting packages                             pkg=ocs-operator
INFO[0013] input has been sanitized                      pkg=ocs-operator
INFO[0013] packages: [ocs-operator]                      pkg=ocs-operator
INFO[0013] deleting packages                             pkg=openshift-pipelines-operator-rh
INFO[0013] input has been sanitized                      pkg=openshift-pipelines-operator-rh
INFO[0013] packages: [openshift-pipelines-operator-rh]   pkg=openshift-pipelines-operator-rh
INFO[0013] deleting packages                             pkg=performance-addon-operator
INFO[0013] input has been sanitized                      pkg=performance-addon-operator
INFO[0013] packages: [performance-addon-operator]        pkg=performance-addon-operator
INFO[0013] deleting packages                             pkg=ptp-operator
INFO[0013] input has been sanitized                      pkg=ptp-operator
INFO[0013] packages: [ptp-operator]                      pkg=ptp-operator
INFO[0013] deleting packages                             pkg=quay-bridge-operator
INFO[0013] input has been sanitized                      pkg=quay-bridge-operator
INFO[0013] packages: [quay-bridge-operator]              pkg=quay-bridge-operator
INFO[0013] deleting packages                             pkg=quay-operator
INFO[0013] input has been sanitized                      pkg=quay-operator
INFO[0013] packages: [quay-operator]                     pkg=quay-operator
INFO[0013] deleting packages                             pkg=red-hat-camel-k
INFO[0013] input has been sanitized                      pkg=red-hat-camel-k
INFO[0013] packages: [red-hat-camel-k]                   pkg=red-hat-camel-k
INFO[0013] deleting packages                             pkg=rh-service-binding-operator
INFO[0013] input has been sanitized                      pkg=rh-service-binding-operator
INFO[0013] packages: [rh-service-binding-operator]       pkg=rh-service-binding-operator
INFO[0013] deleting packages                             pkg=rhsso-operator
INFO[0013] input has been sanitized                      pkg=rhsso-operator
INFO[0013] packages: [rhsso-operator]                    pkg=rhsso-operator
INFO[0013] deleting packages                             pkg=serverless-operator
INFO[0013] input has been sanitized                      pkg=serverless-operator
INFO[0013] packages: [serverless-operator]               pkg=serverless-operator
INFO[0013] deleting packages                             pkg=service-registry-operator
INFO[0013] input has been sanitized                      pkg=service-registry-operator
INFO[0013] packages: [service-registry-operator]         pkg=service-registry-operator
INFO[0013] deleting packages                             pkg=servicemeshoperator
INFO[0013] input has been sanitized                      pkg=servicemeshoperator
INFO[0013] packages: [servicemeshoperator]               pkg=servicemeshoperator
INFO[0013] deleting packages                             pkg=sriov-network-operator
INFO[0013] input has been sanitized                      pkg=sriov-network-operator
INFO[0013] packages: [sriov-network-operator]            pkg=sriov-network-operator
INFO[0013] deleting packages                             pkg=vertical-pod-autoscaler
INFO[0013] input has been sanitized                      pkg=vertical-pod-autoscaler
INFO[0013] packages: [vertical-pod-autoscaler]           pkg=vertical-pod-autoscaler
INFO[0013] deleting packages                             pkg=web-terminal
INFO[0013] input has been sanitized                      pkg=web-terminal
INFO[0013] packages: [web-terminal]                      pkg=web-terminal
INFO[0013] Generating dockerfile                         packages="[cluster-logging]"
INFO[0013] writing dockerfile: index.Dockerfile923781692  packages="[cluster-logging]"
INFO[0013] running podman build                          packages="[cluster-logging]"
INFO[0013] [podman build --format docker -f index.Dockerfile923781692 -t quay.io/kuiwang/redhat-operator-index:v4.6 .]  packages="[cluster-logging]"

[root@preserve-olm-env 1895367]# podman push $catalog_path
Getting image source signatures
Copying blob cb3faef4edb6 done  
Copying blob 53b2ccb878f2 skipped: already exists  
Copying blob 371d4702865c skipped: already exists  
Copying blob 772b412a6a9d skipped: already exists  
Copying blob 4570b0d18853 skipped: already exists  
Copying blob ace0eda3e3be skipped: already exists  
Copying config 76ffa31a08 done  
Writing manifest to image destination
Storing signatures
[root@preserve-olm-env 1895367]# 
[root@preserve-olm-env 1895367]# oc adm catalog mirror $catalog_path $registry_base  --manifests-only
src image has index label for database path: /database/index.db
using database path mapping: /database/index.db:/tmp/850764862
wrote database to /tmp/850764862
using database at: /tmp/850764862/index.db
no digest mapping available for quay.io/kuiwang/redhat-operator-index:v4.6, skip writing to ImageContentSourcePolicy
wrote mirroring manifests to manifests-redhat-operator-index-1607062454
[root@preserve-olm-env 1895367]# export db_path="/tmp/$(ls -trh1 /tmp/ | tail -1)/index.db"
[root@preserve-olm-env 1895367]# echo $db_path
/tmp/850764862/index.db
[root@preserve-olm-env 1895367]# ln -s manifests-redhat-operator-index-1607062454 redhat-operator-index-manifests
[root@preserve-olm-env 1895367]# for sha in $(echo "select * from related_image where operatorbundle_name like 'clusterlogging%';" | sqlite3 -line $db_path | grep sha256 | cut -d ":" -f2); do grep $sha $default_mapping >> $custom_mapping; done
[root@preserve-olm-env 1895367]# cat $custom_mapping
registry.redhat.io/openshift4/ose-logging-curator5@sha256:73884604ac4506bcfb2a3c112eb621f40e0cd53fede8118e0f7a1b292ac8f924=quay.io/kuiwang/openshift4-ose-logging-curator5:f6789123
registry.redhat.io/openshift4/ose-logging-fluentd@sha256:11ccb42f3d96b065f7d94879611a7aefabbe509b522c11ac36be7a1c959a34d6=quay.io/kuiwang/openshift4-ose-logging-fluentd:1f0b20f7
registry.redhat.io/openshift4/ose-cluster-logging-operator@sha256:540b0d087c5e5529bab555030310478630249a9339a4a4c3fac6d0d7037d5eac=quay.io/kuiwang/openshift4-ose-cluster-logging-operator:5f9e513e
[root@preserve-olm-env 1895367]# cat $default_mapping
registry.redhat.io/openshift4/performance-addon-operator-bundle-registry-container-rhel8@sha256:ec1cceb8325d2c0d24141a1acd819edd5ebb68bbe5edafafd801f9481b13b87f=quay.io/kuiwang/openshift4-performance-addon-operator-bundle-registry-container-rhel8:8e15ece1
registry.redhat.io/openshift4/ose-logging-curator5@sha256:73884604ac4506bcfb2a3c112eb621f40e0cd53fede8118e0f7a1b292ac8f924=quay.io/kuiwang/openshift4-ose-logging-curator5:f6789123
registry.redhat.io/openshift4/ose-logging-fluentd@sha256:11ccb42f3d96b065f7d94879611a7aefabbe509b522c11ac36be7a1c959a34d6=quay.io/kuiwang/openshift4-ose-logging-fluentd:1f0b20f7
quay.io/kuiwang/redhat-operator-index:v4.6=quay.io/kuiwang/kuiwang-redhat-operator-index:v4.6
registry.redhat.io/openshift4/ose-cluster-logging-operator@sha256:540b0d087c5e5529bab555030310478630249a9339a4a4c3fac6d0d7037d5eac=quay.io/kuiwang/openshift4-ose-cluster-logging-operator:5f9e513e
registry.redhat.io/openshift4/performance-addon-rhel8-operator@sha256:5bd10e162f1af1b66064ed72468695fb962baa4d7ce3f5a61c62dd14c71a2e76=quay.io/kuiwang/openshift4-performance-addon-rhel8-operator:32ce28da
registry.redhat.io/openshift4/ose-cluster-logging-operator-bundle@sha256:61aca61840dcf1d50f4a17fc9b2e10b7855c563bd6680b4dc77e3e9283c81369=quay.io/kuiwang/openshift4-ose-cluster-logging-operator-bundle:8f6d6578
[root@preserve-olm-env 1895367]# 

[root@preserve-olm-env 1895367]# grep logging $default_mapping
registry.redhat.io/openshift4/ose-logging-curator5@sha256:73884604ac4506bcfb2a3c112eb621f40e0cd53fede8118e0f7a1b292ac8f924=quay.io/kuiwang/openshift4-ose-logging-curator5:f6789123
registry.redhat.io/openshift4/ose-logging-fluentd@sha256:11ccb42f3d96b065f7d94879611a7aefabbe509b522c11ac36be7a1c959a34d6=quay.io/kuiwang/openshift4-ose-logging-fluentd:1f0b20f7
registry.redhat.io/openshift4/ose-cluster-logging-operator@sha256:540b0d087c5e5529bab555030310478630249a9339a4a4c3fac6d0d7037d5eac=quay.io/kuiwang/openshift4-ose-cluster-logging-operator:5f9e513e
registry.redhat.io/openshift4/ose-cluster-logging-operator-bundle@sha256:61aca61840dcf1d50f4a17fc9b2e10b7855c563bd6680b4dc77e3e9283c81369=quay.io/kuiwang/openshift4-ose-cluster-logging-operator-bundle:8f6d6578
[root@preserve-olm-env 1895367]# grep logging $custom_mapping
registry.redhat.io/openshift4/ose-logging-curator5@sha256:73884604ac4506bcfb2a3c112eb621f40e0cd53fede8118e0f7a1b292ac8f924=quay.io/kuiwang/openshift4-ose-logging-curator5:f6789123
registry.redhat.io/openshift4/ose-logging-fluentd@sha256:11ccb42f3d96b065f7d94879611a7aefabbe509b522c11ac36be7a1c959a34d6=quay.io/kuiwang/openshift4-ose-logging-fluentd:1f0b20f7
registry.redhat.io/openshift4/ose-cluster-logging-operator@sha256:540b0d087c5e5529bab555030310478630249a9339a4a4c3fac6d0d7037d5eac=quay.io/kuiwang/openshift4-ose-cluster-logging-operator:5f9e513e
[root@preserve-olm-env 1895367]# 


--

Comment 5 Ben Luddy 2020-12-04 07:11:14 UTC
This patch affects index image builds, so you should not see a difference in registry.redhat.io/redhat/redhat-operator-index:v4.6 unless the pipelines that built the 4.6 index images were using the latest opm from 4.7.

It will be easier to test this by building a new index with a build of opm that contains the patch, then checking that the bundle image appears in the related_image table:

---

$ opm index add --mode=semver -b quay.io/openshift-community-operators/eclipse-che:v7.3.1 -t testindex
...
$ podman create testindex
cb22835fc2d9c3b057908a66d1a59a5fe5b900f3e16fa20d7949f2d910de5f60
$ podman cp cb22:/database/index.db .
...
$ sqlite3 index.db 'select * from related_image;'
quay.io/openshift-community-operators/eclipse-che:v7.3.1|eclipse-che.v7.3.1
quay.io/eclipse/che-operator:7.3.1|eclipse-che.v7.3.1

---

If you follow the same steps with a build of opm that doesn't have the patch, the last step will not output the bundle image:

$ sqlite3 index.db 'select * from related_image;'
quay.io/eclipse/che-operator:7.3.1|eclipse-che.v7.3.1

Please let me know if you have any other questions.

Comment 6 kuiwang 2020-12-04 08:49:04 UTC
Verified on 4.7. LGTM

Note: the official index image needs to be rebuilt with an opm that includes the fix.

--
### reuse the manifests built while verifying bug 1898500
[root@preserve-olm-env operator-registry]# tree manifests/teiid-1898500
manifests/teiid-1898500
|-- 0.0.1
|   |-- teiid.0.0.1.clusterserviceversion.yaml
|   `-- virtualdatabases.teiid.io.crd.yaml
|-- 0.1.0
|   |-- teiid.0.1.0.clusterserviceversion.yaml
|   `-- virtualdatabase.crd.yaml
|-- 0.1.1
|   |-- teiid.0.1.1.clusterserviceversion.yaml
|   `-- virtualdatabase.crd.yaml
|-- 0.2.0
|   |-- teiid.io_virtualdatabases_crd.yaml
|   `-- teiid.v0.2.0.clusterserviceversion.yaml
|-- 0.3.0
|   |-- teiid.io_virtualdatabases_crd.yaml
|   |-- teiid_service.yaml
|   `-- teiid.v0.3.0.clusterserviceversion.yaml
|-- 0.4.0
|   |-- teiid.io_virtualdatabases_crd.yaml
|   |-- teiid_service.yaml
|   `-- teiid.v0.4.0.clusterserviceversion.yaml
|-- ci.yaml
`-- teiid.package.yaml

6 directories, 16 files
[root@preserve-olm-env operator-registry]# cat manifests/teiid-1898500/0.3.0/teiid_service.yaml 
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: teiid-operator
  name: teiid-service
spec:
  ports:
  - name: https
    port: 8443
    targetPort: https
  selector:
    app: teiid-operator
status:
  loadBalancer: {}
[root@preserve-olm-env operator-registry]# cat manifests/teiid-1898500/0.4.0/teiid_service.yaml 
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: teiid-operator
  name: teiid-service
spec:
  ports:
  - name: https
    port: 8443
    targetPort: https
  selector:
    app: teiid-operator
status:
  loadBalancer: {}
[root@preserve-olm-env operator-registry]# 


[root@preserve-olm-env operator-registry]# ./bin/opm alpha bundle build --directory /root/kuiwang/operator-registry/manifests/teiid-1898500/0.3.0 --tag quay.io/kuiwang/teiid-operator:v1898500-3 -p teiid -c alpha -e alpha
INFO[0000] Building annotations.yaml                    
INFO[0000] Writing annotations.yaml in /root/kuiwang/operator-registry/manifests/teiid-1898500/metadata 
INFO[0000] Building Dockerfile                          
INFO[0000] Writing bundle.Dockerfile in /root/kuiwang/operator-registry 
INFO[0000] Building bundle image                        
Sending build context to Docker daemon  117.6MB
Step 1/9 : FROM scratch
 ---> 
Step 2/9 : LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
 ---> Using cache
 ---> 17f4d6cc02f6
Step 3/9 : LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
 ---> Using cache
 ---> ed5b62e609a0
Step 4/9 : LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
 ---> Using cache
 ---> 958a7490fbd5
Step 5/9 : LABEL operators.operatorframework.io.bundle.package.v1=teiid
 ---> Using cache
 ---> 660a23efdfcf
Step 6/9 : LABEL operators.operatorframework.io.bundle.channels.v1=alpha
 ---> Running in 52324d71259a
Removing intermediate container 52324d71259a
 ---> 89d62d90f67d
Step 7/9 : LABEL operators.operatorframework.io.bundle.channel.default.v1=alpha
 ---> Running in 6eb437988134
Removing intermediate container 6eb437988134
 ---> 10ffde972c83
Step 8/9 : COPY manifests/teiid-1898500/0.3.0 /manifests/
 ---> 9f0f3474803a
Step 9/9 : COPY manifests/teiid-1898500/metadata /metadata/
 ---> bf886260d189
Successfully built bf886260d189
Successfully tagged quay.io/kuiwang/teiid-operator:v1898500-3
[root@preserve-olm-env operator-registry]# docker push quay.io/kuiwang/teiid-operator:v1898500-3
The push refers to repository [quay.io/kuiwang/teiid-operator]
f733c016c96f: Pushed 
90b0a18e463d: Pushed 
v1898500-3: digest: sha256:43457b119b4332054cdd27746366fe197987cf4d3f54079d4dcc4c41f33aa34e size: 733
[root@preserve-olm-env operator-registry]# rm -fr bundle.Dockerfile manifests/teiid-1898500/metadata/
[root@preserve-olm-env operator-registry]# 


[root@preserve-olm-env operator-registry]# ./bin/opm alpha bundle build --directory /root/kuiwang/operator-registry/manifests/teiid-1898500/0.4.0 --tag quay.io/kuiwang/teiid-operator:v1898500-4 -p teiid -c beta -e beta
INFO[0000] Building annotations.yaml                    
INFO[0000] Writing annotations.yaml in /root/kuiwang/operator-registry/manifests/teiid-1898500/metadata 
INFO[0000] Building Dockerfile                          
INFO[0000] Writing bundle.Dockerfile in /root/kuiwang/operator-registry 
INFO[0000] Building bundle image                        
Sending build context to Docker daemon  117.6MB
Step 1/9 : FROM scratch
 ---> 
Step 2/9 : LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
 ---> Using cache
 ---> 17f4d6cc02f6
Step 3/9 : LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
 ---> Using cache
 ---> ed5b62e609a0
Step 4/9 : LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
 ---> Using cache
 ---> 958a7490fbd5
Step 5/9 : LABEL operators.operatorframework.io.bundle.package.v1=teiid
 ---> Using cache
 ---> 660a23efdfcf
Step 6/9 : LABEL operators.operatorframework.io.bundle.channels.v1=beta
 ---> Using cache
 ---> c841860d7e05
Step 7/9 : LABEL operators.operatorframework.io.bundle.channel.default.v1=beta
 ---> Using cache
 ---> add28127624d
Step 8/9 : COPY manifests/teiid-1898500/0.4.0 /manifests/
 ---> c76d8d190c3b
Step 9/9 : COPY manifests/teiid-1898500/metadata /metadata/
 ---> 73fa5e8e73e7
Successfully built 73fa5e8e73e7
Successfully tagged quay.io/kuiwang/teiid-operator:v1898500-4
[root@preserve-olm-env operator-registry]# docker push quay.io/kuiwang/teiid-operator:v1898500-4
The push refers to repository [quay.io/kuiwang/teiid-operator]
bd961919d71f: Pushed 
2cb0281357ce: Pushed 
v1898500-4: digest: sha256:a391c30fbce9e159eea312aa8193936bbbb637e9ae98087246828ee121aaa788 size: 733
[root@preserve-olm-env operator-registry]# rm -fr bundle.Dockerfile manifests/teiid-1898500/metadata/
[root@preserve-olm-env operator-registry]# 

[root@preserve-olm-env operator-registry]# ./bin/opm index add --bundles quay.io/kuiwang/teiid-operator:v1898500-3 --tag quay.io/kuiwang/teiid-index:1898500 -c docker
INFO[0000] building the index                            bundles="[quay.io/kuiwang/teiid-operator:v1898500-3]"
INFO[0000] running /usr/bin/docker pull quay.io/kuiwang/teiid-operator:v1898500-3  bundles="[quay.io/kuiwang/teiid-operator:v1898500-3]"
INFO[0000] running docker create                         bundles="[quay.io/kuiwang/teiid-operator:v1898500-3]"
INFO[0000] running docker cp                             bundles="[quay.io/kuiwang/teiid-operator:v1898500-3]"
INFO[0000] running docker rm                             bundles="[quay.io/kuiwang/teiid-operator:v1898500-3]"
INFO[0000] Could not find optional dependencies file     dir=bundle_tmp154897146 file=bundle_tmp154897146/metadata load=annotations
INFO[0000] found csv, loading bundle                     dir=bundle_tmp154897146 file=bundle_tmp154897146/manifests load=bundle
INFO[0000] loading bundle file                           dir=bundle_tmp154897146/manifests file=teiid.io_virtualdatabases_crd.yaml load=bundle
INFO[0000] loading bundle file                           dir=bundle_tmp154897146/manifests file=teiid.v0.3.0.clusterserviceversion.yaml load=bundle
INFO[0000] loading bundle file                           dir=bundle_tmp154897146/manifests file=teiid_service.yaml load=bundle
INFO[0001] Generating dockerfile                         bundles="[quay.io/kuiwang/teiid-operator:v1898500-3]"
INFO[0001] writing dockerfile: index.Dockerfile121648264  bundles="[quay.io/kuiwang/teiid-operator:v1898500-3]"
INFO[0001] running docker build                          bundles="[quay.io/kuiwang/teiid-operator:v1898500-3]"
INFO[0001] [docker build -f index.Dockerfile121648264 -t quay.io/kuiwang/teiid-index:1898500 .]  bundles="[quay.io/kuiwang/teiid-operator:v1898500-3]"
[root@preserve-olm-env operator-registry]# docker push quay.io/kuiwang/teiid-index:1898500
The push refers to repository [quay.io/kuiwang/teiid-index]
b8a0e050ae65: Pushed 
53b2ccb878f2: Mounted from operator-framework/upstream-opm-builder 
772b412a6a9d: Mounted from operator-framework/upstream-opm-builder 
371d4702865c: Layer already exists 
4570b0d18853: Layer already exists 
ace0eda3e3be: Layer already exists 
1898500: digest: sha256:c2d7fcab488f615fd356060a804cf2f145249105547a9e0b2813615de7f1620f size: 1578
[root@preserve-olm-env operator-registry]# ./bin/opm index add --bundles quay.io/kuiwang/teiid-operator:v1898500-4 --from-index quay.io/kuiwang/teiid-index:1898500 --tag quay.io/kuiwang/teiid-index:1898500 -c docker
INFO[0000] building the index                            bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0000] Pulling previous image quay.io/kuiwang/teiid-index:1898500 to get metadata  bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0000] running /usr/bin/docker pull quay.io/kuiwang/teiid-index:1898500  bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0000] running /usr/bin/docker pull quay.io/kuiwang/teiid-index:1898500  bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0001] Getting label data from previous image        bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0001] running docker inspect                        bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0001] running docker create                         bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0001] running docker cp                             bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0001] running docker rm                             bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0001] running /usr/bin/docker pull quay.io/kuiwang/teiid-operator:v1898500-4  bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0002] running docker create                         bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0002] running docker cp                             bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0002] running docker rm                             bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0002] Could not find optional dependencies file     dir=bundle_tmp866806378 file=bundle_tmp866806378/metadata load=annotations
INFO[0002] found csv, loading bundle                     dir=bundle_tmp866806378 file=bundle_tmp866806378/manifests load=bundle
INFO[0002] loading bundle file                           dir=bundle_tmp866806378/manifests file=teiid.io_virtualdatabases_crd.yaml load=bundle
INFO[0002] loading bundle file                           dir=bundle_tmp866806378/manifests file=teiid.v0.4.0.clusterserviceversion.yaml load=bundle
INFO[0002] loading bundle file                           dir=bundle_tmp866806378/manifests file=teiid_service.yaml load=bundle
INFO[0002] Generating dockerfile                         bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0002] writing dockerfile: index.Dockerfile291380344  bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0002] running docker build                          bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
INFO[0002] [docker build -f index.Dockerfile291380344 -t quay.io/kuiwang/teiid-index:1898500 .]  bundles="[quay.io/kuiwang/teiid-operator:v1898500-4]"
[root@preserve-olm-env operator-registry]# docker push quay.io/kuiwang/teiid-index:1898500
The push refers to repository [quay.io/kuiwang/teiid-index]
e42f38b23694: Pushed 
53b2ccb878f2: Layer already exists 
772b412a6a9d: Layer already exists 
371d4702865c: Layer already exists 
4570b0d18853: Layer already exists 
ace0eda3e3be: Layer already exists 
1898500: digest: sha256:0c3d4787022d27d80efaa3f3ab25ef648dcd1501a6bd8e8d6d042a7623f564ce size: 1578

[root@preserve-olm-env 1895367]# docker create quay.io/kuiwang/teiid-index:1898500
f1b39d407c3c93b153e9ff7cafa38b6551d925c4c9748f272dde562cedd1f5c5
[root@preserve-olm-env 1895367]# docker cp f1b39d407c3c93b153e9ff7cafa38b6551d925c4c9748f272dde562cedd1f5c5:/database/index.db .
[root@preserve-olm-env 1895367]# sqlite3 index.db 'select * from related_image;'
quay.io/kuiwang/teiid-operator:v1898500-3|teiid.v0.3.0
quay.io/teiid/teiid-operator:0.3.0|teiid.v0.3.0
quay.io/teiid/teiid-operator:0.4.0|teiid.v0.4.0
quay.io/kuiwang/teiid-operator:v1898500-4|teiid.v0.4.0
[root@preserve-olm-env 1895367]# docker rm f1b39d407c3c93b153e9ff7cafa38b6551d925c4c9748f272dde562cedd1f5c5
f1b39d407c3c93b153e9ff7cafa38b6551d925c4c9748f272dde562cedd1f5c5
[root@preserve-olm-env 1895367]# 

## Here, quay.io/kuiwang/teiid-operator:v1898500-3 and quay.io/kuiwang/teiid-operator:v1898500-4 are the bundle images; both appear in the related_image table above, so the index metadata DB now records the bundle images themselves.
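
## The manual check above (docker create/cp the index.db, then query it with sqlite3) can be scripted. The helper below is a hypothetical sketch, not part of opm; it assumes only what the query output above shows: a related_image table whose first column (named `image` in the operator-registry schema) holds the image reference.

```python
# Sketch: verify that expected bundle images are recorded in an index.db
# extracted from an index image. Assumes related_image(image, ...) as shown
# by the `sqlite3 index.db 'select * from related_image;'` output above.
import sqlite3

def bundle_images_missing_from_index(db_path, expected_images):
    """Return the sorted subset of expected_images absent from related_image."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT image FROM related_image").fetchall()
    finally:
        conn.close()
    present = {image for (image,) in rows}
    return sorted(set(expected_images) - present)
```

## Usage against the DB extracted above: `bundle_images_missing_from_index("index.db", ["quay.io/kuiwang/teiid-operator:v1898500-3", "quay.io/kuiwang/teiid-operator:v1898500-4"])` should return an empty list on a fixed build (and would have reported the bundle images as missing on a build exhibiting this bug).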
--

Comment 10 errata-xmlrpc 2021-02-24 15:31:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633