Bug 1851543 - Image digest written to mirror for the elasticsearch operator does not match expected image digest
Summary: Image digest written to mirror for the elasticsearch operator does not match ...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Version: 4.3.z
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 4.6.z
Assignee: Michael Burke
QA Contact: Anping Li
Docs Contact: Latha S
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-26 21:39 UTC by Courtney Ruhm
Modified: 2024-03-25 16:06 UTC (History)
12 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: OCP supports multi-arch operators by having operator manifests refer to container images via a manifest list; the target system resolves this to the image manifest for its own platform. (Warning: "manifest" is overloaded here. Operator manifests are metadata about the operator, including its CSV. A "manifest list" or "image manifest" refers to Docker registry entities.)

Reason: This enables supporting clusters on multiple platforms with a single operator manifest, rather than releasing and consuming a different operator manifest for each platform.

Result: Mirroring operators to a private location requires mirroring the manifest list, not just the images for the target cluster platform. Most registries also require the container images for all of the platforms to be mirrored in order to mirror the manifest list.

*** These changes are relevant for 4.2 through 4.5 (using the latter as the base): ***
https://docs.openshift.com/container-platform/4.5/operators/admin/olm-restricted-networks.html

"Building an Operator catalog image", step 2: no change. "oc adm catalog build" does not appear to handle manifest lists, so a filter is still required there.

"Configuring OperatorHub for restricted networks", step 2: "oc adm catalog mirror" also does not appear to handle manifest lists (there is an error with --filter-by-os=".*"), so I believe it will implicitly filter according to the client system if no filter is given. But someone who can actually run this should tell us whether it is even possible to use this without --manifests-only, since it does not appear to directly support mirroring manifest lists at all, and "oc image mirror" must be used in the next step to get the job done. In that case, this command's doc should be altered to remove [--filter-by-os="<os>/<arch>"] entirely and require --manifests-only.
Step 3b, "oc image mirror", should look like this:

$ oc image mirror \
    --filter-by-os=".*" \
    [-a ${REG_CREDS}] \
    -f ./redhat-operators-manifests/mapping.txt

This may be a good place to note that images for all architectures are mirrored even if you only need one. The only case where users could avoid that is if they do not need any of our multi-arch operators, and I'm not sure that use case is worth restructuring this whole doc for.
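The digest mismatch at the heart of this bug follows from how registry digests work: a digest is simply the SHA-256 of the exact manifest bytes the registry serves. A minimal, self-contained shell sketch (illustrative only, not tied to any real registry) showing that any re-serialization of a manifest produces a different digest:

```shell
# A registry image digest is the SHA-256 of the exact manifest bytes served.
# If a mirror stores a re-serialized copy (e.g. a platform-specific manifest
# instead of the original manifest list), the original digest no longer resolves.
manifest='{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.list.v2+json"}'

digest="sha256:$(printf '%s' "$manifest" | sha256sum | cut -d' ' -f1)"
echo "original:     $digest"

# Even a one-byte difference (here, a trailing newline) yields a new digest,
# so a pull by the original digest against the rewritten copy gets "manifest unknown".
digest2="sha256:$(printf '%s\n' "$manifest" | sha256sum | cut -d' ' -f1)"
echo "reserialized: $digest2"

if [ "$digest" != "$digest2" ]; then echo "digests differ"; fi
```

This is why mirroring must preserve the manifest list byte-for-byte: a pull by digest against a rewritten copy fails with "manifest unknown", exactly as in the reproduction below.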
Clone Of:
Environment:
Last Closed: 2022-05-23 15:10:00 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHDEVDOCS-3897 0 None None None 2022-03-28 13:43:37 UTC
Red Hat Knowledge Base (Solution) 5200741 0 None None None 2020-07-02 23:40:33 UTC

Description Courtney Ruhm 2020-06-26 21:39:42 UTC
Description of problem:

Image digest written to the mirror for the elasticsearch operator does not match the expected image digest. This prevents Elasticsearch from installing and running correctly.


Version-Release number of selected component (if applicable):

4.3.8

How reproducible:

Every time

Steps to Reproduce:

I tested this on my disconnected cluster. The problem seems to stem from the manifest not existing in the mirrored registry. Below is an extract of the events from cri-o [ref 2], showing cri-o's attempt to pull from the mirrored registry. Using skopeo inspect, it appears that the digest is indeed different [ref 3].

[0] - [root@gss-ose-3-openshift redhat-operators-manifests]# podman pull registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97
Trying to pull registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97...
Getting image source signatures
Copying blob 66f2cdbd1337 skipped: already exists  
Copying blob 82a8f4ea76cb skipped: already exists  
Copying blob a3ac36470b00 skipped: already exists  
Copying config a02e4e75c9 done  
Writing manifest to image destination
Storing signatures
a02e4e75c95dab28a2d8b6968bdf383edf98654b654f216e00529320f70a773c


[1] - 
[root@gss-ose-3-openshift redhat-operators-manifests]# podman pull gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97
Trying to pull gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97...
  manifest unknown: manifest unknown
Error: error pulling image "gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97": unable to pull gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97: unable to pull image: Error initializing source docker://gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97: Error reading manifest sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97 in gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/openshift4/ose-elasticsearch-operator: manifest unknown: manifest unknown

[2] - 
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.365058153Z" level=debug msg="request: &ExecSyncRequest{ContainerId:df534a6f4028ee7bca18bc1fe24548ee9ef17b9ba69b34b920d2647a811ad80e,Cmd:[test -f /etc/cni/net.d/80-openshift-network.conf],Timeout:1,}" file="v1alpha2/api.pb.go:7852" id=4b063370-eb27-4996-b3b1-598d74fa4ea2
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.415829239Z" level=debug msg="request: &ExecSyncRequest{ContainerId:5a92b9a45fb30e0ec7f1f2a929542f0e2e318480f8c15bd591c98d85c05525be,Cmd:[/bin/bash -c #!/bin/bash\n/usr/share/openvswitch/scripts/ovs-ctl status > /dev/null &&\n/usr/bin/ovs-appctl -T 5 ofproto/list > /dev/null &&\n/usr/bin/ovs-vsctl -t 5 show > /dev/null\n],Timeout:1,}" file="v1alpha2/api.pb.go:7852" id=d392d722-75f6-4035-b857-bbcc2304c446
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.520796247Z" level=debug msg="request: &ImageStatusRequest{Image:&ImageSpec{Image:registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97,},Verbose:false,}" file="v1alpha2/api.pb.go:8226" id=e9c1d307-c0be-43a0-a729-4b3ac570b750
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.521775050Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage]registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97\"" file="storage/storage_transport.go:174"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.522714872Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage]registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97\" does not resolve to an image ID" file="storage/storage_reference.go:161"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.522940616Z" level=warning msg="imageStatus: can't find registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97" file="server/image_status.go:49" id=e9c1d307-c0be-43a0-a729-4b3ac570b750
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.523150845Z" level=debug msg="response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="v1alpha2/api.pb.go:8226" id=e9c1d307-c0be-43a0-a729-4b3ac570b750
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.527120692Z" level=debug msg="request: &PullImageRequest{Image:&ImageSpec{Image:registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97,},Auth:&AuthConfig{Username:52481163|uhc-1ImGTzaH7nshUGWQkKGiduRrOce,Password:eyJhbGciOiJSUzUxMiJ9.eyJzdWIiOiIyNTI3N2MyOWVkOTQ0ZGQxYTcyZjVlYmRkZjU3ZGJlMCJ9.JSI265W0tlScBi-0acRmH1U_xNe3yEkXZsHqmQc_WLI87bN7taX6YBivm3UqQgWNxglJ-rQaa1A48VnK4BFunW7CZKcjIf_Jetb0Wst98pIsXq_KvTIZkFfjO5Y2U3BiPnFeITdd_2JCcnuuDVpsDBj8O6JVf3C4M7viVkTq8OvH4i_PRYmEe_2EAhgRTmW26Q-1OJpe4CT8hnItsvYFMdhP79pjL6GC7bGUt65DiDyPHUtZMybAyahZR8E6fz1OLiNfiPrCHMMujBBewkabzEp0kxTf5XOfHXtZ9dsTSu1dy4imvNoueFxgoCl1J6bBelnQezD-2heEVIFkCUVv391UYTN-x7PWwOO-4gjqsvaga3FtElHuKpPuHq7CviZhDmFxIfclDLagL3P59LJHlYFT7ePKtAQqkFvrgqrbj3eSbyc6_fnY2-ecsmrYeEYLhDifBFYvTfnx69xM3DF5C3-aEJQX3r28gSsRDPgw5UXVuLLlZH94Un73QgK8hxfEWgBd1P2AoJzIOkh5_vURj9V3FnAhClXJEuiI4yMmlPi2iVDbuyzuK0HaYJyFdriQIXLxRKhykpGUkK2iqQhLhA4TyzQwAIk1w944UCZNWP0BrGRnWf3z9GtFgUCM54nlXTmYOL9EKxJ_wXn66o_iI7_pQWIPH79y-w25uWiMNXs,Auth:,ServerAddress:,IdentityToken:,RegistryToken:,},SandboxConfig:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:elasticsearch-operator-555fcd468d-n8n6x,Uid:4fd6582c-6024-4efe-af26-86ef9d0a84d0,Namespace:openshift-operators,Attempt:0,},Hostname:elasticsearch-operator-555fcd468d-n8n6x,LogDirectory:/var/log/pods/openshift-operators_elasticsearch-operator-555fcd468d-n8n6x_4fd6582c-6024-4efe-af26-86ef9d0a84d0,DnsConfig:&DNSConfig{Servers:[172.30.0.10],Searches:[openshift-operators.svc.cluster.local svc.cluster.local cluster.local shift.repro.alpha.rhcee.support],Options:[ndots:5],},PortMappings:[]*PortMapping{&PortMapping{Protocol:TCP,ContainerPort:60000,HostPort:0,HostIp:,},},Labels:map[string]string{io.kubernetes.pod.name: elasticsearch-operator-555fcd468d-n8n6x,io.kubernetes.pod.namespace: openshift-operators,io.kubernetes.pod.uid: 
4fd6582c-6024-4efe-af26-86ef9d0a84d0,name: elasticsearch-operator,pod-template-hash: 555fcd468d,},Annotations:map[string]string{alm-examples: [\n    {\n        \"apiVersion\": \"logging.openshift.io/v1\",\n        \"kind\": \"Elasticsearch\",\n        \"metadata\": {\n          \"name\": \"elasticsearch\"\n        },\n        \"spec\": {\n          \"managementState\": \"Managed\",\n          \"nodeSpec\": {\n            \"image\": \"registry.redhat.io/openshift4/ose-logging-elasticsearch5@sha256:771f91bd4c0e454f773634dca7c9993b6e6fd985d33515eb2d2244b9ef799614\",\n            \"resources\": {\n              \"limits\": {\n                \"memory\": \"1Gi\"\n              },\n              \"requests\": {\n                \"memory\": \"512Mi\"\n              }\n            }\n          },\n          \"redundancyPolicy\": \"SingleRedundancy\",\n          \"nodes\": [\n            {\n                \"nodeCount\": 1,\n                \"roles\": [\"client\",\"data\",\"master\"]\n            }\n          ]\n        }\n    }\n],capabilities: Seamless Upgrades,categories: OpenShift Optional, Logging & Tracing,certified: false,containerImage: registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97,createdAt: 2019-02-20T08:00:00Z,description: The Elasticsearch Operator for OKD provides a means for configuring and managing an Elasticsearch cluster for tracing and cluster logging.\n## Prerequisites and Requirements\n### Elasticsearch Operator Namespace\nThe Elasticsearch Operator must be deployed to the global operator group namespace\n### Memory Considerations\nElasticsearch is a memory intensive application.  The initial\nset of OKD nodes may not be large enough to support the Elasticsearch cluster.  Additional OKD nodes must be added\nto the OKD cluster if you desire to run with the recommended (or better) memory. 
Each ES node can operate with a\nlower memory setting though this is not recommended for production deployments.,k8s.v1.cni.cncf.io/networks-status: [{\n    \"name\": \"openshift-sdn\",\n    \"interface\": \"eth0\",\n    \"ips\": [\n        \"10.129.0.16\"\n    ],\n    \"dns\": {},\n    \"default-route\": [\n        \"10.129.0.1\"\n    ]\n}],kubernetes.io/config.seen: 2020-06-26T17:08:17.692067411Z,kubernetes.io/config.source: api,olm.operatorGroup: global-operators,olm.operatorNamespace: openshift-operators,olm.skipRange: >=4.1.0 <4.3.26-202006160135,olm.targetNamespaces: ,openshift.io/scc: restricted,support: AOS Cluster Logging, Jaeger,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fd6582c_6024_4efe_af26_86ef9d0a84d0.slice,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:&SELinuxOption{User:,Role:,Type:,Level:s0:c13,c7,},RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[1000170000],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},},}" file="v1alpha2/api.pb.go:8244" id=8bb8c88e-f70b-4c5f-8545-1370caab0423
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.527651364Z" level=debug msg="PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97,},Auth:&AuthConfig{Username:52481163|uhc-1ImGTzaH7nshUGWQkKGiduRrOce,Password:eyJhbGciOiJSUzUxMiJ9.eyJzdWIiOiIyNTI3N2MyOWVkOTQ0ZGQxYTcyZjVlYmRkZjU3ZGJlMCJ9.JSI265W0tlScBi-0acRmH1U_xNe3yEkXZsHqmQc_WLI87bN7taX6YBivm3UqQgWNxglJ-rQaa1A48VnK4BFunW7CZKcjIf_Jetb0Wst98pIsXq_KvTIZkFfjO5Y2U3BiPnFeITdd_2JCcnuuDVpsDBj8O6JVf3C4M7viVkTq8OvH4i_PRYmEe_2EAhgRTmW26Q-1OJpe4CT8hnItsvYFMdhP79pjL6GC7bGUt65DiDyPHUtZMybAyahZR8E6fz1OLiNfiPrCHMMujBBewkabzEp0kxTf5XOfHXtZ9dsTSu1dy4imvNoueFxgoCl1J6bBelnQezD-2heEVIFkCUVv391UYTN-x7PWwOO-4gjqsvaga3FtElHuKpPuHq7CviZhDmFxIfclDLagL3P59LJHlYFT7ePKtAQqkFvrgqrbj3eSbyc6_fnY2-ecsmrYeEYLhDifBFYvTfnx69xM3DF5C3-aEJQX3r28gSsRDPgw5UXVuLLlZH94Un73QgK8hxfEWgBd1P2AoJzIOkh5_vURj9V3FnAhClXJEuiI4yMmlPi2iVDbuyzuK0HaYJyFdriQIXLxRKhykpGUkK2iqQhLhA4TyzQwAIk1w944UCZNWP0BrGRnWf3z9GtFgUCM54nlXTmYOL9EKxJ_wXn66o_iI7_pQWIPH79y-w25uWiMNXs,Auth:,ServerAddress:,IdentityToken:,RegistryToken:,},SandboxConfig:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:elasticsearch-operator-555fcd468d-n8n6x,Uid:4fd6582c-6024-4efe-af26-86ef9d0a84d0,Namespace:openshift-operators,Attempt:0,},Hostname:elasticsearch-operator-555fcd468d-n8n6x,LogDirectory:/var/log/pods/openshift-operators_elasticsearch-operator-555fcd468d-n8n6x_4fd6582c-6024-4efe-af26-86ef9d0a84d0,DnsConfig:&DNSConfig{Servers:[172.30.0.10],Searches:[openshift-operators.svc.cluster.local svc.cluster.local cluster.local shift.repro.alpha.rhcee.support],Options:[ndots:5],},PortMappings:[]*PortMapping{&PortMapping{Protocol:TCP,ContainerPort:60000,HostPort:0,HostIp:,},},Labels:map[string]string{io.kubernetes.pod.name: elasticsearch-operator-555fcd468d-n8n6x,io.kubernetes.pod.namespace: openshift-operators,io.kubernetes.pod.uid: 
4fd6582c-6024-4efe-af26-86ef9d0a84d0,name: elasticsearch-operator,pod-template-hash: 555fcd468d,},Annotations:map[string]string{alm-examples: [\n    {\n        \"apiVersion\": \"logging.openshift.io/v1\",\n        \"kind\": \"Elasticsearch\",\n        \"metadata\": {\n          \"name\": \"elasticsearch\"\n        },\n        \"spec\": {\n          \"managementState\": \"Managed\",\n          \"nodeSpec\": {\n            \"image\": \"registry.redhat.io/openshift4/ose-logging-elasticsearch5@sha256:771f91bd4c0e454f773634dca7c9993b6e6fd985d33515eb2d2244b9ef799614\",\n            \"resources\": {\n              \"limits\": {\n                \"memory\": \"1Gi\"\n              },\n              \"requests\": {\n                \"memory\": \"512Mi\"\n              }\n            }\n          },\n          \"redundancyPolicy\": \"SingleRedundancy\",\n          \"nodes\": [\n            {\n                \"nodeCount\": 1,\n                \"roles\": [\"client\",\"data\",\"master\"]\n            }\n          ]\n        }\n    }\n],capabilities: Seamless Upgrades,categories: OpenShift Optional, Logging & Tracing,certified: false,containerImage: registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97,createdAt: 2019-02-20T08:00:00Z,description: The Elasticsearch Operator for OKD provides a means for configuring and managing an Elasticsearch cluster for tracing and cluster logging.\n## Prerequisites and Requirements\n### Elasticsearch Operator Namespace\nThe Elasticsearch Operator must be deployed to the global operator group namespace\n### Memory Considerations\nElasticsearch is a memory intensive application.  The initial\nset of OKD nodes may not be large enough to support the Elasticsearch cluster.  Additional OKD nodes must be added\nto the OKD cluster if you desire to run with the recommended (or better) memory. 
Each ES node can operate with a\nlower memory setting though this is not recommended for production deployments.,k8s.v1.cni.cncf.io/networks-status: [{\n    \"name\": \"openshift-sdn\",\n    \"interface\": \"eth0\",\n    \"ips\": [\n        \"10.129.0.16\"\n    ],\n    \"dns\": {},\n    \"default-route\": [\n        \"10.129.0.1\"\n    ]\n}],kubernetes.io/config.seen: 2020-06-26T17:08:17.692067411Z,kubernetes.io/config.source: api,olm.operatorGroup: global-operators,olm.operatorNamespace: openshift-operators,olm.skipRange: >=4.1.0 <4.3.26-202006160135,olm.targetNamespaces: ,openshift.io/scc: restricted,support: AOS Cluster Logging, Jaeger,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fd6582c_6024_4efe_af26_86ef9d0a84d0.slice,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:&SELinuxOption{User:,Role:,Type:,Level:s0:c13,c7,},RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[1000170000],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},},}" file="server/image_pull.go:24" id=8bb8c88e-f70b-4c5f-8545-1370caab0423
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.538977723Z" level=debug msg="reference rewritten from 'registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97' to 'gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97'" file="sysregistriesv2/system_registries_v2.go:52"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.539240993Z" level=debug msg="reference rewritten from 'registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97' to 'registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97'" file="sysregistriesv2/system_registries_v2.go:52"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.539428915Z" level=debug msg="Trying to pull \"gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97\"" file="docker/docker_image_src.go:63"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.539963698Z" level=debug msg="Returning credentials from /var/lib/kubelet/config.json" file="config/config.go:113"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.540104036Z" level=debug msg="Using registries.d directory /etc/containers/registries.d for sigstore configuration" file="docker/lookaside.go:51"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.540515346Z" level=debug msg=" Using \"default-docker\" configuration" file="docker/lookaside.go:169"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.540630861Z" level=debug msg=" No signature storage configuration found for gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97" file="docker/lookaside.go:174"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.540826378Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000" file="tlsclientconfig/tlsclientconfig.go:21"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.541109053Z" level=debug msg="GET https://gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/v2/" file="docker/docker_client.go:500"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.567603773Z" level=debug msg="Received container exit code: 0, message: " file="oci/runtime_oci.go:472"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.568401109Z" level=info msg="exec'd [test -f /etc/cni/net.d/80-openshift-network.conf] in openshift-sdn/sdn-p6p6p/sdn" file="server/container_execsync.go:49" id=4b063370-eb27-4996-b3b1-598d74fa4ea2
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.568559416Z" level=debug msg="response: &ExecSyncResponse{Stdout:[],Stderr:[],ExitCode:0,}" file="v1alpha2/api.pb.go:7852" id=4b063370-eb27-4996-b3b1-598d74fa4ea2
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.590142498Z" level=debug msg="Ping https://gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/v2/ status 401" file="docker/docker_client.go:628"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.590906239Z" level=debug msg="GET https://gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/v2/openshift4/ose-elasticsearch-operator/manifests/sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97" file="docker/docker_client.go:500"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.635578186Z" level=debug msg="Trying to pull \"registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97\"" file="docker/docker_image_src.go:63"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.635812005Z" level=debug msg="Returning credentials from DockerAuthConfig" file="config/config.go:80"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.636009181Z" level=debug msg="Using registries.d directory /etc/containers/registries.d for sigstore configuration" file="docker/lookaside.go:51"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.636672987Z" level=debug msg=" Using \"default-docker\" configuration" file="docker/lookaside.go:169"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.636853291Z" level=debug msg=" No signature storage configuration found for registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97" file="docker/lookaside.go:174"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.637147939Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/registry.redhat.io" file="tlsclientconfig/tlsclientconfig.go:21"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.637460531Z" level=debug msg="GET https://registry.redhat.io/v2/" file="docker/docker_client.go:500"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.643517674Z" level=debug msg="Ping https://registry.redhat.io/v2/ err Get https://registry.redhat.io/v2/: dial tcp: lookup registry.redhat.io on 192.168.122.1:53: server misbehaving (&url.Error{Op:\"Get\", URL:\"https://registry.redhat.io/v2/\", Err:(*net.OpError)(0xc000af4b90)})" file="docker/docker_client.go:624"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.643738060Z" level=debug msg="GET https://registry.redhat.io/v1/_ping" file="docker/docker_client.go:500"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.649778756Z" level=debug msg="Ping https://registry.redhat.io/v1/_ping err Get https://registry.redhat.io/v1/_ping: dial tcp: lookup registry.redhat.io on 192.168.122.1:53: server misbehaving (&url.Error{Op:\"Get\", URL:\"https://registry.redhat.io/v1/_ping\", Err:(*net.OpError)(0xc0008d78b0)})" file="docker/docker_client.go:651"
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.650034288Z" level=debug msg="error preparing image registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:13e233ff2dd41967c55194724ba148868ed878232789ab616ea3a6f9a9219c97: error pinging docker registry registry.redhat.io: Get https://registry.redhat.io/v2/: dial tcp: lookup registry.redhat.io on 192.168.122.1:53: server misbehaving" file="server/image_pull.go:64" id=8bb8c88e-f70b-4c5f-8545-1370caab0423
Jun 26 17:24:26 worker-0 crio[2122871]: time="2020-06-26 17:24:26.650470369Z" level=debug msg="response error: Get https://registry.redhat.io/v2/: dial tcp: lookup registry.redhat.io on 192.168.122.1:53: server misbehaving\nerror pinging docker registry registry.redhat.io\ngithub.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker.(*dockerClient).detectPropertiesHelper\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker/docker_client.go:642\ngithub.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker.(*dockerClient).detectProperties.func1\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker/docker_client.go:675\nsync.(*Once).Do\n\t/usr/lib/golang/src/sync/once.go:44\ngithub.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker.(*dockerClient).detectProperties\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker/docker_client.go:675\ngithub.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker.(*dockerClient).makeRequest\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker/docker_client.go:395\ngithub.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker.(*dockerImageSource).fetchManifest\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker/docker_image_src.go:152\ngithub.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker.(*dockerImageSource).ensureManifestIsLoaded\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker/docker_image_src.go
:185\ngithub.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker.newImageSource\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker/docker_image_src.go:88\ngithub.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker.newImage\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker/docker_image.go:27\ngithub.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker.dockerReference.NewImage\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/github.com/containers/image/v5/docker/docker_transport.go:138\ngithub.com/cri-o/cri-o/internal/pkg/storage.(*imageService).PrepareImage\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/internal/pkg/storage/image.go:349\ngithub.com/cri-o/cri-o/server.(*Server).PullImage\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/server/image_pull.go:62\ngithub.com/cri-o/cri-o/vendor/k8s.io/cri-api/pkg/apis/runtime/v1alpha2._ImageService_PullImage_Handler.func1\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/k8s.io/cri-api/pkg/apis/runtime/v1alpha2/api.pb.go:8242\ngithub.com/cri-o/cri-o/internal/pkg/log.UnaryInterceptor.func1\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/internal/pkg/log/interceptors.go:59\ngithub.com/cri-o/cri-o/vendor/k8s.io/cri-api/pkg/apis/runtime/v1alpha2._ImageService_PullImage_Handler\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/k8s.io/cri-api/pkg/apis/runtime/v1alpha2/api.pb.go:8244\ngithub.com/cri-o/cri-o/vendor/google.golang.org/grpc.(*Ser
ver).processUnaryRPC\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/google.golang.org/grpc/server.go:995\ngithub.com/cri-o/cri-o/vendor/google.golang.org/grpc.(*Server).handleStream\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/google.golang.org/grpc/server.go:1275\ngithub.com/cri-o/cri-o/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1\n\t/builddir/build/BUILD/cri-o-9aad8e4832d2e1d03db6e54bdd5a3dbf72c021c8/_output/src/github.com/cri-o/cri-o/vendor/google.golang.org/grpc/server.go:710\nruntime.goexit\n\t/usr/lib/golang/src/runtime/asm_amd64.s:1337" file="v1alpha2/api.pb.go:8244" id=8bb8c88e-f70b-4c5f-8545-1370caab0423


[3] - 
[root@gss-ose-3-openshift mirror]# skopeo inspect --authfile /home/shadowman/auth.json docker://gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/openshift4/ose-elasticsearch-operator
{
    "Name": "gss-ose-3-openshift.lab.eng.rdu2.redhat.com:5000/openshift4/ose-elasticsearch-operator",
    "Digest": "sha256:10925a6fe388eb8f29a173ca5f8841ef78fb78012166262d57ce98312c4eec61",
    "RepoTags": [
        "latest"
    ],

Actual results:


37m         Normal    RequirementsUnknown      clusterserviceversion/elasticsearch-operator.4.4.0-202006080610    requirements not yet checked
37m         Normal    RequirementsNotMet       clusterserviceversion/elasticsearch-operator.4.4.0-202006080610    one or more requirements couldn't be found
37m         Normal    BackOff                  pod/elasticsearch-operator-6897c8954b-v6lzj                        Back-off pulling image "registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:150902be5584eb0f4f5fb1e367a95dc829e3e6d46779cf5a3257eb2c5fbbf80a"

Expected results:

Image digest in the mirror will match. Elasticsearch will install and run successfully.

Additional info:

A workaround we found:

1. Get skopeo:
# yum install skopeo 

2. Inspect the `openshift4/ose-elasticsearch-operator` repository to get the correct image digest:
# skopeo inspect --authfile /home/shadowman/auth.json docker://registry-host:5000/openshift4/ose-elasticsearch-operator
{
    "Name": "registry-host:5000/openshift4/ose-elasticsearch-operator",
    "Digest": "sha256:10925a6fe388eb8f29a173ca5f8841ef78fb78012166262d57ce98312c4eec61",   <----- the digest
    "RepoTags": [
        "latest"
    ],

3. Get the elasticsearch csv
# oc get csv -n openshift-operators

4. Edit the elasticsearch csv
# oc edit csv -n openshift-operators elasticsearch-operator.4.x.x.xxxxx

5. Update the sha256 digests to the sha256 value read above for both containerImage image references. In this case, find image references for: `registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:150902be5584eb0f4f5fb1e367a95dc829e3e6d46779cf5a3257eb2c5fbbf80a`.

6. Delete the elasticsearch-operator deployment. For example:

# oc delete deployment elasticsearch-operator -n openshift-operators

In a few moments, Elasticsearch should restart using an image digest that exists in the mirror.
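The steps above can be condensed into a small script. This is only a sketch: the registry host, auth file path, and CSV name are placeholders, and the digest extraction is shown against a saved sample of the `skopeo inspect` output rather than a live registry.

```shell
#!/bin/sh
# Sketch of the workaround; registry host, auth file, and CSV name are placeholders.

# In practice you would run:
#   skopeo inspect --authfile /path/to/auth.json \
#     docker://registry-host:5000/openshift4/ose-elasticsearch-operator > inspect.json
# Here a saved sample of that output stands in for the live call.
cat > inspect.json <<'EOF'
{
    "Name": "registry-host:5000/openshift4/ose-elasticsearch-operator",
    "Digest": "sha256:10925a6fe388eb8f29a173ca5f8841ef78fb78012166262d57ce98312c4eec61",
    "RepoTags": ["latest"]
}
EOF

# Extract the Digest field (`jq -r .Digest` would also work)
digest=$(sed -n 's/.*"Digest": *"\([^"]*\)".*/\1/p' inspect.json)
echo "mirror digest: $digest"

# Then point the CSV at that digest and restart the operator:
#   oc edit csv -n openshift-operators elasticsearch-operator.4.x.x.xxxxx
#   oc delete deployment elasticsearch-operator -n openshift-operators
```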

Comment 3 Periklis Tsirakidis 2020-06-30 07:27:09 UTC
Lowering the priority, as this is not a blocker for 4.5. This issue relates to digests for 4.3.x EO images.

Comment 5 Yuxiang Zhu 2020-07-06 07:42:21 UTC
registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:150902be5584eb0f4f5fb1e367a95dc829e3e6d46779cf5a3257eb2c5fbbf80a is not found because we didn't ship https://errata.devel.redhat.com/advisory/55232.
However due to the current faulty OLM operator metadata push workflow, the reference had been put into distgit then later 4.4 or 4.2 OLM operator metadata push updated that reference to distgit.


I think this issue can be closed because later OLM operator metadata pushes have updated the reference so it is no longer current.

Comment 6 Yuxiang Zhu 2020-07-06 07:45:02 UTC
later 4.4 or 4.2 OLM operator metadata push updated that reference to distgit -> later 4.4 or 4.2 OLM operator metadata push updated that reference in app registry.

Comment 7 Periklis Tsirakidis 2020-07-06 07:52:15 UTC
As mentioned above this is not a bug.

Comment 8 Javier Coscia 2020-08-11 19:08:30 UTC
Reopening this bug since we have found another instance for the same error.
In this case, the issue happened on an OCP 4.5.4 cluster while trying to install the cluster logging (CL) stack.

The customer validated yesterday that `ose-elasticsearch-operator` with SHA 772ade6ee79fd75d04a9a1c2bb8c92c2900c305e5f5c868e59b99643c9b515d9 was available from `registry.redhat.io`, although a few days earlier, when the case was opened on 7/8, the image was not found.

This is the message from the case when customer tried to pull the ES operator image.
~~~
~ % docker pull registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:772ade6ee79fd75d04a9a1c2bb8c92c2900c305e5f5c868e59b99643c9b515d9
Error response from daemon: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<HTML><HEAD><TITLE>Error</TITLE></HEAD><BODY>\nAn error occurred while processing your request.<p>\nReference&#32;&#35;132&#46;8ce50b17&#46;1596815551&#46;81f218\n</BODY></HTML>\n"
~~~

After the image was available yesterday, customer noticed that the image Digest changed for that image when mirrored to their internal registry (NEXUS).

Images for elasticsearch-operator on channel 4.5
[
  "registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:772ade6ee79fd75d04a9a1c2bb8c92c2900c305e5f5c868e59b99643c9b515d9",
  "registry.redhat.io/openshift4/ose-elasticsearch-proxy@sha256:c3dcaabf92984f305b5374018c58cac18525418969b9ae2122bfdb9d9e7f81d5",
  "registry.redhat.io/openshift4/ose-logging-elasticsearch6@sha256:76a2e6073f0cd2a9c02fcdd4578e477136da51c955dc7583fe72702bd16f4515",
  "registry.redhat.io/openshift4/ose-logging-kibana6@sha256:0b7f2c19f44b73739c8c3925d2f305047c979a943fa942b4bea2ab776943f4cf",
  "registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6db2efb24cf572508af74ee155b2437c60ebb40f52619320038dec2ab5553413"
]



Example:

* This is a custom script customer built

~~~
 ./check_digest.sh 772ade6ee79fd75d04a9a1c2bb8c92c2900c305e5f5c868e59b99643c9b515d9
SOURCE: sha256:772ade6ee79fd75d04a9a1c2bb8c92c2900c305e5f5c868e59b99643c9b515d9
DEST: sha256:65ac828ee5bff80547fd2045b4d0730cadaa16e22f7b07a757817305891b5fdd
~~~

As you can see, the image digest on the SOURCE registry (Red Hat) differs from the one on the DEST registry (Nexus).

This happened to all images for elasticsearch-operator and cluster-logging.
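The check boils down to comparing the digest the CSV references (source) against what the mirror actually serves. A minimal sketch of that comparison, using the two digests quoted above as literals (a real script such as the customer's `check_digest.sh` would obtain each value via `skopeo inspect` against the respective registry):

```shell
#!/bin/sh
# Digests taken from the output above; a real check would fetch these with
# `skopeo inspect` against each registry instead of hard-coding them.
SOURCE="sha256:772ade6ee79fd75d04a9a1c2bb8c92c2900c305e5f5c868e59b99643c9b515d9"
DEST="sha256:65ac828ee5bff80547fd2045b4d0730cadaa16e22f7b07a757817305891b5fdd"

if [ "$SOURCE" = "$DEST" ]; then
  result="match"
else
  result="MISMATCH: mirror serves a different digest"
fi
echo "$result"
```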


As a workaround, customer had to edit the `ClusterServiceVersion` for both Operators to point to the digest value in their mirror registry.

Comment 9 Luke Meyer 2020-08-13 17:32:36 UTC
"mistakes were made" last week and 4.5 metadata was out of sync with the images available until Monday. Things should be synced now, else file more bugs.

> customer noticed that the image Digest changed for that image when mirrored to their internal registry (NEXUS).

What they have done is sync the single-architecture image instead of the manifest list (operator manifests point at manifest-list shasums).

--------------
$ oc image info registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-elasticsearch-operator@sha256:772ade6ee79fd75d04a9a1c2bb8c92c2900c305e5f5c868e59b99643c9b515d9
error: the image is a manifest list and contains multiple images - use --filter-by-os to select from:

  OS            DIGEST
  linux/amd64   sha256:65ac828ee5bff80547fd2045b4d0730cadaa16e22f7b07a757817305891b5fdd
--------------
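For context, the digest in the operator manifest refers to a manifest list: a small JSON document that maps each platform to the digest of its per-platform image manifest. A trimmed, hypothetical example (only the amd64 entry uses the real digest from the output above):

```
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:65ac828ee5bff80547fd2045b4d0730cadaa16e22f7b07a757817305891b5fdd",
      "platform": { "architecture": "amd64", "os": "linux" }
    }
  ]
}
```

Mirroring only the per-platform image copies the inner manifest, so the mirror's top-level digest no longer matches the manifest-list digest the CSV references.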

Assuming they are using `oc image mirror`, they need to include the `--filter-by-os=.*` parameter to mirror the manifest list as well.
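`--filter-by-os` takes a regular expression matched against each manifest's os/arch pair. The difference between a single-platform filter and `.*` can be illustrated with plain `grep` (the platform list below is a hypothetical sample; the actual filtering happens inside `oc`):

```shell
# Platforms that might appear in a manifest list (hypothetical sample)
platforms="linux/amd64
linux/arm64
linux/ppc64le
linux/s390x"

# A single-platform filter matches exactly one entry
echo "$platforms" | grep -E -c '^linux/amd64$'

# --filter-by-os=".*" matches every entry, so the manifest list and all
# platform images get mirrored
echo "$platforms" | grep -E -c '.*'
```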

I don't think this is a bug (now), but feel free to continue to reopen if it needs something.

Comment 10 Javier Coscia 2020-08-13 19:55:56 UTC
Thanks for the pointer Luke. Customer tested with the `--filter-by-os=.*` parameter and the procedure worked.
I believe we could update the documentation [1] to add this parameter. What do you think?

[1] https://docs.openshift.com/container-platform/4.5/operators/olm-restricted-networks.html#olm-restricted-networks-operatorhub_olm-restricted-networks (step 3.b)

Comment 20 Luke Meyer 2020-10-22 23:43:34 UTC
(In reply to Hideshi Fukumoto from comment #14)
> @Luke,
> 
> Thank you for your help.
> I've re-read your comment on this issue, and I understand that
> 
> 1. The documentation should be improved.
>     => OK, please improve the doc ASAP in this bug.
>        If you need another bug to do it, please let us know.

I think this is the bug to do that :) I am not the expert on this but I will ask experts to review.
Editing "Docs text" to indicate the changes.

> 
> 2. There is currently no way to reduce the storage space by not copying
> unnecessary
>    architecture's image/something-else, and it's the current specification.
>     => However, our customer wants to avoid wasting disk space and transfer
> time by
>        copying unnecessary architecture's images.
>        Can you handle it in this bug ?
>        If not, should we file new RFE request on it?

That should be a new RFE. And I have to say up front, it will require either a significant architectural change or a registry that's more relaxed about data integrity.

If you can find a registry that will host the manifest list without requiring all the manifests to be present, then you can save the space for the superfluous arches.

Otherwise, we'll either need an architectural revision to make the operator manifests support multiple arches, or publish multiple manifests to multiple channels, or something else I haven't thought of.

This problem doesn't go away with the bundle format in 4.6+. The commands probably change, but references are still to manifest lists.

Comment 21 Hideshi Fukumoto 2020-10-23 02:03:03 UTC
(In reply to Luke Meyer from comment #20)

@luke
Thanks for your answers.

> (In reply to Hideshi Fukumoto from comment #14)
> > @Luke,
> > 
> > Thank you for your help.
> > I've re-read your comment on this issue, and I understand that
> > 
> > 1. The documentation should be improved.
> >     => OK, please improve the doc ASAP in this bug.
> >        If you need another bug to do it, please let us know.
> 
> I think this is the bug to do that :) I am not the expert on this but I will
> ask experts to review.
> Editing "Docs text" to indicate the changes.

    I personally believe that the content in "Doc Text" is used in the next
    release notes, reflected under "Bug fixes", "Known issues", or "Enhancements".
    Therefore, in this case, I think it's better to change the explanation
    in the "OCP Operators" guide [1] directly (and append additional notes if necessary).
    
    | Step 3b "oc image mirror" should look like this:
    | 
    | $ oc image mirror \
    |     --filter-by-os=".*" \
    |     [-a ${REG_CREDS}] \
    |     -f ./redhat-operators-manifests/mapping.txt
    
   [1] https://docs.openshift.com/container-platform/4.5/operators/olm-restricted-networks.html#olm-restricted-networks-operatorhub_olm-restricted-networks (step 3.b)

> > 2. There is currently no way to reduce the storage space by not copying
> > unnecessary
> >    architecture's image/something-else, and it's the current specification.
> >     => However, our customer wants to avoid wasting disk space and transfer
> > time by
> >        copying unnecessary architecture's images.
> >        Can you handle it in this bug ?
> >        If not, should we file new RFE request on it?
> 
> That should be a new RFE.

  Okay, I'll file a new RFE on it.

> And I have to say up front, it will require either
> a significant architectural change or a registry that's more relaxed about
> data integrity.
> 
> If you can find a registry that will host the manifest list without
> requiring all the manifests to be present, then you can save the space for
> the superfluous arches.

  I'm sorry, but I do not understand your idea above.
  If there is a realistic procedure for our customers to implement at this time,
  please let us know about it.

Comment 23 Asheth 2020-10-23 16:20:43 UTC
@Luke 

One of my CUs has a restricted environment. The OCP version is 4.5.8. The CU was facing challenges while installing the Elasticsearch Operator, so the CU followed the article [1]. After following it, the CU was able to successfully install the Elasticsearch Operator. Now the CU is facing the same issue with other operators as well.

I would like to know whether this is expected behavior.

For the CU, manually deleting the CSV and then deleting the deployment is not a practical solution.

Can you advise?

[1] https://access.redhat.com/solutions/5200741

Comment 36 Michael Burke 2022-05-19 16:14:50 UTC
Hideshi --

Forgive my ignorance on this issue. It is not entirely clear to me what changes you requested, especially since the section you commented on has been removed.

It seems that the information now appears in 

* "Mirroring an Operator catalog" [1] for 4.6-4.8
* "Mirroring Operator catalogs for use with disconnected clusters" in the "Mirroring catalog contents to airgapped registries" subsection [2] for 4.9+.

You are suggesting that, for multi-arch environments, the

----
$ oc adm catalog mirror \
    <index_image> \
    <mirror_registry>:<port>/<namespace> \
    [-a ${REG_CREDS}] \
    [--insecure] \
    [--index-filter-by-os='<platform>/<arch>'] \
    [--manifests-only] 
----

should be

----
$ oc adm catalog mirror \
    <index_image> \
    <mirror_registry>:<port>/<namespace> \
    [-a ${REG_CREDS}] \
    [--insecure] \
    [--manifests-only] 
----

And

----
$ oc adm catalog mirror \
    <index_image> \ 
    file:///local/index \ 
    -a ${REG_CREDS} \ 
    --insecure \ 
    --index-filter-by-os='<platform>/<arch>' 
----

Should be:

----
$ oc image mirror \
    --filter-by-os=".*" \
    [-a ${REG_CREDS}] \
    -f ./redhat-operators-manifests/mapping.txt
----

However, `oc adm catalog mirror` is in a distinct section, "Mirroring catalog contents to registries on the same network" [3], in the 4.9+ docs.

Any thoughts on how to proceed? 

Thank you,
Michael

[1] https://docs.openshift.com/container-platform/4.8/operators/admin/olm-restricted-networks.html#olm-mirror-catalog_olm-restricted-networks
[2] https://docs.openshift.com/container-platform/4.10/installing/disconnected_install/installing-mirroring-installation-images.html#olm-mirror-catalog-colocated_installing-mirroring-installation-images
[3] https://docs.openshift.com/container-platform/4.10/installing/disconnected_install/installing-mirroring-installation-images.html#olm-mirror-catalog-colocated_installing-mirroring-installation-images

Comment 38 Red Hat Bugzilla 2023-09-15 01:29:59 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days

