Bug 1909992 - Fail to pull the bundle image when using the private index image
Summary: Fail to pull the bundle image when using the private index image
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.7.0
Assignee: Anik
QA Contact: Jian Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-12-22 09:29 UTC by Jian Zhang
Modified: 2021-02-24 15:48 UTC (History)
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:47:55 UTC
Target Upstream Version:




Links
- GitHub: operator-framework/operator-lifecycle-manager pull 1941 (closed) — Bug 1909992: Allow private bundle images within private indexes (last updated 2021-02-18 16:06:05 UTC)
- Red Hat Product Errata: RHSA-2020:5633 (last updated 2021-02-24 15:48:11 UTC)

Description Jian Zhang 2020-12-22 09:29:40 UTC
Description of problem:
When using a private index image, the CatalogSource pod works well, but pulling the bundle image fails. I suspect the root cause is that the CatalogSource secret is not added to the bundle pod.

[root@preserve-olm-env data]# oc get job
NAME                                                              COMPLETIONS   DURATION   AGE
d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2fa37991   0/1           12m        12m
[root@preserve-olm-env data]# oc get pods
NAME                                                              READY   STATUS                  RESTARTS   AGE
d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2faxzbsk   0/1     Init:ImagePullBackOff   0          12m


Version-Release number of selected component (if applicable):
[root@preserve-olm-env data]# oc version
Client Version: 4.7.0-0.nightly-2020-12-16-224526
Server Version: 4.7.0-0.nightly-2020-12-20-031835
Kubernetes Version: v1.20.0+87544c5

[root@preserve-olm-env data]# oc -n openshift-operator-lifecycle-manager  exec catalog-operator-7c9b666d58-zvmsd -- olm --version
OLM version: 0.17.0
git commit: ffb66b0f4150bb82cbc6a1e7571901fe7724f91e

How reproducible:
always

Steps to Reproduce:
1. Install OCP 4.7, and log in as a user with the cluster-admin role.

2. Create a private repository on Quay.io and log in to it, for example, quay.io/jiazha/upstream-opm-builder
[root@preserve-olm-env data]# docker login quay.io
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded


3. Replace the etcd operator image with the private image, then build and push the private bundle and index images.
[root@preserve-olm-env etcd]# opm alpha bundle build -c clusterwide-alpha -e clusterwide-alpha -d ./0.9.4-clusterwide/ -p etcd -o -t quay.io/jiazha/upstream-opm-builder:etcdbundle
...
[root@preserve-olm-env etcd]# docker push quay.io/jiazha/upstream-opm-builder:etcdbundle
The push refers to repository [quay.io/jiazha/upstream-opm-builder]
...
[root@preserve-olm-env etcd]# opm index add -b quay.io/jiazha/upstream-opm-builder:etcdbundle -c docker  -t quay.io/jiazha/upstream-opm-builder:etcdindex
INFO[0000] building the index                            bundles="[quay.io/jiazha/upstream-opm-builder:etcdbundle]"
...
[root@preserve-olm-env etcd]# docker push quay.io/jiazha/upstream-opm-builder:etcdindex
...


4. Create a secret that contains the auth for this private repo, as below:
[root@preserve-olm-env data]# oc create secret generic secret-cs --from-file=.dockerconfigjson=/root/.docker/config.json --type=kubernetes.io/dockerconfigjson 
secret/secret-cs created
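For reference, the `.dockerconfigjson` payload that this secret carries is just a JSON map of registry auths. A minimal local sketch of constructing one (the username, password, and file path here are placeholders, not real credentials):

```shell
# Build a minimal dockerconfigjson locally; the credentials are placeholders.
# A real file would come from `docker login`, as in the step above.
AUTH=$(printf '%s' 'myuser:mypassword' | base64)
cat > /tmp/config.json <<EOF
{"auths":{"quay.io":{"auth":"${AUTH}"}}}
EOF
cat /tmp/config.json
```

A secret created from such a file with `--type=kubernetes.io/dockerconfigjson` is what the CatalogSource's `spec.secrets` field references in the next step.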


5. Create a CatalogSource to consume this private image, and set the `spec.secrets` field to reference this secret.
[root@preserve-olm-env data]# cat cs-private.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: cs-private
  namespace: openshift-marketplace
spec:
  displayName: Jian Test
  publisher: Jian
  sourceType: grpc
  secrets: 
  - "secret-cs"
  image: quay.io/jiazha/upstream-opm-builder:etcdindex
  updateStrategy:
    registryPoll:
      interval: 10m

[root@preserve-olm-env data]# oc create -f  cs-private.yaml 
catalogsource.operators.coreos.com/etcd-test created
[root@preserve-olm-env data]# oc get catalogsource
NAME                  DISPLAY                TYPE   PUBLISHER      AGE
cs-private            Jian Test              grpc   Jian           24m

[root@preserve-olm-env data]# oc get pods
NAME                                                              READY   STATUS      RESTARTS   AGE
cs-private-c2qrf                                                  1/1     Running     0          26m

[root@preserve-olm-env data]# oc get packagemanifest|grep etcd
etcd                                                 Jian Test              25m
etcd                                                 Community Operators    8h

6. Subscribe to the etcd operator from this CatalogSource.
[root@preserve-olm-env data]# cat sub-etcd-cluster.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: openshift-operators
spec:
  channel: clusterwide-alpha
  installPlanApproval: Automatic
  name: etcd
  source: cs-private
  sourceNamespace: openshift-marketplace
  startingCSV: etcdoperator.v0.9.4-clusterwide

[root@preserve-olm-env data]# oc get sub -A
NAMESPACE                            NAME                  PACKAGE               SOURCE                CHANNEL
e2e-test-compliance-ktj2cslj-7vvvl   compliance-operator   compliance-operator   compliance-operator   4.6
openshift-operators                  etcd                  etcd                  cs-private            clusterwide-alpha
[root@preserve-olm-env data]# oc get ip -n openshift-operators 
NAME            CSV                               APPROVAL    APPROVED
install-8xgk4   etcdoperator.v0.9.4-clusterwide   Automatic   true
[root@preserve-olm-env data]# oc get csv -n openshift-operators 
No resources found in openshift-operators namespace.



Actual results:
The bundle pod failed to pull the private bundle image: 

[root@preserve-olm-env data]# oc get job
NAME                                                              COMPLETIONS   DURATION   AGE
d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2fa37991   0/1           29s        29s

[root@preserve-olm-env data]# oc get pods
NAME                                                              READY   STATUS                  RESTARTS   AGE
d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2faxzbsk   0/1     Init:ImagePullBackOff   0          12m

  Normal   BackOff         <invalid> (x5 over <invalid>)  kubelet            Back-off pulling image "quay.io/jiazha/upstream-opm-builder:etcdbundle"
  Warning  Failed          <invalid> (x5 over <invalid>)  kubelet            Error: ImagePullBackOff
  Normal   Pulling         <invalid> (x4 over <invalid>)  kubelet            Pulling image "quay.io/jiazha/upstream-opm-builder:etcdbundle"
  Warning  Failed          <invalid> (x4 over <invalid>)  kubelet            Error: ErrImagePull


Expected results:
The bundle pods can use the CatalogSource secret to pull the private bundle image successfully.

Additional info:
[root@preserve-olm-env data]# oc get pods d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2faxzbsk  -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "",
          "interface": "eth0",
          "ips": [
              "10.128.2.191"
          ],
          "default": true,
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "",
          "interface": "eth0",
          "ips": [
              "10.128.2.191"
          ],
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: restricted
  creationTimestamp: "2020-12-22T09:01:35Z"
  generateName: d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2fa37991-
  labels:
    controller-uid: a8dd76a8-ce65-467e-b70e-9bf971d3f899
    job-name: d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2fa37991
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:controller-uid: {}
          f:job-name: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"a8dd76a8-ce65-467e-b70e-9bf971d3f899"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:containers:
          k:{"name":"extract"}:
            .: {}
            f:command: {}
            f:env:
              .: {}
              k:{"name":"CONTAINER_IMAGE"}:
                .: {}
                f:name: {}
                f:value: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:resources: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
            f:volumeMounts:
              .: {}
              k:{"mountPath":"/bundle"}:
                .: {}
                f:mountPath: {}
                f:name: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:initContainers:
          .: {}
          k:{"name":"pull"}:
            .: {}
            f:command: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:resources: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
            f:volumeMounts:
              .: {}
              k:{"mountPath":"/bundle"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/util"}:
                .: {}
                f:mountPath: {}
                f:name: {}
          k:{"name":"util"}:
            .: {}
            f:command: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:resources: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
            f:volumeMounts:
              .: {}
              k:{"mountPath":"/util"}:
                .: {}
                f:mountPath: {}
                f:name: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext:
          .: {}
          f:fsGroup: {}
          f:seLinuxOptions:
            f:level: {}
        f:terminationGracePeriodSeconds: {}
        f:volumes:
          .: {}
          k:{"name":"bundle"}:
            .: {}
            f:emptyDir: {}
            f:name: {}
          k:{"name":"util"}:
            .: {}
            f:emptyDir: {}
            f:name: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-12-22T09:01:35Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:k8s.v1.cni.cncf.io/network-status: {}
          f:k8s.v1.cni.cncf.io/networks-status: {}
    manager: multus
    operation: Update
    time: "2020-12-22T09:01:37Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:initContainerStatuses: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.128.2.191"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: kubelet
    operation: Update
    time: "2020-12-22T09:01:40Z"
  name: d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2faxzbsk
  namespace: openshift-marketplace
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: Job
    name: d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2fa37991
    uid: a8dd76a8-ce65-467e-b70e-9bf971d3f899
  resourceVersion: "205999"
  uid: dc3cc527-8cac-470f-8c84-8765b5be2afd
spec:
  containers:
  - command:
    - opm
    - alpha
    - bundle
    - extract
    - -m
    - /bundle/
    - -n
    - openshift-marketplace
    - -c
    - d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2fa37991
    env:
    - name: CONTAINER_IMAGE
      value: quay.io/jiazha/upstream-opm-builder:etcdbundle
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:262b08f1fc93d6409bc612a11687821371e10fbf8301f3f94db2234d619ccc87
    imagePullPolicy: IfNotPresent
    name: extract
    resources: {}
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
      runAsUser: 1000270000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /bundle
      name: bundle
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-dzp95
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: default-dockercfg-98ccr
  initContainers:
  - command:
    - /bin/cp
    - -Rv
    - /bin/cpb
    - /util/cpb
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82a6e2add54339cdc10687b4c5d827ec8106cc56422e7039c131e7727755b3c9
    imagePullPolicy: IfNotPresent
    name: util
    resources: {}
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
      runAsUser: 1000270000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /util
      name: util
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-dzp95
      readOnly: true
  - command:
    - /util/cpb
    - /bundle
    image: quay.io/jiazha/upstream-opm-builder:etcdbundle
    imagePullPolicy: Always
    name: pull
    resources: {}
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
      runAsUser: 1000270000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /bundle
      name: bundle
    - mountPath: /util
      name: util
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-dzp95
      readOnly: true
  nodeName: ip-10-0-158-39.us-east-2.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: OnFailure
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000270000
    seLinuxOptions:
      level: s0:c16,c15
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: bundle
  - emptyDir: {}
    name: util
  - name: default-token-dzp95
    secret:
      defaultMode: 420
      secretName: default-token-dzp95
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-12-22T09:01:35Z"
    message: 'containers with incomplete status: [pull]'
    reason: ContainersNotInitialized
    status: "False"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-12-22T09:01:35Z"
    message: 'containers with unready status: [extract]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-12-22T09:01:35Z"
    message: 'containers with unready status: [extract]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-12-22T09:01:35Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:262b08f1fc93d6409bc612a11687821371e10fbf8301f3f94db2234d619ccc87
    imageID: ""
    lastState: {}
    name: extract
    ready: false
    restartCount: 0
    started: false
    state:
      waiting:
        reason: PodInitializing
  hostIP: 10.0.158.39
  initContainerStatuses:
  - containerID: cri-o://6f019c2053a89674fc932149542316ce7dd6a2bc7a2d77d863a185453ed2994f
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82a6e2add54339cdc10687b4c5d827ec8106cc56422e7039c131e7727755b3c9
    imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82a6e2add54339cdc10687b4c5d827ec8106cc56422e7039c131e7727755b3c9
    lastState: {}
    name: util
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: cri-o://6f019c2053a89674fc932149542316ce7dd6a2bc7a2d77d863a185453ed2994f
        exitCode: 0
        finishedAt: "2020-12-22T09:01:40Z"
        reason: Completed
        startedAt: "2020-12-22T09:01:39Z"
  - image: quay.io/jiazha/upstream-opm-builder:etcdbundle
    imageID: ""
    lastState: {}
    name: pull
    ready: false
    restartCount: 0
    state:
      waiting:
        message: Back-off pulling image "quay.io/jiazha/upstream-opm-builder:etcdbundle"
        reason: ImagePullBackOff
  phase: Pending
  podIP: 10.128.2.191
  podIPs:
  - ip: 10.128.2.191
  qosClass: BestEffort
  startTime: "2020-12-22T09:01:35Z"

Comment 4 Jian Zhang 2021-01-18 07:31:06 UTC
Cluster version is 4.7.0-0.nightly-2021-01-18-000316
[root@preserve-olm-env data]# oc -n openshift-operator-lifecycle-manager   exec catalog-operator-6d9d94fdb8-wk2vh -- olm --version
OLM version: 0.17.0
git commit: cab348020d3dafccfb7eef5ef4e05f7fe402b544

1. Create a pull secret called "secret-cs" in the "openshift-marketplace" namespace.
[root@preserve-olm-env data]# oc project
Using project "openshift-marketplace" on server "https://api.xxia18shared.qe.devcluster.openshift.com:6443".
[root@preserve-olm-env data]# oc extract secret/pull-secret -n openshift-config  --confirm
.dockerconfigjson
[root@preserve-olm-env data]# oc create secret generic secret-cs --from-file=.dockerconfigjson=/root/.docker/config.json --type=kubernetes.io/dockerconfigjson
secret/secret-cs created

2. Create a CatalogSource CR to consume a private index image that provides etcdoperator.
[root@preserve-olm-env data]# oc create -f cs-private.yaml 
catalogsource.operators.coreos.com/cs-private created
[root@preserve-olm-env data]# cat cs-private.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: cs-private
  namespace: openshift-marketplace
spec:
  displayName: OLM Test
  publisher: Jian
  sourceType: grpc
  secrets: 
  - "secret-cs"
  image: quay.io/jiazha/upstream-opm-builder:etcdindex
  updateStrategy:
    registryPoll:
      interval: 10m


[root@preserve-olm-env data]# oc get packagemanifest|grep etcd
etcd                                                 Community Operators    3h26m
etcd                                                 OLM Test               3m5s

3. Subscribe to this etcdoperator.
[root@preserve-olm-env data]# cat sub-etcd-cluster.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-private
  namespace: openshift-operators
spec:
  channel: clusterwide-alpha
  installPlanApproval: Automatic
  name: etcd
  source: cs-private
  sourceNamespace: openshift-marketplace
  startingCSV: etcdoperator.v0.9.4-clusterwide

4. The job that unpacks the bundle image works well.
[root@preserve-olm-env data]# oc get job
NAME                                                              COMPLETIONS   DURATION   AGE
...
d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2fa37991   1/1           8s         34m

[root@preserve-olm-env data]# oc get pods
NAME                                                              READY   STATUS                  RESTARTS   AGE
...
d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2fajxtbz   0/1     Completed               0          34m
...

[root@preserve-olm-env data]# oc get job d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2fa37991 -o yaml|grep imagePullSecrets -A1
            f:imagePullSecrets:
              .: {}
--
      imagePullSecrets:
      - name: secret-cs


5. Check whether this operator can be installed.

[root@preserve-olm-env data]# oc get sub -n openshift-operators
NAME           PACKAGE   SOURCE       CHANNEL
etcd-private   etcd      cs-private   clusterwide-alpha


[root@preserve-olm-env data]# oc get ip -n openshift-operators  
NAME            CSV                               APPROVAL    APPROVED
install-lm8d4   etcdoperator.v0.9.4-clusterwide   Automatic   true
[root@preserve-olm-env data]# oc get csv -n openshift-operators  
NAME                              DISPLAY   VERSION             REPLACES   PHASE
etcdoperator.v0.9.4-clusterwide   etcd      0.9.4-clusterwide              Installing
...

[root@preserve-olm-env data]# oc get job d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2fa37991 -o yaml|grep image:
                f:image: {}
                f:image: {}
                f:image: {}
        image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56493169715122404a9007e9c087ba36a5851f0cbccebd82c2c0a162ef80fdef
        image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:452e9c8f942f706eed65c0b726b863a2dc796dc029f9736a1596a2fc23e8b29f
        image: quay.io/jiazha/upstream-opm-builder:etcdbundle


[root@preserve-olm-env data]# oc get csv -n openshift-operators 
NAME                              DISPLAY   VERSION             REPLACES   PHASE
etcdoperator.v0.9.4-clusterwide   etcd      0.9.4-clusterwide              Failed

[root@preserve-olm-env data]# oc get pods -n openshift-operators 
NAME                             READY   STATUS             RESTARTS   AGE
etcd-operator-65f8576977-n5txr   0/3     ImagePullBackOff   0          19m

[root@preserve-olm-env data]# oc get deployment -n openshift-operators 
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
etcd-operator   0/1     1            0           20m

[root@preserve-olm-env data]# oc get deployment -n openshift-operators etcd-operator -o yaml|grep imagePullSecrets
...


This private operator failed to install because the pull secret was not injected into its deployment in the ConfigMap.

[root@preserve-olm-env data]# oc get cm d3383223ae0bc9c573b86c2d918fe9f6f9988f5771d0a503da2943d2fa37991 -o yaml|grep imagePullSecrets
[root@preserve-olm-env data]# 
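For contrast, once the secret is propagated, the operator Deployment's pod template would need to carry it under `imagePullSecrets`. A hypothetical fragment of what is missing here:

```yaml
# Hypothetical fragment: what the etcd-operator Deployment's pod template
# would need in order to pull the private operator image.
spec:
  template:
    spec:
      imagePullSecrets:
      - name: secret-cs
```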


I tried adding the auth from the above pull secret (secret-cs) to the pull-secret in "openshift-config". But, it still failed to pull this private image.
[root@preserve-olm-env data]# cat .dockerconfigjson | jq --compact-output '.auths["quay.io/jiazha"] |= . + {"auth":"xxx"}' > new_dockerconfigjson
[root@preserve-olm-env data]#  oc set data secret/pull-secret -n openshift-config  --from-file=.dockerconfigjson=new_dockerconfigjson
secret/pull-secret data updated
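The jq merge step above can be exercised locally against a sample dockerconfigjson (the file paths and the auth value here are placeholders; a real auth value comes from the registry login):

```shell
# Start from a sample dockerconfigjson (a stand-in for the extracted pull-secret).
cat > /tmp/dockerconfigjson <<'EOF'
{"auths":{"registry.redhat.io":{"auth":"original"}}}
EOF
# Merge an extra registry auth, mirroring the jq command above.
jq --compact-output '.auths["quay.io/jiazha"] |= . + {"auth":"PLACEHOLDER"}' \
  /tmp/dockerconfigjson > /tmp/new_dockerconfigjson
cat /tmp/new_dockerconfigjson
```

The existing auths are preserved; only the quay.io/jiazha entry is added.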

[root@preserve-olm-env data]# oc get pods
NAME                             READY   STATUS             RESTARTS   AGE
etcd-operator-65f8576977-2xlsb   0/3     ImagePullBackOff   0          9s

  Normal   BackOff         <invalid> (x2 over <invalid>)  kubelet            Back-off pulling image "quay.io/jiazha/upstream-opm-builder@sha256:2cce9f8e95c9b4ce19b9ffbb95298b9ba3c4960dc962030d8bce655f3811adb0"
  Warning  Failed          <invalid> (x2 over <invalid>)  kubelet            Error: ImagePullBackOff

I changed the Status to ASSIGNED since I think we should have an official solution for installing an operator that uses a private image.

Comment 6 Jian Zhang 2021-01-20 03:45:12 UTC
Hi Anik,

Thanks for the information! I understand; the problem here is that I added this pull secret to the pull-secret in "openshift-config", but it still didn't work. Anyway, I think this problem is not related to this bug, so I will verify the bug first.

Comment 7 Jian Zhang 2021-01-20 06:18:31 UTC
Updates:

After adding this pull secret to the pull-secret in "openshift-config", it works after a few minutes.

[root@preserve-olm-env data]# oc project
Using project "openshift-operators" on server "https://api.hongli-aw47.qe.devcluster.openshift.com:6443".
[root@preserve-olm-env data]# oc get csv
NAME                              DISPLAY   VERSION             REPLACES   PHASE
etcdoperator.v0.9.4-clusterwide   etcd      0.9.4-clusterwide              Installing
[root@preserve-olm-env data]# oc get pods
NAME                             READY   STATUS             RESTARTS   AGE
etcd-operator-65f8576977-pzbf7   0/3     ImagePullBackOff   0          22s

[root@preserve-olm-env data]# oc get pods
NAME                             READY   STATUS    RESTARTS   AGE
etcd-operator-65f8576977-8mh4h   3/3     Running   0          154m

That's a workaround for users who want to install an operator that uses a private image.
[root@preserve-olm-env data]# cat .dockerconfigjson | jq --compact-output '.auths["quay.io/jiazha"] |= . + {"auth":"xxx"}' > new_dockerconfigjson
[root@preserve-olm-env data]#  oc set data secret/pull-secret -n openshift-config  --from-file=.dockerconfigjson=new_dockerconfigjson
secret/pull-secret data updated

Comment 10 errata-xmlrpc 2021-02-24 15:47:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

