Bug 1835884 - opm bundle extract should be permissive to annotation problems
Summary: opm bundle extract should be permissive to annotation problems
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.5.0
Assignee: Kevin Rizza
QA Contact: yhui
URL:
Whiteboard:
Duplicates: 1843660 (view as bug list)
Depends On:
Blocks: 1843660
 
Reported: 2020-05-14 16:46 UTC by yhui
Modified: 2020-07-13 17:39 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-13 17:39:04 UTC
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Github operator-framework operator-registry pull 338 0 None closed Bug 1835884: opm bundle extract shouldn't validate annotations 2021-02-16 15:54:49 UTC
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:39:19 UTC

Description yhui 2020-05-14 16:46:33 UTC
Description of problem:
No CSV or pod appears after a subscription is created using the catalogsource (which was created from a bundle index image built with 'opm index add'), and the installplan reports 'BundleLookupPending'.

[root@preserve-olm-env ~]# oc get csv -n test-operators
No resources found in test-operators namespace.
[root@preserve-olm-env ~]# oc get pod -n test-operators
No resources found in test-operators namespace.
[root@preserve-olm-env database]# oc get ip install-bgs4k -n test-operators -o yaml
status:
  bundleLookups:
  - catalogSourceRef:
      name: cockroachdb-catalog
      namespace: openshift-marketplace
    conditions:
    - lastTransitionTime: "2020-05-14T08:43:12Z"
      message: unpack job not completed
      reason: JobIncomplete
      status: "True"
      type: BundleLookupPending
    identifier: cockroachdb.v2.0.9
    path: quay.io/yuhui12/cockroachdb-bundle:2.0.9
    replaces: ""
  catalogSources: []
  phase: Installing


Version-Release number of selected component (if applicable):
[root@preserve-olm-env ~]# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-05-13-221558   True        False         13h     Cluster version is 4.5.0-0.nightly-2020-05-13-221558


How reproducible:
always


Steps to Reproduce:
1. Create cockroachdb bundle image using 'opm alpha bundle build'
[root@preserve-olm-env ~]# opm alpha bundle build -d /root/hui/community-operators/community-operators/cockroachdb/2.0.9 -t quay.io/yuhui12/cockroachdb-bundle:2.0.9 -c alpha -p cockroachdb
[root@preserve-olm-env ~]# opm alpha bundle build -d /root/hui/community-operators/community-operators/cockroachdb/2.1.1 -t quay.io/yuhui12/cockroachdb-bundle:2.1.1 -c alpha -p cockroachdb
[root@preserve-olm-env ~]# opm alpha bundle build -d /root/hui/community-operators/community-operators/cockroachdb/2.1.11 -t quay.io/yuhui12/cockroachdb-bundle:2.1.11 -c alpha -p cockroachdb

[root@preserve-olm-env ~]# docker push quay.io/yuhui12/cockroachdb-bundle:2.0.9
[root@preserve-olm-env ~]# docker push quay.io/yuhui12/cockroachdb-bundle:2.1.1
[root@preserve-olm-env ~]# docker push quay.io/yuhui12/cockroachdb-bundle:2.1.11

2. Create the bundle index image using 'opm index add'
[root@preserve-olm-env ~]# opm index add -b quay.io/yuhui12/cockroachdb-bundle:2.0.9 -t quay.io/yuhui12/cockroachdb-index:2.0.9 -c docker
[root@preserve-olm-env ~]# docker push quay.io/yuhui12/cockroachdb-index:2.0.9
[root@preserve-olm-env ~]# opm index add --bundles quay.io/yuhui12/cockroachdb-bundle:2.1.1 --from-index quay.io/yuhui12/cockroachdb-index:2.0.9 --tag quay.io/yuhui12/cockroachdb-index:2.1.1
[root@preserve-olm-env ~]# docker push quay.io/yuhui12/cockroachdb-index:2.1.1
[root@preserve-olm-env ~]# opm index add --bundles quay.io/yuhui12/cockroachdb-bundle:2.1.11 --from-index quay.io/yuhui12/cockroachdb-index:2.1.1 --tag quay.io/yuhui12/cockroachdb-index:2.1.11 -c docker
[root@preserve-olm-env ~]# docker push quay.io/yuhui12/cockroachdb-index:2.1.11

3. Create the catalogsource using the bundle index image.
[root@preserve-olm-env new-feature]# cat catsrc.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: cockroachdb-catalog
  namespace: openshift-marketplace
spec:
  displayName: cockroachdb Operator Catalog
  image: quay.io/yuhui12/cockroachdb-index:2.1.11
  publisher: QE
  sourceType: grpc
[root@preserve-olm-env new-feature]# oc apply -f catsrc.yaml

[root@preserve-olm-env new-feature]# oc get catsrc -n openshift-marketplace
NAME                  DISPLAY                        TYPE   PUBLISHER   AGE
certified-operators   Certified Operators            grpc   Red Hat     13h
cockroachdb-catalog   cockroachdb Operator Catalog   grpc   QE          8h
community-operators   Community Operators            grpc   Red Hat     13h
etcd-catalog          Etcd Operator                  grpc   QE          8h
qe-app-registry                                      grpc               13h
redhat-marketplace    Red Hat Marketplace            grpc   Red Hat     13h
redhat-operators      Red Hat Operators              grpc   Red Hat     13h
[root@preserve-olm-env new-feature]# oc get pod -n openshift-marketplace
NAME                                   READY   STATUS    RESTARTS   AGE
certified-operators-76dcb7679-cgldm    1/1     Running   0          13h
cockroachdb-catalog-kjj26              1/1     Running   0          9h
community-operators-76cbcd48f6-9b8td   1/1     Running   0          13h
etcd-catalog-6mbqk                     1/1     Running   0          8h
marketplace-operator-85fbdccd7-fzz8g   1/1     Running   0          13h
qe-app-registry-6d877786d4-vnxml       1/1     Running   0          13h
redhat-marketplace-8574d7bcc4-dqngr    1/1     Running   0          13h
redhat-operators-7bf444b88d-jgwzc      1/1     Running   0          13h

4. Create the OperatorGroup and Subscription in the test-operators project.
[root@preserve-olm-env new-feature]# cat og-new.yaml 
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: test-operators-og
  namespace: test-operators
spec:
  targetNamespaces:
  - test-operators
[root@preserve-olm-env new-feature]# oc apply -f og-new.yaml
[root@preserve-olm-env new-feature]# cat sub.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cockroachdb
  namespace: test-operators
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: cockroachdb
  source: cockroachdb-catalog
  sourceNamespace: openshift-marketplace
  startingCSV: cockroachdb.v2.0.9
[root@preserve-olm-env new-feature]# oc apply -f sub.yaml 
[root@preserve-olm-env new-feature]# oc get sub -n test-operators
NAME          PACKAGE       SOURCE                CHANNEL
cockroachdb   cockroachdb   cockroachdb-catalog   alpha

5. Check the csv and pod.
[root@preserve-olm-env ~]# oc get csv -n test-operators
No resources found in test-operators namespace.
[root@preserve-olm-env ~]# oc get pod -n test-operators
No resources found in test-operators namespace.
[root@preserve-olm-env database]# oc get ip install-bgs4k -n test-operators -o yaml
status:
  bundleLookups:
  - catalogSourceRef:
      name: cockroachdb-catalog
      namespace: openshift-marketplace
    conditions:
    - lastTransitionTime: "2020-05-14T08:43:12Z"
      message: unpack job not completed
      reason: JobIncomplete
      status: "True"
      type: BundleLookupPending
    identifier: cockroachdb.v2.0.9
    path: quay.io/yuhui12/cockroachdb-bundle:2.0.9
    replaces: ""
  catalogSources: []
  phase: Installing

Actual results:
No CSV or pod is created, and the installplan reports BundleLookupPending.

Expected results:
The CSV and pod are created successfully, and the installplan reports no errors.


Additional info:
The OCP cluster for debugging is available at https://mastern-jenkins-csb-openshift-qe.cloud.paas.psi.redhat.com/job/Launch%20Environment%20Flexy/93055/artifact/workdir/install-dir/auth/kubeconfig/*view*/.

Comment 1 Ben Luddy 2020-05-14 20:01:53 UTC
I restarted the unpack job and took a look at the failure logs:

$ k delete -n openshift-marketplace job 2ac9c0f1268caab9e2b0fcdd901870e8859a3c3e6ab089795476074e739d15c 
job.batch "2ac9c0f1268caab9e2b0fcdd901870e8859a3c3e6ab089795476074e739d15c" deleted

$ k logs -n openshift-marketplace 2ac9c0f1268caab9e2b0fcdd901870e8859a3c3e6ab089795476074e73777mp
time="2020-05-14T19:57:17Z" level=info msg="Using in-cluster kube client config"
time="2020-05-14T19:57:17Z" level=info msg="Reading file" file=/bundle/manifests/cockroachdb.v2.0.9.clusterserviceversion.yaml
time="2020-05-14T19:57:17Z" level=info msg="Reading file" file=/bundle/manifests/cockroachdbs.charts.helm.k8s.io.crd.yaml
time="2020-05-14T19:57:17Z" level=info msg="Reading file" file=/bundle/metadata/annotations.yaml
Error: error loading manifests from directory: annotation validation failed, missing or empty values: operators.operatorframework.io.bundle.channel.default.v1
Usage:
  opm alpha bundle extract [flags]

Flags:
  -c, --configmapname string   name of configmap to write bundle data
  -l, --datalimit uint         maximum limit in bytes for total bundle data (default 1048576)
      --debug                  enable debug logging
  -h, --help                   help for extract
  -k, --kubeconfig string      absolute path to kubeconfig file
  -m, --manifestsdir string    path to directory containing manifests (default "/")
  -n, --namespace string       namespace to write configmap data (default "openshift-operator-lifecycle-manager")

So opm is complaining about the bundle's annotations. Opening that up:

$ cat annotations.yaml 
annotations:
  operators.operatorframework.io.bundle.channel.default.v1: ""
  operators.operatorframework.io.bundle.channels.v1: alpha
  operators.operatorframework.io.bundle.manifests.v1: manifests/
  operators.operatorframework.io.bundle.mediatype.v1: registry+v1
  operators.operatorframework.io.bundle.metadata.v1: metadata/
  operators.operatorframework.io.bundle.package.v1: cockroachdb

So "operators.operatorframework.io.bundle.channel.default.v1" is an empty string (which is what the validation error is complaining about).

Comment 2 Ben Luddy 2020-05-14 20:26:28 UTC
In 57fa577, `opm alpha bundle build -d ~/w/community-operators/community-operators/cockroachdb/2.0.9/ -u /tmp/u -t localhost:5000/u -p cockroachdb -c alpha` produces annotations.yaml with `operators.operatorframework.io.bundle.channel.default.v1: ""`, but in 0121e48, it produces `operators.operatorframework.io.bundle.channel.default.v1: alpha`.

Our docs (https://github.com/operator-framework/operator-registry/blob/master/docs/design/operator-bundle.md#generate-bundle-annotations-and-dockerfile) say "If the default channel is not provided, the first channel in channel list is selected as default," which doesn't seem to be happening in this case.
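As a rough sketch of that documented fallback (the defaultChannel helper name here is invented for illustration, not taken from the repo):

package main

import (
	"fmt"
	"strings"
)

// defaultChannel returns the value to write into the
// operators.operatorframework.io.bundle.channel.default.v1 annotation.
// When no default is supplied, it falls back to the first entry of the
// comma-separated channel list, as the docs describe.
func defaultChannel(provided, channels string) string {
	if provided != "" {
		return provided
	}
	for _, c := range strings.Split(channels, ",") {
		if c = strings.TrimSpace(c); c != "" {
			return c
		}
	}
	return ""
}

func main() {
	// With -c alpha and no default provided, "alpha" should be written
	// rather than the empty string seen above.
	fmt.Println(defaultChannel("", "alpha"))
}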

Comment 3 Ben Luddy 2020-05-14 21:20:09 UTC
Just caught up with Bowen on this. A fix for the annotation validation error is already up in https://github.com/operator-framework/operator-registry/pull/252.

Comment 4 Evan Cordell 2020-05-18 21:55:21 UTC
I think there is also a bug in `opm alpha bundle extract`: by the time we are unpacking a bundle, we no longer care about these annotations aside from the manifest location (there may be more at some point, but none of the others we have today matter once we've picked the bundle and decided to unpack it). `bundle extract` should not be validating them before it unpacks.
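To make that concrete, a permissive extract only needs to read the manifests-directory annotation and can ignore everything else. A minimal sketch, assuming gopkg.in/yaml.v2 for parsing and not reflecting the actual opm code:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v2" // assumed YAML parser; any equivalent works
)

// annotationsFile mirrors the layout of metadata/annotations.yaml.
type annotationsFile struct {
	Annotations map[string]string `yaml:"annotations"`
}

// manifestsDir reads annotations.yaml and returns only the manifests
// location, without validating any of the other annotation values.
func manifestsDir(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	var af annotationsFile
	if err := yaml.Unmarshal(data, &af); err != nil {
		return "", err
	}
	dir := af.Annotations["operators.operatorframework.io.bundle.manifests.v1"]
	if dir == "" {
		return "", fmt.Errorf("manifests annotation missing from %s", path)
	}
	return dir, nil
}

func main() {
	dir, err := manifestsDir("/bundle/metadata/annotations.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("unpacking manifests from", dir)
}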

Comment 5 Bowen Song 2020-05-20 19:44:35 UTC
Deleting https://github.com/operator-framework/operator-registry/blob/master/pkg/configmap/configmap_writer.go#L86-L88 would line up with the changes in https://github.com/operator-framework/operator-registry/pull/318: the default channel is optional now, so that validation could pass.

I agree that validation should not play a central role here.
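Compared with the strict check sketched under comment 1, the relaxed variant simply drops the default-channel key from the required set. A sketch of that direction (not the actual configmap_writer.go code; the package and function names are made up for illustration):

package bundlecheck

import "fmt"

// validateForUnpack requires only the annotations needed to unpack the
// bundle; operators.operatorframework.io.bundle.channel.default.v1 is
// intentionally omitted, since the default channel is optional after PR 318.
func validateForUnpack(annotations map[string]string) error {
	required := []string{
		"operators.operatorframework.io.bundle.mediatype.v1",
		"operators.operatorframework.io.bundle.manifests.v1",
		"operators.operatorframework.io.bundle.metadata.v1",
		"operators.operatorframework.io.bundle.package.v1",
		"operators.operatorframework.io.bundle.channels.v1",
	}
	for _, key := range required {
		if annotations[key] == "" {
			return fmt.Errorf("annotation %q is missing or empty", key)
		}
	}
	return nil
}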

Comment 8 yhui 2020-05-29 15:05:56 UTC
Version:
Latest master branch of opm
[root@preserve-olm-env operator-registry]# git log
commit 054cd90a84ecadc24d6b94e955c3437ac740a1a7
Merge: 7437af6 2c009be
Author: OpenShift Merge Robot <openshift-merge-robot.github.com>
Date:   Thu May 28 14:36:19 2020 -0400

    Merge pull request #340 from ecordell/ro
    
    Bug 1840727: fix(unpack): support unpacking readonly folders

Steps to test:
1. Create cockroachdb bundle image using 'opm alpha bundle build'
[root@preserve-olm-env operator-registry]# /data/hui/operator-registry/opm alpha bundle build -d /data/hui/cockroachdb.bk/2.0.9 -t quay.io/yuhui12/cockroachdb-bundle:2.0.9d -c alpha -p cockroachdb

In this command, the default channel was not provided.

2. Check the bundle.Dockerfile and annotations.yaml.
[root@preserve-olm-env new-feature]# cat bundle.Dockerfile 
FROM scratch

LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
LABEL operators.operatorframework.io.bundle.package.v1=cockroachdb
LABEL operators.operatorframework.io.bundle.channels.v1=alpha
LABEL operators.operatorframework.io.bundle.channel.default.v1=

[root@preserve-olm-env metadata]# cat annotations.yaml
annotations:
  operators.operatorframework.io.bundle.channel.default.v1: ""
  operators.operatorframework.io.bundle.channels.v1: alpha
  operators.operatorframework.io.bundle.manifests.v1: manifests/
  operators.operatorframework.io.bundle.mediatype.v1: registry+v1
  operators.operatorframework.io.bundle.metadata.v1: metadata/
  operators.operatorframework.io.bundle.package.v1: cockroachdb

The operators.operatorframework.io.bundle.channel.default.v1 annotation was not populated with the channel value.

This does not match the expected result.

Comment 9 Kevin Rizza 2020-05-29 15:13:15 UTC
Hi Yu,

Are you asking about the Dockerfile label having "" as the value of the `operators.operatorframework.io.bundle.channel.default.v1` label? This bug was about the annotation validation failing when the string is empty, not about the default channel matching between the Dockerfile and the annotations.yaml (in fact, these have different syntax, and leaving it empty should be fine since the Dockerfile is not YAML). The test for this should be to ensure that, if you build a bundle without providing the default channel in alpha bundle build, you can add it to an index and OLM can successfully unpack and install the operator on the cluster.

Comment 10 yhui 2020-06-02 09:34:59 UTC
Version:
Latest master branch of opm
[root@preserve-olm-env operator-registry]# git log
commit a146011de4ada20ca6a857cf81cc6db7798d9891
Merge: 054cd90 eadc1bb
Author: exdx <dsover>
Date:   Fri May 29 15:29:33 2020 -0400

    Merge pull request #341 from exdx/feat/add-release-doc
    
    docs: add OPM_VERSION notes to release docs

OCP 4.5:
[root@preserve-olm-env new-feature]# /data/hui/oc version
Client Version: 4.5.0-202005291417-9933eb9
Server Version: 4.5.0-0.nightly-2020-05-30-025738
Kubernetes Version: v1.18.3+224c8a2


Steps to test:
1. Create the cockroachdb bundle image using 'opm alpha bundle build' without the --default flag

[root@preserve-olm-env cockroachdb]# /data/hui/operator-registry/opm alpha bundle build -d /data/hui/community-operators/community-operators/cockroachdb/2.0.9/ -t quay.io/yuhui12/cockroachdb-bundle:2.0.9d -c alpha -p cockroachdb
[root@preserve-olm-env cockroachdb]# rm -rf bundle.Dockerfile metadata/
[root@preserve-olm-env cockroachdb]# /data/hui/operator-registry/opm alpha bundle build -d /data/hui/community-operators/community-operators/cockroachdb/2.1.1/ -t quay.io/yuhui12/cockroachdb-bundle:2.1.1d -c alpha -p cockroachdb
[root@preserve-olm-env cockroachdb]# rm -rf bundle.Dockerfile metadata/
[root@preserve-olm-env cockroachdb]# /data/hui/operator-registry/opm alpha bundle build -d /data/hui/community-operators/community-operators/cockroachdb/2.1.11/ -t quay.io/yuhui12/cockroachdb-bundle:2.1.11d -c alpha -p cockroachdb

[root@preserve-olm-env cockroachdb]# docker push quay.io/yuhui12/cockroachdb-bundle:2.0.9d
[root@preserve-olm-env cockroachdb]# docker push quay.io/yuhui12/cockroachdb-bundle:2.1.1d
[root@preserve-olm-env cockroachdb]# docker push quay.io/yuhui12/cockroachdb-bundle:2.1.11d

2. Create the bundle index image using 'opm index add'

[root@preserve-olm-env cockroachdb]# /data/hui/operator-registry/opm index add -b quay.io/yuhui12/cockroachdb-bundle:2.0.9d -t quay.io/yuhui12/cockroachdb-index:2.0.9d -c docker
[root@preserve-olm-env cockroachdb]# docker push quay.io/yuhui12/cockroachdb-index:2.0.9d

[root@preserve-olm-env cockroachdb]# /data/hui/operator-registry/opm index add -b quay.io/yuhui12/cockroachdb-bundle:2.1.1d --from-index quay.io/yuhui12/cockroachdb-index:2.0.9d -t quay.io/yuhui12/cockroachdb-index:2.1.1d -c docker
[root@preserve-olm-env cockroachdb]# docker push quay.io/yuhui12/cockroachdb-index:2.1.1d

[root@preserve-olm-env cockroachdb]# /data/hui/operator-registry/opm index add -b quay.io/yuhui12/cockroachdb-bundle:2.1.11d --from-index quay.io/yuhui12/cockroachdb-index:2.1.1d -t quay.io/yuhui12/cockroachdb-index:2.1.11d -c docker
[root@preserve-olm-env cockroachdb]# docker push quay.io/yuhui12/cockroachdb-index:2.1.11d

3. Create the catalogsource using the bundle index image.

[root@preserve-olm-env new-feature]# cat catsrc.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: hui-catalog
  namespace: openshift-marketplace
spec:
  displayName: cockroachdb Operator Catalog
  image: quay.io/yuhui12/cockroachdb-index:2.1.11d
  publisher: QE
  sourceType: grpc
[root@preserve-olm-env new-feature]# oc apply -f catsrc.yaml

[root@preserve-olm-env new-feature]# oc get catsrc -n openshift-marketplace
NAME                  DISPLAY                        TYPE   PUBLISHER   AGE
certified-operators   Certified Operators            grpc   Red Hat     29h
community-operators   Community Operators            grpc   Red Hat     29h
hui-catalog           cockroachdb Operator Catalog   grpc   QE          54s
qe-app-registry                                      grpc               29h
redhat-marketplace    Red Hat Marketplace            grpc   Red Hat     29h
redhat-operators      Red Hat Operators              grpc   Red Hat     29h
[root@preserve-olm-env new-feature]# oc get pod -n openshift-marketplace
NAME                                    READY   STATUS    RESTARTS   AGE
certified-operators-65cd75b554-thft6    1/1     Running   0          29h
community-operators-78dc4d647c-gzhc7    1/1     Running   0          29h
hui-catalog-hr9rl                       1/1     Running   0          63s
marketplace-operator-7b987fbbf9-64ccp   1/1     Running   0          29h
qe-app-registry-d5f7fb49c-f8g5x         1/1     Running   0          29h
redhat-marketplace-6489bf4d4d-xwv9c     1/1     Running   0          29h
redhat-operators-6cb4f64c8d-dph8s       1/1     Running   0          29h
[root@preserve-olm-env new-feature]# oc get packagemanifest |grep cock
cockroachdb                                  Community Operators            29h
cockroachdb-certified                        Certified Operators            29h
cockroachdb-certified-rhmp                   Red Hat Marketplace            29h
cockroachdb                                  cockroachdb Operator Catalog   79s

4. Create the OperatorGroup and Subscription in the yh-operators project.

[root@preserve-olm-env new-feature]# cat og.yaml 
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: test-operators-og
  namespace: yh-operators
spec:
  targetNamespaces:
  - yh-operators
[root@preserve-olm-env new-feature]# oc apply -f og-new.yaml

[root@preserve-olm-env new-feature]# cat sub.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cockroachdb
  namespace: yh-operators
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: cockroachdb
  source: hui-catalog
  sourceNamespace: openshift-marketplace
  startingCSV: cockroachdb.v2.0.9
[root@preserve-olm-env new-feature]# oc apply -f sub.yaml 

[root@preserve-olm-env new-feature]# oc get sub -n yh-operators
NAME          PACKAGE       SOURCE        CHANNEL
cockroachdb   cockroachdb   hui-catalog   alpha
[root@preserve-olm-env new-feature]# oc get ip -n yh-operators
NAME            CSV                  APPROVAL    APPROVED
install-hpcml   cockroachdb.v2.0.9   Automatic   true
[root@preserve-olm-env new-feature]# oc get csv
NAME                 DISPLAY       VERSION   REPLACES   PHASE
cockroachdb.v2.0.9   CockroachDB   2.0.9                Succeeded
[root@preserve-olm-env new-feature]# oc get pods
NAME                          READY   STATUS    RESTARTS   AGE
cockroachdb-56c98d555-5h9fj   1/1     Running   0          45s

5. Wait about 2 minutes; the operator updates automatically.

[root@preserve-olm-env operator-registry]# oc get ip -n yh-operators
NAME            CSV                   APPROVAL    APPROVED
install-hpcml   cockroachdb.v2.0.9    Automatic   true
install-x4hdl   cockroachdb.v2.1.11   Automatic   true
install-xvbqt   cockroachdb.v2.1.1    Automatic   true
[root@preserve-olm-env operator-registry]# oc get csv -n yh-operators
NAME                  DISPLAY       VERSION   REPLACES             PHASE
cockroachdb.v2.1.11   CockroachDB   2.1.11    cockroachdb.v2.1.1   Succeeded
[root@preserve-olm-env operator-registry]# oc get pods
NAME                         READY   STATUS    RESTARTS   AGE
cockroachdb-c4fdd648-gqmms   1/1     Running   0          77m

The operator upgrades from cockroachdb.v2.0.9 to 2.1.1 to 2.1.11. Verifying the bug.

Comment 11 yhui 2020-06-02 09:59:42 UTC
Hi Kevin,

I noticed that three completed pods were added in the openshift-marketplace project.

[root@preserve-olm-env new-feature]# oc get pod -n openshift-marketplace
NAME                                                              READY   STATUS      RESTARTS   AGE
039f18bab9794d4f6f692b1ef2e06fff09f4724e5e61e6379a549f399d29z6q   0/1     Completed   0          98m
24daa8b5b0ad821f90fedb2263e340938478547298f0498618e008d20bf6s2f   0/1     Completed   0          100m
b8f7eba83ed6ec7b3edaf11d86cde995dee33ab244706c1c0186b479e4jcqxx   0/1     Completed   0          100m
certified-operators-65cd75b554-thft6                              1/1     Running     0          31h
community-operators-78dc4d647c-gzhc7                              1/1     Running     0          31h
hui-catalog-hr9rl                                                 1/1     Running     0          102m
marketplace-operator-7b987fbbf9-64ccp                             1/1     Running     0          31h
olm-operators-6qphr                                               1/1     Running     0          58s
qe-app-registry-d5f7fb49c-f8g5x                                   1/1     Running     0          30h
redhat-marketplace-6489bf4d4d-xwv9c                               1/1     Running     0          31h
redhat-operators-6cb4f64c8d-dph8s                                 1/1     Running     0          31h
[root@preserve-olm-env new-feature]# oc logs 039f18bab9794d4f6f692b1ef2e06fff09f4724e5e61e6379a549f399d29z6q -n openshift-marketplace
time="2020-06-02T08:12:23Z" level=info msg="Using in-cluster kube client config"
time="2020-06-02T08:12:23Z" level=info msg="Reading file" file=/bundle/manifests/cockroachdb.v2.1.11.clusterserviceversion.yaml
time="2020-06-02T08:12:23Z" level=info msg="Reading file" file=/bundle/manifests/cockroachdbs.charts.helm.k8s.io.crd.yaml
time="2020-06-02T08:12:23Z" level=info msg="Reading file" file=/bundle/metadata/annotations.yaml

[root@preserve-olm-env new-feature]# oc logs 24daa8b5b0ad821f90fedb2263e340938478547298f0498618e008d20bf6s2f -n openshift-marketplace
time="2020-06-02T08:10:26Z" level=info msg="Using in-cluster kube client config"
time="2020-06-02T08:10:26Z" level=info msg="Reading file" file=/bundle/manifests/cockroachdb.v2.0.9.clusterserviceversion.yaml
time="2020-06-02T08:10:26Z" level=info msg="Reading file" file=/bundle/manifests/cockroachdbs.charts.helm.k8s.io.crd.yaml
time="2020-06-02T08:10:26Z" level=info msg="Reading file" file=/bundle/metadata/annotations.yaml

[root@preserve-olm-env new-feature]# oc logs b8f7eba83ed6ec7b3edaf11d86cde995dee33ab244706c1c0186b479e4jcqxx -n openshift-marketplace
time="2020-06-02T08:10:41Z" level=info msg="Using in-cluster kube client config"
time="2020-06-02T08:10:41Z" level=info msg="Reading file" file=/bundle/manifests/cockroachdb.v2.1.1.clusterserviceversion.yaml
time="2020-06-02T08:10:41Z" level=info msg="Reading file" file=/bundle/manifests/cockroachdbs.charts.helm.k8s.io.crd.yaml
time="2020-06-02T08:10:41Z" level=info msg="Reading file" file=/bundle/metadata/annotations.yaml


[root@preserve-olm-env new-feature]# oc describe pod 039f18bab9794d4f6f692b1ef2e06fff09f4724e5e61e6379a549f399d29z6q -n openshift-marketplace
Name:         039f18bab9794d4f6f692b1ef2e06fff09f4724e5e61e6379a549f399d29z6q
Namespace:    openshift-marketplace
Priority:     0
Node:         ip-10-0-202-71.us-east-2.compute.internal/10.0.202.71
Start Time:   Tue, 02 Jun 2020 04:12:16 -0400
Labels:       controller-uid=b5da0c95-ef0f-4f90-96c4-ec3351f0026d
              job-name=039f18bab9794d4f6f692b1ef2e06fff09f4724e5e61e6379a549f399df3bbd
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.19"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.19"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/scc: restricted
Status:       Succeeded
IP:           10.128.2.18
IPs:
  IP:           10.128.2.18
Controlled By:  Job/039f18bab9794d4f6f692b1ef2e06fff09f4724e5e61e6379a549f399df3bbd
Init Containers:
  util:
    Container ID:  cri-o://fa7937de5f81c5cf1ec559159265518a486f0a4a64b4b17cfe11ae03ced65dba
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfb4a2f25478a6aadbf70b0e768e2259542f0b832791d70e2cf39be50c2b3899
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfb4a2f25478a6aadbf70b0e768e2259542f0b832791d70e2cf39be50c2b3899
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/cp
      -Rv
      /bin/cpb
      /util/cpb
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 02 Jun 2020 04:12:19 -0400
      Finished:     Tue, 02 Jun 2020 04:12:19 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /util from util (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4qmcx (ro)
  pull:
    Container ID:  cri-o://26cb954c0fac70ead2f106968cc630aaf9d94b673fdef92b1c0084b93228489c
    Image:         quay.io/yuhui12/cockroachdb-bundle:2.1.11d
    Image ID:      quay.io/yuhui12/cockroachdb-bundle@sha256:2eea266525434286dc5cc9c5c642ffe98aa2cdee824dd2201c28cc3c30252c50
    Port:          <none>
    Host Port:     <none>
    Command:
      /util/cpb
      /bundle
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 02 Jun 2020 04:12:21 -0400
      Finished:     Tue, 02 Jun 2020 04:12:22 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /bundle from bundle (rw)
      /util from util (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4qmcx (ro)
Containers:
  extract:
    Container ID:  cri-o://2dc421492ac14cad848c384ea9a5a804e58e79df7891deaa0c17101951ce7757
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f022a06628f53fcd22e2fe0ae3d028d1129cc7ba89369023e30ecb517c5bfd6b
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f022a06628f53fcd22e2fe0ae3d028d1129cc7ba89369023e30ecb517c5bfd6b
    Port:          <none>
    Host Port:     <none>
    Command:
      opm
      alpha
      bundle
      extract
      -m
      /bundle/
      -n
      openshift-marketplace
      -c
      039f18bab9794d4f6f692b1ef2e06fff09f4724e5e61e6379a549f399df3bbd
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 02 Jun 2020 04:12:23 -0400
      Finished:     Tue, 02 Jun 2020 04:12:23 -0400
    Ready:          False
    Restart Count:  0
    Environment:
      CONTAINER_IMAGE:  quay.io/yuhui12/cockroachdb-bundle:2.1.11d
    Mounts:
      /bundle from bundle (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4qmcx (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  bundle:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  util:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  default-token-4qmcx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4qmcx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                  From                                                Message
  ----     ------          ----                 ----                                                -------
  Normal   Scheduled       103m                 default-scheduler                                   Successfully assigned openshift-marketplace/039f18bab9794d4f6f692b1ef2e06fff09f4724e5e61e6379a549f399d29z6q to ip-10-0-202-71.us-east-2.compute.internal
  Normal   AddedInterface  103m                 multus                                              Add eth0 [10.128.2.18/23]
  Normal   Created         103m                 kubelet, ip-10-0-202-71.us-east-2.compute.internal  Created container util
  Normal   Started         103m                 kubelet, ip-10-0-202-71.us-east-2.compute.internal  Started container util
  Normal   Pulling         103m                 kubelet, ip-10-0-202-71.us-east-2.compute.internal  Pulling image "quay.io/yuhui12/cockroachdb-bundle:2.1.11d"
  Normal   Started         103m                 kubelet, ip-10-0-202-71.us-east-2.compute.internal  Started container pull
  Normal   Pulled          103m                 kubelet, ip-10-0-202-71.us-east-2.compute.internal  Successfully pulled image "quay.io/yuhui12/cockroachdb-bundle:2.1.11d"
  Normal   Created         103m                 kubelet, ip-10-0-202-71.us-east-2.compute.internal  Created container pull
  Normal   Pulled          103m                 kubelet, ip-10-0-202-71.us-east-2.compute.internal  Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f022a06628f53fcd22e2fe0ae3d028d1129cc7ba89369023e30ecb517c5bfd6b" already present on machine
  Normal   Created         103m                 kubelet, ip-10-0-202-71.us-east-2.compute.internal  Created container extract
  Normal   Started         103m                 kubelet, ip-10-0-202-71.us-east-2.compute.internal  Started container extract
  Normal   SandboxChanged  103m                 kubelet, ip-10-0-202-71.us-east-2.compute.internal  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          103m (x2 over 103m)  kubelet, ip-10-0-202-71.us-east-2.compute.internal  Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfb4a2f25478a6aadbf70b0e768e2259542f0b832791d70e2cf39be50c2b3899" already present on machine
  Normal   AddedInterface  103m                 multus                                              Add eth0 [10.128.2.19/23]
  Warning  Failed          103m                 kubelet, ip-10-0-202-71.us-east-2.compute.internal  Error: cannot find volume "util" to mount into container "util"


Although the operator can be created and upgraded successfully, I think this result is not what we expected. Please help check the issue. Thanks.

Comment 12 Evan Cordell 2020-06-03 19:48:01 UTC
Hi Hui Yu,

Could you please file a new bug for this issue? It is something we would like to look at, but it is unrelated to this bug. Moving this bug back from Verified prevents us from backporting the fix discussed in this BZ.

Comment 14 Evan Cordell 2020-06-03 19:50:37 UTC
*** Bug 1843660 has been marked as a duplicate of this bug. ***

Comment 16 yhui 2020-06-04 06:43:03 UTC
Hi Evan,

OK. I will verify the bug on the latest OCP 4.5 cluster again, and I will create another bug to track the other issue (Comment 11) I encountered.


Version:
Latest OCP 4.5
[root@preserve-olm-env ~]# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-06-03-013823   True        False         103m    Cluster version is 4.5.0-0.nightly-2020-06-03-013823


Steps to test:
1. Create the cockroachdb bundle image using 'opm alpha bundle build' without the --default flag

[root@preserve-olm-env cockroachdb]# /data/hui/operator-registry/opm alpha bundle build -d /data/hui/community-operators/community-operators/cockroachdb/2.0.9/ -t quay.io/yuhui12/cockroachdb-bundle:2.0.9d -c alpha -p cockroachdb
[root@preserve-olm-env cockroachdb]# rm -rf bundle.Dockerfile metadata/
[root@preserve-olm-env cockroachdb]# /data/hui/operator-registry/opm alpha bundle build -d /data/hui/community-operators/community-operators/cockroachdb/2.1.1/ -t quay.io/yuhui12/cockroachdb-bundle:2.1.1d -c alpha -p cockroachdb
[root@preserve-olm-env cockroachdb]# rm -rf bundle.Dockerfile metadata/
[root@preserve-olm-env cockroachdb]# /data/hui/operator-registry/opm alpha bundle build -d /data/hui/community-operators/community-operators/cockroachdb/2.1.11/ -t quay.io/yuhui12/cockroachdb-bundle:2.1.11d -c alpha -p cockroachdb

[root@preserve-olm-env cockroachdb]# docker push quay.io/yuhui12/cockroachdb-bundle:2.0.9d
[root@preserve-olm-env cockroachdb]# docker push quay.io/yuhui12/cockroachdb-bundle:2.1.1d
[root@preserve-olm-env cockroachdb]# docker push quay.io/yuhui12/cockroachdb-bundle:2.1.11d

2. Create the bundle index image using 'opm index add'

[root@preserve-olm-env cockroachdb]# /data/hui/operator-registry/opm index add -b quay.io/yuhui12/cockroachdb-bundle:2.0.9d -t quay.io/yuhui12/cockroachdb-index:2.0.9d -c docker
[root@preserve-olm-env cockroachdb]# docker push quay.io/yuhui12/cockroachdb-index:2.0.9d

[root@preserve-olm-env cockroachdb]# /data/hui/operator-registry/opm index add -b quay.io/yuhui12/cockroachdb-bundle:2.1.1d --from-index quay.io/yuhui12/cockroachdb-index:2.0.9d -t quay.io/yuhui12/cockroachdb-index:2.1.1d -c docker
[root@preserve-olm-env cockroachdb]# docker push quay.io/yuhui12/cockroachdb-index:2.1.1d

[root@preserve-olm-env cockroachdb]# /data/hui/operator-registry/opm index add -b quay.io/yuhui12/cockroachdb-bundle:2.1.11d --from-index quay.io/yuhui12/cockroachdb-index:2.1.1d -t quay.io/yuhui12/cockroachdb-index:2.1.11d -c docker
[root@preserve-olm-env cockroachdb]# docker push quay.io/yuhui12/cockroachdb-index:2.1.11d

3. Create the catalogsource using the bundle index image.

[root@preserve-olm-env new-feature]# cat catsrc.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: hui-catalog
  namespace: openshift-marketplace
spec:
  displayName: cockroachdb Operator Catalog
  image: quay.io/yuhui12/cockroachdb-index:2.1.11d
  publisher: QE
  sourceType: grpc
[root@preserve-olm-env new-feature]# oc apply -f catsrc.yaml

[root@preserve-olm-env new-feature]# oc get catsrc -n openshift-marketplace
NAME                  DISPLAY                        TYPE   PUBLISHER   AGE
certified-operators   Certified Operators            grpc   Red Hat     121m
community-operators   Community Operators            grpc   Red Hat     121m
hui-catalog           cockroachdb Operator Catalog   grpc   QE          60s
qe-app-registry                                      grpc               105m
redhat-marketplace    Red Hat Marketplace            grpc   Red Hat     121m
redhat-operators      Red Hat Operators              grpc   Red Hat     121m
[root@preserve-olm-env new-feature]# oc get pod -n openshift-marketplace
NAME                                    READY   STATUS    RESTARTS   AGE
certified-operators-58ffc9d4b8-qxrzx    1/1     Running   0          121m
community-operators-7bc8ccf96b-qmmzn    1/1     Running   0          121m
hui-catalog-mzgjq                       1/1     Running   0          68s
marketplace-operator-68469887df-4dn9m   1/1     Running   0          122m
qe-app-registry-6549bd6d8b-7wqz6        1/1     Running   0          105m
redhat-marketplace-856c8c9d4c-htnmg     1/1     Running   0          121m
redhat-operators-bb85c8b4c-dwk8n        1/1     Running   0          121m
[root@preserve-olm-env new-feature]# oc get packagemanifest |grep cock
cockroachdb                                  Community Operators            121m
cockroachdb-certified-rhmp                   Red Hat Marketplace            121m
cockroachdb-certified                        Certified Operators            121m
cockroachdb                                  cockroachdb Operator Catalog   94s


4. Create the OperatorGroup and Subscription in the yh-operators project.

[root@preserve-olm-env new-feature]# cat og.yaml 
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: test-operators-og
  namespace: yh-operators
spec:
  targetNamespaces:
  - yh-operators
[root@preserve-olm-env new-feature]# oc apply -f og-new.yaml

[root@preserve-olm-env new-feature]# cat sub.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cockroachdb
  namespace: yh-operators
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: cockroachdb
  source: hui-catalog
  sourceNamespace: openshift-marketplace
  startingCSV: cockroachdb.v2.0.9
[root@preserve-olm-env new-feature]# oc apply -f sub.yaml 

[root@preserve-olm-env new-feature]# oc get sub -n yh-operators
NAME          PACKAGE       SOURCE        CHANNEL
cockroachdb   cockroachdb   hui-catalog   alpha
[root@preserve-olm-env new-feature]# oc get ip -n yh-operators
NAME            CSV                  APPROVAL    APPROVED
install-hpcml   cockroachdb.v2.0.9   Automatic   true
[root@preserve-olm-env new-feature]# oc get csv
NAME                 DISPLAY       VERSION   REPLACES   PHASE
cockroachdb.v2.0.9   CockroachDB   2.0.9                Succeeded
[root@preserve-olm-env new-feature]# oc get pods
NAME                          READY   STATUS    RESTARTS   AGE
cockroachdb-56c98d555-5h9fj   1/1     Running   0          45s

5. Wait about 2 minutes; the operator updates automatically.

[root@preserve-olm-env new-feature]# oc get ip
NAME            CSV                   APPROVAL    APPROVED
install-8hrdk   cockroachdb.v2.1.1    Automatic   true
install-8vrtf   cockroachdb.v2.1.11   Automatic   true
install-rmrbh   cockroachdb.v2.0.9    Automatic   true
[root@preserve-olm-env new-feature]# oc get csv
NAME                  DISPLAY       VERSION   REPLACES             PHASE
cockroachdb.v2.1.11   CockroachDB   2.1.11    cockroachdb.v2.1.1   Succeeded
[root@preserve-olm-env new-feature]# oc get pod
NAME                         READY   STATUS    RESTARTS   AGE
cockroachdb-c4fdd648-6rbvx   1/1     Running   0          35s

The operator upgrades from cockroachdb.v2.0.9 to 2.1.1 to 2.1.11. Verifying the bug.

Comment 17 errata-xmlrpc 2020-07-13 17:39:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

