Bug 1929335 - [release-4.6] Operator objects are re-created after all other associated resources have been deleted
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 4.6.z
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.6.z
Assignee: Joe Lanford
QA Contact: kuiwang
URL:
Whiteboard:
Depends On: 1899588
Blocks:
Reported: 2021-02-16 16:52 UTC by Joe Lanford
Modified: 2022-10-11 09:00 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1899588
Environment:
Last Closed: 2021-03-09 20:16:08 UTC
Target Upstream Version:




Links:
- GitHub: operator-framework/operator-lifecycle-manager pull 2008, "[release-4.6] Bug 1929335: Only re-create operator resource if it has existing components" (open, last updated 2021-02-16 18:29:46 UTC)
- Red Hat Product Errata: RHBA-2021:0674 (last updated 2021-03-09 20:16:19 UTC)

Description Joe Lanford 2021-02-16 16:52:45 UTC
+++ This bug was initially created as a clone of Bug #1899588 +++

Description of problem:

Using upstream OLM v0.17.0 on a newly created vanilla Kubernetes 1.19.4 kind cluster, I installed the prometheus operator from the upstream community catalog.

After uninstalling the prometheus operator (deleting its subscription, CSV, install plans, and CRDs), I saw that the `Operator` object still existed.

Running `kubectl delete operator prometheus.default` deleted it, but then OLM immediately recreated it.
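
Concretely, the uninstall was along these lines (a sketch; every name except the label key is a placeholder):

```
# A sketch of the uninstall; resource names are placeholders.
kubectl -n default delete subscription <prometheus-subscription>
kubectl -n default delete clusterserviceversion <prometheus-csv>
kubectl -n default delete installplan <prometheus-installplan>
# OLM labels an operator's components with a per-operator key (see the
# labelSelector in the Operator status below), so the CRDs can go by label:
kubectl delete crd -l operators.coreos.com/prometheus.default
```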

Before deleting it the first time, it looked like this:

```
apiVersion: operators.coreos.com/v1
kind: Operator
metadata:
  creationTimestamp: "2020-11-19T15:42:06Z"
  generation: 1
  managedFields:
  - apiVersion: operators.coreos.com/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec: {}
      f:status:
        .: {}
        f:components:
          .: {}
          f:labelSelector:
            .: {}
            f:matchExpressions: {}
    manager: olm
    operation: Update
    time: "2020-11-19T15:42:06Z"
  name: prometheus.default
  resourceVersion: "4812"
  selfLink: /apis/operators.coreos.com/v1/operators/prometheus.default
  uid: b3753ce5-4825-4587-b494-8105b093c901
spec: {}
status:
  components:
    labelSelector:
      matchExpressions:
      - key: operators.coreos.com/prometheus.default
        operator: Exists
```
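
That `status.components.labelSelector` is how the Operator object tracks its components: each associated resource carries the `operators.coreos.com/prometheus.default` label key. To list what OLM still considers part of the operator (a sketch; extend the resource types as needed):

```
kubectl get subscriptions,clusterserviceversions,installplans,crd \
  --all-namespaces -l operators.coreos.com/prometheus.default
```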

After subsequent deletes and OLM re-creations, it looked like this:

```
apiVersion: operators.coreos.com/v1
kind: Operator
metadata:
  creationTimestamp: "2020-11-19T15:42:06Z"
  generation: 1
  managedFields:
  - apiVersion: operators.coreos.com/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec: {}
      f:status:
        .: {}
        f:components:
          .: {}
          f:labelSelector:
            .: {}
            f:matchExpressions: {}
    manager: olm
    operation: Update
    time: "2020-11-19T15:42:06Z"
  name: prometheus.default
  resourceVersion: "4812"
  selfLink: /apis/operators.coreos.com/v1/operators/prometheus.default
  uid: b3753ce5-4825-4587-b494-8105b093c901
spec: {}
```
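
The re-created object has an empty status: nothing matches the label selector anymore, so no component refs are listed. A quick way to check whether any refs remain (a sketch):

```
kubectl get operator prometheus.default -o jsonpath='{.status.components.refs}'
```

An empty result is exactly the case the linked fix (PR 2008) targets: OLM should only re-create an Operator resource that still has existing components.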

Version-Release number of selected component (if applicable):
Upstream v0.17.0

How reproducible:
Every time I've tried

Steps to Reproduce:
1. Start a Kubernetes v1.19.4 cluster 
2. Install upstream OLM v0.17.0
3. Create an AllNamespaces operator group
4. Create a subscription for the "prometheus" operator from the "operatorhubio-catalog" source
5. Wait for the operator to be installed successfully
6. Delete the subscription, CSV, and CRDs associated with the prometheus operator
7. Delete the Operator object associated with the prometheus operator (see the command sketch below)
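
Roughly, steps 3 through 7 as commands (a sketch; the manifests assume the default namespace, and the operator group name and channel are illustrative):

```
# 3. AllNamespaces operator group: omitting spec.targetNamespaces selects all namespaces
cat <<EOF | kubectl apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: global-og            # illustrative name
  namespace: default
EOF

# 4. Subscription to the prometheus package in operatorhubio-catalog
cat <<EOF | kubectl apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: prometheus
  namespace: default
spec:
  name: prometheus
  source: operatorhubio-catalog
  sourceNamespace: olm       # upstream OLM installs this catalog in the olm namespace
  channel: beta              # illustrative; use the package's actual channel
EOF

# 6. Delete the subscription, CSV, and CRDs (as in the uninstall sketch above)
# 7. Then delete the Operator object; the bug is that OLM re-creates it
kubectl delete operator prometheus.default
```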


Actual results:

Run `kubectl get operators` and see that OLM re-created the just-deleted prometheus Operator object.

Expected results:

Run `kubectl get operators` and see no Operator object for prometheus

Additional info:

--- Additional comment from Kevin Rizza on 2021-01-27 15:55:24 UTC ---

Looks like this one won't make it in for the 4.7 release; the work is still under review for now. Moving it out and marking for the upcoming sprint.

--- Additional comment from Xavier Morano on 2021-02-15 10:43:03 UTC ---

Hi!

Is there any workaround available for OCP 4.6.1X?

Thanks

--- Additional comment from Joe Lanford on 2021-02-15 19:46:45 UTC ---

To work around this issue, run the following:

```
# Temporarily stop olm-operator so it cannot immediately re-create the object
kubectl scale deployment -n openshift-operator-lifecycle-manager olm-operator --replicas=0
# Operator objects are cluster-scoped, named <operator-name>.<operator-namespace>
kubectl delete operator <your-operator-name>.<your-operator-namespace>
# Restore olm-operator
kubectl scale deployment -n openshift-operator-lifecycle-manager olm-operator --replicas=1
```
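
This works because olm-operator runs the controller that re-creates Operator objects; with the deployment scaled to zero the delete is not undone, and scaling back up is safe once the object is gone.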

Comment 3 kuiwang 2021-03-01 02:41:08 UTC
Verified it on 4.6. LGTM.

--
[root@preserve-olm-env 1929335]# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2021-02-26-224651   True        False         2m36s   Cluster version is 4.6.0-0.nightly-2021-02-26-224651
[root@preserve-olm-env 1929335]# 
[root@preserve-olm-env 1929335]# oc get pod -n openshift-operator-lifecycle-manager
NAME                                READY   STATUS    RESTARTS   AGE
catalog-operator-64764f7685-jxk9j   1/1     Running   0          33m
olm-operator-6784859658-8gpld       1/1     Running   0          33m
packageserver-6fdb9b5c67-9kts2      1/1     Running   0          25m
packageserver-6fdb9b5c67-x88zj      1/1     Running   0          25m
[root@preserve-olm-env 1929335]# oc exec catalog-operator-64764f7685-jxk9j -n openshift-operator-lifecycle-manager -- olm --version
OLM version: 0.16.1
git commit: 724b6a442b4979a2f2749d42f49b4dc81ce9911f
[root@preserve-olm-env 1929335]# 
[root@preserve-olm-env 1929335]# cat og-single.yaml 
kind: OperatorGroup
apiVersion: operators.coreos.com/v1
metadata:
  name: og-single1
  namespace: default
spec:
  targetNamespaces:
  - default
[root@preserve-olm-env 1929335]# oc apply -f og-single.yaml 
operatorgroup.operators.coreos.com/og-single1 created
[root@preserve-olm-env 1929335]# cat teiidcatsrc.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: teiid
  namespace: default
spec:
  displayName: "teiid Operators"
  image: quay.io/kuiwang/teiid-index:1898500
  publisher: QE
  sourceType: grpc
[root@preserve-olm-env 1929335]# oc apply -f teiidcatsrc.yaml 
catalogsource.operators.coreos.com/teiid created
[root@preserve-olm-env 1929335]# cat teiidsub.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: teiid
  namespace: default
spec:
  source: teiid
  sourceNamespace: default

  channel: alpha
  installPlanApproval: Automatic
  name: teiid
[root@preserve-olm-env 1929335]# oc apply -f teiidsub.yaml 
subscription.operators.coreos.com/teiid created
[root@preserve-olm-env 1929335]# 

[root@preserve-olm-env 1929335]# oc get sub
NAME    PACKAGE   SOURCE   CHANNEL
teiid   teiid     teiid    alpha
[root@preserve-olm-env 1929335]# oc get ip
NAME            CSV            APPROVAL    APPROVED
install-7ll7q   teiid.v0.3.0   Automatic   true
[root@preserve-olm-env 1929335]# oc get csv
NAME           DISPLAY   VERSION   REPLACES   PHASE
teiid.v0.3.0   Teiid     0.3.0                Succeeded
[root@preserve-olm-env 1929335]# oc get operators
NAME            AGE
teiid.default   57s
[root@preserve-olm-env 1929335]# oc delete sub teiid
subscription.operators.coreos.com "teiid" deleted
[root@preserve-olm-env 1929335]# oc delete csv teiid.v0.3.0
clusterserviceversion.operators.coreos.com "teiid.v0.3.0" deleted
[root@preserve-olm-env 1929335]# oc get operator teiid.default -o yaml
apiVersion: operators.coreos.com/v1
kind: Operator
metadata:
...
  name: teiid.default
  resourceVersion: "28673"
  selfLink: /apis/operators.coreos.com/v1/operators/teiid.default
  uid: a948fa2c-2841-42d0-8ead-56bf9860b332
spec: {}
status:
  components:
    labelSelector:
      matchExpressions:
      - key: operators.coreos.com/teiid.default
        operator: Exists
    refs:
    - apiVersion: apiextensions.k8s.io/v1
      conditions:
      - lastTransitionTime: "2021-03-01T02:38:10Z"
        message: no conflicts found
        reason: NoConflicts
        status: "True"
        type: NamesAccepted
      - lastTransitionTime: "2021-03-01T02:38:10Z"
        message: the initial names have been accepted
        reason: InitialNamesAccepted
        status: "True"
        type: Established
      kind: CustomResourceDefinition
      name: virtualdatabases.teiid.io
[root@preserve-olm-env 1929335]# 
[root@preserve-olm-env 1929335]# oc delete crd virtualdatabases.teiid.io
customresourcedefinition.apiextensions.k8s.io "virtualdatabases.teiid.io" deleted
[root@preserve-olm-env 1929335]# oc get operator teiid.default -o yaml
apiVersion: operators.coreos.com/v1
kind: Operator
metadata:
...
  name: teiid.default
  resourceVersion: "28927"
  selfLink: /apis/operators.coreos.com/v1/operators/teiid.default
  uid: a948fa2c-2841-42d0-8ead-56bf9860b332
spec: {}
status:
  components:
    labelSelector:
      matchExpressions:
      - key: operators.coreos.com/teiid.default
        operator: Exists
[root@preserve-olm-env 1929335]# oc delete operator teiid.default
operator.operators.coreos.com "teiid.default" deleted
[root@preserve-olm-env 1929335]# oc get operator teiid.default -o yaml
Error from server (NotFound): operators.operators.coreos.com "teiid.default" not found
[root@preserve-olm-env 1929335]# 




--

Comment 6 errata-xmlrpc 2021-03-09 20:16:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6.20 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0674

