Bug 1899588 - Operator objects are re-created after all other associated resources have been deleted
Summary: Operator objects are re-created after all other associated resources have been deleted
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.7.0
Assignee: Joe Lanford
QA Contact: kuiwang
URL:
Whiteboard:
Duplicates: 1923854 1928079
Depends On:
Blocks: 1929335
 
Reported: 2020-11-19 15:54 UTC by Joe Lanford
Modified: 2022-09-28 14:36 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1929335
Environment:
Last Closed: 2021-02-24 15:34:22 UTC
Target Upstream Version:
Embargoed:




Links:
- GitHub: operator-framework/operator-lifecycle-manager pull 1938 (closed), "Bug 1899588: Only re-create operator resource if it has existing components" (last updated 2021-02-15 19:46:50 UTC)
- Red Hat Product Errata: RHSA-2020:5633 (last updated 2021-02-24 15:35:06 UTC)

Description Joe Lanford 2020-11-19 15:54:11 UTC
Description of problem:

Using upstream OLM v0.17.0 on a newly created vanilla Kubernetes 1.19.4 kind cluster, I installed the prometheus operator from the upstream community catalog.

After uninstalling the prometheus operator (deleting the subscription, CSV, install plans, and CRDs), I saw that the `Operator` object still existed.

Running `kubectl delete operator prometheus.default` deleted it, but then OLM immediately recreated it.
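
A quick way to see the behavior (a sketch; the object name matches the YAML that follows):

```
# Delete the cluster-scoped Operator object...
kubectl delete operator prometheus.default
# ...and it is back almost immediately, re-created by OLM.
kubectl get operator prometheus.default
```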

Before deleting it the first time, it looked like this:

```
apiVersion: operators.coreos.com/v1
kind: Operator
metadata:
  creationTimestamp: "2020-11-19T15:42:06Z"
  generation: 1
  managedFields:
  - apiVersion: operators.coreos.com/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec: {}
      f:status:
        .: {}
        f:components:
          .: {}
          f:labelSelector:
            .: {}
            f:matchExpressions: {}
    manager: olm
    operation: Update
    time: "2020-11-19T15:42:06Z"
  name: prometheus.default
  resourceVersion: "4812"
  selfLink: /apis/operators.coreos.com/v1/operators/prometheus.default
  uid: b3753ce5-4825-4587-b494-8105b093c901
spec: {}
status:
  components:
    labelSelector:
      matchExpressions:
      - key: operators.coreos.com/prometheus.default
        operator: Exists
```
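
The `status.components.labelSelector` above shows how OLM tracks an operator's components: each associated resource carries a label whose key is `operators.coreos.com/<name>.<namespace>`. A sketch for listing whatever OLM still considers a component (the resource types here are illustrative, not exhaustive):

```
# Any resource carrying the operators.coreos.com/prometheus.default label
# is treated as a component of this Operator object.
kubectl get crd,deployment,serviceaccount,clusterrole --all-namespaces \
  -l operators.coreos.com/prometheus.default
```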

After subsequent deletes and OLM re-creations, it looks like this (note that the re-created object has no `status.components` at all, since none of the associated resources still exist):

```
apiVersion: operators.coreos.com/v1
kind: Operator
metadata:
  creationTimestamp: "2020-11-19T15:42:06Z"
  generation: 1
  managedFields:
  - apiVersion: operators.coreos.com/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec: {}
      f:status:
        .: {}
        f:components:
          .: {}
          f:labelSelector:
            .: {}
            f:matchExpressions: {}
    manager: olm
    operation: Update
    time: "2020-11-19T15:42:06Z"
  name: prometheus.default
  resourceVersion: "4812"
  selfLink: /apis/operators.coreos.com/v1/operators/prometheus.default
  uid: b3753ce5-4825-4587-b494-8105b093c901
spec: {}
```

Version-Release number of selected component (if applicable):
Upstream v0.17.0

How reproducible:
Every time I've tried

Steps to Reproduce:
1. Start a Kubernetes v1.19.4 cluster 
2. Install upstream OLM v0.17.0
3. Create an AllNamespaces operator group
4. Create a subscription for the "prometheus" operator from the "operatorhubio-catalog" source
5. Wait for the operator to be installed successfully
6. Delete the subscription, CSV, and CRDs associated with the prometheus operator
7. Delete the Operator object associated with the prometheus operator (see the sketch below)
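
A minimal sketch of steps 6 and 7, assuming the defaults of an upstream OLM install (subscription and CSV in the `operators` namespace; the CSV and CRD names are illustrative):

```
# Step 6: delete the subscription, CSV, and CRDs (names are illustrative).
kubectl delete subscription -n operators prometheus
kubectl delete csv -n operators prometheusoperator.0.37.0
kubectl delete crd prometheuses.monitoring.coreos.com
# Step 7: delete the cluster-scoped Operator object itself.
kubectl delete operator prometheus.default
```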


Actual results:

Run `kubectl get operators` and see that OLM re-created the just-deleted prometheus Operator object.

Expected results:

Run `kubectl get operators` and see no Operator object for prometheus

Additional info:

Comment 1 Kevin Rizza 2021-01-27 15:55:24 UTC
Looks like this one won't make it in for the 4.7 release; the work is still under review. Moving it out and marking for the upcoming sprint.

Comment 2 Nick Hale 2021-02-03 11:43:19 UTC
*** Bug 1923854 has been marked as a duplicate of this bug. ***

Comment 3 Kevin Rizza 2021-02-12 17:16:17 UTC
*** Bug 1928079 has been marked as a duplicate of this bug. ***

Comment 4 Xavier Morano 2021-02-15 10:43:03 UTC
Hi!

Is there any workaround available for OCP 4.6.1X?

Thanks!

Comment 5 Joe Lanford 2021-02-15 19:46:45 UTC
To work around this issue, scale the OLM controller down so it cannot immediately re-create the `Operator` object, delete the object, then scale the controller back up:

```
kubectl scale deployment -n openshift-operator-lifecycle-manager olm-operator --replicas=0
kubectl delete operator <your-operator>.<your-operator-namespace>
kubectl scale deployment -n openshift-operator-lifecycle-manager olm-operator --replicas=1
```
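
A quick check that the object stayed gone after scaling olm-operator back up:

```
# The Operator resource is cluster-scoped; a NotFound error confirms deletion.
kubectl get operator <your-operator>.<your-operator-namespace>
```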

Comment 9 kuiwang 2021-02-19 02:27:32 UTC
Verified on 4.7. LGTM.

--
[root@preserve-olm-env bin]# oc get pod -n openshift-operator-lifecycle-manager
NAME                                READY   STATUS    RESTARTS   AGE
catalog-operator-7fd8476f47-c7tkv   1/1     Running   0          16m
olm-operator-6fc884bdc9-nbcdx       1/1     Running   0          14m
packageserver-6fc885578d-pzq2m      1/1     Running   0          14m
packageserver-6fc885578d-qmr99      1/1     Running   0          16m
[root@preserve-olm-env bin]# oc exec catalog-operator-7fd8476f47-c7tkv -n openshift-operator-lifecycle-manager -- olm --version
OLM version: 0.17.0
git commit: 4b67acc560a790caa37fdff2f2c1a1eb50a4949f
[root@preserve-olm-env bin]# 

[root@preserve-olm-env 1899588]# cat og-single.yaml 
kind: OperatorGroup
apiVersion: operators.coreos.com/v1
metadata:
  name: og-single1
  namespace: default
spec:
  targetNamespaces:
  - default
[root@preserve-olm-env 1899588]# oc apply -f og-single.yaml 
operatorgroup.operators.coreos.com/og-single1 created
[root@preserve-olm-env 1899588]# cat teiidcatsrc.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: teiid
  namespace: default
spec:
  displayName: "teiid Operators"
  image: quay.io/kuiwang/teiid-index:1898500
  publisher: QE
  sourceType: grpc
[root@preserve-olm-env 1899588]# oc apply -f teiidcatsrc.yaml 
catalogsource.operators.coreos.com/teiid created
[root@preserve-olm-env 1899588]# cat teiidsub.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: teiid
  namespace: default
spec:
  source: teiid
  sourceNamespace: default

  channel: alpha
  installPlanApproval: Automatic
  name: teiid
[root@preserve-olm-env 1899588]# oc apply -f teiidsub.yaml 
subscription.operators.coreos.com/teiid created
[root@preserve-olm-env 1899588]# 


[root@preserve-olm-env 1899588]# oc get sub
NAME    PACKAGE   SOURCE   CHANNEL
teiid   teiid     teiid    alpha
[root@preserve-olm-env 1899588]# oc get ip
NAME            CSV            APPROVAL    APPROVED
install-d9zb5   teiid.v0.3.0   Automatic   true
[root@preserve-olm-env 1899588]# oc get csv
NAME           DISPLAY   VERSION   REPLACES   PHASE
teiid.v0.3.0   Teiid     0.3.0                Installing
[root@preserve-olm-env 1899588]# oc get csv
NAME           DISPLAY   VERSION   REPLACES   PHASE
teiid.v0.3.0   Teiid     0.3.0                Succeeded
[root@preserve-olm-env 1899588]# oc get operators
NAME            AGE
teiid.default   89s
[root@preserve-olm-env 1899588]# 


[root@preserve-olm-env 1899588]# oc delete sub teiid
subscription.operators.coreos.com "teiid" deleted
[root@preserve-olm-env 1899588]# oc delete csv teiid.v0.3.0
clusterserviceversion.operators.coreos.com "teiid.v0.3.0" deleted
[root@preserve-olm-env 1899588]# 


[root@preserve-olm-env 1899588]# oc get operator teiid.default -o yaml
apiVersion: operators.coreos.com/v1
kind: Operator
metadata:
...
  name: teiid.default
  resourceVersion: "40657"
  selfLink: /apis/operators.coreos.com/v1/operators/teiid.default
  uid: 57a42948-8349-4817-af41-2506a18282f4
spec: {}
status:
  components:
    labelSelector:
      matchExpressions:
      - key: operators.coreos.com/teiid.default
        operator: Exists
    refs:
    - apiVersion: apiextensions.k8s.io/v1
      conditions:
      - lastTransitionTime: "2021-02-19T02:16:50Z"
        message: no conflicts found
        reason: NoConflicts
        status: "True"
        type: NamesAccepted
      - lastTransitionTime: "2021-02-19T02:16:50Z"
        message: the initial names have been accepted
        reason: InitialNamesAccepted
        status: "True"
        type: Established
      - lastTransitionTime: "2021-02-19T02:16:50Z"
        message: 'spec.preserveUnknownFields: Invalid value: true: must be false'
        reason: Violations
        status: "True"
        type: NonStructuralSchema
      kind: CustomResourceDefinition
      name: virtualdatabases.teiid.io
[root@preserve-olm-env 1899588]# 
[root@preserve-olm-env 1899588]# oc delete crd virtualdatabases.teiid.io
customresourcedefinition.apiextensions.k8s.io "virtualdatabases.teiid.io" deleted
[root@preserve-olm-env 1899588]# oc get operator teiid.default -o yaml
apiVersion: operators.coreos.com/v1
kind: Operator
metadata:
...
  name: teiid.default
  resourceVersion: "40979"
  selfLink: /apis/operators.coreos.com/v1/operators/teiid.default
  uid: 57a42948-8349-4817-af41-2506a18282f4
spec: {}
status:
  components:
    labelSelector:
      matchExpressions:
      - key: operators.coreos.com/teiid.default
        operator: Exists
[root@preserve-olm-env 1899588]# 
[root@preserve-olm-env 1899588]# oc delete operator teiid.default
operator.operators.coreos.com "teiid.default" deleted
[root@preserve-olm-env 1899588]# oc get operator teiid.default -o yaml
Error from server (NotFound): operators.operators.coreos.com "teiid.default" not found


--

Comment 11 errata-xmlrpc 2021-02-24 15:34:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

Comment 12 Marcus Notø 2022-09-28 08:38:10 UTC
I have experienced the same problem on a different platform. What was the workaround for fixing this problem on the OpenShift platform?

Comment 13 Alexander Greene 2022-09-28 14:36:04 UTC
In case anyone else references this bug, please review the documentation [0], which details:
- Why the prometheus operator continues to appear in the `oc get operators` output.
- How to fully remove the operator.


Ref
[0] https://github.com/operator-framework/olm-docs/pull/251/files

