Bug 1767004 - Invalid CSV data may cause OLM to not update status on csv
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.3.0
Assignee: Jeff Peeler
QA Contact: Bruno Andrade
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-10-30 13:18 UTC by Joel Smith
Modified: 2020-01-23 11:10 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-23 11:09:47 UTC
Target Upstream Version:
Embargoed:


Attachments
The CRD for VPA (1.04 KB, text/plain) - 2019-10-30 13:18 UTC, Joel Smith
Operator Group for VPA (209 bytes, text/plain) - 2019-10-30 13:19 UTC, Joel Smith
CSV stub for VPA (2.85 KB, text/plain) - 2019-10-30 13:20 UTC, Joel Smith


Links
Github: operator-framework/operator-lifecycle-manager pull 1114 (closed) - Bug 1767004: defer provided api update in operator groups - last updated 2020-08-25 16:05:27 UTC
Red Hat Product Errata: RHBA-2020:0062 - last updated 2020-01-23 11:10:15 UTC

Description Joel Smith 2019-10-30 13:18:48 UTC
Created attachment 1630605 [details]
The CRD for VPA

Description of problem:

When I create a CSV for my component, OLM never updates the status of the object. The OLM operator continually logs these two messages:

E1030 12:55:07.985809       1 queueinformer_operator.go:282] sync {"update" "openshift-vertical-pod-autoscaler/vertical-pod-autoscaler.v0.0.1"} failed: could not update operatorgroups olm.providedAPIs annotation: Operation cannot be fulfilled on operatorgroups.operators.coreos.com "vertical-pod-autoscaler": the object has been modified; please apply your changes to the latest version and try again
time="2019-10-30T12:55:08Z" level=info msg="csv in operatorgroup" csv=vertical-pod-autoscaler.v0.0.1 id=5JgyC namespace=openshift-vertical-pod-autoscaler opgroup=vertical-pod-autoscaler phase=


Version-Release number of selected component (if applicable):
registry.svc.ci.openshift.org/origin/4.3-2019-10-29-180250@sha256:e41dbf5ba2d0b45f0c14341b8d9efca9df3085c86bae11f4ca6ad327cae62606

How reproducible:
100% on my cluster

Steps to Reproduce:
1. oc create ns vertical-pod-autoscaler
2. oc create -f crd.yaml
3. oc create -f og.yaml
4. oc create -f csv.yaml
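
The og.yaml from step 3 is attached to this bug; a minimal OperatorGroup along those lines might look like the sketch below (illustrative only, the actual attachment may differ):

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: vertical-pod-autoscaler
  namespace: openshift-vertical-pod-autoscaler
spec:
  targetNamespaces:
  - openshift-vertical-pod-autoscaler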

Actual results:

The OLM Operator never updates the status on the CSV object. Even if there is a problem with the CSV (there probably is), I would expect the status field to be updated with information about what the problem is.

Expected results:

OLM Operator updates the status field of the CSV object.

Additional info:
Note that the attached CSV has been stripped down and uses busybox to rule out problems with the managed image.
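
For context, a stripped-down CSV of that shape (busybox standing in for the managed image) would look roughly like the skeleton below. This is an illustrative sketch, not the attached csv.yaml; the deployment/container names and the owned-CRD displayName are placeholders:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: vertical-pod-autoscaler.v0.0.1
  namespace: openshift-vertical-pod-autoscaler
spec:
  displayName: Vertical Pod Autoscaler
  version: 0.0.1
  installModes:
  - type: OwnNamespace
    supported: true
  customresourcedefinitions:
    owned:
    - name: verticalpodautoscalercontrollers.autoscaling.openshift.io
      kind: VerticalPodAutoscalerController
      version: v1beta1
      displayName: VPA Controller
  install:
    strategy: deployment
    spec:
      deployments:
      - name: vpa-operator
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: vpa-operator
          template:
            metadata:
              labels:
                app: vpa-operator
            spec:
              containers:
              - name: vpa-operator
                image: busybox
                command: ["sleep", "3600"]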

Comment 1 Joel Smith 2019-10-30 13:19:36 UTC
Created attachment 1630606 [details]
Operator Group for VPA

Comment 2 Joel Smith 2019-10-30 13:20:25 UTC
Created attachment 1630607 [details]
CSV stub for VPA

Comment 3 Joel Smith 2019-10-30 13:43:27 UTC
Sorry, reproduction step 1 should have been:
oc create ns openshift-vertical-pod-autoscaler

Comment 6 Bruno Andrade 2019-11-21 05:14:51 UTC
Marking as VERIFIED. Reproduced the same scenario and verified that the CSV now moves to the Pending phase and that the missing requirements are detailed in the CSV status object.

oc create ns openshift-vertical-pod-autoscaler
namespace/openshift-vertical-pod-autoscaler created

oc create -f og.yaml 
operatorgroup.operators.coreos.com/vertical-pod-autoscaler created

oc create -f csv.yaml 
clusterserviceversion.operators.coreos.com/vertical-pod-autoscaler.v0.0.1 created

oc get csv -n openshift-vertical-pod-autoscaler
NAME                             DISPLAY                   VERSION   REPLACES   PHASE
vertical-pod-autoscaler.v0.0.1   Vertical Pod Autoscaler   0.0.2                Pending


oc get csv vertical-pod-autoscaler.v0.0.1 -o yaml -n openshift-vertical-pod-autoscaler | grep -A 20 "status:"
status:
  certsLastUpdated: null
  certsRotateAt: null
  conditions:
  - lastTransitionTime: "2019-11-21T04:56:18Z"
    lastUpdateTime: "2019-11-21T04:56:18Z"
    message: requirements not yet checked
    phase: Pending
    reason: RequirementsUnknown
  - lastTransitionTime: "2019-11-21T04:56:18Z"
    lastUpdateTime: "2019-11-21T04:56:18Z"
    message: one or more requirements couldn't be found
    phase: Pending
    reason: RequirementsNotMet
  lastTransitionTime: "2019-11-21T04:56:18Z"
  lastUpdateTime: "2019-11-21T04:56:18Z"
  message: one or more requirements couldn't be found
  phase: Pending
  reason: RequirementsNotMet
  requirementStatus:
  - group: operators.coreos.com
--
    status: Present
    version: v1alpha1
  - group: apiextensions.k8s.io
    kind: CustomResourceDefinition
    message: CRD version not served
    name: verticalpodautoscalercontrollers.autoscaling.openshift.io
    status: NotPresent
    version: v1beta1
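
The "CRD version not served" requirement above means the v1beta1 version the CSV expects is not being served by a matching CustomResourceDefinition on the cluster. One way that happens is a CRD whose matching version entry exists but has served: false; an illustrative apiextensions.k8s.io/v1 fragment (not the attached crd.yaml):

spec:
  versions:
  - name: v1beta1
    served: false    # a version like this is reported as "CRD version not served"
    storage: true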

----

Cluster Version: 4.3.0-0.nightly-2019-11-19-122017
OLM version: 0.13.0
git commit: 70939eb4edb14a7d969caf1f8b7620f225cd3c17

Comment 8 errata-xmlrpc 2020-01-23 11:09:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0062

