Bug 1826024 - OLM not properly installing CRDs defined using apiextensions/v1beta1: preserveUnknownFields is lost
Summary: OLM not properly installing CRDs defined using apiextensions/v1beta1: preserveUnknownFields is lost
Keywords:
Status: CLOSED DUPLICATE of bug 1825330
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.5.0
Assignee: Evan Cordell
QA Contact: Jian Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-04-20 17:19 UTC by Rob Cernich
Modified: 2020-04-23 15:54 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-23 15:54:01 UTC
Target Upstream Version:
Embargoed:


Attachments
portworx-operator.v1.2.0 (39.46 KB, text/plain), attached 2020-04-23 08:14 UTC by shahan


Links
Red Hat Issue Tracker MAISTRA-1359 (Major, Done): Creation of Service Mesh Control Plane is not working on OpenShift 4.5 nightly (last updated 2020-10-07 15:11:20 UTC)

Description Rob Cernich 2020-04-20 17:19:38 UTC
Description of problem:

API server no longer defaults preserveUnknownFields to true for apiextensions/v1beta1 CRDs

Previous versions of Kubernetes defaulted this value to true for the older schema version. As a result, any application using older CRD definitions will fail to work, because none of the resource's structure is persisted by the API server.
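
For reference, a minimal schema-less v1beta1 CRD that relies on this default looks roughly like the following (a sketch; the crontabs group/kind is illustrative, borrowed from the upstream docs, not from the failing operator):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
  # preserveUnknownFields is omitted: with apiextensions/v1beta1 the API
  # server historically defaulted it to true, so fields not covered by a
  # schema were persisted as-is instead of being pruned.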

Version-Release number of selected component (if applicable):
4.5.0-0.nightly-2020-04-18-184707

How reproducible:


Steps to Reproduce:
1. Create a CRD using apiextensions/v1beta1.
2. Create a resource based on the CRD (e.g. oc create ...).
3. Get the resource back from the API server (e.g. oc get <resource> -o yaml).
4. Notice there are no contents (e.g. the spec field is non-existent). A concrete sketch follows.
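
As a concrete sketch (assuming the minimal schema-less CRD from the description above; all names are illustrative):

$ oc create -f crontabs_crd_v1beta1.yaml
$ cat <<'EOF' | oc create -f -
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: my-new-cron-object
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image
EOF
## on an affected cluster the spec block comes back empty:
$ oc get crontab my-new-cron-object -o yaml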

Actual results:

API server prunes data from resources defined by apiextensions/v1beta1 CRDs

Expected results:

API server should properly round-trip resources defined by apiextensions/v1beta1 CRDs


Additional info:

Comment 3 Venkata Siva Teja Areti 2020-04-20 18:49:40 UTC
I tried using the crontabs example from the upstream documentation and was not able to reproduce this behavior. Could you please let me know what I am missing?

https://v1-17.docs.kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/

$ oc version
Client Version: 4.4.0-0.nightly-2020-03-18-102708
Server Version: 4.5.0-0.nightly-2020-04-18-093630
Kubernetes Version: v1.18.0-rc.1

$ jq '. | .apiVersion' < crontabs_crd_v1beta1.json
"apiextensions.k8s.io/v1beta1"


## the example yaml sets this field; I removed it to see what default value is applied
$ jq '. | .spec.preserveUnknownFields' < crontabs_crd_v1beta1.json
null

$ oc create -f crontabs_crd_v1beta1.json 
customresourcedefinition.apiextensions.k8s.io/crontabs.stable.example.com created

## Looks like the default value is set to true
$ oc get customresourcedefinition.apiextensions.k8s.io/crontabs.stable.example.com -o jsonpath="{.spec.preserveUnknownFields}"
true

$ oc create -f crontabs_cr.json 
crontab.stable.example.com/my-new-cron-object created

## a random field that is not defined in the spec is preserved
$ oc get crontab.stable.example.com/my-new-cron-object -o jsonpath="{.spec.randomField}"
42

Comment 5 Rob Cernich 2020-04-20 19:01:30 UTC
Sorry, I left out an important detail: the CRD must not contain any OpenAPI specification. If it has an OpenAPI specification, there will be "known" fields. Run your same test, but delete the schema from the CRD before you install it.
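
For example (a hedged sketch; the del() paths cover the single-version and per-version schema locations in a v1beta1 CRD):

## strip the OpenAPI v3 schema so that every field is "unknown"
$ jq 'del(.spec.validation, .spec.versions[]?.schema)' crontabs_crd_v1beta1.json > crd_noschema.json
$ oc create -f crd_noschema.json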

Comment 6 Stefan Schimanski 2020-04-20 19:10:03 UTC
Can you please attach the concrete CRD? Hunting the exact reproducer is not effective.

Comment 7 Stefan Schimanski 2020-04-20 19:23:55 UTC
Reducing severity. 4.5 is not released and not even close to release. There cannot be urgent issues.

Comment 9 Venkata Siva Teja Areti 2020-04-20 20:39:22 UTC
Still no luck in reproducing this, and I am not sure where I am going wrong.

$ oc version
Client Version: 4.5.0-0.nightly-2020-04-18-093630
Server Version: 4.5.0-0.nightly-2020-04-18-093630
Kubernetes Version: v1.18.0-rc.1

$ oc apply -f https://raw.githubusercontent.com/Maistra/istio-operator/maistra-1.1/manifests-maistra/1.1.0/servicemeshcontrolplanes.crd.yaml 
customresourcedefinition.apiextensions.k8s.io/servicemeshcontrolplanes.maistra.io created

$ oc get customresourcedefinition.apiextensions.k8s.io/servicemeshcontrolplanes.maistra.io -o jsonpath="{.spec.preserveUnknownFields}"      
true

$ oc apply -f ~/Downloads/servicemeshcontrolplane-basic-install.yml                                                                         
servicemeshcontrolplane.maistra.io/basic-install created

$ oc get servicemeshcontrolplane.maistra.io/basic-install -o jsonpath="{.spec}"                                                       
map[istio:map[gateways:map[istio-egressgateway:map[autoscaleEnabled:false] istio-ingressgateway:map[autoscaleEnabled:false ior_enabled:false]] grafana:map[enabled:true] kiali:map[enabled:true] mixer:map[policy:map[autoscaleEnabled:false] telemetry:map[autoscaleEnabled:false]] pilot:map[autoscaleEnabled:false traceSampling:100] tracing:map[enabled:true jaeger:map[template:all-in-one]]] version:v1.1]

All the spec fields are preserved and the spec is not empty. Could you please check these steps and let me know if I am missing something?

Comment 10 Rob Cernich 2020-04-20 20:59:10 UTC
Interesting. One last detail: this is for an operator, which was installed through OLM (I assume). I wonder if OLM is doing something to the CRD when it installs it. It sounds like this should be moved to the OLM team.
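
One way to check that theory (a hedged diagnostic, reusing the CRD name from comment 9):

## inspect the CRD as OLM actually installed it
$ oc get crd servicemeshcontrolplanes.maistra.io -o jsonpath="{.spec.preserveUnknownFields}"
## then compare against the CRD manifest shipped in the operator bundle; if OLM
## converted it to apiextensions/v1 on install, the flag would be dropped or false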

Comment 11 Venkata Siva Teja Areti 2020-04-20 21:01:53 UTC
Thanks for confirming. Moving this to the OLM team.

Comment 12 shahan 2020-04-23 08:13:19 UTC
Installing the Portworx Enterprise operator failed; it appears to have the same root cause.
status:
  conditions:
    - lastTransitionTime: '2020-04-23T07:14:30Z'
      lastUpdateTime: '2020-04-23T07:14:30Z'
      message: requirements not yet checked
      phase: Pending
      reason: RequirementsUnknown
    - lastTransitionTime: '2020-04-23T07:14:30Z'
      lastUpdateTime: '2020-04-23T07:14:30Z'
      message: one or more requirements couldn't be found
      phase: Pending
      reason: RequirementsNotMet
  lastTransitionTime: '2020-04-23T07:14:30Z'
  lastUpdateTime: '2020-04-23T07:14:30Z'
  message: one or more requirements couldn't be found
  phase: Pending
  reason: RequirementsNotMet
  requirementStatus:
    - group: operators.coreos.com
      kind: ClusterServiceVersion
      message: CSV minKubeVersion (1.12.0) less than server version (v1.18.0-rc.1)
      name: portworx-operator.v1.2.0
      status: Present
      version: v1alpha1
    - group: apiextensions.k8s.io
      kind: CustomResourceDefinition
      message: CRD is not present
      name: storageclusters.core.libopenstorage.org
      status: NotPresent
      version: v1
    - group: apiextensions.k8s.io
      kind: CustomResourceDefinition
      message: CRD is not present
      name: storagenodes.core.libopenstorage.org
      status: NotPresent
      version: v1
    - group: ''
      kind: ServiceAccount
      message: Service account does not exist
      name: portworx-operator
      status: NotPresent
      version: v1
    - group: ''
      kind: ServiceAccount
      message: Service account does not exist
      name: portworx
      status: NotPresent
      version: v1
    - group: ''
      kind: ServiceAccount
      message: Service account does not exist
      name: portworx-pvc-controller
      status: NotPresent
      version: v1
    - group: ''
      kind: ServiceAccount
      message: Service account does not exist
      name: px-lighthouse
      status: NotPresent
      version: v1
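
The failing requirements can be cross-checked directly (a hedged sketch, using the names from the requirementStatus above):

## the CRDs named in requirementStatus should exist but do not:
$ oc get crd storageclusters.core.libopenstorage.org storagenodes.core.libopenstorage.org
## and the CSV's requirement status can be re-read at any time:
$ oc get csv portworx-operator.v1.2.0 -o jsonpath="{.status.requirementStatus}"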


The attachment is the whole CSV file.

Comment 13 shahan 2020-04-23 08:14:12 UTC
Created attachment 1681028 [details]
portworx-operator.v1.2.0

Comment 14 Anik 2020-04-23 15:54:01 UTC
Duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1825330
PR for the fix: operator-framework/operator-lifecycle-manager/pull/1470

*** This bug has been marked as a duplicate of bug 1825330 ***

