Description of problem:

The API server no longer defaults preserveUnknownFields to true for apiextensions/v1beta1 CRDs. Previous versions of Kubernetes defaulted this value to true for the older schema. As a result, any application using older CRD definitions will fail to work, because none of the structure is persisted by the API server.

Version-Release number of selected component (if applicable):
4.5.0-0.nightly-2020-04-18-184707

How reproducible:

Steps to Reproduce:
1. Create a CRD using apiextensions/v1beta1
2. Create a resource based on the CRD (e.g. oc create ...)
3. Get the resource from the API server (e.g. oc get <resource> -o yaml)
4. Notice there are no contents (e.g. the spec field is non-existent)

Actual results:
The API server prunes data from resources defined by apiextensions/v1beta1 CRDs.

Expected results:
The API server should properly round-trip resources defined by apiextensions/v1beta1 CRDs.

Additional info:
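For reference, whether the defaulting happened can be checked directly on the installed CRD. A minimal check (the CRD name below is a placeholder, not the actual CRD from the affected application):

## on a v1beta1 CRD without an explicit setting, the legacy default should come back as "true"
$ oc get crd <crd-name> -o jsonpath='{.spec.preserveUnknownFields}'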
I tried using the crontabs example from the upstream documentation and was not able to reproduce this behavior. Could you please let me know what I am missing?

https://v1-17.docs.kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/

$ oc version
Client Version: 4.4.0-0.nightly-2020-03-18-102708
Server Version: 4.5.0-0.nightly-2020-04-18-093630
Kubernetes Version: v1.18.0-rc.1

$ jq '. | .apiVersion' < crontabs_crd_v1beta1.json
"apiextensions.k8s.io/v1beta1"

## example yaml has this field. I removed it to see what default value is being set.
$ jq '. | .spec.preserveUnknownFields' < crontabs_crd_v1beta1.json
null

$ oc create -f crontabs_crd_v1beta1.json
customresourcedefinition.apiextensions.k8s.io/crontabs.stable.example.com created

## Looks like the default value is set to true
$ oc get customresourcedefinition.apiextensions.k8s.io/crontabs.stable.example.com -o jsonpath="{.spec.preserveUnknownFields}"
true

$ oc create -f crontabs_cr.json
crontab.stable.example.com/my-new-cron-object created

## random field which is not defined in the spec is preserved
$ oc get crontab.stable.example.com/my-new-cron-object -o jsonpath="{.spec.randomField}"
42
Sorry, I left out an important detail: the CRD must not contain any OpenAPI specification. If it has an OpenAPI specification, there will be "known" fields. Run the same test, but delete the schema from the CRD before you install it.
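To make that concrete, a minimal sketch of the suggested reproducer (all of the names below — widgets.example.com, Widget, my-widget — are made up for illustration; the point is only that the v1beta1 CRD carries no openAPIV3Schema):

$ cat <<EOF | oc create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    listKind: WidgetList
    plural: widgets
    singular: widget
  scope: Namespaced
  version: v1
  # intentionally no validation.openAPIV3Schema section
EOF

$ cat <<EOF | oc create -f -
apiVersion: example.com/v1
kind: Widget
metadata:
  name: my-widget
spec:
  size: 3
  color: blue
EOF

## if preserveUnknownFields is defaulted to false, the spec above comes back empty;
## with the expected v1beta1 default of true it is preserved
$ oc get widget my-widget -o yaml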
Can you please attach the concrete CRD? Hunting for the exact reproducer is not effective.
Reducing severity. 4.5 is not released, and is not even close to release, so there cannot be urgent issues.
You can use this: oc create -f https://raw.githubusercontent.com/Maistra/istio-operator/maistra-1.1/manifests-maistra/1.1.0/servicemeshcontrolplanes.crd.yaml
Still no luck reproducing this, and I am not sure where I am going wrong.

$ oc version
Client Version: 4.5.0-0.nightly-2020-04-18-093630
Server Version: 4.5.0-0.nightly-2020-04-18-093630
Kubernetes Version: v1.18.0-rc.1

$ oc apply -f https://raw.githubusercontent.com/Maistra/istio-operator/maistra-1.1/manifests-maistra/1.1.0/servicemeshcontrolplanes.crd.yaml
customresourcedefinition.apiextensions.k8s.io/servicemeshcontrolplanes.maistra.io created

$ oc get customresourcedefinition.apiextensions.k8s.io/servicemeshcontrolplanes.maistra.io -o jsonpath="{.spec.preserveUnknownFields}"
true

$ oc apply -f ~/Downloads/servicemeshcontrolplane-basic-install.yml
servicemeshcontrolplane.maistra.io/basic-install created

$ oc get servicemeshcontrolplane.maistra.io/basic-install -o jsonpath="{.spec}"
map[istio:map[gateways:map[istio-egressgateway:map[autoscaleEnabled:false] istio-ingressgateway:map[autoscaleEnabled:false ior_enabled:false]] grafana:map[enabled:true] kiali:map[enabled:true] mixer:map[policy:map[autoscaleEnabled:false] telemetry:map[autoscaleEnabled:false]] pilot:map[autoscaleEnabled:false traceSampling:100] tracing:map[enabled:true jaeger:map[template:all-in-one]]] version:v1.1]

All the spec fields are preserved and the spec is not empty. Could you please check these steps and let me know if I am missing something?
Interesting. One last detail: this is for an operator, which was installed through OLM (I assume). I wonder if OLM is doing something to the CRD when it installs it. It sounds like this should be moved to the OLM team.
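One way to check whether OLM is modifying the CRD on install might be to compare what the API server ends up with against the CRD shipped in the bundle (a sketch, using the Maistra CRD from the earlier reproducer; expect some noise from metadata/status in the diff — the interesting part is whether a schema or preserveUnknownFields: false was injected):

## what the API server reports after the OLM-driven install
$ oc get crd servicemeshcontrolplanes.maistra.io -o jsonpath='{.spec.preserveUnknownFields}'

## dump the installed copy and compare it against the bundled CRD
$ oc get crd servicemeshcontrolplanes.maistra.io -o yaml > installed-crd.yaml
$ curl -s https://raw.githubusercontent.com/Maistra/istio-operator/maistra-1.1/manifests-maistra/1.1.0/servicemeshcontrolplanes.crd.yaml > bundle-crd.yaml
$ diff bundle-crd.yaml installed-crd.yaml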
thanks for confirming. moving this to olm team.
Installing the Portworx Enterprise operator also failed; it appears to have the same root cause.

status:
  conditions:
  - lastTransitionTime: '2020-04-23T07:14:30Z'
    lastUpdateTime: '2020-04-23T07:14:30Z'
    message: requirements not yet checked
    phase: Pending
    reason: RequirementsUnknown
  - lastTransitionTime: '2020-04-23T07:14:30Z'
    lastUpdateTime: '2020-04-23T07:14:30Z'
    message: one or more requirements couldn't be found
    phase: Pending
    reason: RequirementsNotMet
  lastTransitionTime: '2020-04-23T07:14:30Z'
  lastUpdateTime: '2020-04-23T07:14:30Z'
  message: one or more requirements couldn't be found
  phase: Pending
  reason: RequirementsNotMet
  requirementStatus:
  - group: operators.coreos.com
    kind: ClusterServiceVersion
    message: CSV minKubeVersion (1.12.0) less than server version (v1.18.0-rc.1)
    name: portworx-operator.v1.2.0
    status: Present
    version: v1alpha1
  - group: apiextensions.k8s.io
    kind: CustomResourceDefinition
    message: CRD is not present
    name: storageclusters.core.libopenstorage.org
    status: NotPresent
    version: v1
  - group: apiextensions.k8s.io
    kind: CustomResourceDefinition
    message: CRD is not present
    name: storagenodes.core.libopenstorage.org
    status: NotPresent
    version: v1
  - group: ''
    kind: ServiceAccount
    message: Service account does not exist
    name: portworx-operator
    status: NotPresent
    version: v1
  - group: ''
    kind: ServiceAccount
    message: Service account does not exist
    name: portworx
    status: NotPresent
    version: v1
  - group: ''
    kind: ServiceAccount
    message: Service account does not exist
    name: portworx-pvc-controller
    status: NotPresent
    version: v1
  - group: ''
    kind: ServiceAccount
    message: Service account does not exist
    name: px-lighthouse
    status: NotPresent
    version: v1

The attachment is the whole CSV file.
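For reference, the requirement status above can be pulled straight from the CSV (a sketch; substitute the namespace the operator was installed into):

$ oc get csv portworx-operator.v1.2.0 -n <namespace> -o jsonpath='{.status.requirementStatus}'

## or only the requirements that are not satisfied
$ oc get csv portworx-operator.v1.2.0 -n <namespace> -o json | jq '.status.requirementStatus[] | select(.status != "Present")'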
Created attachment 1681028 [details] portworx-operator.v1.2.0
Duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1825330

PR for fix: operator-framework/operator-lifecycle-manager/pull/1470

*** This bug has been marked as a duplicate of bug 1825330 ***