Description of problem:
oc apply -f no longer applies as many resources as possible before exiting with an error. This breaks our deployment process in CI: we apply multiple manifests with a single oc apply, run in a loop until everything succeeds. The loop is needed because not all resources can be installed in one iteration; e.g. we have to wait for a CRD that OLM installs, and that CRD installation is itself triggered by the other resources. This no longer works, because oc apply now stops at the first error instead of applying whatever it can; in our case it does not install a single resource. kubectl apply has the same bug in version 1.18.0 [0]. It looks like OCP picked this bug up with the recent rebase onto Kubernetes 1.18. A fix is already merged upstream in k8s, but not released yet [1]. We have a workaround in place, but it is ugly and we would like to get rid of it as soon as possible. Would it be possible to cherry-pick the fix into OCP?

[0] https://github.com/kubernetes/kubectl/issues/845
[1] https://github.com/kubernetes/kubernetes/pull/89607

Version-Release number of selected component (if applicable):
$ oc version
Client Version: 4.5.0-0.nightly-2020-04-03-003232

How reproducible:
Always

Steps to Reproduce:
See the upstream issue: https://github.com/kubernetes/kubectl/issues/845

Additional info:
n/a
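For reference, the CI apply loop from the description looks roughly like this (a minimal sketch; the file name manifests.yaml, the retry count, and the sleep interval are made-up illustration values):

# Minimal sketch of the CI apply loop described above. The file name
# manifests.yaml, the retry count, and the sleep interval are hypothetical.
for i in $(seq 1 30); do
    if oc apply -f manifests.yaml; then
        break              # all resources applied successfully
    fi
    sleep 10               # e.g. wait for OLM to install the CRD
done

Each pass relies on oc apply installing everything it can currently recognize, which is exactly the behavior the 1.18.0 regression removed.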
Is this a regression, i.e. 4.4 is not affected?
Correct, this only affects 4.5.
@Maciej: Is this something that is going to be fixed for 4.5? Or maybe it already was and the bug was just not updated? We have a workaround in place, so it does not affect us too badly, but it means we have to manually split all the YAMLs into separate objects first and then apply them one by one (as sketched below). And that is ugly.
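For the record, the workaround amounts to something like this (a rough sketch; the input file and output directory names are hypothetical):

# Split the multi-document YAML on the "---" separators using GNU csplit,
# then apply each document separately so that one unrecognized kind does
# not block the others. Failed documents get retried on the next CI pass.
mkdir -p /tmp/split
csplit --quiet --elide-empty-files \
    --prefix=/tmp/split/doc --suffix-format='%02d.yaml' \
    manifests.yaml '/^---$/' '{*}'
for doc in /tmp/split/doc*.yaml; do
    oc apply -f "$doc" || echo "will retry $doc on the next pass"
done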
OK, I did a test using a file that contained an undefined CR followed by a namespace, using the latest 4.5 CI version:

[msivak@localhost tmp]$ oc version
Client Version: 4.5.0-0.ci-2020-05-07-020439
Server Version: 4.5.0-0.ci-2020-05-07-020439
Kubernetes Version: v1.18.0-rc.1

[msivak@localhost tmp]$ cat test.yaml
---
apiVersion: performance.openshift.io/v1alpha1
kind: PerformanceProfile
metadata:
  name: example-performanceprofile
spec:
  cpu: "test"
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-performance-addon
spec: {}

[msivak@localhost tmp]$ oc apply -f test.yaml
namespace/openshift-performance-addon created
error: unable to recognize "test.yaml": no matches for kind "PerformanceProfile" in version "performance.openshift.io/v1alpha1"

The namespace was created even though the CR could not be recognized, so it seems the bug is now resolved. I wonder if the patch was backported, as it was only fixed upstream in Kubernetes v1.18.1.
This was fixed in https://github.com/openshift/oc/pull/402
[root@dhcp-140-138 ~]# cat /tmp/bug.yaml
---
apiVersion: performance.openshift.io/v1alpha1
kind: PerformanceProfile
metadata:
  name: example-performanceprofile
spec:
  cpu: "test"
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-performance-addon
spec: {}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "30 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from a CronJob
          restartPolicy: OnFailure

[root@dhcp-140-138 ~]# oc version
Client Version: 4.5.0-202005072157-f415627
Server Version: 4.5.0-0.nightly-2020-05-08-200452
Kubernetes Version: v1.18.0-rc.1

[root@dhcp-140-138 ~]# oc apply -f /tmp/bug.yaml
namespace/openshift-performance-addon unchanged
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
cronjob.batch/hello configured
error: unable to recognize "/tmp/bug.yaml": no matches for kind "PerformanceProfile" in version "performance.openshift.io/v1alpha1"
[root@dhcp-140-138 ~]# oc apply -f /tmp/bug.yaml
namespace/openshift-performance-addon created
cronjob.batch/hello created
error: unable to recognize "/tmp/bug.yaml": no matches for kind "PerformanceProfile" in version "performance.openshift.io/v1alpha1"
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409