Bug 1820665 - oc apply -f no longer applies as many resources as possible before exiting with an error.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.5.0
Assignee: Maciej Szulik
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks: 1771572
 
Reported: 2020-04-03 14:38 UTC by Marc Sluiter
Modified: 2020-07-13 17:25 UTC (History)
5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-13 17:25:33 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:25:56 UTC

Description Marc Sluiter 2020-04-03 14:38:09 UTC
Description of problem:
oc apply -f no longer applies as many resources as possible before exiting with an error.

This affects our deployment process in CI: we apply multiple manifests with a single oc apply, in a loop, until all of them succeed. The loop is needed because not all resources can be installed in one iteration; for example, we need to wait for a CRD to be installed by OLM (the installation of that CRD is triggered by the other resources). This no longer works, because when one resource fails, oc apply now installs nothing at all.

kubectl apply has the same bug in version 1.18.0 [0]. It looks like OCP rebased onto Kubernetes 1.18 just recently and picked up this bug with it. A fix is already available in Kubernetes, but it has not been released yet [1].

We have a workaround in place now, but it is ugly and we would like to get rid of it as soon as possible. Would it be possible to cherry-pick the fix into OCP?

[0] https://github.com/kubernetes/kubectl/issues/845
[1] https://github.com/kubernetes/kubernetes/pull/89607
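The CI loop described above can be sketched roughly as follows. This is an illustrative sketch, not our actual CI script: the retry() helper, the attempt budget, and the RETRY_DELAY variable are all made up for the example; only the "oc apply -f" invocation comes from the report.

```shell
#!/bin/sh
# Re-run a command until it succeeds or the attempt budget runs out.
# In CI this wraps "oc apply -f manifests.yaml": early iterations fail
# while OLM is still installing the CRD, later ones succeed.
retry() {
  tries=$1
  shift
  i=1
  while [ "$i" -le "$tries" ]; do
    if "$@"; then
      return 0
    fi
    sleep "${RETRY_DELAY:-10}"   # give OLM time to install the CRD
    i=$((i + 1))
  done
  return 1
}

# Illustrative usage:
#   retry 30 oc apply -f manifests.yaml
```

This pattern only converges if each iteration applies as many resources as it can; with the regressed behavior (nothing applied once one resource fails), the CRD installation is never triggered and the loop can never finish.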

Version-Release number of selected component (if applicable):
$ oc version
Client Version: 4.5.0-0.nightly-2020-04-03-003232

How reproducible:
always

Steps to Reproduce:
See upstream issue: https://github.com/kubernetes/kubectl/issues/845

Additional info:
n/a

Comment 1 Federico Simoncelli 2020-04-07 09:30:52 UTC
Is this a regression, i.e. 4.4 is not affected?

Comment 2 Marc Sluiter 2020-04-07 09:55:51 UTC
Correct, this only affects 4.5.

Comment 3 Martin Sivák 2020-04-30 07:48:18 UTC
@Maciej: Is this something that is going to get fixed for 4.5? Or was it already fixed and the bug just not updated?

We have a workaround in place, so it does not affect us too badly, but it means we first have to manually split all the YAML files into separate objects and then post them one by one. And that is ugly.
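A sketch of that kind of workaround, assuming GNU csplit is available; the split_docs() helper name and the doc- file prefix are invented for the example, and the apply loop shown in the comment is illustrative only:

```shell
#!/bin/sh
# Split a multi-document YAML file on its "---" separators so each
# document can be applied individually; one unknown kind then cannot
# block the rest.
split_docs() {
  manifest=$1
  outdir=$2
  # -s: quiet, -z: drop empty pieces, -f: output file prefix,
  # '{*}': repeat the pattern for every "---" in the file (GNU csplit)
  csplit -s -z -f "$outdir/doc-" "$manifest" '/^---$/' '{*}'
}

# Illustrative usage in the deployment loop:
#   split_docs manifests.yaml "$tmpdir"
#   for doc in "$tmpdir"/doc-*; do
#     oc apply -f "$doc" || echo "deferred: $doc"
#   done
```

The obvious downside, as noted above, is the extra temp-file bookkeeping that a single oc apply -f used to make unnecessary.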

Comment 4 Martin Sivák 2020-05-07 12:12:14 UTC
OK, I did a test using a file that contains an undefined CR followed by a namespace, using the latest 4.5 CI version:

[msivak@localhost tmp]$ oc version
Client Version: 4.5.0-0.ci-2020-05-07-020439
Server Version: 4.5.0-0.ci-2020-05-07-020439
Kubernetes Version: v1.18.0-rc.1
[msivak@localhost tmp]$ cat test.yaml 
---
apiVersion: performance.openshift.io/v1alpha1
kind: PerformanceProfile
metadata:
  name: example-performanceprofile
spec:
  cpu: "test"
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-performance-addon
spec: {}
[msivak@localhost tmp]$ oc apply -f test.yaml 
namespace/openshift-performance-addon created
error: unable to recognize "test.yaml": no matches for kind "PerformanceProfile" in version "performance.openshift.io/v1alpha1"


So it seems the bug is now resolved. I wonder whether the patch was backported, as it was only fixed upstream in Kubernetes v1.18.1.

Comment 5 Maciej Szulik 2020-05-08 18:31:27 UTC
This was fixed in https://github.com/openshift/oc/pull/402

Comment 8 zhou ying 2020-05-09 04:12:48 UTC
[root@dhcp-140-138 ~]# cat /tmp/bug.yaml 
---
apiVersion: performance.openshift.io/v1alpha1
kind: PerformanceProfile
metadata:
  name: example-performanceprofile
spec:
  cpu: "test"
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-performance-addon
spec: {}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "30 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from a CronJob
          restartPolicy: OnFailure
[root@dhcp-140-138 ~]# oc version 
Client Version: 4.5.0-202005072157-f415627
Server Version: 4.5.0-0.nightly-2020-05-08-200452
Kubernetes Version: v1.18.0-rc.1
[root@dhcp-140-138 ~]# oc apply -f /tmp/bug.yaml 
namespace/openshift-performance-addon unchanged
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
cronjob.batch/hello configured
error: unable to recognize "/tmp/bug.yaml": no matches for kind "PerformanceProfile" in version "performance.openshift.io/v1alpha1"

Comment 9 zhou ying 2020-05-09 04:24:29 UTC
[root@dhcp-140-138 ~]# oc apply -f /tmp/bug.yaml 
namespace/openshift-performance-addon created
cronjob.batch/hello created
error: unable to recognize "/tmp/bug.yaml": no matches for kind "PerformanceProfile" in version "performance.openshift.io/v1alpha1"

Comment 11 errata-xmlrpc 2020-07-13 17:25:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

