Description of problem:

The test "[sig-cli] Kubectl client [k8s.io] Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema [Suite:openshift/conformance/parallel] [Suite:k8s]" appears flaky:

```
fail [k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:959]: Oct 11 12:08:20.482: failed to create CR {"kind":"E2e-test-kubectl-8375-crd","apiVersion":"kubectl-crd-test.k8s.io/v1","metadata":{"name":"test-cr"},"spec":{"bars":[{"name":"test-bar"}],"extraProperty":"arbitrary-value"}} in namespace --namespace=e2e-kubectl-192: error running &{/usr/bin/kubectl [kubectl --server=https://api.ci-op-cgddh3rr-7f7b0.origin-ci-int-aws.dev.rhcloud.com:6443 --kubeconfig=/tmp/admin.kubeconfig --namespace=e2e-kubectl-192 create --validate=true -f -] [] 0xc00302b760 error: error validating "STDIN": error validating data: ValidationError(E2e-test-kubectl-8375-crd.spec): unknown field "extraProperty" in io.k8s.kubectl-crd-test.v1.E2e-test-kubectl-8375-crd.spec; if you choose to ignore these errors, turn validation off with --validate=false [] <nil> 0xc0038f1dd0 exit status 1 <nil> <nil> true [0xc000011210 0xc000011498 0xc0000115b8] [0xc000011210 0xc000011498 0xc0000115b8] [0xc000011268 0xc000011440 0xc000011508] [0x95aef0 0x95b020 0x95b020] 0xc002f5ede0 <nil>}:
Command stdout:

stderr:
error: error validating "STDIN": error validating data: ValidationError(E2e-test-kubectl-8375-crd.spec): unknown field "extraProperty" in io.k8s.kubectl-crd-test.v1.E2e-test-kubectl-8375-crd.spec; if you choose to ignore these errors, turn validation off with --validate=false

error:
exit status 1
```

Recent tests:
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-upi-4.2/34
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-upi-4.2/30
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-upi-4.2/29

Version-Release number of selected component (if applicable):
4.2

How reproducible:
Sometimes

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
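For reference, the kind of CRD/CR pair this test exercises looks roughly like the sketch below. This is illustrative only: the real manifests are generated by the e2e test in test/e2e/kubectl/kubectl.go with randomized names (e.g. e2e-test-kubectl-8375-crd), and the exact shape of the partial schema is an assumption here; the group, kind, and CR spec fields are taken from the failure log above.

```yaml
# Hedged sketch, not the test's literal manifests.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-kubectl-8375-crds.kubectl-crd-test.k8s.io
spec:
  group: kubectl-crd-test.k8s.io
  version: v1
  scope: Namespaced
  names:
    plural: e2e-test-kubectl-8375-crds
    kind: E2e-test-kubectl-8375-crd
  validation:
    openAPIV3Schema:
      properties:
        spec:
          type: object
          properties:
            bars:            # only "bars" is described; the schema is partial,
              type: array    # so extra spec fields should not be rejected
              items:
                type: object
                properties:
                  name:
                    type: string
---
# The CR carries a property the schema does not mention; the test expects
# `kubectl create --validate=true` to accept it anyway.
apiVersion: kubectl-crd-test.k8s.io/v1
kind: E2e-test-kubectl-8375-crd
metadata:
  name: test-cr
spec:
  bars:
  - name: test-bar
  extraProperty: arbitrary-value
```

On a healthy cluster, creating the pair with `kubectl create --validate=true -f -` succeeds; the flake manifests as the `unknown field "extraProperty"` validation error quoted in the description.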
Raising severity to high: this is the top flake on our 4.2 blocking job (3 of 7 failures in the past week): https://testgrid.k8s.io/redhat-openshift-ocp-release-4.2-blocking#release-openshift-origin-installer-e2e-aws-4.2
@adam, in that time it never flaked twice in the same run, so it did not cause a single e2e job failure. It seems like you would want to devote effort first to bugs that are actually causing job failures.
I have not seen this flaking beyond those isolated instances back in the day. I'm moving this to QA to double-check.
Confirmed with the latest version; can't reproduce the issue now:

```
[root@dhcp-140-138 origin]# openshift-tests run-test "[sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema [Suite:openshift/conformance/parallel] [Suite:k8s]"
Feb 3 12:47:17.096: INFO: >>> kubeConfig: /root/kubeconfig
Feb 3 12:47:17.098: INFO: >>> kubeConfig: /root/kubeconfig
Feb 3 12:47:25.488: INFO: >>> kubeConfig: /root/kubeconfig
Feb 3 12:47:25.490: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Feb 3 12:47:26.247: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 3 12:47:27.062: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 3 12:47:27.062: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Feb 3 12:47:27.062: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 3 12:47:27.325: INFO: e2e test version: v1.16.2
Feb 3 12:47:27.578: INFO: kube-apiserver version: v1.17.1
Feb 3 12:47:27.578: INFO: >>> kubeConfig: /root/kubeconfig
Feb 3 12:47:27.838: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  /home/golang/src/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:60
[BeforeEach] [sig-cli] Kubectl client
  /home/golang/src/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 12:47:27.842: INFO: >>> kubeConfig: /root/kubeconfig
STEP: Building a namespace api object, basename kubectl
Feb 3 12:47:28.690: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Feb 3 12:47:31.267: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /home/golang/src/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/golang/src/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:943
STEP: prepare CRD with partially-specified validation schema
Feb 3 12:47:31.520: INFO: >>> kubeConfig: /root/kubeconfig
STEP: sleep for 10s to wait for potential crd openapi publishing alpha feature
STEP: successfully create CR
Feb 3 12:47:43.907: INFO: Running '/usr/sbin/kubectl --server=https://api.yinzhou.qe.devcluster.openshift.com:6443 --kubeconfig=/root/kubeconfig --namespace=e2e-kubectl-8090 create --validate=true -f -'
Feb 3 12:47:52.449: INFO: stderr: ""
Feb 3 12:47:52.449: INFO: stdout: "e2e-test-kubectl-5898-crd.kubectl.example.com/test-cr created\n"
Feb 3 12:47:52.449: INFO: Running '/usr/sbin/kubectl --server=https://api.yinzhou.qe.devcluster.openshift.com:6443 --kubeconfig=/root/kubeconfig --namespace=e2e-kubectl-8090 delete e2e-test-kubectl-5898-crds test-cr'
Feb 3 12:47:54.083: INFO: stderr: ""
Feb 3 12:47:54.083: INFO: stdout: "e2e-test-kubectl-5898-crd.kubectl.example.com \"test-cr\" deleted\n"
STEP: successfully apply CR
Feb 3 12:47:54.083: INFO: Running '/usr/sbin/kubectl --server=https://api.yinzhou.qe.devcluster.openshift.com:6443 --kubeconfig=/root/kubeconfig --namespace=e2e-kubectl-8090 apply --validate=true -f -'
Feb 3 12:47:57.749: INFO: stderr: ""
Feb 3 12:47:57.749: INFO: stdout: "e2e-test-kubectl-5898-crd.kubectl.example.com/test-cr created\n"
Feb 3 12:47:57.749: INFO: Running '/usr/sbin/kubectl --server=https://api.yinzhou.qe.devcluster.openshift.com:6443 --kubeconfig=/root/kubeconfig --namespace=e2e-kubectl-8090 delete e2e-test-kubectl-5898-crds test-cr'
Feb 3 12:47:59.295: INFO: stderr: ""
Feb 3 12:47:59.295: INFO: stdout: "e2e-test-kubectl-5898-crd.kubectl.example.com \"test-cr\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /home/golang/src/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 12:47:59.808: INFO: Waiting up to 3m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-kubectl-8090" for this suite.
Feb 3 12:48:00.869: INFO: Running AfterSuite actions on all nodes
Feb 3 12:48:00.873: INFO: Running AfterSuite actions on node 1
```
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0581