Bug 1848723 - CI failing on: FailedScheduling: 0/4 nodes are available: 4 node(s) didn't match node selector.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: 3.11.z
Assignee: Russell Teague
QA Contact: Scott Dodson
URL:
Whiteboard:
Depends On:
Blocks: 1828484
 
Reported: 2020-06-18 20:14 UTC by W. Trevor King
Modified: 2020-07-27 13:49 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The CI node grouping for GCP assigned master and infra nodes to the same group. Consequence: The node group mapping caused all nodes to be labeled as masters and none as infra. Fix: The node group mapping was changed to put infra and compute nodes in the same group so that the proper infra and compute labels are applied. Result: The CI cluster is built properly with master and infra/compute nodes.
Clone Of:
Environment:
Last Closed: 2020-07-27 13:49:10 UTC
Target Upstream Version:
Embargoed:




Links
Github openshift/openshift-ansible pull 12203 (closed): Bug 1848723: Correct GCP group mapping for infra and compute - last updated 2020-08-12 19:10:26 UTC
Red Hat Product Errata RHBA-2020:2990 - last updated 2020-07-27 13:49:22 UTC

Description W. Trevor King 2020-06-18 20:14:23 UTC
3.11 CI has been sad, with jobs like [1] failing 75 test cases, including:

[Conformance][Area:Networking][Feature:Router] The HAProxy router should serve a route that points to two services and respect weights [Suite:openshift/conformance/parallel/minimal]
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.11.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:36
Expected error:
    <*errors.errorString | 0xc4200e3580>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/tmp/openshift/build-rpms/rpm/BUILD/origin-3.11.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/framework.go:1266

Standard output for that case included lots of stuff like:

Jun 18 15:22:41.533: INFO: At 2020-06-18 15:17:41 +0000 UTC - event for weighted-router: {default-scheduler } FailedScheduling: 0/4 nodes are available: 4 node(s) didn't match node selector.

So it seems like resource exhaustion or some such.  None of the gathered pod logs look like the scheduler container to me [2], but I'm not really sure what I'm looking for.
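
For a "didn't match node selector" event, one way to narrow things down would be to compare the pod's node selector against the labels actually present on the nodes; something like the following, where the pod name and namespace are only placeholders for the test pod:

$ oc get pod <weighted-router-pod> -n <test-namespace> -o jsonpath='{.spec.nodeSelector}'
$ oc get nodes --show-labels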

Stdout also contains:

Logging node info for node ci-op-ixbhsck4-7a04a-ig-m-7gl2
Jun 18 15:22:41.553: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ci-op-ixbhsck4-7a04a-ig-m-7gl2,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ci-op-ixbhsck4-7a04a-ig-m-7gl2,UID:b74cea1a-b174-11ea-b99f-42010a8e0005,ResourceVersion:42862,Generation:0,CreationTimestamp:2020-06-18 15:02:23 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-4,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-east1,failure-domain.beta.kubernetes.io/zone: us-east1-c,kubernetes.io/hostname: ci-op-ixbhsck4-7a04a-ig-m-7gl2,node-role.kubernetes.io/master: true,role: infra,},Annotations:map[string]string{node.openshift.io/md5sum: d555cf858261577cf916fb7a92453b4a,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:gce://openshift-gce-devel-ci/us-east1-c/ci-op-ixbhsck4-7a04a-ig-m-7gl2,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15598829568 0} {<nil>}  BinarySI},pods: {{250 0} {<nil>} 250 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15493971968 0} {<nil>}  BinarySI},pods: {{250 0} {<nil>} 250 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 0001-01-01 00:00:00 +0000 UTC 2020-06-18 15:02:23 +0000 UTC RouteCreated openshift-sdn cleared kubelet-set NoRouteCreated} {OutOfDisk False 2020-06-18 15:22:32 +0000 UTC 2020-06-18 15:02:23 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2020-06-18 15:22:32 +0000 UTC 2020-06-18 15:02:23 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-06-18 15:22:32 +0000 UTC 2020-06-18 15:02:23 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-06-18 15:22:32 +0000 UTC 2020-06-18 15:02:23 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-06-18 15:22:32 +0000 UTC 2020-06-18 15:04:50 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.142.0.5} {ExternalIP 35.231.41.193} {Hostname ci-op-ixbhsck4-7a04a-ig-m-7gl2}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:11b71dacfc47dad75f32c40baf2c9336,SystemUUID:26A21264-AF2E-73AB-88E0-E37D55E548E8,BootID:2c3ce488-1de3-46c6-b4a3-fee978957fbc,KernelVersion:3.10.0-1127.8.2.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.8 (Maipo),ContainerRuntimeVersion:docker://1.13.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[registry.svc.ci.openshift.org/ci-op-ixbhsck4/stable@sha256:5ccd2d23e5638c55b49ea7ac6c70bf860073ebd26cc254013b20407a058cb42f registry.svc.ci.openshift.org/ci-op-ixbhsck4/stable:node] 1189533806} {[registry.svc.ci.openshift.org/ci-op-ixbhsck4/stable@sha256:38813cb4637f1a634c092de63ef6cebc702e2ee2c60c9ecf792cdeaf6988be90 registry.svc.ci.openshift.org/ci-op-ixbhsck4/stable:control-plane] 832674027} 
{[docker.io/humblec/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/humblec/glusterdynamic-provisioner:v1.0] 373281573} {[registry.svc.ci.openshift.org/ci-op-ixbhsck4/stable@sha256:90613c74336740cabbb42385e9d0877c377d446800df70a8ad3b0618528674dd registry.svc.ci.openshift.org/ci-op-ixbhsck4/stable:console] 270341120} {[registry.svc.ci.openshift.org/ci-op-ixbhsck4/stable@sha256:d8306998cf1ba5dc04724b06d21aa0394d601b0917c2476766f4acdec51b0e94 registry.svc.ci.openshift.org/ci-op-ixbhsck4/stable:pod] 262132280} {[gcr.io/kubernetes-e2e-test-images/volume-nfs@sha256:86e62e3284154c97dfa5aac1e07df9bd0592ec8ec45386c0fd979dc5688cab6f gcr.io/kubernetes-e2e-test-images/volume-nfs:0.8] 247157334} {[docker.io/openshift/test-multicast@sha256:71f45c5f9b2e121fdc4c696f73f94910fe99314582c29b353538bbcaac101fad docker.io/openshift/test-multicast:latest] 211481541} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils-amd64@sha256:9e842e2f77073ec30e2349f9082ec35357a3bf61e12488adc677144295e68c38 gcr.io/kubernetes-e2e-test-images/jessie-dnsutils-amd64:1.0] 195659796} {[docker.io/nginx@sha256:21f32f6c08406306d822a0e6e8b7dc81f53f336570e852e25fbe1e3e3d0d0133 docker.io/nginx:latest] 132122017} {[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[k8s.gcr.io/echoserver@sha256:cb5c1bddd1b5665e1867a7fa1b5fa843a47ee433bbb75d4293888b71def53229 k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64@sha256:8ca6a9ecef3b2ef02f6e0c3d449235d9c53d532f420cc0a29a6a133aa88df256 k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[quay.io/coreos/etcd@sha256:95bbe1abb3417b81a91904db1ce7784c632a4b07fb362f6aaa21dd4a75494374 quay.io/coreos/etcd:v3.2.28] 38773385} {[gcr.io/kubernetes-e2e-test-images/nettest-amd64@sha256:f2d3eba3ae2043c51389935be4bf41b2b0e60fd7220b9ba9cbc51eceaa74b2b4 gcr.io/kubernetes-e2e-test-images/nettest-amd64:1.0] 27413498} {[gcr.io/kubernetes-e2e-test-images/net-amd64@sha256:67f1bc26a71759aff7a41175ded167eaecae6494edc73cf8279a42a0654eb632 gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/dnsutils-amd64@sha256:f8eca5c0c2e18b7ee6df876e5587c5885d13c59cb453d93bf51628daf656e4b5 gcr.io/kubernetes-e2e-test-images/dnsutils-amd64:1.0] 9030162} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64@sha256:e1dadb772c6132e3587fc41b8c07facd81ee7c4e74508891b5d98d45b85462e9 gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64@sha256:da1c165175d5589ed70384e406a0291b6f76d53fc99e2c31ae23003e11e5bfde gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64@sha256:2238f5a02d2648d41cc94a88f084060fbfa860890220328eb92696bf2ac649c9 gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64@sha256:2dd4032e98a0450d95a0ac71a5e465f542a900812d8c41bc6ca635aed1a5fc91 gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64@sha256:f4d8b3c2a85339ce928d763d933f27b445bbfbdcf4ede5ed018b170c8055ca60 gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/test-webserver-amd64@sha256:8da73e44d6870f86e0e6a87b2e07829470465afbb6febeb900296d99510789ce gcr.io/kubernetes-e2e-test-images/test-webserver-amd64:1.0] 4732240} 
{[gcr.io/kubernetes-e2e-test-images/porter-amd64@sha256:824f041b2757f8da2980092ace0991b4b8c6689e76c3b94f677521f7157bc7b8 gcr.io/kubernetes-e2e-test-images/porter-amd64:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester-amd64@sha256:ef1e5bf4aa80f899f51d173dfcc3106e8daf4c78c28be135b1d421c97f4c9354 gcr.io/kubernetes-e2e-test-images/entrypoint-tester-amd64:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:e3e75014e6df02dc21e6fb95f93b989a2ff8a91f36ae88d74eccbabaa21fc211 gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[docker.io/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 docker.io/busybox:latest] 1219430} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/busybox:1.24] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6b470226-b177-11ea-b99f-42010a8e0005],VolumesAttached:[{kubernetes.io/gce-pd/kubernetes-dynamic-pvc-6b470226-b177-11ea-b99f-42010a8e0005 /dev/disk/by-id/google-kubernetes-dynamic-pvc-6b470226-b177-11ea-b99f-42010a8e0005}],Config:nil,},}

and:

Logging node info for node ci-op-ixbhsck4-7a04a-ig-n-3gjv
Jun 18 15:22:41.605: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ci-op-ixbhsck4-7a04a-ig-n-3gjv,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ci-op-ixbhsck4-7a04a-ig-n-3gjv,UID:459873dc-b175-11ea-b99f-42010a8e0005,ResourceVersion:42889,Generation:0,CreationTimestamp:2020-06-18 15:06:22 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-east1,failure-domain.beta.kubernetes.io/zone: us-east1-c,kubernetes.io/hostname: ci-op-ixbhsck4-7a04a-ig-n-3gjv,node-role.kubernetes.io/master: true,role: infra,},Annotations:map[string]string{node.openshift.io/md5sum: d555cf858261577cf916fb7a92453b4a,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:gce://openshift-gce-devel-ci/us-east1-c/ci-op-ixbhsck4-7a04a-ig-n-3gjv,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7671906304 0} {<nil>}  BinarySI},pods: {{250 0} {<nil>} 250 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7567048704 0} {<nil>}  BinarySI},pods: {{250 0} {<nil>} 250 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 0001-01-01 00:00:00 +0000 UTC 2020-06-18 15:06:22 +0000 UTC RouteCreated openshift-sdn cleared kubelet-set NoRouteCreated} {OutOfDisk False 2020-06-18 15:22:35 +0000 UTC 2020-06-18 15:06:22 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2020-06-18 15:22:35 +0000 UTC 2020-06-18 15:06:22 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-06-18 15:22:35 +0000 UTC 2020-06-18 15:06:22 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-06-18 15:22:35 +0000 UTC 2020-06-18 15:06:22 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-06-18 15:22:35 +0000 UTC 2020-06-18 15:06:53 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.142.0.3} {ExternalIP 35.196.138.42} {Hostname ci-op-ixbhsck4-7a04a-ig-n-3gjv}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:11b71dacfc47dad75f32c40baf2c9336,SystemUUID:93D3837E-EB5E-9B15-116C-2B58BB359E93,BootID:d2b04494-c7fb-4476-a750-a95cc52c217d,KernelVersion:3.10.0-1127.8.2.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.8 (Maipo),ContainerRuntimeVersion:docker://1.13.1,KubeletVersion:v1.11.0+d4cacc0,KubeProxyVersion:v1.11.0+d4cacc0,OperatingSystem:linux,Architecture:amd64,},Images:[{[registry.svc.ci.openshift.org/ci-op-ixbhsck4/stable@sha256:5ccd2d23e5638c55b49ea7ac6c70bf860073ebd26cc254013b20407a058cb42f registry.svc.ci.openshift.org/ci-op-ixbhsck4/stable:node] 1189533806} {[registry.svc.ci.openshift.org/ci-op-ixbhsck4/stable@sha256:d8306998cf1ba5dc04724b06d21aa0394d601b0917c2476766f4acdec51b0e94 registry.svc.ci.openshift.org/ci-op-ixbhsck4/stable:pod] 262132280} 
{[gcr.io/kubernetes-e2e-test-images/volume-nfs@sha256:86e62e3284154c97dfa5aac1e07df9bd0592ec8ec45386c0fd979dc5688cab6f gcr.io/kubernetes-e2e-test-images/volume-nfs:0.8] 247157334} {[docker.io/openshift/test-multicast@sha256:71f45c5f9b2e121fdc4c696f73f94910fe99314582c29b353538bbcaac101fad docker.io/openshift/test-multicast:latest] 211481541} {[docker.io/nginx@sha256:21f32f6c08406306d822a0e6e8b7dc81f53f336570e852e25fbe1e3e3d0d0133 docker.io/nginx:latest] 132122017} {[k8s.gcr.io/nginx-slim-amd64@sha256:6654db6d4028756062edac466454ee5c9cf9b20ef79e35a81e3c840031eb1e2b k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[k8s.gcr.io/echoserver@sha256:cb5c1bddd1b5665e1867a7fa1b5fa843a47ee433bbb75d4293888b71def53229 k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64@sha256:8ca6a9ecef3b2ef02f6e0c3d449235d9c53d532f420cc0a29a6a133aa88df256 k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[gcr.io/kubernetes-e2e-test-images/dnsutils-amd64@sha256:f8eca5c0c2e18b7ee6df876e5587c5885d13c59cb453d93bf51628daf656e4b5 gcr.io/kubernetes-e2e-test-images/dnsutils-amd64:1.0] 9030162} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64@sha256:e1dadb772c6132e3587fc41b8c07facd81ee7c4e74508891b5d98d45b85462e9 gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64@sha256:da1c165175d5589ed70384e406a0291b6f76d53fc99e2c31ae23003e11e5bfde gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64@sha256:2238f5a02d2648d41cc94a88f084060fbfa860890220328eb92696bf2ac649c9 gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/k8s-authenticated-test/serve-hostname-amd64@sha256:c9f005a8230cd24cb56faa020cba08403864ddd565f984067afe7d221e58bebe gcr.io/k8s-authenticated-test/serve-hostname-amd64:1.0] 5492890} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64@sha256:2dd4032e98a0450d95a0ac71a5e465f542a900812d8c41bc6ca635aed1a5fc91 gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64@sha256:f4d8b3c2a85339ce928d763d933f27b445bbfbdcf4ede5ed018b170c8055ca60 gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64@sha256:1b5083ec770ba654859d78040fa7fec7400eacc27a55f40a4b4616a1c470146f gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/porter-amd64@sha256:824f041b2757f8da2980092ace0991b4b8c6689e76c3b94f677521f7157bc7b8 gcr.io/kubernetes-e2e-test-images/porter-amd64:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/port-forward-tester-amd64@sha256:09814bbf63990abfc7e95a0850d3800a88fedd0e4433225e187d537799d31a87 gcr.io/kubernetes-e2e-test-images/port-forward-tester-amd64:1.0] 1992230} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64@sha256:e3e75014e6df02dc21e6fb95f93b989a2ff8a91f36ae88d74eccbabaa21fc211 gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user-amd64@sha256:1ac490f1442cad4b1c886527470393c71302cc62f5e3304fb08854403cbc68dc gcr.io/kubernetes-e2e-test-images/mounttest-user-amd64:1.0] 1450451} {[docker.io/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 docker.io/busybox:latest] 1219430} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}

Those aren't very readable, but it looks like all the *Pressure conditions are False.  And looking at the gathered node JSON, the last condition change for each was the node going ready before the tests began:

$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/24382/pull-ci-openshift-origin-release-3.11-e2e-gcp/1273612535324479488/artifacts/e2e-gcp/nodes.json | jq -r '.items[] | .metadata.name as $name | .status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + $name + " " + .reason + ": " + .message' | sort
...
2020-06-18T15:04:50Z Ready=True ci-op-ixbhsck4-7a04a-ig-m-7gl2 KubeletReady: kubelet is posting ready status
...
2020-06-18T15:06:50Z Ready=True ci-op-ixbhsck4-7a04a-ig-n-fxl3 KubeletReady: kubelet is posting ready status
2020-06-18T15:06:53Z Ready=True ci-op-ixbhsck4-7a04a-ig-n-3gjv KubeletReady: kubelet is posting ready status
2020-06-18T15:06:57Z Ready=True ci-op-ixbhsck4-7a04a-ig-n-rwdz KubeletReady: kubelet is posting ready status
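
The same nodes.json can also be used to pull out just the role-related labels, since that is what the scheduler message points at; the query below is only illustrative:

$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/24382/pull-ci-openshift-origin-release-3.11-e2e-gcp/1273612535324479488/artifacts/e2e-gcp/nodes.json | jq -r '.items[] | .metadata.name + " " + ([.metadata.labels | to_entries[] | select(.key | test("node-role|^role$")) | .key + "=" + .value] | join(","))'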

Assigning to the node folks for now, in case they can shed some light on why scheduling was failing for these test pods, but feel free to kick this to another team if I'm guessing wrong.

Urgent at Vikas' request as a 3.11.z blocker.
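
Given that every node dumped above carries node-role.kubernetes.io/master: true and role: infra, a quick check against a live cluster for whether any node carries a compute label at all (which is presumably what the test pods' default node selector requires; the compute label name here is assumed from the 3.11 defaults) would be:

$ oc get nodes -L node-role.kubernetes.io/master,node-role.kubernetes.io/compute,role

If all four nodes show up as masters with no compute label, that would line up with the "0/4 nodes are available: 4 node(s) didn't match node selector" events.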

[1]: https://deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gs/origin-ci-test/pr-logs/pull/24382/pull-ci-openshift-origin-release-3.11-e2e-gcp/1273612535324479488
[2]: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/24382/pull-ci-openshift-origin-release-3.11-e2e-gcp/1273612535324479488/artifacts/e2e-gcp/pods/

Comment 9 Scott Dodson 2020-07-24 14:58:37 UTC
There have been no occurrences of this failure in 3.11 since the changes were made on July 8th.

Comment 11 errata-xmlrpc 2020-07-27 13:49:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2990

