Bug 1732901
| Summary: | FailedScheduling after few deployments on azure | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Simon <skordas> |
| Component: | Storage | Assignee: | ravig <rgudimet> |
| Status: | CLOSED DUPLICATE | QA Contact: | ge liu <geliu> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.2.0 | CC: | aos-bugs, aos-storage-staff, jsafrane, mfojtik |
| Target Milestone: | --- | | |
| Target Release: | 4.2.0 | | |
| Hardware: | Unspecified | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-07-26 10:30:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

Simon 2019-07-24 16:32:05 UTC
It looks like this is storage related: when there are multiple availability zones, the PV/PVC should be in the same zone as the pod.

```
oc get storageclass managed-premium -o yaml | grep volumeBindingMode
```

Actual: `volumeBindingMode: Immediate`
Expected: `volumeBindingMode: WaitForFirstConsumer`

With `volumeBindingMode: Immediate`, the PVC can be created in a different zone than the pod. `WaitForFirstConsumer` ensures the PV, PVC, and pod all end up in the same zone.

> 105m Warning FailedScheduling pod/git-2-g8cg7 0/6 nodes are available: 1 node(s) exceed max volume count, 2 node(s) had volume node affinity conflict, 3 node(s) had taints that the pod didn't tolerate.

It seems that one node is at its limit of attached Azure volumes.

> volumeBindingMode: WaitForFirstConsumer

This is already covered in bug #1731059. It should help in your case. But please check the number of pods on the other nodes -- are they reaching the volume attachment limit too? Kubernetes should distribute volumes across zones roughly evenly. Maybe it's time to scale the cluster up.

*** This bug has been marked as a duplicate of bug 1731059 ***
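For reference, a StorageClass with the expected binding mode would look roughly like the sketch below. This is an illustrative example, not the exact `managed-premium` definition shipped with the cluster; the `parameters` values are assumptions based on typical Azure managed-disk classes.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed
  storageaccounttype: Premium_LRS
# Delay provisioning until a pod using the PVC is scheduled, so the
# disk is created in the same availability zone as that pod.
volumeBindingMode: WaitForFirstConsumer
```

Note that `volumeBindingMode` is immutable on an existing StorageClass, so applying this change in practice means deleting and recreating the class rather than patching it in place.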