Bug 1731059
| Summary: | Pod with persistent volumes failed scheduling on Azure due to volume node affinity conflict | | |
| --- | --- | --- | --- |
| Product: | OpenShift Container Platform | Reporter: | Liang Xia <lxia> |
| Component: | Storage | Assignee: | Jan Safranek <jsafrane> |
| Status: | CLOSED ERRATA | QA Contact: | Liang Xia <lxia> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.2.0 | CC: | abudavis, aos-bugs, aos-storage-staff, bchilds, chaoyang, jialiu, skordas |
| Target Milestone: | --- | | |
| Target Release: | 4.2.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-10-16 06:29:52 UTC | Type: | Bug |
**Description** (Liang Xia, 2019-07-18 09:07:01 UTC)
How did you create the cluster? By default, our installer creates one master and one node in each zone, so any PVC can be scheduled. I can see that you have only 5 nodes (3 masters, 2 nodes) instead of 6. Can you please check what happened to the 6th node?

---

*** Bug 1732901 has been marked as a duplicate of this bug. ***

---

Verified the issue has been fixed. Tested on a cluster with 5 nodes (3 masters, 2 nodes); created dynamic PVCs and pods several times, and the pods came up and are running with their volumes.

```
$ oc get co storage
NAME      VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
storage   4.2.0-0.nightly-2019-07-28-222114   True        False         False      35m

$ oc get sc managed-premium -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2019-07-29T02:00:13Z"
  name: managed-premium
  ownerReferences:
  - apiVersion: v1
    kind: clusteroperator
    name: storage
    uid: 5f960bf0-b1a3-11e9-bb54-000d3a92e279
  resourceVersion: "9674"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/managed-premium
  uid: 9a64e4a3-b1a4-11e9-9ac3-000d3a92e440
parameters:
  kind: Managed
  storageaccounttype: Premium_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

---

Facing the same problem on OpenShift 4.3.28 (5-node cluster). What is the solution / the root cause of the problem?
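---

For readers hitting the same message: the cause traced in this bug is topological. The Azure disk provisioner creates a zonal disk and stamps the resulting PV with a `nodeAffinity` rule pinning it to that zone; if the cluster has no schedulable node in that zone (here, the missing 6th node), the scheduler reports "volume node affinity conflict". The `volumeBindingMode: WaitForFirstConsumer` shown in the verification output avoids this by delaying provisioning until a pod is scheduled, so the disk is created in a zone that actually has a node. A minimal diagnostic sketch follows; `<pv-name>` is a placeholder, and the zone label shown is the 4.x-era beta label (newer clusters use `topology.kubernetes.io/zone`):

```
# Show the zone constraint the provisioner stamped on the PV
$ oc get pv <pv-name> -o jsonpath='{.spec.nodeAffinity}{"\n"}'

# List nodes with their zone labels; the conflict means no schedulable
# node's zone matches the PV's nodeAffinity
$ oc get nodes -L failure-domain.beta.kubernetes.io/zone

# Check whether the storage class delays binding until pod scheduling
$ oc get sc managed-premium -o jsonpath='{.volumeBindingMode}{"\n"}'
```

If the class still reports `Immediate`, recreating it with `WaitForFirstConsumer` prevents new disks from landing in zones without nodes; already-conflicted PVs have to be deleted and reprovisioned, or a node has to be restored to the affected zone.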