Bug 1670241
| Summary: | How do gp2 PVs choose a zone? | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Hongkai Liu <hongkliu> |
| Component: | Storage | Assignee: | Hemant Kumar <hekumar> |
| Status: | CLOSED WONTFIX | QA Contact: | Liang Xia <lxia> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.1.0 | CC: | aos-bugs, aos-storage-staff, hongkliu, jsafrane, mifiedle |
| Target Milestone: | --- | | |
| Target Release: | 4.1.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-02-18 20:35:05 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Hongkai Liu
2019-01-29 01:57:28 UTC
75 projects, each with one StatefulSet with 2 replicas, and each pod has a PVC generated by `volumeClaimTemplates`. Tried with Kubernetes 1.12; the problem is still there.

```
# oc get clusterversion version -o json | jq .status.desired
{
  "image": "registry.svc.ci.openshift.org/ocp/release@sha256:d03ce0ef85540a1fff8bfc1c408253404aaecb2b958d7c3f24896f3597c3715b",
  "version": "4.0.0-0.nightly-2019-01-30-145955"
}

# oc version
oc v4.0.0-0.150.0
kubernetes v1.12.4+f39ab668d3
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://hongkliu28-api.qe.devcluster.openshift.com:6443
kubernetes v1.12.4+f39ab668d3
```

---

Placement of dynamically provisioned volumes is based *only* on the PVC name. The name is hashed, the hash is divided by the number of zones, and the remainder is used as the index of the zone. This works well as long as each PVC has a different name: the hashes differ and PVs are provisioned roughly equally among the zones. If PVCs have the same names (in different namespaces), they have the same hash, and their PVs are provisioned in the same zone. (A minimal sketch of this hashing scheme is at the end of this report.) There is bug #1663012 that tries to fix that, but changing the hashing algorithm on a Kubernetes update looks like a significant behavior change.

Can you use different StatefulSet names in each namespace? That should help you with this issue.

Oh, and since this is 4.0, setting `volumeBindingMode: WaitForFirstConsumer` in the storage class should fix it too, even with the same PVC names in all namespaces: with late binding, the zone is taken from the node the pod is scheduled to rather than from the PVC name hash.

---

It works with `volumeBindingMode: WaitForFirstConsumer`:

```
# cat ~/gp2b.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
  creationTimestamp: 2019-02-18T14:16:17Z
  labels:
    cluster.storage.openshift.io/owner-name: cluster-config-v1
    cluster.storage.openshift.io/owner-namespace: kube-system
  name: gp2b
  resourceVersion: "9640"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/gp2
  uid: c1904b3f-3387-11e9-9c73-0ac06c3388a2
parameters:
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

```
# oc get clusterversion version -o json | jq -r .status.desired
{
  "image": "registry.svc.ci.openshift.org/ocp/release@sha256:9f37d93acf2e7442e5bf74f06ca253e37ba299e89bbb66fb30b2cafda6c3d217",
  "version": "4.0.0-0.ci-2019-02-18-105238"
}
```
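For reference, here is a minimal Go sketch of the zone-selection scheme described above, assuming an FNV-32 hash of the PVC's base name, the StatefulSet ordinal as an offset, and the remainder modulo the number of zones. It is a simplification of `ChooseZoneForVolume` in `k8s.io/kubernetes/pkg/volume/util`, not the exact upstream code; the zone names and PVC names below are made-up examples.

```go
// A simplified sketch of name-hash zone selection; not the exact
// upstream ChooseZoneForVolume, but the same shape of algorithm.
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
	"strconv"
	"strings"
)

// chooseZone hashes the PVC's base name, adds the StatefulSet ordinal
// as an offset, and picks a zone by the remainder modulo the zone count.
func chooseZone(zones []string, pvcName string) string {
	sort.Strings(zones) // stable order: the same name always maps to the same zone

	base, offset := pvcName, uint32(0)
	// StatefulSet PVCs are named "<template>-<set>-<ordinal>"; peel off the
	// trailing ordinal so replicas of one set spread over consecutive zones.
	if i := strings.LastIndex(pvcName, "-"); i != -1 {
		if n, err := strconv.Atoi(pvcName[i+1:]); err == nil {
			base, offset = pvcName[:i], uint32(n)
		}
	}

	h := fnv.New32()
	h.Write([]byte(base))
	return zones[(h.Sum32()+offset)%uint32(len(zones))]
}

func main() {
	zones := []string{"us-east-2a", "us-east-2b", "us-east-2c"}

	// The namespace never enters the hash, so "www-web-0" lands in the
	// same zone in every one of the 75 namespaces.
	fmt.Println(chooseZone(zones, "www-web-0"))
	fmt.Println(chooseZone(zones, "www-web-1")) // next zone, via the ordinal offset

	// Renaming the StatefulSet changes the generated PVC names, hence
	// the hash, hence the zone.
	fmt.Println(chooseZone(zones, "www-web2-0"))
}
```

This also illustrates why the two suggested workarounds behave differently: distinct StatefulSet names change the hash input, while `WaitForFirstConsumer` bypasses the hash entirely by letting the scheduler pick the topology.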