Bug 1490477 - dynamic provisioning doesn't work well in multizone deployments
Summary: dynamic provisioning doesn't work well in multizone deployments
Keywords:
Status: CLOSED DUPLICATE of bug 1509028
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.6.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: Pavel Pospisil
QA Contact: Liang Xia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-09-11 17:20 UTC by Peter Schiffer
Modified: 2018-03-01 07:55 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-03 19:53:18 UTC
Target Upstream Version:
Embargoed:


Attachments: None

Description Peter Schiffer 2017-09-11 17:20:59 UTC
Description of problem:
Dynamic provisioning fails in a multizone deployment (GCP) with the error:
No nodes are available that match all of the following predicates:: MatchNodeSelector (3), NoVolumeZoneConflict (4).

Version-Release number of selected component (if applicable):
atomic-openshift-3.6.173.0.21-1.git.0.f95b0e7.el7.x86_64

How reproducible:
always

Steps to Reproduce:
1. deploy multizone OCP in GCP

2. observe default storage class created by openshift ansible installer:
$ oc describe storageclass standard
Name:		standard
IsDefaultClass:	Yes
Annotations:	storageclass.beta.kubernetes.io/is-default-class=true
Provisioner:	kubernetes.io/gce-pd
Parameters:	type=pd-standard
Events:		<none>

3. create some demo app:
$ oc new-app openshift/php:7.0~https://github.com/christianh814/openshift-php-upload-demo --name=demo

4. create pvc:
$ vi app-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: app
 annotations:
   volume.beta.kubernetes.io/storage-class: standard
spec:
 accessModes:
  - ReadWriteOnce
 resources:
   requests:
     storage: 10Gi
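
(Not shown in the report: the claim would then be created from this file, presumably with something like:)
$ oc create -f app-claim.yaml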

$ oc describe pvc app
Name:		app
Namespace:	rwo
StorageClass:	standard
Status:		Bound
Volume:		pvc-1dfbdaac-96f1-11e7-940c-42010a9c0009
Labels:		<none>
Annotations:	pv.kubernetes.io/bind-completed=yes
		pv.kubernetes.io/bound-by-controller=yes
		volume.beta.kubernetes.io/storage-class=standard
		volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
Capacity:	10Gi
Access Modes:	RWO
Events:		<none>

$ oc describe pv pvc-1dfbdaac-96f1-11e7-940c-42010a9c0009
Name:		pvc-1dfbdaac-96f1-11e7-940c-42010a9c0009
Labels:		failure-domain.beta.kubernetes.io/region=europe-west3
		failure-domain.beta.kubernetes.io/zone=europe-west3-c
Annotations:	kubernetes.io/createdby=gce-pd-dynamic-provisioner
		pv.kubernetes.io/bound-by-controller=yes
		pv.kubernetes.io/provisioned-by=kubernetes.io/gce-pd
StorageClass:	standard
Status:		Bound
Claim:		rwo/app
Reclaim Policy:	Delete
Access Modes:	RWO
Capacity:	10Gi
Message:	
Source:
    Type:	GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:	kubernetes-dynamic-pvc-1dfbdaac-96f1-11e7-940c-42010a9c0009
    FSType:	
    Partition:	0
    ReadOnly:	false
Events:		<none>

5. attach volume to the pod:
$ oc volume dc/demo --add --name=persistent-volume --type=persistentVolumeClaim --claim-name=app --mount-path=/opt/app-root/src/uploaded

$ oc describe pod demo-4-x699m
Name:			demo-4-x699m
Namespace:		rwo
Security Policy:	restricted
Node:			/
Labels:			app=demo
			deployment=demo-4
			deploymentconfig=demo
Annotations:		kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"rwo","name":"demo-4","uid":"b736e241-9714-11e7-991a-42010a9c0007","api...
			openshift.io/deployment-config.latest-version=4
			openshift.io/deployment-config.name=demo
			openshift.io/deployment.name=demo-4
			openshift.io/generated-by=OpenShiftNewApp
			openshift.io/scc=restricted
Status:			Pending
IP:			
Controllers:		ReplicationController/demo-4
Containers:
  demo:
    Image:		docker-registry.default.svc:5000/rwo/demo@sha256:75d330389f6c165b71df9f35be8ef7564945dcb4c82580deebee750ab168b9c4
    Port:		8080/TCP
    Environment:	<none>
    Mounts:
      /opt/app-root/src/uploaded from persistent-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2nk8m (ro)
Conditions:
  Type		Status
  PodScheduled 	False 
Volumes:
  persistent-volume:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	app
    ReadOnly:	false
  default-token-2nk8m:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-2nk8m
    Optional:	false
QoS Class:	BestEffort
Node-Selectors:	role=app
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----			-------------	--------	------			-------
  12s		5s		5	default-scheduler			Warning		FailedScheduling	No nodes are available that match all of the following predicates:: MatchNodeSelector (3), NoVolumeZoneConflict (4).

Actual results:
default-scheduler			Warning		FailedScheduling	No nodes are available that match all of the following predicates:: MatchNodeSelector (3), NoVolumeZoneConflict (4).

Expected results:
The pod should be scheduled onto a node in the same zone as the dynamically provisioned volume, and the volume should be attached.

Master Log:
Sep 11 13:14:51 ocp-master-k082 atomic-openshift-master-controllers[11367]: I0911 13:14:51.015374   11367 replication_controller.go:451] Too few "rwo"/"demo-4" replicas, need 1, creating 1
Sep 11 13:14:51 ocp-master-k082 atomic-openshift-master-controllers[11367]: I0911 13:14:51.031659   11367 scheduler.go:161] Failed to schedule pod: rwo/demo-4-x699m
Sep 11 13:14:51 ocp-master-k082 atomic-openshift-master-controllers[11367]: I0911 13:14:51.031731   11367 factory.go:719] Updating pod condition for rwo/demo-4-x699m to (PodScheduled==False)
Sep 11 13:14:51 ocp-master-k082 atomic-openshift-master-controllers[11367]: I0911 13:14:51.032591   11367 event.go:217] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"rwo", Name:"demo-4", UID:"b736e241-9714-11e7-991a-42010a9c0007", APIVersion:"v1", ResourceVersion:"301385", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: demo-4-x699m
Sep 11 13:14:51 ocp-master-k082 atomic-openshift-master-controllers[11367]: I0911 13:14:51.039047   11367 scheduler.go:161] Failed to schedule pod: rwo/demo-4-x699m
Sep 11 13:14:51 ocp-master-k082 atomic-openshift-master-controllers[11367]: I0911 13:14:51.039175   11367 factory.go:719] Updating pod condition for rwo/demo-4-x699m to (PodScheduled==False)
Sep 11 13:14:51 ocp-master-k082 atomic-openshift-master-controllers[11367]: W0911 13:14:51.039219   11367 factory.go:656] Request for pod rwo/demo-4-x699m already in flight, abandoning
Sep 11 13:14:52 ocp-master-k082 atomic-openshift-master-controllers[11367]: I0911 13:14:52.035451   11367 scheduler.go:161] Failed to schedule pod: rwo/demo-4-x699m
Sep 11 13:14:52 ocp-master-k082 atomic-openshift-master-controllers[11367]: I0911 13:14:52.035531   11367 factory.go:719] Updating pod condition for rwo/demo-4-x699m to (PodScheduled==False)
Sep 11 13:14:54 ocp-master-k082 atomic-openshift-master-controllers[11367]: I0911 13:14:54.039183   11367 scheduler.go:161] Failed to schedule pod: rwo/demo-4-x699m

Node Log (of failed PODs):
N/A

PV Dump:
above

PVC Dump:
above

StorageClass Dump (if StorageClass used by PV/PVC):
above

Additional info:
$ oc get nodes --show-labels
NAME                  STATUS                     AGE       VERSION             LABELS
ocp-infra-node-1227   Ready                      3d        v1.6.1+5115d708d7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west3,failure-domain.beta.kubernetes.io/zone=europe-west3-b,kubernetes.io/hostname=ocp-infra-node-1227,role=infra
ocp-infra-node-g4d5   Ready                      3d        v1.6.1+5115d708d7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west3,failure-domain.beta.kubernetes.io/zone=europe-west3-a,kubernetes.io/hostname=ocp-infra-node-g4d5,role=infra
ocp-infra-node-lg0s   Ready                      3d        v1.6.1+5115d708d7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west3,failure-domain.beta.kubernetes.io/zone=europe-west3-c,kubernetes.io/hostname=ocp-infra-node-lg0s,role=infra
ocp-master-k082       Ready,SchedulingDisabled   3d        v1.6.1+5115d708d7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west3,failure-domain.beta.kubernetes.io/zone=europe-west3-a,kubernetes.io/hostname=ocp-master-k082,role=master
ocp-master-l9lx       Ready,SchedulingDisabled   3d        v1.6.1+5115d708d7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west3,failure-domain.beta.kubernetes.io/zone=europe-west3-b,kubernetes.io/hostname=ocp-master-l9lx,role=master
ocp-master-tdv1       Ready,SchedulingDisabled   3d        v1.6.1+5115d708d7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west3,failure-domain.beta.kubernetes.io/zone=europe-west3-c,kubernetes.io/hostname=ocp-master-tdv1,role=master
ocp-node-2lrj         Ready                      3d        v1.6.1+5115d708d7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west3,failure-domain.beta.kubernetes.io/zone=europe-west3-a,kubernetes.io/hostname=ocp-node-2lrj,role=app
ocp-node-dgpl         Ready                      3d        v1.6.1+5115d708d7   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-2,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=europe-west3,failure-domain.beta.kubernetes.io/zone=europe-west3-b,kubernetes.io/hostname=ocp-node-dgpl,role=app

Volume affinity should work as described at https://kubernetes.io/docs/admin/multiple-zones/#volume-affinity

Comment 1 Peter Schiffer 2017-09-11 18:57:13 UTC
Ah, now I see it. The PV is in zone europe-west3-c, but the app nodes are only in zones europe-west3-a and europe-west3-b.

So the question now is: how are the zones selected? According to the master zones? In that case, the number of infra and app nodes in a multizone deployment should be at least the number of masters, I guess.
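
(For reference, the PV's zone label and the zones of the app nodes can be compared with the commands below; this is an editorial sketch, not part of the original report. The PV name and the role=app label come from the output above; --show-labels, -l, and -L are standard oc/kubectl flags:)
$ oc get pv pvc-1dfbdaac-96f1-11e7-940c-42010a9c0009 --show-labels
$ oc get nodes -l role=app -L failure-domain.beta.kubernetes.io/zone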

Comment 2 Eric Paris 2017-09-11 21:52:50 UTC
I thought PVs could be created in any zone that has GCE instances with the same project ID as set in the master config. So if you have non-OpenShift instances under the same project ID, you could run into this problem.

Comment 3 Pavel Pospisil 2017-09-12 10:05:42 UTC
Please, could you add the output of the command below:
$ oc get nodes

Comment 4 Pavel Pospisil 2017-09-12 10:30:02 UTC
Note: there's a Storage Class configuration parameter "zone" [1] that can be used to specify the zone in which the PV is provisioned (see the example after the references below).

Note: OpenShift 3.7 will have a Storage Class configuration parameter "zones" [2].

If the "zone" configuration parameter is not specified in the Storage Class, the PV is provisioned in an arbitrary zone in the cluster.

As far as I remember, there was a bug that caused a PV to be provisioned in a zone where only masters resided. That bug was fixed, but I can't find whether the fix went into K8s 1.6 or 1.7.

[1] https://docs.openshift.org/latest/install_config/persistent_storage/dynamically_provisioning_pvs.html#gce-persistentdisk-gcePd
[2] https://trello.com/c/hIoJFosv/506-8-admins-can-configure-zones-in-storage-class
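
(A minimal sketch of a Storage Class pinned to one zone, following the "zone" parameter described above; the class name and the chosen zone are illustrative, not taken from this report:)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-europe-west3-a
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: europe-west3-a    # pick a zone that actually has schedulable app nodes

(On OpenShift 3.7 / Kubernetes 1.7, the "zones" parameter accepts a comma-separated list of zones instead.)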

Comment 5 Peter Schiffer 2017-09-12 10:49:28 UTC
$ oc get nodes
NAME                  STATUS                     AGE       VERSION
ocp-infra-node-1227   Ready                      3d        v1.6.1+5115d708d7
ocp-infra-node-g4d5   Ready                      3d        v1.6.1+5115d708d7
ocp-infra-node-lg0s   Ready                      3d        v1.6.1+5115d708d7
ocp-master-k082       Ready,SchedulingDisabled   3d        v1.6.1+5115d708d7
ocp-master-l9lx       Ready,SchedulingDisabled   3d        v1.6.1+5115d708d7
ocp-master-tdv1       Ready,SchedulingDisabled   3d        v1.6.1+5115d708d7
ocp-node-2lrj         Ready                      3d        v1.6.1+5115d708d7
ocp-node-dgpl         Ready                      3d        v1.6.1+5115d708d7

But I'm going to increase the number of app nodes and try again.

Comment 6 Jan Safranek 2017-09-12 12:27:24 UTC
This bug is discussed upstream here: https://github.com/kubernetes/kubernetes/issues/50115

There is no solution so far. As a workaround, you should always have at least one node in every zone that has a master.
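
(To check whether the workaround holds, list every node together with its zone and role labels; an editorial sketch using the standard -L label-column flag:)
$ oc get nodes -L failure-domain.beta.kubernetes.io/zone -L role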

Comment 7 Peter Schiffer 2017-09-12 13:33:03 UTC
Thanks. I can confirm that adding an app node solved the problem in my case.

Comment 8 Pavel Pospisil 2017-11-03 19:53:18 UTC

*** This bug has been marked as a duplicate of bug 1509028 ***

