Created attachment 1216244 [details]
The screenshot of AWS volumes showing that the volume is not attached to the node/instance

Description of problem:
Creating a PVC in a project on AWS does not attach the AWS volume to the node/instance.

Version-Release number of selected component (if applicable):
openshift v3.4.0.18+ada983f
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

How reproducible:
Always (every time)

Steps to Reproduce:
1) Create a new project.
2) oc create -f pvc.yaml (the yaml is pasted in the additional info).
3) oc get pv and oc get pvc show the volume created and bound (output pasted below).

root@ip-172-31-15-159: ~/svt/application_performance/osperf # oc project
Using project "pvcproj" on server "https://ip-172-31-15-159.us-west-2.compute.internal:8443".

root@ip-172-31-15-159: ~/svt/application_performance/osperf # oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim1    Bound     pvc-57c98d2d-a06b-11e6-b735-02def2788f0d   3Gi        RWO           53m

root@ip-172-31-15-159: ~/svt/application_performance/osperf # oc get pv | grep pvcproj
pvc-57c98d2d-a06b-11e6-b735-02def2788f0d   3Gi   RWO   Delete   Bound   pvcproj/claim1   53m

4) Go to the Volumes section in the AWS console: the volume is there but is not attached to the node (PNG screenshot attached).

Actual results:
The volume is not attached to the node and its status is "available".

Expected results:
The volume should be attached to the node and its status should be "in-use".

Additional info:
The yaml file for the PVC is:

kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
  name: "claim1"
  annotations:
    volume.alpha.kubernetes.io/storage-class: "foo"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "3Gi"
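For reference, the same attachment check can be scripted with the AWS CLI instead of the console. This is only a sketch: it assumes the CLI is configured for us-west-2 and that the dynamically provisioned volume carries the usual kubernetes.io/created-for/pv/name tag.

# Hypothetical CLI check; the PV name is taken from the oc get pv output above.
aws ec2 describe-volumes \
    --region us-west-2 \
    --filters Name=tag:kubernetes.io/created-for/pv/name,Values=pvc-57c98d2d-a06b-11e6-b735-02def2788f0d \
    --query 'Volumes[].{Id:VolumeId,State:State,AttachedTo:Attachments[].InstanceId}'

While the problem is present, State comes back "available" with no attachments; once attached it should read "in-use".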
The AWS volume will not attach to an instance until the PVC is used in a pod. When the pod is deployed and running on a node, the volume will show as attached to that instance.
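To illustrate, here is a minimal pod sketch that consumes the claim1 PVC from the original report; the pod name, image, and mount path are placeholders rather than anything from the reported setup.

kind: "Pod"
apiVersion: "v1"
metadata:
  name: "pvc-test"              # hypothetical name, for illustration only
spec:
  containers:
    - name: "app"
      image: "busybox"          # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: "data"
          mountPath: "/data"    # placeholder mount path
  volumes:
    - name: "data"
      persistentVolumeClaim:
        claimName: "claim1"     # the PVC from the original report

Once a pod like this is scheduled and running, the EBS volume should move from "available" to "in-use" on the instance hosting the pod.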
Created attachment 1216707 [details]
The json file used in "requested info"
Brad, here are the entire scenario steps to reproduce:

1. oc new-project cakepv

2. oc process -f cakephp-mysql-pv.json | oc create -f -
(the json file is attached to the bug)

service "cakephp-mysql-example" created
route "cakephp-mysql-example" created
imagestream "cakephp-mysql-example" created
buildconfig "cakephp-mysql-example" created
persistentvolumeclaim "mysql" created
deploymentconfig "cakephp-mysql-example" created
service "mysql" created
deploymentconfig "mysql" created

3. oc get pv and oc get pvc show that the pv and pvc get created and bound:

root@ip-172-31-15-159: ~/svt/openshift_scalability/content/quickstarts/cakephp # oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
mysql     Bound     pvc-922c52d6-a12c-11e6-b735-02def2788f0d   1Gi        RWO           54s

root@ip-172-31-15-159: ~/svt/openshift_scalability/content/quickstarts/cakephp # oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pvc-922c52d6-a12c-11e6-b735-02def2788f0d   1Gi        RWO           Delete          Bound     cakephppv/mysql             55s

4. But the pods never scale up, and the mysql pod either never comes up or, if we do oc rollout latest, gets stuck in ContainerCreating:

root@ip-172-31-15-159: ~/svt/openshift_scalability/content/quickstarts/cakephp # oc rollout latest dc/mysql
deploymentconfig "mysql" rolled out

root@ip-172-31-15-159: ~/svt/openshift_scalability/content/quickstarts/cakephp # oc get pods
NAME                               READY     STATUS              RESTARTS   AGE
cakephp-mysql-example-1-build      0/1       Completed           0          3m
cakephp-mysql-example-1-deploy     1/1       Running             0          2m
cakephp-mysql-example-1-hook-pre   0/1       CrashLoopBackOff    4          2m
mysql-1-deploy                     1/1       Running             0          3s
mysql-1-k005l                      0/1       ContainerCreating   0          0s
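For triage, the attach/mount failure usually shows up in the pod events. A quick way to pull them, using the stuck pod name from the output above (the grep window size is arbitrary):

oc describe pod mysql-1-k005l | grep -A 15 'Events:'

The corresponding AttachVolume/MountVolume messages should also appear in the controller-manager and node logs.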
Yes, both the node and the pv are in the same AZ (us-west-2b). This is a one-master, two-node setup.

NAME                               READY     STATUS              RESTARTS   AGE       IP            NODE
cakephp-mysql-example-1-build      0/1       Completed           0          10m       172.20.1.92   ip-172-31-44-228.us-west-2.compute.internal
cakephp-mysql-example-1-deploy     1/1       Running             0          10m       172.20.2.67   ip-172-31-44-229.us-west-2.compute.internal
cakephp-mysql-example-1-hook-pre   0/1       CrashLoopBackOff    6          10m       172.20.1.93   ip-172-31-44-228.us-west-2.compute.internal
mysql-1-1mhws                      0/1       ContainerCreating   0          7m        <none>        ip-172-31-44-228.us-west-2.compute.internal
mysql-1-deploy                     1/1       Running             0          7m        172.20.2.68   ip-172-31-44-229.us-west-2.compute.internal

The logs are located here: http://file.rdu.redhat.com/schituku/logs/
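For the record, the AZ match can also be cross-checked from the CLI by comparing the zone labels on the PV and the node. This assumes the standard failure-domain.beta.kubernetes.io/zone label that the AWS cloud provider applies on this release:

oc get pv pvc-922c52d6-a12c-11e6-b735-02def2788f0d --show-labels
oc get node ip-172-31-44-228.us-west-2.compute.internal --show-labels | grep zone

Both should report us-west-2b here.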
I think this particular problem was fixed by https://github.com/openshift/origin/pull/11620
Siva, the fix was merged into OSE v3.4.0.19. Can you try that version (or newer) and let me know if this still happens?
Hemant, I verified the bug on version:

openshift v3.4.0.19+346a31d
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

It is fixed; closing the bug now.
Tested on:

openshift v3.4.0.19+346a31d
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

The pods come up fine and the volume shows in the "in-use" state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0066