Bug 1390758 - AWS volume not getting attached to the node even when PVC is bound
Summary: AWS volume not getting attached to the node even when PVC is bound
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Hemant Kumar
QA Contact: Chao Yang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-01 20:39 UTC by Siva Reddy
Modified: 2017-03-08 18:43 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The cloud provider was not getting initialized properly. Consequence: Features that require cloud provider API access, such as PVC creation, were not working. Fix: https://github.com/openshift/origin/pull/11620/files fixes cloud provider initialization on nodes. Result:
Clone Of:
Environment:
Last Closed: 2017-01-18 12:48:21 UTC
Target Upstream Version:
Embargoed:


Attachments
The screen shot of AWS volumes showing that the Volume is not attached to node/instance (56.93 KB, image/png)
2016-11-01 20:39 UTC, Siva Reddy
The json file used in "requested info" (10.51 KB, text/plain)
2016-11-02 18:51 UTC, Siva Reddy


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:0066 0 normal SHIPPED_LIVE Red Hat OpenShift Container Platform 3.4 RPM Release Advisory 2017-01-18 17:23:26 UTC

Description Siva Reddy 2016-11-01 20:39:30 UTC
Created attachment 1216244 [details]
The screen shot of AWS volumes showing that the Volume is not attached to node/instance

Description of problem:
     Creating a PVC in a project on AWS does not attach the AWS volume to the node/instance.

Version-Release number of selected component (if applicable):

openshift v3.4.0.18+ada983f
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

How reproducible:
Always/Every time

Steps to Reproduce:
1) Create a new project.
2) oc create -f pvc.yaml (the YAML is pasted in the additional info).
3) oc get pv and oc get pvc show the volume created and bound (output pasted below):
root@ip-172-31-15-159: ~/svt/application_performance/osperf # oc project
Using project "pvcproj" on server "https://ip-172-31-15-159.us-west-2.compute.internal:8443".
root@ip-172-31-15-159: ~/svt/application_performance/osperf # oc get pvc 
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim1    Bound     pvc-57c98d2d-a06b-11e6-b735-02def2788f0d   3Gi        RWO           53m
root@ip-172-31-15-159: ~/svt/application_performance/osperf # oc get pv | grep pvcproj
pvc-57c98d2d-a06b-11e6-b735-02def2788f0d   3Gi        RWO           Delete          Bound     pvcproj/claim1                                     53m
 
4) Go to the Volumes section of the AWS console: the volume is there but is not attached to the node (PNG screenshot attached).

Actual results:
  The volume is not attached to the node and status is "available"

Expected results:
  The volume should be attached to the node and status should be "in-use"

Additional info:
  The YAML file for the PVC is:

kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
  name: "claim1"
  annotations:
    volume.alpha.kubernetes.io/storage-class: "foo"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "3Gi"

Comment 1 Bradley Childs 2016-11-02 17:44:25 UTC
The AWS volume will not attach to an instance until the PVC is used in a pod. When the pod is deployed and running on a node, the volume will show as attached to the instance.
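
For reference, a minimal pod along these lines is enough to trigger the attach once the claim is bound (the pod name and image here are illustrative, not taken from this report):

kind: "Pod"
apiVersion: "v1"
metadata:
  name: "pvc-attach-test"
spec:
  containers:
    - name: "busybox"
      image: "busybox"
      command: ["sleep", "3600"]
      volumeMounts:
        - name: "data"
          mountPath: "/data"
  volumes:
    - name: "data"
      persistentVolumeClaim:
        claimName: "claim1"

Once this pod is scheduled and running on a node, the EBS volume backing claim1 should switch to "in-use" and show that node's instance ID in the console.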

Comment 3 Siva Reddy 2016-11-02 18:51:07 UTC
Created attachment 1216707 [details]
The json file used in "requested info"

Comment 4 Siva Reddy 2016-11-02 18:52:33 UTC
Brad,
   here are the complete steps to reproduce the scenario:
1. oc new-project cakepv
2. oc process -f cakephp-mysql-pv.json | oc create -f -
(the json file is attached to the bug)

service "cakephp-mysql-example" created
route "cakephp-mysql-example" created
imagestream "cakephp-mysql-example" created
buildconfig "cakephp-mysql-example" created
persistentvolumeclaim "mysql" created
deploymentconfig "cakephp-mysql-example" created
service "mysql" created
deploymentconfig "mysql" created

3. oc get pv and oc get pvc show that the PV and PVC get created and bound:

root@ip-172-31-15-159: ~/svt/openshift_scalability/content/quickstarts/cakephp # oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
mysql     Bound     pvc-922c52d6-a12c-11e6-b735-02def2788f0d   1Gi        RWO           54s
root@ip-172-31-15-159: ~/svt/openshift_scalability/content/quickstarts/cakephp # oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pvc-922c52d6-a12c-11e6-b735-02def2788f0d   1Gi        RWO           Delete          Bound     cakephppv/mysql             55s

4. But the pods never scale up: the mysql pod either never comes up, or if we do oc rollout latest it gets stuck in ContainerCreating.
root@ip-172-31-15-159: ~/svt/openshift_scalability/content/quickstarts/cakephp # oc rollout latest dc/mysql
deploymentconfig "mysql" rolled out
root@ip-172-31-15-159: ~/svt/openshift_scalability/content/quickstarts/cakephp # oc get pods
NAME                               READY     STATUS              RESTARTS   AGE
cakephp-mysql-example-1-build      0/1       Completed           0          3m
cakephp-mysql-example-1-deploy     1/1       Running             0          2m
cakephp-mysql-example-1-hook-pre   0/1       CrashLoopBackOff    4          2m
mysql-1-deploy                     1/1       Running             0          3s
mysql-1-k005l                      0/1       ContainerCreating   0          0s
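
(The events at the bottom of oc describe pod are usually the quickest way to see why a pod is stuck in ContainerCreating; in a case like this, one would expect volume attach/mount errors there.)

oc describe pod mysql-1-k005l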

Comment 6 Siva Reddy 2016-11-03 14:15:50 UTC
Yes, both the node and the PV are in the same AZ (us-west-2b). This is a one-master, two-node setup.
NAME                               READY     STATUS              RESTARTS   AGE       IP            NODE
cakephp-mysql-example-1-build      0/1       Completed           0          10m       172.20.1.92   ip-172-31-44-228.us-west-2.compute.internal
cakephp-mysql-example-1-deploy     1/1       Running             0          10m       172.20.2.67   ip-172-31-44-229.us-west-2.compute.internal
cakephp-mysql-example-1-hook-pre   0/1       CrashLoopBackOff    6          10m       172.20.1.93   ip-172-31-44-228.us-west-2.compute.internal
mysql-1-1mhws                      0/1       ContainerCreating   0          7m        <none>        ip-172-31-44-228.us-west-2.compute.internal
mysql-1-deploy                     1/1       Running             0          7m        172.20.2.68   ip-172-31-44-229.us-west-2.compute.internal

the logs are located here: http://file.rdu.redhat.com/schituku/logs/

Comment 8 Hemant Kumar 2016-11-03 19:17:36 UTC
I think this particular problem was fixed by 
https://github.com/openshift/origin/pull/11620
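
For context, on OpenShift 3.x nodes the cloud provider is wired up through kubeletArguments in /etc/origin/node/node-config.yaml; a typical AWS stanza looks roughly like this (paths are illustrative and vary per install):

kubeletArguments:
  cloud-provider:
    - "aws"
  cloud-config:
    - "/etc/origin/cloudprovider/aws.conf"

If that initialization fails on a node, attach calls for dynamically provisioned EBS volumes are never issued for pods scheduled there, which would match the symptoms above.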

Comment 9 Hemant Kumar 2016-11-03 19:20:47 UTC
Siva, the fix was merged into OSE v3.4.0.19. Can you try that version (or newer) and let me know if this still happens?

Comment 10 Siva Reddy 2016-11-03 20:42:45 UTC
Hemant, I verified the bug on version:
 openshift v3.4.0.19+346a31d
 kubernetes v1.4.0+776c994
 etcd 3.1.0-rc.0

It is fixed; closing the bug now.

Comment 11 Siva Reddy 2016-11-03 20:44:52 UTC
Tested on 
openshift v3.4.0.19+346a31d
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

   The pods come up fine and the volume shows as "in-use".

Comment 13 errata-xmlrpc 2017-01-18 12:48:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0066

