Bug 1390758

| Field | Value |
| --- | --- |
| Summary | AWS volume not getting attached to the node even when PVC is bound |
| Product | OpenShift Container Platform |
| Reporter | Siva Reddy <schituku> |
| Component | Storage |
| Assignee | Hemant Kumar <hekumar> |
| Status | CLOSED ERRATA |
| QA Contact | Chao Yang <chaoyang> |
| Severity | high |
| Docs Contact | |
| Priority | unspecified |
| Version | 3.4.0 |
| CC | aos-bugs, bchilds, hchen, mifiedle, schituku, tdawson |
| Target Milestone | --- |
| Keywords | Reopened |
| Target Release | --- |
| Hardware | x86_64 |
| OS | Linux |
| Whiteboard | |
| Fixed In Version | |
| Doc Type | Bug Fix |
| Doc Text | (see below) |
| Story Points | --- |
| Clone Of | |
| Environment | |
| Last Closed | 2017-01-18 12:48:21 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Attachments | |

Doc Text:

Cause: The cloud provider was not being initialized properly on nodes.

Consequence: Features that require cloud provider API access, such as PVC creation, were not working.

Fix: https://github.com/openshift/origin/pull/11620/files fixes cloud provider initialization on nodes.

Result:
Description Siva Reddy 2016-11-01 20:39:30 UTC
The AWS volume will not attach to an instance until the PVC is actually used by a pod. Once the pod is deployed and running on a node, the volume shows as attached to the instance.

Created attachment 1216707 [details]
The json file used in "requested info"
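For context on the expected behavior: with dynamic EBS provisioning, the volume backing a bound claim stays detached until a pod that references the claim is scheduled onto a node. Below is a minimal sketch of a claim plus a consuming pod; the names, image, and size are illustrative assumptions and are not taken from the attached template.

```yaml
# Illustrative sketch only: a claim and a pod that consumes it.
# The EBS volume backing the claim is attached to a node only once
# this pod is scheduled there; a Bound PVC alone does not attach it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql                 # hypothetical name, mirrors the template's claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql-test            # hypothetical pod, not part of the attached json
spec:
  containers:
    - name: mysql
      image: openshift/mysql-55-centos7   # illustrative image; required env vars omitted
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mysql      # attach is triggered when this pod lands on a node
```

Until such a pod is scheduled, `oc get pv` showing the volume as Bound does not imply it is attached to any EC2 instance.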
Brad, here is the entire scenario with steps to reproduce:

1. `oc new-project cakepv`

2. `oc process -f cakephp-mysql-pv.json | oc create -f -` (the json file is attached to the bug)

    service "cakephp-mysql-example" created
    route "cakephp-mysql-example" created
    imagestream "cakephp-mysql-example" created
    buildconfig "cakephp-mysql-example" created
    persistentvolumeclaim "mysql" created
    deploymentconfig "cakephp-mysql-example" created
    service "mysql" created
    deploymentconfig "mysql" created

3. `oc get pv` and `oc get pvc` show that the PV and PVC get created and bound:

    root@ip-172-31-15-159: ~/svt/openshift_scalability/content/quickstarts/cakephp # oc get pvc
    NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    mysql     Bound     pvc-922c52d6-a12c-11e6-b735-02def2788f0d   1Gi        RWO           54s
    root@ip-172-31-15-159: ~/svt/openshift_scalability/content/quickstarts/cakephp # oc get pv
    NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
    pvc-922c52d6-a12c-11e6-b735-02def2788f0d   1Gi        RWO           Delete          Bound     cakephppv/mysql             55s

4. But the pods never scale, and the mysql pod either never comes up or, after `oc rollout latest`, gets stuck in ContainerCreating:

    root@ip-172-31-15-159: ~/svt/openshift_scalability/content/quickstarts/cakephp # oc rollout latest dc/mysql
    deploymentconfig "mysql" rolled out
    root@ip-172-31-15-159: ~/svt/openshift_scalability/content/quickstarts/cakephp # oc get pods
    NAME                               READY     STATUS              RESTARTS   AGE
    cakephp-mysql-example-1-build      0/1       Completed           0          3m
    cakephp-mysql-example-1-deploy     1/1       Running             0          2m
    cakephp-mysql-example-1-hook-pre   0/1       CrashLoopBackOff    4          2m
    mysql-1-deploy                     1/1       Running             0          3s
    mysql-1-k005l                      0/1       ContainerCreating   0          0s

Yes, both the node and the PV are in the same AZ (us-west-2b). This is a one-master, two-node setup.

    NAME                               READY     STATUS              RESTARTS   AGE       IP            NODE
    cakephp-mysql-example-1-build      0/1       Completed           0          10m       172.20.1.92   ip-172-31-44-228.us-west-2.compute.internal
    cakephp-mysql-example-1-deploy     1/1       Running             0          10m       172.20.2.67   ip-172-31-44-229.us-west-2.compute.internal
    cakephp-mysql-example-1-hook-pre   0/1       CrashLoopBackOff    6          10m       172.20.1.93   ip-172-31-44-228.us-west-2.compute.internal
    mysql-1-1mhws                      0/1       ContainerCreating   0          7m        <none>        ip-172-31-44-228.us-west-2.compute.internal
    mysql-1-deploy                     1/1       Running             0          7m        172.20.2.68   ip-172-31-44-229.us-west-2.compute.internal

The logs are located here: http://file.rdu.redhat.com/schituku/logs/

I think this particular problem was fixed by https://github.com/openshift/origin/pull/11620

Siva, the fix was merged into OSE v3.4.0.19. Can you try that version (or newer) and let me know if this still happens?

Hemant, I verified the bug on version openshift v3.4.0.19+346a31d, kubernetes v1.4.0+776c994, etcd 3.1.0-rc.0. It is fixed, and I am closing the bug now.

Tested on openshift v3.4.0.19+346a31d, kubernetes v1.4.0+776c994, etcd 3.1.0-rc.0: the pods come up fine and the volume shows in the "in-use" state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0066
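For reference, the cloud provider initialization that https://github.com/openshift/origin/pull/11620 addresses depends on the node being configured for the AWS cloud provider. A minimal sketch of how that stanza typically appears in /etc/origin/node/node-config.yaml on OCP 3.x follows; the file path and the cloud-config location are assumptions for illustration and are not taken from this bug.

```yaml
# Illustrative node-config.yaml fragment (assumed OCP 3.x layout).
# These kubelet arguments enable the AWS cloud provider on the node;
# without a working cloud provider the node cannot call the EC2 API
# to attach EBS volumes.
kubeletArguments:
  cloud-provider:
    - "aws"
  cloud-config:
    - "/etc/origin/cloudprovider/aws.conf"   # assumed path; varies by install
```

If this configuration is missing or the provider fails to initialize, the node cannot attach EBS volumes, which matches the symptom above: the PVC stays Bound while the pod is stuck in ContainerCreating.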