Bug 1273739 - Event shows "Cloud provider not initialized properly" when creating pod with cinder PV
Summary: Event shows "Cloud provider not initialized properly" when creating pod with cinder PV
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.0.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Mark Turansky
QA Contact: Liang Xia
URL:
Whiteboard:
Depends On:
Blocks: 1267746
Reported: 2015-10-21 07:18 UTC by Jianwei Hou
Modified: 2016-01-26 19:16 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-01-26 19:16:32 UTC
Target Upstream Version:




Links:
Red Hat Product Errata RHSA-2016:0070 (SHIPPED_LIVE): Important: Red Hat OpenShift Enterprise 3.1.1 bug fix and enhancement update (last updated 2016-01-27 00:12:41 UTC)

Description Jianwei Hou 2015-10-21 07:18:57 UTC
Description of problem:
Create a PV and PVC for a cinder volume, then create a pod that mounts the PV. The pod cannot be created successfully because the cloud provider is not initialized properly.

In Kubernetes, we start kube-apiserver and kubelet with the flags '--cloud-provider' and '--cloud-config' so that the OpenStack cloud provider is initialized properly. In OpenShift, I think we can do the same thing to support cinder as part of the installation work, or provide documentation for a manual setup.
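For reference, a minimal sketch of what those two pieces typically look like for the Kubernetes OpenStack provider; every value below (endpoint, credentials, region) is a placeholder, not taken from this report:

```ini
# Sketch of /etc/cloud.conf for the OpenStack cloud provider.
# kube-apiserver and kubelet are then started with:
#   --cloud-provider=openstack --cloud-config=/etc/cloud.conf
[Global]
auth-url = http://keystone.example.com:5000/v2.0
username = demo
password = secret
tenant-name = demo
region = RegionOne
```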

Version-Release number of selected component (if applicable):
openshift v3.0.2.901-61-g568adb6
kubernetes v1.1.0-alpha.1-653-g86b4e77

How reproducible:
Always

Steps to Reproduce:
1.  Create PV and PVC
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/cinder/pv-rwo-recycle.json
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/cinder/pvc-rwo.json

2. Create pod
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/cinder/pod.json

3. oc get pods; oc get events
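For readers without access to the test files above, a cinder-backed PV definition has roughly this shape (a sketch only; the name, size, and volume ID are placeholders, not the actual contents of pv-rwo-recycle.json):

```json
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": { "name": "cinder-pv" },
  "spec": {
    "capacity": { "storage": "1Gi" },
    "accessModes": ["ReadWriteOnce"],
    "persistentVolumeReclaimPolicy": "Recycle",
    "cinder": {
      "volumeID": "REPLACE-WITH-CINDER-VOLUME-UUID",
      "fsType": "ext4"
    }
  }
}
```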

Actual results:
Events indicated a failed mount:

9s          9s         1         cinderpd      Pod                                                         Scheduled          {scheduler }                                     Successfully assigned cinderpd to openshift-140.lab.eng.nay.redhat.com
10s         0s         2         cinderpd      Pod                                                         FailedSync         {kubelet openshift-140.lab.eng.nay.redhat.com}   Error syncing pod, skipping: Cloud provider not initialized properly
10s         0s         2         cinderpd      Pod                                                         FailedMount        {kubelet openshift-140.lab.eng.nay.redhat.com}   Unable to mount volumes for pod "cinderpd_jhou": Cloud provider not initialized properly


Expected results:
Should be able to create the pod successfully

Additional info:

Comment 2 Jianwei Hou 2015-10-21 07:50:21 UTC
Sorry, I wasn't aware that the doc is here https://github.com/jsafrane/openshift-docs/blob/195df5f3a67e267d7b67d0511e8e72abc50cd624/admin_guide/configuring_openstack.adoc

I followed the doc to restart node services, and when I run 'service atomic-openshift-node status -l', I got the error:

Oct 21 15:44:27 openshift-140.lab.eng.nay.redhat.com atomic-openshift-node[9392]: I1021 15:44:27.039291    9392 openstack.go:202] Claiming to support Instances
Oct 21 15:44:27 openshift-140.lab.eng.nay.redhat.com atomic-openshift-node[9392]: E1021 15:44:27.089563    9392 kubelet.go:845] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object
Oct 21 15:44:34 openshift-140.lab.eng.nay.redhat.com atomic-openshift-node[9392]: I1021 15:44:34.105614    9392 openstack.go:202] Claiming to support Instances
Oct 21 15:44:34 openshift-140.lab.eng.nay.redhat.com atomic-openshift-node[9392]: E1021 15:44:34.153838    9392 kubelet.go:845] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object
Oct 21 15:44:41 openshift-140.lab.eng.nay.redhat.com atomic-openshift-node[9392]: I1021 15:44:41.170865    9392 openstack.go:202] Claiming to support Instances
Oct 21 15:44:41 openshift-140.lab.eng.nay.redhat.com atomic-openshift-node[9392]: E1021 15:44:41.217828    9392 kubelet.go:845] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object
Oct 21 15:44:48 openshift-140.lab.eng.nay.redhat.com atomic-openshift-node[9392]: I1021 15:44:48.237796    9392 openstack.go:202] Claiming to support Instances
Oct 21 15:44:48 openshift-140.lab.eng.nay.redhat.com atomic-openshift-node[9392]: E1021 15:44:48.344387    9392 kubelet.go:845] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object
Oct 21 15:44:55 openshift-140.lab.eng.nay.redhat.com atomic-openshift-node[9392]: I1021 15:44:55.362615    9392 openstack.go:202] Claiming to support Instances
Oct 21 15:44:55 openshift-140.lab.eng.nay.redhat.com atomic-openshift-node[9392]: E1021 15:44:55.413861    9392 kubelet.go:845] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object


It seems this issue https://github.com/kubernetes/kubernetes/issues/13556 has not been fixed. We will need a workaround in our OSE setup to test cinder.

Comment 3 Jan Safranek 2015-10-21 08:35:16 UTC
I have a patch in the pipeline which strips domain names from the hostname: https://github.com/kubernetes/kubernetes/pull/15537

I'll try to push it to Origin - tracked in https://github.com/openshift/origin/pull/5272
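The idea behind the fix can be illustrated with a short sketch (this is not the actual patch from the PRs above; `stripDomain` is a hypothetical helper name): the kubelet's node name must match the short host name the OpenStack provider reports, so the domain suffix is dropped before the lookup.

```go
package main

import (
	"fmt"
	"strings"
)

// stripDomain returns only the host portion of a fully qualified
// domain name, e.g. "openshift-140.lab.eng.nay.redhat.com" becomes
// "openshift-140". A name without a domain is returned unchanged.
func stripDomain(hostname string) string {
	return strings.Split(hostname, ".")[0]
}

func main() {
	fmt.Println(stripDomain("openshift-140.lab.eng.nay.redhat.com"))
}
```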

Comment 4 Jianwei Hou 2015-11-11 12:56:01 UTC
Verified on 
openshift v3.1.0.4
kubernetes v1.1.0-origin-1107-g4c8e6f4
etcd 2.1.

This bug is fixed now, cinder pods can be created successfully!

Comment 6 errata-xmlrpc 2016-01-26 19:16:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2016:0070

