Bug 1293732 - Node does not start after adding kubelet arg cloud provider
Status: CLOSED NOTABUG
Product: OpenShift Container Platform
Classification: Red Hat
Component: Pod
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Jan Chaloupka
QA Contact: Jianwei Hou
Depends On:
Blocks:
Reported: 2015-12-22 16:30 EST by Ryan Howe
Modified: 2016-01-07 14:21 EST (History)
CC List: 4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-01-07 14:21:45 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ryan Howe 2015-12-22 16:30:00 EST
Description of problem:
Unable to start node after adding kubeletArguments for cloud provider. 

https://docs.openshift.com/enterprise/3.1/install_config/configuring_openstack.html


Version-Release number of selected component (if applicable):
3.1

How reproducible:
100%

Steps to Reproduce:
1. Follow the docs step by step:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_openstack.html


Actual results:

┌─[✗]─[root@master1]─[~/ansible-custom]
└──> systemctl status atomic-openshift-node.service
● atomic-openshift-node.service - Atomic OpenShift Node
   Loaded: loaded (/usr/lib/systemd/system/atomic-openshift-node.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/atomic-openshift-node.service.d
           └─openshift-sdn-ovs.conf
   Active: failed (Result: start-limit) since Tue 2015-12-22 16:24:00 EST; 9s ago
     Docs: https://github.com/openshift/origin
  Process: 100614 ExecStart=/usr/bin/openshift start node --config=${CONFIG_FILE} $OPTIONS (code=exited, status=255)
 Main PID: 100614 (code=exited, status=255)

Dec 22 16:24:00 master1.bender.com systemd[1]: atomic-openshift-node.service: main process exited, code=exited, status=255/n/a
Dec 22 16:24:00 master1.bender.com systemd[1]: Failed to start Atomic OpenShift Node.
Dec 22 16:24:00 master1.bender.com systemd[1]: Unit atomic-openshift-node.service entered failed state.
Dec 22 16:24:00 master1.bender.com systemd[1]: atomic-openshift-node.service failed.
Dec 22 16:24:00 master1.bender.com systemd[1]: atomic-openshift-node.service holdoff time over, scheduling restart.
Dec 22 16:24:00 master1.bender.com systemd[1]: start request repeated too quickly for atomic-openshift-node.service
Dec 22 16:24:00 master1.bender.com systemd[1]: Failed to start Atomic OpenShift Node.
Dec 22 16:24:00 master1.bender.com systemd[1]: Unit atomic-openshift-node.service entered failed state.
Dec 22 16:24:00 master1.bender.com systemd[1]: atomic-openshift-node.service failed.



Expected results:

Node to start 


Additional info:

# cat node-config

allowDisabledDocker: false
apiVersion: v1
dnsDomain: cluster.local
dockerConfig:
  execHandlerName: ""
iptablesSyncPeriod: "5s"
imageConfig:
  format: openshift3/ose-${component}:${version}
  latest: false
kind: NodeConfig
kubeletArguments:
  cloud-provider:
    - "openstack"
  cloud-config:
    - "/etc/cloud.conf"
masterKubeConfig: system:node:master1.bender.com.kubeconfig
networkPluginName: redhat/openshift-ovs-subnet
# networkConfig struct introduced in origin 1.0.6 and OSE 3.0.2 which
# deprecates networkPluginName above. The two should match.
networkConfig:
   mtu: 1400
   networkPluginName: redhat/openshift-ovs-subnet
nodeName: master1.bender.com
podManifestConfig:
servingInfo:
  bindAddress: 0.0.0.0:10250
  certFile: server.crt
  clientCA: ca.crt
  keyFile: server.key
volumeDirectory: /var/lib/origin/openshift.local.volumes


cat /etc/cloud.conf
[Global]
auth-url = http://10.10.73.4:5000/v2.0
username = user
password = password
tenant-id = f11e189d8fcc4866a7d1a6b683355aa6 
region = nova
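
For reference, the resolution (comment 3) was that the `region` value above was wrong: it must match the region name registered in Keystone for this cloud, not a service name like `nova`. A corrected sketch of the file, with the same placeholder credentials; `RegionOne` is a common OpenStack default region name and is an assumption here -- confirm the real name with your OpenStack administrator or the OpenStack CLI:

```ini
# /etc/cloud.conf -- working-config sketch (credentials are placeholders;
# "RegionOne" is an assumed region name, check your Keystone catalog)
[Global]
auth-url = http://10.10.73.4:5000/v2.0
username = user
password = password
tenant-id = f11e189d8fcc4866a7d1a6b683355aa6
region = RegionOne
```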
Comment 1 Paul Weil 2015-12-23 10:02:39 EST
Do you have the actual error that is causing the node to fail to start?
Comment 2 Jan Chaloupka 2016-01-06 14:06:00 EST
If you run 'start node --config=node-config' from the command line, what errors can you see?
Comment 3 Ryan Howe 2016-01-07 14:21:45 EST
Sorry for the noise. I was testing without configuring the master first.
The node started successfully after changing the region to the correct region name and configuring the masters first, per the documentation.

Closing as NOTABUG.
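
For anyone hitting the same failure: the linked docs configure the masters before the nodes, with matching cloud-provider arguments in master-config.yaml. A sketch of that stanza, mirroring the node config shown above (field names follow the OSE 3.1 OpenStack docs; verify against your version):

```yaml
# master-config.yaml (excerpt) -- sketch; verify field names for your release
kubernetesMasterConfig:
  apiServerArguments:
    cloud-provider:
      - "openstack"
    cloud-config:
      - "/etc/cloud.conf"
  controllerArguments:
    cloud-provider:
      - "openstack"
    cloud-config:
      - "/etc/cloud.conf"
```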
