Bug 1315472 - kube-controller-manager ignores --master argument
Summary: kube-controller-manager ignores --master argument
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: kubernetes
Version: 23
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Jan Chaloupka
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-03-07 20:26 UTC by Dusty Mabe
Modified: 2019-12-11 03:02 UTC
7 users

Fixed In Version: kubernetes-1.2.0-0.15.alpha6.gitf0cd09a.fc23
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-04-01 00:28:07 UTC
Type: Bug
Embargoed:



Description Dusty Mabe 2016-03-07 20:26:21 UTC
Description of problem:

kube-controller-manager ignores --master argument


Version-Release number of selected component (if applicable):
[root@f23 kubernetes]# rpm -qa | grep kubernetes | sort
kubernetes-1.2.0-0.13.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-client-1.2.0-0.13.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-master-1.2.0-0.13.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-node-1.2.0-0.13.alpha6.gitf0cd09a.fc23.x86_64

How reproducible:
Always

Steps to Reproduce:
After setting up the system as you normally would (creating certs, etc.), observe that the kube-controller-manager service ignores the '--master' argument on the command line.
You can reproduce it with this command:
```
/usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080 --service_account_private_key_file=/etc/pki/kube-apiserver/serviceaccount.key
```

I set up my systems using the steps in this Ansible playbook:
https://github.com/dustymabe/vagrantdirs/blob/master/f23/playbook.yml#L62
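For reference, in the Fedora packaging the flag is normally assembled from the sysconfig-style files under /etc/kubernetes; the sketch below shows a typical layout (file names and variables follow the usual packaging convention and are not an exact copy of the affected system):

```
# /etc/kubernetes/config -- shared settings for all Kubernetes services
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_MASTER="--master=http://127.0.0.1:8080"

# /etc/kubernetes/controller-manager -- controller-manager-specific arguments
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/pki/kube-apiserver/serviceaccount.key"
```

The systemd unit expands these variables into the command line, which is how you end up with the invocation shown in the reproduce step above.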


Actual results:

The following is the output you get. Note the "Neither --kubeconfig nor --master was specified" warning, which indicates it did not recognize the --master=http://127.0.0.1:8080 argument we provided:

```
[root@f23 kubernetes]# /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080 --service_account_private_key_file=/etc/pki/kube-apiserver/serviceaccount.key 
W0307 20:07:44.007893   13698 client_config.go:352] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W0307 20:07:44.008076   13698 client_config.go:357] error creating inClusterConfig, falling back to default config: %vunable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
I0307 20:07:44.008618   13698 plugins.go:71] No cloud provider specified.
I0307 20:07:44.010789   13698 replication_controller.go:185] Starting RC Manager
I0307 20:07:44.012329   13698 nodecontroller.go:134] Sending events to api server.
E0307 20:07:44.012490   13698 controllermanager.go:212] Failed to start service controller: ServiceController should not be run without a cloudprovider.
I0307 20:07:44.012519   13698 controllermanager.go:225] allocate-node-cidrs set to false, node controller not creating routes
I0307 20:07:44.023177   13698 controllermanager.go:258] Starting extensions/v1beta1 apis
I0307 20:07:44.023323   13698 controllermanager.go:260] Starting horizontal pod controller.
I0307 20:07:44.023500   13698 controllermanager.go:274] Starting daemon set controller
I0307 20:07:44.023730   13698 controllermanager.go:280] Starting job controller
I0307 20:07:44.023833   13698 controller.go:180] Starting Daemon Sets controller manager
```
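One thing worth checking as a possible workaround is whether --kubeconfig is honored even though --master is not. A minimal sketch follows; the kubeconfig path is arbitrary, and this assumes the broken flag handling does not also affect --kubeconfig:

```
# Write a minimal kubeconfig pointing at the local, insecure API server port.
cat > /tmp/controller-manager.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: http://127.0.0.1:8080
contexts:
- name: local
  context:
    cluster: local
current-context: local
EOF

# Confirm the binary even registers the flags, then retry with --kubeconfig.
/usr/bin/kube-controller-manager --help 2>&1 | grep -E -- '--(master|kubeconfig)'
/usr/bin/kube-controller-manager --logtostderr=true --v=0 \
  --kubeconfig=/tmp/controller-manager.kubeconfig \
  --service_account_private_key_file=/etc/pki/kube-apiserver/serviceaccount.key
```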

As a result, we see the following when trying to create a pod:

```
[root@f23 ~]# kubectl create -f /tmp/busybox.yaml 
Error from server: error when creating "/tmp/busybox.yaml": pods "busybox" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
```
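For completeness, a minimal pod manifest along the lines of the /tmp/busybox.yaml used above (the actual file was not attached to this report, so this is only a representative example):

```
cat > /tmp/busybox.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl create -f /tmp/busybox.yaml
```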

Comment 1 Jan Chaloupka 2016-03-08 16:12:48 UTC
I get the same result just by starting kube-controller-manager.service from the installed RPM, without modifying any configuration files.
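For anyone wanting to reproduce straight from the packaged unit, the standard commands are enough (package and unit names as shipped for Fedora 23):

```
dnf install -y kubernetes-master
systemctl start kube-controller-manager
journalctl -u kube-controller-manager -e   # shows the same "Neither --kubeconfig nor --master was specified" warning
```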

Comment 2 Fedora Update System 2016-03-08 19:03:00 UTC
kubernetes-1.2.0-0.15.alpha6.gitf0cd09a.fc23 has been submitted as an update to Fedora 23. https://bodhi.fedoraproject.org/updates/FEDORA-2016-a89f5ce5f4

Comment 3 Fedora Update System 2016-03-09 22:55:46 UTC
kubernetes-1.2.0-0.15.alpha6.gitf0cd09a.fc23 has been pushed to the Fedora 23 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2016-a89f5ce5f4

Comment 4 Fedora Update System 2016-04-01 00:27:43 UTC
kubernetes-1.2.0-0.15.alpha6.gitf0cd09a.fc23 has been pushed to the Fedora 23 stable repository. If problems still persist, please make note of it in this bug report.

Comment 5 hong 2019-12-11 03:02:30 UTC
ult config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 11 10:35:19 master01 kube-controller-manager: invalid configuration: no configuration has been provided
Dec 11 10:35:19 master01 systemd: kube-controller-manager.service: main process exited, code=exited, status=1/FAILURE
Dec 11 10:35:19 master01 systemd: Unit kube-controller-manager.service entered failed state.
Dec 11 10:35:19 master01 systemd: kube-controller-manager.service failed.
Dec 11 10:35:19 master01 systemd: kube-controller-manager.service holdoff time over, scheduling restart.
Dec 11 10:35:19 master01 systemd: --experimental-cluster-signing-duration=87600h0m0s': /opt/k8s/conf/kube-controller-manager.env
Dec 11 10:35:19 master01 kube-controller-manager: I1211 10:35:19.911712  124830 serving.go:319] Generated self-signed cert in-memory
Dec 11 10:35:19 master01 kube-controller-manager: W1211 10:35:19.911767  124830 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
Dec 11 10:35:19 master01 kube-controller-manager: W1211 10:35:19.911774  124830 client_config.go:546] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 11 10:35:19 master01 kube-controller-manager: invalid configuration: no configuration has been provided
Dec 11 10:35:19 master01 systemd: kube-controller-manager.service: main process exited, code=exited, status=1/FAILURE
Dec 11 10:35:19 master01 systemd: Unit kube-controller-manager.service entered failed state.
Dec 11 10:35:19 master01 systemd: kube-controller-manager.service failed.
Dec 11 10:35:20 master01 systemd: kube-controller-manager.service holdoff time over, scheduling restart.
Dec 11 10:35:20 master01 systemd: --experimental-cluster-signing-duration=87600h0m0s': /opt/k8s/conf/kube-controller-manager.env
Dec 11 10:35:20 master01 kube-controller-manager: I1211 10:35:20.527572  124838 serving.go:319] Generated self-signed cert in-memory
Dec 11 10:35:20 master01 kube-controller-manager: W1211 10:35:20.527650  124838 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
Dec 11 10:35:20 master01 kube-controller-manager: W1211 10:35:20.527660  124838 client_config.go:546] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 11 10:35:20 master01 kube-controller-manager: invalid configuration: no configuration has been provided
Dec 11 10:35:20 master01 systemd: kube-controller-manager.service: main process exited, code=exited, status=1/FAILURE
Dec 11 10:35:20 master01 systemd: Unit kube-controller-manager.service entered failed state.
Dec 11 10:35:20 master01 systemd: kube-controller-manager.service failed.
Dec 11 10:35:20 master01 systemd: kube-controller-manager.service holdoff time over, scheduling restart.
Dec 11 10:35:20 master01 systemd: start request repeated too quickly for kube-controller-manager.service
Dec 11 10:35:20 master01 systemd: Unit kube-controller-manager.service entered failed state.
Dec 11 10:35:20 master01 systemd: kube-controller-manager.service failed.
[root@master01 ansible]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@master01 ansible]#


Hi, when I install Kubernetes from the binary release, I hit the same bug.

The installed version is kubernetes-server-linux-amd64-1.16.tar.gz and my system environment is CentOS Linux release 7.6.1810 (Core).
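Since the journal above reports an assignment from /opt/k8s/conf/kube-controller-manager.env right before each failed start, it may be worth checking how the unit consumes that file. A quick sanity-check sketch (the unit name and file path are taken from the log above; everything else is a guess about the setup):

```
# Inspect how the unit builds its command line and which environment file it loads.
systemctl cat kube-controller-manager.service

# Each line in an EnvironmentFile= must be a valid single KEY="value" assignment;
# malformed lines are rejected by systemd and logged, which would leave
# --kubeconfig/--master off the final command line and produce the
# "Neither --kubeconfig nor --master was specified" warning seen above.
cat /opt/k8s/conf/kube-controller-manager.env
```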

