Bug 1315472
Summary: | kube-controller-manager ignores --master argument | |
---|---|---|---
Product: | [Fedora] Fedora | Reporter: | Dusty Mabe <dustymabe>
Component: | kubernetes | Assignee: | Jan Chaloupka <jchaloup>
Status: | CLOSED ERRATA | QA Contact: | Fedora Extras Quality Assurance <extras-qa>
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | 23 | CC: | 511173846, eparis, golang-updates, jcajka, jchaloup, nhorman, vbatts
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | kubernetes-1.2.0-0.15.alpha6.gitf0cd09a.fc23 | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2016-04-01 00:28:07 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Dusty Mabe
2016-03-07 20:26:21 UTC
I have the same result just by starting kube-controller-manager.service from the installed RPM, without any modification to the configuration files.

kubernetes-1.2.0-0.15.alpha6.gitf0cd09a.fc23 has been submitted as an update to Fedora 23. https://bodhi.fedoraproject.org/updates/FEDORA-2016-a89f5ce5f4

kubernetes-1.2.0-0.15.alpha6.gitf0cd09a.fc23 has been pushed to the Fedora 23 testing repository. If problems still persist, please make note of it in this bug report. See https://fedoraproject.org/wiki/QA:Updates_Testing for instructions on how to install test updates. You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2016-a89f5ce5f4

kubernetes-1.2.0-0.15.alpha6.gitf0cd09a.fc23 has been pushed to the Fedora 23 stable repository. If problems still persist, please make note of it in this bug report.
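Alongside the wiki instructions above, here is a minimal sketch of pulling the candidate build from updates-testing on Fedora 23; the package name is assumed to match the component of this bug, and repo setup may differ on your system:

```
# Pull the candidate build from updates-testing for this one transaction.
sudo dnf --enablerepo=updates-testing upgrade kubernetes

# Check which kubernetes builds are now installed before re-testing --master.
rpm -qa 'kubernetes*'
```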
A later report shows the same symptom on CentOS 7 with a binary install. The journal (the first line is truncated in the original report) shows kube-controller-manager crash-looping:

```
ult config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 11 10:35:19 master01 kube-controller-manager: invalid configuration: no configuration has been provided
Dec 11 10:35:19 master01 systemd: kube-controller-manager.service: main process exited, code=exited, status=1/FAILURE
Dec 11 10:35:19 master01 systemd: Unit kube-controller-manager.service entered failed state.
Dec 11 10:35:19 master01 systemd: kube-controller-manager.service failed.
Dec 11 10:35:19 master01 systemd: kube-controller-manager.service holdoff time over, scheduling restart.
Dec 11 10:35:19 master01 systemd: --experimental-cluster-signing-duration=87600h0m0s': /opt/k8s/conf/kube-controller-manager.env
Dec 11 10:35:19 master01 kube-controller-manager: I1211 10:35:19.911712 124830 serving.go:319] Generated self-signed cert in-memory
Dec 11 10:35:19 master01 kube-controller-manager: W1211 10:35:19.911767 124830 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
Dec 11 10:35:19 master01 kube-controller-manager: W1211 10:35:19.911774 124830 client_config.go:546] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 11 10:35:19 master01 kube-controller-manager: invalid configuration: no configuration has been provided
Dec 11 10:35:19 master01 systemd: kube-controller-manager.service: main process exited, code=exited, status=1/FAILURE
Dec 11 10:35:19 master01 systemd: Unit kube-controller-manager.service entered failed state.
Dec 11 10:35:19 master01 systemd: kube-controller-manager.service failed.
Dec 11 10:35:20 master01 systemd: kube-controller-manager.service holdoff time over, scheduling restart.
Dec 11 10:35:20 master01 systemd: --experimental-cluster-signing-duration=87600h0m0s': /opt/k8s/conf/kube-controller-manager.env
Dec 11 10:35:20 master01 kube-controller-manager: I1211 10:35:20.527572 124838 serving.go:319] Generated self-signed cert in-memory
Dec 11 10:35:20 master01 kube-controller-manager: W1211 10:35:20.527650 124838 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
Dec 11 10:35:20 master01 kube-controller-manager: W1211 10:35:20.527660 124838 client_config.go:546] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 11 10:35:20 master01 kube-controller-manager: invalid configuration: no configuration has been provided
Dec 11 10:35:20 master01 systemd: kube-controller-manager.service: main process exited, code=exited, status=1/FAILURE
Dec 11 10:35:20 master01 systemd: Unit kube-controller-manager.service entered failed state.
Dec 11 10:35:20 master01 systemd: kube-controller-manager.service failed.
Dec 11 10:35:20 master01 systemd: kube-controller-manager.service holdoff time over, scheduling restart.
Dec 11 10:35:20 master01 systemd: start request repeated too quickly for kube-controller-manager.service
Dec 11 10:35:20 master01 systemd: Unit kube-controller-manager.service entered failed state.
Dec 11 10:35:20 master01 systemd: kube-controller-manager.service failed.
```

```
[root@master01 ansible]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@master01 ansible]#
```

Hi, when I install Kubernetes from the binary release, I hit the same bug. The installed version is kubernetes-server-linux-amd64-1.16.tar.gz and my system environment is CentOS Linux release 7.6.1810 (Core).
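The warnings in the journal above show why the service crash-loops: neither --kubeconfig nor --master reached the binary, so client_config.go fell back to the in-cluster configuration, which is unavailable outside a pod, and the process exited with "no configuration has been provided". A minimal sketch of wiring the flag in through the environment file named in the log follows; the env-file path comes from the journal, but the variable name, drop-in location, binary path, and API server address are illustrative assumptions, not the commenter's actual setup:

```
# /opt/k8s/conf/kube-controller-manager.env  (path taken from the journal above;
# the variable name below is an assumed example)
KUBE_MASTER="--master=http://127.0.0.1:8080"

# Hypothetical systemd drop-in, e.g.
# /etc/systemd/system/kube-controller-manager.service.d/master.conf
[Service]
EnvironmentFile=/opt/k8s/conf/kube-controller-manager.env
ExecStart=
ExecStart=/opt/k8s/bin/kube-controller-manager $KUBE_MASTER

# Reload and restart, then confirm the flag actually reached the process:
#   systemctl daemon-reload && systemctl restart kube-controller-manager
#   ps axww | grep kube-controller-manager
```

As a side note, the journal line ending in "87600h0m0s': /opt/k8s/conf/kube-controller-manager.env" looks like a truncated systemd complaint about an environment assignment in that file, possibly unbalanced quoting, which would also prevent the flags from ever reaching the daemon; that is an inference from the log, not a confirmed diagnosis.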