| Summary: | after update kube-apiserver fails to start | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | bitchecker <ciro.deluca> |
| Component: | kubernetes | Assignee: | Jan Chaloupka <jchaloup> |
| Status: | CLOSED NOTABUG | QA Contact: | atomic-bugs <atomic-bugs> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 7.2 | CC: | aos-bugs, ciro.deluca, gouyang, jokerman, mmccomas |
| Target Milestone: | rc | Keywords: | Extras |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-03-29 11:03:14 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Is this with Atomic Enterprise Platform or OpenShift Enterprise, or simply Kubernetes on RHEL?

Simply Kubernetes on RHEL.

Hi bitchecker, what version of kubernetes are you running? Checking the latest build, KUBE_API_ARGS is empty.

What do you get when running:

```
$ rpm -q kubernetes-master
```

(In reply to Jan Chaloupka from comment #4)
> Hi bitchecker, what version of kubernetes are you running? Checking the
> latest build KUBE_API_ARGS is empty.

Hi Jan, I don't know the previous version, but I run updates periodically, so I assume KUBE_API_ARGS was empty in the previous build of this package.

(In reply to Jan Chaloupka from comment #5)
> What do you get when running
>
> $ rpm -q kubernetes-master

kubernetes-master-1.2.0-0.6.alpha1.git8632732.el7.x86_64

So you have kubernetes-master-1.2.0-0.6.alpha1.git8632732.el7.x86_64 installed and KUBE_API_ARGS became non-empty? Is it possible that you accidentally updated the apiserver config file between updates? Checking previous builds of kubernetes, KUBE_API_ARGS is still empty.

When you update kubernetes, do you use Ansible scripts to install/deploy your cluster? It is possible a playbook modified the config file.

No, that is not possible. I kept a copy of the config file from when I installed the environment. KUBE_API_ARGS was empty at install time, and after this update I found:

```
KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"
```

The update procedure moved the previous version of /etc/kubernetes/apiserver to /etc/kubernetes/apiserver.rpmnew.

Can you repeat the process? Install the previous version of kubernetes, update to kubernetes-master-1.2.0-0.6.alpha1.git8632732.el7.x86_64, and check the apiserver config file?

No, sorry, I can't create other instances on our servers. But in my post-install config backup I see this variable empty.

The previous released version of kubernetes is kubernetes-1.0.3-0.2.gitb9a88a7.el7. KUBE_API_ARGS is set empty there.
The release after that is kubernetes-1.2.0-0.6.alpha1.git8632732.el7, again with an empty KUBE_API_ARGS. The next release of kubernetes has KUBE_API_ARGS empty as well.

How do you upgrade your kubernetes? By running yum update, or via Atomic Host?

(In reply to Jan Chaloupka from comment #12)
> How do you upgrade your kubernetes? By running yum update or by AH?

```
yum clean all && yum distro-sync -y
```

The default content of /etc/kubernetes/apiserver looks like this:

```
# rpm2cpio kubernetes-master-1.2.0-0.6.alpha1.git8632732.el7.x86_64.rpm | cpio -idv
# cat ./etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
```

Who else could have access to /etc/kubernetes/apiserver?
I cannot reproduce the issue with a vanilla installation of RHEL, updating from kubernetes-1.0.3-0.2.gitb9a88a7.el7 to 1.2.0-0.6.alpha1.git8632732.el7 and then to the current build to be released. There must be another player in the game.
If you are using pure kubernetes (no Atomic Host), I can think of some script that could possibly update /etc/kubernetes/apiserver. Still, kubernetes itself has no power to update its own configuration files.
> the update procedure moved the previous version of /etc/kubernetes/apiserver
> to /etc/kubernetes/apiserver.rpmnew
/etc/kubernetes/apiserver.rpmnew does not get generated as long as /etc/kubernetes/apiserver is unchanged between updates, even if the new update changes the configuration file itself. The fact that an .rpmnew file was generated means someone or something changed the configuration file after the rpm was installed.
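One way to check the .rpmnew explanation above is rpm's own verification output: `rpm -V <package>` prints one line per file that differs from the packaged version, with a `c` attribute marking config files and a `5` flag marking a changed digest. The sketch below (hedged: the helper name and the piped sample line are illustrative, not from the report; on a real host you would pipe `rpm -V kubernetes-master` into it) filters that output down to a modified config file:

```shell
#!/bin/sh
# Sketch: spot a locally modified config file in `rpm -V` output.
# Each output line looks like "S.5....T.  c /path/to/file":
#   - "5" in the flags column means the file's digest changed after install,
#   - the "c" attribute marks it as a config file.
# A changed config file is exactly what makes rpm leave the packaged
# default behind as /etc/kubernetes/apiserver.rpmnew on update.
modified_config() {
    awk -v path="$1" '$2 == "c" && $3 == path && $1 ~ /5/'
}

# Illustrative sample line; on a real host run:
#   rpm -V kubernetes-master | modified_config /etc/kubernetes/apiserver
printf 'S.5....T.  c /etc/kubernetes/apiserver\n' \
    | modified_config /etc/kubernetes/apiserver
```

If this prints a line for /etc/kubernetes/apiserver, the file was edited after the rpm was installed, which matches the diagnosis above.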
Description of problem:

I updated my kubernetes cluster, and after the update I am running:

```
# kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
```

but kube-apiserver fails to start. I noticed that in /etc/kubernetes/apiserver the previous version had:

```
KUBE_API_ARGS=""
```

but in the updated version I found:

```
KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"
```

The problem is that this file does not exist and the update procedure does not create it! I created it with:

```
openssl genrsa -out /tmp/serviceaccount.key 2048
```

and after that the service started.
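The workaround above can be sketched end to end. The key path `/tmp/serviceaccount.key` is the value found in the updated config; the `openssl rsa -check` verification step and the commented restart are additions not in the original report:

```shell
#!/bin/sh
# Generate the RSA key that the updated config points kube-apiserver at
# (--service_account_key_file=/tmp/serviceaccount.key), then sanity-check it.
KEY=/tmp/serviceaccount.key
openssl genrsa -out "$KEY" 2048
openssl rsa -in "$KEY" -check -noout   # prints "RSA key ok" for a valid key
# systemctl restart kube-apiserver     # then restart the failing service
```

Note that /tmp is a questionable location for a long-lived key (it may be cleaned on reboot), so on a real deployment you would likely point the flag at a persistent path instead.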