Bug 1320618 - after update kube-apiserver fails to start
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kubernetes
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Assignee: Jan Chaloupka
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-03-23 16:11 UTC by bitchecker
Modified: 2016-03-29 11:03 UTC (History)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-03-29 11:03:14 UTC
Target Upstream Version:



Description bitchecker 2016-03-23 16:11:52 UTC
I've updated my Kubernetes cluster, and after the update I'm running:

# kubectl version 
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
but kube-apiserver fails to start.

I've noticed that in the file /etc/kubernetes/apiserver the previous version had KUBE_API_ARGS="", but in the updated version I found KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"

The problem is that this file doesn't exist, and the update procedure doesn't create it!

I created it with: openssl genrsa -out /tmp/serviceaccount.key 2048, and after that the service started.
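The workaround above can be sketched end to end. This is a sketch, not a supported fix: the key path is simply the value found in the updated config, regenerating the key would invalidate any previously issued service-account tokens, and the commented restart line assumes a systemd unit named kube-apiserver.

```shell
# Generate the 2048-bit RSA key that the --service_account_key_file flag
# in the updated /etc/kubernetes/apiserver points at.
key=/tmp/serviceaccount.key            # path taken from the updated config
openssl genrsa -out "$key" 2048

# Sanity-check the generated key before touching the service.
openssl rsa -in "$key" -check -noout

# systemctl restart kube-apiserver     # then restart the apiserver
```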

Comment 1 Andy Goldstein 2016-03-23 16:15:06 UTC
Is this with Atomic Enterprise Platform or OpenShift Enterprise, or simply Kubernetes on RHEL?

Comment 2 bitchecker 2016-03-23 16:16:09 UTC
Simply Kubernetes on RHEL.

Comment 4 Jan Chaloupka 2016-03-23 16:26:55 UTC
Hi bitchecker, what version of Kubernetes are you running? Checking the latest build, KUBE_API_ARGS is empty.

Comment 5 Jan Chaloupka 2016-03-23 16:28:17 UTC
What do you get when running:

$ rpm -q kubernetes-master

Comment 6 bitchecker 2016-03-23 16:32:02 UTC
(In reply to Jan Chaloupka from comment #4)
> Hi bitchecker, what version of kubernetes are you running? Checking the
> latest build KUBE_API_ARGS is empty.

hi jan,
I don't know the previous version, but I run updates periodically, so I can only suppose that the version where KUBE_API_ARGS was empty was the previous build of this package.

Comment 7 bitchecker 2016-03-23 16:32:18 UTC
(In reply to Jan Chaloupka from comment #5)
> What you get when running
> 
> $ rpm -q kubernetes-master

kubernetes-master-1.2.0-0.6.alpha1.git8632732.el7.x86_64

Comment 8 Jan Chaloupka 2016-03-23 16:43:17 UTC
So you have installed kubernetes-master-1.2.0-0.6.alpha1.git8632732.el7.x86_64, and the KUBE_API_ARGS variable became non-empty? Is it possible that you accidentally updated the apiserver config file between updates? Checking previous builds of Kubernetes, KUBE_API_ARGS is still empty.

When you update Kubernetes, do you use Ansible scripts to install/deploy your cluster? It is possible a playbook modified the config file.

Comment 9 bitchecker 2016-03-23 16:47:30 UTC
No, that's not possible. I kept a copy of the config file from when I installed the environment.

KUBE_API_ARGS was empty when I installed, and after this update I found it set to KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"

The update procedure moved the previous version of /etc/kubernetes/apiserver to /etc/kubernetes/apiserver.rpmnew

Comment 10 Jan Chaloupka 2016-03-23 16:52:37 UTC
Can you repeat the process? Install the previous version of Kubernetes, then update to kubernetes-master-1.2.0-0.6.alpha1.git8632732.el7.x86_64 and check the apiserver config file.

Comment 11 bitchecker 2016-03-23 16:56:30 UTC
No, sorry, I can't create other instances on our servers.

But in my backup of the config from right after the installation process, this variable is empty.

Comment 12 Jan Chaloupka 2016-03-23 17:00:29 UTC
The previous released version of Kubernetes is kubernetes-1.0.3-0.2.gitb9a88a7.el7, where KUBE_API_ARGS is set empty. The release after that is kubernetes-1.2.0-0.6.alpha1.git8632732.el7, again with an empty KUBE_API_ARGS. The next release of Kubernetes has KUBE_API_ARGS empty too.

How do you upgrade your kubernetes? By running yum update or by AH?

Comment 13 bitchecker 2016-03-23 18:03:26 UTC
(In reply to Jan Chaloupka from comment #12)
> How do you upgrade your kubernetes? By running yum update or by AH?

yum clean all && yum distro-sync -y

Comment 14 Guohua Ouyang 2016-03-29 07:31:33 UTC
The default content of /etc/kubernetes/apiserver looks like the below.

# rpm2cpio kubernetes-master-1.2.0-0.6.alpha1.git8632732.el7.x86_64.rpm | cpio -idv
# cat ./etc/kubernetes/apiserver 
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

Comment 15 Jan Chaloupka 2016-03-29 11:03:14 UTC
Who else could have access to /etc/kubernetes/apiserver?

I cannot reproduce the issue with a vanilla installation of RHEL, updating from kubernetes-1.0.3-0.2.gitb9a88a7.el7 to 1.2.0-0.6.alpha1.git8632732.el7 and then to the current build to be released. There must be another player in the game.

If you are using pure Kubernetes (no AH), I can imagine some script that could possibly have updated /etc/kubernetes/apiserver. Still, Kubernetes itself has no power to update its own configuration files.

> the update procedure moved the previous version of /etc/kubernetes/apiserver
> to /etc/kubernetes/apiserver.rpmnew

/etc/kubernetes/apiserver.rpmnew does not get generated as long as /etc/kubernetes/apiserver is unchanged between updates, even if the new update changes the configuration file itself. The fact that the .rpmnew file was generated means someone or something changed the configuration file after the rpm was installed.
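rpm's rule for %config(noreplace) files can be illustrated with plain files (a self-contained sketch; the paths and file contents here are invented stand-ins, not real rpm internals): when the on-disk config still matches what the old package shipped, rpm silently installs the new default; when it differs, rpm keeps the admin's edits and parks the new default as .rpmnew.

```shell
set -e
dir=$(mktemp -d)

# Stand-ins for: what the old rpm shipped, what is on disk now,
# and what the new rpm ships.
printf 'KUBE_API_ARGS=""\n'         > "$dir/old_default"
printf 'KUBE_API_ARGS="--edited"\n' > "$dir/apiserver"
printf 'KUBE_API_ARGS=""\n'         > "$dir/new_default"

# rpm's %config(noreplace) rule, roughly:
if cmp -s "$dir/old_default" "$dir/apiserver"; then
    # Unmodified since install: take the new default silently.
    cp "$dir/new_default" "$dir/apiserver"
else
    # Modified by the admin: preserve edits, park the new default.
    cp "$dir/new_default" "$dir/apiserver.rpmnew"
fi

ls "$dir"
```

Since the on-disk file differs from the old default here, the sketch ends with both apiserver (edits intact) and apiserver.rpmnew, which is exactly the situation described in comment 9.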

