
Bug 1320618

Summary: after update kube-apiserver fails to start
Product: Red Hat Enterprise Linux 7
Reporter: bitchecker <bitchecker>
Component: kubernetes
Assignee: Jan Chaloupka <jchaloup>
Status: CLOSED NOTABUG
QA Contact: atomic-bugs <atomic-bugs>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 7.2
CC: aos-bugs, bitchecker, gouyang, jokerman, mmccomas
Target Milestone: rc
Keywords: Extras
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-03-29 11:03:14 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description bitchecker 2016-03-23 16:11:52 UTC
I've updated my Kubernetes cluster, and after this update I'm running:

# kubectl version 
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
but kube-apiserver fails to start.

I've noticed that in the file /etc/kubernetes/apiserver the previous version had KUBE_API_ARGS="", but in the updated version I found KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"

The problem is that this file doesn't exist, and the update procedure doesn't create it!

I created it with: openssl genrsa -out /tmp/serviceaccount.key 2048, and after that the service started.
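The reporter's workaround can be sketched as a short script. The key path comes from the updated KUBE_API_ARGS value; the chmod and the validity check are added precautions, not part of the original report:

```shell
# Generate the 2048-bit RSA key that the updated config points at
# (path taken from KUBE_API_ARGS in /etc/kubernetes/apiserver)
openssl genrsa -out /tmp/serviceaccount.key 2048

# The key is used to sign service account tokens, so restrict it to
# root only (added precaution, not in the original workaround)
chmod 600 /tmp/serviceaccount.key

# Sanity-check that the generated file is a valid RSA private key
openssl rsa -in /tmp/serviceaccount.key -check -noout
```

After the key exists, restarting the API server (e.g. systemctl restart kube-apiserver) should let it start, as the reporter observed.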

Comment 1 Andy Goldstein 2016-03-23 16:15:06 UTC
Is this with Atomic Enterprise Platform or OpenShift Enterprise, or simply Kubernetes on RHEL?

Comment 2 bitchecker 2016-03-23 16:16:09 UTC
Simply Kubernetes on RHEL.

Comment 4 Jan Chaloupka 2016-03-23 16:26:55 UTC
Hi bitchecker, what version of Kubernetes are you running? Checking the latest build, KUBE_API_ARGS is empty.

Comment 5 Jan Chaloupka 2016-03-23 16:28:17 UTC
What do you get when running

$ rpm -q kubernetes-master

Comment 6 bitchecker 2016-03-23 16:32:02 UTC
(In reply to Jan Chaloupka from comment #4)
> Hi bitchecker, what version of kubernetes are you running? Checking the
> latest build KUBE_API_ARGS is empty.

Hi Jan,
I don't know the previous version, but I run updates periodically, so I suppose the version in which KUBE_API_ARGS was empty is the previous build of this package.

Comment 7 bitchecker 2016-03-23 16:32:18 UTC
(In reply to Jan Chaloupka from comment #5)
> What do you get when running
> 
> $ rpm -q kubernetes-master

kubernetes-master-1.2.0-0.6.alpha1.git8632732.el7.x86_64

Comment 8 Jan Chaloupka 2016-03-23 16:43:17 UTC
So you have installed kubernetes-master-1.2.0-0.6.alpha1.git8632732.el7.x86_64 and the KUBE_API_ARGS variable became non-empty? Is it possible that you accidentally updated the apiserver config file between updates? Checking previous builds of Kubernetes, KUBE_API_ARGS is still empty.

When you update Kubernetes, do you use Ansible scripts to install/deploy your cluster? It is possible a playbook modified the config file.

Comment 9 bitchecker 2016-03-23 16:47:30 UTC
No, that's not possible. I kept a copy of the config file from when I installed the environment.

KUBE_API_ARGS was empty when I installed, and after this update I found it set to KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"

The update procedure moved the previous version of /etc/kubernetes/apiserver to /etc/kubernetes/apiserver.rpmnew

Comment 10 Jan Chaloupka 2016-03-23 16:52:37 UTC
Can you repeat the process? Install the previous version of Kubernetes, then update to kubernetes-master-1.2.0-0.6.alpha1.git8632732.el7.x86_64, and check the apiserver config file?

Comment 11 bitchecker 2016-03-23 16:56:30 UTC
No, sorry, I can't create other instances on our servers.

But in my backup of the config, taken after the installation process, I see this variable empty.

Comment 12 Jan Chaloupka 2016-03-23 17:00:29 UTC
The previous released version of Kubernetes is kubernetes-1.0.3-0.2.gitb9a88a7.el7; its KUBE_API_ARGS is empty. The release after is kubernetes-1.2.0-0.6.alpha1.git8632732.el7, again with an empty KUBE_API_ARGS. The next release of Kubernetes has KUBE_API_ARGS empty too.

How do you upgrade your kubernetes? By running yum update or by AH?

Comment 13 bitchecker 2016-03-23 18:03:26 UTC
(In reply to Jan Chaloupka from comment #12)
> How do you upgrade your kubernetes? By running yum update or by AH?

yum clean all && yum distro-sync -y

Comment 14 Guohua Ouyang 2016-03-29 07:31:33 UTC
The default content of /etc/kubernetes/apiserver looks like the below.

# rpm2cpio kubernetes-master-1.2.0-0.6.alpha1.git8632732.el7.x86_64.rpm | cpio -idv
# cat ./etc/kubernetes/apiserver 
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

Comment 15 Jan Chaloupka 2016-03-29 11:03:14 UTC
Who else could have access to /etc/kubernetes/apiserver?

I cannot reproduce the issue with a vanilla installation of RHEL, updating from kubernetes-1.0.3-0.2.gitb9a88a7.el7 to 1.2.0-0.6.alpha1.git8632732.el7 and then to the current build to be released. There must be another player in the game.

If you are using pure Kubernetes (no AH), I can imagine some script that could possibly update /etc/kubernetes/apiserver. Still, Kubernetes itself has no power to update its own configuration files.

> the update procedure moved the previous version of /etc/kubernetes/apiserver
> to /etc/kubernetes/apiserver.rpmnew

/etc/kubernetes/apiserver.rpmnew is not generated as long as /etc/kubernetes/apiserver is unchanged between updates, even if the new update changes the configuration file itself. The fact that an .rpmnew file was generated means someone or something changed the configuration file after the rpm was installed.
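One way to check whether a packaged config file was modified after installation is rpm's verify mode. A sketch, assuming the package name reported in comment 7 is installed on the system being inspected:

```shell
# Verify installed files against the rpm database.  For a locally
# modified config file, rpm prints a flag string and a "c" marker,
# along the lines of:
#   S.5....T.  c /etc/kubernetes/apiserver
# where S = size differs, 5 = digest differs, T = mtime differs.
# No line for the file means it is unchanged since installation.
rpm -V kubernetes-master
```

If this shows the config file as modified, something on the host (a script, a playbook, or a user) touched it after the rpm transaction, which would also explain the .rpmnew file.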