Bug 1795395 - init container setup does not have the proper `securityContext`
Summary: init container setup does not have the proper `securityContext`
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.2.z
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.2.z
Assignee: Stefan Schimanski
QA Contact: Ke Wang
URL:
Whiteboard:
Depends On: 1795394
Blocks:
Reported: 2020-01-27 21:21 UTC by Scott Dodson
Modified: 2020-07-10 06:15 UTC (History)
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1795394
Environment:
Last Closed: 2020-06-03 09:26:03 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Github openshift cluster-kube-apiserver-operator pull 724 None closed Bug 1795395: Properly set pod as privileged 2020-07-28 08:23:41 UTC
Red Hat Product Errata RHBA-2020:2307 None None None 2020-06-03 09:26:18 UTC

Comment 3 Ke Wang 2020-05-26 08:13:11 UTC
$ oc login https://api.ci.openshift.org --token=<token value>
$ docker login -u <docker user id> -p $(oc whoami -t)  registry.svc.ci.openshift.org
$ oc adm release info --commits registry.svc.ci.openshift.org/ocp/release:4.2.0-0.nightly-2020-05-26-002810 | grep kube-apiserver

$ git log --date local --pretty="%h %an %cd - %s" 1224485 | grep '#724'

No results

The fix was not in the latest OCP 4.2 payload. Waiting for the next one.

Comment 4 Ke Wang 2020-05-27 02:31:04 UTC
Verified with OCP build 4.2.0-0.nightly-2020-05-26-081314.

Verification steps referring to the code changes from PR https://github.com/openshift/cluster-kube-apiserver-operator/pull/724

1. Check the securityContext of the kube-apiserver pod's containers.
$ apiserver_pod=$(oc get po -n openshift-kube-apiserver | grep kube-apiserver | awk '{print $1}' | head -1)

$ oc get po $apiserver_pod -n openshift-kube-apiserver -o json | jq .spec.initContainers[0].securityContext
{
  "privileged": true
}
$ oc get po $apiserver_pod -n openshift-kube-apiserver -o json | jq .spec.containers[0].securityContext
{
  "privileged": true
}

Both the init container's and the main container's securityContext now include `privileged: true`, as expected.
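For reference, the jq output above corresponds to a stanza like the following in the kube-apiserver pod manifest. This is a sketch, not the actual manifest from PR 724: the container names and surrounding fields are assumed (the init container name `setup` is taken from the bug summary), and only the securityContext fields are confirmed by the output above.

```yaml
# Sketch of the relevant pod spec stanza after the fix (names assumed).
spec:
  initContainers:
  - name: setup                 # init container named in the bug summary
    securityContext:
      privileged: true          # the field verified by the jq check above
  containers:
  - name: kube-apiserver
    securityContext:
      privileged: true
```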

2. Check whether the related error 'failed to tryAcquireOrRenew' appears in the kube-apiserver logs.
$ apiserver_node=$(oc get po -o wide -n openshift-kube-apiserver | grep kube-apiserver | awk '{print $7}' | head -1)
$ oc debug node/$apiserver_node

After logging in to the debug pod on the apiserver node:
sh-4.2# chroot /host

sh-4.4# grep -r 'failed to tryAcquireOrRenew' /var/log/pods/openshift-kube-apiserver_kube-apiserver-*/

No results; the related error 'failed to tryAcquireOrRenew' was not found. This is as expected, so moving the bug to VERIFIED.
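The grep in step 2 can be sanity-checked offline against a sample log line, so the match behavior is clear without cluster access. The sample line below is invented for illustration; real entries live under /var/log/pods/openshift-kube-apiserver_kube-apiserver-*/ on the node.

```shell
# Invented sample of a leader-election failure line (format only illustrative).
logline='E0526 02:10:00.000000 1 leaderelection.go:306] failed to tryAcquireOrRenew: context deadline exceeded'

# Same pattern as the on-node grep; -c counts matching lines.
echo "$logline" | grep -c 'failed to tryAcquireOrRenew'   # prints 1

# On a healthy post-fix node, the same grep over the pod logs prints nothing.
```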

Comment 6 errata-xmlrpc 2020-06-03 09:26:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2307

