Bug 1679898 - openshift-kube-apiserver pod spec is wrong for the loglevel flag -v
Summary: openshift-kube-apiserver pod spec is wrong for the loglevel flag -v
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Master
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
: 4.1.0
Assignee: Michal Fojtik
QA Contact: Xingxing Xia
URL:
Whiteboard:
Depends On:
Blocks: 1680342
 
Reported: 2019-02-22 08:06 UTC by Xingxing Xia
Modified: 2019-06-04 10:44 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1750610
Environment:
Last Closed: 2019-06-04 10:44:19 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 0 None None None 2019-06-04 10:44:27 UTC

Description Xingxing Xia 2019-02-22 08:06:51 UTC
Description of problem:
openshift-kube-apiserver pod spec is wrong for the loglevel flag -v

Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-02-20-194410

How reproducible:
Always

Steps to Reproduce:
1. Check pod spec
$ oc project openshift-kube-apiserver
$ oc get po kube-apiserver-ip-10-0-136-232.ap-northeast-1.compute.internal -o yaml

2. oc rsh to pod, check entrypoint process
$ oc rsh kube-apiserver-ip-10-0-136-232.ap-northeast-1.compute.internal 
sh-4.2# ps -eF | grep root
root  ... hypershift openshift-kube-apiserver --config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml

Actual results:
Step 1 shows -v=2 is not on the same line as the hypershift command:
...
spec:
  containers:
  - args:
    - |
      mkdir -p /var/log/kube-apiserver
      chmod 0700 /var/log/kube-apiserver
      exec hypershift openshift-kube-apiserver  --config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml
    - -v=2
    command:
    - /bin/bash
    - -xec

As a result, the process in step 2 does not show the loglevel flag -v=2.
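The broken spec above passes -v=2 as a separate args entry after the `-xec` script. A minimal sketch of why that flag never reaches hypershift (using `echo` as a stand-in for the real binary): with `bash -c SCRIPT ARG...`, anything after the script string becomes the script's positional parameters, not extra arguments to commands inside the script.

```shell
# With `bash -c SCRIPT ARG...`, the trailing ARGs become the script's
# $0, $1, ... -- they are NOT appended to commands inside the script.
# So the separate "-v=2" args entry is silently swallowed.
out=$(bash -c 'exec echo hypershift-stand-in' -v=2)
echo "$out"   # prints only "hypershift-stand-in"; -v=2 landed in $0
```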

Expected results:
The pod spec should put the loglevel flag -v=2 on the same line as the hypershift command.
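A sketch of what the corrected args block could look like (illustrative only, not taken from an actual fixed build), with -v=2 on the same line as the `exec hypershift` command inside the literal block scalar:

```yaml
spec:
  containers:
  - args:
    - |-
      mkdir -p /var/log/kube-apiserver
      chmod 0700 /var/log/kube-apiserver
      exec hypershift openshift-kube-apiserver --config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml -v=2
    command:
    - /bin/bash
    - -xec
```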

Additional info:
This is found when checking https://jira.coreos.com/browse/MSTR-315 .
By comparison, checking the openshift-apiserver and kube-controller-manager pod specs and running oc rsh into them shows that their entrypoint processes do include the -v flag.

Comment 1 Michal Fojtik 2019-03-05 09:32:11 UTC
Fixed.

Comment 2 Xingxing Xia 2019-03-07 08:13:29 UTC
Failed in latest build 4.0.0-0.nightly-2019-03-06-074438:
$ oc rsh kube-apiserver-ip-10-0-169-103.us-east-2.compute.internal
sh-4.2# ps -eF | grep root
root          1      0 29 313950 620376 3 08:03 ?        00:01:23 hypershift openshift-kube-apiserver --config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml
The process still does not have the loglevel flag -v.

$ oc get pod kube-apiserver-ip-10-0-169-103.us-east-2.compute.internal -o yaml
...
  containers:
  - args:
    - |-
      mkdir -p /var/log/kube-apiserver
      chmod 0700 /var/log/kube-apiserver
      exec hypershift openshift-kube-apiserver --config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml
       -v=2
...

Here -v=2 is still on a new line instead of on the same line as the `exec` command.
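In this variant the stray " -v=2" is a second line inside the same script, after the `exec`. A minimal sketch of why it is dead code: `exec` replaces the shell process, so no line after a successful `exec` ever runs.

```shell
# Lines after a successful `exec` never execute: exec replaces the
# shell process. The lone " -v=2" line after "exec hypershift ..."
# is therefore dead code.
out=$(bash -c 'exec echo first
echo second')
echo "$out"   # prints only "first"
```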

Comment 4 Xingxing Xia 2019-03-11 09:51:26 UTC
(In reply to Michal Fojtik from comment #3)
> https://github.com/openshift/cluster-kube-apiserver-operator/pull/317 fixed
> this and removed the bash, this looks like an old image. Please make sure
> you retest with updated images?
Pasting the fix PR info is quite useful; it helps decide whether a fix has landed in the underlying test environment when verifying.
The above build 4.0.0-0.nightly-2019-03-06-074438 indeed does not include the PR. But due to bug 1687247, no new build can be installed successfully, so this bug will be checked again once bug 1687247 is resolved.

Comment 6 Xingxing Xia 2019-03-14 09:03:05 UTC
Verified in latest 4.0.0-0.nightly-2019-03-13-233958

Comment 8 errata-xmlrpc 2019-06-04 10:44:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

