Bug 1711447 - Cannot recover from bad serving cert secret on Kube api server
Summary: Cannot recover from bad serving cert secret on Kube api server
Keywords:
Status: CLOSED DUPLICATE of bug 1728754
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.1.0
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.1.z
Assignee: Luis Sanchez
QA Contact: Xingxing Xia
URL:
Whiteboard:
Depends On: 1711431
Blocks:
 
Reported: 2019-05-17 20:12 UTC by chris alfonso
Modified: 2019-11-12 15:26 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1711431
Environment:
Last Closed: 2019-11-12 15:26:16 UTC
Target Upstream Version:



Description chris alfonso 2019-05-17 20:12:07 UTC
+++ This bug was initially created as a clone of Bug #1711431 +++

Description of problem:
After specifying a bad secret (one that does not contain tls.crt/tls.key) as the serving cert for the API server, neither fixing the secret in place nor pointing the apiserver/cluster resource at a good secret resolves the issue.
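For reference, a serving cert secret the operator can consume must carry its PEM data under the tls.crt/tls.key keys. A minimal sketch (the name and certificate data are illustrative):

```yaml
# Illustrative only: a well-formed serving cert secret in openshift-config.
# The operator expects exactly these keys; a secret using e.g. crt/key
# instead is the "bad secret" that triggers this bug.
apiVersion: v1
kind: Secret
metadata:
  name: good-creds          # hypothetical name
  namespace: openshift-config
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded PEM certificate>
  tls.key: <base64-encoded PEM private key>
```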

Version-Release number of selected component (if applicable):
4.1.0-rc.4

How reproducible:
Always

Steps to Reproduce:
1. Create a serving cert secret with the wrong keys (say crt/key instead of tls.crt/tls.key) in the openshift-config namespace.

2. Modify the apiserver/cluster resource and specify the new secret as the default serving cert:

apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  servingCerts:
    defaultServingCertificate:
      name: bad-creds

3. Wait for kube-apiserver operator to apply the configuration and observe that one of the kube-apiserver pods starts crashlooping (as expected).
4. Modify the original secret to contain proper keys (tls.crt/tls.key). Wait for a change in the kube-apiserver pods.
5. Modify apiserver/cluster to point to a newly created secret that contains the right keys:

apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  servingCerts:
    defaultServingCertificate:
      name: good-creds

Wait for a change in the kube-apiserver pods.
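One way to produce the secrets for steps 1 and 4/5 (a sketch: the openssl subject is arbitrary, the secret names match the snippets above, and the oc commands assume a logged-in cluster session, so they are left as comments):

```shell
# Generate a throwaway self-signed cert/key pair for testing.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout tls.key -out tls.crt -days 1 \
  -subj "/CN=api.example.com"

# Step 1: a secret with the WRONG keys (crt/key) to trigger the bug:
#   oc create secret generic bad-creds -n openshift-config \
#     --from-file=crt=tls.crt --from-file=key=tls.key
# Step 4/5: a secret with the correct tls.crt/tls.key keys:
#   oc create secret tls good-creds -n openshift-config \
#     --cert=tls.crt --key=tls.key
```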

Actual results:

After steps 4 and 5, nothing changes. The affected kube-apiserver pod keeps crashlooping, reporting that it cannot find tls.crt.


Expected results:

After modifying the secret or the configuration, the secret should be updated on the master and the new serving cert should take effect.

Additional info:

Removing the serving certificate configuration completely:
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec: {}

and waiting for the kube-apiserver to become stable again does get around this issue.
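The same workaround can be expressed as a merge patch (a sketch; the one-liner in the comment is hypothetical and assumes a logged-in session):

```yaml
# Equivalent one-liner:
#   oc patch apiserver cluster --type=merge -p '{"spec":{"servingCerts":null}}'
# In a merge patch, setting servingCerts to null removes the field and with
# it the reference to the bad default serving cert secret.
spec:
  servingCerts: null
```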

Comment 1 Cesar Wong 2019-05-17 21:19:32 UTC
Update: this breaks every time, regardless of the certificate. Specifying a good certificate does not work either, as in:

spec:
  servingCerts:
    defaultServingCertificate:
      name: good-creds


However, using a named cert does work:

spec:
  servingCerts:
    namedCertificates:
    - names:
      - api.cewong.new-installer.openshift.com
      servingCertificate:
        name: good-creds

Comment 2 Luis Sanchez 2019-11-12 15:26:16 UTC
This functionality

*** This bug has been marked as a duplicate of bug 1728754 ***

