Description of problem:
When a kubeconfig contains cluster certificate-authority-data, each `oc login` can produce "TLS handshake error from <ingress pod IP and port>: remote error: tls: bad certificate" in the oauth-openshift pod logs. Although this error is harmless and does not affect the successful login, it can quite confuse customers, especially when many `oc login` operations are run against the cluster, so it is worth making it not appear. After all, the certificate-authority-data

Version-Release number of selected component (if applicable):
4.4.0-0.nightly-2020-03-31-215957

How reproducible:
Always

Steps to Reproduce:
1. Launch a 4.4 env and configure an htpasswd IDP.
2. In one terminal A, watch the oauth-openshift pod logs:
$ oc logs -f --tail=5 -n openshift-authentication oauth-openshift-b97947fcd-p8hxm
3. In another terminal B:
$ cp path/to/admin.kubeconfig path/to/admin.kubeconfig.copied
$ export KUBECONFIG=path/to/admin.kubeconfig.copied # this file contains cluster certificate-authority-data info
Repeatedly run:
$ oc login -u xxia1 -p redhat
4. In terminal B, try oc login with a kubeconfig file that does not have certificate-authority-data:
$ touch empty.kubeconfig
$ export KUBECONFIG=empty.kubeconfig
Repeatedly run:
$ oc login -u xxia1 -p redhat https://<api server>:6443 --insecure-skip-tls-verify

Actual results:
3. For each login with certificate-authority-data, terminal A shows one line like the following:
I0401 08:34:30.210254 1 log.go:172] http: TLS handshake error from 10.128.2.9:36202: remote error: tls: bad certificate
I0401 08:46:37.175267 1 log.go:172] http: TLS handshake error from 10.131.0.9:41046: remote error: tls: bad certificate
I0401 08:49:22.409251 1 log.go:172] http: TLS handshake error from 10.131.0.9:42866: remote error: tls: bad certificate
4. As said in the Description above, such an error should not be shown.

Expected results:
3. The error may be harmless, but it can confuse customers. It is worth making it not appear.

Additional info:
The IPs above belong to ingress pods:
$ oc get po -A -o wide | grep -E "(10.131.0.9|10.128.2.9)"
openshift-ingress router-default-656d77d7d8-gjrr6 1/1 Running 0 6h11m 10.128.2.9 ...
openshift-ingress router-default-656d77d7d8-grmx4 1/1 Running 0 6h11m 10.131.0.9 ...
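For reference, the field that makes the difference between step 3 and step 4 is the certificate-authority-data entry in the kubeconfig's cluster stanza. An illustrative snippet (server name and base64 value are placeholders, not from the affected cluster):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: example-cluster
  cluster:
    # base64-encoded CA bundle (placeholder value); the copied
    # admin.kubeconfig in step 3 has this, empty.kubeconfig does not
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...
    server: https://api.example.cluster:6443
```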
The message we see is caused by `oc` first trying to connect without the CA (that is where the bad certificate comes from), and only using the CA in a subsequent request.
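This two-phase behavior can be sketched outside the cluster. The following is a minimal standalone Go sketch (not oc's actual code) using `net/http/httptest`: a server with a self-signed certificate is contacted first by a client with no CA configured, whose rejection of the server certificate makes the server log the same "remote error: tls: bad certificate" line, and then by a client that trusts the CA, which succeeds quietly.

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
	"net/http/httptest"
	"strings"
	"time"
)

// reproduce starts an HTTPS test server with a self-signed certificate,
// connects once without trusting that certificate and once with it, and
// reports whether the server logged a "bad certificate" handshake error.
func reproduce() (untrustedFailed, trustedOK, serverLoggedBadCert bool) {
	var logBuf bytes.Buffer
	ts := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "ok")
	}))
	ts.Config.ErrorLog = log.New(&logBuf, "", log.LstdFlags) // capture handshake errors
	ts.StartTLS()

	// Phase 1: a client with no CA configured, like oc probing before it
	// applies certificate-authority-data. The client rejects the server
	// certificate and sends a "bad certificate" TLS alert, which the server
	// logs -- the same line seen in the oauth-openshift pod.
	_, err := http.Get(ts.URL)
	untrustedFailed = err != nil

	// Phase 2: a client that trusts the server's CA, like oc retrying with
	// the CA from the kubeconfig. This request succeeds without any log noise.
	resp, err := ts.Client().Get(ts.URL)
	trustedOK = err == nil && resp.StatusCode == http.StatusOK
	if resp != nil {
		resp.Body.Close()
	}

	time.Sleep(100 * time.Millisecond) // give the server time to write the log line
	ts.Close()
	serverLoggedBadCert = strings.Contains(logBuf.String(), "bad certificate")
	return
}

func main() {
	untrustedFailed, trustedOK, logged := reproduce()
	fmt.Println("untrusted client failed:", untrustedFailed)
	fmt.Println("trusted client succeeded:", trustedOK)
	fmt.Println("server logged bad certificate:", logged)
}
```

The fix on the oc side amounts to not making the first, CA-less connection attempt, so the server never sees a handshake it has to reject.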
Verified in oc 4.5.0-202004202137-8dda2e7 with the original steps: the issue is fixed and can no longer be reproduced.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409
*** Bug 1901379 has been marked as a duplicate of this bug. ***