Description of problem:
In order to configure custom certificates, the customer needed to change the masterURL value (to avoid a conflict with masterPublicURL) and therefore had to run the "redeploy certs" ansible playbook. They did this and deployed the custom/named certificates. The cluster appears to be working fine: the console and CLI work with the new certs, and pods are running normally. However, there is constant logspam in the master logs.

Version-Release number of selected component (if applicable):
3.3.0

How reproducible:
Unconfirmed

Actual results:
Nov 29 10:42:01 master01.example.com atomic-openshift-master-api[87147]: I1129 10:42:01.610152 87147 server.go:2161] http: TLS handshake error from 10.47.137.125:35469: remote error: bad certificate

Note that 10.47.137.125 is the HAProxy loadbalancer, not a master or node.

Expected results:
No such messages.

Additional info:
Used openssl to pull the cert being served by 10.47.137.125; it is signed by the OpenShift internal signer, and inspecting it shows that the Subject Alternative Names include the master URL but not the loadbalancer IP address.
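For reference, the inspection described under "Additional info" can be reproduced along these lines; this is a sketch assuming the default 3.x API port 8443, which is not confirmed in the report:

    # Pull the certificate presented on the loadbalancer address and print its SANs.
    echo | openssl s_client -connect 10.47.137.125:8443 2>/dev/null \
      | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

If the loadbalancer address does not appear in the resulting DNS/IP Address list, any client that validates the certificate against that address will reject it.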
If I'm reading this correctly the cert re-deploy doesn't account for [lb] group and that's the root cause here.
(In reply to Scott Dodson from comment #5)
> If I'm reading this correctly the cert re-deploy doesn't account for [lb]
> group and that's the root cause here.

So to verify, the certificates should have the loadbalancer hostnames in the Subject Alternative Name field?
Any client that attempts to connect without a matching CA will generate this error. There's no evidence of actual malfunction in this bug, so I'm closing it.
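For context, this is ordinary TLS behaviour: when a client cannot validate the serving certificate (wrong CA, or an address that is not in the SANs), it aborts the handshake and the master records it as a handshake error. A rough illustration; the port, CA path, and hostname below are assumptions based on a typical 3.x install, not taken from this report:

    # Client rejects the cert because 10.47.137.125 is not in the SANs;
    # the master records the aborted handshake as a TLS handshake error.
    curl --cacert /etc/origin/master/ca.crt https://10.47.137.125:8443/healthz

    # The same request via a hostname that is in the SANs validates cleanly
    # and produces no such log entry.
    curl --cacert /etc/origin/master/ca.crt https://master.example.com:8443/healthz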