Description of problem:
After the swap to etcd3, etcd reports "grpc: addrConn.resetTransport failed to create client transport: connection error." The smoke tests pass (restart master/nodes/openvswitch, oc get nodes, oc start-build, deploy pod, access service, and openshift ex diagnostics).

Version-Release number of selected component (if applicable):
atomic-openshift-3.1.1.8-1.git.0.d469026.el7aos.x86_64

How reproducible:
always

Steps to Reproduce:
1. Install OCP 3.1.1.8 with external clustered etcd.
2. Upgrade to etcd3 step by step: etcd-2.1.1 -> etcd-2.2.2 -> etcd-2.2.5 -> etcd-2.3.7 -> etcd3-3.0.3.
3. Check the etcd3 status.
4. Run the smoke tests.

Actual results:
For steps 3 and 4, the smoke tests pass, but the etcd3 service status reports the following errors:

[root@etcd20-2 ~]# systemctl status etcd -l
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2016-11-28 03:24:30 EST; 5min ago
 Main PID: 17647 (etcd)
   CGroup: /system.slice/etcd.service
           └─17647 /usr/bin/etcd --name=openshift-195.lab.eng.nay.redhat.com --data-dir=/var/lib/etcd/ --listen-client-urls=https://192.168.1.246:2379

Nov 28 03:27:18 etcd20-2.novalocal etcd[17647]: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: remote error: bad certificate"; Reconnecting to {"192.168.1.246:2379" <nil>}
Nov 28 03:27:27 etcd20-2.novalocal etcd[17647]: grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: remote error: bad certificate"; Reconnecting to {"192.168.1.246:2379" <nil>}
(the same message repeats every few seconds through 03:29:37)

Expected results:
No "bad certificate" connection errors in the etcd log after the upgrade.

Additional info:
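A "transport: remote error: bad certificate" during gRPC reconnects typically indicates the serving certificate does not cover the address being dialed (here 192.168.1.246:2379). A minimal sketch of the check, using a throwaway self-signed certificate; the paths, the IP, and the OpenSSL 1.1.1+ -addext flag are assumptions. Against a real host you would inspect the actual etcd serving certificate (the ETCD_CERT_FILE configured for the service) instead of the throwaway one:

```shell
# Sketch (assumed paths/IP): self-sign a throwaway cert with an IP SAN.
# Requires OpenSSL 1.1.1+ for -addext. For the real check, point the
# inspection step at your etcd serving cert rather than /tmp/etcd-test.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/etcd-test.key -out /tmp/etcd-test.crt \
  -subj "/CN=etcd-test" \
  -addext "subjectAltName = IP:192.168.1.246"

# The Subject Alternative Name listing must include the address from
# --listen-client-urls; if it only names hostnames (or a different IP),
# the TLS handshake fails with exactly the "bad certificate" error above.
openssl x509 -in /tmp/etcd-test.crt -noout -text | grep -A1 "Subject Alternative Name"
```

If the real serving certificate turned out to lack the listener IP in its SANs, regenerating it with the correct SANs would be the fix; as the later comments note, QE ultimately judged this particular message a false positive, so the above is purely diagnostic.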
This issue isn't tied to a specific OCP version, so I am moving the tag to the latest OCP release.
Dropping severity, given that no adverse effects were observed. If this actually creates problems for OpenShift, let's raise it again.
After working with QE, it looks like this is a false positive.