+++ This bug was initially created as a clone of Bug #1282718 +++

Description of problem:
In an HA scenario, logins may fail with a 401 Unauthorized error. This appears to be caused by stale reads from etcd.

How reproducible:
50%

Steps to Reproduce:
1) Set up a 3.1 environment with HA etcd.
2) Log into the server multiple times:

# sum=0; for i in {1..500}; do oc login https://<master-url>:8443 -u <username> -p <password>; if [[ $? == 0 ]]; then sum=$((sum + 1)); fi; done; echo $sum

Actual results:
Logins intermittently fail; out of 500 attempts, about 230 failed.

Expected results:
All login attempts should succeed.

Additional info:
In non-HA setups the test plan always completes with 0 failed logins.

--- Additional comment from Jordan Liggitt on 2015-11-18 19:06:22 CET ---

Pretty sure this is a stale read issue when using an etcd cluster. I see this in the master configs:

etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
    - https://openshift-159.lab.eng.nay.redhat.com:2379
    - https://openshift-138.lab.eng.nay.redhat.com:2379
    - https://openshift-155.lab.eng.nay.redhat.com:2379

So there are at least three etcd servers in place, right?

1. The token is created, written to etcd, and returned to the client.
2. The client then uses the token against the users/~ API.
3. The authentication layer attempts to verify that the token exists in etcd.

There is no guarantee that the same etcd server is queried for the token, so I think a quorum read may be needed when the token is not found.
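As a concrete illustration of steps 2 and 3, the token can be replayed against the users/~ endpoint by hand. A minimal sketch, assuming a 3.1 master at <master-url>, that oc whoami -t prints the current session token, and that the endpoint lives at /oapi/v1/users/~ as in that release:

# Log in, then capture the session token that was just written to etcd
oc login https://<master-url>:8443 -u <username> -p <password>
TOKEN=$(oc whoami -t)

# Replay the token against the users/~ API, as the client does after login.
# Against an HA etcd cluster this intermittently returns 401 Unauthorized
# when the authentication layer reads from a member that has not yet seen
# the token write.
curl -k -H "Authorization: Bearer $TOKEN" https://<master-url>:8443/oapi/v1/users/~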
Recreated locally with a single master and a 3-node etcd cluster. A quorum read is needed. The current (deprecated) etcd client does not expose the quorum read option. Work upstream to switch to the current etcd client is tracked in https://github.com/kubernetes/kubernetes/issues/11962
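Although the deprecated Go client does not expose it, the etcd v2 HTTP API itself accepts a quorum flag on reads as a query parameter. A minimal sketch of the difference, assuming an etcd member listening on 1.2.3.4:4001 and an illustrative key name:

# Plain read: may be answered from a member's local state and can miss a
# write that has not yet been replicated to it
curl http://1.2.3.4:4001/v2/keys/some-token

# Quorum read: goes through the raft leader, so it reflects all committed writes
curl "http://1.2.3.4:4001/v2/keys/some-token?quorum=true"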
For reference, I spun up an etcd cluster in three docker containers like this:

IP=1.2.3.4

docker run -d -p 4001:4001 -p 7001:7001 --name etcd0 quay.io/coreos/etcd:v2.0.3 \
  -name etcd0 \
  -advertise-client-urls http://$IP:4001 \
  -listen-client-urls http://0.0.0.0:4001 \
  -initial-advertise-peer-urls http://$IP:7001 \
  -listen-peer-urls http://0.0.0.0:7001 \
  -initial-cluster-token my-etcd-cluster \
  -initial-cluster etcd0=http://$IP:7001,etcd1=http://$IP:7002,etcd2=http://$IP:7003 \
  -initial-cluster-state new

docker run -d -p 4002:4002 -p 7002:7002 --name etcd1 quay.io/coreos/etcd:v2.0.3 \
  -name etcd1 \
  -advertise-client-urls http://$IP:4002 \
  -listen-client-urls http://0.0.0.0:4002 \
  -initial-advertise-peer-urls http://$IP:7002 \
  -listen-peer-urls http://0.0.0.0:7002 \
  -initial-cluster-token my-etcd-cluster \
  -initial-cluster etcd0=http://$IP:7001,etcd1=http://$IP:7002,etcd2=http://$IP:7003 \
  -initial-cluster-state existing

docker run -d -p 4003:4003 -p 7003:7003 --name etcd2 quay.io/coreos/etcd:v2.0.3 \
  -name etcd2 \
  -advertise-client-urls http://$IP:4003 \
  -listen-client-urls http://0.0.0.0:4003 \
  -initial-advertise-peer-urls http://$IP:7003 \
  -listen-peer-urls http://0.0.0.0:7003 \
  -initial-cluster-token my-etcd-cluster \
  -initial-cluster etcd0=http://$IP:7001,etcd1=http://$IP:7002,etcd2=http://$IP:7003 \
  -initial-cluster-state existing

Then started my master server from a config referencing an external etcd like this:

...
etcdClientInfo:
  ca: ""
  certFile: ""
  keyFile: ""
  urls:
    - http://1.2.3.4:4001
    - http://1.2.3.4:4002
    - http://1.2.3.4:4003
...
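With that cluster up, the stale-read window can be observed directly by writing through one member and immediately reading through another. A sketch, assuming the v2 etcdctl syntax of that era and an illustrative key name:

# Write a key through the first member
etcdctl --peers http://$IP:4001 set /test-token abc123

# Immediately read it back through a different member: a plain read may
# occasionally miss the not-yet-replicated write...
etcdctl --peers http://$IP:4002 get /test-token

# ...while a quorum read via the HTTP API always sees the committed value
curl "http://$IP:4002/v2/keys/test-token?quorum=true"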
Fixed in puddle AtomicOpenShift/3.1/2016-01-13.1
Verified on puddle AtomicOpenShift/3.1/2016-01-13.1; this bug is fixed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2016:0070