I0502 15:23:30.593107       1 flags.go:33] FLAG: --kubeconfig=""
I0502 15:23:30.593111       1 flags.go:33] FLAG: --leader-elect="true"
I0502 15:23:30.593114       1 flags.go:33] FLAG: --leader-elect-lease-duration="15s"
I0502 15:23:30.593118       1 flags.go:33] FLAG: --leader-elect-renew-deadline="10s"
I0502 15:23:30.593122       1 flags.go:33] FLAG: --leader-elect-resource-lock="endpoints"
I0502 15:23:30.593126       1 flags.go:33] FLAG: --leader-elect-retry-period="2s"
I0502 15:23:30.593130       1 flags.go:33] FLAG: --lock-object-name="kube-scheduler"

Components *must not* use leader-elect-resource-lock=endpoints, because it causes high churn on kube-proxies. This is also not a safe change to roll out, so it is a 4.1 blocker. I will clone this bug to the other components to verify.
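For context on the churn: with the endpoints lock, the leader stores and renews its lease in an annotation on an Endpoints object, so every renewal is an Endpoints update that fans out to every kube-proxy watching Endpoints. A rough illustration of what that lock object looks like (the holder identity and timestamps below are made-up placeholder values; the annotation key and object location are the real ones used by client-go leader election):

```yaml
# Illustrative only: the kube-scheduler leader-election lock when
# --leader-elect-resource-lock=endpoints is in effect. Each renewal
# (bounded by --leader-elect-renew-deadline, retried every
# --leader-elect-retry-period) rewrites this annotation, generating an
# update event delivered to all kube-proxy Endpoints watchers.
apiVersion: v1
kind: Endpoints
metadata:
  name: kube-scheduler
  namespace: kube-system
  annotations:
    control-plane.alpha.kubernetes.io/leader: >
      {"holderIdentity":"example-master-0","leaseDurationSeconds":15,
       "acquireTime":"2019-05-02T15:23:30Z","renewTime":"2019-05-02T15:23:40Z",
       "leaderTransitions":1}
```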
We need something like this: https://github.com/openshift/cluster-kube-controller-manager-operator/blob/master/bindata/v3.11.0/kube-controller-manager/defaultconfig.yaml#L22-L23 in here: https://github.com/openshift/cluster-kube-scheduler-operator/blob/master/bindata/v3.11.0/kube-scheduler/defaultconfig.yaml
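In other words, the scheduler operator's default config should pin the resource-lock flag the same way the kube-controller-manager operator does. A minimal sketch of the fragment that would be added to the scheduler's defaultconfig.yaml (the exact key names and the choice of `configmaps` are assumptions; the linked files are authoritative):

```yaml
# Hypothetical addition to bindata/v3.11.0/kube-scheduler/defaultconfig.yaml,
# mirroring the kube-controller-manager operator's default config.
# Forcing the lock onto a ConfigMap keeps leader-election renewals off
# Endpoints objects, which every kube-proxy watches.
extendedArguments:
  leader-elect-resource-lock:
  - configmaps
```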
KSO PR https://github.com/openshift/cluster-kube-scheduler-operator/pull/127
New KSO PR https://github.com/openshift/cluster-kube-scheduler-operator/pull/128
Moving back to ASSIGNED until we either figure out the patch or decide whether we are going to force-merge.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758