Description of problem:

During upgrade from 4.8 to 4.9, etcd fails to start because one of its configured cipher suites is considered invalid [1]. Instead of failing hard, each cipher should be checked individually for validity [2]; see the example CI run [3]. I believe this fix should also include a bump of the etcd client used by the operator.

```
failed","error":"unexpected TLS cipher suite \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\"","stacktrace":"go.etcd.io/etcd/etcdmain.startEtcdOrProxyV2\n\t/go/src/go.etcd.io/etcd/etcdmain/etcd.go:271\ngo.etcd.io/etcd/etcdmain.Main\n\t/go/src/go.etcd.io/etcd/etcdmain/main.go:46\nmain.main\n\t/go/src/go.etcd.io/etcd/main.go:28\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:200"}
```

[1] https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/release-openshift-origin-installer-launch-gcp/1423749036162158592/artifacts/launch/pods/openshift-etcd_etcd-ci-ln-pydlsdk-f76d1-9lxqj-master-1_etcd.log
[2] https://github.com/openshift/cluster-etcd-operator/blob/300bdf3949e155295313cb6ecdc58dc7ecf17632/pkg/etcdenvvar/etcd_env.go#L281
[3] https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-origin-installer-launch-gcp/1423749036162158592

Version-Release number of selected component (if applicable):

How reproducible:
Not sure; probably high.

Steps to Reproduce:
1. Upgrade 4.8 to 4.9.
2.
3.

Actual results:
etcd fails to start, and the upgrade fails as a result.

Expected results:
The upgrade succeeds.

Additional info:
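The per-cipher validation suggested above could look roughly like the following. This is a minimal sketch, not the operator's actual code from etcd_env.go: it assumes the fix is to build a lookup of cipher suite names the Go runtime recognizes (via `crypto/tls`'s `CipherSuites` and `InsecureCipherSuites`) and to drop unrecognized names instead of aborting on the first unknown one. The function names `supportedCiphers` and `filterCiphers` are hypothetical.

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// supportedCiphers builds a set of every cipher suite name this Go
// runtime's crypto/tls package recognizes (secure and insecure lists).
func supportedCiphers() map[string]bool {
	known := map[string]bool{}
	for _, c := range tls.CipherSuites() {
		known[c.Name] = true
	}
	for _, c := range tls.InsecureCipherSuites() {
		known[c.Name] = true
	}
	return known
}

// filterCiphers checks each requested cipher name individually, keeping
// the recognized ones and reporting the rest, rather than failing hard
// on the first unknown name.
func filterCiphers(requested []string) (valid, dropped []string) {
	known := supportedCiphers()
	for _, name := range requested {
		if known[name] {
			valid = append(valid, name)
		} else {
			dropped = append(dropped, name)
		}
	}
	return valid, dropped
}

func main() {
	valid, dropped := filterCiphers([]string{
		"TLS_AES_128_GCM_SHA256", // a real TLS 1.3 suite
		"NOT_A_REAL_CIPHER",      // deliberately bogus name
	})
	fmt.Println(valid, dropped)
	// → [TLS_AES_128_GCM_SHA256] [NOT_A_REAL_CIPHER]
}
```

Note that whether a given name like `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256` is recognized depends on the Go version etcd is built against, which is why bumping the operator's etcd client matters alongside the validation change.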
Sam added UpgradeBlocker, so we're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context, and the ImpactStatementRequested label has been added to this bug. When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label. The expectation is that the assignee answers these questions.

Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: Customers upgrading from 4.y.z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
* example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact? Is it serious enough to warrant blocking edges?
* example: Up to 2 minute disruption in edge routing
* example: Up to 90 seconds of API downtime
* example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things
* example: Admin must SSH to hosts, restore from backups, or other non-standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: No, it has always been like this, we just never noticed
* example: Yes, from 4.y.z to 4.y+1.z, or 4.y.z to 4.y.z+1

Or just make this blocker+ to ensure it gets fixed before 4.9 GAs and drop the UpgradeBlocker keyword?
Since the bug is verified, I am dropping the UpgradeBlocker status.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759