Tested with a cluster launched by cluster-bot (launch openshift/cluster-ingress-operator#705 aws), and the PR works as expected. After changing the scope to internal and manually changing the annotation value to "0.0.0.0/0", the ingress operator updates the annotation back to "true" immediately.

$ oc -n openshift-ingress annotate svc/router-default service.beta.kubernetes.io/aws-load-balancer-internal="0.0.0.0/0" --overwrite
service/router-default annotated

$ oc -n openshift-ingress get svc/router-default -oyaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "5"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "4"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    traffic-policy.network.alpha.openshift.io/local-with-fallback: ""

### logs of ingress-operator
2022-02-23T04:57:26.536Z  INFO  operator.ingress_controller  ingress/load_balancer_service.go:294  normalized annotation  {"namespace": "openshift-ingress", "name": "router-default", "annotation": "service.beta.kubernetes.io/aws-load-balancer-internal", "old": "0.0.0.0/0", "new": "true"}

$ oc get clusterversion
NAME      VERSION                                                   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.ci.test-2022-02-23-041216-ci-ln-bqsc5qk-latest   True        False         20m     Cluster version is 4.10.0-0.ci.test-2022-02-23-041216-ci-ln-bqsc5qk-latest
Moving to MODIFIED. No 4.10 nightly build includes the fix for this yet.
Verified in release version "4.10.0-0.nightly-2022-02-24-034852". Testing the upgrade from 4.9.23 to 4.10.0-0.nightly-2022-02-24-034852, the patch works as intended and the upgrade completes successfully:

--------
oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.23    True        False         5m57s   Cluster version is 4.9.23

oc -n openshift-ingress edit service/router-default
service/router-default edited

oc -n openshift-ingress get service/router-default -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "5"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "4"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0   <-------
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    traffic-policy.network.alpha.openshift.io/local-with-fallback: ""
  creationTimestamp: "2022-02-24T06:26:49Z"

oc adm upgrade --to-image=registry.ci.openshift.org/ocp/release:4.10.0-0.nightly-2022-02-24-034852 --allow-explicit-upgrade=true --force
Updating to release image registry.ci.openshift.org/ocp/release:4.10.0-0.nightly-2022-02-24-034852

Post upgrade:

oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-02-24-034852   True        False         9m52s   Cluster version is 4.10.0-0.nightly-2022-02-24-034852

oc get co ingress
NAME      VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
ingress   4.10.0-0.nightly-2022-02-24-034852   True        False         False      66m

oc -n openshift-ingress get service/router-default -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "5"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "4"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"   <-----------
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    traffic-policy.network.alpha.openshift.io/local-with-fallback: ""
  creationTimestamp: "2022-02-24T06:26:49Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
--------
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056