- **Description of problem:** The ingress controller tries to add all subnets in the VPC to the default router when installing OCP in an existing VPC[1] that has subnets in Local Zones[2] (which do not support network load balancers[3], only application load balancers/ALB). The ingress cluster operator reports the following error while installing the cluster:

~~~
ingress    False    True    True    92s    The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: LoadBalancerReady=False (SyncLoadBalancerFailed: The service-controller component is reporting SyncLoadBalancerFailed events like: Error syncing load balancer: failed to ensure load balancer: ValidationError: You cannot have any Local Zone subnets for load balancers of type 'classic'...
~~~

I managed to work around it by tagging the Local Zone subnet with `kubernetes.io/cluster/unmanaged=true` so that the ingress controller ignores that subnet. The key suffix `unmanaged` can be anything other than the `InfraID`; when the tag key suffix is the `InfraID`, it still fails regardless of the value.

There is work in progress to create official support (product documentation and QE in progress) for installing with Local Zone subnets in existing VPCs[4], and then to implement full support in the installer[5][6]. The current issue appears to be a blocker for the full implementation, as the installer tags the subnets with the cluster tag[1] `kubernetes.io/cluster/<infraID>=.*`.

- **OpenShift release version:** All versions (tested on 4.10.18, 4.11.0-fc.0, 4.11.0-rc.1)

- **How reproducible:** Always

- **Steps to Reproduce (in detail):**
1. Create the VPC
2. Create the Local Zone subnet, setting the tag `kubernetes.io/cluster/unmanaged=true`
3. Create the installer configuration, setting the subnets in the "availability-zone" (parent zone)
4. Create the manifests
5. Create the MachineSets for the machines located in the Local Zone subnet
6. Create the cluster

- **Actual results:** The installer fails because the ingress operator (and its dependents) report degraded (message above, from the cluster operators)

- **Expected results:** The ingress should (one or more of the following):
- not auto-discover all the subnets in the VPC when the subnets have been set in install-config.yaml;
- not auto-discover all the subnets in the VPC when `kubernetes.io/role/elb=1` has been added to the public subnets;
- not try to add unsupported subnets (Local Zones, Wavelength) to the load balancer type (CLB/NLB) used by the ingress[7];
- have auto-discovery ignore subnets tagged `kubernetes.io/role/elb=0`, so we can specify which public subnets we do not want to be added to/used by the load balancer.

- **Impact of the problem:**
- Installations do not finish when trying to use existing VPCs with subnets in Local Zones (without the workaround)
- Blocks full support of Local Zones/Wavelength in the installer, since the cluster tag `kubernetes.io/cluster/<infraID>=.*` must be set on the subnet

- **Additional info:**

[1] Install in existing VPC: https://docs.openshift.com/container-platform/4.10/installing/installing_aws/installing-aws-vpc.html
[2] Local Zone documentation: https://aws.amazon.com/about-aws/global-infrastructure/localzones/
[3] Local Zones limitations (LB): https://aws.amazon.com/about-aws/global-infrastructure/localzones/features/
[4] Research and Day-0 support documentation to install OCP in an existing VPC with Local Zone subnets: https://issues.redhat.com/browse/SPLAT-635
[5] Epic to create machine pools in Local Zones in an existing VPC with Local Zone subnets: https://issues.redhat.com/browse/SPLAT-636
[6] Epic to implement full support in the installer to create subnets in Local Zones: https://issues.redhat.com/browse/SPLAT-657
[7] The SDK provides a field indicating the type of zone a subnet belongs to; since network load balancers (CLB/NLB) are not supported there, the controller should look at the `ZoneType` field and add only subnets in zones of type `availability-zone`, ignoring zones of type `wavelength-zone` and `local-zone`:

~~~
$ aws ec2 describe-availability-zones --filters Name=region-name,Values=us-east-1 --all-availability-zones | jq -r '.AvailabilityZones[] | ( .ZoneName, .ZoneType)'
us-east-1a
availability-zone
(...)
us-east-1-bos-1a
local-zone
(...)
us-east-1-wl1-atl-wlz-1
wavelength-zone
(...)
~~~
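For reference, a minimal sketch of the workaround described above; `${SUBNET_ID}` is a placeholder for the Local Zone subnet created in step 2:

~~~
# Tag the Local Zone subnet so the cloud provider ignores it when building the
# load balancer's subnet list. The key suffix ("unmanaged") just needs to be
# different from the cluster's InfraID; the tag value is not what matters.
$ aws ec2 create-tags \
    --resources ${SUBNET_ID} \
    --tags Key=kubernetes.io/cluster/unmanaged,Value=true
~~~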
The ingress operator doesn't add subnets to the ELB; the cloud provider implementation (which runs as part of k-c-m) is doing that. Your steps to reproduce the problem include the following as the second step: "Create the Local Zone subnet, setting the tag `kubernetes.io/cluster/unmanaged=true`", but this is also your workaround—do you mean that if you omit the second step, then the problem occurs? If you have a public subnet with the "kubernetes.io/cluster/<cluster id>" tag and the "kubernetes.io/role/elb" tag, then the cloud provider should prefer that subnet over one with just the "kubernetes.io/cluster/<cluster id>" tag. Can you confirm that you have a *public, non-Local Zone* subnet with *both* tags and that the *Local Zone* subnet does *not* have the "kubernetes.io/role/elb" tag? (The cloud provider code only uses the tag keys; it ignores the values.)
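To illustrate the tag combination being asked about, a hedged sketch follows; the subnet IDs, `${INFRA_ID}`, and the tag values are placeholders (per the note above, the cloud provider only looks at the keys):

```
# Public subnet in a regular Availability Zone: carries both the cluster tag
# and the ELB role tag, so the cloud provider should prefer it.
$ aws ec2 create-tags \
    --resources ${PUBLIC_AZ_SUBNET_ID} \
    --tags Key=kubernetes.io/cluster/${INFRA_ID},Value=shared Key=kubernetes.io/role/elb,Value=1

# Local Zone subnet: cluster tag only, no "kubernetes.io/role/elb" tag.
$ aws ec2 create-tags \
    --resources ${LOCAL_ZONE_SUBNET_ID} \
    --tags Key=kubernetes.io/cluster/${INFRA_ID},Value=shared
```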
Created attachment 1896389 [details]
bz2105337-12-vpc_subnets_tags-before_installer.json

```
$ aws ec2 describe-subnets --filters Name=vpc-id,Values=${VPC_ID} | jq -r '.Subnets[] | [.AvailabilityZone, .Tags[] ]' > bz2105337-12-vpc_subnets_tags-before_installer.json
```
Created attachment 1896390 [details]
bz2105337-12-vpc_subnets_tags-after_installer.json

```
$ aws ec2 describe-subnets --filters Name=vpc-id,Values=${VPC_ID} | jq -r '.Subnets[] | [.AvailabilityZone, .Tags[] ]' > bz2105337-12-vpc_subnets_tags-after_installer.json
```
Hi Miciah,

> do you mean that if you omit the second step, then the problem occurs?

That's correct. If I don't set the tag `kubernetes.io/cluster/unmanaged` on the Local Zone subnet, it is discovered and the controller tries to add it when creating the LB.

> Can you confirm that you have a *public, non-Local Zone* subnet with *both* tags and that the *Local Zone* subnet does *not* have the "kubernetes.io/role/elb" tag?

The cluster was created with the tags as requested[1]. The cluster installation failed[2]. Looking at the ingress CO status, it got stuck for the same reason:

```
$ oc get co ingress
NAME      VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
ingress             False       True          True       55s     The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: LoadBalancerReady=False (SyncLoadBalancerFailed: The service-controller component is reporting SyncLoadBalancerFailed events like: Error syncing load balancer: failed to ensure load balancer: ValidationError: You cannot have any Local Zone subnets for load balancers of type 'classic'...
```

See also the attached must-gather of this execution (test ID #12).

Any other suggestions?

---

[1] AZ name and tags for each VPC subnet, in two states: before and after the installer (create cluster). Attachments:
- [bz2105337-12-vpc_subnets_tags-before_installer.json](https://bugzilla.redhat.com/attachment.cgi?id=1896389)
- [bz2105337-12-vpc_subnets_tags-after_installer.json](https://bugzilla.redhat.com/attachment.cgi?id=1896390)

[2] Installer failed with errors:

```
ERROR Cluster initialization failed because one or more operators are not functioning properly.
ERROR The cluster should be accessible for troubleshooting as detailed in the documentation linked below,
ERROR https://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html
ERROR The 'wait-for install-complete' subcommand can then be used to continue the installation
ERROR failed to initialize the cluster: Some cluster operators are still updating: authentication, console, ingress
```
I think this will need to be addressed upstream, and it may take some time to sort out what the proper behavior is for the cloud provider implementation. Curiously, upstream added logic in Kubernetes 1.19 (and backported it to 1.16) to infer the correct region name for local zones, but since attaching an ELB to a local-zone subnet doesn't work, it isn't clear what this change actually achieves: https://github.com/kubernetes/kubernetes/pull/90874 For now, tagging subnets that you don't want the cloud provider implementation to use with "kubernetes.io/cluster/unmanaged" might be the best option. An alternative workaround would be to add the "service.beta.kubernetes.io/aws-load-balancer-subnets" annotation to the ingress operator's service to enumerate the desired subnets explicitly. (There is a related RFE for this service annotation: <https://issues.redhat.com/browse/RFE-1717>.) These are the only viable options I see to get something working in 4.11. Setting this service annotation would be something the ingress operator could do, but the ingress operator has no way to determine which subnets the user intends for it to use.
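As a rough sketch of the annotation-based workaround, assuming the default IngressController's LoadBalancer Service is `router-default` in the `openshift-ingress` namespace, that the subnet IDs below are placeholders for the subnets you actually want, and that the cloud provider in use honors this annotation:

```
# Pin the service to explicit subnets so the cloud provider does not
# auto-discover the Local Zone subnet when building the load balancer.
$ oc -n openshift-ingress annotate service/router-default \
    service.beta.kubernetes.io/aws-load-balancer-subnets=subnet-0aaa1111,subnet-0bbb2222 \
    --overwrite
```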
> Curiously, upstream added logic in Kubernetes 1.19 (and backported it to 1.16) to infer the correct region name for local zones, but since attaching an ELB to a local-zone subnet doesn't work, it isn't clear what this change actually achieves: https://github.com/kubernetes/kubernetes/pull/90874

IIUC, that PR only fixed how the region name is extracted from the new locations (Local Zone and Wavelength) so it can be used to initialize[1] the service instances in that region; it doesn't necessarily change anything in load balancer creation or subnet discovery.

I am not familiar with that code base, but looking at the source, the problem seems to be here[2]: the subnet is added to the per-AZ map without evaluating the tags. The tags are only evaluated when there is more than one subnet in the same AZ. So I can see two different problems in the findELBSubnets() behavior:

A) it adds unsupported subnets when creating the ELB (closely related to this BZ);
B) it adds at least one subnet from every AZ to the subnet list, completely ignoring the label evaluation, which falls back into the behavior we are seeing here (maybe this needs a second BZ?).

Let me know what you think about it.

> These are the only viable options I see to get something working in 4.11.

For 4.11 we will document using the tag "kubernetes.io/cluster/unmanaged" to make the installer work when creating a cluster in an existing VPC. Once the full implementation is developed in the installer, the current BZ could be a potential blocker for it (ETA 4.12+).

[1] https://github.com/Jeffwan/kubernetes/blob/4ae021d5ce7245a7062e7c869bdbd8edbb05d416/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L1218-L1288
[2] https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L3486-L3518
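For example, something like the following hedged sketch (reusing the `describe-availability-zones` query from the bug description, with `${VPC_ID}` as in the attached commands) lists the subnets that sit in zones the controller would need to skip if it filtered on `ZoneType`:

```
# Zones in the region whose ZoneType is not "availability-zone"
# (i.e. Local Zone and Wavelength zones).
EXCLUDED_ZONES=$(aws ec2 describe-availability-zones \
  --filters Name=region-name,Values=us-east-1 --all-availability-zones \
  | jq -r '.AvailabilityZones[] | select(.ZoneType != "availability-zone") | .ZoneName')

# Subnets in the VPC that live in those zones; these are the ones
# findELBSubnets() currently picks up but a CLB/NLB cannot use.
for ZONE in ${EXCLUDED_ZONES}; do
  aws ec2 describe-subnets \
    --filters Name=vpc-id,Values=${VPC_ID} Name=availability-zone,Values=${ZONE} \
    | jq -r '.Subnets[].SubnetId'
done
```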
Re-assigning to the Cluster Infrastructure team.
Denis is going to look into this; I have also asked him to try it with the external cloud provider to see whether the behaviour is reproducible there as well.

Please bear in mind that fixes to the legacy cloud providers are very hard to get merged and backports are impossible, so this is unlikely to be fixed quickly.
OpenShift has moved to Jira for its defect tracking! This bug can now be found in the OCPBUGS project in Jira. https://issues.redhat.com/browse/OCPBUGS-9376