Bug 2060650 - Azure: Creating an LB with port 6443 always fails
Summary: Azure: Creating an LB with port 6443 always fails
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Joel Speed
QA Contact: sunzhaohua
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-03-03 22:32 UTC by aaleman
Modified: 2022-05-24 09:54 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-24 09:54:41 UTC
Target Upstream Version:
Embargoed:



Description aaleman 2022-03-03 22:32:10 UTC
Description of problem:

Creating a Service of type LoadBalancer with port 6443 on an Azure IPI cluster always fails.


Version-Release number of selected component (if applicable):
Server Version: 4.9.15

How reproducible:

100%


Steps to Reproduce:

Apply a manifest like this onto an Azure cluster:

apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
spec:
  ports:
  - port: 6443
    protocol: TCP
  selector:
    app: test
  type: LoadBalancer


Actual results:
$ k describe svc -n default test 
Name:                     test
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=test
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       172.30.10.94
IPs:                      172.30.10.94
LoadBalancer Ingress:     52.191.34.152
Port:                     <unset>  6443/TCP
TargetPort:               6443/TCP
NodePort:                 <unset>  32581/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type     Reason                  Age                  From                Message
  ----     ------                  ----                 ----                -------
  Normal   EnsuredLoadBalancer     2m43s                service-controller  Ensured load balancer
  Normal   EnsuringLoadBalancer    66s (x6 over 2m55s)  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed  66s (x5 over 2m24s)  service-controller  Error syncing load balancer: failed to ensure load balancer: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, RawError: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, RawError: {
  "error": {
    "code": "RulesUseSameBackendPortProtocolAndPool",
    "message": "Load balancing rules /subscriptions/89a9ba4b-8b66-446a-813d-a9ed3a129e3d/resourceGroups/alvaro-aleman-azurecl-q6km8-rg/providers/Microsoft.Network/loadBalancers/alvaro-aleman-azurecl-q6km8/loadBalancingRules/api-internal-v4 and /subscriptions/89a9ba4b-8b66-446a-813d-a9ed3a129e3d/resourceGroups/alvaro-aleman-azurecl-q6km8-rg/providers/Microsoft.Network/loadBalancers/alvaro-aleman-azurecl-q6km8/loadBalancingRules/ac99042e055cd47cf8b01e4778aa45bb-TCP-6443 with floating IP disabled use the same protocol Tcp and backend port 6443, and must not be used with the same backend address pool /subscriptions/89a9ba4b-8b66-446a-813d-a9ed3a129e3d/resourceGroups/alvaro-aleman-azurecl-q6km8-rg/providers/Microsoft.Network/loadBalancers/alvaro-aleman-azurecl-q6km8/backendAddressPools/alvaro-aleman-azurecl-q6km8.",
    "details": []
  }
}



Expected results:

A working load balancer is provisioned for the service.

Additional info:

Comment 1 Joel Speed 2022-03-04 10:10:17 UTC
I'm not sure there's a lot we could do here. On Azure, every service has to have a unique user-facing port because the load balancers are shared across all nodes. OpenShift already uses port 6443 for the API-internal load balancer, so I think all we can say is that customers shouldn't try to use port 6443.

Is there a particular reason you are using this port, or do you want to reuse this port multiple times?
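
For what it's worth, here is a sketch of a possible workaround, assuming the backing pods really do listen on 6443: expose the Service on a different front-end port and keep targetPort pointed at 6443, so the generated load-balancing rule doesn't collide with the api-internal rule on the shared load balancer. The 8443 below is just an illustrative value, not a requirement:

apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
spec:
  ports:
  - port: 8443        # front-end/service port; any value other than 6443 avoids the conflict (8443 is illustrative)
    targetPort: 6443  # port the pods actually listen on
    protocol: TCP
  selector:
    app: test
  type: LoadBalancer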

Comment 2 Joel Speed 2022-05-24 09:54:41 UTC
As this has been stale for almost three months now, I'm going to close this one out. If you need further advice, please reopen the issue.

