Bug 2013004
| Summary: | Error syncing load balancer: failed to ensure load balancer: EnsureBackendPoolDeleted: failed to parse the VMAS ID : getAvailabilitySetNameByID: failed to parse the VMAS ID #9 | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Zhigang Wang <zhigwang> |
| Component: | Networking | Assignee: | Miciah Dashiel Butler Masters <mmasters> |
| Networking sub component: | router | QA Contact: | Hongan Li <hongli> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | high | | |
| Priority: | high | CC: | aiyengar, aos-bugs, dlawrenc, eparis, jboutaud, jokerman, mmasters, scuppett, shudili, swasthan |
| Version: | 4.7 | | |
| Target Milestone: | --- | | |
| Target Release: | 4.7.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-10-27 08:22:59 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (Zhigang Wang, 2021-10-11 19:45:25 UTC)
https://github.com/kubernetes/legacy-cloud-providers/issues/9#issuecomment-941085591 indicates that the fix for this issue is in 4.7.32. The fix that that GitHub comment mentions was merged into 4.7 here: https://github.com/openshift/kubernetes/pull/935/commits/d07de023fc519cce5858ffee9e3403fa7055de9a So it looks like the fix is already merged, and we just need to verify it.

Verified in release version 4.7.0-0.nightly-2021-10-18-191324. With this release, the load-balancer service is assigned its external IP address properly:
------

```
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-10-18-191324   True        False         3h50m   Cluster version is 4.7.0-0.nightly-2021-10-18-191324
```
Template in use to deploy the service (service-ip-loadbalancer-type.yaml):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-unsecure2
spec:
  ports:
  - name: https
    port: 27443
    protocol: TCP
    targetPort: 8443
  selector:
    name: web-server-rc
  type: LoadBalancer
```
```
$ oc create -f service-ip-loadbalancer-type.yaml
service/service-unsecure2 created

$ oc describe svc service-unsecure2
Name:                     service-unsecure2
Namespace:                test1
Labels:                   <none>
Annotations:              <none>
Selector:                 name=web-server-rc
Type:                     LoadBalancer
IP Families:              <none>
IP:                       172.30.103.231
IPs:                      172.30.103.231
LoadBalancer Ingress:     65.52.30.157
Port:                     https  27443/TCP
TargetPort:               8443/TCP
NodePort:                 https  30938/TCP
Endpoints:                10.129.2.10:8443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age    From                Message
  ----    ------                ----   ----                -------
  Normal  EnsuringLoadBalancer  2m53s  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   2m35s  service-controller  Ensured load balancer

$ oc get svc
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)           AGE
service/service-secure      ClusterIP      172.30.207.96    <none>         27443/TCP         5m47s
service/service-unsecure    ClusterIP      172.30.42.83     <none>         27017/TCP         5m47s
service/service-unsecure2   LoadBalancer   172.30.103.231   65.52.30.157   27443:30938/TCP   3m45s   <-----
```
Similarly for internal-LB-scoped services, using the template service-ip-lb-int-type.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-unsecure3
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  ports:
  - name: https
    port: 27443
    protocol: TCP
    targetPort: 8443
  selector:
    name: web-server-rc
  type: LoadBalancer
```
```
$ oc create -f service-ip-lb-int-type.yaml
service/service-unsecure3 created

$ oc get svc
NAME                TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)           AGE
service-secure      ClusterIP      172.30.207.96    <none>         27443/TCP         14m
service-unsecure    ClusterIP      172.30.42.83     <none>         27017/TCP         14m
service-unsecure2   LoadBalancer   172.30.103.231   65.52.30.157   27443:30938/TCP   12m
service-unsecure3   LoadBalancer   172.30.208.16    10.0.32.7      27443:30204/TCP   2m15s   <----
```
------
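For context on the error in this bug's summary: the cloud provider's getAvailabilitySetNameByID extracts the availability-set name from an Azure VMAS resource ID, and an empty or malformed ID (note the blank after "VMAS ID :" in the summary) makes the parse fail inside EnsureBackendPoolDeleted. A minimal Go sketch of that kind of ID parsing, using a hypothetical regex rather than the actual cloud-provider code:

```go
package main

import (
	"fmt"
	"regexp"
)

// vmasIDRE matches an Azure availability-set resource ID of the form
// /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/availabilitySets/<name>
// (illustrative pattern; the real provider has its own matcher).
var vmasIDRE = regexp.MustCompile(`(?i)/subscriptions/[^/]+/resourceGroups/[^/]+/providers/Microsoft\.Compute/availabilitySets/([^/]+)$`)

// getAvailabilitySetNameByID extracts the availability-set name from a VMAS
// resource ID. An empty or malformed ID returns an error shaped like the one
// in this bug's summary.
func getAvailabilitySetNameByID(vmasID string) (string, error) {
	matches := vmasIDRE.FindStringSubmatch(vmasID)
	if len(matches) != 2 {
		return "", fmt.Errorf("getAvailabilitySetNameByID: failed to parse the VMAS ID %s", vmasID)
	}
	return matches[1], nil
}

func main() {
	id := "/subscriptions/sub/resourceGroups/rg/providers/Microsoft.Compute/availabilitySets/myset"
	name, err := getAvailabilitySetNameByID(id)
	fmt.Println(name, err)

	// An empty VMAS ID (e.g. from a backend VM with no availability set)
	// reproduces the failure mode:
	_, err = getAvailabilitySetNameByID("")
	fmt.Println(err)
}
```

With the fix referenced above, backend-pool deletion no longer trips over IDs that cannot be parsed this way.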
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.7.36 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3931

(In reply to Miciah Dashiel Butler Masters from comment #3)
> https://github.com/kubernetes/legacy-cloud-providers/issues/9#issuecomment-941085591 indicates that the fix for this issue is in 4.7.32. The fix that that GitHub comment mentions was merged into 4.7 here: https://github.com/openshift/kubernetes/pull/935/commits/d07de023fc519cce5858ffee9e3403fa7055de9a

In case anyone encounters this issue on OpenShift 4.8, the same fix was merged into 4.8.z here: https://github.com/openshift/kubernetes/pull/888/commits/6c460e6a1e42d69b4d6d1a0ce1881c281a31a01f

The fix shipped in 4.8.13 (see bug 1994457, comment 10).