Description of problem:

A useful tool for making services externally accessible is the LoadBalancer service type. This service type uses the underlying cloud load balancing to provide an IP address that load balances among the endpoints of the service.

Steps to Reproduce:
1. oc run --image=celebdor/kuryr-demo demo
2. oc expose dc/demo --type=LoadBalancer --target-port 8080 --port 80
3. Take the IP from the load balancer status: oc describe svc demo
4. curl http://load_balanced_ip

Expected results:

HOSTNAME: HELLO, I AM ALIVE!!!

Additional info:

It should also be tested with the expose specifying a load balancer IP.
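The reproduction steps above can be sketched as a small script that waits for the cloud load balancer to publish an ingress IP before curling it. This is a minimal sketch, assuming a working OCP-on-OpenStack cluster; the polling loop and the jsonpath query are additions for illustration, not part of the original steps.

```shell
#!/bin/sh
# Sketch of the reproduction flow; assumes 'oc' is logged in to the cluster.
oc run --image=celebdor/kuryr-demo demo
oc expose dc/demo --type=LoadBalancer --target-port 8080 --port 80

# Poll until the cloud provider publishes the load balancer ingress IP.
LB_IP=""
while [ -z "$LB_IP" ]; do
  sleep 5
  LB_IP=$(oc get svc demo -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
done

# Expected reply per the bug description: <hostname>: HELLO! I AM ALIVE!!!
curl "http://${LB_IP}"
```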
Can be tested together with https://bugzilla.redhat.com/show_bug.cgi?id=1593662 on the OCP side once that's complete.
Hi Jon! We have a theory that you may already have tested this - could you confirm?
Hey! Yes, it was already tested and verified again in:

openstack-kuryr-kubernetes-controller-0.4.3-1.el7ost.noarch
openstack-kuryr-kubernetes-cni-0.4.3-1.el7ost.noarch
openstack-kuryr-kubernetes-common-0.4.3-1.el7ost.noarch

Verification steps:

Scenario 1 (not specifying the Load Balancer external IP):
----------------------------------------------------------

$ oc new-project test
$ oc run --image kuryr/demo demo
$ oc scale dc/demo --replicas=2
$ oc get pods -o wide
NAME           READY     STATUS    RESTARTS   AGE   IP            NODE
demo-1-dcmxw   1/1       Running   0          39s   10.11.0.104   app-node-1.openshift.example.com
demo-1-jlmg4   1/1       Running   0          2m    10.11.0.102   app-node-0.openshift.example.com

$ curl 10.11.0.104:8080
demo-1-dcmxw: HELLO! I AM ALIVE!!!
$ curl 10.11.0.102:8080
demo-1-jlmg4: HELLO! I AM ALIVE!!!

Expose the service (LoadBalancer type):

$ oc expose dc/demo --port 80 --target-port 8080 --type LoadBalancer

Wait until the Load Balancer is created.

$ oc get svc
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
demo      LoadBalancer   172.30.84.80   10.0.0.210    80:30713/TCP   1m

[cloud-user@bastion ~]$ openstack loadbalancer list
+----------------+-----------+-----------------+--------------+---------------------+----------+
| id             | name      | project_id      | vip_address  | provisioning_status | provider |
+----------------+-----------+-----------------+--------------+---------------------+----------+
...
| ded6ed04-7173- | test/demo | 293061856e1a4cf | 172.30.84.80 | ACTIVE              | octavia  |
+----------------+-----------+-----------------+--------------+---------------------+----------+

Curl to internal IP:

$ curl 172.30.84.80
demo-1-jlmg4: HELLO! I AM ALIVE!!!
$ curl 172.30.84.80
demo-1-dcmxw: HELLO! I AM ALIVE!!!

Curl to external IP (from the Bastion instance):

[cloud-user@bastion ~]$ curl 10.0.0.210
demo-1-jlmg4: HELLO! I AM ALIVE!!!
[cloud-user@bastion ~]$ curl 10.0.0.210
demo-1-dcmxw: HELLO! I AM ALIVE!!!
Scenario 2 (specifying the Load Balancer external IP):
------------------------------------------------------

$ oc new-project test3
$ oc run --image kuryr/demo demo
$ oc scale dc/demo --replicas=2
$ oc get pods -o wide
NAME           READY     STATUS    RESTARTS   AGE   IP            NODE
demo-1-2txqk   1/1       Running   0          9s    10.11.0.104   app-node-1.openshift.example.com
demo-1-6pzc2   1/1       Running   0          9m    10.11.0.113   app-node-0.openshift.example.com

$ curl 10.11.0.104:8080
demo-1-2txqk: HELLO! I AM ALIVE!!!
$ curl 10.11.0.113:8080
demo-1-6pzc2: HELLO! I AM ALIVE!!!

Expose the service (LoadBalancer type), specifying the external IP (a floating IP):

$ oc expose dc/demo --port 80 --target-port 8080 --type LoadBalancer --external-ip 10.0.0.233

Wait until the Load Balancer is created.

$ oc get svc
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP             PORT(S)        AGE
demo      LoadBalancer   172.30.78.235   10.0.0.203,10.0.0.233   80:30558/TCP   1m

Note that two external IPs are set: the requested one (10.0.0.233) and a new one (10.0.0.203).

[cloud-user@bastion ~]$ openstack loadbalancer list
+----------------+------------+------------+---------------+---------------------+----------+
| id             | name       | project_id | vip_address   | provisioning_status | provider |
+----------------+------------+------------+---------------+---------------------+----------+
...
| 1ceac96c-13b0- | test3/demo | 293061856e | 172.30.78.235 | ACTIVE              | octavia  |
+----------------+------------+------------+---------------+---------------------+----------+

Curl to internal IP:

$ curl 172.30.78.235
demo-1-2txqk: HELLO! I AM ALIVE!!!
$ curl 172.30.78.235
demo-1-6pzc2: HELLO! I AM ALIVE!!!

Curl to external IP (from the Bastion instance):

[cloud-user@bastion ~]$ curl 10.0.0.233
curl: (7) Failed connect to 10.0.0.233:80; No route to host

The requested IP is not reachable. The assigned Load Balancer IP is reachable instead:

[cloud-user@bastion ~]$ curl 10.0.0.203
demo-1-2txqk: HELLO! I AM ALIVE!!!
[cloud-user@bastion ~]$ curl 10.0.0.203
demo-1-6pzc2: HELLO! I AM ALIVE!!!
The same test has been executed requesting a previously created floating IP address, but the result is the same: there is no connectivity to the requested Load Balancer IP.

Toni, could you confirm this is the expected behaviour?
This looks good. Can you add, for the second case, the output of:

oc get svc demo -o yaml

This way we'll be able to verify that it is valid.
Adding requested info (note that the IP addresses have changed):

$ oc expose dc/demo --port 80 --target-port 8080 --type LoadBalancer --external-ip 10.46.22.39

(the 10.46.22.39 floating IP has been previously created)

$ oc get svc
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP               PORT(S)        AGE
demo      LoadBalancer   172.30.25.133   10.46.22.46,10.46.22.39   80:31607/TCP   1m

$ oc get svc demo -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    openstack.org/kuryr-lbaas-spec: '{"versioned_object.data": {"ip": "172.30.25.133", "lb_ip": null, "ports": [{"versioned_object.data": {"name": null, "port": 80, "protocol": "TCP"}, "versioned_object.name": "LBaaSPortSpec", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.0"}], "project_id": "5e252df020604a0f83d4bf22f7f45fcc", "security_groups_ids": ["dd9d9528-79c5-4dab-8f94-bd5d5127a0bb"], "subnet_id": "a2302144-d4e1-43d1-ab17-20eed2df4bbd", "type": "LoadBalancer"}, "versioned_object.name": "LBaaSServiceSpec", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.0"}'
  creationTimestamp: 2018-08-30T10:15:46Z
  labels:
    run: demo
  name: demo
  namespace: test
  resourceVersion: "439210"
  selfLink: /api/v1/namespaces/test/services/demo
  uid: a94982e9-ac3d-11e8-9f03-fa163e2b7bc6
spec:
  clusterIP: 172.30.25.133
  externalIPs:
  - 10.46.22.39
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31607
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: demo
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.46.22.46

$ curl 10.46.22.46
demo-1-k2ghr: HELLO! I AM ALIVE!!!
$ curl 10.46.22.46
demo-1-fnhcv: HELLO! I AM ALIVE!!!
$ curl 10.46.22.39
curl: (7) Failed connect to 10.46.22.39:80; No route to host
Quoting the official Kubernetes documentation:

"Traffic from the external load balancer will be directed at the backend Pods, though exactly how that works depends on the cloud provider. Some cloud providers allow the loadBalancerIP to be specified. In those cases, the load-balancer will be created with the user-specified loadBalancerIP. If the loadBalancerIP field is not specified, an ephemeral IP will be assigned to the loadBalancer. If the loadBalancerIP is specified, but the cloud provider does not support the feature, the field will be ignored."

OpenStack cloud with Kuryr does not support specifying the loadBalancerIP on OSP 13, so the field is ignored. The user should check the published service IP from status.loadBalancer.ingress.ip.
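Since the requested IP may be ignored, the reliable way to find the published address is to read it out of the service status rather than the EXTERNAL-IP column (which mixes in spec.externalIPs). A minimal sketch, assuming the service is named demo as in the transcripts above:

```shell
# Read the IP the cloud provider actually assigned; spec.externalIPs is
# user-supplied and may not be reachable, as seen in scenario 2.
LB_IP=$(oc get svc demo -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://${LB_IP}"
```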
Verified again in:

openstack-kuryr-kubernetes-controller-0.4.3-2.el7ost.noarch
openstack-kuryr-kubernetes-cni-0.4.3-2.el7ost.noarch
openstack-kuryr-kubernetes-common-0.4.3-2.el7ost.noarch
To specify the external (floating) IP you should use '--load-balancer-ip=' and not '--external-ip'.

Could you please verify it with '--load-balancer-ip='?
Kuryr supports letting the user specify the floating IP for services of type LoadBalancer. To specify the load balancer floating IP (e.g. 78.11.24.19), you need to:

1. Create a floating IP address.
2. Specify 'loadBalancerIP: 78.11.24.19' under the service spec.

For 'oc expose', the '--load-balancer-ip' option is translated into 'loadBalancerIP: 78.11.24.19' in the service spec.
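The two steps above can be sketched end to end. This is a minimal sketch, not the exact commands used in this bug: the external network name 'public', the service name 'demo', and the selector are assumptions for illustration.

```shell
# Step 1: create the floating IP on the external network ('public' is an
# assumed network name; adjust to your deployment).
openstack floating ip create public

# Step 2: declare the service with loadBalancerIP set to that floating IP;
# this is what 'oc expose ... --load-balancer-ip 78.11.24.19' produces.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: LoadBalancer
  loadBalancerIP: 78.11.24.19
  selector:
    run: demo
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
EOF
```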
Verified with '--load-balancer-ip=' and it works:

1. Create a floating IP on the public network (10.46.22.39).

2. Expose the service (LoadBalancer type), specifying the Load Balancer IP (the just-created floating IP):

$ oc expose dc/demo --port 80 --target-port 8080 --type LoadBalancer --load-balancer-ip 10.46.22.39

3. Check the service:

$ oc get svc demo
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
demo      LoadBalancer   172.30.91.107   10.46.22.39   80:31821/TCP   1m

[openshift@master-0 ~]$ oc get svc demo -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    openstack.org/kuryr-lbaas-spec: '{"versioned_object.data": {"ip": "172.30.91.107", "lb_ip": "10.46.22.39", "ports": [{"versioned_object.data": {"name": null, "port": 80, "protocol": "TCP"}, "versioned_object.name": "LBaaSPortSpec", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.0"}], "project_id": "5e252df020604a0f83d4bf22f7f45fcc", "security_groups_ids": ["dd9d9528-79c5-4dab-8f94-bd5d5127a0bb"], "subnet_id": "a2302144-d4e1-43d1-ab17-20eed2df4bbd", "type": "LoadBalancer"}, "versioned_object.name": "LBaaSServiceSpec", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.0"}'
  creationTimestamp: 2018-08-30T11:44:33Z
  labels:
    run: demo
  name: demo
  namespace: test
  resourceVersion: "448390"
  selfLink: /api/v1/namespaces/test/services/demo
  uid: 10069c52-ac4a-11e8-9f03-fa163e2b7bc6
spec:
  clusterIP: 172.30.91.107
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.46.22.39
  ports:
  - nodePort: 31821
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: demo
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.46.22.39

[openshift@master-0 ~]$ oc get ep demo -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    openstack.org/kuryr-lbaas-spec: '{"versioned_object.data": {"ip": "172.30.91.107", "lb_ip": "10.46.22.39", "ports": [{"versioned_object.data": {"name": null, "port": 80, "protocol": "TCP"}, "versioned_object.name": "LBaaSPortSpec", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.0"}], "project_id": "5e252df020604a0f83d4bf22f7f45fcc", "security_groups_ids": ["dd9d9528-79c5-4dab-8f94-bd5d5127a0bb"], "subnet_id": "a2302144-d4e1-43d1-ab17-20eed2df4bbd", "type": "LoadBalancer"}, "versioned_object.name": "LBaaSServiceSpec", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.0"}'
    openstack.org/kuryr-lbaas-state: '{"versioned_object.data": {"listeners": [{"versioned_object.changes": ["id"], "versioned_object.data": {"id": "a397dd3f-a9c0-480e-8ae8-e88c1316f678", "loadbalancer_id": "d9822e38-a535-4815-a0fa-6b828317ac69", "name": "test/demo:TCP:80", "port": 80, "project_id": "5e252df020604a0f83d4bf22f7f45fcc", "protocol": "TCP"}, "versioned_object.name": "LBaaSListener", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.0"}], "loadbalancer": {"versioned_object.data": {"id": "d9822e38-a535-4815-a0fa-6b828317ac69", "ip": "172.30.91.107", "name": "test/demo", "port_id": "21230534-8ea8-4f75-9364-d2f61891700e", "project_id": "5e252df020604a0f83d4bf22f7f45fcc", "provider": "octavia", "security_groups": ["dd9d9528-79c5-4dab-8f94-bd5d5127a0bb"], "subnet_id": "a2302144-d4e1-43d1-ab17-20eed2df4bbd"}, "versioned_object.name": "LBaaSLoadBalancer", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.1"}, "members": [{"versioned_object.changes": ["id"], "versioned_object.data": {"id": "c92e8a1d-6133-42ac-87c9-b7501f3827e2", "ip": "10.11.0.3", "name": "test/demo-1-k2ghr:8080", "pool_id": "068a1d18-d91d-4e40-baf2-8f432d7f9c5a", "port": 8080, "project_id": "5e252df020604a0f83d4bf22f7f45fcc", "subnet_id": "a2302144-d4e1-43d1-ab17-20eed2df4bbd"}, "versioned_object.name": "LBaaSMember", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.0"}, {"versioned_object.changes": ["id"], "versioned_object.data": {"id": "8478d371-08da-4895-b1b7-f3f0fb0a0fb6", "ip": "10.11.0.7", "name": "test/demo-1-fnhcv:8080", "pool_id": "068a1d18-d91d-4e40-baf2-8f432d7f9c5a", "port": 8080, "project_id": "5e252df020604a0f83d4bf22f7f45fcc", "subnet_id": "a2302144-d4e1-43d1-ab17-20eed2df4bbd"}, "versioned_object.name": "LBaaSMember", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.0"}], "pools": [{"versioned_object.changes": ["id"], "versioned_object.data": {"id": "068a1d18-d91d-4e40-baf2-8f432d7f9c5a", "listener_id": "a397dd3f-a9c0-480e-8ae8-e88c1316f678", "loadbalancer_id": "d9822e38-a535-4815-a0fa-6b828317ac69", "name": "test/demo:TCP:80", "project_id": "5e252df020604a0f83d4bf22f7f45fcc", "protocol": "TCP"}, "versioned_object.name": "LBaaSPool", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.0"}], "service_pub_ip_info": {"versioned_object.data": {"alloc_method": "user", "ip_addr": "10.46.22.39", "ip_id": "26b01206-c878-4ca9-953e-35e7b203c4ca"}, "versioned_object.name": "LBaaSPubIp", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.0"}}, "versioned_object.name": "LBaaSState", "versioned_object.namespace": "kuryr_kubernetes", "versioned_object.version": "1.0"}'
  creationTimestamp: 2018-08-30T11:44:33Z
  labels:
    run: demo
  name: demo
  namespace: test
  resourceVersion: "448391"
  selfLink: /api/v1/namespaces/test/endpoints/demo
  uid: 1007bc06-ac4a-11e8-9f03-fa163e2b7bc6
subsets:
- addresses:
  - ip: 10.11.0.3
    nodeName: app-node-1.openshift.example.com
    targetRef:
      kind: Pod
      name: demo-1-k2ghr
      namespace: test
      resourceVersion: "437993"
      uid: 3c5ffc22-ac3c-11e8-9f03-fa163e2b7bc6
  - ip: 10.11.0.7
    nodeName: app-node-0.openshift.example.com
    targetRef:
      kind: Pod
      name: demo-1-fnhcv
      namespace: test
      resourceVersion: "438222"
      uid: 4400c036-ac3c-11e8-9f03-fa163e2b7bc6
  ports:
  - port: 8080
    protocol: TCP

4. Check connectivity to the Load Balancer; replies come from both pods:

$ curl 10.46.22.39
demo-1-fnhcv: HELLO! I AM ALIVE!!!
$ curl 10.46.22.39
demo-1-k2ghr: HELLO! I AM ALIVE!!!
According to our records, this should be resolved by openstack-kuryr-kubernetes-0.4.3-2.el7ost. This build is available now.