Bug 1174691 - LBaaS VIP does not work with IPv6 addresses because haproxy cannot bind socket
Status: CLOSED ERRATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-octavia
Version: 6.0 (Juno)
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: high
Target Milestone: ga
Target Release: 12.0 (Pike)
Assigned To: Nir Magnezi
QA Contact: Alexander Stafeyev
Keywords: Triaged
Depends On: 1433537
Blocks: 1174741 1305021 1305023
 
Reported: 2014-12-16 05:10 EST by Nir Magnezi
Modified: 2018-02-05 14:02 EST
CC: 12 users

Fixed In Version: openstack-octavia-1.0.0-0.20170628055307.3ccd8a3.el7ost
Doc Type: Enhancement
Last Closed: 2017-12-13 15:33:46 EST
Type: Bug


Attachments
lbaas-agent.log (18.44 KB, text/plain)
2014-12-16 05:10 EST, Nir Magnezi


External Trackers
Tracker ID Priority Status Summary Last Updated
Launchpad 1403001 None None None Never
Red Hat Product Errata RHEA-2017:3462 normal SHIPPED_LIVE Red Hat OpenStack Platform 12.0 Enhancement Advisory 2018-02-15 20:43:25 EST

Description Nir Magnezi 2014-12-16 05:10:07 EST
Created attachment 969508 [details]
lbaas-agent.log

Description of problem:
=======================
The IPv6 VIP remains in the ERROR state because haproxy cannot bind its socket.

neutron.services.loadbalancer.agent.agent_manager Traceback (most recent call last):
neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/agent/agent_manager.py", line 214, in create_vip
neutron.services.loadbalancer.agent.agent_manager     driver.create_vip(vip)
neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 318, in create_vip
neutron.services.loadbalancer.agent.agent_manager     self._refresh_device(vip['pool_id'])
neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 315, in _refresh_device
neutron.services.loadbalancer.agent.agent_manager     self.deploy_instance(logical_config)
neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/openstack/common/lockutils.py", line 249, in inner
neutron.services.loadbalancer.agent.agent_manager     return f(*args, **kwargs)
neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 311, in deploy_instance
neutron.services.loadbalancer.agent.agent_manager     self.create(logical_config)
neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 92, in create
neutron.services.loadbalancer.agent.agent_manager     self._spawn(logical_config)
neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 115, in _spawn
neutron.services.loadbalancer.agent.agent_manager     ns.netns.execute(cmd)
neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 550, in execute
neutron.services.loadbalancer.agent.agent_manager     check_exit_code=check_exit_code, extra_ok_codes=extra_ok_codes)
neutron.services.loadbalancer.agent.agent_manager   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 84, in execute
neutron.services.loadbalancer.agent.agent_manager     raise RuntimeError(m)
neutron.services.loadbalancer.agent.agent_manager RuntimeError: 
neutron.services.loadbalancer.agent.agent_manager Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec',
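
The rootwrap command in the traceback is cut off above. To see haproxy's own error directly, one option is to re-run it by hand in the foreground inside the pool's namespace; the namespace name and config path below follow the namespace driver's usual conventions (qlbaas-<pool_id>, /var/lib/neutron/lbaas/<pool_id>/conf) and are only a sketch, so substitute the real pool ID:

   # ip netns exec qlbaas-<pool_id> haproxy -d -f /var/lib/neutron/lbaas/<pool_id>/conf

With -d, haproxy stays in the foreground and prints the bind error to stderr rather than only emitting the ALERT.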


Version-Release number of selected component (if applicable):
=============================================================
RHEL-OSP6-Beta: 2014-12-12.1
openstack-neutron-2014.2.1-2.el7ost.noarch
haproxy-1.5.2-3.el7_0.x86_64

How reproducible:
=================
2/2

Steps to Reproduce:
===================
1. Spawn two instances and wait for them to become active
    Via tenant_a:
    nova boot tenant_a_instance --flavor m1.small --image <image_id> --min-count 2 --key-name tenant_a_keypair --security-groups default --nic net-id=<internal_ipv4_a_id> --nic net-id=<tenant_a_radvd_stateful_id>

2. Retrieve your instances' IPv6 addresses, your tenant ID, and the ID of the subnet you are about to use.
   You may use any IPv6 subnet; in this example we'll use tenant_a_radvd_stateful_subnet.
   # nova list | awk '/tenant_a_instance/ {print $12}' | cut -d"=" -f2 | sed -e 's/;//'
   # neutron subnet-list | awk '/tenant_a_radvd_stateful_subnet/ {print $2}'

3. Create an LBaaS pool
   # neutron lb-pool-create --lb-method ROUND_ROBIN --name Ipv6_LBaaS --protocol HTTP --subnet-id c54f8745-2aba-42da-8845-15050db1d5d1

4. Add members to the pool
   # neutron lb-member-create Ipv6_LBaaS --address 2001:65:65:65:f816:3eff:feda:b05e --protocol-port 80
   # neutron lb-member-create Ipv6_LBaaS --address 2001:65:65:65:f816:3eff:fe82:5d8 --protocol-port 80

5. Create a VIP:
   # neutron lb-vip-create Ipv6_LBaaS --name Ipv6_LBaaS_VIP --protocol-port 80 --protocol HTTP --subnet-id 0458273a-efe8-4d37-b2a0-e11cbd5e4d13

6. Check the VIP status:
   # neutron lb-vip-show Ipv6_LBaaS_VIP | grep status
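
   If the status comes back as ERROR, the pool status and the LBaaS agent log on the network node usually carry more detail (the log path below is the usual RHEL-OSP default and may differ in your deployment):
   # neutron lb-pool-show Ipv6_LBaaS | grep status
   # less /var/log/neutron/lbaas-agent.log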

Actual results:
===============
1. status = ERROR

2. lbaas-agent.log (attached):
 
TRACE neutron.services.loadbalancer.agent.agent_manager Stderr: '[ALERT] 349/101731 (20878) : Starting frontend fcb9db64-e877-4e95-a86f-fed6d1b244c2: cannot bind socket [2001:64:64:64::a:80]\n'

Expected results:
=================
The IPv6 VIP should be created successfully and reach ACTIVE status.

Additional info:
================
haproxy configuration:
global
        daemon
        user nobody
        group haproxy
        log /dev/log local0
        log /dev/log local1 notice
        stats socket /var/lib/neutron/lbaas/2c18a738-05f4-4099-8348-94575c9ed290/sock mode 0666 level user
defaults
        log global
        retries 3
        option redispatch
        timeout connect 5000
        timeout client 50000
        timeout server 50000
frontend cb833240-d5ed-43b9-9ef1-5bc70e961366
        option tcplog
        bind 2001:65:65:65:f816:3eff:fe86:d7ce:80
        mode http
        default_backend 2c18a738-05f4-4099-8348-94575c9ed290
        option forwardfor
backend 2c18a738-05f4-4099-8348-94575c9ed290
        mode http
        balance roundrobin
        option forwardfor
        timeout check 3s
        option httpchk GET /
        http-check expect rstatus 200
        server a2b475f0-3247-49d4-8e04-bf570ffc9fb2 2001:65:65:65:f816:3eff:fe82:5d8:80 weight 1 check inter 3s fall 1
        server ab96b468-3950-47ea-a37b-f9b9fab7485b 2001:65:65:65:f816:3eff:feda:b05e:80 weight 1 check inter 3s fall 1
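
The bind failure can be narrowed down by looking inside the namespace that the haproxy namespace driver creates for the pool. The namespace name below follows the driver's qlbaas-<pool_id> convention, using the pool ID from the stats socket path above, and is meant as a sketch rather than an exact recipe:

   # ip netns exec qlbaas-2c18a738-05f4-4099-8348-94575c9ed290 ip -6 addr show
   # ip netns exec qlbaas-2c18a738-05f4-4099-8348-94575c9ed290 ss -ltn

If the VIP address is missing from the namespace interface, or is still flagged "tentative" because IPv6 duplicate address detection has not completed, bind() fails and haproxy reports exactly this kind of "cannot bind socket" ALERT.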
Comment 2 Nir Yechiel 2014-12-16 06:09:46 EST
Thanks, Nir, for filing this. We did not expect the Neutron advanced services to operate properly with IPv6, but it's good to have confirmation of this, specifically for LBaaS, which is the service we do support in RHEL OSP (VPNaaS and FWaaS are still in Tech Preview).

I will make sure this is documented as well so it's clear we provide support for IPv4 only.


Thanks,
Nir
Comment 3 Assaf Muller 2017-01-12 09:12:00 EST
Just to confirm, Octavia supports IPv6 correct?
Comment 4 Nir Magnezi 2017-01-16 03:19:19 EST
(In reply to Assaf Muller from comment #3)
> Just to confirm, Octavia supports IPv6 correct?

The LBaaSv2 API supports it, and I also managed to create a loadbalancer + listener with an IPv6 subnet:

# Configuration for nir-ipv6-private-subnet
global
    daemon
    user nobody
    group nogroup
    log /dev/log local0
    log /dev/log local1 notice
    stats socket /var/lib/octavia/9928a487-f7bd-4ec9-bf5d-45305c3e152d.sock mode 0666 level user

defaults
    log global
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000



frontend 9928a487-f7bd-4ec9-bf5d-45305c3e152d
    option httplog
    bind fd0c:e9d8:d85c::3:80
    mode http


So, it should work, but I don't know if it was tested.
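
A quick smoke test from a client with IPv6 connectivity to that subnet would be something along these lines (the address is the listener bind address from the config above; -g keeps curl from treating the brackets as a glob pattern):

# curl -g -6 http://[fd0c:e9d8:d85c::3]:80/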

BTW, the above-mentioned LP bug was fixed upstream in Liberty: https://review.openstack.org/#/c/185556/
Comment 16 errata-xmlrpc 2017-12-13 15:33:46 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3462
