Created attachment 1134558 [details]
/var/log/neutron/server.log from Controller

Description of problem:

Repos were set up as follows.

Install the yum-plugin-priorities package:

  yum -y install yum-plugin-priorities

For CentOS 7 and RHEL 7, install the required .repo files:

  cd /etc/yum.repos.d/
  curl -O http://trunk.rdoproject.org/centos7/delorean-deps.repo
  curl -O http://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo

Upon completion of the packstack install:

[root@ip-192-169-142-127 ~]# nova-manage version
13.0.0-0.20160304162843.c5a45a2.el7.centos

The system (Controller && Compute) is configured and successfully running. DVR configuration was done exactly as it is supposed to be done on Liberty.

Update neutron.conf (a consolidated sketch of this change follows at the end of this report):

l3_ha = True
min_l3_agents_per_router = 2
l3_ha_net_cidr = 169.254.192.0/18

# openstack-service restart neutron   - success

Two L3 agents are running on the system (ML2&OVS&VXLAN setup via packstack):

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list | grep L3
| bccda000-325f-4a53-8189-fd65b9dd55e2 | L3 agent | ip-192-169-142-127.ip.secureserver.net | nova | :-)   | True | neutron-l3-agent |
| dcedf1fb-4253-4f81-99af-a59f05c5cb5c | L3 agent | ip-192-169-142-137.ip.secureserver.net | nova | :-)   | True | neutron-l3-agent |

However:

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-create --distributed True --ha True --tenant-id c10e12e26c67477ebe1c1127b2e810aa Router01
Not enough l3 agents available to ensure HA. Minimum required 2, available 1.
Neutron server returns request_ids: ['req-e60c5ef2-c33b-4ded-8406-ff44d09535e4']

Version-Release number of selected component (if applicable):

[root@ip-192-169-142-127 ~(keystone_admin)]# rpm -qa \*neutron\*
openstack-neutron-common-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
openstack-neutron-ml2-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
openstack-neutron-openvswitch-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
python-neutron-lib-0.0.3-0.20160227020344.999828a.el7.centos.noarch
openstack-neutron-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch
python-neutronclient-4.1.2-0.20160304195803.5d28651.el7.centos.noarch
python-neutron-8.0.0.0b4-0.20160304174813.0ae20a3.el7.centos.noarch

How reproducible:

Steps to Reproduce:
1. Build a Controller/Network && Compute nodes cluster
2. Convert to DVR (OK)
3. Enable l3_ha in neutron.conf
4. Restart neutron services

Actual results:

neutron router-create --distributed True --ha True --tenant-id c10e12e26c67477ebe1c1127b2e810aa Router01
Not enough l3 agents available to ensure HA. Minimum required 2, available 1

Expected results:

The distributed, L3-HA neutron router gets created, given the presence of two L3 agents on the cluster (DVR is running with no problems).

Additional info:

A successfully created and working distributed router for tenant demo gives the following reports:

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterDVR
+--------------------------------------+-----------------------------------------+----------------+-------+----------+
| id                                   | host                                    | admin_state_up | alive | ha_state |
+--------------------------------------+-----------------------------------------+----------------+-------+----------+
| bccda000-325f-4a53-8189-fd65b9dd55e2 | ip-192-169-142-127.ip.secureserver.net | True           | :-)   |          |
+--------------------------------------+-----------------------------------------+----------------+-------+----------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-show RouterDVR
+-------------------------+-----------------------------------------------------------------------------+
| Field                   | Value                                                                       |
+-------------------------+-----------------------------------------------------------------------------+
| admin_state_up          | True                                                                        |
| availability_zone_hints |                                                                             |
| availability_zones      | nova                                                                        |
| distributed             | True                                                                        |
| external_gateway_info   | {"network_id": "0b3aa2d4-e2a9-4272-b3ab-e57e104cd190", "enable_snat": true, |
|                         | "external_fixed_ips": [{"subnet_id":                                        |
|                         | "44c4a25c-4333-4cb7-af37-cec97aa3814c", "ip_address": "192.169.142.174"}]}  |
| ha                      | False                                                                       |
| id                      | 0c067901-4c2f-4e32-bde0-bd0fdff89a11                                        |
| name                    | RouterDVR                                                                   |
| routes                  |                                                                             |
| status                  | ACTIVE                                                                      |
| tenant_id               | c10e12e26c67477ebe1c1127b2e810aa                                            |
+-------------------------+-----------------------------------------------------------------------------+
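For reference, a minimal sketch of the HA-related neutron.conf change described above. The file path and the assumption that these options belong in the [DEFAULT] section are mine, not taken from this report; the option values are the ones quoted above.

# /etc/neutron/neutron.conf on the Controller (path and section assumed)
[DEFAULT]
l3_ha = True
min_l3_agents_per_router = 2
l3_ha_net_cidr = 169.254.192.0/18

# then restart neutron services, as above
openstack-service restart neutron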
Before attempting to create the router, on both nodes:

# systemctl start keepalived
# systemctl enable keepalived
Per https://bugs.launchpad.net/neutron/+bug/1365473, this should be packaged in Mitaka M3.
On the Compute Node, agent_mode=dvr_snat should be set in l3_agent.ini (see the config sketch after the output below). Then:

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-create --distributed True --ha True --tenant-id c0a3e61a3147419f8f5ceb9308395454 RouterDSA
Created a new router:
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | True                                 |
| availability_zone_hints |                                      |
| availability_zones      |                                      |
| distributed             | True                                 |
| external_gateway_info   |                                      |
| ha                      | True                                 |
| id                      | 906c24df-611d-479d-aac5-ab54ae60a091 |
| name                    | RouterDSA                            |
| routes                  |                                      |
| status                  | ACTIVE                               |
| tenant_id               | c0a3e61a3147419f8f5ceb9308395454     |
+-------------------------+--------------------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-show RouterDSA
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | True                                 |
| availability_zone_hints |                                      |
| availability_zones      | nova                                 |
| distributed             | True                                 |
| external_gateway_info   |                                      |
| ha                      | True                                 |
| id                      | 906c24df-611d-479d-aac5-ab54ae60a091 |
| name                    | RouterDSA                            |
| routes                  |                                      |
| status                  | ACTIVE                               |
| tenant_id               | c0a3e61a3147419f8f5ceb9308395454     |
+-------------------------+--------------------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterDSA
+--------------------------------------+-----------------------------------------+----------------+-------+----------+
| id                                   | host                                    | admin_state_up | alive | ha_state |
+--------------------------------------+-----------------------------------------+----------------+-------+----------+
| bccda000-325f-4a53-8189-fd65b9dd55e2 | ip-192-169-142-127.ip.secureserver.net | True           | :-)   | standby  |
| dcedf1fb-4253-4f81-99af-a59f05c5cb5c | ip-192-169-142-137.ip.secureserver.net | True           | :-)   | standby  |
+--------------------------------------+-----------------------------------------+----------------+-------+----------+
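For reference, a minimal sketch of the l3_agent.ini change described at the top of this comment. The file path and the [DEFAULT] section placement are assumed, not taken from the report; the option value is the one quoted above. (Note that comment #5 below corrects where this setting is meant to be used.)

# /etc/neutron/l3_agent.ini (path and section assumed)
[DEFAULT]
agent_mode = dvr_snat

# restart the L3 agent afterwards (unit name as packaged in RDO)
systemctl restart neutron-l3-agent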
The bug is closed. agent_mode=dvr_snat may be used in the case of two Network Nodes, which adds HA support for the DVR centralized default SNAT functionality. It is compatible with agent_mode=dvr running on the Compute Nodes. I apologize for misunderstanding this concept.
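To make the node roles concrete, a sketch of the l3_agent.ini layout this comment describes; the file path and [DEFAULT] section are assumptions, the agent_mode values are the ones named above.

# Network Nodes - /etc/neutron/l3_agent.ini (path assumed)
[DEFAULT]
agent_mode = dvr_snat   # centralized default SNAT, made HA across the two Network Nodes

# Compute Nodes - /etc/neutron/l3_agent.ini (path assumed)
[DEFAULT]
agent_mode = dvr        # distributed routing on the Compute Nodes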
Please remove this from the database. Everything works as expected and was tested during March 10-11, 2016 via the Delorean trunk. See the March 15, 2016 post on http://planet.rdoproject.org/ , "HA support for DVR centralized default SNAT functionality on RDO Mitaka Milestone 3": http://tm3.org/5j
Closing per comment #5. Thanks for testing!