Description of problem:

By default, Neutron L3HA is automatically turned on based on the controller count. Since we now have composable roles, it makes less sense to use ControllerCount as a test condition, because users can move L3 agents to dedicated nodes. L3HA should be turned on automatically based on the number of L3 agents instead of the number of controllers.

Version-Release number of selected component (if applicable): 10

How reproducible: Every time

From /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-api.yaml:

conditions:
  auto_enable_l3_ha:
    and:
    - not:
        equals:
        - get_param: ControllerCount
        - 1
    - equals:
      - get_param: NeutronEnableDVR
      - false
(...)
      neutron::server::l3_ha: {if: ["auto_enable_l3_ha", true, {get_param: NeutronL3HA}]}

Steps to Reproduce:
1. Deploy 3 Controller nodes and 1 Networker node running the L3/DHCP/Metadata agents.

Actual results:
Routers cannot be created because L3HA is turned on.

Expected results:
L3HA is turned off, as there is only one L3 agent.

Additional info:
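For illustration, a condition keyed on the number of nodes that actually host the L3 agent could look roughly like the sketch below. Note that "NetworkerCount" is a hypothetical parameter name used only to show the idea; the real parameter depends on which composable role carries the L3 agent:

# Hypothetical sketch only: enable L3HA based on the count of nodes
# hosting the L3 agent, rather than on ControllerCount.
conditions:
  auto_enable_l3_ha:
    and:
    - not:
        equals:
        - get_param: NetworkerCount  # assumed name for the L3 agent role count
        - 1
    - equals:
      - get_param: NeutronEnableDVR
      - false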
I tried to work around this issue by setting NeutronL3HA: False in an environment file, but it doesn't work.
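For reference, the environment file was a minimal one along these lines:

# Attempted workaround; does not take effect (see the explanation below):
parameter_defaults:
  NeutronL3HA: False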
This has already been discussed upstream in https://bugs.launchpad.net/tripleo/+bug/1629187, and I think we know what the fix is (move the conditional calculation into puppet-tripleo). I'll look at getting a fix posted today.
(In reply to Marius Cornea from comment #1)
> I tried to work around this issue by setting NeutronL3HA: False in an
> environment file, but it doesn't work.

In the end I was able to work around this issue by overriding the neutron::server::l3_ha hieradata directly:

parameter_defaults:
  ControllerExtraConfig:
    neutron::server::l3_ha: False
NeutronL3HA: False does not work because auto_enable_l3_ha evaluates to true when ControllerCount > 1. Then

  neutron::server::l3_ha: {if: ["auto_enable_l3_ha", true, {get_param: NeutronL3HA}]}

ignores NeutronL3HA whenever auto_enable_l3_ha is true (the evaluation is spelled out step by step just below).

Overriding the neutron::server::l3_ha hieradata indeed works, but it is not really convenient.

Cheers,
Greg
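For completeness, the evaluation trace for a deployment with ControllerCount: 3 and NeutronEnableDVR: false:

# Both operands of the "and" in the condition are true (3 != 1, and DVR
# is disabled), so:
#   auto_enable_l3_ha -> true
# The Heat "if" function then returns its second argument, the literal
# true, and never reads the third argument {get_param: NeutronL3HA}:
#   neutron::server::l3_ha: {if: ["auto_enable_l3_ha", true, {get_param: NeutronL3HA}]}
#     -> neutron::server::l3_ha: true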
> Overriding the neutron::server::l3_ha hieradata indeed works, but it is
> not really convenient.

Yes, though I think it is a reasonable interim workaround. Upstream patches that aim to resolve the issue have been posted:

https://review.openstack.org/#/c/398926/
https://review.openstack.org/#/c/398934/
Agreed. Thanks for the quick resolution, Steve, much appreciated! Please let us know when the fix is merged downstream.
The patches to master have landed:

https://review.openstack.org/#/c/398926/
https://review.openstack.org/#/c/398934/

Stable backports proposed:

https://review.openstack.org/#/c/402369/
https://review.openstack.org/#/c/402370/