While testing LB functionality, I found that if I kill the haproxy process, or fence the controller node that started haproxy, the LB is never restarted, neither on another controller node nor on the same one. In lbaas-agent.log, after killing the haproxy process I see entries like this:

2015-09-22 16:42:32.885 20418 WARNING neutron.services.loadbalancer.drivers.haproxy.namespace_driver [-] Error while connecting to stats socket: [Errno 111] ECONNREFUSED
2015-09-22 16:42:42.887 20418 WARNING neutron.services.loadbalancer.drivers.haproxy.namespace_driver [-] Error while connecting to stats socket: [Errno 111] ECONNREFUSED
2015-09-22 16:42:52.887 20418 WARNING neutron.services.loadbalancer.drivers.haproxy.namespace_driver [-] Error while connecting to stats socket: [Errno 111] ECONNREFUSED

Also, the LB status remains ACTIVE even after killing haproxy:

# neutron lb-pool-list
+--------------------------------------+----------+----------+-------------+----------+----------------+--------+
| id                                   | name     | provider | lb_method   | protocol | admin_state_up | status |
+--------------------------------------+----------+----------+-------------+----------+----------------+--------+
| 2c7b6eaf-a43b-447c-b88f-7a27fb58ffc9 | lbpool01 | haproxy  | ROUND_ROBIN | HTTP     | True           | ACTIVE |
+--------------------------------------+----------+----------+-------------+----------+----------------+--------+
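For reference, a sketch of the reproduction steps (the pool ID is the one from the table above; the qlbaas-<pool_id> namespace name follows the usual LBaaS agent convention, and the log path is an assumption that may differ per deployment):

```shell
# Pool ID from "neutron lb-pool-list"
POOL_ID=2c7b6eaf-a43b-447c-b88f-7a27fb58ffc9

# List processes running inside the LB namespace (haproxy should be there)
ip netns pids qlbaas-$POOL_ID

# Kill the haproxy instance for this pool
pkill -f "haproxy.*$POOL_ID"

# Watch the agent log: it only warns about ECONNREFUSED on the stats
# socket and never respawns haproxy (log path may vary by distro)
tail -f /var/log/neutron/lbaas-agent.log

# The pool status still shows ACTIVE despite the dead haproxy
neutron lb-pool-list
```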
Please re-open upstream; there's nothing RDO- or OSP-specific about this bug.
Apologies, I didn't know this was coming from a customer.
While our strategy is to move to Octavia, in alignment with upstream, our current assessment is that it will take more time for it to mature and be ready for production use. As a mid-term solution we aim to provide a better HA solution for LBaaS v2 using the HAProxy driver. This work is planned for Newton, tracked by these two bugs:

https://bugs.launchpad.net/neutron/+bug/1565511
https://bugs.launchpad.net/neutron/+bug/1565801
*** This bug has been marked as a duplicate of bug 1326224 ***