Description of problem:
After rebooting the cluster, the load balancers no longer work.

Version-Release number of selected component (if applicable):
13.0

How reproducible:
Always after a reboot

Steps to Reproduce:
1. Create a load balancer with Octavia (see the command sketch under Additional info)
2. Wait until it is accessible (the amphora is running)
3. Reboot the whole cluster

Actual results:
$ openstack loadbalancer list
+--------------------------------------+------------------------------------------------+----------------------------------+----------------+---------------------+----------+
| id                                   | name                                           | project_id                       | vip_address    | provisioning_status | provider |
+--------------------------------------+------------------------------------------------+----------------------------------+----------------+---------------------+----------+
| 5174d8f0-15b7-4051-b8db-8cc2216505cd | default/router                                 | 922c89bfc75b43fbb6cb23ae55480a74 | 172.30.205.175 | ACTIVE              | octavia  |
| e7d90234-0a8a-4f13-ae57-9dbbd7af9a9c | openshift-ansible-openshift.example.com-api-lb | 922c89bfc75b43fbb6cb23ae55480a74 | 172.30.0.1     | ACTIVE              | octavia  |
+--------------------------------------+------------------------------------------------+----------------------------------+----------------+---------------------+----------+

$ openstack loadbalancer amphora list
+--------------------------------------+--------------------------------------+----------------+------------+---------------+---------------+
| id                                   | loadbalancer_id                      | status         | role       | lb_network_ip | ha_ip         |
+--------------------------------------+--------------------------------------+----------------+------------+---------------+---------------+
| 59ccc690-a5e7-4a67-86dc-4999d78b01a3 | a77fc719-74fb-4951-855a-d417fb858bb1 | ERROR          | STANDALONE | 172.24.0.5    | 172.30.13.157 |
| 7dc5e169-86ec-4ccc-b7c8-bd337183ff89 | e7d90234-0a8a-4f13-ae57-9dbbd7af9a9c | PENDING_DELETE | STANDALONE | 172.24.0.16   | 172.30.0.1    |
+--------------------------------------+--------------------------------------+----------------+------------+---------------+---------------+

Expected results:
The load balancers are working and the amphora servers are running.

Additional info:
Most of the time, the only way to remove the load balancers is directly in the database.
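For reference, a minimal sketch of the reproduction commands, assuming a tenant subnet named private-subnet and a placeholder load balancer name test-lb (both names are illustrative, not taken from this report):

$ openstack loadbalancer create --name test-lb --vip-subnet-id private-subnet
$ openstack loadbalancer show test-lb -c provisioning_status   # repeat until the status reaches ACTIVE
$ openstack loadbalancer amphora list                          # the amphora should be in ALLOCATED status
(reboot all cluster nodes, then re-run the two list commands above)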
*** This bug has been marked as a duplicate of bug 1609064 ***