+++ This bug was initially created as a clone of Bug #1512375 +++

Description of problem:
Create a health monitor and add it to a pool, then shut down or delete one member: the member's operating status is not set to inactive when the maximum number of failed retries is reached for the instance.

Version-Release number of selected component (if applicable):
RHOSP10

How reproducible:

Steps to Reproduce:
1. Create an LBaaSv2 load balancer:

$ nova list
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks            |
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| e7e10318-ac85-4191-8a56-fed46f2422f9 | node1 | ACTIVE | -          | Running     | private=10.10.1.103 |
| 8d691b56-cedb-4723-82f4-65938e2f71a8 | node2 | ACTIVE | -          | Running     | private=10.10.1.102 |
+--------------------------------------+-------+--------+------------+-------------+---------------------+

$ neutron lbaas-loadbalancer-status 041b24a3-55e1-4880-b04f-cf22ab057b30
{
  "loadbalancer": {
    "name": "lb1",
    "provisioning_status": "ACTIVE",
    "listeners": [
      {
        "name": "listener1",
        "provisioning_status": "ACTIVE",
        "pools": [
          {
            "name": "pool1",
            "provisioning_status": "ACTIVE",
            "healthmonitor": {
              "provisioning_status": "ACTIVE",
              "type": "HTTP",
              "id": "083f068f-3ee1-471b-a6ba-2438648078c0",
              "name": ""
            },
            "members": [
              {
                "name": "",
                "provisioning_status": "ACTIVE",
                "address": "10.10.1.103",
                "protocol_port": 80,
                "id": "97e4de09-7e88-41fb-9e7f-5b061fd4d44f",
                "operating_status": "ONLINE"
              },
              {
                "name": "",
                "provisioning_status": "ACTIVE",
                "address": "10.10.1.102",
                "protocol_port": 80,
                "id": "9bdc780f-2553-40cf-82bf-aa43ab94ce8a",
                "operating_status": "ONLINE"
              }
            ],
            "id": "17c63199-b63e-4f22-83a3-baea98345923",
            "operating_status": "ONLINE"
          }
        ],
        "l7policies": [],
        "id": "2c1f8311-d834-45af-8efb-05ff1684e2a6",
        "operating_status": "ONLINE"
      }
    ],
    "pools": [
      {
        "name": "pool1",
        "provisioning_status": "ACTIVE",
        "healthmonitor": {
          "provisioning_status": "ACTIVE",
          "type": "HTTP",
          "id": "083f068f-3ee1-471b-a6ba-2438648078c0",
          "name": ""
        },
        "members": [
          {
            "name": "",
            "provisioning_status": "ACTIVE",
            "address": "10.10.1.103",
            "protocol_port": 80,
            "id": "97e4de09-7e88-41fb-9e7f-5b061fd4d44f",
            "operating_status": "ONLINE"
          },
          {
            "name": "",
            "provisioning_status": "ACTIVE",
            "address": "10.10.1.102",
            "protocol_port": 80,
            "id": "9bdc780f-2553-40cf-82bf-aa43ab94ce8a",
            "operating_status": "ONLINE"
          }
        ],
        "id": "17c63199-b63e-4f22-83a3-baea98345923",
        "operating_status": "ONLINE"
      }
    ],
    "id": "041b24a3-55e1-4880-b04f-cf22ab057b30",
    "operating_status": "ONLINE"
  }
}

2. Delete one of the members:

$ nova delete e7e10318-ac85-4191-8a56-fed46f2422f9
$ nova list
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks            |
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| 8d691b56-cedb-4723-82f4-65938e2f71a8 | node2 | ACTIVE | -          | Running     | private=10.10.1.102 |
+--------------------------------------+-------+--------+------------+-------------+---------------------+

3.
Check the LBaaS status again:

$ neutron lbaas-loadbalancer-status 041b24a3-55e1-4880-b04f-cf22ab057b30
{
  "loadbalancer": {
    "name": "lb1",
    "provisioning_status": "ACTIVE",
    "listeners": [
      {
        "name": "listener1",
        "provisioning_status": "ACTIVE",
        "pools": [
          {
            "name": "pool1",
            "provisioning_status": "ACTIVE",
            "healthmonitor": {
              "provisioning_status": "ACTIVE",
              "type": "HTTP",
              "id": "083f068f-3ee1-471b-a6ba-2438648078c0",
              "name": ""
            },
            "members": [
              {
                "name": "",
                "provisioning_status": "ACTIVE",
                "address": "10.10.1.103",
                "protocol_port": 80,
                "id": "97e4de09-7e88-41fb-9e7f-5b061fd4d44f",
                "operating_status": "ONLINE"
              },
              {
                "name": "",
                "provisioning_status": "ACTIVE",
                "address": "10.10.1.102",
                "protocol_port": 80,
                "id": "9bdc780f-2553-40cf-82bf-aa43ab94ce8a",
                "operating_status": "ONLINE"
              }
            ],
            "id": "17c63199-b63e-4f22-83a3-baea98345923",
            "operating_status": "ONLINE"
          }
        ],
        "l7policies": [],
        "id": "2c1f8311-d834-45af-8efb-05ff1684e2a6",
        "operating_status": "ONLINE"
      }
    ],
    "pools": [
      {
        "name": "pool1",
        "provisioning_status": "ACTIVE",
        "healthmonitor": {
          "provisioning_status": "ACTIVE",
          "type": "HTTP",
          "id": "083f068f-3ee1-471b-a6ba-2438648078c0",
          "name": ""
        },
        "members": [
          {
            "name": "",
            "provisioning_status": "ACTIVE",
            "address": "10.10.1.103",
            "protocol_port": 80,
            "id": "97e4de09-7e88-41fb-9e7f-5b061fd4d44f",
            "operating_status": "ONLINE"
          },
          {
            "name": "",
            "provisioning_status": "ACTIVE",
            "address": "10.10.1.102",
            "protocol_port": 80,
            "id": "9bdc780f-2553-40cf-82bf-aa43ab94ce8a",
            "operating_status": "ONLINE"
          }
        ],
        "id": "17c63199-b63e-4f22-83a3-baea98345923",
        "operating_status": "ONLINE"
      }
    ],
    "id": "041b24a3-55e1-4880-b04f-cf22ab057b30",
    "operating_status": "ONLINE"
  }
}

Actual results:
The operating_status for member 10.10.1.103 is still ONLINE, even though the backing instance has been deleted.

Expected results:
The operating_status for member 10.10.1.103 should be OFFLINE or ERROR once the health monitor's maximum number of failed retries is reached.
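The failure can be detected programmatically from the status tree shown above. The sketch below (not part of the original report; the function name and the trimmed-down JSON are illustrative) parses the output of `neutron lbaas-loadbalancer-status` and lists any pool member whose operating_status is not ONLINE. With a correctly working health monitor, the deleted member 10.10.1.103 would be flagged here; with the bug present, nothing is:

```python
import json

def unhealthy_members(status_json):
    """Return (address, operating_status) pairs for members not reported ONLINE."""
    lb = json.loads(status_json)["loadbalancer"]
    bad = []
    for pool in lb.get("pools", []):
        for member in pool.get("members", []):
            if member.get("operating_status") != "ONLINE":
                bad.append((member["address"], member["operating_status"]))
    return bad

# Trimmed-down version of the status tree from the report above:
status = """
{
  "loadbalancer": {
    "pools": [
      {
        "name": "pool1",
        "members": [
          {"address": "10.10.1.103", "operating_status": "ONLINE"},
          {"address": "10.10.1.102", "operating_status": "ONLINE"}
        ]
      }
    ]
  }
}
"""

# With the bug present, both members still report ONLINE, so nothing is
# flagged even though instance 10.10.1.103 has been deleted.
print(unhealthy_members(status))  # []
```

If the health monitor marked the dead member OFFLINE as expected, the same check would return [("10.10.1.103", "OFFLINE")].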
Additional info:
Similar issues have been reported upstream:
https://bugs.launchpad.net/neutron/+bug/1548774
https://bugs.launchpad.net/octavia/+bug/1607309

--- Additional comment from Jakub Libosvar on 2017-11-13 09:35:13 EST ---

Nir is going to look at this one.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0245