OPS Tools | Availability Monitoring | OS Checks | The current initial deployment of OS checks on each overcloud node is not effective and gives misleading information to the ops tools administrator. The current oschecks check OpenStack as a unit rather than checking each overcloud node: the checks run against the Virtual IP and verify whether a certain API responds, so each check effectively runs against only one controller or compute, namely the one that holds the Virtual IP. Consequently, if some controllers or computes go down, or certain services on them go down (HA scenario), OpenStack as a unit is not affected and oschecks still report an "ok" status.

The problem is that by deploying and running those checks on each overcloud node we give the user misleading information: it looks as if all the checks are executed on EACH overcloud node, checking API status on EACH overcloud node, when in fact the API is checked against only one node. Moreover, if one of the controllers/computes has a problem with its OpenStack services, it is still reported as "ok" in the Availability Monitoring UI (Uchiwa).
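To illustrate the issue, here is a minimal sketch (not the actual oschecks code) of a VIP-based API check: no matter which overcloud node it is deployed on, it only exercises whichever controller currently holds the Virtual IP. The VIP address and port below are placeholder assumptions.

#!/usr/bin/env python3
"""Sketch: an API check that probes the overcloud Virtual IP only proves that
*some* controller behind the VIP answers, not that the local node is healthy."""
import socket
import sys

VIP = "192.0.2.10"        # hypothetical overcloud Virtual IP
KEYSTONE_PORT = 5000      # example: Keystone public API port

def tcp_check(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Running this identical check on every overcloud node still only tests
    # the VIP holder, so a failed service on another node goes unnoticed.
    if tcp_check(VIP, KEYSTONE_PORT):
        print("OK: API reachable via VIP")
        sys.exit(0)
    print("CRITICAL: API not reachable via VIP")
    sys.exit(2)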
Removing the OSP flag, as check configuration is server-side.
Check configuration is performed on the server side and as such it cannot be marked for OSP and cannot be a blocker.
Patch was merged to opstools-ansible.
The current opstools-ansible build includes systemd checks for OpenStack services on each overcloud node.
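For context, a node-local systemd check conceptually looks like the sketch below. This is an illustrative example, not the code merged into opstools-ansible; the unit name is a placeholder, and real deployments pass the relevant service units per node role.

#!/usr/bin/env python3
"""Sketch: ask systemd on the node where the check runs whether a given
OpenStack service unit is active, instead of probing an API via the VIP."""
import subprocess
import sys

def unit_is_active(unit):
    """Return True if `systemctl is-active <unit>` reports the unit as active."""
    result = subprocess.run(
        ["systemctl", "is-active", "--quiet", unit],
        check=False,
    )
    return result.returncode == 0

if __name__ == "__main__":
    # Example unit; the real checks are configured per service and node role.
    unit = sys.argv[1] if len(sys.argv) > 1 else "openstack-nova-compute.service"
    if unit_is_active(unit):
        print(f"OK: {unit} is active on this node")
        sys.exit(0)
    print(f"CRITICAL: {unit} is not active on this node")
    sys.exit(2)  # Nagios/Sensu-style critical exit code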