Bug 1476890 - Running health check playbooks can have unexpected side effects
Status: NEW
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Release: 3.7.0
Assigned To: Luke Meyer
QA Contact: Johnny Liu
Reported: 2017-07-31 13:53 EDT by Luke Meyer
Modified: 2017-09-18 14:39 EDT

Type: Bug
Description Luke Meyer 2017-07-31 13:53:10 EDT
Description of problem:
When running the health checks provided in openshift-ansible, users likely do not expect significant changes to be made to their host systems, but these playbooks may make such changes. This is not intentional; it is a side effect of the current architecture.

Version-Release number of selected component (if applicable):
openshift-ansible 3.6.0

Steps to Reproduce:
1. Deploy OpenShift with Ansible
2. Modify something on the host systems that Ansible manages. For instance:
  a. Add or remove something in INSECURE_REGISTRY in /etc/sysconfig/docker
  b. Disable the yum repo that provides OpenShift packages
3. Run a health check playbook (currently "playbooks/byo/openshift-checks/health.yml" is available).
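The steps above can be sketched as shell commands. This is an illustration only: the registry address example.local:5000 and the repo id rhel-7-server-ose-3.6-rpms are placeholder values, and the Docker config edit is shown against a local copy of the file so the drift is easy to inspect.

```shell
# Step 2a (illustrative): mimic /etc/sysconfig/docker with a local copy
# and drift INSECURE_REGISTRY away from what Ansible last configured.
cat > docker.sysconfig <<'EOF'
OPTIONS='--selinux-enabled --log-driver=journald'
INSECURE_REGISTRY='--insecure-registry registry.access.redhat.com'
EOF

# Append an extra registry flag inside the quoted value.
sed -i "s|^INSECURE_REGISTRY='\(.*\)'|INSECURE_REGISTRY='\1 --insecure-registry example.local:5000'|" docker.sysconfig
grep INSECURE_REGISTRY docker.sysconfig

# Step 2b (illustrative repo id):
#   yum-config-manager --disable rhel-7-server-ose-3.6-rpms

# Step 3: run the health check playbook against your inventory.
#   ansible-playbook -i /etc/ansible/hosts playbooks/byo/openshift-checks/health.yml
```

Running the playbook after either edit is enough to observe the side effect described below.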

Actual results:
Docker is reconfigured and restarted, the yum repo is re-enabled, and so on. This only happens for roles that are dependencies of openshift_health_checker, though (TODO: list these).

Expected results:
Only the expected changes: installing package dependencies of openshift_health_checker, such as python-docker-py and skopeo.

Additional info:
The health checks themselves are not making these changes; it is the roles they depend on for system information (or those roles' own dependencies). In an installation scenario there is not much reason to avoid making changes, but for post-installation playbooks there needs to be a way to gather the information from these roles without performing the configuration tasks.
Comment 1 Luke Meyer 2017-08-15 10:01:37 EDT
The roles in question are docker, os_firewall, and openshift_repos. We are investigating ways to make these take no action under this usage. In fact, something seems to have changed since this issue was discovered: docker no longer reconfigures/restarts before the health checks, so that part may already be solved.
Comment 2 Michael Gugino 2017-08-24 20:47:13 EDT
Some work to refactor the docker role has started: https://github.com/openshift/openshift-ansible/pull/5165

This refactor will remove docker from dependency chains of other roles.
