Description of problem:
Running the logging uninstall playbook returns an error.

Version-Release number of selected component (if applicable):
[root@master ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
[root@master ~]# oc version
oc v3.6.173.0.5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://master.172.16.1.10.nip.io:8443
openshift v3.6.173.0.5
kubernetes v1.6.1+5115d708d7

How reproducible:
Always

Steps to Reproduce:
1. Install logging.
2. Try to debug problems.
3. Run the following command:
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/openshift_logging.yml \
  -e openshift_logging_install_logging=False

Actual results:
Displays the following error:
```
PLAY [Populate config host groups] ******************************************************

TASK [Evaluate groups - g_etcd_hosts required] ******************************************
fatal: [localhost]: FAILED! => {
    "changed": false,
    "failed": true
}

MSG:

This playbook requires g_etcd_hosts to be set

	to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/openshift_logging.retry

PLAY RECAP ******************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1
```

Expected results:
Logging is uninstalled.

Additional info:
I would like to reproduce this issue, could you please provide your inventory file and tag/branch/version of the openshift-ansible playbooks?
Here is my inventory file: https://paste.fedoraproject.org/paste/m0VpjGnqXl4ABh0MAkA46g/raw

Here is the version of openshift-ansible:
```
[root@master ~]# rpm -qi openshift-ansible
Name        : openshift-ansible
Version     : 3.6.173.0.5
Release     : 3.git.0.522a92a.el7
Architecture: noarch
Install Date: Fri 11 Aug 2017 09:50:21 PM PDT
Group       : Unspecified
Size        : 63977
License     : ASL 2.0
Signature   : RSA/SHA256, Fri 04 Aug 2017 08:50:18 PM PDT, Key ID 199e2f91fd431d51
Source RPM  : openshift-ansible-3.6.173.0.5-3.git.0.522a92a.el7.src.rpm
Build Date  : Fri 04 Aug 2017 07:51:07 AM PDT
Build Host  : x86-041.build.eng.bos.redhat.com
Relocations : (not relocatable)
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
Vendor      : Red Hat, Inc.
URL         : https://github.com/openshift/openshift-ansible
Summary     : Openshift and Atomic Enterprise Ansible
Description :
Openshift and Atomic Enterprise Ansible

This repo contains Ansible code and playbooks for Openshift and Atomic Enterprise.
```
The playbook you are using as an entry point [1] is probably not the one you are supposed to be calling with your inventory [2]. The task file in 'common' is shared; it is meant to be called by a specific playbook after certain initialization has happened. In your case, that specific playbook should be in 'byo' [3], because you are 'bringing your own' infrastructure. Let me know if you still have an issue.

[1] /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/openshift_logging.yml
[2] https://docs.openshift.com/container-platform/3.6/install_config/aggregate_logging.html
[3] /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml
@Scott do you have any comments regarding #c0 that might help us reproduce?
Can you please call playbooks/byo/openshift-cluster/openshift_logging.yml instead? You should not be calling any playbook in playbooks/common directly.
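For reference, and assuming the same uninstall variable used in the original report, the corrected invocation would look like this (a sketch; adjust the inventory path to your environment):

```
ansible-playbook -i /path/to/your/inventory \
  /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift_logging.yml \
  -e openshift_logging_install_logging=False
```

The `byo` playbook runs the cluster initialization (host group evaluation, fact gathering) before including the shared tasks in `common`, which is why calling `common` directly fails on `g_etcd_hosts`.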
I'll run it as asked here... but the docs state otherwise: https://docs.openshift.com/container-platform/3.6/install_config/aggregate_logging.html#aggregate-logging-cleanup
If you find that Scott's comment resolves this issue, please update this issue to be a docs bug.
I just tested this. The `byo` playbook worked.
Moving this to a docs bug so the documentation can be corrected.