Bug 1480889 - Cannot run the logging uninstaller
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Hardware: x86_64 Linux
Priority: unspecified
Severity: medium
Assigned To: Vikram Goyal
Reported: 2017-08-12 13:06 EDT by Christian Hernandez
Modified: 2018-01-25 17:03 EST

Type: Bug
Description Christian Hernandez 2017-08-12 13:06:15 EDT
Description of problem:

Running the logging uninstall playbook returns an error.

Version-Release number of selected component (if applicable):

[root@master ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.4 (Maipo)

[root@master ~]# oc version
oc v3.
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://master.
openshift v3.
kubernetes v1.6.1+5115d708d7

How reproducible:


Steps to Reproduce:
1. Install Logging
2. Try to debug problems
3. Run the following command

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/openshift_logging.yml \
-e openshift_logging_install_logging=False                                          

Actual results:

The playbook fails with the following error:

PLAY [Populate config host groups] ******************************************************************************************************************************************

TASK [Evaluate groups - g_etcd_hosts required] ******************************************************************************************************************************
fatal: [localhost]: FAILED! => {           
    "changed": false,                      
    "failed": true                         


This playbook requires g_etcd_hosts to be set                                         

        to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/openshift_logging.retry                                              

PLAY RECAP ******************************************************************************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1 

Expected results:

The playbook uninstalls logging.

Additional info:
Comment 1 Jan Wozniak 2017-08-18 05:33:32 EDT
I would like to reproduce this issue, could you please provide your inventory file and tag/branch/version of the openshift-ansible playbooks?
Comment 2 Christian Hernandez 2017-08-18 10:24:18 EDT
Here is my inventory file

Here is the version of openshift-ansible

[root@master ~]# rpm -qi openshift-ansible
Name        : openshift-ansible
Version     :
Release     : 3.git.0.522a92a.el7
Architecture: noarch
Install Date: Fri 11 Aug 2017 09:50:21 PM PDT
Group       : Unspecified
Size        : 63977
License     : ASL 2.0
Signature   : RSA/SHA256, Fri 04 Aug 2017 08:50:18 PM PDT, Key ID 199e2f91fd431d51
Source RPM  : openshift-ansible-
Build Date  : Fri 04 Aug 2017 07:51:07 AM PDT
Build Host  : x86-041.build.eng.bos.redhat.com
Relocations : (not relocatable)
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
Vendor      : Red Hat, Inc.
URL         : https://github.com/openshift/openshift-ansible
Summary     : Openshift and Atomic Enterprise Ansible
Description :
Openshift and Atomic Enterprise Ansible

This repo contains Ansible code and playbooks
for Openshift and Atomic Enterprise.
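
The inventory file itself is not reproduced above. For reference, a minimal BYO-style inventory of the general shape these playbooks expect (all hostnames and values below are hypothetical, not taken from the reporter's environment) looks roughly like:

```ini
; Minimal sketch of an OpenShift 3.6 BYO inventory.
; Hostnames and deployment type are placeholders, not the reporter's actual values.
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
openshift_deployment_type=openshift-enterprise

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com
node1.example.com
```

The byo entry-point playbooks derive internal groups such as g_etcd_hosts from the [etcd], [masters], and [nodes] sections during their initialization phase, which is why calling the shared playbook under playbooks/common directly fails with "This playbook requires g_etcd_hosts to be set".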
Comment 3 Jan Wozniak 2017-08-23 08:34:07 EDT
The playbook you use [1] as an entry point is probably not the one you are supposed to call with your inventory [2]. The task file in 'common' is shared and is invoked by a specific entry-point playbook after certain initialization has happened. In your case, that entry-point playbook should be the one in 'byo' [3], because you are 'bringing your own' infrastructure.

Let me know if you still have an issue.

[1] /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/openshift_logging.yml

[2] https://docs.openshift.com/container-platform/3.6/install_config/aggregate_logging.html

[3] /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml
Comment 4 Jeff Cantrill 2017-08-23 11:36:55 EDT
@Scott do you have any comments regarding #c0 that might help us reproduce?
Comment 5 Scott Dodson 2017-08-23 12:50:21 EDT
Can you please call playbooks/byo/openshift-cluster/openshift_logging.yml instead? You should not be calling any playbook in playbooks/common directly.
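
Per comments 3 and 5, the fix is to call the byo entry-point playbook rather than the shared task file under playbooks/common. A sketch of the corrected invocation (the inventory path /etc/ansible/hosts is an assumption; pass your own with -i):

```shell
# Corrected uninstall invocation: use the 'byo' entry-point playbook,
# not the task file under playbooks/common (per comment 5).
# NOTE: the inventory path is an assumption; substitute your own.
ansible-playbook -i /etc/ansible/hosts \
  /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift_logging.yml \
  -e openshift_logging_install_logging=False
```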
Comment 6 Christian Hernandez 2017-08-23 12:57:23 EDT
I'll run it as asked here...but the docs state otherwise

Comment 7 Jeff Cantrill 2017-08-25 15:39:56 EDT
If you find that Scott's comment resolves this issue, please update this issue to be a docs bug.
Comment 8 Christian Hernandez 2017-08-25 16:10:34 EDT
I just tested this. The `byo` playbook worked.
Comment 9 Jeff Cantrill 2017-08-25 16:34:46 EDT
Moving this to a docs bug to correct the documentation.
