Bug 1480889 - Cannot run the logging uninstaller
Summary: Cannot run the logging uninstaller
Keywords:
Status: CLOSED EOL
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Version: 3.6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Vikram Goyal
QA Contact: Vikram Goyal
Docs Contact: Vikram Goyal
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-08-12 17:06 UTC by Christian Hernandez
Modified: 2020-06-30 14:55 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-30 14:55:29 UTC
Target Upstream Version:
Embargoed:



Description Christian Hernandez 2017-08-12 17:06:15 UTC
Description of problem:

Running the logging uninstall playbook returns an error.

Version-Release number of selected component (if applicable):

[root@master ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.4 (Maipo)

[root@master ~]# oc version
oc v3.6.173.0.5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://master.172.16.1.10.nip.io:8443
openshift v3.6.173.0.5
kubernetes v1.6.1+5115d708d7


How reproducible:

Always

Steps to Reproduce:
1. Install Logging
2. Try to debug problems
3. Run the following command

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/openshift_logging.yml \
-e openshift_logging_install_logging=False                                          


Actual results:

Displays an error:

```
PLAY [Populate config host groups] ******************************************************************************************************************************************

TASK [Evaluate groups - g_etcd_hosts required] ******************************************************************************************************************************
fatal: [localhost]: FAILED! => {           
    "changed": false,                      
    "failed": true                         
}                                          

MSG:                                       

This playbook requires g_etcd_hosts to be set                                         

        to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/openshift_logging.retry                                              

PLAY RECAP ******************************************************************************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1 
```

Expected results:

It uninstalls logging.

Additional info:

Comment 1 Jan Wozniak 2017-08-18 09:33:32 UTC
I would like to reproduce this issue. Could you please provide your inventory file and the tag/branch/version of the openshift-ansible playbooks?

Comment 2 Christian Hernandez 2017-08-18 14:24:18 UTC
Here is my inventory file:
https://paste.fedoraproject.org/paste/m0VpjGnqXl4ABh0MAkA46g/raw

Here is the version of openshift-ansible:

```
[root@master ~]# rpm -qi openshift-ansible
Name        : openshift-ansible
Version     : 3.6.173.0.5
Release     : 3.git.0.522a92a.el7
Architecture: noarch
Install Date: Fri 11 Aug 2017 09:50:21 PM PDT
Group       : Unspecified
Size        : 63977
License     : ASL 2.0
Signature   : RSA/SHA256, Fri 04 Aug 2017 08:50:18 PM PDT, Key ID 199e2f91fd431d51
Source RPM  : openshift-ansible-3.6.173.0.5-3.git.0.522a92a.el7.src.rpm
Build Date  : Fri 04 Aug 2017 07:51:07 AM PDT
Build Host  : x86-041.build.eng.bos.redhat.com
Relocations : (not relocatable)
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
Vendor      : Red Hat, Inc.
URL         : https://github.com/openshift/openshift-ansible
Summary     : Openshift and Atomic Enterprise Ansible
Description :
Openshift and Atomic Enterprise Ansible

This repo contains Ansible code and playbooks
for Openshift and Atomic Enterprise.
```

Comment 3 Jan Wozniak 2017-08-23 12:34:07 UTC
The playbook you are using [1] as an entry point is probably not the one you are supposed to call with your inventory [2]. The playbook in 'common' is shared and is only called after certain initialization has been done by a specific entry-point playbook. In your case, that entry point should be the one in 'byo' [3], because you are 'bringing your own' infrastructure.

Let me know if you still have an issue.



[1] /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/openshift_logging.yml

[2] https://docs.openshift.com/container-platform/3.6/install_config/aggregate_logging.html

[3] /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml

Comment 4 Jeff Cantrill 2017-08-23 15:36:55 UTC
@Scott, do you have any comments regarding #c0 that might help us reproduce?

Comment 5 Scott Dodson 2017-08-23 16:50:21 UTC
Can you please call playbooks/byo/openshift-cluster/openshift_logging.yml instead? You should not be calling any playbook in playbooks/common directly.
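
For reference, a minimal sketch of the corrected invocation, assuming an inventory at /etc/ansible/hosts (that path is a placeholder; substitute your own inventory file):

```
# Sketch only: run the byo entry point instead of the playbook under playbooks/common.
# /etc/ansible/hosts is a placeholder inventory path.
ansible-playbook -i /etc/ansible/hosts \
  /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift_logging.yml \
  -e openshift_logging_install_logging=False
```

(Comment 3 lists the same playbook with a hyphen, openshift-logging.yml; use whichever filename exists under playbooks/byo/openshift-cluster/ in your installed openshift-ansible.)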

Comment 6 Christian Hernandez 2017-08-23 16:57:23 UTC
I'll run it as asked here, but the docs state otherwise:

https://docs.openshift.com/container-platform/3.6/install_config/aggregate_logging.html#aggregate-logging-cleanup

Comment 7 Jeff Cantrill 2017-08-25 19:39:56 UTC
If you find that Scott's comment resolves this issue, please update this issue to be a docs bug.

Comment 8 Christian Hernandez 2017-08-25 20:10:34 UTC
I just tested this. The `byo` playbook worked.

Comment 9 Jeff Cantrill 2017-08-25 20:34:46 UTC
Moving this to a docs bug to correct the documentation.

