Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1608269

Summary: Uninstalling OLM via openshift-ansible fails
Product: OpenShift Container Platform
Component: Service Broker
Version: 3.11.0
Target Release: 3.11.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Reporter: Jian Zhang <jiazha>
Assignee: Evan Cordell <ecordell>
QA Contact: Jian Zhang <jiazha>
CC: aos-bugs, chezhang, jmatthew, jokerman, mmccomas, vrutkovs, zitang
Type: Bug
Last Closed: 2018-10-11 07:22:06 UTC

Description Jian Zhang 2018-07-25 08:45:07 UTC
Description of problem:
Got the error below when uninstalling OLM via ansible:
"Error from server (NotFound): the server could not find the requested resource (delete catalogsource-v1s.app.coreos.com tectonic-ocs)\n"

Version-Release number of selected component (if applicable):
openshift-ansible master branch
OCP 3.11
oc v3.11.0-0.9.0

How reproducible:
always

Steps to Reproduce:
1. Build an OCP 3.11 cluster with OLM enabled.
2. git clone openshift-ansible and check out the master branch.
3. Enable the variables below in your inventory file (a minimal placement sketch follows), then run "ansible-playbook -i qe-inventory-host-file playbooks/olm/config.yml".

operator_lifecycle_manager_remove=true
operator_lifecycle_manager_install=false
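
For reference, a minimal sketch of where these toggles sit in the inventory (the [OSEv3:vars] group name is an assumption about the inventory layout; only the two variables themselves come from this report):

  [OSEv3:vars]
  # switch the OLM playbook from install to removal
  operator_lifecycle_manager_remove=true
  operator_lifecycle_manager_install=false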


Actual results:
TASK [olm : Remove tectonic-ocs CatalogSource-v1 manifest] ********************************************************************************************************************************************************
Wednesday 25 July 2018  15:52:50 +0800 (0:00:02.068)       0:01:09.365 ******** 
...
fatal: [qe-jiazha-311master-etcd-1.0724-q2x.qe.rhcloud.com]: FAILED! => {"changed": false, "msg": {"cmd": "/usr/bin/oc delete CatalogSource-v1 tectonic-ocs -n operator-lifecycle-manager", "results": {}, "returncode": 1, "stderr": "Error from server (NotFound): the server could not find the requested resource (delete catalogsource-v1s.app.coreos.com tectonic-ocs)\n", "stdout": ""}}

Before the uninstall, the "tectonic-ocs" CatalogSource did exist; I think this removal should be placed before the CRD removal (see the sketch below).
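
To illustrate the intended ordering as plain oc commands (the resource, namespace, and CRD names are taken from the error output above; this is a sketch, not the exact playbook tasks):

  # delete the custom resource while its CRD still exists
  oc delete catalogsource-v1 tectonic-ocs -n operator-lifecycle-manager
  # only afterwards remove the CRD itself
  oc delete crd catalogsource-v1s.app.coreos.com

If the CRD is deleted first, the API server no longer recognizes the catalogsource-v1 kind, so the later delete of the instance fails with NotFound.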

Expected results:
OLM can be uninstalled successfully via ansible.

Additional info:

Comment 1 Jian Zhang 2018-07-25 08:52:08 UTC
I submitted a PR (https://github.com/openshift/openshift-ansible/pull/9334) to fix this issue; comments are welcome.

Comment 3 Vadim Rutkovsky 2018-08-10 07:54:57 UTC
Fix is available in openshift-ansible-3.11.0-0.13.0

Comment 5 Jian Zhang 2018-08-15 07:37:26 UTC
Removing the dependency on bug 1615191, since I can install OLM separately.

Set the variables below in the inventory file:
operator_lifecycle_manager_remove=true
operator_lifecycle_manager_install=false

And then, run:
[jzhang@localhost openshift-ansible]$ ansible-playbook -i qe-inventory-host-file playbooks/olm/config.yml 

The playbook run succeeded, but the namespace was NOT deleted. Verification failed.

[root@qe-jiazha-round3master-etcd-1 ~]# oc get all -n operator-lifecycle-manager
No resources found.
[root@qe-jiazha-round3master-etcd-1 ~]# oc get ns | grep operator
operator-lifecycle-manager          Active    3h
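
Until the playbook cleans this up, the leftover project can be removed by hand (a manual workaround, not the intended fix):

  oc delete project operator-lifecycle-manager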


Verification failed, and I filed a PR (https://github.com/openshift/openshift-ansible/pull/9599) to fix this. Please have a review!

Comment 6 Vadim Rutkovsky 2018-08-15 09:52:06 UTC
Thanks for the PR, merged in master

Comment 7 Vadim Rutkovsky 2018-08-15 12:03:58 UTC
Fix is available in openshift-ansible-3.11.0-0.16.0

Comment 8 Jian Zhang 2018-08-16 03:10:23 UTC
I used the openshift-ansible-3.11.0-0.16.0 branch to test it. LGTM, marking it verified.

Comment 10 errata-xmlrpc 2018-10-11 07:22:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2652