Bug 1497408 - Failed to undeploy HOSA
Summary: Failed to undeploy HOSA
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 3.9.0
Assignee: John Sanda
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks: 1548567
 
Reported: 2017-09-30 08:10 UTC by Junqi Zhao
Modified: 2018-03-28 14:07 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1548567
Environment:
Last Closed: 2018-03-28 14:07:14 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
HOSA pods info (4.54 KB, text/plain) - 2017-09-30 08:10 UTC, Junqi Zhao
Undeploy HOSA ansible log (736.69 KB, text/plain) - 2017-09-30 08:11 UTC, Junqi Zhao


Links
Red Hat Product Errata RHBA-2018:0489 - last updated 2018-03-28 14:07:53 UTC

Description Junqi Zhao 2017-09-30 08:10:39 UTC
Created attachment 1332610 [details]
HOSA pods info

Description of problem:
it fails to undeploy HOSA only: the hawkular-openshift-agent pods stay stuck in ContainerCreating status. Describing the pods shows this error:
	MountVolume.SetUp failed for volume "hawkular-openshift-agent-token-f60xz" : secrets "hawkular-openshift-agent-token-f60xz" not found
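
For reference, the pod events can be pulled up like this (pod name is a placeholder; project assumed to be default):
# oc get pods | grep hawkular-openshift-agent
# oc describe pod <hawkular-openshift-agent-pod>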

Checked: the secret and sa were deleted, but the configmap, daemonset and clusterrole were not.
# oc get secret | grep hawkular-openshift-agent
# oc get template  | grep agent
No resources found.
# oc get configmap | grep hawkular-openshift-agent
hawkular-openshift-agent-configuration   2         18m
# oc get daemonset | grep hawkular-openshift-agent
hawkular-openshift-agent   2         2         0         2            0           <none>          18m
# oc get clusterrole | grep hawkular-openshift-agent
hawkular-openshift-agent
# oc get sa | grep hawkular-openshift-agent
# oc get secret | grep hawkular-openshift-agent
# oc get template  | grep hawkular-openshift-agent
No resources found.
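
For convenience, the same checks can be run in one pass (a minimal bash sketch of the greps above):
# for r in secret template configmap daemonset clusterrole sa; do echo "== $r =="; oc get $r | grep hawkular-openshift-agent; done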



Version-Release number of the following components:
# rpm -qa | grep openshift-ansible
openshift-ansible-docs-3.7.0-0.134.0.git.0.6f43fc3.el7.noarch
openshift-ansible-filter-plugins-3.7.0-0.134.0.git.0.6f43fc3.el7.noarch
openshift-ansible-3.7.0-0.134.0.git.0.6f43fc3.el7.noarch
openshift-ansible-lookup-plugins-3.7.0-0.134.0.git.0.6f43fc3.el7.noarch
openshift-ansible-roles-3.7.0-0.134.0.git.0.6f43fc3.el7.noarch
openshift-ansible-callback-plugins-3.7.0-0.134.0.git.0.6f43fc3.el7.noarch
openshift-ansible-playbooks-3.7.0-0.134.0.git.0.6f43fc3.el7.noarch

HOSA image:
metrics-hawkular-openshift-agent:v3.7.0-0.135.0.0


How reproducible:
Always

Steps to Reproduce:
1. Deploy HOSA along with Metrics
2. After all the pods are ready, undeploy HOSA only (see the inventory under Additional info; the undeploy run is sketched after it)

Actual results:
Failed to undeploy HOSA

Expected results:
HOSA should be undeployed successfully

Additional info:
#Inventory file
[OSEv3:children]
masters
etcd

[masters]
${MASTER} openshift_public_hostname=${MASTER}

[etcd]
${ETCD} openshift_public_hostname=${ETCD}


[OSEv3:vars]
ansible_ssh_user=root
ansible_ssh_private_key_file="~/libra.pem"
deployment_type=openshift-enterprise
openshift_docker_additional_registries=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888


# Undeploy HOSA
openshift_metrics_install_hawkular_agent=false
openshift_metrics_hawkular_hostname=hawkular-metrics.apps.0930-0ar.qe.rhcloud.com
openshift_metrics_image_prefix=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/
openshift_metrics_image_version=v3.7
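
With this inventory, the undeploy is just a re-run of the metrics playbook with openshift_metrics_install_hawkular_agent=false; roughly (playbook path assumed from the 3.7 RPM layout, inventory path is a placeholder):

# ansible-playbook -i <inventory_file> /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml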

Comment 1 Junqi Zhao 2017-09-30 08:11:50 UTC
Created attachment 1332611 [details]
Undeploy HOSA ansible log

Comment 4 Junqi Zhao 2018-01-25 09:33:56 UTC
Tested: HOSA can be uninstalled successfully now, although the hawkular-openshift-agent configmap is kept after uninstallation.

oc get configmap | grep hawkular-openshift-agent
hawkular-openshift-agent-configuration   2         19m
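
The leftover configmap can be removed by hand if needed (assuming the default project, where HOSA is deployed):
# oc delete configmap hawkular-openshift-agent-configuration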

Env:
metrics-hawkular-openshift-agent/images/v3.9.0-0.24.0.0
# rpm -qa | grep openshift-ansible
openshift-ansible-playbooks-3.9.0-0.23.0.git.0.d53d7ed.el7.noarch
openshift-ansible-roles-3.9.0-0.23.0.git.0.d53d7ed.el7.noarch
openshift-ansible-3.9.0-0.23.0.git.0.d53d7ed.el7.noarch
openshift-ansible-docs-3.9.0-0.23.0.git.0.d53d7ed.el7.noarch

Comment 5 Joel Rosental R. 2018-01-25 10:09:02 UTC
Would the current workaround for OCP 3.7 be just to remove the ds (oc delete ds hawkular-openshift-agent)?

Comment 6 Junqi Zhao 2018-01-26 00:35:11 UTC
oh, for OCP 3.7, you can use the following commands to delete HOSA:
oc project default
oc delete configmap hawkular-openshift-agent-configuration
oc delete daemonset hawkular-openshift-agent
oc delete clusterrole hawkular-openshift-agent
oc delete sa hawkular-openshift-agent
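
Afterwards, the same greps from the description should come back empty (a quick sanity check):
# oc get configmap,daemonset,sa | grep hawkular-openshift-agent
# oc get clusterrole | grep hawkular-openshift-agent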

Comment 7 Junqi Zhao 2018-01-26 00:37:14 UTC
Please change it to ON_QA; it is fixed in 3.9, see Comment 4.

Comment 8 John Sanda 2018-02-23 21:42:58 UTC
Because it is not completely fixed for 3.9 and because the changes cover both 3.7 and 3.9, I am going to change the target release of this to 3.9. I will also clone this BZ to create a 3.7 BZ.

Comment 9 John Sanda 2018-02-23 21:43:25 UTC
PR submitted - https://github.com/openshift/openshift-ansible/pull/7276

Comment 11 Junqi Zhao 2018-02-27 03:24:13 UTC
Tested with openshift-ansible-3.9.0-0.53.0, HOSA can be undeployed successfully now.

# rpm -qa | grep openshift-ansible
openshift-ansible-roles-3.9.0-0.53.0.git.0.f8f01ef.el7.noarch
openshift-ansible-playbooks-3.9.0-0.53.0.git.0.f8f01ef.el7.noarch
openshift-ansible-3.9.0-0.53.0.git.0.f8f01ef.el7.noarch
openshift-ansible-docs-3.9.0-0.53.0.git.0.f8f01ef.el7.noarch

Comment 14 errata-xmlrpc 2018-03-28 14:07:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489

