Bug 1755009

Summary: kubemacpool not removed when deleting CNV
Product: Container Native Virtualization (CNV)
Reporter: Irina Gulina <igulina>
Component: Networking
Assignee: Sebastian Scheinkman <sscheink>
Status: CLOSED ERRATA
QA Contact: Yan Du <yadu>
Severity: medium
Docs Contact:
Priority: medium
Version: 2.1.0
CC: cnv-qe-bugs, danken, myakove, ncredi, phoracek, sgordon, sscheink, yadu
Target Milestone: ---
Target Release: 2.2.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: kubemacpool-container-v2.2.0-2
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-01-30 16:27:13 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  bz reproducing steps (Flags: none)
  reproduced again (Flags: none)

Description Irina Gulina 2019-09-24 14:24:22 UTC
Created attachment 1618620 [details]
bz reproducing steps

Description of problem:
kubemacpool not removed when deleting CNV

Version-Release number of selected component (if applicable):
kubemacpool-mac-controller-manager-6f7c4d499b-2xlmj  brew/kubemacpool:v2.1.0-7
kubemacpool-mac-controller-manager-6f7c4d499b-jkrlv   brew/kubemacpool:v2.1.0-7

How reproducible:
always

Steps to Reproduce:
1. install CNV
2. delete CNV
3. check resources (a quick way to check is sketched below)
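
A minimal sketch of step 3, assuming the default openshift-cnv install namespace (resource names can vary per build):

$ oc get all -n openshift-cnv        # anything left here blocks namespace removal
$ oc get namespace openshift-cnv -o jsonpath='{.status.phase}'   # stays "Terminating" while leftovers exist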

Actual results:
kubemacpool is the only remaining resource in the openshift-cnv namespace, and consequently the namespace can't be removed.
See the logs attached.

Expected results:
kubemacpool removed

Additional info:

Comment 1 Meni Yakove 2019-10-02 08:12:44 UTC
How did you delete CNV?

I undeployed CNV via
curl -k https://pkgs.devel.redhat.com/cgit/containers/hco-bundle-registry/plain/cleanup-ds.sh?h=cnv-2.1-rhel-8 | bash -x
and there were no MAC pool leftovers.

Comment 2 Irina Gulina 2019-10-02 13:00:31 UTC
Created attachment 1621823 [details]
reproduced again

Comment 3 Nelly Credi 2019-10-06 13:50:51 UTC
@Sebastian, can you please check?

Comment 4 Sebastian Scheinkman 2019-10-07 09:11:56 UTC
I will take a look at this issue.

Comment 5 Sebastian Scheinkman 2019-10-07 13:56:08 UTC
I found the issue.

kubemacpool used to be deployed to its own namespace, so the network addons operator removed the whole namespace (including the service).
Now we deploy kubemacpool in the same namespace as the network addons operator to support OLM upgrades.


I created a PR to fix that issue:

https://github.com/K8sNetworkPlumbingWG/kubemacpool/pull/71

With this PR we add an owner reference to the service object, so when the kubemacpool manager deployment is removed, the service is also removed.
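
For reference, a quick way to confirm the new behavior, assuming the service is named kubemacpool-service and lives in openshift-cnv (actual names may differ per build). With the fix, the service should carry an ownerReference pointing at the manager deployment and be garbage-collected together with it:

$ oc get service kubemacpool-service -n openshift-cnv -o jsonpath='{.metadata.ownerReferences[*].kind}{"\n"}'
# expected (with the fix): Deployment
$ oc delete deployment kubemacpool-mac-controller-manager -n openshift-cnv
$ oc get service kubemacpool-service -n openshift-cnv
# expected: NotFound once garbage collection removes the owned service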

Comment 6 Yan Du 2019-11-14 07:48:39 UTC
Tested on OCP4.2 + CNV2.1, no kubemacpool resources left over after removing CNV

$ oc get all -n openshift-cnv
No resources found.

QE will re-test once an OCP4.3 + CNV2.2 build is available.

Comment 7 Yan Du 2019-11-14 07:50:12 UTC
Please ignore comment 6 above; fixing the typo:

Tested on OCP4.2 + CNV2.2, no kubemacpool resources left over after removing CNV

$ oc get all -n openshift-cnv
No resources found.

QE will re-test once an OCP4.3 + CNV2.2 build is available.

Comment 8 Meni Yakove 2019-11-14 14:10:19 UTC
Yan, if you tested on a kubemacpool version with the fix, you can set this bug as verified.

Comment 9 Yan Du 2019-11-15 02:44:50 UTC
Tested with image container-native-virtualization-kubemacpool:v2.2.0-3.

Comment 11 errata-xmlrpc 2020-01-30 16:27:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0307