Bug 1755009 - kubemacpool not removed when deleting CNV
Summary: kubemacpool not removed when deleting CNV
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Networking
Version: 2.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 2.2.0
Assignee: Sebastian Scheinkman
QA Contact: Yan Du
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-09-24 14:24 UTC by Irina Gulina
Modified: 2020-01-30 16:27 UTC
CC: 8 users

Fixed In Version: kubemacpool-container-v2.2.0-2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-30 16:27:13 UTC
Target Upstream Version:
Embargoed:


Attachments
bz reproducing steps (17.29 KB, text/plain) - 2019-09-24 14:24 UTC, Irina Gulina
reproduced again (10.20 KB, text/plain) - 2019-10-02 13:00 UTC, Irina Gulina


Links
Red Hat Product Errata RHEA-2020:0307 - Last Updated 2020-01-30 16:27:23 UTC

Description Irina Gulina 2019-09-24 14:24:22 UTC
Created attachment 1618620 [details]
bz reproducing steps

Description of problem:
kubemacpool not removed when deleting CNV

Version-Release number of selected component (if applicable):
kubemacpool-mac-controller-manager-6f7c4d499b-2xlmj  brew/kubemacpool:v2.1.0-7
kubemacpool-mac-controller-manager-6f7c4d499b-jkrlv   brew/kubemacpool:v2.1.0-7

How reproducible:
always

Steps to Reproduce:
1. Install CNV
2. Delete CNV
3. Check the remaining resources in the openshift-cnv namespace

Actual results:
kubemacpool is the only remaining resource in the openshift-cnv namespace, and consequently the namespace can't be removed.
See the attached logs.

Expected results:
kubemacpool removed

Additional info:

Comment 1 Meni Yakove 2019-10-02 08:12:44 UTC
How did you delete CNV?

I undeployed CNV via
curl -k https://pkgs.devel.redhat.com/cgit/containers/hco-bundle-registry/plain/cleanup-ds.sh?h=cnv-2.1-rhel-8 | bash -x
and there were no MAC pool leftovers.

Comment 2 Irina Gulina 2019-10-02 13:00:31 UTC
Created attachment 1621823 [details]
reproduced again

Comment 3 Nelly Credi 2019-10-06 13:50:51 UTC
@Sebastian, can you please check?

Comment 4 Sebastian Scheinkman 2019-10-07 09:11:56 UTC
I will take a look at this issue.

Comment 5 Sebastian Scheinkman 2019-10-07 13:56:08 UTC
I found the issue.

Previously, kubemacpool was deployed to its own namespace, so the network addons operator removed the whole namespace (including the service).
Now we deploy kubemacpool in the same namespace as the network addons operator to support OLM upgrades.


I created a PR to fix this issue:

https://github.com/K8sNetworkPlumbingWG/kubemacpool/pull/71

With this PR we add an owner reference to the service object, so when the kubemacpool manager deployment is removed, the service is also removed.
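
For reference, a minimal sketch of how such an owner reference could be set on the Service using the Kubernetes Go API types. This is not the actual kubemacpool code from PR #71; the function name setServiceOwner and the wiring around it are illustrative assumptions. Owner references only take effect when the owner and the dependent are in the same namespace, which holds here since the Service is deployed next to the manager Deployment.

package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setServiceOwner (illustrative helper) marks the webhook Service as owned by
// the kubemacpool manager Deployment. When the Deployment is deleted,
// Kubernetes garbage collection deletes the Service too, so nothing is left
// behind to block removal of the namespace.
func setServiceOwner(svc *corev1.Service, deploy *appsv1.Deployment) {
	controller := true
	blockOwnerDeletion := true
	svc.OwnerReferences = []metav1.OwnerReference{{
		APIVersion:         "apps/v1",
		Kind:               "Deployment",
		Name:               deploy.Name,
		UID:                deploy.UID,
		Controller:         &controller,
		BlockOwnerDeletion: &blockOwnerDeletion,
	}}
}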

Comment 6 Yan Du 2019-11-14 07:48:39 UTC
Tested on OCP4.2 + CNV2.1; no kubemacpool resources left over after removing CNV.

$ oc get all -n openshift-cnv
No resources found.

QE will re-test once an OCP4.3 + CNV2.2 build is available.

Comment 7 Yan Du 2019-11-14 07:50:12 UTC
Please ignore comment 6 above; fixing the typo:

Tested on OCP4.2 + CNV2.2; no kubemacpool resources left over after removing CNV.

$ oc get all -n openshift-cnv
No resources found.

QE will re-test once an OCP4.3 + CNV2.2 build is available.

Comment 8 Meni Yakove 2019-11-14 14:10:19 UTC
Yan, if you tested on a kubemacpool version with the fix, you can set this bug as verified.

Comment 9 Yan Du 2019-11-15 02:44:50 UTC
Tested with image container-native-virtualization-kubemacpool:v2.2.0-3

Comment 11 errata-xmlrpc 2020-01-30 16:27:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0307

