Bug 1659797 - Upgrading kubevirt to v0.12.0-alpha.1 leaves old, non-functional components
Summary: Upgrading kubevirt to v0.12.0-alpha.1 leaves old, non-functional components
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Installation
Version: 1.3
Hardware: Unspecified
OS: Unspecified
Severity: medium
Priority: medium
Target Milestone: ---
Target Release: 2.0
Assignee: Marc Sluiter
QA Contact: Irina Gulina
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-16 14:41 UTC by Yossi Segev
Modified: 2019-07-24 20:15 UTC (History)
8 users

Fixed In Version: kubevirt-0.17.0-g690215d4f.15.g8f65322.690215d.el7 kubevirt-0.17.0-g690215d4f.19.gb03d7c7.690215d.el8 virt-operator-container-v2.0.0-29 virt-api-container-v2.0.0-29 virt-controll-container-v2.0.0-29 virt-handler-container-v2.0.0-29 virt-launcher-container-v2.0.0-29
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-07-24 20:15:50 UTC
Target Upstream Version:


Attachments (Terms of Use)
verification logs (4.18 KB, text/plain)
2019-06-17 04:23 UTC, Irina Gulina


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2019:1850 None None None 2019-07-24 20:15:59 UTC

Description Yossi Segev 2018-12-16 14:41:47 UTC
Description of problem:
Upgrading kubevirt to v0.12.0-alpha.1 leaves old components behind, which must be removed manually.
A fresh installation of kubevirt is not affected; the problem occurs only on upgrade.


Version-Release number of selected component (if applicable):
v0.12.0-alpha.1


How reproducible:


Steps to Reproduce:
1. From a machine that has kubevirt installed, log in to an installed OpenShift cluster with the OpenShift client ("oc login").
2. Remove the old installation of kubevirt:
# oc delete -f https://github.com/kubevirt/kubevirt/releases/download/v0.9.3/kubevirt.yaml
3. Install kubevirt v0.12.0-alpha.1:
# oc apply -f https://github.com/kubevirt/kubevirt/releases/download/v0.12.0-alpha.1/kubevirt.yaml
4. View the pods to verify that all of them started successfully:
# oc get pods -n kubevirt


Actual results:
virt-api-* pods are down.
# oc get pods -n kubevirt
NAME                               READY     STATUS             RESTARTS   AGE
virt-api-5c9b678cb5-8cmd9          0/1       CrashLoopBackOff   27         1h
virt-api-5c9b678cb5-cccxn          0/1       CrashLoopBackOff   26         1h
virt-controller-5bf4456857-hs6js   1/1       Running            0          4h
virt-controller-5bf4456857-lp97q   1/1       Running            0          4h
virt-handler-mhm7h                 1/1       Running            0          4h
virt-handler-tb59d                 1/1       Running            0          4h


Expected results:
All pods are supposed to be in "Running" status.


Additional info:
Some resources (webhooks and apiservices) remain from the previous installation.
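The leftover resources can be listed before removing them; a minimal sketch, using the resource names observed in this report:

```shell
# List resources left behind by the previous installation; a non-zero
# exit status for a command means that resource is already gone.
oc get apiservices v1alpha2.subresources.kubevirt.io
oc get validatingwebhookconfigurations virt-api-validator
oc get mutatingwebhookconfigurations virt-api-mutator
```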

Workaround:
1. Manually delete these resources:
# oc delete apiservices v1alpha2.subresources.kubevirt.io
# oc delete validatingwebhookconfigurations virt-api-validator
# oc delete mutatingwebhookconfigurations virt-api-mutator

2. Manually delete the virt-api-* pods:
# oc delete pods virt-api-5c9b678cb5-8cmd9 virt-api-5c9b678cb5-cccxn
New pods are created.

3. Verify the new pods are in valid Running status:
# oc get pods -n kubevirt
NAME                               READY     STATUS    RESTARTS   AGE
virt-api-5c9b678cb5-mn79q          1/1       Running   0          1h
virt-api-5c9b678cb5-rt5v6          1/1       Running   0          1h
virt-controller-5bf4456857-hs6js   1/1       Running   0          4h
virt-controller-5bf4456857-lp97q   1/1       Running   0          4h
virt-handler-mhm7h                 1/1       Running   0          4h
virt-handler-tb59d                 1/1       Running   0          4h

Comment 1 Nelly Credi 2018-12-16 14:44:19 UTC
oops, wrong bug
removing target release

Comment 3 Fabian Deutsch 2018-12-18 15:50:46 UTC
My 2ct: let's use a kbase article to cover the removal of left-over pods.
This is specific to the 1.3 to 1.4 upgrade; later on this should be covered by operators, and then it would really be a bug.

Comment 4 Stephen Gordon 2019-01-14 14:18:23 UTC
(In reply to Fabian Deutsch from comment #3)
> My 2ct: Let's use a kbase to cover the removal of left over pods.
> It's specific for 1.3 to 1.4, later on this should be covered by operators
> and it would really be a bug.

Agreed; as I recall, we did not commit to delivering automatic upgrades for 1.4.

Comment 5 Nelly Credi 2019-01-15 13:34:56 UTC
this should be handled by the kubevirt operator
please put fixed in version once we have a new kubevirt-ansible build

Comment 6 Fabian Deutsch 2019-01-15 13:45:45 UTC
With the kubevirt operator in 1.4, upgrades will still not work.

However, the operator should be able to remove all artifacts of a kubevirt deployment.

(Per my previous off-list remark, I am not sure whether this is the case in 1.4.)

Comment 9 Nelly Credi 2019-05-02 17:00:55 UTC
where do we stand with this bug?

Comment 10 Fabian Deutsch 2019-05-02 19:00:00 UTC
Assuming that we still test the virt operator independently.

Marc, please provide the steps to remove KubeVirt from a cluster using the operator.

Comment 11 Marc Sluiter 2019-05-06 08:27:29 UTC
Speaking about KubeVirt alone:

- Delete the KubeVirt CR; the operator will then remove kubevirt.
- After that you can delete the operator itself (oc delete -f kubevirt-operator.yaml).

Not sure if it already works the same way when using HCO.
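The two removal steps above can be sketched as a shell sequence; the CR name and namespace ("kubevirt") and the operator manifest filename are common upstream defaults, not confirmed by this report, and may differ per deployment:

```shell
# Step 1: delete the KubeVirt CR; virt-operator then tears down the
# components it manages (virt-api, virt-controller, virt-handler pods).
oc delete kubevirt kubevirt -n kubevirt

# Step 2: once the managed pods are gone, delete the operator itself.
oc delete -f kubevirt-operator.yaml
```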

Comment 12 Fabian Deutsch 2019-05-10 12:12:08 UTC
Moving this to ON_QA, as HCO should be taking care of this cleanup these days.

Comment 14 Irina Gulina 2019-05-23 11:57:24 UTC
The HCO operator/kubevirt doesn't clean up the namespace (yet). See: https://bugzilla.redhat.com/show_bug.cgi?id=1712429
Moving back to ASSIGNED.

Comment 15 Fabian Deutsch 2019-05-24 08:50:06 UTC
The steps to verify this bug:

1. Deploy virt-operator (using CSV)
2. Deploy KubeVirt CR
3. Wait for KubeVirt to be ready
4. Remove KubeVirt CR

All kubevirt pods, except the virt-operator pod, should be gone.

5. Remove KubeVirt CSV

All kubevirt pods, including virt-operator, should be gone.

The namespace issue is a different bug, and uninstalling kubevirt by deleting its namespace is _not_ the supported way to uninstall kubevirt at the moment.

Please re-verify with the steps above.
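The verification steps above can be sketched as follows; the CR and namespace names, the manifest filename, and the CSV placeholder are assumptions based on common upstream defaults, not values confirmed in this report:

```shell
# Steps 1-2: with virt-operator deployed via its CSV (OLM), create the
# KubeVirt CR (kubevirt-cr.yaml is an illustrative filename).
oc apply -f kubevirt-cr.yaml

# Step 3: wait for the KubeVirt CR to report readiness.
oc wait kubevirt kubevirt -n kubevirt --for=condition=Available --timeout=10m

# Step 4: remove the KubeVirt CR; only virt-operator pods should remain.
oc delete kubevirt kubevirt -n kubevirt
oc get pods -n kubevirt

# Step 5: remove the KubeVirt CSV; no kubevirt pods should remain.
oc delete csv <kubevirt-csv-name> -n kubevirt
oc get pods -n kubevirt
```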

Comment 16 Irina Gulina 2019-06-17 04:23:33 UTC
Created attachment 1581276 [details]
verification logs

Comment 17 Irina Gulina 2019-06-17 04:26:05 UTC
All pods are gone (see the attachment); the project is not terminated, as tracked in BZ 1712429.

Comment 19 errata-xmlrpc 2019-07-24 20:15:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1850

