Bug 1910104 - [oVirt] Node is not removed when VM has been removed from oVirt engine
Summary: [oVirt] Node is not removed when VM has been removed from oVirt engine
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.6.z
Assignee: Gal Zaidman
QA Contact: michal
URL:
Whiteboard:
Depends On: 1898487
Blocks:
 
Reported: 2020-12-22 17:00 UTC by OpenShift BugZilla Robot
Modified: 2021-01-18 18:00 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-18 18:00:14 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-api-provider-ovirt pull 79 0 None closed [release-4.6] Bug 1910104: Node is not removed when VM has been removed from oVirt engine 2021-01-12 22:21:35 UTC
Red Hat Product Errata RHSA-2021:0037 0 None None None 2021-01-18 18:00:38 UTC

Comment 2 michal 2021-01-04 19:28:12 UTC
Verified on:
OCP: 4.6.0-0.nightly-2021-01-03-162024
RHV: 4.4.4.3-0.5

Steps:
1) On the command line, run 'oc get nodes' and verify that all the VMs are listed
2) Open the RHV UI
3) In the 'Virtual Machines' screen, choose any worker virtual machine and click 'Shutdown'
4) Remove the virtual machine
5) Return to the command line and run 'oc get nodes' again; verify that the node was deleted
6) Run 'oc get machines' and verify that the corresponding machine moved to 'Failed' and is deleted after a while (see the sketch after these steps)
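
For reference, the command-line part of these steps could look roughly like the following (a minimal sketch; it assumes the Machine objects live in the openshift-machine-api namespace, as in a default installation):

oc get nodes                                      # all worker nodes should be listed and Ready
oc get machines -n openshift-machine-api          # all machines should be in the 'Running' phase
# ... shut down and remove the worker VM in the RHV UI ...
oc get nodes                                      # the node backed by the removed VM should disappear
oc get machines -n openshift-machine-api          # its machine should move to 'Failed' and later be deleted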


Result:
The deletion of the VM from RHV was reflected in both the nodes and machines lists.

Performing these steps again leads to a different bug, Bug 1912567:
1) Open the RHV UI
2) In the 'Virtual Machines' screen, choose any worker virtual machine and click 'Shutdown'
3) Remove the virtual machine
4) Run 'oc get nodes' and verify that the node was deleted
5) Run 'oc get machines' and verify that the corresponding machine moved to 'Failed' (see the watch sketch after these steps)
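
To follow the transition while repeating the steps, the resources can be watched with the standard --watch flag (a sketch; again assuming the openshift-machine-api namespace):

oc get nodes -w                                   # watch for the node being deleted
oc get machines -n openshift-machine-api -w       # watch for the machine phase changing to 'Failed'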

Actual:
The node moved to 'NotReady' status and the machine status did not change.

[root@mgold-ocp-engine primary]# oc get machines
NAME                           PHASE     TYPE   REGION   ZONE   AGE
ovirt10-7c7kw-master-0         Running                          4h1m
ovirt10-7c7kw-master-1         Running                          4h1m
ovirt10-7c7kw-master-2         Running                          4h1m
ovirt10-7c7kw-worker-0-9t49p   Failed                           14m
ovirt10-7c7kw-worker-0-svn7p   Running                          104m
[root@mgold-ocp-engine primary]# oc get nodes
NAME                           STATUS     ROLES    AGE     VERSION
ovirt10-7c7kw-master-0         Ready      master   3h57m   v1.19.0+9c69bdc
ovirt10-7c7kw-master-1         Ready      master   3h57m   v1.19.0+9c69bdc
ovirt10-7c7kw-master-2         Ready      master   3h57m   v1.19.0+9c69bdc
ovirt10-7c7kw-worker-0-svn7p   NotReady   worker   96m     v1.19.0+9c69bdc


Expected:
The node is deleted and the relevant machine moves to 'Failed'.
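
To dig into why the machine stays in 'Running' while its node is 'NotReady', something like the following could be used (a sketch; the machine name is taken from the output above, and the machine-api-controllers deployment and machine-controller container names are assumptions based on a default Machine API setup):

oc describe machine ovirt10-7c7kw-worker-0-svn7p -n openshift-machine-api                    # inspect the machine's status and conditions
oc logs deployment/machine-api-controllers -c machine-controller -n openshift-machine-api    # check the provider's reconcile logs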

Comment 5 errata-xmlrpc 2021-01-18 18:00:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.6.12 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0037

