Verified on: OCP 4.6.0-0.nightly-2021-01-03-162024, RHV 4.4.4.3-0.5

Steps:
1) In the command line, run 'oc get nodes' and verify that all the VMs are listed
2) Open the RHV UI
3) In the 'Virtual Machines' screen, choose any worker virtual machine and click 'Shutdown'
4) Remove the virtual machine
5) Go back to the command line and run 'oc get nodes' again - verify that the node was deleted
6) Run 'oc get machines' - verify that the relevant machine moved to the 'Failed' phase and is deleted after a while

Result: the VM deleted from RHV was reflected in the nodes and machines lists.

If you perform these steps a second time, it leads to a different bug - Bug 1912567:
1) Open the RHV UI
2) In the 'Virtual Machines' screen, choose any worker virtual machine and click 'Shutdown'
3) Remove the virtual machine
4) Run 'oc get nodes' - verify that the node was deleted
5) Run 'oc get machines' - verify that the relevant machine moved to the 'Failed' phase

Actual: the node goes to 'NotReady' status and the machine status does not change.

[root@mgold-ocp-engine primary]# oc get machines
NAME                           PHASE     TYPE   REGION   ZONE   AGE
ovirt10-7c7kw-master-0         Running                          4h1m
ovirt10-7c7kw-master-1         Running                          4h1m
ovirt10-7c7kw-master-2         Running                          4h1m
ovirt10-7c7kw-worker-0-9t49p   Failed                           14m
ovirt10-7c7kw-worker-0-svn7p   Running                          104m

[root@mgold-ocp-engine primary]# oc get nodes
NAME                           STATUS     ROLES    AGE     VERSION
ovirt10-7c7kw-master-0         Ready      master   3h57m   v1.19.0+9c69bdc
ovirt10-7c7kw-master-1         Ready      master   3h57m   v1.19.0+9c69bdc
ovirt10-7c7kw-master-2         Ready      master   3h57m   v1.19.0+9c69bdc
ovirt10-7c7kw-worker-0-svn7p   NotReady   worker   96m     v1.19.0+9c69bdc

Expected: the node is deleted and the relevant machine moves to the 'Failed' phase.
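For reference, a minimal shell sketch of the checks in the steps above. It assumes a kubeconfig with cluster-admin access and the default openshift-machine-api namespace for machine objects (both assumptions, not stated in this report); the machine name is taken from the output above.

# Watch nodes while the worker VM is shut down and removed in the RHV UI;
# the corresponding node is expected to disappear from this list.
oc get nodes -w

# Watch machine phases; the machine backing the deleted VM is expected
# to move to the 'Failed' phase and eventually be deleted.
oc get machines -n openshift-machine-api -w

# One-off check of a single machine's phase (name taken from the output above):
oc get machine ovirt10-7c7kw-worker-0-9t49p -n openshift-machine-api \
  -o jsonpath='{.status.phase}{"\n"}'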
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.6.12 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:0037