Bug 1982595 - Web console does not display new RHV VM in inventory
Summary: Web console does not display new RHV VM in inventory
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Migration Toolkit for Virtualization
Classification: Red Hat
Component: Inventory
Version: 2.1.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: urgent
Target Milestone: ---
Target Release: 2.1.0
Assignee: Jeff Ortel
QA Contact: Ilanit Stein
Docs Contact: Avital Pinnick
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2021-07-15 08:55 UTC by Tzahi Ashkenazi
Modified: 2021-08-26 07:09 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-26 07:09:25 UTC
Target Upstream Version:
Embargoed:




Links
GitHub konveyor/forklift-controller pull 316 (open): Handle user add VM (asynchronously). Last updated 2021-07-19 19:40:44 UTC
Red Hat Product Errata RHEA-2021:3278. Last updated 2021-08-26 07:09:38 UTC

Description Tzahi Ashkenazi 2021-07-15 08:55:36 UTC
Description of problem:
After creating a new VM on RHEV, the new VM does not appear in the MTV UI; the inventory log shows "Refresh failed / Apply event failed" errors.

New VM: "test-tzahi"
Environment: cloud38 with RHEV red01, using MTV-2.1.0-29

The new VM created on red01 does not appear in the MTV UI when creating a new plan.
From the inventory log:

{"level":"info","ts":1626338131.177048,"logger":"reconciler|ovirt","msg":"Apply event failed.","provider":"openshift-mtv/rhev01","sql":"\nSELECT\nID\n,Name\n,Description\n,Revision\n,Cluster\n,Host\n,RevisionValidated\n,PolicyVersion\n,GuestName\n,CpuSockets\n,CpuCores\n,CpuAffinity\n,CpuShares\n,Memory\n,BalloonedMemory\n,BIOS\n,Display\n,IOThreads\n,StorageErrorResumeBehaviour\n,HaEnabled\n,UsbEnabled\n,BootMenuEnabled\n,PlacementPolicyAffinity\n,Timezone\n,Status\n,Stateless\n,HasIllegalImages\n,NumaNodeAffinity\n,LeaseStorageDomain\n,DiskAttachments\n,NICs\n,HostDevices\n,CDROMs\n,WatchDogs\n,Properties\n,Snapshots\n,Concerns\nFROM VM\nWHERE\nID = :ID\n;\n","params":[{"Name":"ID","Value":"7d021a12-3eab-49bd-bd33-c2e9d72dfd31"}],"event":{"id":"1618769","code":"97","description":"The disk test-tzahi_Disk1 was successfully added to VM test-tzahi.","data_center":{"href":"/ovirt-engine/api/datacenters/25067f1d-8caa-418f-814d-14c779383c25","id":"25067f1d-8caa-418f-814d-14c779383c25"},"cluster":{"href":"/ovirt-engine/api/clusters/6b54093a-e3b2-4a75-b0c9-8f8fb23cde9f","id":"6b54093a-e3b2-4a75-b0c9-8f8fb23cde9f"},"host":{"href":"","id":""},"vm":{"href":"/ovirt-engine/api/vms/7d021a12-3eab-49bd-bd33-c2e9d72dfd31","id":"7d021a12-3eab-49bd-bd33-c2e9d72dfd31"}},"error":"sql: no rows in result set","stacktrace":"\ngithub.com/konveyor/controller/pkg/inventory/model.Table.scan()\n\t/remote-source/deps/gomod/pkg/mod/github.com/konveyor/controller.6/pkg/inventory/model/table.go:1002\ngithub.com/konveyor/controller/pkg/inventory/model.Table.Get()\n\t/remote-source/deps/gomod/pkg/mod/github.com/konveyor/controller.6/pkg/inventory/model/table.go:460\ngithub.com/konveyor/controller/pkg/inventory/model.(*Tx).Get()\n\t/remote-source/deps/gomod/pkg/mod/github.com/konveyor/controller.6/pkg/inventory/model/client.go:455\ngithub.com/konveyor/forklift-controller/pkg/controller/provider/container/ovirt.(*VMAdapter).Apply.func2()\n\t/remote-source/app/pkg/controller/provider/container/ovirt/model.go:768\ngithub.com/konveyor/forklift-controller/pkg/controller/provider/container/ovirt.(*Reconciler).apply()\n\t/remote-source/app/pkg/controller/provider/container/ovirt/reconciler.go:403\ngithub.com/konveyor/forklift-controller/pkg/controller/provider/container/ovirt.(*Reconciler).refresh()\n\t/remote-source/app/pkg/controller/provider/container/ovirt/reconciler.go:358\ngithub.com/konveyor/forklift-controller/pkg/controller/provider/container/ovirt.(*Reconciler).Start.func1()\n\t/remote-source/app/pkg/controller/provider/container/ovirt/reconciler.go:131\nruntime.goexit()\n\t/usr/lib/golang/src/runtime/asm_amd64.s:1374"}



{"level":"info","ts":1626338131.1771224,"logger":"reconciler|ovirt","msg":"Refresh failed.","provider":"openshift-mtv/rhev01","sql":"\nSELECT\nID\n,Name\n,Description\n,Revision\n,Cluster\n,Host\n,RevisionValidated\n,PolicyVersion\n,GuestName\n,CpuSockets\n,CpuCores\n,CpuAffinity\n,CpuShares\n,Memory\n,BalloonedMemory\n,BIOS\n,Display\n,IOThreads\n,StorageErrorResumeBehaviour\n,HaEnabled\n,UsbEnabled\n,BootMenuEnabled\n,PlacementPolicyAffinity\n,Timezone\n,Status\n,Stateless\n,HasIllegalImages\n,NumaNodeAffinity\n,LeaseStorageDomain\n,DiskAttachments\n,NICs\n,HostDevices\n,CDROMs\n,WatchDogs\n,Properties\n,Snapshots\n,Concerns\nFROM VM\nWHERE\nID = :ID\n;\n","params":[{"Name":"ID","Value":"7d021a12-3eab-49bd-bd33-c2e9d72dfd31"}],"error":"sql: no rows in result set","stacktrace":"\ngithub.com/konveyor/controller/pkg/inventory/model.Table.scan()\n\t/remote-source/deps/gomod/pkg/mod/github.com/konveyor/controller.6/pkg/inventory/model/table.go:1002\ngithub.com/konveyor/controller/pkg/inventory/model.Table.Get()\n\t/remote-source/deps/gomod/pkg/mod/github.com/konveyor/controller.6/pkg/inventory/model/table.go:460\ngithub.com/konveyor/controller/pkg/inventory/model.(*Tx).Get()\n\t/remote-source/deps/gomod/pkg/mod/github.com/konveyor/controller.6/pkg/inventory/model/client.go:455\ngithub.com/konveyor/forklift-controller/pkg/controller/provider/container/ovirt.(*VMAdapter).Apply.func2()\n\t/remote-source/app/pkg/controller/provider/container/ovirt/model.go:768\ngithub.com/konveyor/forklift-controller/pkg/controller/provider/container/ovirt.(*Reconciler).apply()\n\t/remote-source/app/pkg/controller/provider/container/ovirt/reconciler.go:403\ngithub.com/konveyor/forklift-controller/pkg/controller/provider/container/ovirt.(*Reconciler).refresh()\n\t/remote-source/app/pkg/controller/provider/container/ovirt/reconciler.go:358\ngithub.com/konveyor/forklift-controller/pkg/controller/provider/container/ovirt.(*Reconciler).Start.func1()\n\t/remote-source/app/pkg/controller/provider/container/ovirt/reconciler.go:131\nruntime.goexit()

Version-Release number of selected component (if applicable):

Cloud38 
OCP-48
CNV-48-451
OCS-48
MTV-2.1.0-29

Comment 1 Jeff Ortel 2021-07-15 15:28:14 UTC
The root cause is a difference in the events reported by different RHV instances.
ENG mostly tests with https://rhvm.v2v.bos.redhat.com/, which reports a created VM with USER_ADD_VM=34; that event is handled properly and the VM is reflected in the inventory. However, the RHV instance https://rhev-red-01.rdu2.scalelab.redhat.com/ reports USER_ADD_VM_STARTED=37, which is not processed, so the VM creation is missed and subsequent configuration events fail.
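For reference, the two event codes mentioned above, written as illustrative Go constants (the names mirror the oVirt event names; they are not the actual forklift-controller identifiers):

const (
	userAddVm        = 34 // USER_ADD_VM: handled; the VM appears in the inventory
	userAddVmStarted = 37 // USER_ADD_VM_STARTED: not processed; the VM add is missed
)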

Comment 2 Arik 2021-07-18 15:57:47 UTC
There can be 3 events when a VM is created successfully in RHV:
USER_ADD_VM(34)
USER_ADD_VM_STARTED(37)
USER_ADD_VM_FINISHED_SUCCESS(53)

And there are two flows.

(A) With asynchronous tasks:
1. First, USER_ADD_VM_STARTED is generated
2. When the async tasks are completed successfully, USER_ADD_VM_FINISHED_SUCCESS is generated

(B) Without asynchronous tasks there is a single event of USER_ADD_VM.

Which flow applies depends on how the VM is created: if the VM is created without disks, there are no asynchronous tasks, whereas when the VM is created with disks (i.e., from a template that contains disks, or from the Blank template with an add request that asks to provision disks), there are asynchronous tasks.

MTV should monitor both USER_ADD_VM and USER_ADD_VM_FINISHED_SUCCESS as an indication that a VM was added, to cover both flows.
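A minimal sketch of that suggestion, assuming the reconciler dispatches on the numeric event code (illustrative names; the actual change is in forklift-controller pull 316, linked above):

package ovirt

// handleVMEvent treats both USER_ADD_VM (34) and
// USER_ADD_VM_FINISHED_SUCCESS (53) as "VM added", covering the
// synchronous and asynchronous creation flows. addVM stands in for
// fetching the VM from RHV and inserting it into the inventory database.
func handleVMEvent(code int, vmID string, addVM func(id string) error) error {
	switch code {
	case 34, 53: // USER_ADD_VM, USER_ADD_VM_FINISHED_SUCCESS
		// The VM now fully exists; add it to the inventory so that
		// follow-up events (disk attach, NIC add, ...) find its row.
		return addVM(vmID)
	case 37: // USER_ADD_VM_STARTED
		// Creation is still in progress; USER_ADD_VM_FINISHED_SUCCESS
		// will follow, so there is nothing to record yet.
		return nil
	default:
		// Other event codes are handled elsewhere.
		return nil
	}
}

With this, both flows end with the VM inserted, and subsequent configuration events no longer fail with "sql: no rows in result set".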

Comment 3 Arik 2021-07-18 16:19:58 UTC
(In reply to Jeff Ortel from comment #1)
> ENG mostly tests with: https://rhvm.v2v.bos.redhat.com/ which reports a
> created VM as USER_ADD_VM=34 and the VM add is properly handled and
> reflected in the inventory.  However, the RHV instance
> https://rhev-red-01.rdu2.scalelab.redhat.com/ reports as:
> USER_ADD_VM_STARTED=37 which is not processed and results in the VM creation
> being missed and subsequent configuration events failing.

OK, I have permissions to access those environments, so I took a look:
In rhvm.v2v.bos.redhat.com, the Blank template is "optimized" for Desktop, which means disks are thin-provisioned; if the add-vm request doesn't include disks, there is no async task and USER_ADD_VM is generated.
In rhev-red-01.rdu2.scalelab.redhat.com, the Blank template is "optimized" for Server, which means disks are cloned; whether or not the add-vm request includes disks, USER_ADD_VM_STARTED is generated, followed by USER_ADD_VM_FINISHED_SUCCESS.

But again, if MTV treats USER_ADD_VM and USER_ADD_VM_FINISHED_SUCCESS the same way, that covers all cases, including different disk allocations.

Comment 4 Ilanit Stein 2021-07-27 05:59:16 UTC
@Jeff,

Could you please move this bug to MODIFIED?

Comment 5 Tzahi Ashkenazi 2021-07-27 12:25:41 UTC
Verified on cloud38 with MTV31.

New VM name: new-vm-test-mtv-31, on L0_Group_0.
In a new plan, the new VM appears in the "Select VMs" section as expected.

Comment 8 errata-xmlrpc 2021-08-26 07:09:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Migration Toolkit for Virtualization 2.1.0), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:3278