Description of problem:
During the very first step of the "import VM from VMware" procedure, a VDSErrorException occurs and the webadmin remains in a "loading" state instead of failing with an appropriate message.

Version-Release number of selected component (if applicable):
rhevm-3.6.0.3-0.1.el6 (3.6.0-20)
libvirt-client-1.2.17-5.el7.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.1.x86_64
vdsm-4.17.10.1-0.el7ev.noarch
sanlock-3.2.4-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. From webadmin, navigate to the Virtual Machines tab -> Import.
2. Enter the VMware environment components and credentials.
3. Click the "Load" button.

Actual results:
The available VMware VMs are not queried at all due to a VDSErrorException. Also, the webadmin remains in a "loading" state.

Expected results:
The available VMs to import should be queried and listed in the "Virtual Machines on Source" pane. In case of failure, the webadmin should report it with an appropriate message.

Additional info:
engine and vdsm logs attached; import page screenshot attached.
Created attachment 1097604 [details] engine.log
Created attachment 1097605 [details] vdsm.log
Created attachment 1097606 [details] Import page gets stuck in "querying" state.
engine.log issue started at: 2015-11-23 11:34:22,631
vdsm.log issue started at: 2015-11-23 11:35:12,435
Nisim/Meital, did you look for the exact issue in the VDSM log?

Thread-148418::ERROR::2015-11-23 11:35:26,134::v2v::145::root::(get_external_vms) error connection to hypervisor: "internal error: HTTP response code 500 for call to 'Login'. Fault: ServerFaultCode - Cannot complete login due to an incorrect user name or password."

seems relevant. The bug is probably that we don't handle this error correctly in the backend.
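A minimal sketch of the kind of backend handling the comment above suggests: catch the hypervisor connection failure and turn it into a structured error instead of letting the query hang. All names here (V2VConnectionError, the connect callable, this get_external_vms signature) are hypothetical stand-ins, not VDSM's actual API:

```python
class V2VConnectionError(Exception):
    """Hypothetical error type carrying the hypervisor's fault message."""


def get_external_vms(connect, uri, username, password):
    # 'connect' stands in for the real hypervisor client; on bad
    # credentials it is assumed to raise an exception whose message
    # includes the server fault (e.g. "incorrect user name or password").
    try:
        return connect(uri, username, password)
    except Exception as e:
        # Surface the failure to the caller instead of leaving the
        # UI waiting in a "loading" state.
        raise V2VConnectionError("error connecting to hypervisor: %s" % e)
```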
The ERROR in the VDSM log appeared when I tried to use ESXi credentials instead of vSphere credentials.

According to the following logs, the connection actually succeeded:

- vdsm.log:
Thread-185139::ERROR::2015-11-29 10:36:23,258::v2v::145::root::(get_external_vms) error connection to hypervisor: "internal error: Could not find compute resource specified in '/CFME/10.8.58.12'"

- engine.log:
ERROR [org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (ajp-/127.0.0.1:8702-10) [] Exception: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetVmsFromExternalProviderVDS, error = internal error: Could not find compute resource specified in '/CFME/10.8.58.12', code = 65 (Failed with error unexpected and code 16)
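The "Could not find compute resource" error above comes from the path portion of libvirt's vpx:// connection URI, which must name the datacenter, cluster (compute resource), and ESXi host exactly as vCenter knows them. A small sketch of assembling such a URI; the helper name and defaults are illustrative, not part of any product API:

```python
from urllib.parse import quote


def vpx_uri(vcenter, datacenter, cluster, esx_host, no_verify=True):
    """Build a libvirt vpx:// connection URI.

    The path components must match the vCenter inventory exactly;
    otherwise libvirt fails with "Could not find compute resource
    specified in '...'" as seen in the logs above.
    """
    path = "/".join(quote(p) for p in (datacenter, cluster, esx_host))
    uri = "vpx://%s/%s" % (vcenter, path)
    if no_verify:
        # Skip TLS certificate verification (common in test setups).
        uri += "?no_verify=1"
    return uri
```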
(In reply to Nisim Simsolo from comment #6) > The ERROR in VDSM log retrieved when i tried to use ESXi credentials instead > of vSphere credentails. > > According to the next logs, connection actually succeeded: > > - vdsm.log: > Thread-185139::ERROR::2015-11-29 > 10:36:23,258::v2v::145::root::(get_external_vms) error connection to > hypervisor: "internal error: Could not find compute resource specified in > '/CFME/10.8.58.12'" There are no attached logs from the 29th of November.
Created attachment 1100175 [details] 29th of November engine.log
Created attachment 1100176 [details] 29th of November vdsm.log
engine issue starts at: 2015-11-29 14:26:17,613
VDSM issue starts at: 2015-11-29 14:27:06,409
The main flow of this feature is not working at all and there is no webadmin workaround for it.
There are several issues:
1. Some of the VMs are illegal from libvirt's point of view (there is an exception in XMLDesc) - a patch will be sent to vdsm for 3.6 and will be referenced in this bug.
2. The UI dialog is stuck in "waiting" mode if there is a timeout or error in the connection, and it does not present the error to the user (Arik and I are working on that and will reference a patch here).
3. The vCenter that you tested against timed out since you had a lot of VMs in the cluster and it took more than 5 minutes (the default timeout) to probe. We can work around that with a cluster containing only a few VMs, but in any case I am pretty sure that it will fail the conversion process (I tested the v2v feature with a server based in China and a client in Israel, with no success).
(In reply to meital avital from comment #11)
> The main flow of this feature is not working at all and there is no webadmin
> workaround for it.

No, you do have a workaround:
* Open virsh to the server and see which VMs are legal (i.e. dumpxml <vm>).
* Add only the legal VMs to a cluster.
* Enter the RIGHT credentials in the import-v2v dialog.
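The workaround steps above can be sketched as a small script. This is an illustrative helper, not a supported tool: it assumes virsh is on PATH and treats a failing dumpxml as marking the VM illegal; the 'run' parameter is injectable only so the logic can be exercised without a hypervisor.

```python
import subprocess


def legal_vms(conn_uri, vm_names, run=subprocess.run):
    """Return the subset of VMs whose XML virsh can dump successfully."""
    legal = []
    for name in vm_names:
        # Equivalent to: virsh -c <uri> dumpxml <vm>
        result = run(["virsh", "-c", conn_uri, "dumpxml", name],
                     capture_output=True, text=True)
        if result.returncode == 0:
            legal.append(name)
    return legal
```

Only the VMs returned by such a check would then be added to the cluster used for the import.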
Bug tickets must have version flags set prior to targeting them to a release. Please ask maintainer to set the correct version flags and only then set the target milestone.
(In reply to Nisim Simsolo from comment #0)
> Version-Release number of selected component (if applicable):
> rhevm-3.6.0.3-0.1.el6 (3.6.0-20)
> libvirt-client-1.2.17-5.el7.x86_64

Nisim, the libvirt version doesn't correspond to the one you should have installed for build 20. Does it happen on the latest libvirt? Is it an old host? It is worth upgrading.
The fix for the stuck UI is in 3.6.1. We're missing the other part (recognizing the problematic VMs), but that's not blocking working VMs - hence moving the bug to 3.6.2.
Using the latest build (rhevm-3.6.1.2-0.1.el6), it is possible to get the VMware VMs list. Still, the DC/cluster parameters must be entered in the data center text box (the cluster text box is missing).
oVirt 3.6.2 RC1 has been released for testing, moving to ON_QA
Verified:
rhevm-3.6.2.6-0.1.el6
vdsm-4.17.17-0.el7ev.noarch
libvirt-client-1.2.17-13.el7_2.2.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.4.x86_64