Description of problem:
Improve the error message when adding a NIC with an external provider network to a running VM fails because the host is not installed with the External Network Provider.

When trying to add a vNIC with an external provider network profile to a running VM (if the VM is down this is possible), we get the following error message:

RHEV-M:
Error while executing action Add NIC to VM: Failed to activate VM Network Interface.

engine.log:
2015-04-29 14:11:01,521 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugNicVDSCommand] (ajp--127.0.0.1-8702-10) [293b7cb9] Command 'HotPlugNicVDSCommand(HostName = navy-vds3.qa.lab.tlv.redhat.com, HostId = 900504c9-0397-460c-abc0-2346f825de35, vm.vm_name=MicBur_5, nic=nic8 {id=5717f17d-1b0f-4af4-b446-12eea26da5bd, networkName=mb_test5, vnicProfileName=null, vnicProfileId=fa14ee39-9490-46ae-90da-279829e17dda, speed=1000, type=3, macAddress=00:00:00:01:00:1a, active=true, linked=true, portMirroring=false, vmId=e54b7b4c-1ea8-4ba4-b238-76db823c51de, vmName=null, vmTemplateId=null, QoSName=null}, vmDevice=VmDevice {vmId=e54b7b4c-1ea8-4ba4-b238-76db823c51de, deviceId=5717f17d-1b0f-4af4-b446-12eea26da5bd, device=bridge, type=INTERFACE, bootOrder=9, specParams={outbound={}, inbound={}}, address=, managed=true, plugged=true, readOnly=false, deviceAlias=, customProperties={}, snapshotId=null, logicalName=null})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugNicVDS, error = Cannot get interface MTU on 'mb_test5': No such device, code = 49
2015-04-29 14:11:01,526 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugNicVDSCommand] (ajp--127.0.0.1-8702-10) [293b7cb9] FINISH, HotPlugNicVDSCommand, log id: 6adc6af2
2015-04-29 14:11:02,480 ERROR [org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand] (ajp--127.0.0.1-8702-10) [293b7cb9] Command 'org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand' failed: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugNicVDS, error = Cannot get interface MTU on 'mb_test5': No such device, code = 49 (Failed with error ACTIVATE_NIC_FAILED and code 49)

vdsm.log:
Thread-30930::DEBUG::2015-04-29 14:11:04,748::__init__::445::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'VM.hotplugNic' in bridge with {u'params': {u'nic': {u'nicModel': u'pv', u'macAddr': u'00:00:00:01:00:1a', u'linkActive': u'true', u'network': u'mb_test5', u'bootOrder': u'9', u'custom': {u'plugin_type': u'OPEN_VSWITCH', u'security_groups': u'e864eee7-1fec-4c17-8a24-69bbee136525', u'vnic_id': u'056e22e9-b53d-4bba-827f-81839a79edc6', u'provider_type': u'OPENSTACK_NETWORK'}, u'specParams': {u'inbound': {}, u'outbound': {}}, u'deviceId': u'5717f17d-1b0f-4af4-b446-12eea26da5bd', u'device': u'bridge', u'type': u'interface'}, u'vmId': u'e54b7b4c-1ea8-4ba4-b238-76db823c51de'}, u'vmID': u'e54b7b4c-1ea8-4ba4-b238-76db823c51de'}
JsonRpcServer::DEBUG::2015-04-29 14:11:04,748::__init__::482::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-30930::INFO::2015-04-29 14:11:04,752::vm::2297::vm.Vm::(hotplugNic) vmId=`e54b7b4c-1ea8-4ba4-b238-76db823c51de`::Hotplug NIC xml: <interface type="bridge">
  <mac address="00:00:00:01:00:1a"/>
  <model type="virtio"/>
  <source bridge="mb_test5"/>
  <link state="up"/>
  <boot order="9"/>
  <bandwidth/>
</interface>

Thread-30930::ERROR::2015-04-29 14:11:04,804::vm::2302::vm.Vm::(hotplugNic) vmId=`e54b7b4c-1ea8-4ba4-b238-76db823c51de`::Hotplug failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2300, in hotplugNic
    self._dom.attachDevice(nicXml)
  File "/usr/share/vdsm/virt/vm.py", line 617, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 126, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 500, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirtError: Cannot get interface MTU on 'mb_test5': No such device

The error doesn't explain what is wrong or what the issue is. It actually took me quite some time to understand what was wrong, because I have a mixed setup with several servers, some of them installed with the External Network Provider and some of them not.

Version-Release number of selected component (if applicable):
Improve error message when failing to add a NIC to a VM with an external provider network, because the server is not installed with the External Network Provider.

How reproducible:
100%

Steps to Reproduce:
1. Working setup with neutron configured as the external network provider.
2. Two servers: server 1 installed with the External Network Provider, server 2 installed without it.
3. Create a network on neutron and import it to RHEV-M.
4. Run a VM on server 1 and add a NIC with the external network.
5. Run a VM on server 2 and try to add a NIC with the same network.

Actual results:
Step 4 succeeds. Step 5 fails with an uninformative error message.

Expected results:
The error message should explain what is wrong or what the issue is. "Failed to HotPlugNicVDS, error = Cannot get interface MTU on 'mb_test5': No such device, code = 49 (Failed with error ACTIVATE_NIC_FAILED and code 49)" is not good enough, especially when you have a mixed setup with several servers, some of which are installed with the external provider and some not. In the GUI I can't really tell which host was actually installed with the external network provider and which was not.

* Maybe this is a good RFE for the future.
Thank you, Lior.
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
It would be best if Vdsm could issue errors to Engine. Each error should have a UUID and a textual explanation. With such a feature, Vdsm could report that the required packages are not installed. Engine would take the host to non-operational, report the reason prominently in the event log, and not attempt to use this host for starting VMs.
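A minimal sketch of what such a structured error might look like; the class and attribute names are illustrative and not an existing Vdsm API:

import uuid

class HostSetupError(Exception):
    # Illustrative structured error: a stable identifier plus human-readable
    # text, so Engine can key off the id, move the host to non-operational,
    # and show the explanation in the event log.
    ERROR_ID = uuid.UUID('00000000-0000-0000-0000-000000000000')  # placeholder

    def __init__(self, message):
        super(HostSetupError, self).__init__(message)
        self.message = message

# e.g. raise HostSetupError("required external network provider packages "
#                           "are not installed on this host")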
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has already been released and the bug is not ON_QA.
oVirt 4.0 beta has been released, moving to RC milestone.
This specific error is not a vdsm error but a libvirt error, which vdsm just forwards. What we could do in this case is warn the user (before Engine sends the VM run request) that he is about to add an OpenStack neutron-managed NIC to a VM which runs on a host not provisioned with OpenStack. Would this be OK?
Does "host not provisioned with OpenStack" mean that the host was added without the 'External Network Provider' configuration? Nowadays we add the external network provider and run the packstack installer to configure the hosts, not via Add Host > External Network Provider, so how can you tell whether my hosts have neutron installed or not?
We can only tell if the host was installed with neutron during host installation. When this was not done, we can only assume that there is no neutron on the host. If the user installs OpenStack on the host manually (or does not configure it properly), we have no way of telling this.
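To illustrate why manual installs are hard to detect, a best-effort host-side check would have to rely on heuristics like the one below. This is only a sketch; the package name is an assumption and may differ per distribution:

import os
import subprocess

def neutron_agent_installed():
    # Illustrative heuristic: ask rpm whether the neutron Open vSwitch agent
    # package is present. A manually or partially installed neutron may still
    # be misconfigured, so a package query can never prove the host is usable.
    with open(os.devnull, 'w') as devnull:
        return subprocess.call(
            ['rpm', '-q', 'openstack-neutron-openvswitch'],
            stdout=devnull, stderr=subprocess.STDOUT) == 0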
ok by us.
Our current focus for external network providers is *not* to be involved in the host installation. Neither Vdsm nor Engine can tell whether a specific 3rd-party VIF driver is properly installed on the host. If the VIF driver is properly installed, it could sense whether it is properly configured, and if it is not, make sure that the host is considered non-operational by Engine.
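As a sketch of that idea only (none of these function names, service names, or capability keys are an existing vdsm hook API), a VIF driver could report its own status so Engine can move the host to non-operational instead of failing hot-plug with a cryptic libvirt error:

import os
import subprocess

def _ovs_agent_running():
    # Hypothetical check: is the neutron Open vSwitch agent service active?
    return subprocess.call(['systemctl', 'is-active', '--quiet',
                            'neutron-openvswitch-agent']) == 0

def _integration_bridge_exists():
    # Hypothetical check: does the OVS integration bridge (commonly br-int)
    # exist on this host?
    return os.path.exists('/sys/class/net/br-int')

def report_vif_driver_status(caps):
    # Illustrative: the installed VIF driver senses whether it is properly
    # configured and exposes that to Engine via the host capabilities it
    # already reports.
    caps['externalVifDriver'] = {
        'installed': True,
        'configured': _ovs_agent_running() and _integration_bridge_exists(),
    }
    return caps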
Closing old issues, please reopen if still needed. In any case patches are welcome.