Bug 1216991 - Let external providers' VIF driver affect the state of a host
Summary: Let external providers' VIF driver affect the state of a host
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Frontend.WebAdmin
Version: ---
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Marcin Mirecki
QA Contact: Michael Burman
URL:
Whiteboard:
Depends On: 1061611 1593804
Blocks: 1063716 1314375
 
Reported: 2015-04-29 11:26 UTC by Michael Burman
Modified: 2022-04-13 14:33 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-08-01 07:49:04 UTC
oVirt Team: Network
Embargoed:
sbonazzo: ovirt-4.3-




Links:
Red Hat Issue Tracker RHV-45654 (last updated 2022-04-13 14:33:28 UTC)

Description Michael Burman 2015-04-29 11:26:57 UTC
Description of problem:
Improve the error message shown when adding a NIC with an external provider network to a running VM fails because the server is not installed with the External Network Provider.

- When trying to add a vNIC with an external provider network profile to a running VM (this works if the VM is down), we get the following error message:

RHEV-M
 Error while executing action Add NIC to VM: Failed to activate VM Network Interface.

engine.log
2015-04-29 14:11:01,521 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugNicVDSCommand] (ajp--127.0.0.1-8702-10) [293b7cb9] Command 'HotPlugNicVDSCommand(HostName = navy-vds3.qa.lab.tlv.redhat.com, HostId = 900504c9-0397-460c-abc0-2346f825de35, vm.vm_name=MicBur_5, nic=nic8 {id=5717f17d-1b0f-4af4-b446-12eea26da5bd, networkName=mb_test5, vnicProfileName=null, vnicProfileId=fa14ee39-9490-46ae-90da-279829e17dda, speed=1000, type=3, macAddress=00:00:00:01:00:1a, active=true, linked=true, portMirroring=false, vmId=e54b7b4c-1ea8-4ba4-b238-76db823c51de, vmName=null, vmTemplateId=null, QoSName=null}, vmDevice=VmDevice {vmId=e54b7b4c-1ea8-4ba4-b238-76db823c51de, deviceId=5717f17d-1b0f-4af4-b446-12eea26da5bd, device=bridge, type=INTERFACE, bootOrder=9, specParams={outbound={}, inbound={}}, address=, managed=true, plugged=true, readOnly=false, deviceAlias=, customProperties={}, snapshotId=null, logicalName=null})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugNicVDS, error = Cannot get interface MTU on 'mb_test5': No such device, code = 49
2015-04-29 14:11:01,526 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugNicVDSCommand] (ajp--127.0.0.1-8702-10) [293b7cb9] FINISH, HotPlugNicVDSCommand, log id: 6adc6af2
2015-04-29 14:11:02,480 ERROR [org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand] (ajp--127.0.0.1-8702-10) [293b7cb9] Command 'org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand' failed: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugNicVDS, error = Cannot get interface MTU on 'mb_test5': No such device, code = 49 (Failed with error ACTIVATE_NIC_FAILED and code 49)


vdsm.log 
Thread-30930::DEBUG::2015-04-29 14:11:04,748::__init__::445::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'VM.hotplugNic' in bridge with {u'params': {u'nic': {u'nicModel': u'pv', u'macAddr': u'00:00:00:01:00:1a', u'linkActive': u'true', u'network': u'mb_test5', u'bootOrder': u'9', u'custom': {u'plugin_type': u'OPEN_VSWITCH', u'security_groups': u'e864eee7-1fec-4c17-8a24-69bbee136525', u'vnic_id': u'056e22e9-b53d-4bba-827f-81839a79edc6', u'provider_type': u'OPENSTACK_NETWORK'}, u'specParams': {u'inbound': {}, u'outbound': {}}, u'deviceId': u'5717f17d-1b0f-4af4-b446-12eea26da5bd', u'device': u'bridge', u'type': u'interface'}, u'vmId': u'e54b7b4c-1ea8-4ba4-b238-76db823c51de'}, u'vmID': u'e54b7b4c-1ea8-4ba4-b238-76db823c51de'}
JsonRpcServer::DEBUG::2015-04-29 14:11:04,748::__init__::482::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-30930::INFO::2015-04-29 14:11:04,752::vm::2297::vm.Vm::(hotplugNic) vmId=`e54b7b4c-1ea8-4ba4-b238-76db823c51de`::Hotplug NIC xml: <interface type="bridge">
        <mac address="00:00:00:01:00:1a"/>
        <model type="virtio"/>
        <source bridge="mb_test5"/>
        <link state="up"/>
        <boot order="9"/>
        <bandwidth/>
</interface>

Thread-30930::ERROR::2015-04-29 14:11:04,804::vm::2302::vm.Vm::(hotplugNic) vmId=`e54b7b4c-1ea8-4ba4-b238-76db823c51de`::Hotplug failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2300, in hotplugNic
    self._dom.attachDevice(nicXml)
  File "/usr/share/vdsm/virt/vm.py", line 617, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 126, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 500, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirtError: Cannot get interface MTU on 'mb_test5': No such device


- The error does not explain what is wrong or what the actual issue is.
It actually took me quite some time to understand the problem, because I have a mixed setup with several servers, some installed with the External Network Provider and some not.
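
A minimal illustration of the kind of pre-check that would have made this obvious: verify that the bridge device exists on the host before asking libvirt to hot-plug the NIC, and fail with a message that points at the external provider. The function name and error text below are hypothetical, not actual vdsm code.

import os

def verify_bridge_exists(bridge_name):
    """Raise a descriptive error if the bridge device is missing on this host."""
    if not os.path.isdir(os.path.join('/sys/class/net', bridge_name)):
        raise RuntimeError(
            "Network '%s' does not exist on this host. If it is an external "
            "provider (e.g. Neutron) network, the host is probably not "
            "installed or configured with that provider." % bridge_name)

# Illustrative usage, before building the hot-plug XML:
# verify_bridge_exists('mb_test5')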

Version-Release number of selected component (if applicable):

How reproducible:
100%

Steps to Reproduce:
1. A working setup with Neutron configured as an external network provider
2. Two servers: server 1 installed with the External Network Provider, server 2 installed without it.
3. Create a network on Neutron and import it into RHEV-M
4. Run a VM on server 1 and add a NIC with the external network
5. Run a VM on server 2 and try to add a NIC with the same network

Actual results:
Step 4 succeeds. Step 5 fails with an uninformative error message.

Expected results:
The error message should explain what is wrong or what the actual issue is.

Failed to HotPlugNicVDS, error = Cannot get interface MTU on 'mb_test5': No such device, code = 49 (Failed with error ACTIVATE_NIC_FAILED and code 49)

is not good enough, especially when you have a mixed setup with several servers, some installed with the external provider and some not.
In the GUI I can't really tell which hosts were actually installed with the external network provider and which were not.

* Maybe this is a good RFE for the future.

Comment 1 Michael Burman 2015-04-29 11:55:34 UTC
Thank you, Lior.

Comment 2 Red Hat Bugzilla Rules Engine 2015-10-19 10:59:28 UTC
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use target milestone to plan a fix for an oVirt release.

Comment 3 Dan Kenigsberg 2016-03-27 15:27:46 UTC
It would be best if Vdsm could issue errors to Engine. Each error should have a UUID and a textual explanation.

With such a feature, Vdsm could report that the required packages are not installed. Engine would take the host to non-operational, report the reason prominently in the event log, and would not attempt to use this host for starting VMs.
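
A rough sketch of what such a structured error report could look like. The field names and the UUID are purely illustrative; this is not an existing Vdsm/Engine API.

import json
import uuid

# Stable, well-known identifier for this class of error (illustrative value).
MISSING_PROVIDER_ERROR_ID = uuid.UUID('00000000-0000-0000-0000-000000000001')

def build_host_error(explanation):
    """Build a structured error report that Engine could act upon."""
    return {
        'errorId': str(MISSING_PROVIDER_ERROR_ID),   # stable UUID per error type
        'explanation': explanation,                  # textual reason for the event log
        'suggestedHostState': 'NON_OPERATIONAL',     # hint for Engine
    }

if __name__ == '__main__':
    report = build_host_error(
        "External network provider packages are not installed on this host")
    print(json.dumps(report, indent=2))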

Comment 4 Sandro Bonazzola 2016-05-02 09:54:01 UTC
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has been already released and bug is not ON_QA.

Comment 5 Yaniv Lavi 2016-05-23 13:16:12 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 6 Yaniv Lavi 2016-05-23 13:22:16 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 7 Marcin Mirecki 2016-06-10 12:20:30 UTC
This specific error is not a vdsm error but a libvirt error, which vdsm just forwards.

What we could do in this case is warn the user (before the engine sends the VM run request) that they are about to add an OpenStack Neutron-managed NIC to a VM running on a host that is not provisioned with OpenStack.
Would this be OK?
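
For illustration only, the decision behind such a warning could look roughly like this. It is written as Python for brevity (the real engine code is Java), and the attribute names are hypothetical stand-ins, not actual engine entities.

def needs_external_provider_warning(network, host):
    """Return True when hot-plugging this network on this host should warn the user."""
    is_external = getattr(network, 'provided_by_external_provider', False)
    host_provisioned = getattr(host, 'provisioned_with_external_provider', False)
    return is_external and not host_provisioned

# Example usage with simple stand-in objects:
class _Obj(object):
    def __init__(self, **kw):
        self.__dict__.update(kw)

network = _Obj(provided_by_external_provider=True)
host = _Obj(provisioned_with_external_provider=False)
if needs_external_provider_warning(network, host):
    print("Warning: this VM runs on a host that was not provisioned with the "
          "external network provider; hot-plugging this NIC will likely fail.")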

Comment 8 Meni Yakove 2016-06-13 07:37:29 UTC
Does "host not provisioned with OpenStack" mean that the host was added without the 'External Network Provider' configuration?
Currently we add the external network provider and run the Packstack installer to configure the hosts, not via Add Host > External Network Provider, so how can you tell whether my hosts have Neutron installed or not?

Comment 9 Marcin Mirecki 2016-06-13 07:53:44 UTC
We can only tell if the host was installed with Neutron during host installation. When this was not done, we could assume that there is no Neutron on the host.

If the user installs OpenStack on the host manually (or does not configure it properly), we have no way of telling.

Comment 10 Meni Yakove 2016-06-13 09:07:27 UTC
ok by us.

Comment 11 Dan Kenigsberg 2016-06-13 09:24:34 UTC
Our current focus for external network providers is *not* to be involved in the host installation. Neither Vdsm nor Engine can tell whether a specific third-party VIF driver is properly installed on the host.

If the VIF driver is properly installed, it could sense whether it is properly configured, and if it is not, make sure that the host is considered non-operational by Engine.
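
A rough sketch of the kind of self-check a VIF driver could run on the host. The hook name and report shape are hypothetical; oVirt does not define this exact interface.

import shutil

def vif_driver_self_check():
    """Return (healthy, reason) for this host's external-provider VIF driver."""
    # Example check: an OVS-based driver needs the Open vSwitch CLI to be present.
    if shutil.which('ovs-vsctl') is None:
        return False, "ovs-vsctl not found; Open vSwitch is not installed"
    return True, ""

def report_host_state():
    healthy, reason = vif_driver_self_check()
    if not healthy:
        # In the proposed design, this report would drive Engine to mark the
        # host non-operational and show the reason in the event log.
        return {'hostState': 'NON_OPERATIONAL', 'reason': reason}
    return {'hostState': 'UP', 'reason': ''}

if __name__ == '__main__':
    print(report_host_state())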

Comment 12 Yaniv Lavi 2018-08-01 07:49:04 UTC
Closing old issues, please reopen if still needed.
In any case patches are welcome.

