Bug 1625904 - [RFE] Possibility to assign the same mac address for the VFs of different SRIOV PF ports
Keywords:
Status: CLOSED DUPLICATE of bug 1636395
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 10.0 (Newton)
Hardware: x86_64
OS: Linux
Priority: medium
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Assaf Muller
QA Contact: Yariv
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-09-06 08:21 UTC by Sergii Mykhailushko
Modified: 2022-03-13 16:29 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-12 11:30:38 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker NFV-2441 0 None None None 2022-03-13 16:28:50 UTC
Red Hat Issue Tracker OSP-13766 0 None None None 2022-03-13 16:29:00 UTC

Description Sergii Mykhailushko 2018-09-06 08:21:21 UTC
Description of problem:

In current versions of RHOSP it appears to be impossible to assign the same MAC address to the VFs of different SR-IOV PF ports on a single compute node and a single VNF without changing the 'vif_plugging_timeout' option in nova.conf.

Sample config:

- PF X VF 0 MAC1
- PF Y VF 0 MAC1
- PF Z VF 0 MAC1
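
For illustration, such a configuration could be expressed as three Neutron ports sharing one MAC, each bound to a different SR-IOV physical network so that each lands on a different PF. This is only a sketch; the network names, port names, and the MAC value are hypothetical placeholders:

~~~
# Three direct (SR-IOV) ports sharing one MAC address; the network and
# port names plus the MAC value are hypothetical placeholders.
openstack port create --network sriov-net-x --vnic-type direct \
    --mac-address fa:16:3e:aa:bb:cc vnf-port-x
openstack port create --network sriov-net-y --vnic-type direct \
    --mac-address fa:16:3e:aa:bb:cc vnf-port-y
openstack port create --network sriov-net-z --vnic-type direct \
    --mac-address fa:16:3e:aa:bb:cc vnf-port-z
~~~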


When using a Heat template with the above configuration, the stack create fails with a timeout and the following errors:

Heat-engine log:
~~~
2018-07-28 14:50:57.987 17329 ERROR heat.engine.resource
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource Traceback (most recent call last):
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 770, in _action_recorder
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource yield
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 872, in _do_action
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource yield self.action_handler_task(action, args=handler_args)
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/scheduler.py", line 353, in wrapper
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource step = next(subtask)
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 823, in action_handler_task
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource done = check(handler_data)
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resources/openstack/nova/server.py", line 902, in check_create_complete
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource check = self.client_plugin()._check_active(server_id)
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/clients/os/nova.py", line 233, in _check_active
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource 'code': fault.get('code', _('Unknown'))
2018-07-28 14:51:05.489 17298 ERROR heat.engine.resource ResourceInError: Went to status ERROR due to "Message: Build of instance 2b1f5b10-a9ac-40f5-9752-dbe9b1126984 aborted: Failed to allocate the network(s), not rescheduling., Code: 500"
~~~

nova-conductor:
~~~
2018-07-28 14:19:18.758 10832 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 0.16 sec
2018-07-28 14:39:15.696 10746 ERROR nova.scheduler.utils [req-6b7440e8-c809-48ca-919b-8389a2567d01 ecb808d5f6d945fdb80af802c44b422c 3ed603a387cb4088a5c88aaaac3f2025 - - -] [instance: 371bae0c-2d11-480e-a7ce-98d83e43dbbb] Error from last host: gbudpcpt11.turkcell.tgc (node gbudpcpt11.turkcell.tgc): [u'Traceback (most recent call last):
', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1787, in _do_build_and_run_instance
filter_properties)
', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1942, in _build_and_run_instance
instance_uuid=instance.uuid, reason=e.format_message())
', u'RescheduledException: Build of instance 371bae0c-2d11-480e-a7ce-98d83e43dbbb was re-scheduled: Insufficient compute resources: Requested instance NUMA topology together with requested PCI devices cannot fit the given host NUMA topology; Claim pci failed..
']
~~~


If we set "vif_plugging_timeout=0" in nova.conf on each compute host, the above MAC assignment works (after the stack is deployed). But in that case we may hit consequences, with Nova endlessly retrying to get results from Neutron about the VIF plugging status (for other instances). Plugging VIFs with the same MAC address takes only a few seconds, yet any value of vif_plugging_timeout other than '0' causes the instance spawn to fail with a VIF-waiting timeout.
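
For reference, a minimal sketch of this workaround, assuming crudini is available and the non-containerized RHOSP 10 (Newton) layout where nova-compute runs as a systemd service on the host:

~~~
# Sketch of the workaround, assuming crudini is installed and nova-compute
# runs directly on the host (RHOSP 10 / Newton, non-containerized).
crudini --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 0
systemctl restart openstack-nova-compute
~~~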


Additionally, if assigning the same VF MAC addresses can be implemented in this way, it would be good to have an option to configure it in Heat templates as well (see the sketch below).
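
As a rough illustration of what that could look like, here is a hypothetical Heat (HOT) fragment using the existing OS::Neutron::Port properties; the resource name, network name, and MAC value are made up:

~~~
# Hypothetical HOT fragment; resource/network names and the MAC are made up.
resources:
  vnf_port_x:
    type: OS::Neutron::Port
    properties:
      network: sriov-net-x
      mac_address: fa:16:3e:aa:bb:cc
      binding:vnic_type: direct
~~~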

