Description of problem:
os-net-config accepts a NIC abstraction (nic1, nic2, etc.) rather than a NIC name, but it is sometimes hard to predict what that mapping will be (especially when there are multiple PCI NICs). We need a report that shows the mapping of NIC abstraction to NIC name.

Version-Release number of selected component (if applicable):
OSP-D 7 GA and 7.1

How reproducible:
100%

Steps to Reproduce:
1. Configure custom NIC templates
2. Deploy

Actual results:
You can't easily tell which NICs were mapped unless you SSH in to one of the overcloud nodes.

Expected results:
You should be able to easily tell which NIC abstraction maps to which physical NIC.

Additional info:
I can think of two ways to approach this. We can either report on the NIC mapping during deployment (when os-net-config actually runs), or we can add it to discovery. It would arguably be more useful if this ran during discovery, but we would still need a good way to parse the extra_hardware blob if that's where the data is going to be stored. If we add it to discovery, we might want to replicate the logic that os-net-config uses to determine NIC numbering in a standalone script.

Here is the logic. It's not difficult. First, any NICs named em*, eno*, or eth* are numbered in alphanumeric order. Then, any remaining NICs are numbered in alphanumeric order.

    def ordered_active_nics():
        embedded_nics = []
        nics = []
        for name in glob.iglob(_SYS_CLASS_NET + '/*'):
            nic = name[(len(_SYS_CLASS_NET) + 1):]
            if _is_active_nic(nic):
                if nic.startswith('em') or nic.startswith('eth') or \
                        nic.startswith('eno'):
                    embedded_nics.append(nic)
                else:
                    nics.append(nic)
        return sorted(embedded_nics) + sorted(nics)
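For a standalone script, the ordering logic above can be sketched without touching /sys/class/net by taking the active NIC names as a parameter — a minimal sketch, assuming the active-NIC filtering done by _is_active_nic() has already happened (the function name order_active_nics here is illustrative, not os-net-config's actual API):

```python
def order_active_nics(active_nics):
    """Order NICs the way os-net-config numbers them: embedded NICs
    (em*, eth*, eno*) first in alphanumeric order, then all remaining
    NICs in alphanumeric order. nic1 maps to the first entry, nic2 to
    the second, and so on."""
    embedded = [n for n in active_nics
                if n.startswith(('em', 'eth', 'eno'))]
    others = [n for n in active_nics if n not in embedded]
    return sorted(embedded) + sorted(others)
```

With the returned list, the abstraction-to-name report is just the list index plus one.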
In Pike we've added two new interface commands that show introspection data:

    openstack baremetal introspection interface list <node> - show data for all interfaces on a node
    openstack baremetal introspection interface show <node> <interface> - show detailed data per interface

It seems a new "abstraction name" field would work very well here and would make it easy to see the mapping between the abstraction name in the template and the actual interface name. We could add the logic described in comment 1 to the CLI commands. My only concern with adding this new field is that these commands are part of Ironic Inspector, and upstream users may not be using os-net-config or have any use for, or understanding of, an "abstraction name". I'm wondering whether adding this field would be accepted upstream. An option would be to add a new Ironic Inspector plugin that would populate Swift with the abstraction data - essentially running the logic above. Since this plugin would be enabled in instack-undercloud only when os-net-config is being used, we can limit its use. The CLI could then display this new field as an "extra" field, so there would be no os-net-config-specific context.
I'm not really confident we could sell a TripleO-specific inspector plug-in upstream either. Also, classifying NICs as embedded or external based only on NIC driver name and ordering sounds like asking for trouble. If needed, TripleO should host and distribute this sort of inspector plug-in and add further mapping criteria as required. If a new field is required for the inspector CLI client, I'd suggest a "traits" field: a list of standardised qualitative strings [1] assigned to a NIC. TripleO could then apply different configs based on the inspector CLI filtering those, e.g.:

    # an interface that was classified by an inspector plugin with
    # the following qualities
    interface: ..., traits: [logical:network:provisioning, logical:network:cleaning, hw:network:10GB, hw:network:full-duplex]

[1] traits (missing hw:networking or logical: traits, though): https://github.com/openstack/os-traits
If we could make sure that 'openstack baremetal introspection interface list <node>' listed the interfaces in the order specified in the description's 'ordered_active_nics' example, wouldn't that achieve the same purpose? With that, the operator would be able to tell the mapping based on that sorting.
Thanks Milan and Ramon for your feedback! Ramon, the issue with relying on the specific ordering is that it would add os-net-config-specific code to the CLI to do the sorting; I was trying to avoid that by putting the specific code in a plugin, which could then be brought in by the deployer. But yes, I understand it may be a hard sell adding a plugin that calls into TripleO (os-net-config). I'm not sure how we could have TripleO host and distribute the plugin, as I thought all plugins had to be in the Ironic Inspector namespace, but I'll discuss this with you further, along with the traits.
So the traits as specified seem to be more suited to making a decision about which NIC to use for a particular network. With this RFE, though, we're not suggesting changing the decision process - that is done by os-net-config. We're just showing the mapping from the abstraction name that the user puts in a template file to the actual NIC name os-net-config chooses. Using traits as an input to the os-net-config algorithm for choosing networks may be worth pursuing as a separate RFE, however.
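For illustration, the report described here could be as simple as pairing the nicN names with os-net-config's ordered interface list — a hypothetical sketch (nic_mapping is not an existing os-net-config function), assuming the ordered list of active NICs per the logic in comment 1 is already available:

```python
def nic_mapping(ordered_nics):
    """Pair nic1, nic2, ... abstraction names with real interface names.

    `ordered_nics` is assumed to already be in os-net-config order
    (embedded em*/eno*/eth* NICs first, then the rest, each group
    sorted alphanumerically)."""
    return {'nic%d' % (i + 1): name for i, name in enumerate(ordered_nics)}
```

This is exactly the display-only mapping the RFE asks for: no change to the decision process, just a view of its result.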
A custom TripleO plugin might still be useful:

    openstack baremetal introspection data save node-1 | \
        jq .tripleo.nics_mapping[0]
This has been resolved with the fix in https://review.openstack.org/#/c/383516/ for https://bugzilla.redhat.com/show_bug.cgi?id=1532140. Marking as duplicate. *** This bug has been marked as a duplicate of bug 1532140 ***