Description of problem:
If an operator has two (or more) different flavors of compute nodes whose network connections differ, Director provides no way to configure multiple compute network configurations.

Version-Release number of selected component (if applicable):
OSP 8.0 / OSP-D 8.0

Actual results:
There is no way to properly configure Compute nodes when some nodes have more network cards than others. For instance, if some of my Compute nodes have 2x10G cards in a bond and others should have 4x10G cards in a bond, today we have no way to configure both through Director without significant modifications to the Heat templates.

Expected results:
We should be able to support multiple Compute roles, or at the very least be able to send a different NIC configuration template to different nodes.

Additional info:
I can think of a couple of ways to handle this scenario. We could have per-node NIC configuration templates. We might also be able to do something inside os-net-config to handle interfaces that may or may not exist; today, if an interface doesn't exist, in certain scenarios os-net-config aborts rather than completing the configuration.

Modifying os-net-config would only get us so far. For instance, it wouldn't be too hard to support a bond with either 2 or 4 interfaces attached, but if you wanted to split out just the storage VLAN to a separate bond on some nodes, we would have to increase the complexity of os-net-config templates (to support if/then blocks).

It would be nice to have per-node NIC configs. Unfortunately, we don't have any way to tie particular Heat resources to particular physical hardware, so there are probably some dependency issues here. We might be able to set a flag in Ironic or Nova that we could tie to a specific NIC configuration, but this will need to be engineered.
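To make the two flavors concrete, here is a sketch of what the two os-net-config templates would look like (file names, NIC names, and bonding options are illustrative assumptions, not taken from any actual deployment). Today only one such template can be wired to the Compute role, which is the gap this bug describes:

```yaml
# Hypothetical template for nodes with 2x10G in a bond
# (e.g. compute-2nic.yaml)
network_config:
  - type: linux_bond
    name: bond0
    bonding_options: "mode=802.3ad"
    members:
      - type: interface
        name: nic1
      - type: interface
        name: nic2
---
# Hypothetical template for nodes with 4x10G in a bond
# (e.g. compute-4nic.yaml)
network_config:
  - type: linux_bond
    name: bond0
    bonding_options: "mode=802.3ad"
    members:
      - type: interface
        name: nic1
      - type: interface
        name: nic2
      - type: interface
        name: nic3
      - type: interface
        name: nic4
```

If a node matching the first template boots with the second one applied (or vice versa), os-net-config hits interfaces that don't exist, which is exactly the failure mode described above.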
I believe this will be solved by fully composable roles, which we are looking at for Newton. Adding needinfo on Steve Hardy to be sure.
@Hugh: Yes, when we support custom/fully-composable roles, that will provide one way to solve this: you could define e.g. a Compute2 role which uses nodes tagged with e.g. a compute2 profile, and give those a different network config.

However, there may also be some interim alternatives:

> we don't have any way to tie certain Heat resources to certain physical hardware

This is no longer true as of OSP 8; it is now possible to get predictable placement. So it might be possible to do something similar to https://review.openstack.org/#/c/271450/ and simply run os-net-config from a script, having it look up a different config based on some characteristic of the node (perhaps even the hostname prefix).

The latter approach isn't tested, but I think it is likely to work provided some key to look up on can be decided ahead of time. Again, it could just be compute1/compute2, with the script deciding which to use, e.g. based on the number of NICs; this doesn't even require predictable placement.

Dan - let me know if you'd like to work together on getting some functional examples of what I described.
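A minimal sketch of the script-based interim approach described above, assuming the template paths, the 4-NIC threshold, and the `pick_config` helper name (all hypothetical); the actual key to select on would need to be agreed ahead of time:

```shell
#!/bin/bash
# Sketch (untested): run os-net-config from a script and pick a
# template based on how many physical NICs the node has.
# Paths and the ">= 4" threshold are illustrative assumptions.

pick_config() {
  # $1 = number of physical NICs detected on the node
  if [ "$1" -ge 4 ]; then
    echo "/etc/os-net-config/templates/compute-4nic.yaml"
  else
    echo "/etc/os-net-config/templates/compute-2nic.yaml"
  fi
}

# On a real node, the NIC count could come from sysfs, e.g.:
#   NIC_COUNT=$(ls -d /sys/class/net/*/device | wc -l)
# and then the chosen template would be applied with:
#   os-net-config -c "$(pick_config "$NIC_COUNT")" -v
```

The same `pick_config` hook could just as easily switch on a hostname prefix (compute1-* vs compute2-*) if predictable placement is in use.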
This is implemented in OSP 10 via composable roles.