Description of problem:
When live migrating instances, all data is transferred over the deployment network, which is neither redundant nor 10G
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
1. spin up an instance
2. live migrate the instance to another compute node
3. watch traffic going over the deployment net
Actual results:
All migration traffic is going through the deployment network
Expected results:
The network used for live-migration should be configurable (or at least it should be the "management" network)
Additional info:
HA setup with rhel-osp-installer A2
Keith, where should this kind of bug be moved for future Director roadmap tracking? It is possible to configure a migration network, though it is somewhat indirect: you configure the live migration URI of each compute node so that the hostname wildcard (%s) gets a prefix/suffix that is routed through a separate network. I expect it will still require work in the director to facilitate.
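As a rough sketch of the indirect approach described above (the "-migration" hostname suffix is a hypothetical example; it assumes DNS entries that resolve to addresses on the dedicated migration network), the per-compute-node nova.conf could look like:

```ini
# /etc/nova/nova.conf on each compute node (illustrative sketch)
[libvirt]
# Nova substitutes %s with the target compute node's hostname.
# Appending a hypothetical "-migration" suffix makes the connection
# resolve to an address on a separate network reserved for migration.
live_migration_uri = qemu+tcp://%s-migration/system
```

Each compute node would need a matching DNS record (or /etc/hosts entry) for its "-migration" name on the separate network.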
There is a blueprint/spec proposed upstream to formally separate the migration traffic to a separate network:
https://review.openstack.org/#/c/194990/
It is going to land in the Mitaka release upstream at the earliest, though.
Resolved by https://bugzilla.redhat.com/show_bug.cgi?id=1428592.
The libvirt live migration socket is now tunnelled over SSH by default, and all traffic is on the internal_api network (the overcloud management network).
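The SSH-tunnelled setup described above can be sketched as follows (the user name is an assumption for illustration, not necessarily the exact value shipped by the fix):

```ini
# /etc/nova/nova.conf on each compute node (illustrative sketch)
[libvirt]
# Tunnel the libvirt migration connection over SSH as a dedicated,
# unprivileged user; the target hostname (%s) resolves on the
# internal_api network, so migration traffic stays off the
# deployment network.
live_migration_uri = qemu+ssh://nova_migration@%s/system
```

With this scheme the migration network follows hostname resolution, so no separate nova-side network option is required.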