This bug tracks adding support to Foreman for enabling live migration and resize on Compute Nodes deployed by Foreman.
This could be done either by distributing passwordless SSH root keys to the root account on all nodes (not the best solution, but good in a pinch or for PoCs) or via more sophisticated methods.
Some of the better ways to do this were outlined by dberrange:
"Somewhat related to this is that for even plain live migrate you need
to have a communication channel between libvirtd on both hosts. There
are some options for this - root ssh login, non-root ssh login + libvirtd
UNIX domain socket access, SSL + x509 certs, SASL w/ GSSAPI/Kerberos
and SSL + SASL GSSAPI/Kerberos."
--- Additional comment from Perry Myers on 2014-03-31 09:56:56 EDT ---
Adding some additional thoughts/context here...
Presently Packstack distributes only the public keys to the remote hosts, which allows the main node running packstack to talk to them. Adding the private key as well would allow passwordless ssh between all compute nodes, which would enable support for Nova migrate/resize using libvirt migration over root passwordless ssh tunnels.
NOTE: This is not a good idea for production environments, but since Packstack is meant for PoCs and demos, it is a reasonable option. Even so, we probably want distributing the private key files to be explicitly opt-in rather than enabled by default.
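A minimal sketch of what that opt-in key distribution could look like (the host list and key path are assumptions, and the remote copies are shown as dry-run echoes rather than executed):

```shell
# Hypothetical list of compute nodes; a real installer would
# derive this from its deployment inventory.
HOSTS="compute-01 compute-02 compute-03"

KEY="/root/.ssh/id_rsa"

# Generate the keypair once on the deployment host if it doesn't exist
# (dry run; drop the echo to actually generate it).
[ -f "$KEY" ] || echo ssh-keygen -t rsa -N "" -f "$KEY"

# Push BOTH halves of the keypair to every node, so any node can
# ssh to any other node as root (dry run; drop the echoes to apply).
for h in $HOSTS; do
    echo scp "$KEY" "$KEY.pub" "root@${h}:/root/.ssh/"
    echo ssh "root@${h}" "cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys"
done
```

Copying the private key to every node is exactly the step that makes this unsuitable for production: compromising any one compute node compromises root access to all of them.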
There is (as mentioned in comment #2) separate work ongoing to make it so that Nova doesn't rely on libvirt/ssh tunnels, but until that work completes we probably need to rely on this mechanism at least for Packstack.
Some additional requirements to make this useful for migrations:
In order to support cold migrations, we also need the host key of every host in the cluster installed in the nova user's known_hosts file. Otherwise, attempts to use ssh for copying image files will fail immediately because the remote host key is unknown.
Also, note that live migrations happen under the libvirt daemon's user, which appears to be root normally. That means that we need SSH key relationships (both user and host keys) to be installed for root as well.
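The host-key requirement for both the nova user and root could be satisfied with ssh-keyscan. A sketch, assuming a RHEL-style nova home directory and a hypothetical node list, with the appends shown as dry-run echoes:

```shell
# Hypothetical node list; in a real deployment this comes from the installer.
HOSTS="compute-01 compute-02 compute-03"

# known_hosts locations for the two accounts that need them:
NOVA_KH="/var/lib/nova/.ssh/known_hosts"   # nova user (cold migration / resize)
ROOT_KH="/root/.ssh/known_hosts"           # root (libvirt live migration over ssh)

# Collect every host's public host key and append it for both accounts
# (dry run; drop the echoes to actually populate the files).
for h in $HOSTS; do
    echo "ssh-keyscan $h >> $NOVA_KH"
    echo "ssh-keyscan $h >> $ROOT_KH"
done
```

This has to run on every host in the cluster (not just the deployment node), since any host may be the source of a migration.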
Further note that these keys will need to be updated on all hosts any time a new host is added to the deployment. Otherwise, migrations (especially those that choose a host via the scheduler) will appear to fail randomly whenever the sending host doesn't have the new destination host's key in known_hosts. Since new hosts are likely to be empty, and thus preferred targets for the scheduler, this is something users are likely to stumble over (add host, migrate stuff to it, fail).