Red Hat Bugzilla – Bug 1469774
VM with large volume fails to live migrate.
Last modified: 2018-06-22 08:33:40 EDT
The customer created a test VM with the exact same specs and volume size as the failing VM described above, and the test VM migrates fine between all compute nodes. So the question is why the original one does not. The customer notes that the volume on the non-migrating VM is serving IO, while the volume on the working test VM is not.
The VM that was not live migrating was halted at the GRUB menu so there was no IO; it still failed to migrate.
The customer was able to change the old volume type, which allowed the volume to work again.
The volume type of the failing volume references a backend that's no longer running, and I think this is why the messages are timing out: the DB defines a topic that no cinder service is subscribed to receive, because they've all been reconfigured to respond to the new volume type. Changing the volume type to the new value should fix it.
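As a sketch of the kind of check and fix described above, the following uses the standard OpenStack/Cinder CLI; the type names and volume ID are placeholders, not values from this case:

```shell
# List the available volume types and inspect which backend each
# maps to (volume_backend_name lives in the type's extra specs).
openstack volume type list
openstack volume type show <old-type>

# Confirm which type the stuck volume currently references.
openstack volume show <volume-id>

# Retype the volume to the type served by a running backend.
# --migration-policy never avoids triggering a data migration.
cinder retype --migration-policy never <volume-id> <new-type>
```

If the old type's `volume_backend_name` extra spec points at a backend stanza that was removed from cinder.conf, no cinder-volume service will consume the RPC topic for that backend, which matches the timeout behaviour seen here.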
The primary question I have is about the origin of the cinder.conf changes. If our deployment tool made them during an update/upgrade, then there may be a bug we need to address. If it was a manual change after the update, then documentation describing what to look out for may be the best fix.
OSP11 is now retired, see details at https://access.redhat.com/errata/product/191/ver=11/rhel---7/x86_64/RHBA-2018:1828