Bug 1469774 - Vm with large volume fails to live migrate.
Status: CLOSED EOL
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 11.0 (Ocata)
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 11.0 (Ocata)
Assigned To: Jon Bernard
QA Contact: Avi Avraham
Keywords: Triaged, Unconfirmed, ZStream
Depends On:
Blocks:
Reported: 2017-07-11 15:46 EDT by Jeremy
Modified: 2018-06-22 08:33 EDT (History)
13 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-06-22 08:33:40 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments

None
Comment 5 Jeremy 2017-07-13 09:38:14 EDT
Update:

The customer created a test VM with the exact same specs and volume size as the VM described above that fails to migrate, and the test VM migrates fine between all compute nodes. So the question is why the original one does not. The customer notes that the volume on the failing VM is serving I/O, while the volume on the test VM that migrates is not.
Comment 6 Jeremy 2017-07-17 10:30:02 EDT
Update:
The VM that fails to live migrate was left sitting at the GRUB menu so that it generated no I/O. The VM still fails to migrate.
Comment 8 Jeremy 2017-07-18 11:20:15 EDT
The customer was able to change the volume from the old volume type to the new one, which allowed the volume to work again.

https://github.com/openstack/cinder/blob/544d13ef0a9397a18af506607150b0f2c2c3752c/doc/source/admin/blockstorage-multi-backend.rst
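
For context, the linked multi-backend guide ties each volume type to a backend through the volume_backend_name extra spec. A minimal sketch of that mapping using python-cinderclient (the auth details, the type name 'lvm-new', and the backend name 'LVM_NEW' are placeholders for illustration, not values from this bug):

    from keystoneauth1 import loading, session
    from cinderclient import client

    # Authenticate against Keystone (all credentials here are placeholders).
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_name='Default', project_domain_name='Default')
    cinder = client.Client('3', session=session.Session(auth=auth))

    # Create a volume type and point it at the backend section in cinder.conf
    # whose volume_backend_name option is set to 'LVM_NEW'.
    vtype = cinder.volume_types.create(name='lvm-new')
    vtype.set_keys({'volume_backend_name': 'LVM_NEW'})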
Comment 9 Jon Bernard 2017-07-18 11:51:03 EDT
The volume type of the failing volume references a backend that is no longer running, and I think this is why the messages are timing out: the DB defines a topic that no cinder service is subscribed to, because they have all been re-configured to serve the new volume type. Changing the volume type to the new value should fix it.
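
A rough way to check this from the API, assuming admin credentials and a client set up as in the sketch above (the volume ID and type name are placeholders): compare the backend recorded on the volume with the cinder-volume services that are actually running, and retype the volume if they disagree.

    # Admin-only attribute showing which backend (and therefore which RPC
    # topic) the volume is bound to, e.g. 'hostgroup@old-backend#pool'.
    vol = cinder.volumes.get('VOLUME_ID')
    print(getattr(vol, 'os-vol-host-attr:host'))

    # Backends that still have a cinder-volume service listening:
    for svc in cinder.services.list(binary='cinder-volume'):
        print(svc.host, svc.state)

    # If the volume's type points at a backend nothing is serving any more,
    # retype it to the type that maps to a live backend ('never' means no
    # data migration is attempted).
    cinder.volumes.retype(vol, 'lvm-new', 'never')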
Comment 11 Jon Bernard 2017-08-21 14:12:03 EDT
The primary question I have is about the origin of the cinder.conf changes. If our deployment tool made them during an update/upgrade, then there may be a bug we need to address. If it was a manual change after the update, then documentation that mentions what to look out for may be the best fix.
Comment 13 Scott Lewis 2018-06-22 08:33:40 EDT
OSP11 is now retired; see details at https://access.redhat.com/errata/product/191/ver=11/rhel---7/x86_64/RHBA-2018:1828
