Bug 1248350 - Guest with maxMemory setting hangs after migration from RHEL7.2 to RHEL7.1
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: x86_64 Linux
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Peter Krempa
QA Contact: Virtualization Bugs
Reported: 2015-07-30 03:29 EDT by Fangge Jin
Modified: 2015-11-19 01:49 EST
CC: 5 users

Fixed In Version: libvirt-1.2.17-4.el7
Doc Type: Bug Fix
Last Closed: 2015-11-19 01:49:20 EST
Type: Bug

Attachments
- libvirtd log on target host (9.14 MB, text/plain), 2015-07-30 03:33 EDT, Fangge Jin
- qemu log on source host (7.64 KB, text/plain), 2015-07-30 03:36 EDT, Fangge Jin
- qemu log on target host (6.23 KB, text/plain), 2015-07-30 03:36 EDT, Fangge Jin

Comment 1 Fangge Jin 2015-07-30 03:33:05 EDT
Created attachment 1057541 [details]
libvirtd log on target host
Comment 2 Fangge Jin 2015-07-30 03:36:12 EDT
Created attachment 1057542 [details]
qemu log on source host
Comment 3 Fangge Jin 2015-07-30 03:36:56 EDT
Created attachment 1057543 [details]
qemu log on target host
Comment 5 Peter Krempa 2015-07-31 09:17:51 EDT
Upstream fix:

commit 136f3de4112c75af0b38fc1946f44e3658ed1890
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Thu Jul 30 15:27:07 2015 +0200

    qemu: Reject migration with memory-hotplug if destination doesn't support it
    
    If the destination libvirt doesn't support memory hotplug (all of that
    support was introduced by adding new XML elements), the destination
    would attempt to start qemu with an invalid configuration. Worse, qemu
    might hang in such a situation.
    
    Fix this by sending a required migration feature called 'memory-hotplug'
    to the destination. If the destination doesn't recognize it, it will
    fail the migration.
Comment 8 Fangge Jin 2015-08-31 23:04:56 EDT
Verified this bug on the following builds:

1) Source version:
libvirt-1.2.17-6.el7.x86_64

2) Target version:
libvirt-1.2.8-16.el7_1.3.x86_64

Scenario 1 (maxMemory in active XML, live migrate):
0. Prepare two hosts: source (RHEL7.2) and target (RHEL7.1)

1. Prepare a running guest on the source host with a maxMemory setting:
<domain type='kvm'>
  <name>rhel7d1</name>
  <uuid>62a79b7d-e743-4fd3-86a7-18d6a665993d</uuid>
  <maxMemory slots='16' unit='KiB'>2124288</maxMemory>
  <memory unit='KiB'>1024000</memory>
  <currentMemory unit='KiB'>1024000</currentMemory>
...

2. Migrate the guest to the target host:
# virsh migrate rhel7d1 qemu+ssh://10.66.6.6/system --verbose --live --persistent
error: internal error: Unknown migration cookie feature memory-hotplug


Scenario 2 (maxMemory in inactive XML, live migrate):
1. Prepare a running guest on the source host without a maxMemory setting.

2. Edit the guest XML and add the maxMemory element:
# virsh edit rhel7d1
...
  <maxMemory slots='16' unit='KiB'>2124288</maxMemory>
...
Domain rhel7d1 XML configuration edited.

3. Migrate the guest to the target host:
# virsh migrate rhel7d1 qemu+ssh://10.66.6.6/system --verbose --live 
error: internal error: Unknown migration cookie feature memory-hotplug

or 
# virsh migrate rhel7d1 qemu+ssh://10.66.6.6/system --verbose --live --persistent
error: internal error: Unknown migration cookie feature memory-hotplug


Scenario 3 (maxMemory in inactive XML, offline migrate):
1. Prepare a guest with this XML:
<domain type='kvm'>
  <name>rhel7d1</name>
  <uuid>62a79b7d-e743-4fd3-86a7-18d6a665993d</uuid>
  <maxMemory slots='16' unit='KiB'>2124288</maxMemory>
  <memory unit='KiB'>1024000</memory>
  <currentMemory unit='KiB'>1024000</currentMemory>
...

2. # virsh migrate rhel7d1 qemu+ssh://10.66.6.6/system --offline --persistent
error: internal error: Unknown migration cookie feature memory-hotplug


I have a question about scenario 2, step 3: when migrating without --persistent, why does libvirt also check the maxMemory setting in the inactive XML?
Comment 9 Peter Krempa 2015-09-08 09:03:44 EDT
The checking code didn't take into account that corner case. I'll post a patch upstream but I don't think it's worth backporting.
Comment 10 Fangge Jin 2015-09-08 23:08:13 EDT
(In reply to Peter Krempa from comment #9)
> The checking code didn't take into account that corner case. I'll post a
> patch upstream but I don't think it's worth backporting.

Since it's not a big issue, I will verify this bug.
Comment 11 Peter Krempa 2015-09-09 03:46:59 EDT
A patch for the corner case described above was merged upstream:

commit a98e5a78153644e0f13b34c69d60b7a866c4401a
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Tue Sep 8 15:06:26 2015 +0200

    qemu: migration: Relax enforcement of memory hotplug support
    
    If the current live definition does not have memory hotplug enabled but
    the persistent one does, libvirt would reject the migration if the
    destination does not support memory hotplug, even if the user didn't
    ask to persist the VM at the destination and thus the XML containing
    the memory hotplug definition would not be used. To fix this corner
    case, the code checks for memory hotplug in newDef only if
    VIR_MIGRATE_PERSIST_DEST was used.
Comment 13 errata-xmlrpc 2015-11-19 01:49:20 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html
