Bug 1248350 - Guest with maxMemory setting hangs after migration from RHEL7.2 to RHEL7.1
Summary: Guest with maxMemory setting hangs after migration from RHEL7.2 to RHEL7.1
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-07-30 07:29 UTC by Fangge Jin
Modified: 2015-11-19 06:49 UTC
CC List: 5 users

Fixed In Version: libvirt-1.2.17-4.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-19 06:49:20 UTC
Target Upstream Version:
Embargoed:


Attachments
libvirtd log on target host (9.14 MB, text/plain), 2015-07-30 07:33 UTC, Fangge Jin
qemu log on source host (7.64 KB, text/plain), 2015-07-30 07:36 UTC, Fangge Jin
qemu log on target host (6.23 KB, text/plain), 2015-07-30 07:36 UTC, Fangge Jin


Links
System ID: Red Hat Product Errata RHBA-2015:2202
Priority: normal
Status: SHIPPED_LIVE
Summary: libvirt bug fix and enhancement update
Last Updated: 2015-11-19 08:17:58 UTC

Comment 1 Fangge Jin 2015-07-30 07:33:05 UTC
Created attachment 1057541 [details]
libvirtd log on target host

Comment 2 Fangge Jin 2015-07-30 07:36:12 UTC
Created attachment 1057542 [details]
qemu log on source host

Comment 3 Fangge Jin 2015-07-30 07:36:56 UTC
Created attachment 1057543 [details]
qemu log on target host

Comment 5 Peter Krempa 2015-07-31 13:17:51 UTC
Upstream fix:

commit 136f3de4112c75af0b38fc1946f44e3658ed1890
Author: Peter Krempa <pkrempa>
Date:   Thu Jul 30 15:27:07 2015 +0200

    qemu: Reject migration with memory-hotplug if destination doesn't support it
    
    If the destination libvirt doesn't support memory hotplug (all of the
    support was introduced by adding new elements), the destination would
    attempt to start qemu with an invalid configuration. Worse, qemu might
    hang in that situation.
    
    Fix this by sending a required migration feature called 'memory-hotplug'
    to the destination. If the destination doesn't recognize it, it will
    fail the migration.

Comment 8 Fangge Jin 2015-09-01 03:04:56 UTC
Verified this bug on the following builds:

1)Source version:
libvirt-1.2.17-6.el7.x86_64

2)Target version:
libvirt-1.2.8-16.el7_1.3.x86_64

Scenario 1 (maxMemory in active XML, live migration):
0. Prepare two hosts: source (RHEL 7.2) and target (RHEL 7.1)

1. Prepare a running guest on the source host with a maxMemory setting:
<domain type='kvm'>
  <name>rhel7d1</name>
  <uuid>62a79b7d-e743-4fd3-86a7-18d6a665993d</uuid>
  <maxMemory slots='16' unit='KiB'>2124288</maxMemory>
  <memory unit='KiB'>1024000</memory>
  <currentMemory unit='KiB'>1024000</currentMemory>
...

2. Migrate the guest to the target host:
# virsh migrate rhel7d1 qemu+ssh://10.66.6.6/system --verbose --live --persistent
error: internal error: Unknown migration cookie feature memory-hotplug


Scenario 2 (maxMemory in inactive XML, live migration):
1. Prepare a running guest on the source host without a maxMemory setting.

2. Edit the guest XML and add the maxMemory element:
# virsh edit rhel7d1
...
  <maxMemory slots='16' unit='KiB'>2124288</maxMemory>
...
Domain rhel7d1 XML configuration edited.

3. Migrate the guest to the target host:
# virsh migrate rhel7d1 qemu+ssh://10.66.6.6/system --verbose --live 
error: internal error: Unknown migration cookie feature memory-hotplug

or 
# virsh migrate rhel7d1 qemu+ssh://10.66.6.6/system --verbose --live --persistent
error: internal error: Unknown migration cookie feature memory-hotplug


Scenario 3 (maxMemory in inactive XML, offline migration):
1. Prepare a guest with the following XML:
<domain type='kvm'>
  <name>rhel7d1</name>
  <uuid>62a79b7d-e743-4fd3-86a7-18d6a665993d</uuid>
  <maxMemory slots='16' unit='KiB'>2124288</maxMemory>
  <memory unit='KiB'>1024000</memory>
  <currentMemory unit='KiB'>1024000</currentMemory>
...

2.# virsh migrate rhel7d1 qemu+ssh://10.66.6.6/system --offline --persistent
error: internal error: Unknown migration cookie feature memory-hotplug


I have a question about scenario 2, step 3: when migrating without --persistent, why does libvirt still check the maxMemory setting in the inactive XML?

Comment 9 Peter Krempa 2015-09-08 13:03:44 UTC
The checking code didn't take into account that corner case. I'll post a patch upstream but I don't think it's worth backporting.

Comment 10 Fangge Jin 2015-09-09 03:08:13 UTC
(In reply to Peter Krempa from comment #9)
> The checking code didn't take into account that corner case. I'll post a
> patch upstream but I don't think it's worth backporting.

Since it's not a big issue, I will verify this bug.

Comment 11 Peter Krempa 2015-09-09 07:46:59 UTC
The patch for the corner case described above was merged upstream:

commit a98e5a78153644e0f13b34c69d60b7a866c4401a
Author: Peter Krempa <pkrempa>
Date:   Tue Sep 8 15:06:26 2015 +0200

    qemu: migration: Relax enforcement of memory hotplug support
    
    If the current live definition does not have memory hotplug enabled,
    but the persistent one does, libvirt would reject the migration if the
    destination does not support memory hotplug, even if the user didn't
    want to persist the VM at the destination and thus the XML containing
    the memory hotplug definition would not be used. To fix this corner
    case, the code now checks for memory hotplug in newDef only if
    VIR_MIGRATE_PERSIST_DEST was used.

Comment 13 errata-xmlrpc 2015-11-19 06:49:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

