+++ This bug was initially created as a clone of Bug #499587 +++

Created an attachment (id=342822)
dmesg from the guest, showing the e820 map

Description of problem:
I have a F-11 domU configured with the following memory parameters (full libvirt XML is attached):

  <memory>15360000</memory>
  <currentMemory>786432</currentMemory>

With this configuration in place, and the dom0 appropriately ballooned, I should be able to set the amount of memory in the guest to any number between 0 and 15GB. However, I can only balloon down below the "currentMemory" target, and back up to "currentMemory". Anything above 768M (the starting value) just fails to allocate the memory to the domU.

This appears to at least partially be a problem with the e820 map. I'll attach a full dmesg from the guest, but this snippet:

BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 00000000000a0000 (usable)
 Xen: 00000000000a0000 - 0000000000100000 (reserved)
 Xen: 0000000000100000 - 0000000001fea000 (usable)
 Xen: 0000000001fea000 - 000000000216d000 (reserved)
 Xen: 000000000216d000 - 0000000030000000 (usable)

shows that the guest only sees an e820 map that extends up to 0x30000000, which is ~768M. To be able to balloon up, the guest would need an e820 map that extends all the way up to "memory" from the above XML.

--- Additional comment from clalance on 2009-05-07 06:54:56 EDT ---

Created an attachment (id=342823)
Libvirt XML from the affected domain

--- Additional comment from clalance on 2009-05-08 11:22:53 EDT ---

Miroslav,
     Another important piece of functionality that really should work.

Chris Lalancette

--- Additional comment from fedora-triage-list on 2009-06-09 11:18:18 EDT ---

This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle. Changing version to '11'. More information and the reason for this action are here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping
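The diagnosis above can be checked mechanically. As a sketch (parse_e820 is a hypothetical helper written for illustration, not part of any Xen or libvirt tool), summing the "Xen:" e820 ranges from the guest's dmesg shows the map tops out at exactly 768 MiB, well short of the 15G "memory" target:

```python
import re

# Hypothetical helper: parse "Xen: <start> - <end> (<type>)" e820 lines
# from a guest dmesg and report total usable bytes plus the top of the map.
E820_LINE = re.compile(r"Xen:\s*([0-9a-f]+)\s*-\s*([0-9a-f]+)\s*\((\w+)\)")

def parse_e820(dmesg_lines):
    usable = top = 0
    for line in dmesg_lines:
        m = E820_LINE.search(line)
        if not m:
            continue
        start, end, kind = (int(m.group(1), 16), int(m.group(2), 16),
                            m.group(3))
        top = max(top, end)           # highest address the guest can see
        if kind == "usable":
            usable += end - start     # bytes the guest may actually use
    return usable, top

snippet = [
    "Xen: 0000000000000000 - 00000000000a0000 (usable)",
    "Xen: 00000000000a0000 - 0000000000100000 (reserved)",
    "Xen: 0000000000100000 - 0000000001fea000 (usable)",
    "Xen: 0000000001fea000 - 000000000216d000 (reserved)",
    "Xen: 000000000216d000 - 0000000030000000 (usable)",
]
usable, top = parse_e820(snippet)
print(top // (1 << 20))   # top of the map in MiB → 768
```

Since the balloon driver can only hand back or reclaim pages that appear in this map, nothing above the 0x30000000 ceiling can ever be added to the guest.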
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux major release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Major release. This request is not yet committed for inclusion.
For an update: this problem still exists upstream. I tested with Jeremy's stable branches:

  jeremy/xen/stable-2.6.32.x
  jeremy/xen/stable-2.6.33.x

Jeremy, is there another branch I should test with? Have you seen ballooning up within maxmem's limit work?

Andrew
*** Bug 608610 has been marked as a duplicate of this bug. ***
Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team. New Contents: RHEL 6.0 XenDomU guests do not support memory ballooning.
Technical note updated. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team. Diffed Contents: @@ -1 +1 @@ -RHEL 6.0 XenDomU guests do not support memory ballooning.+RHEL 6.0 paravirtualized Xen guests do not support memory ballooning.
Technical note updated. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team. Diffed Contents: @@ -1 +1 @@ -RHEL 6.0 paravirtualized Xen guests do not support memory ballooning.+Memory ballooning is not supported by Red Hat Enterprise Linux 6 paravirtualized Xen guests.
The current state upstream is described at http://lists.xensource.com/archives/html/xen-devel/2010-08/msg01468.html. This is true for rhel6 as well.
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux maintenance release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Update release for currently deployed products. This request is not yet committed for inclusion in an Update release.
See series at:

http://permalink.gmane.org/gmane.comp.emulators.xen.devel/95987
http://permalink.gmane.org/gmane.comp.emulators.xen.devel/95988
http://permalink.gmane.org/gmane.comp.emulators.xen.devel/95989
http://permalink.gmane.org/gmane.comp.emulators.xen.devel/95990
http://permalink.gmane.org/gmane.comp.emulators.xen.devel/95991
http://permalink.gmane.org/gmane.comp.emulators.xen.devel/95992
http://permalink.gmane.org/gmane.comp.emulators.xen.devel/95993
http://permalink.gmane.org/gmane.comp.emulators.xen.devel/95994
... and these patches too:

2) boot-time ballooning series

c2d08791 xen: clean up "extra" memory handling some more
d2a81713 xen: re-enable boot-time ballooning
66946f67 xen/balloon: make sure we only include remaining extra ram
2f70e0ac xen/balloon: the balloon_lock is useless
9be4d457 xen: add extra pages to balloon
419db274 x86, memblock: Fix early_node_mem with big reserved region.
2f7acb20 xen: make sure xen_max_p2m_pfn is up to date
698bb8d1 xen: limit extra memory to a certain ratio of base
b5b43ced xen: add extra pages for E820 RAM regions, even if beyond mem_end
36bc251b xen: make sure xen_extra_mem_start is beyond all non-RAM e820
42ee1471 xen: implement "extra" memory to reserve space for pages not present at boot
35ae11fd xen: Use host-provided E820 map
cfd8951e xen: don't map missing memory
fef5ba79 xen: Cope with unmapped pages when initializing kernel pagetable
It's too late to post/test this many patches for 6.1. Moving to 6.2.
Created attachment 515491 [details] 01/21 xen: release unused free memory
Created attachment 515492 [details] 02/21 xen: make sure pages are really part of domain before freeing
Created attachment 515493 [details] 03/21 xen: Rename the balloon lock
Created attachment 515494 [details] 04/21 xen: don't map missing memory
Created attachment 515496 [details] 05/21 xen: Use host-provided E820 map
Created attachment 515498 [details] 06/21 xen: implement "extra" memory to reserve space for pages not present at boot
Created attachment 515499 [details] 07/21 xen: make sure xen_extra_mem_start is beyond all non-RAM e820
Created attachment 515500 [details] 08/21 xen: add extra pages for E820 RAM regions, even if beyond mem_end
Created attachment 515502 [details] 09/21 xen: limit extra memory to a certain ratio of base
Created attachment 515503 [details] 10/21 xen: make sure xen_max_p2m_pfn is up to date
Created attachment 515504 [details] 11/21 xen: don't add extra_pages for RAM after mem_end
Created attachment 515505 [details] 12/21 xen: add extra pages to balloon
Created attachment 515506 [details] 13/21 xen/balloon: make sure we only include remaining extra ram
Created attachment 515507 [details] 14/21 xen/balloon: the balloon_lock is useless
Created attachment 515508 [details] 15/21 xen: clean up "extra" memory handling some more
Created attachment 515510 [details] 16/21 xen: Mark all initial reserved pages for the balloon as INVALID_P2M_ENTRY.
Created attachment 515511 [details] 17/21 xen/balloon: Removal of driver_pages
Created attachment 515512 [details] 18/21 xen/balloon: Use PageHighMem() for high memory page detection
Created attachment 515514 [details] 19/21 xen/balloon: Move dec_totalhigh_pages() from __balloon_append() to balloon_append()
Created attachment 515515 [details] 20/21 xen: prevent crashes with non-HIGHMEM 32-bit kernels with largeish memory
Created attachment 515517 [details] 21/21 xen: x86_32: Ignore not present at boot time HIGHMEM pages
Patch(es) available on kernel-2.6.32-176.el6
Tested with a RHEL6.2 x86_64 PV guest (kernel 2.6.32-191.el6). The test now fails on Intel W3520 and Intel Q9400, and passes on AMD B95 and AMD 2427.

On the Intel platforms:
 Set memory below the initial starting memory - pass
 Set memory above the initial starting memory - fail

On the AMD platforms:
 Set memory below the initial starting memory - pass
 Set memory above the initial starting memory - pass
(In reply to comment #54)
> test with RHEL6.2 x86_64 pv guest, (Kernel 2.6.32-191.el6), now failed in Intel
> W3520 and Intel Q9400, Pass in amd B95 and amd 2427.
>
> In Intel platform:
> Set memory below the initial starting memory - pass
> Set memory above the initial starting memory -- fail

Can you provide access to one of the boxes where the test fails?
The guest wasn't provided with enough memory:

--------------------
Aug 23 05:42:54 virtlab-66-84-79 kernel: BIOS-provided physical RAM map:
Aug 23 05:42:54 virtlab-66-84-79 kernel: Xen: 0000000000000000 - 00000000000a0000 (usable)
Aug 23 05:42:54 virtlab-66-84-79 kernel: Xen: 00000000000a0000 - 0000000000100000 (reserved)
Aug 23 05:42:54 virtlab-66-84-79 kernel: Xen: 0000000000100000 - 0000000040800000 (usable)
Aug 23 05:42:54 virtlab-66-84-79 kernel: DMI not present or invalid.
Aug 23 05:42:54 virtlab-66-84-79 kernel: last_pfn = 0x40800 max_arch_pfn = 0x400000000
---------------------

0x40800 pages is roughly 1032MB, and this is the host's problem. If you execute xm info on this host you'll see:

total_memory : 8125
free_memory : 1035

If you balloon down dom0 to free enough memory, the guest will be provided with the correct map and you'll be able to balloon it up and down. Something like this:

# xm mem-set 0 4096
# xm info|grep mem
total_memory : 8125
free_memory : 3823

Now you can boot the guest and verify whether ballooning works.

PS: Maybe you should also compare the versions of the xen tools used on this box and on the amd box, as well as the free_memory param on both boxes.
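The back-of-envelope conversion above (last_pfn 0x40800 is roughly 1032MB) is just the page frame number times the 4 KiB x86 page size. A quick sketch (pfn_to_mib is an illustrative helper, not a real kernel or Xen function):

```python
def pfn_to_mib(pfn, page_size=4096):
    # A page frame number counts 4 KiB pages on x86, so the top of the
    # guest's pseudo-physical memory is pfn * page_size bytes.
    return pfn * page_size // (1 << 20)

print(pfn_to_mib(0x40800))   # → 1032, matching the "roughly 1032MB" above
print(pfn_to_mib(0x30000))   # → 768, the ceiling seen in the original report
```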
Hello Igor,

All conditions pass after enabling 'auto-balloon-dom0' in xend-config.sxp. Thanks for your help.

Yuyu Zhou
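For reference, a sketch of the relevant xend-config.sxp settings. The exact option names vary across xend versions ('auto-balloon-dom0' as cited above vs. 'enable-dom0-ballooning' in some releases), so treat the names below as illustrative and check the comments in your installed xend-config.sxp for the spelling your version uses:

```
# Allow xend to balloon dom0 down automatically when a guest needs memory.
# Option naming differs across xend versions; verify against your installed
# xend-config.sxp before relying on this exact spelling.
(enable-dom0-ballooning yes)

# Never balloon dom0 below this many MiB.
(dom0-min-mem 1024)
```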
Hi Yuyu, (In reply to comment #58) > Pass in all conditions after enable 'auto-balloon-dom0' in xend-config.sxp would it be appropriate to set this BZ to VERIFIED then? Thanks!
Verified with a RHEL6.2 x86_64 PV guest (kernel 2.6.32-191.el6).

On the Intel platform:
 Set memory below the initial starting memory - pass
 Set memory above the initial starting memory - pass

On the AMD platform:
 Set memory below the initial starting memory - pass
 Set memory above the initial starting memory - pass
Deleted Technical Notes Contents. Old Contents: Memory ballooning is not supported by Red Hat Enterprise Linux 6 paravirtualized Xen guests.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2011-1530.html