Bug 1353591 - maxmemory limits may depend on host hardware
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: General
Version: future
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: ovirt-4.4.0
Target Release: ---
Assignee: Michal Skrivanek
QA Contact:
URL:
Whiteboard:
Keywords:
Depends On:
Blocks:
 
Reported: 2016-07-07 13:57 UTC by Dr. David Alan Gilbert
Modified: 2019-03-20 14:15 UTC (History)
7 users

Clone Of:
Last Closed: 2019-03-20 12:51:05 UTC
pm-rhel: ovirt-4.4?
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?



Description Dr. David Alan Gilbert 2016-07-07 13:57:43 UTC
Description of problem:

i7s and Xeon E3s have physical address limits of 64GB or 512GB depending on the model; single-socket AMD chips may have 1TB limits. The current 4000GB default is therefore potentially problematic. It's not a problem for most data centre machines (Xeon E5/E7, multi-socket Opteron), since those have a limit of at least 64TB.

The maximum guest memory setting gets turned into a QEMU maxmem limit; this in turn can change the guest memory mapping, even in cases where only a small amount of RAM is actually given to the guest.

The space above maxmem is used for 64-bit PCI mappings. Those are rare at the moment, but:
   a) virtio-1.0 (coming real soon to QEMU) defaults to 64-bit PCI BARs.
   b) OVMF (aka EFI) maps those high up in the address space, after maxmem.

So you might start to hit problems when (a) and (b) come together on an i7 or an E3.


Comment 1 Michal Skrivanek 2016-07-08 05:10:28 UTC
Are you aware of any programmatic way to get those limits? We do have an exception today for POWER8, but differentiating between Intel CPUs would need a more fine-grained list, and I'd rather not build one into oVirt if there is another way.

Comment 2 Dr. David Alan Gilbert 2016-07-08 08:59:28 UTC
(In reply to Michal Skrivanek from comment #1)
> Are you aware of any programmable way how to get those limits? We do have an
> exception today for POWER8, but to differentiate between Intel CPUs would
> need more finegrained list, and I'd rather not build one into oVirt if there
> is another way.


Hi Michal,
  Yes, the limits are in /proc/cpuinfo; for example, my laptop has:

address sizes	: 36 bits physical, 48 bits virtual

while on a Xeon E5 server we have:

address sizes	: 46 bits physical, 48 bits virtual


I'd say it's probably safe to set a maxmem limit of half the physical address
space:
    2^36 = 64GB, so maxmem=32GB should be safe.

For the Xeon it's
    2^46 = 64TB, so maxmem=32TB should be safe.

There are 39-, 40-, 46- and 48-bit boxes around; all of the non-basic Xeons
are 46-bit, so in almost all normal deployment cases that's what you'll see and you don't have to worry about it.
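The rule of thumb above can be sketched in a few lines. This is an illustrative helper, not oVirt code; the function name `safe_maxmem_bytes` is hypothetical, and it simply parses the `address sizes` line and halves the physical address space:

```python
import re

def safe_maxmem_bytes(cpuinfo_text):
    """Parse 'address sizes' from /proc/cpuinfo text and return a maxmem
    guess of half the physical address space (hypothetical helper)."""
    m = re.search(r"address sizes\s*:\s*(\d+)\s+bits physical", cpuinfo_text)
    if m is None:
        return None  # e.g. POWER8: no 'address sizes' line in /proc/cpuinfo
    phys_bits = int(m.group(1))
    return 2 ** (phys_bits - 1)

GiB = 2 ** 30
# 36-bit laptop: 2^36 = 64GB of address space -> maxmem = 32GB
print(safe_maxmem_bytes("address sizes\t: 36 bits physical, 48 bits virtual") // GiB)  # 32
# 46-bit Xeon E5: 2^46 = 64TB -> maxmem = 32TB = 32768GB
print(safe_maxmem_bytes("address sizes\t: 46 bits physical, 48 bits virtual") // GiB)  # 32768
```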

Dave



Comment 3 Michal Skrivanek 2016-07-08 09:14:01 UTC
Thanks! We would use the upper bound of 4TB (it was increased from 4000GB to 4096GB for RHEL7 in bug 1300145), and still keep the 2TB for POWER8 (note: no "address sizes" line in /proc/cpuinfo there).

Comment 4 Michal Skrivanek 2016-12-21 09:09:42 UTC
The bug was not addressed in time for 4.1. Postponing to 4.2

Comment 5 Michal Skrivanek 2017-08-22 07:57:45 UTC
We set 4x the initial requested memory size now, so we shouldn't exceed the addressable space unless a high number is entered manually.

Comment 6 Laszlo Ersek 2018-03-26 12:36:38 UTC
(Coming here from bug 1560453.)

(In reply to Dr. David Alan Gilbert from comment #2)
> I'd say it's probably safe to set a maxmem limit of half the physical address
> size;
>     2^36=64GB so maxmem=32GB should be safe.

The "half" formula could work with higher RAM amounts, but with 64GB/32GB exactly, it won't work. Please see bug 1560453 comment 6.

Comment 7 Dr. David Alan Gilbert 2018-03-26 12:58:34 UTC
(In reply to Laszlo Ersek from comment #6)
> (Coming here from bug 1560453.)
> 
> (In reply to Dr. David Alan Gilbert from comment #2)
> > I'd say it's probably safe to set a maxmem limit of half the physical address
> > size;
> >     2^36=64GB so maxmem=32GB should be safe.
> 
> The "half" formula could work with higher RAM amounts, but with 64GB/32GB
> exactly, it won't work. Please see bug 1560453 comment 6.

Oh, that's kind of nasty; I don't really understand how that MMIO aperture works or what its size/alignment requirements are - can you explain?

Comment 8 Laszlo Ersek 2018-03-26 14:10:08 UTC
Sure, I can attempt :) The function to look at is GetFirstNonAddress() in
"OvmfPkg/PlatformPei/MemDetect.c". I'll try to write it up here in natural
language (although I commented the function heavily as well).

As an introduction, the "number of address bits" is a quantity that the
firmware itself needs to know, so that in the DXE phase page tables exist
that actually map that address space. The GetFirstNonAddress() function (in
the PEI phase) calculates the highest *exclusive* address that the firmware
might want or need to use (in the DXE phase).

(1) First we get the highest exclusive cold-plugged RAM address. (There are
two methods for this, the more robust one is to read QEMU's E820 map, the
older / less robust one is to calculate it from the CMOS.) If the result
would be <4GB, then we take exactly 4GB from this step, because the firmware
always needs to be able to address up to 4GB. Note that this is already
somewhat non-intuitive; for example, if you have 4GB of RAM (as in,
*amount*), it will go up to 6GB in the guest-phys address space (because
[0x8000_0000..0xFFFF_FFFF] is not RAM but MMIO on q35).

(2) If the DXE phase is 32-bit, then we're done. (No addresses >=4GB can be
accessed, either for RAM or MMIO.) For RHEL this is never the case.

(3) Grab the size of the 64-bit PCI MMIO aperture. This defaults to 32GB,
but a custom (OVMF specific) fw_cfg file (from the QEMU command line) can
resize it or even disable it. This aperture is relevant because it's going
to be the top of the address space that the firmware is interested in. If
the aperture is disabled (on the QEMU cmdline), then we're done, and only
the value from point (1) matters -- that determines the address width we
need.

(4) OK, so we have a 64-bit PCI MMIO aperture (for allocating BARs out of,
later); we have to place it somewhere. The base cannot match the value from
(1) directly, because that would not leave room for the DIMM hotplug area.
So the end of that area is read from the fw_cfg file
"etc/reserved-memory-end". DIMM hotplug is enabled iff
"etc/reserved-memory-end" exists. If "etc/reserved-memory-end" exists, then
it is guaranteed to be larger than the value from (1) -- i.e., top of
cold-plugged RAM.

(5) We round up the size of the 64-bit PCI aperture to 1GB. We also round up
the base of the same -- i.e., from (4) or (1), as appropriate -- to 1GB.
This is inspired by SeaBIOS, because this lets the host map the aperture
with 1GB hugepages.

(6) The base address of the aperture is then rounded up so that it ends up
aligned "naturally". "Natural" alignment means that we take the largest
whole power of two (i.e., BAR size) that can fit *within* the aperture
(whose size comes from (3) and (5)) and use that BAR size as alignment
requirement. This is because the PciBusDxe driver sorts the BARs in
decreasing size order (and equivalently, decreasing alignment order), for
allocation in increasing address order, so if our aperture base is aligned
sufficiently for the largest BAR that can theoretically fit into the
aperture, then the base will be aligned correctly for *any* other BAR that
fits.

For example, if you have a 32GB aperture size, then the largest BAR that can
fit is 32GB, so the alignment requirement in step (6) will be 32GB. Whereas,
if the user configures a 48GB aperture size in (3), then your alignment will
remain 32GB in step (6), because a 64GB BAR would not fit, and a 32GB BAR
(which fits) dictates a 32GB alignment.

Thus we have the following "ladder" of ranges:

(a) cold-plugged RAM (low, <2GB)
(b) 32-bit PCI MMIO aperture, ECAM/MMCONFIG, APIC, pflash, etc (<4GB)
(c) cold-plugged RAM (high, >=4GB)
(d) DIMM hot-plug area
(e) padding up to 1GB alignment (for hugepages)
(f) padding up to the natural alignment of the 64-bit PCI MMIO aperture size
   (32GB by default)
(g) 64-bit PCI MMIO aperture

To my understanding, "maxmem" determines the end of (d). And, the address
width is dictated by the end of (g).

Two more examples.

- If you have 36 phys address bits, that doesn't let you use maxmem=32G.
  This is because maxmem=32G puts the end of the DIMM hotplug area (d)
  strictly *above* 32GB (due to the "RAM gap" (b)), and then the padding (f)
  places the 64-bit PCI MMIO aperture at 64GB. So 36 phys address bits don't
  suffice.

- On the other hand, if you have 37 phys address bits, that *should* let you
  use maxmem=64G. While the DIMM hot-plug area will end strictly above 64GB,
  the 64-bit PCI MMIO aperture (of size 32GB) can be placed at 96GB, so it
  will all fit into 128GB (i.e. 37 address bits).

Sorry if this is confusing, I got very little sleep last night.
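The ladder and the two examples above can be checked with a small arithmetic model. This is a sketch of steps (3)-(6) only, not the firmware code; the hotplug-end inputs (34GB, 66GB) are illustrative stand-ins for "end of (d) strictly above 32GB / 64GB":

```python
GiB = 2 ** 30

def round_up(x, align):
    return (x + align - 1) // align * align

def natural_alignment(aperture_size):
    # Step (6): largest power-of-two BAR size that fits *within* the aperture.
    return 1 << (aperture_size.bit_length() - 1)

def first_non_address(hotplug_end, aperture_size=32 * GiB):
    # Steps (3)-(6): round the aperture size and base up to 1GB, round the
    # base up to the natural alignment, and return the exclusive end of (g).
    aperture_size = round_up(aperture_size, GiB)
    base = round_up(hotplug_end, GiB)
    base = round_up(base, natural_alignment(aperture_size))
    return base + aperture_size

def phys_bits(first_non_addr):
    # Address width needed to reach the exclusive end of range (g).
    return (first_non_addr - 1).bit_length()

# A 48GB aperture still only gets 32GB alignment (a 64GB BAR wouldn't fit):
print(natural_alignment(48 * GiB) // GiB)      # 32

# maxmem=32G: the "RAM gap" (b) pushes the end of (d) strictly above 32GB,
# so the aperture lands at 64GB and ends at 96GB -> 36 bits don't suffice:
print(phys_bits(first_non_address(34 * GiB)))  # 37

# maxmem=64G: (d) ends just above 64GB, the aperture fits at 96GB,
# and everything ends at 128GB, i.e. 37 bits:
print(phys_bits(first_non_address(66 * GiB)))  # 37
```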

Comment 9 Dr. David Alan Gilbert 2018-03-26 14:42:47 UTC
(In reply to Laszlo Ersek from comment #8)
> Sure, I can attempt :) The function to look at is GetFirstNonAddress() in
> "OvmfPkg/PlatformPei/MemDetect.c". I'll try to write it up here in natural
> language (although I commented the function heavily as well).
> 
> As an introduction, the "number of address bits" is a quantity that the
> firmware itself needs to know, so that in the DXE phase page tables exist
> that actually map that address space. The GetFirstNonAddress() function (in
> the PEI phase) calculates the highest *exclusive* address that the firmware
> might want or need to use (in the DXE phase).
> 
> (1) First we get the highest exclusive cold-plugged RAM address. (There are
> two methods for this, the more robust one is to read QEMU's E820 map, the
> older / less robust one is to calculate it from the CMOS.) If the result
> would be <4GB, then we take exactly 4GB from this step, because the firmware
> always needs to be able to address up to 4GB. Note that this is already
> somewhat non-intuitive; for example, if you have 4GB of RAM (as in,
> *amount*), it will go up to 6GB in the guest-phys address space (because
> [0x8000_0000..0xFFFF_FFFF] is not RAM but MMIO on q35).

The 4GB giving 6GB doesn't surprise me for the MMIO hole; the <4GB giving 4GB is more fun. Doesn't 3.9999GB also need 5.9999GB, because it also has to leave the MMIO hole?

> (2) If the DXE phase is 32-bit, then we're done. (No addresses >=4GB can be
> accessed, either for RAM or MMIO.) For RHEL this is never the case.
> 
> (3) Grab the size of the 64-bit PCI MMIO aperture. This defaults to 32GB,
> but a custom (OVMF specific) fw_cfg file (from the QEMU command line) can
> resize it or even disable it. This aperture is relevant because it's going
> to be the top of the address space that the firmware is interested in. If
> the aperture is disabled (on the QEMU cmdline), then we're done, and only
> the value from point (1) matters -- that determines the address width we
> need.

OK, the 32GB PCI MMIO aperture is the first thing I hadn't realised existed and/or why it was 32GB; but OK, it makes sense, especially with big devices these days.
 
> (4) OK, so we have a 64-bit PCI MMIO aperture (for allocating BARs out of,
> later); we have to place it somewhere. The base cannot match the value from
> (1) directly, because that would not leave room for the DIMM hotplug area.
> So the end of that area is read from the fw_cfg file
> "etc/reserved-memory-end". DIMM hotplug is enabled iff
> "etc/reserved-memory-end" exists. If "etc/reserved-memory-end" exists, then
> it is guaranteed to be larger than the value from (1) -- i.e., top of
> cold-plugged RAM.

Yes, that makes sense.
 
> (5) We round up the size of the 64-bit PCI aperture to 1GB. We also round up
> the base of the same -- i.e., from (4) or (1), as appropriate -- to 1GB.
> This is inspired by SeaBIOS, because this lets the host map the aperture
> with 1GB hugepages.

Yep.
 
> (6) The base address of the aperture is then rounded up so that it ends up
> aligned "naturally". "Natural" alignment means that we take the largest
> whole power of two (i.e., BAR size) that can fit *within* the aperture
> (whose size comes from (3) and (5)) and use that BAR size as alignment
> requirement. This is because the PciBusDxe driver sorts the BARs in
> decreasing size order (and equivalently, decreasing alignment order), for
> allocation in increasing address order, so if our aperture base is aligned
> sufficiently for the largest BAR that can theoretically fit into the
> aperture, then the base will be aligned correctly for *any* other BAR that
> fits.
> 
> For example, if you have a 32GB aperture size, then the largest BAR that can
> fit is 32GB, so the alignment requirement in step (6) will be 32GB. Whereas,
> if the user configures a 48GB aperture size in (3), then your alignment will
> remain 32GB in step (6), because a 64GB BAR would not fit, and a 32GB BAR
> (which fits) dictates a 32GB alignment.

OK, that's pretty unfortunate; I'm assuming absolutely huge BARs are pretty rare.
 
> Thus we have the following "ladder" of ranges:
> 
> (a) cold-plugged RAM (low, <2GB)
> (b) 32-bit PCI MMIO aperture, ECAM/MMCONFIG, APIC, pflash, etc (<4GB)
> (c) cold-plugged RAM (high, >=4GB)
> (d) DIMM hot-plug area
> (e) padding up to 1GB alignment (for hugepages)
> (f) padding up to the natural alignment of the 64-bit PCI MMIO aperture size
>    (32GB by default)
> (g) 64-bit PCI MMIO aperture
> 
> To my understanding, "maxmem" determines the end of (d). And, the address
> width is dictated by the end of (g).
> 
> Two more examples.
> 
> - If you have 36 phys address bits, that doesn't let you use maxmem=32G.
>   This is because maxmem=32G puts the end of the DIMM hotplug area (d)
>   strictly *above* 32GB (due to the "RAM gap" (b)), and then the padding (f)
>   places the 64-bit PCI MMIO aperture at 64GB. So 36 phys address bits don't
>   suffice.
> 
> - On the other hand, if you have 37 phys address bits, that *should* let you
>   use maxmem=64G. While the DIMM hot-plug area will end strictly above 64GB,
>   the 64-bit PCI MMIO aperture (of size 32GB) can be placed at 96GB, so it
>   will all fit into 128GB (i.e. 37 address bits).

Yep, that makes sense.
So then the question is: what's the maximum RAM size you can actually use with 36 physical address bits? 36 bits = 64GB, so let's assume we have your PCI MMIO space taking the top half of that; then we've got to make sure the end of the hotplug area doesn't go above the 32GB line, so with a 2GB gap that probably means 30GB of RAM?

I guess the other solution here for hosts with a small address space is to reduce the MMIO BAR space; e.g. if the BAR space were MIN(32GB, phys-address-space/4), then the maximum memory space could be phys-address-space/2.
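As a sketch of that suggestion (an idea floated here, not anything implemented; `scaled_aperture` is a hypothetical name):

```python
GiB = 2 ** 30

def scaled_aperture(phys_bits):
    """Shrink the 64-bit PCI MMIO aperture on small-address-space hosts:
    cap it at a quarter of the physical address space, leaving roughly
    half the space usable for guest RAM (illustrative only)."""
    space = 1 << phys_bits
    return min(32 * GiB, space // 4)

print(scaled_aperture(36) // GiB)  # 16 -- a 36-bit i7/E3 gets a 16GB aperture
print(scaled_aperture(46) // GiB)  # 32 -- a 46-bit Xeon keeps the full 32GB
```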

> 
> Sorry if this is confusing, I got very little sleep last night.

Don't be, it made sense.

Dave

Comment 10 Laszlo Ersek 2018-03-26 16:18:17 UTC
(In reply to Dr. David Alan Gilbert from comment #9)

> The 4GB giving 6GB doesn't surprise me for the MMIO hole; the <4GB giving
> 4GB is more fun; Doesn't 3.9999GB also need 5.9999GB because it also has
> to leave the MMIO hole?

Yes, that's a hard split; please see the following two messages:

  http://mid.mail-archive.com/1457340448.25423.43.camel@redhat.com
  http://mid.mail-archive.com/1457343522.25423.77.camel@redhat.com

from the discussion of the following patch (from two years ago):

  [edk2] [PATCH 2/5] OvmfPkg: PlatformPei: enable PCIEXBAR
                     (aka MMCONFIG / ECAM) on Q35

>> For example, if you have a 32GB aperture size, then the largest BAR that
>> can fit is 32GB, so the alignment requirement in step (6) will be 32GB.
>> Whereas, if the user configures a 48GB aperture size in (3), then your
>> alignment will remain 32GB in step (6), because a 64GB BAR would not fit,
>> and a 32GB BAR (which fits) dictates a 32GB alignment.
>
> OK, that's pretty unfortunate; I'm assuming absolutely huge BARs are
> pretty rare.

They are rare, but they exist. This is an upstream OVMF bug report from
March 2016:

  https://github.com/tianocore/edk2/issues/59 (old, dead URL)
  https://bugzilla.tianocore.org/show_bug.cgi?id=80 (relocated URL)

The NVIDIA Tesla K80 that the reporter used needed 24GB for its MMIO BARs in
total (one 16GB BAR and another 8GB BAR). I wanted to go one power higher,
and Gerd liked 32GB too.

Of course exposing the aperture size in the domain XML would be the most
flexible, but users wouldn't have any idea what to put there :) So 32GB
looked like a reasonable default. (If it's really necessary, it can be
overridden with "-fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=4096", for a 4GB
aperture e.g.)

> So then the question is what's the maximum RAM size you can actually do
> with 36 phys address bits?  36bits=64GB, so lets assume we have your PCI
> mmio space taking the top half of that, then we've got to make sure the
> end of the hotplug area doesn't go above the 32GB line, so with a 2GB gap,
> that probably means 30GB of RAM?

OK, I wanted to see this exactly, and I think "maxmem" could be offset by
4GB relative to what I expected. Because I "bisected" a bunch of maxmem
values, and

  -m 2048,slots=4,maxmem=26624M

(i.e., exactly 26GB) is where the address width calculation yields 36 bits
for the last time. If I add just another meg, the width jumps to 37 bits. I
guess if we wanted to know exactly, we'd have to check the precise effect of
"maxmem" on the "etc/reserved-memory-end" fw_cfg file, in QEMU.
pc_memory_init() does some black magic around "maxram_size". :) I guess Igor
could explain in detail.

> I guess the other solution here for hosts with small address space is to
> reduce the MMIO bar space; e.g. if the bar space was MIN(32GB,
> phys-address-space/4) then the maximum memory space could be
> phys-address-space/2.

Right, reducing that aperture is not a bad idea and already possible; the
current name "X-PciMmio64Mb" is experimental though. I put it there until we
figured something better.

Comment 11 Dr. David Alan Gilbert 2018-03-26 16:39:33 UTC
(In reply to Laszlo Ersek from comment #10)

> > So then the question is what's the maximum RAM size you can actually do
> > with 36 phys address bits?  36bits=64GB, so lets assume we have your PCI
> > mmio space taking the top half of that, then we've got to make sure the
> > end of the hotplug area doesn't go above the 32GB line, so with a 2GB gap,
> > that probably means 30GB of RAM?
> 
> OK, I wanted to see this exactly, and I think "maxmem" could be offset by
> 4GB relative to what I expected. Because I "bisected" a bunch of maxmem
> values, and
> 
>   -m 2048,slots=4,maxmem=26624M
> 
> (i.e., exactly 26GB) is where the address width calculation yields 36 bits
> for the last time. If I add just another meg, the width jumps to 37 bits. I
> guess if we wanted to know exactly, we'd have to check the precise effect of
> "maxmem" on the "etc/reserved-memory-end" fw_cfg file, in QEMU.
> pc_memory_init() does some black magic around "maxram_size". :) I guess Igor
> could explain in detail.

Hmm OK, that's 4GB less than I thought it would be; I wonder where that went :-)
I was expecting:
   0  2GB RAM
 2GB  2GB gap
 4GB  28GB RAM
32GB  32GB MMIO space

but I guess perhaps the BIOS and friends have to fit somewhere; I'd expected them to be in that 2GB gap.
 
> > I guess the other solution here for hosts with small address space is to
> > reduce the MMIO bar space; e.g. if the bar space was MIN(32GB,
> > phys-address-space/4) then the maximum memory space could be
> > phys-address-space/2.
> 
> Right, reducing that aperture is not a bad idea and already possible; the
> current name "X-PciMmio64Mb" is experimental though. I put it there until we
> figured something better.

Comment 12 Laszlo Ersek 2018-03-27 09:09:33 UTC
(In reply to Dr. David Alan Gilbert from comment #11)
> I was expecting:
>    0  2GB RAM
>  2GB  2GB gap
>  4GB  28GB RAM
> 32GB  32GB MMIO space

I would expect the same :) I can only repeat that I think the magic happens in pc_memory_init() in QEMU: I just added a debug message to OVMF to print "etc/reserved-memory-end" after it is read from fw_cfg (pls. see point (4) in comment 8 etc). And, with the option

  -m 2048,slots=4,maxmem=26624M

the log contains 0x8_0000_0000 (32GB), so the transition from 26GB to 32GB occurs in QEMU.

> but I guess perhaps the BIOS and friends have to fit somewhere; I'd expected
> them to be in that 2GB gap.

That's correct, the pflash chips are mapped just below 4GB in the guest-phys address space.

Comment 13 Dr. David Alan Gilbert 2018-03-27 15:51:57 UTC
(In reply to Laszlo Ersek from comment #12)
> (In reply to Dr. David Alan Gilbert from comment #11)
> > I was expecting:
> >    0  2GB RAM
> >  2GB  2GB gap
> >  4GB  28GB RAM
> > 32GB  32GB MMIO space
> 
> I would expect the same :) I can only repeat that I think the magic happens
> in pc_memory_init() in QEMU: I just added a debug message to OVMF to print
> "etc/reserved-memory-end" after it is read from fw_cfg (pls. see point (4)
> in comment 8 etc). And, with the option
> 
>   -m 2048,slots=4,maxmem=26624M
> 
> the log contains 0x8_0000_0000 (32GB), so the transition from 26GB to 32GB
> occurs in QEMU.

Does the 26GB change if you change the slots= ?

I see pc_memory_init has:
        if (pcmc->enforce_aligned_dimm) {
            /* size hotplug region assuming 1G page max alignment per slot */
            hotplug_mem_size += (1ULL << 30) * machine->ram_slots;
        }

so with 4 slots, perhaps that explains where our missing 4GB went?

Comment 14 Laszlo Ersek 2018-03-28 11:41:55 UTC
(
[edk2] [PATCH] OvmfPkg/PlatformPei: debug log "etc/reserved-memory-end" from
               fw_cfg
http://mid.mail-archive.com/20180328113857.22788-1-lersek@redhat.com
https://lists.01.org/pipermail/edk2-devel/2018-March/023256.html
)

(In reply to Dr. David Alan Gilbert from comment #13)
> (In reply to Laszlo Ersek from comment #12)
> > (In reply to Dr. David Alan Gilbert from comment #11)
> > > I was expecting:
> > >    0  2GB RAM
> > >  2GB  2GB gap
> > >  4GB  28GB RAM
> > > 32GB  32GB MMIO space
> > 
> > I would expect the same :) I can only repeat that I think the magic happens
> > in pc_memory_init() in QEMU: I just added a debug message to OVMF to print
> > "etc/reserved-memory-end" after it is read from fw_cfg (pls. see point (4)
> > in comment 8 etc). And, with the option
> > 
> >   -m 2048,slots=4,maxmem=26624M
> > 
> > the log contains 0x8_0000_0000 (32GB), so the transition from 26GB to 32GB
> > occurs in QEMU.
> 
> Does the 26GB change if you change the slots= ?
> 
> I see pc_memory_init has:
>         if (pcmc->enforce_aligned_dimm) {
>             /* size hotplug region assuming 1G page max alignment per slot */
>             hotplug_mem_size += (1ULL << 30) * machine->ram_slots;
>         }
> 
> so with 4 slots, perhaps that explains where our missing 4GB went?

Yes, that's a good catch. I tested the following options:

  -m 2048,slots=4,maxmem=26624M
  -m 2048,slots=3,maxmem=26624M
  -m 2048,slots=2,maxmem=26624M
  -m 2048,slots=1,maxmem=26624M

and correspondingly, OVMF logged

  GetFirstNonAddress: HotPlugMemoryEnd=0x800000000
  GetFirstNonAddress: HotPlugMemoryEnd=0x7C0000000
  GetFirstNonAddress: HotPlugMemoryEnd=0x780000000
  GetFirstNonAddress: HotPlugMemoryEnd=0x740000000
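Those logged values can be reproduced with a rough model of the QEMU sizing quoted in comment 13. This assumes all initial RAM sits below the 4GB gap (true for -m 2048) and that the hotplug region starts at 4GB; it is a sketch, not QEMU's actual pc_memory_init() logic:

```python
GiB = 2 ** 30
MiB = 2 ** 20

def hotplug_memory_end(ram_mib, maxmem_mib, slots):
    """Model of the hotplug region end: base at 4GB, sized as
    (maxmem - ram) plus 1GB of worst-case DIMM alignment per slot
    (the enforce_aligned_dimm term quoted above)."""
    base = 4 * GiB
    size = (maxmem_mib - ram_mib) * MiB + slots * GiB
    return base + size

# -m 2048,slots=N,maxmem=26624M for N = 4, 3, 2, 1:
for slots in (4, 3, 2, 1):
    print(hex(hotplug_memory_end(2048, 26624, slots)))
# 0x800000000, 0x7c0000000, 0x780000000, 0x740000000 -- matching the OVMF log
```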

Comment 15 Ryan Barry 2019-01-21 14:54:07 UTC
Re-targeting to 4.3.1 since it is missing a patch, an acked blocker flag, or both

Comment 16 Ryan Barry 2019-03-20 12:51:05 UTC
This was updated to set the limit to quadruple the size of the physical memory installed on the host, but it is configurable.

Comment 17 Dr. David Alan Gilbert 2019-03-20 14:15:04 UTC
That's an interesting heuristic, but I think you'll hit cases where it breaks.
On a fully loaded host (not that unusual for big hypervisors), 4x RAM could go over
the physical address limit of the CPU, and that's before the guest has done all its padding and PCI
space craziness as described in comment 8 onwards.

The older machines (Xeon E3s/i7s or laptops) with 36-bit physical addresses are the most
likely to break.

