Bug 1475656 - When the guest's memory is set anywhere from the default up to 2G and the guest is configured with 240 CPUs, exceeding the 16 recommended on Power, the guest boots but immediately hits a kernel panic
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.4-Alt
Hardware: ppc64le Linux
Severity: high
Priority: high
Target Milestone: rc
Assigned To: David Gibson
Reporter: Min Deng
Blocks: 1440030
Reported: 2017-07-27 02:05 EDT by Min Deng
Modified: 2017-08-06 06:23 EDT
Doc Type: If docs needed, set a value
Last Closed: 2017-08-02 21:19:32 EDT
Type: Bug

External Trackers:
IBM Linux Technology Center 157329 (last updated 2017-08-06 06:23 EDT)

Description Min Deng 2017-07-27 02:05:28 EDT
Description of problem:
When the guest's memory was set anywhere from the default up to 2G and the guest was configured with 240 CPUs, which exceeds the 16 recommended on Power, the guest could boot but immediately hit a kernel panic.

Version-Release number of selected component (if applicable):
kernel-4.11.0-14.el7a.ppc64le - host
kernel-4.11.0-16.el7a.ppc64le - guest
qemu-kvm-2.9.0-18.el7a.ppc64le

How reproducible:
3/3
Steps to Reproduce:
1.boot up guest with the following cli,
  /usr/libexec/qemu-kvm -name mdeng -sandbox off -machine pseries-rhel7.4.0 -nodefaults -vga none -chardev socket,id=serial_id_serial0,path=/tmp/tt,server,nowait -device spapr-vty,reg=0x30000000,chardev=serial_id_serial0 -device nec-usb-xhci,id=usb1,bus=pci.0,addr=0x3 -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 -drive id=drive_image1,if=none,snapshot=off,aio=native,cache=none,format=qcow2,file=memory.qcow2 -device scsi-hd,id=image1,drive=drive_image1 -device virtio-net-pci,mac=9a:2b:2c:2d:2e:2f,id=id6b5tKj,vectors=4,netdev=idXB7qte,bus=pci.0,addr=0x5 -netdev tap,id=idXB7qte,vhost=on,script=/etc/qemu-ifup,downscript=/etc/qemu-down,id=hostnet1 -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 -vnc :11 -rtc base=utc,clock=host -enable-kvm -monitor stdio -qmp tcp:0:4441,server,nowait -numa node,cpus=0-239 -smp 240 -m *[From default to 2G]*

2. Any "-m" value from the default up to 2G reproduces the issue.
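The memory sweep in step 2 can be sketched as a loop over "-m" values; this is a hypothetical driver script, and "..." stands in for the rest of the full command line from step 1 (device, drive, and netdev options are not repeated here):

```shell
# Boot the guest once per memory size; per the report, every value up to
# 2048M is expected to hit the panic with 240 vCPUs.
for mem in 512 1024 2048; do
    echo "/usr/libexec/qemu-kvm ... -numa node,cpus=0-239 -smp 240 -m ${mem}"
done
```

Echoing the command line first (instead of executing it) makes it easy to review each variant before running it against a real host.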

Actual results:
There were warnings from qemu-kvm
Warning: Number of SMP cpus requested (240) exceeds the recommended cpus supported by KVM (16)
Warning: Number of hotpluggable cpus requested (240) exceeds the recommended cpus supported by KVM (16)
QEMU proceeded regardless, and the guest eventually hit a kernel panic.

Expected results:
Per discussion with the developer, the guest is expected to boot (if slowly) even when the CPU count exceeds the recommended number on Power. At the very least, there should not be a kernel panic.

Additional info:

[  199.797707]  mapped:0 shmem:168 pagetables:289 bounce:0
[  199.797707]  free:1229 free_pcp:106 free_cma:0
[  199.797756] Node 0 active_anon:173696kB inactive_anon:9024kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:0kB dirty:0kB writeback:0kB shmem:10752kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB pages_scanned:0 all_unreclaimable? yes
[  199.797768] Node 0 DMA free:78656kB min:78720kB low:98368kB high:118016kB active_anon:139776kB inactive_anon:9024kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:2097152kB managed:1608128kB mlocked:0kB slab_reclaimable:77824kB slab_unreclaimable:972352kB kernel_stack:33536kB pagetables:18496kB bounce:0kB free_pcp:6784kB local_pcp:320kB free_cma:0kB
[  199.797781] lowmem_reserve[]: 0 0 0 0 0
[  199.797789] Node 0 DMA: 57*64kB (UME) 274*128kB (UME) 133*256kB (UM) 16*512kB (M) 0*1024kB 0*2048kB 0*4096kB 0*8192kB 0*16384kB = 80960kB
[  199.797814] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  199.797815] 0 total pagecache pages
[  199.797820] 0 pages in swap cache
[  199.797821] Swap cache stats: add 0, delete 0, find 0/0
[  199.797823] Free swap  = 0kB
[  199.797825] Total swap = 0kB
[  199.797826] 32768 pages RAM
[  199.797827] 0 pages HighMem/MovableOnly
[  199.797828] 7641 pages reserved
[  199.797830] 0 pages cma reserved
[  199.797831] 0 pages hwpoisoned
[  199.797834] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
[  199.797969] [ 2454]     0  2454     1350       65      11       3        0             0 lvmetad
[  199.797975] [ 2464]     0  2464      299       66      10       4        0         -1000 systemd-udevd
[  199.797982] [ 2471]     0  2471      293       60      13       4        0             0 systemd-udevd
[  199.797987] [ 2472]     0  2472      293       60      12       4        0             0 systemd-udevd
[  199.797992] [ 2473]     0  2473      290       59      11       4        0             0 systemd-udevd
[  199.797996] [ 2475]     0  2475      293       76      12       4        0             0 systemd-udevd
[  199.798001] [ 2476]     0  2476      290       58      11       4        0             0 systemd-udevd
[  199.798006] [ 2477]     0  2477      290       59      11       4        0             0 systemd-udevd
[  199.798011] [ 2478]     0  2478      290       59      11       4        0             0 systemd-udevd
[  199.798017] [ 2479]     0  2479      290       59      11       4        0             0 systemd-udevd
[  199.798022] [ 2480]     0  2480      290       59      11       4        0             0 systemd-udevd
[  199.798026] [ 2481]     0  2481      290       58      11       4        0             0 systemd-udevd
[  199.798032] [ 2482]     0  2482      290       59      11       4        0             0 systemd-udevd
[  199.798037] [ 2483]     0  2483      290       58      11       4        0             0 systemd-udevd
[  199.798043] [ 2485]     0  2485      290       59      11       4        0             0 systemd-udevd
[  199.798048] [ 2486]     0  2486      290       59      11       4        0             0 systemd-udevd
[  199.798062] [ 2487]     0  2487      290       58      11       4        0             0 systemd-udevd
[  199.798067] [ 2489]     0  2489      293       67      13       4        0             0 systemd-udevd
[  199.798071] [ 2491]     0  2491      293       60      12       4        0             0 systemd-udevd
[  199.798076] [ 2502]     0  2502      293       61      11       4        0             0 systemd-udevd
[  199.798080] [ 2505]     0  2505      293       75      11       4        0             0 systemd-udevd
[  199.798086] [ 2506]     0  2506      293       61      11       4        0             0 systemd-udevd
[  199.798091] [ 2516]     0  2516      293       62      11       4        0             0 systemd-udevd
[  199.798097] [ 2518]     0  2518      293       62      11       4        0             0 systemd-udevd
[  199.798101] [ 2519]     0  2519      293       61      11       4        0             0 systemd-udevd
[  199.798106] [ 2522]     0  2522      293       62      11       4        0             0 systemd-udevd
[  199.798111] [ 2523]     0  2523      293       67      11       4        0             0 systemd-udevd
[  199.798116] [ 2524]     0  2524      293       61      11       4        0             0 systemd-udevd
[  199.798120] [ 2526]     0  2526      293       62      11       4        0             0 systemd-udevd
[  199.798126] [ 2530]     0  2530      293       63      11       4        0             0 systemd-udevd
[  199.798130] [ 2532]     0  2532      293       61      11       4        0             0 systemd-udevd
[  199.798135] [ 2539]     0  2539      293       61      10       4        0             0 systemd-udevd
[  199.798140] [ 2633]     0  2633      296       63      10       4        0             0 systemd-udevd
[  199.798145] [ 2715]     0  2715      296       63      10       4        0             0 systemd-udevd
[  199.798151] [ 2825]     0  2825      113       31       7       3        0             0 systemctl
[  199.798156] [ 2842]     0  2842      113       31       9       3        0             0 systemctl
[  199.798161] [ 2845]     0  2845      113       31       8       4        0             0 systemctl
[  199.798166] [ 2854]     0  2854      113       31       7       3        0             0 systemctl
[  199.798172] [ 3026]     0  3026      299       85      11       4        0             0 systemd-udevd
[  199.798177] [ 3030]     0  3030      299       67      10       4        0             0 systemd-udevd
[  199.798182] [ 3057]     0  3057      113       32       6       3        0             0 systemctl
[  199.798187] [ 3066]     0  3066      113       31       6       4        0             0 systemctl
[  199.798193] [ 3069]     0  3069      113       31       8       3        0             0 systemctl
[  199.798199] [ 3071]     0  3071      113       31       7       3        0             0 systemctl
[  199.798205] [ 3072]     0  3072      113       31       7       3        0             0 systemctl
[  199.798211] [ 3073]     0  3073      113       32       7       4        0             0 systemctl
[  199.798215] [ 3074]     0  3074      113       37       8       4        0             0 systemctl
[  199.798220] [ 3075]     0  3075      113       31       9       3        0             0 systemctl
[  199.798227] [ 3076]     0  3076      113       31       7       3        0             0 systemctl
[  199.798232] [ 3078]     0  3078      113       31       7       3        0             0 systemctl
[  199.798238] [ 3081]     0  3081      113       31       7       4        0             0 systemctl
[  199.798243] [ 3083]     0  3083      113       31       7       4        0             0 systemctl
[  199.798249] [ 3084]     0  3084      113       31       7       4        0             0 systemctl
[  199.798253] [ 3085]     0  3085      113       31       7       4        0             0 systemctl
[  199.798259] [ 3087]     0  3087      113       36       7       3        0             0 systemctl
[  199.798264] [ 3088]     0  3088      113       31       7       3        0             0 systemctl
[  199.798269] [ 3091]     0  3091      113       31       7       3        0             0 systemctl
[  199.798275] [ 3183]     0  3183      299       67      10       4        0             0 systemd-udevd
[  199.798279] [ 3184]     0  3184      299       67      10       4        0             0 systemd-udevd
[  199.798284] [ 3187]     0  3187      299       67      10       4        0             0 systemd-udevd
[  199.798289] [ 3188]     0  3188      299       67      10       4        0             0 systemd-udevd
[  199.798294] [ 3192]     0  3192      113       50       7       3        0             0 systemctl
[  199.798300] [ 3194]     0  3194      113       31       7       4        0             0 systemctl
[  199.798304] [ 3197]     0  3197      299       67      10       4        0             0 systemd-udevd
[  199.798309] [ 3199]     0  3199      113       48       6       4        0             0 systemctl
[  199.798314] [ 3200]     0  3200      113       31       7       4        0             0 systemctl
[  199.798319] [ 3202]     0  3202      113       31       8       3        0             0 systemctl
[  199.798323] [ 3207]     0  3207      113       47       7       3        0             0 systemctl
[  199.798328] [ 3214]     0  3214      113       31       6       3        0             0 systemctl
[  199.798332] [ 3226]     0  3226      113       51       7       4        0             0 systemctl
[  199.798337] [ 3227]     0  3227      113       37       8       4        0             0 systemctl
[  199.798341] [ 3231]     0  3231      113       31       8       3        0             0 systemctl
[  199.798347] [ 3233]     0  3233      113       32       7       4        0             0 systemctl
[  199.798352] [ 3235]     0  3235      113       34       7       3        0             0 systemctl
[  199.798357] [ 3240]     0  3240      299       67      10       4        0             0 systemd-udevd
[  199.798362] [ 3263]     0  3263      299       67      10       4        0             0 systemd-udevd
[  199.798367] [ 3264]     0  3264      299       68      10       4        0             0 systemd-udevd
[  199.798372] [ 3265]     0  3265      299        0      10       4        0             0 systemd-udevd
[  199.798377] [ 3266]     0  3266      299       69      10       4        0             0 systemd-udevd
[  199.798383] [ 3287]     0  3287      299       67      10       4        0             0 systemd-udevd
[  199.798388] [ 3293]     0  3293       73       12       5       3        0             0 sh
[  199.798393] [ 3294]     0  3294      299       67      10       4        0             0 systemd-udevd
[  199.798397] [ 3296]     0  3296      299       77      10       4        0             0 systemd-udevd
[  199.798403] [ 3306]     0  3306       46        6       4       3        0             0 touch
[  199.798408] [ 3308]     0  3308       46        5       5       3        0             0 touch
[  199.798413] [ 3309]     0  3309       48        8       5       3        0             0 touch
[  199.798416] Out of memory: Kill process 3026 (systemd-udevd) score 3 or sacrifice child
[  199.798432] Killed process 3026 (systemd-udevd) total-vm:19136kB, anon-rss:4288kB, file-rss:1152kB, shmem-rss:0kB
[  199.878269] systemd-udevd cpuset=/ mems_allowed=0
[  199.878554] CPU: 24 PID: 3265 Comm: systemd-udevd Tainted: G        W      ------------   4.11.0-16.el7a.ppc64le #1
[  199.878792] Call Trace:
[  199.878865] [c000000046a2f6e0] [c000000000bdc764] dump_stack+0xb0/0xf0 (unreliable)
[  199.879118] [c000000046a2f720] [c0000000002ff568] warn_alloc+0x128/0x1c0
[  199.879426] [c000000046a2f7d0] [c0000000003003b4] __alloc_pages_nodemask+0xd04/0x1000
[  199.879681] [c000000046a2f9c0] [c0000000003a3520] alloc_pages_current+0x120/0x370
[  199.880982] [c000000046a2fa60] [c0000000002eb4a8] __page_cache_alloc+0x108/0x150
[  199.881193] [c000000046a2faa0] [c0000000002f3890] filemap_fault+0x4a0/0x820
[  199.882533] [c000000046a2fb50] [c0080000075abe94] xfs_filemap_fault+0x84/0x1b0 [xfs]
[  199.883869] [c000000046a2fb90] [c00000000034f240] __do_fault+0x50/0x190
[  199.884287] [c000000046a2fbd0] [c0000000003581d4] do_fault+0x6f4/0x990
[  199.884508] [c000000046a2fc20] [c00000000035bf20] __handle_mm_fault+0x910/0x10e0
[  199.884744] [c000000046a2fd30] [c00000000035c81c] handle_mm_fault+0x12c/0x210
[  199.885109] [c000000046a2fd70] [c000000000071b44] do_page_fault+0x5c4/0x860
[  199.885404] [c000000046a2fe30] [c00000000000a3dc] handle_page_fault+0x18/0x38
Comment 1 Min Deng 2017-07-27 02:18:05 EDT
The issue also reproduces on P8 + RHEL 7.4.
Builds,
kernel-3.10.0-689.el7.ppc64le - host
kernel-3.10.0-693.el7.ppc64le - guest
qemu-kvm-rhev-2.9.0-14.el7.ppc64le
Log,
[  270.527944] Swap cache stats: add 29306, delete 29238, find 26722/37226
[  270.528042] Free swap  = 4142336kB
[  270.528101] Total swap = 4194240kB
[  270.528162] 32768 pages RAM
[  270.528203] 0 pages HighMem/MovableOnly
[  270.528262] 7641 pages reserved
[  270.528321] 0 pages cma reserved
[  270.528381] 0 pages hwpoisoned
[  270.528439] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
[  270.528676] [ 2465]     0  2465      299        2      10       4       65         -1000 systemd-udevd
[  270.528819] [ 3418]     0  3418     1757        2       8       3       14             0 kdumpctl
[  270.528956] [ 3424]     0  3424     1758        2       7       3       19             0 kdumpctl
[  270.529092] [ 3496]     0  3496     1754       35       9       3       15             0 mkdumprd
[  270.529237] [ 3524]     0  3524      284        7       8       4       41         -1000 auditd
[  270.529373] [ 3574]     0  3574     1751        6       9       4       10             0 sh
[  270.529489] [ 3575]     0  3575     1750        1      10       3       21             0 sulogin
[  270.529625] [ 4522]     0  4522      132       21       7       3       34             0 lvmetad
[  270.529759] [ 4528]     0  4528       91       15       6       3       20             0 systemd-journal
[  270.529896] [ 4562]     0  4562     1754       29       7       3       15             0 mkdumprd
[  270.530035] [ 4564]     0  4564     1741       19      10       4       10             0 sed
[  270.530167] Out of memory: Kill process 4522 (lvmetad) score 0 or sacrifice child
[  270.530311] Killed process 4522 (lvmetad) total-vm:8448kB, anon-rss:0kB, file-rss:1344kB, shmem-rss:0kB
[  270.530990] lvmetad: page allocation failure: order:0, mode:0x14200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null)
[  270.531161] lvmetad cpuset=/ mems_allowed=0
[  270.531232] CPU: 4 PID: 4522 Comm: lvmetad Tainted: G        W      ------------   4.11.0-16.el7a.ppc64le #1
[  270.531263] oom_reaper: reaped process 4522 (lvmetad), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[  270.531527] Call Trace:
[  270.531576] [c0000000564bf690] [c000000000bdc764] dump_stack+0xb0/0xf0 (unreliable)
[  270.531696] [c0000000564bf6d0] [c0000000002ff568] warn_alloc+0x128/0x1c0
[  270.531803] [c0000000564bf780] [c000000000300558] __alloc_pages_nodemask+0xea8/0x1000
[  270.531922] [c0000000564bf970] [c0000000003a6284] alloc_pages_vma+0x594/0x6f0
[  270.532047] [c0000000564bfa30] [c000000000386c28] __read_swap_cache_async+0x208/0x300
[  270.532170] [c0000000564bfab0] [c00000000038733c] swapin_readahead+0x32c/0x5b0
[  270.532287] [c0000000564bfba0] [c000000000356bc8] do_swap_page+0x608/0xa80
[  270.532391] [c0000000564bfc20] [c00000000035bfd4] __handle_mm_fault+0x9c4/0x10e0
[  270.532510] [c0000000564bfd30] [c00000000035c81c] handle_mm_fault+0x12c/0x210
[  270.532650] [c0000000564bfd70] [c000000000071b44] do_page_fault+0x5c4/0x860
[  270.532751] [c0000000564bfe30] [c00000000000a3dc] handle_page_fault+0x18/0x38
[  270.532867] warn_alloc_show_mem: 1 callbacks suppressed
[  270.532943] Mem-Info:
[  270.533009] active_anon:56 inactive_anon:165 isolated_anon:17
[  270.533009]  active_file:0 inactive_file:25 isolated_file:0
[  270.533009]  unevictable:0 dirty:0 writeback:18 unstable:0
[  270.533009]  slab_reclaimable:1528 slab_unreclaimable:16423
[  270.533009]  mapped:331 shmem:0 pagetables:234 bounce:0
[  270.533009]  free:1229 free_pcp:203 free_cma:0
[  270.533512] Node 0 active_anon:3584kB inactive_anon:10560kB active_file:0kB inactive_file:1600kB unevictable:0kB isolated(anon):1088kB isolated(file):0kB mapped:21184kB dirty:0kB writeback:1152kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB pages_scanned:12105864 all_unreclaimable? yes
[  270.533926] Node 0 DMA free:78656kB min:78720kB low:98368kB high:118016kB active_anon:13376kB inactive_anon:2176kB active_file:0kB inactive_file:4544kB unevictable:0kB writepending:320kB present:2097152kB managed:1608128kB mlocked:0kB slab_reclaimable:97792kB slab_unreclaimable:1051072kB kernel_stack:32576kB pagetables:14976kB bounce:0kB free_pcp:12992kB local_pcp:192kB free_cma:0kB
[  270.534400] lowmem_reserve[]: 0 0 0 0 0
[  270.534469] Node 0 DMA: 59*64kB (UME) 166*128kB (UM) 108*256kB (UME) 45*512kB (UME) 7*1024kB (UM) 0*2048kB 0*4096kB 0*8192kB 0*16384kB = 82880kB
[  270.534682] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[  270.534819] 0 total pagecache pages
[  270.534880] 66 pages in swap cache
[  270.534954] Swap cache stats: add 29306, delete 29240, find 26722/37226
[  270.535049] Free swap  = 4144320kB
[  270.535109] Total swap = 4194240kB
[  270.535167] 32768 pages RAM
[  270.535211] 0 pages HighMem/MovableOnly
[  270.535270] 7641 pages reserved
[  270.535332] 0 pages cma reserved
[  270.535391] 0 pages hwpoisoned
Comment 2 Laurent Vivier 2017-08-02 13:25:43 EDT
According to the message in comment 0 and comment 1 ("Out of memory: Kill process ") the system has just enough memory to store internal kernel structures for 240 CPUs but not enough after that to start new processes.

Try to boot with more memory and see how much memory is available once the kernel is booted: content of /proc/meminfo and result of "numactl -H" could be interesting.
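The data collection requested here can be sketched as a short script to run inside the guest once it boots with more memory. /proc/meminfo is a standard Linux interface; numactl may not be installed in a minimal guest, so it is guarded (the report path /tmp/guest-mem-report.txt is my own choice):

```shell
# Gather the memory and NUMA information into one report for the bug.
{
  echo "== /proc/meminfo (summary) =="
  grep -E '^(MemTotal|MemFree|MemAvailable):' /proc/meminfo
  echo "== NUMA topology =="
  if command -v numactl >/dev/null 2>&1; then
    numactl -H
  else
    echo "numactl not installed"
  fi
} > /tmp/guest-mem-report.txt
cat /tmp/guest-mem-report.txt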
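The data collection requested here can be sketched as a short script to run inside the guest once it boots with more memory. /proc/meminfo is a standard Linux interface; numactl may not be installed in a minimal guest, so it is guarded (the report path /tmp/guest-mem-report.txt is my own choice):

```shell
# Gather the memory and NUMA information into one report for the bug.
{
  echo "== /proc/meminfo (summary) =="
  grep -E '^(MemTotal|MemFree|MemAvailable):' /proc/meminfo
  echo "== NUMA topology =="
  if command -v numactl >/dev/null 2>&1; then
    numactl -H
  else
    echo "numactl not installed"
  fi
} > /tmp/guest-mem-report.txt
cat /tmp/guest-mem-report.txt
```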
Comment 3 Min Deng 2017-08-02 21:03:52 EDT
(In reply to Laurent Vivier from comment #2)
> According to the message in comment 0 and comment 1 ("Out of memory: Kill
> process ") the system has just enough memory to store internal kernel
> structures for 240 CPUs but not enough after that to start new processes.
> 
> Try to boot with more memory and see how much memory is available once the
> kernel is booted: content of /proc/meminfo and result of "numactl -H" could
> be interesting.
  Actually, if the memory exceeds 2G (approximately 3G), the guest works. Per Laurent's suggestion, QE collected the following log.
[root@localhost ~]# cat /proc/meminfo
MemTotal:        3048768 kB
MemFree:          636224 kB
MemAvailable:     550080 kB
Buffers:            2816 kB
Cached:           182080 kB
SwapCached:            0 kB
Active:           524096 kB
Inactive:         119168 kB
Active(anon):     460544 kB
Inactive(anon):    10112 kB
Active(file):      63552 kB
Inactive(file):   109056 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       4194240 kB
SwapFree:        4194240 kB
Dirty:              1408 kB
Writeback:             0 kB
AnonPages:        467328 kB
Mapped:            49792 kB
Shmem:             11904 kB
Slab:            1252352 kB
SReclaimable:     106560 kB
SUnreclaim:      1145792 kB
KernelStack:       36944 kB
PageTables:        22144 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     5718592 kB
Committed_AS:    2751680 kB
VmallocTotal:   549755813888 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

[root@localhost ~]# numactl -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239
node 0 size: 2977 MB
node 0 free: 671 MB
node distances:
node   0 
  0:  10
Comment 4 David Gibson 2017-08-02 21:19:32 EDT
Ok, it looks like 2G is still not enough to support 240 CPUs. There's not really a way QEMU can predict how much RAM the guest will need for a given number of CPUs, so I don't think there's much we can do about this.
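A rough back-of-envelope check, using only the numbers from the meminfo output in comment 3 (guest booted with ~3G and 240 vCPUs), illustrates why 2G is not enough; the per-vCPU split is my own approximation, not a measured figure:

```shell
# Figures copied from the /proc/meminfo log above:
slab_kb=1252352      # Slab:        1252352 kB
kstack_kb=36944      # KernelStack:   36944 kB
cpus=240
per_cpu_kb=$(( (slab_kb + kstack_kb) / cpus ))
echo "kernel slab+stack: $(( (slab_kb + kstack_kb) / 1024 )) MB total"
echo "roughly ${per_cpu_kb} kB of that per vCPU"
# ~1.2 GB of slab alone explains why a 2 GB guest OOM-kills userspace
# before boot completes.
```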
