Red Hat Bugzilla – Attachment 299355 Details for Bug 438960: G965 chipset box grinds to a halt on boot with 6GiB ram
hopefully complete patchset

Description: hopefully complete patchset
Filename: mtrr.diff
MIME Type: text/plain
Creator: Dave Jones
Created: 2008-03-27 16:13:09 UTC
Size: 22.47 KB
Flags: patch, obsolete
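The first hunk of the attached patch rewrites the usage-count update in mtrr_add_page() from "usage_table[i] = usage_table[replace] + !!increment;" into an explicit copy plus conditional increment. The two forms compute the same value; as a sanity check, here is a minimal user-space sketch (the helper names are illustrative, not the kernel's):

```c
#include <assert.h>

/* Form removed by the patch: fold the increment flag in with !! */
static unsigned int count_before_patch(unsigned int replaced, int increment)
{
	return replaced + !!increment;
}

/* Form added by the patch: copy the old count, then bump it if asked */
static unsigned int count_after_patch(unsigned int replaced, int increment)
{
	unsigned int count = replaced;

	if (increment)
		count++;
	return count;
}
```

The change is purely stylistic; the explicit form was preferred upstream for readability.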
Necessary chunk of 2d2ee8de5f6d26ef2942e0b449aa68d9236d5777 to make the rest apply.

diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
index beb45c9..60af5ed 100644
--- a/arch/x86/kernel/cpu/mtrr/main.c
+++ b/arch/x86/kernel/cpu/mtrr/main.c
@@ -394,7 +394,9 @@ int mtrr_add_page(unsigned long base, unsigned long size,
 if (likely(replace < 0))
 usage_table[i] = 1;
 else {
- usage_table[i] = usage_table[replace] + !!increment;
+ usage_table[i] = usage_table[replace];
+ if (increment)
+ usage_table[i]++;
 if (unlikely(replace != i)) {
 set_mtrr(replace, 0, 0, 0);
 usage_table[replace] = 0;


commit 99fc8d424bc5d803fe92cad56c068fe64e73747a
Author: Jesse Barnes <jesse.barnes@intel.com>
Date: Wed Jan 30 13:33:18 2008 +0100

 x86, 32-bit: trim memory not covered by wb mtrrs

 On some machines, buggy BIOSes don't properly setup WB MTRRs to cover all
 available RAM, meaning the last few megs (or even gigs) of memory will be
 marked uncached. Since Linux tends to allocate from high memory addresses
 first, this causes the machine to be unusably slow as soon as the kernel
 starts really using memory (i.e. right around init time).

 This patch works around the problem by scanning the MTRRs at boot and
 figuring out whether the current end_pfn value (setup by early e820 code)
 goes beyond the highest WB MTRR range, and if so, trimming it to match. A
 fairly obnoxious KERN_WARNING is printed too, letting the user know that
 not all of their memory is available due to a likely BIOS bug.

 Something similar could be done on i386 if needed, but the boot ordering
 would be slightly different, since the MTRR code on i386 depends on the
 boot_cpu_data structure being setup.

 This patch fixes a bug in the last patch that caused the code to run on
 non-Intel machines (AMD machines apparently don't need it and it's untested
 on other non-Intel machines, so best keep it off).

 Further enhancements and fixes from:

 Yinghai Lu <Yinghai.Lu@Sun.COM>
 Andi Kleen <ak@suse.de>

 Signed-off-by: Jesse Barnes <jesse.barnes@intel.com>
 Tested-by: Justin Piszcz <jpiszcz@lucidpixels.com>
 Cc: Andi Kleen <andi@firstfloor.org>
 Cc: "Eric W. Biederman" <ebiederm@xmission.com>
 Cc: Yinghai Lu <yhlu.kernel@gmail.com>
 Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
 Signed-off-by: Ingo Molnar <mingo@elte.hu>
 Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 860a908..b8fadf5 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -570,6 +570,12 @@ and is between 256 and 4096 characters. It is defined in the file
 See drivers/char/README.epca and
 Documentation/digiepca.txt.

+ disable_mtrr_trim [X86-64, Intel only]
+ By default the kernel will trim any uncacheable
+ memory out of your available memory pool based on
+ MTRR settings. This parameter disables that behavior,
+ possibly causing your machine to run very slowly.
+
 dmasound= [HW,OSS] Sound subsystem buffers

 dscc4.setup= [NET]
diff --git a/arch/x86/kernel/bugs_64.c b/arch/x86/kernel/bugs_64.c
index 9a189ce..8f520f9 100644
--- a/arch/x86/kernel/bugs_64.c
+++ b/arch/x86/kernel/bugs_64.c
@@ -13,7 +13,6 @@
 void __init check_bugs(void)
 {
 identify_cpu(&boot_cpu_data);
- mtrr_bp_init();
 #if !defined(CONFIG_SMP)
 printk("CPU: ");
 print_cpu_info(&boot_cpu_data);
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 55d31ff..103d61a 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -14,7 +14,7 @@
 #include "mtrr.h"

 struct mtrr_state {
- struct mtrr_var_range *var_ranges;
+ struct mtrr_var_range var_ranges[MAX_VAR_RANGES];
 mtrr_type fixed_ranges[NUM_FIXED_RANGES];
 unsigned char enabled;
 unsigned char have_fixed;
@@ -86,12 +86,6 @@ void __init get_mtrr_state(void)
 struct mtrr_var_range *vrs;
 unsigned lo, dummy;

- if (!mtrr_state.var_ranges) {
- mtrr_state.var_ranges = kmalloc(num_var_ranges * sizeof (struct mtrr_var_range),
- GFP_KERNEL);
- if (!mtrr_state.var_ranges)
- return;
- }
 vrs = mtrr_state.var_ranges;

 rdmsr(MTRRcap_MSR, lo, dummy);
diff --git a/arch/x86/kernel/cpu/mtrr/if.c b/arch/x86/kernel/cpu/mtrr/if.c
index 1453568..91e150a 100644
--- a/arch/x86/kernel/cpu/mtrr/if.c
+++ b/arch/x86/kernel/cpu/mtrr/if.c
@@ -11,10 +11,6 @@
 #include <asm/mtrr.h>
 #include "mtrr.h"

-/* RED-PEN: this is accessed without any locking */
-extern unsigned int *usage_table;
-
-
 #define FILE_FCOUNT(f) (((struct seq_file *)((f)->private_data))->private)

 static const char *const mtrr_strings[MTRR_NUM_TYPES] =
@@ -397,7 +393,7 @@ static int mtrr_seq_show(struct seq_file *seq, void *offset)
 for (i = 0; i < max; i++) {
 mtrr_if->get(i, &base, &size, &type);
 if (size == 0)
- usage_table[i] = 0;
+ mtrr_usage_table[i] = 0;
 else {
 if (size < (0x100000 >> PAGE_SHIFT)) {
 /* less than 1MB */
@@ -411,7 +407,7 @@ static int mtrr_seq_show(struct seq_file *seq, void *offset)
 len += seq_printf(seq,
 "reg%02i: base=0x%05lx000 (%4luMB), size=%4lu%cB: %s, count=%d\n",
 i, base, base >> (20 - PAGE_SHIFT), size, factor,
- mtrr_attrib_to_str(type), usage_table[i]);
+ mtrr_attrib_to_str(type), mtrr_usage_table[i]);
 }
 }
 return 0;
diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
index 60af5ed..ccd36ed 100644
--- a/arch/x86/kernel/cpu/mtrr/main.c
+++ b/arch/x86/kernel/cpu/mtrr/main.c
@@ -38,8 +38,8 @@
 #include <linux/cpu.h>
 #include <linux/mutex.h>

+#include <asm/e820.h>
 #include <asm/mtrr.h>
-
 #include <asm/uaccess.h>
 #include <asm/processor.h>
 #include <asm/msr.h>
@@ -47,7 +47,7 @@

 u32 num_var_ranges = 0;

-unsigned int *usage_table;
+unsigned int mtrr_usage_table[MAX_VAR_RANGES];
 static DEFINE_MUTEX(mtrr_mutex);

 u64 size_or_mask, size_and_mask;
@@ -121,13 +121,8 @@ static void __init init_table(void)
 int i, max;

 max = num_var_ranges;
- if ((usage_table = kmalloc(max * sizeof *usage_table, GFP_KERNEL))
- == NULL) {
- printk(KERN_ERR "mtrr: could not allocate\n");
- return;
- }
 for (i = 0; i < max; i++)
- usage_table[i] = 1;
+ mtrr_usage_table[i] = 1;
 }

 struct set_mtrr_data {
@@ -383,7 +378,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
 goto out;
 }
 if (increment)
- ++usage_table[i];
+ ++mtrr_usage_table[i];
 error = i;
 goto out;
 }
@@ -391,15 +386,15 @@ int mtrr_add_page(unsigned long base, unsigned long size,
 i = mtrr_if->get_free_region(base, size, replace);
 if (i >= 0) {
 set_mtrr(i, base, size, type);
- if (likely(replace < 0))
- usage_table[i] = 1;
- else {
- usage_table[i] = usage_table[replace];
+ if (likely(replace < 0)) {
+ mtrr_usage_table[i] = 1;
+ } else {
+ mtrr_usage_table[i] = mtrr_usage_table[replace];
 if (increment)
- usage_table[i]++;
+ mtrr_usage_table[i]++;
 if (unlikely(replace != i)) {
 set_mtrr(replace, 0, 0, 0);
- usage_table[replace] = 0;
+ mtrr_usage_table[replace] = 0;
 }
 }
 } else
@@ -529,11 +524,11 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
 printk(KERN_WARNING "mtrr: MTRR %d not used\n", reg);
 goto out;
 }
- if (usage_table[reg] < 1) {
+ if (mtrr_usage_table[reg] < 1) {
 printk(KERN_WARNING "mtrr: reg: %d has count=0\n", reg);
 goto out;
 }
- if (--usage_table[reg] < 1)
+ if (--mtrr_usage_table[reg] < 1)
 set_mtrr(reg, 0, 0, 0);
 error = reg;
 out:
@@ -593,16 +588,11 @@ struct mtrr_value {
 unsigned long lsize;
 };

-static struct mtrr_value * mtrr_state;
+static struct mtrr_value mtrr_state[MAX_VAR_RANGES];

 static int mtrr_save(struct sys_device * sysdev, pm_message_t state)
 {
 int i;
- int size = num_var_ranges * sizeof(struct mtrr_value);
-
- mtrr_state = kzalloc(size,GFP_ATOMIC);
- if (!mtrr_state)
- return -ENOMEM;

 for (i = 0; i < num_var_ranges; i++) {
 mtrr_if->get(i,
@@ -624,7 +614,6 @@ static int mtrr_restore(struct sys_device * sysdev)
 mtrr_state[i].lsize,
 mtrr_state[i].ltype);
 }
- kfree(mtrr_state);
 return 0;
 }

@@ -635,6 +624,109 @@ static struct sysdev_driver mtrr_sysdev_driver = {
 .resume = mtrr_restore,
 };

+#ifdef CONFIG_X86_64
+static int disable_mtrr_trim;
+
+static int __init disable_mtrr_trim_setup(char *str)
+{
+ disable_mtrr_trim = 1;
+ return 0;
+}
+early_param("disable_mtrr_trim", disable_mtrr_trim_setup);
+
+/*
+ * Newer AMD K8s and later CPUs have a special magic MSR way to force WB
+ * for memory >4GB. Check for that here.
+ * Note this won't check if the MTRRs < 4GB where the magic bit doesn't
+ * apply to are wrong, but so far we don't know of any such case in the wild.
+ */
+#define Tom2Enabled (1U << 21)
+#define Tom2ForceMemTypeWB (1U << 22)
+
+static __init int amd_special_default_mtrr(unsigned long end_pfn)
+{
+ u32 l, h;
+
+ /* Doesn't apply to memory < 4GB */
+ if (end_pfn <= (0xffffffff >> PAGE_SHIFT))
+ return 0;
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
+ return 0;
+ if (boot_cpu_data.x86 < 0xf || boot_cpu_data.x86 > 0x11)
+ return 0;
+ /* In case some hypervisor doesn't pass SYSCFG through */
+ if (rdmsr_safe(MSR_K8_SYSCFG, &l, &h) < 0)
+ return 0;
+ /*
+ * Memory between 4GB and top of mem is forced WB by this magic bit.
+ * Reserved before K8RevF, but should be zero there.
+ */
+ if ((l & (Tom2Enabled | Tom2ForceMemTypeWB)) ==
+ (Tom2Enabled | Tom2ForceMemTypeWB))
+ return 1;
+ return 0;
+}
+
+/**
+ * mtrr_trim_uncached_memory - trim RAM not covered by MTRRs
+ *
+ * Some buggy BIOSes don't setup the MTRRs properly for systems with certain
+ * memory configurations. This routine checks that the highest MTRR matches
+ * the end of memory, to make sure the MTRRs having a write back type cover
+ * all of the memory the kernel is intending to use. If not, it'll trim any
+ * memory off the end by adjusting end_pfn, removing it from the kernel's
+ * allocation pools, warning the user with an obnoxious message.
+ */
+int __init mtrr_trim_uncached_memory(unsigned long end_pfn)
+{
+ unsigned long i, base, size, highest_addr = 0, def, dummy;
+ mtrr_type type;
+ u64 trim_start, trim_size;
+
+ /*
+ * Make sure we only trim uncachable memory on machines that
+ * support the Intel MTRR architecture:
+ */
+ rdmsr(MTRRdefType_MSR, def, dummy);
+ def &= 0xff;
+ if (!is_cpu(INTEL) || disable_mtrr_trim || def != MTRR_TYPE_UNCACHABLE)
+ return 0;
+
+ /* Find highest cached pfn */
+ for (i = 0; i < num_var_ranges; i++) {
+ mtrr_if->get(i, &base, &size, &type);
+ if (type != MTRR_TYPE_WRBACK)
+ continue;
+ base <<= PAGE_SHIFT;
+ size <<= PAGE_SHIFT;
+ if (highest_addr < base + size)
+ highest_addr = base + size;
+ }
+
+ if (amd_special_default_mtrr(end_pfn))
+ return 0;
+
+ if ((highest_addr >> PAGE_SHIFT) < end_pfn) {
+ printk(KERN_WARNING "***************\n");
+ printk(KERN_WARNING "**** WARNING: likely BIOS bug\n");
+ printk(KERN_WARNING "**** MTRRs don't cover all of "
+ "memory, trimmed %ld pages\n", end_pfn -
+ (highest_addr >> PAGE_SHIFT));
+ printk(KERN_WARNING "***************\n");
+
+ printk(KERN_INFO "update e820 for mtrr\n");
+ trim_start = highest_addr;
+ trim_size = end_pfn;
+ trim_size <<= PAGE_SHIFT;
+ trim_size -= trim_start;
+ add_memory_region(trim_start, trim_size, E820_RESERVED);
+ update_e820();
+ return 1;
+ }
+
+ return 0;
+}
+#endif

 /**
 * mtrr_bp_init - initialize mtrrs on the boot CPU
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.h b/arch/x86/kernel/cpu/mtrr/mtrr.h
index 54347e9..fb74a2c 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.h
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.h
@@ -12,6 +12,7 @@
 #define MTRRphysMask_MSR(reg) (0x200 + 2 * (reg) + 1)

 #define NUM_FIXED_RANGES 88
+#define MAX_VAR_RANGES 256
 #define MTRRfix64K_00000_MSR 0x250
 #define MTRRfix16K_80000_MSR 0x258
 #define MTRRfix16K_A0000_MSR 0x259
@@ -32,6 +33,8 @@
 an 8 bit field: */
 typedef u8 mtrr_type;

+extern unsigned int mtrr_usage_table[MAX_VAR_RANGES];
+
 struct mtrr_ops {
 u32 vendor;
 u32 use_intel_if;
diff --git a/arch/x86/kernel/setup_64.c b/arch/x86/kernel/setup_64.c
index 6cbd156..1294831 100644
--- a/arch/x86/kernel/setup_64.c
+++ b/arch/x86/kernel/setup_64.c
@@ -310,6 +310,13 @@ void __init setup_arch(char **cmdline_p)
 * we are rounding upwards:
 */
 end_pfn = e820_end_of_ram();
+ /* update e820 for memory not covered by WB MTRRs */
+ mtrr_bp_init();
+ if (mtrr_trim_uncached_memory(end_pfn)) {
+ e820_register_active_regions(0, 0, -1UL);
+ end_pfn = e820_end_of_ram();
+ }
+
 num_physpages = end_pfn;

 check_efer();
diff --git a/include/asm-x86/mtrr.h b/include/asm-x86/mtrr.h
index 262670e..319d065 100644
--- a/include/asm-x86/mtrr.h
+++ b/include/asm-x86/mtrr.h
@@ -97,6 +97,7 @@ extern int mtrr_del_page (int reg, unsigned long base, unsigned long size);
 extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
 extern void mtrr_ap_init(void);
 extern void mtrr_bp_init(void);
+extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
 # else
 #define mtrr_save_fixed_ranges(arg) do {} while (0)
 #define mtrr_save_state() do {} while (0)
@@ -120,7 +121,10 @@ static __inline__ int mtrr_del_page (int reg, unsigned long base,
 {
 return -ENODEV;
 }
-
+static inline int mtrr_trim_uncached_memory(unsigned long end_pfn)
+{
+ return 0;
+}
 static __inline__ void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi) {;}

 #define mtrr_ap_init() do {} while (0)
commit 093af8d7f0ba3c6be1485973508584ef081e9f93
Author: Yinghai Lu <Yinghai.Lu@Sun.COM>
Date: Wed Jan 30 13:33:32 2008 +0100

 x86_32: trim memory by updating e820

 when MTRRs are not covering the whole e820 table, we need to trim the
 RAM and need to update e820.

 reuse some code on 64-bit as well.

 here need to add early_get_cap and use it in early_cpu_detect, and move
 mtrr_bp_init early.

 The code successfully trimmed the memory map on Justin's system:

 from:

 [ 0.000000] BIOS-e820: 0000000100000000 - 000000022c000000 (usable)

 to:

 [ 0.000000] modified: 0000000100000000 - 0000000228000000 (usable)
 [ 0.000000] modified: 0000000228000000 - 000000022c000000 (reserved)

 According to Justin it makes quite a difference:

 | When I boot the box without any trimming it acts like a 286 or 386,
 | takes about 10 minutes to boot (using raptor disks).

 Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>
 Tested-by: Justin Piszcz <jpiszcz@lucidpixels.com>
 Signed-off-by: Ingo Molnar <mingo@elte.hu>
 Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 50d564d..fe3031d 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -583,7 +583,7 @@ and is between 256 and 4096 characters. It is defined in the file
 See drivers/char/README.epca and
 Documentation/digiepca.txt.

- disable_mtrr_trim [X86-64, Intel only]
+ disable_mtrr_trim [X86, Intel and AMD only]
 By default the kernel will trim any uncacheable
 memory out of your available memory pool based on
 MTRR settings. This parameter disables that behavior,
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 56cc341..bba850b 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -278,6 +278,33 @@ void __init cpu_detect(struct cpuinfo_x86 *c)
 c->x86_cache_alignment = ((misc >> 8) & 0xff) * 8;
 }
 }
+static void __cpuinit early_get_cap(struct cpuinfo_x86 *c)
+{
+ u32 tfms, xlvl;
+ int ebx;
+
+ memset(&c->x86_capability, 0, sizeof c->x86_capability);
+ if (have_cpuid_p()) {
+ /* Intel-defined flags: level 0x00000001 */
+ if (c->cpuid_level >= 0x00000001) {
+ u32 capability, excap;
+ cpuid(0x00000001, &tfms, &ebx, &excap, &capability);
+ c->x86_capability[0] = capability;
+ c->x86_capability[4] = excap;
+ }
+
+ /* AMD-defined flags: level 0x80000001 */
+ xlvl = cpuid_eax(0x80000000);
+ if ((xlvl & 0xffff0000) == 0x80000000) {
+ if (xlvl >= 0x80000001) {
+ c->x86_capability[1] = cpuid_edx(0x80000001);
+ c->x86_capability[6] = cpuid_ecx(0x80000001);
+ }
+ }
+
+ }
+
+}

 /* Do minimum CPU detection early.
 Fields really needed: vendor, cpuid_level, family, model, mask, cache alignment.
@@ -306,6 +333,8 @@ static void __init early_cpu_detect(void)
 early_init_intel(c);
 break;
 }
+
+ early_get_cap(c);
 }

 static void __cpuinit generic_identify(struct cpuinfo_x86 * c)
@@ -485,7 +514,6 @@ void __init identify_boot_cpu(void)
 identify_cpu(&boot_cpu_data);
 sysenter_setup();
 enable_sep_cpu();
- mtrr_bp_init();
 }

 void __cpuinit identify_secondary_cpu(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
index ccd36ed..ac4b633 100644
--- a/arch/x86/kernel/cpu/mtrr/main.c
+++ b/arch/x86/kernel/cpu/mtrr/main.c
@@ -624,7 +624,6 @@ static struct sysdev_driver mtrr_sysdev_driver = {
 .resume = mtrr_restore,
 };

-#ifdef CONFIG_X86_64
 static int disable_mtrr_trim;

 static int __init disable_mtrr_trim_setup(char *str)
@@ -643,13 +642,10 @@ early_param("disable_mtrr_trim", disable_mtrr_trim_setup);
 #define Tom2Enabled (1U << 21)
 #define Tom2ForceMemTypeWB (1U << 22)

-static __init int amd_special_default_mtrr(unsigned long end_pfn)
+static __init int amd_special_default_mtrr(void)
 {
 u32 l, h;

- /* Doesn't apply to memory < 4GB */
- if (end_pfn <= (0xffffffff >> PAGE_SHIFT))
- return 0;
 if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
 return 0;
 if (boot_cpu_data.x86 < 0xf || boot_cpu_data.x86 > 0x11)
 return 0;
@@ -687,9 +683,14 @@ int __init mtrr_trim_uncached_memory(unsigned long end_pfn)
 * Make sure we only trim uncachable memory on machines that
 * support the Intel MTRR architecture:
 */
+ if (!is_cpu(INTEL) || disable_mtrr_trim)
+ return 0;
 rdmsr(MTRRdefType_MSR, def, dummy);
 def &= 0xff;
- if (!is_cpu(INTEL) || disable_mtrr_trim || def != MTRR_TYPE_UNCACHABLE)
+ if (def != MTRR_TYPE_UNCACHABLE)
+ return 0;
+
+ if (amd_special_default_mtrr())
 return 0;

 /* Find highest cached pfn */
@@ -703,8 +704,14 @@ int __init mtrr_trim_uncached_memory(unsigned long end_pfn)
 highest_addr = base + size;
 }

- if (amd_special_default_mtrr(end_pfn))
+ /* kvm/qemu doesn't have mtrr set right, don't trim them all */
+ if (!highest_addr) {
+ printk(KERN_WARNING "***************\n");
+ printk(KERN_WARNING "**** WARNING: likely strange cpu\n");
+ printk(KERN_WARNING "**** MTRRs all blank, cpu in qemu?\n");
+ printk(KERN_WARNING "***************\n");
 return 0;
+ }

 if ((highest_addr >> PAGE_SHIFT) < end_pfn) {
 printk(KERN_WARNING "***************\n");
@@ -726,7 +733,6 @@ int __init mtrr_trim_uncached_memory(unsigned long end_pfn)

 return 0;
 }
-#endif

 /**
 * mtrr_bp_init - initialize mtrrs on the boot CPU
diff --git a/arch/x86/kernel/e820_32.c b/arch/x86/kernel/e820_32.c
index 931934a..4e16ef4 100644
--- a/arch/x86/kernel/e820_32.c
+++ b/arch/x86/kernel/e820_32.c
@@ -749,3 +749,14 @@ static int __init parse_memmap(char *arg)
 return 0;
 }
 early_param("memmap", parse_memmap);
+void __init update_e820(void)
+{
+ u8 nr_map;
+
+ nr_map = e820.nr_map;
+ if (sanitize_e820_map(e820.map, &nr_map))
+ return;
+ e820.nr_map = nr_map;
+ printk(KERN_INFO "modified physical RAM map:\n");
+ print_memory_map("modified");
+}
diff --git a/arch/x86/kernel/setup_32.c b/arch/x86/kernel/setup_32.c
index 26a56f7..83ba3ca 100644
--- a/arch/x86/kernel/setup_32.c
+++ b/arch/x86/kernel/setup_32.c
@@ -48,6 +48,7 @@

 #include <video/edid.h>

+#include <asm/mtrr.h>
 #include <asm/apic.h>
 #include <asm/e820.h>
 #include <asm/mpspec.h>
@@ -758,6 +759,11 @@ void __init setup_arch(char **cmdline_p)

 max_low_pfn = setup_memory();

+ /* update e820 for memory not covered by WB MTRRs */
+ mtrr_bp_init();
+ if (mtrr_trim_uncached_memory(max_pfn))
+ max_low_pfn = setup_memory();
+
 #ifdef CONFIG_VMI
 /*
 * Must be after max_low_pfn is determined, and before kernel
diff --git a/include/asm-x86/e820_32.h b/include/asm-x86/e820_32.h
index e2faf5f..f1da7eb 100644
--- a/include/asm-x86/e820_32.h
+++ b/include/asm-x86/e820_32.h
@@ -19,12 +19,15 @@
 #ifndef __ASSEMBLY__

 extern struct e820map e820;
+extern void update_e820(void);

 extern int e820_all_mapped(unsigned long start, unsigned long end,
 unsigned type);
 extern int e820_any_mapped(u64 start, u64 end, unsigned type);
 extern void find_max_pfn(void);
 extern void register_bootmem_low_pages(unsigned long max_low_pfn);
+extern void add_memory_region(unsigned long long start,
+ unsigned long long size, int type);
 extern void e820_register_memory(void);
 extern void limit_regions(unsigned long long size);
 extern void print_memory_map(char *who);
commit 76c324182bbd29dfe4298ca65efb15be18055df1
Author: Yinghai Lu <yhlu.kernel.send@gmail.com>
Date: Sun Mar 23 00:16:49 2008 -0700

 x86: fix trim mtrr not to setup_memory two times

 we could call find_max_pfn() directly instead of setup_memory() to get
 max_pfn needed for mtrr trimming.

 otherwise setup_memory() is called two times... that is duplicated...

 [ mingo@elte.hu: both Thomas and me simulated a double call to
 setup_bootmem_allocator() and can confirm that it is a real bug
 which can hang in certain configs. It's not been reported yet but
 that is probably due to the relatively scarce nature of
 MTRR-trimming systems. ]

 Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
 Signed-off-by: Ingo Molnar <mingo@elte.hu>

diff --git a/arch/x86/kernel/setup_32.c b/arch/x86/kernel/setup_32.c
index a1d7071..2b3e5d4 100644
--- a/arch/x86/kernel/setup_32.c
+++ b/arch/x86/kernel/setup_32.c
@@ -406,8 +406,6 @@ static unsigned long __init setup_memory(void)
 */
 min_low_pfn = PFN_UP(init_pg_tables_end);

- find_max_pfn();
-
 max_low_pfn = find_max_low_pfn();

 #ifdef CONFIG_HIGHMEM
@@ -764,12 +762,13 @@ void __init setup_arch(char **cmdline_p)
 if (efi_enabled)
 efi_init();

- max_low_pfn = setup_memory();
-
 /* update e820 for memory not covered by WB MTRRs */
+ find_max_pfn();
 mtrr_bp_init();
 if (mtrr_trim_uncached_memory(max_pfn))
- max_low_pfn = setup_memory();
+ find_max_pfn();
+
+ max_low_pfn = setup_memory();

 #ifdef CONFIG_VMI
 /*
diff --git a/arch/x86/mm/discontig_32.c b/arch/x86/mm/discontig_32.c
index c394ca0..8e25e06 100644
--- a/arch/x86/mm/discontig_32.c
+++ b/arch/x86/mm/discontig_32.c
@@ -324,7 +324,6 @@ unsigned long __init setup_memory(void)
 * this space and use it to adjust the boundary between ZONE_NORMAL
 * and ZONE_HIGHMEM.
 */
- find_max_pfn();
 get_memcfg_numa();

 kva_pages = calculate_numa_remap_pages();
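The core of mtrr_trim_uncached_memory() in the patches above scans the variable MTRRs for the highest address covered by a write-back range and, if end_pfn reaches past it, trims the excess pages out of the usable memory map. A minimal user-space sketch of that calculation follows; the struct layout and helper are illustrative stand-ins, not the kernel's types:

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define MTRR_TYPE_WRBACK 6

/* Hypothetical stand-in for one variable MTRR as mtrr_if->get() reports
 * it: base and size are in pages, type is the memory type. */
struct var_range {
	unsigned long base;
	unsigned long size;
	int type;
};

/* Sketch of the trim check: find the highest address covered by a
 * write-back MTRR and return how many pages past it end_pfn reaches
 * (0 means nothing needs trimming). */
static unsigned long pages_to_trim(const struct var_range *ranges, int n,
				   unsigned long end_pfn)
{
	unsigned long long highest_addr = 0;
	int i;

	for (i = 0; i < n; i++) {
		unsigned long long end;

		if (ranges[i].type != MTRR_TYPE_WRBACK)
			continue;
		end = ((unsigned long long)ranges[i].base + ranges[i].size)
			<< PAGE_SHIFT;
		if (end > highest_addr)
			highest_addr = end;
	}
	if ((highest_addr >> PAGE_SHIFT) < end_pfn)
		return end_pfn - (unsigned long)(highest_addr >> PAGE_SHIFT);
	return 0;
}
```

On the numbers from Justin's system in the commit message (WB coverage ending at 0x228000000 while usable e820 RAM runs to 0x22c000000), this yields 0x4000 4 KB pages, i.e. the 64 MB the kernel reserves.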