x86: mtrr_cleanup: first 1M may be covered in var mtrrs
author Yinghai Lu <yhlu.kernel@gmail.com>
Sat, 4 Oct 2008 21:50:32 +0000 (14:50 -0700)
committer H. Peter Anvin <hpa@zytor.com>
Sun, 5 Oct 2008 03:09:14 +0000 (20:09 -0700)
The first 1M is a don't-care region as far as the variable MTRRs are
concerned. Cover it as WB as a heuristic approximation; this is generally
what we want, and it minimizes the number of variable registers needed.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
arch/x86/kernel/cpu/mtrr/main.c

index 9086b38fbabe4aec6a75c19e1df6af7ac3078c66..663e530e08e0f878006f8a6e6ebcacd6a4522c64 100644
@@ -1293,6 +1293,15 @@ static int __init mtrr_cleanup(unsigned address_bits)
        }
        nr_range = x86_get_mtrr_mem_range(range, 0, extra_remove_base,
                                          extra_remove_size);
+       /*
+        * [0, 1M) should always be covered by a var mtrr with WB;
+        * the fixed mtrrs take effect before the var mtrrs there
+        */
+       nr_range = add_range_with_merge(range, nr_range, 0,
+                                       (1ULL<<(20 - PAGE_SHIFT)) - 1);
+       /* sort the ranges */
+       sort(range, nr_range, sizeof(struct res_range), cmp_range, NULL);
+
        range_sums = sum_ranges(range, nr_range);
        printk(KERN_INFO "total RAM coverred: %ldM\n",
               range_sums >> (20 - PAGE_SHIFT));