author:    Vlastimil Babka <vbabka@suse.cz>        2018-06-22 17:39:33 +0200
committer: Thomas Gleixner <tglx@linutronix.de>    2018-06-27 11:10:22 +0200
commit:    0d0f6249058834ffe1ceaad0bb31464af66f6e7a (patch)
tree:      74a0caf48a80d51bdc19ddbdf99c387e56aed199 /arch/x86/mm/init.c
parent:    7ce2f0393ea2396142b7faf6ee9b1f3676d08a5f (diff)
download:  linux-0d0f6249058834ffe1ceaad0bb31464af66f6e7a.tar.bz2
x86/speculation/l1tf: Protect PAE swap entries against L1TF
The PAE 3-level paging code currently doesn't mitigate L1TF by flipping the offset bits, and uses the high PTE word, thus bits 32-36 for type, 37-63 for offset. The lower word is zeroed, thus systems with less than 4GB memory are safe. With 4GB to 128GB the swap type selects the memory locations vulnerable to L1TF; with even more memory, also the swap offset influences the address. This might be a problem with 32bit PAE guests running on large 64bit hosts.

By continuing to keep the whole swap entry in either the high or low 32bit word of the PTE we would limit the swap size too much. Thus this patch uses the whole PAE PTE with the same layout as the 64bit version does. The macros just become a bit tricky since they assume the arch-dependent swp_entry_t to be 32bit.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Michal Hocko <mhocko@suse.com>
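[Annotation] To make the layout concrete, here is a minimal user-space sketch of the encoding the message describes, not the kernel's actual pgtable-3level.h macros. The helper names are made up, and the constants (SWP_TYPE_BITS == 5, offsets starting at bit 9, i.e. _PAGE_BIT_PROTNONE + 1) are assumed from the x86-64 layout the patch mirrors.

/* Illustrative sketch only -- hypothetical helpers, not the kernel's
 * macros. Build with: cc sketch.c && ./a.out */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define SWP_TYPE_BITS        5    /* up to 32 swap files */
#define SWP_OFFSET_FIRST_BIT 9    /* assumed: _PAGE_BIT_PROTNONE + 1 */
#define SWP_OFFSET_SHIFT     (SWP_OFFSET_FIRST_BIT + SWP_TYPE_BITS)

/* The type lands in PTE bits 59-63; the offset is stored inverted in
 * bits 9-58, so any physical address the CPU might speculatively form
 * from a swapped-out PTE points above the installed RAM. Bit 0
 * (present) stays clear. */
static uint64_t swp_entry_to_pte(uint64_t type, uint64_t offset)
{
	return (type << (64 - SWP_TYPE_BITS)) |
	       (~offset << SWP_OFFSET_SHIFT >> SWP_TYPE_BITS);
}

static uint64_t pte_swp_type(uint64_t pte)
{
	return pte >> (64 - SWP_TYPE_BITS);
}

static uint64_t pte_swp_offset(uint64_t pte)
{
	/* Un-invert, shift the type bits out the top, then shift down. */
	return ~pte << SWP_TYPE_BITS >> SWP_OFFSET_SHIFT;
}

int main(void)
{
	uint64_t pte = swp_entry_to_pte(3, 0x12345);

	assert(pte_swp_type(pte) == 3 && pte_swp_offset(pte) == 0x12345);
	printf("pte=%#jx\n", (uintmax_t)pte);
	return 0;
}

The trickiness the message mentions is that the generic swp_entry_t is only 32bit on a 32bit kernel, so the real patch cannot do the conversion in a single step like this sketch; it splits it between __swp_entry() and __swp_entry_to_pte().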
Diffstat (limited to 'arch/x86/mm/init.c')
-rw-r--r--  arch/x86/mm/init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index c0870df32b2d..862191ed3d6e 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -896,7 +896,7 @@ unsigned long max_swapfile_size(void)
 		 * We encode swap offsets also with 3 bits below those for pfn
 		 * which makes the usable limit higher.
 		 */
-#ifdef CONFIG_X86_64
+#if CONFIG_PGTABLE_LEVELS > 2
 		l1tf_limit <<= PAGE_SHIFT - SWP_OFFSET_FIRST_BIT;
 #endif
 		pages = min_t(unsigned long, l1tf_limit, pages);
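[Annotation] The hunk widens the shift to all configurations with a 64bit PTE (PAE and 64bit), since both now place swap offsets at SWP_OFFSET_FIRST_BIT. The shift converts the L1TF pfn limit into a swap-offset limit: a swap offset begins three PTE bits below where a pfn begins (bit 9 vs. PAGE_SHIFT == 12), so the same safe bit positions cover eight times as many swap offsets as pfns. A tiny sketch of that arithmetic, with the constants assumed from the x86 headers:

/* Illustrative arithmetic only; constants assumed from the x86 headers
 * (PAGE_SHIFT == 12, SWP_OFFSET_FIRST_BIT == _PAGE_BIT_PROTNONE + 1 == 9). */
#include <stdio.h>

#define PAGE_SHIFT           12
#define SWP_OFFSET_FIRST_BIT 9

int main(void)
{
	/* e.g. half of a 36-bit physical address space, in pages */
	unsigned long long l1tf_limit = 1ULL << (36 - 1 - PAGE_SHIFT);

	/* Swap offsets start 3 PTE bits below pfns, so the limit in
	 * offsets is 8x the limit in pfns. */
	l1tf_limit <<= PAGE_SHIFT - SWP_OFFSET_FIRST_BIT;

	printf("max safe swap offset: %#llx\n", l1tf_limit); /* 0x4000000 */
	return 0;
}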