author     Linus Torvalds <torvalds@linux-foundation.org>  2020-08-07 11:39:33 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2020-08-07 11:39:33 -0700
commit     81e11336d97e7a4c25a65c302ef2bf9cd9808ed4 (patch)
tree       44e069e0bb26068137a1bda7ca5125c5b068b4ab /init
parent     5631c5e0eb9035d92ceb20fcd9cdb7779a3f5cc7 (diff)
parent     912c05720f00d039103d356a59c37dc7c3995e01 (diff)
Merge branch 'akpm' (patches from Andrew)
Merge misc updates from Andrew Morton:

 - a few MM hotfixes

 - kthread, tools, scripts, ntfs and ocfs2

 - some of MM

Subsystems affected by this patch series: kthread, tools, scripts, ntfs,
ocfs2 and mm (hotfixes, pagealloc, slab-generic, slab, slub, kcsan,
debug, pagecache, gup, swap, shmem, memcg, pagemap, mremap, mincore,
sparsemem, vmalloc, kasan, pagealloc, hugetlb and vmscan).

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (162 commits)
  mm: vmscan: consistent update to pgrefill
  mm/vmscan.c: fix typo
  khugepaged: khugepaged_test_exit() check mmget_still_valid()
  khugepaged: retract_page_tables() remember to test exit
  khugepaged: collapse_pte_mapped_thp() protect the pmd lock
  khugepaged: collapse_pte_mapped_thp() flush the right range
  mm/hugetlb: fix calculation of adjust_range_if_pmd_sharing_possible
  mm: thp: replace HTTP links with HTTPS ones
  mm/page_alloc: fix memalloc_nocma_{save/restore} APIs
  mm/page_alloc.c: skip setting nodemask when we are in interrupt
  mm/page_alloc: fallbacks at most has 3 elements
  mm/page_alloc: silence a KASAN false positive
  mm/page_alloc.c: remove unnecessary end_bitidx for [set|get]_pfnblock_flags_mask()
  mm/page_alloc.c: simplify pageblock bitmap access
  mm/page_alloc.c: extract the common part in pfn_to_bitidx()
  mm/page_alloc.c: replace the definition of NR_MIGRATETYPE_BITS with PB_migratetype_bits
  mm/shuffle: remove dynamic reconfiguration
  mm/memory_hotplug: document why shuffle_zone() is relevant
  mm/page_alloc: remove nr_free_pagecache_pages()
  mm: remove vm_total_pages
  ...
Diffstat (limited to 'init')
-rw-r--r--  init/Kconfig  9
-rw-r--r--  init/main.c   2
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/init/Kconfig b/init/Kconfig
index 9082ed33a9cd..d6a0b31b13dc 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1913,9 +1913,8 @@ config SLAB_MERGE_DEFAULT
 	  command line.
 
 config SLAB_FREELIST_RANDOM
-	default n
+	bool "Randomize slab freelist"
 	depends on SLAB || SLUB
-	bool "SLAB freelist randomization"
 	help
 	  Randomizes the freelist order used on creating new pages. This
 	  security feature reduces the predictability of the kernel slab
@@ -1923,12 +1922,14 @@ config SLAB_FREELIST_RANDOM
 
 config SLAB_FREELIST_HARDENED
 	bool "Harden slab freelist metadata"
-	depends on SLUB
+	depends on SLAB || SLUB
 	help
 	  Many kernel heap attacks try to target slab cache metadata and
 	  other infrastructure. This option makes minor performance
 	  sacrifices to harden the kernel slab allocator against common
-	  freelist exploit methods.
+	  freelist exploit methods. Some slab implementations have more
+	  sanity-checking than others. This option is most effective with
+	  CONFIG_SLUB.
 
 config SHUFFLE_PAGE_ALLOCATOR
 	bool "Page allocator randomization"
diff --git a/init/main.c b/init/main.c
index 276ca3160c8c..de2f9fa6bb4a 100644
--- a/init/main.c
+++ b/init/main.c
@@ -830,7 +830,7 @@ void __init __weak arch_call_rest_init(void)
 	rest_init();
 }
 
-asmlinkage __visible void __init start_kernel(void)
+asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
 {
	char *command_line;
	char *after_dashes;
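The one-line main.c change adds __no_sanitize_address to start_kernel: the macro wraps the compiler attribute (no_sanitize_address on GCC, no_sanitize("address") on Clang) that suppresses ASan/KASAN instrumentation for a single function. The usual motivation, and a plausible reading here, is that such a function runs before the sanitizer environment is fully set up, or leaves instrumented stack redzones behind in a frame that is never unwound normally, as start_kernel's never is. A self-contained sketch of the same pattern; the NO_ASAN macro and early_clear() are made-up names, not the kernel's:

#include <stddef.h>

/* Mirror the shape of the kernel's __no_sanitize_address wrapper:
 * use the attribute where available, fall back to nothing. */
#if defined(__has_attribute)
# if __has_attribute(__no_sanitize_address__)
#  define NO_ASAN __attribute__((__no_sanitize_address__))
# endif
#endif
#ifndef NO_ASAN
# define NO_ASAN
#endif

/* Compiled without ASan checks even when the rest of the translation
 * unit is built with -fsanitize=address. */
NO_ASAN static void early_clear(volatile unsigned char *mem, size_t len)
{
	for (size_t i = 0; i < len; i++)
		mem[i] = 0;
}

int main(void)
{
	unsigned char buf[32];

	early_clear(buf, sizeof buf);
	return (int)buf[0];
}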