| author | Hailong Liu <liu.hailong6@zte.com.cn> | 2021-01-12 15:49:14 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2021-01-12 18:12:54 -0800 |
| commit | 29970dc24faf0078beb4efab5455b4f504d2198d (patch) | |
| tree | ebe8dcf7ced8b12e369b90183c0257401c12949c /net | |
| parent | 7ea510b92c7c9b4eb5ff72e6b4bbad4b0407a914 (diff) | |
| download | linux-29970dc24faf0078beb4efab5455b4f504d2198d.tar.bz2 | |
arm/kasan: fix the array size of kasan_early_shadow_pte[]
The size of kasan_early_shadow_pte[] is currently PTRS_PER_PTE, which is
defined as 512 on arm.  This means it only covers the Linux pte entries
in the first half of the pte page, but not the hardware (HWTABLE) pte
entries that follow them on arm.
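For context, a rough sketch of the sizing issue, assuming the arm
2-level layout where each pte page holds the 512 Linux pte entries
followed by the hardware pte entries (PTE_HWTABLE_PTRS); the snippet is
illustrative, not copied verbatim from the patch:

  /*
   * Illustration only: on arm a pte page is laid out as
   *   [ 512 Linux ptes | 512 h/w ptes ] = 4KB,
   * so an array of PTRS_PER_PTE entries spans only the first half.
   * Covering both halves means sizing it with PTE_HWTABLE_PTRS as
   * well (0 on architectures without such a split):
   */
  #ifndef PTE_HWTABLE_PTRS
  #define PTE_HWTABLE_PTRS 0
  #endif
  extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS];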
The only reason it currently works is that the symbol
kasan_early_shadow_page, which immediately follows kasan_early_shadow_pte
in memory, is page aligned, so kasan_early_shadow_pte effectively looks
like a 4KB array.  But we cannot rely on that ordering staying the same
with a different compiler/linker, or once more bss symbols are
introduced.
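For clarity, the accidental .bss layout that hides the undersized array
looks roughly like this (purely illustrative):

  kasan_early_shadow_pte       2KB (PTRS_PER_PTE entries)
  <pad to next page boundary>  accidentally stands in for the missing 2KB
  kasan_early_shadow_page      4KB, page aligned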
We verified this with QEMU + vexpress: we put a 512KB symbol with the
attribute __section(".bss..page_aligned") after kasan_early_shadow_pte
and poisoned it after kasan_early_init().  With CONFIG_KASAN enabled,
the kernel then failed to boot.
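The reproducer looked roughly like the following; the symbol name is
hypothetical, and only the size and section attribute match the test
described above:

  /*
   * Hypothetical reproducer: a 512KB page-aligned bss symbol that the
   * linker can place right after kasan_early_shadow_pte[], where the
   * array's missing hardware-pte half would otherwise be padding.
   * Poisoning it after kasan_early_init() with CONFIG_KASAN enabled
   * made the kernel fail to boot.
   */
  static char dummy_after_shadow_pte[512 * 1024]
  	__section(".bss..page_aligned");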
Link: https://lkml.kernel.org/r/20210109044622.8312-1-hailongliiu@yeah.net
Signed-off-by: Hailong Liu <liu.hailong6@zte.com.cn>
Signed-off-by: Ziliang Guo <guo.ziliang@zte.com.cn>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>