author     Vincenzo Frascino <vincenzo.frascino@arm.com>   2020-12-22 12:01:31 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>  2020-12-22 12:55:07 -0800
commit     e5b8d9218951e59df986f627ec93569a0d22149b (patch)
tree       ca062c821b1621f828f36e535b63ae1764627ca4 /arch/arm64
parent     85f49cae4dfcfae16f17418466e00370091de03d (diff)
download   linux-e5b8d9218951e59df986f627ec93569a0d22149b.tar.bz2
arm64: mte: reset the page tag in page->flags
For compatibility with the other modes, hardware tag-based KASAN stores
the tag associated with a page in page->flags. Because of this, the
kernel faults on access when it allocates a page with an initial tag and
the user then changes the tags.
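For reference, a condensed sketch of the helpers that keep this tag in
page->flags, based on the page_kasan_tag()/page_kasan_tag_set()
definitions in include/linux/mm.h (config guards omitted):

  /* Condensed from include/linux/mm.h (tag-based KASAN modes). */
  static inline u8 page_kasan_tag(const struct page *page)
  {
          /* The tag lives in a dedicated bitfield of page->flags. */
          return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
  }

  static inline void page_kasan_tag_set(struct page *page, u8 tag)
  {
          page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
          page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
  }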
Reset the tag that the kernel associates with a page in all the relevant
places to prevent kernel faults on access.
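The reset itself is small: page_kasan_tag_reset() writes the match-all
tag back into page->flags, so kernel accesses through addresses built
from it are no longer checked against whatever tags userspace sets. A
condensed sketch, again from include/linux/mm.h:

  static inline void page_kasan_tag_reset(struct page *page)
  {
          /* 0xff is the match-all tag; accesses via it are not checked. */
          page_kasan_tag_set(page, 0xff);
  }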
Note: an alternative to this approach could be to modify page_to_virt().
That, however, could end up being racy: if one CPU checks the
PG_mte_tagged bit and decides that the page is not tagged, but another
CPU concurrently maps the same page with PROT_MTE so that it becomes
tagged, the subsequent kernel access would fail.
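To illustrate: arm64's page_to_virt() already folds the tag stored in
page->flags into the returned address (first macro below, condensed from
arch/arm64/include/asm/memory.h); page_to_virt_racy() is a hypothetical
name for roughly what the rejected check-then-tag variant would look
like:

  /* Condensed from arch/arm64/include/asm/memory.h: the returned */
  /* kernel address carries the tag stored in page->flags. */
  #define page_to_virt(page) ({                                         \
          void *__addr = __va(page_to_phys(page));                      \
          (void *)__tag_set(__addr, page_kasan_tag(page));              \
  })

  /* Hypothetical, racy alternative: decide at address-generation time. */
  #define page_to_virt_racy(page) ({                                    \
          void *__addr = __va(page_to_phys(page));                      \
          u8 __tag = test_bit(PG_mte_tagged, &(page)->flags) ?          \
                  0xff /* match-all */ : page_kasan_tag(page);          \
          (void *)__tag_set(__addr, __tag);                             \
  })

The window: one CPU evaluates page_to_virt_racy(), finds PG_mte_tagged
clear and embeds the stored KASAN tag; before it dereferences the
result, another CPU maps the page with PROT_MTE and changes the memory
tags; the first CPU's access through the now-stale tag then faults.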
Link: https://lkml.kernel.org/r/9073d4e973747a6f78d5bdd7ebe17f290d087096.1606161801.git.andreyknvl@google.com
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Branislav Rankov <Branislav.Rankov@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Marco Elver <elver@google.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'arch/arm64')
 arch/arm64/kernel/hibernate.c | 5 +++++
 arch/arm64/kernel/mte.c       | 9 +++++++++
 arch/arm64/mm/copypage.c      | 9 +++++++++
 arch/arm64/mm/mteswap.c       | 9 +++++++++
 4 files changed, 32 insertions(+), 0 deletions(-)
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 42003774d261..9c9f47e9f7f4 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -371,6 +371,11 @@ static void swsusp_mte_restore_tags(void)
 		unsigned long pfn = xa_state.xa_index;
 		struct page *page = pfn_to_online_page(pfn);
 
+		/*
+		 * It is not required to invoke page_kasan_tag_reset(page)
+		 * at this point since the tags stored in page->flags are
+		 * already restored.
+		 */
 		mte_restore_page_tags(page_address(page), tags);
 
 		mte_free_tag_storage(tags);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 78f81c459cb1..3a6216b0002e 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -34,6 +34,15 @@ static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 			return;
 	}
 
+	page_kasan_tag_reset(page);
+	/*
+	 * We need smp_wmb() in between setting the flags and clearing the
+	 * tags because if another thread reads page->flags and builds a
+	 * tagged address out of it, there is an actual dependency to the
+	 * memory access, but on the current thread we do not guarantee that
+	 * the new page->flags are visible before the tags were updated.
+	 */
+	smp_wmb();
 	mte_clear_page_tags(page_address(page));
 }
 
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 70a71f38b6a9..b5447e53cd73 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -23,6 +23,15 @@ void copy_highpage(struct page *to, struct page *from)
 
 	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
 		set_bit(PG_mte_tagged, &to->flags);
+		page_kasan_tag_reset(to);
+		/*
+		 * We need smp_wmb() in between setting the flags and clearing the
+		 * tags because if another thread reads page->flags and builds a
+		 * tagged address out of it, there is an actual dependency to the
+		 * memory access, but on the current thread we do not guarantee that
+		 * the new page->flags are visible before the tags were updated.
+		 */
+		smp_wmb();
 		mte_copy_page_tags(kto, kfrom);
 	}
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index c52c1847079c..7c4ef56265ee 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -53,6 +53,15 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
 	if (!tags)
 		return false;
 
+	page_kasan_tag_reset(page);
+	/*
+	 * We need smp_wmb() in between setting the flags and clearing the
+	 * tags because if another thread reads page->flags and builds a
+	 * tagged address out of it, there is an actual dependency to the
+	 * memory access, but on the current thread we do not guarantee that
+	 * the new page->flags are visible before the tags were updated.
+	 */
+	smp_wmb();
 	mte_restore_page_tags(page_address(page), tags);
 
 	return true;