author     Nanyong Sun <sunnanyong@huawei.com>          2021-11-23 22:06:37 +0800
committer  Palmer Dabbelt <palmer@rivosinc.com>         2022-01-07 15:54:24 -0800
commit     fba88ede6a312705e147860c45ed9b3c3d9c6f85 (patch)
tree       5c76677cc1a29ffb1accf989349f24959a6cc876 /arch/riscv
parent     8ee304396e2f3db9c2856fb8f63548f906e6f2e1 (diff)
download   linux-fba88ede6a312705e147860c45ed9b3c3d9c6f85.tar.bz2
riscv/mm: Adjust PAGE_PROT_NONE to comply with THP semantics
This is a preparation for enabling THP migration.
As commit b65399f6111b ("arm64/mm: Change THP helpers
to comply with generic MM semantics") explains, pmd_present()
and pmd_trans_huge() are expected to behave in the following
manner:
---------------------------------------------------------
|  PMD states      |  pmd_present   |  pmd_trans_huge   |
---------------------------------------------------------
|  Mapped          |  Yes           |  Yes              |
---------------------------------------------------------
|  Splitting       |  Yes           |  Yes              |
---------------------------------------------------------
|  Migration/Swap  |  No            |  No               |
---------------------------------------------------------
At present the PROT_NONE bit reuses the READ bit, which cannot satisfy
the above semantics, for two reasons:
1. When splitting a PMD THP, the PMD is first invalidated with
pmdp_invalidate()->pmd_mkinvalid(), which clears the PRESENT bit and
the PROT_NONE/READ bit. If the PMD is read-only, this also clears the
_PAGE_LEAF property, so pmd_present() returns false.
2. When migrating, the swap entry only clears the PRESENT bit and the
PROT_NONE/READ bit; the W/X bits may still be set, so _PAGE_LEAF may be
true and pmd_present() returns true (both failure modes are sketched below).
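For illustration only, here is a minimal user-space model of the old
encoding, not kernel code: the bit positions follow the riscv PTE layout,
and pmd_present()/pmd_mkinvalid() are simplified stand-ins for the helpers
in arch/riscv/include/asm/pgtable.h.

/* Sketch of the old encoding (_PAGE_PROT_NONE == _PAGE_READ); not kernel code. */
#include <stdio.h>

#define _PAGE_PRESENT   (1UL << 0)
#define _PAGE_READ      (1UL << 1)
#define _PAGE_WRITE     (1UL << 2)
#define _PAGE_EXEC      (1UL << 3)
#define _PAGE_LEAF      (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
#define _PAGE_PROT_NONE _PAGE_READ              /* old, problematic choice */

/* Simplified: present if valid, PROT_NONE, or still carrying leaf permissions. */
static int pmd_present(unsigned long pmd)
{
	return !!(pmd & (_PAGE_PRESENT | _PAGE_PROT_NONE | _PAGE_LEAF));
}

/* Simplified pmdp_invalidate()->pmd_mkinvalid(): clear PRESENT and PROT_NONE. */
static unsigned long pmd_mkinvalid(unsigned long pmd)
{
	return pmd & ~(_PAGE_PRESENT | _PAGE_PROT_NONE);
}

int main(void)
{
	/* Problem 1: splitting a read-only THP clears every bit the check relies on. */
	unsigned long ro_thp = _PAGE_PRESENT | _PAGE_READ;
	printf("split read-only THP: pmd_present() = %d (want 1)\n",
	       pmd_present(pmd_mkinvalid(ro_thp)));

	/* Problem 2: with the old __SWP_TYPE_SHIFT of 2, a swap type can land on W/X. */
	unsigned long migration_entry = 3UL << 2;
	printf("migration entry:     pmd_present() = %d (want 0)\n",
	       pmd_present(migration_entry));
	return 0;
}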
Solution:
Moving the PROT_NONE bit from the READ bit to the GLOBAL bit satisfies
the above rules:
1. Unlike the R/W/X bits, the GLOBAL bit does not contribute to the
_PAGE_LEAF property and carries no other software meaning here.
2. The GLOBAL bit is bit 5, so the swap entry can start at bit 6 and
bits 0-5 are all zero. PRESENT, PROT_NONE and _PAGE_LEAF are therefore
all false, and pmd_present() and pmd_trans_huge() return false for a
migration/swap entry, as the second sketch below demonstrates.
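Again purely as an illustration, reusing the same simplified pmd_present()
model as above (not the kernel helpers), any swap/migration entry built with
the new layout keeps bits 0-5 clear:

/* Sketch of the new encoding (_PAGE_PROT_NONE == _PAGE_GLOBAL); not kernel code. */
#include <assert.h>
#include <stdio.h>

#define _PAGE_PRESENT   (1UL << 0)
#define _PAGE_READ      (1UL << 1)
#define _PAGE_WRITE     (1UL << 2)
#define _PAGE_EXEC      (1UL << 3)
#define _PAGE_GLOBAL    (1UL << 5)
#define _PAGE_LEAF      (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
#define _PAGE_PROT_NONE _PAGE_GLOBAL            /* new choice from this patch */

/* Swap entry layout from the hunk below: type at bits 6-10, offset above it. */
#define __SWP_TYPE_SHIFT   6
#define __SWP_TYPE_BITS    5
#define __SWP_OFFSET_SHIFT (__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
#define __swp_entry(type, offset) \
	(((unsigned long)(type) << __SWP_TYPE_SHIFT) | \
	 ((unsigned long)(offset) << __SWP_OFFSET_SHIFT))

static int pmd_present(unsigned long pmd)
{
	return !!(pmd & (_PAGE_PRESENT | _PAGE_PROT_NONE | _PAGE_LEAF));
}

int main(void)
{
	unsigned long type;

	/* Bits 0-5 of any swap entry are zero, so pmd_present() is always false. */
	for (type = 0; type < (1UL << __SWP_TYPE_BITS); type++)
		assert(!pmd_present(__swp_entry(type, 0x1234)));

	puts("all migration/swap entries report pmd_present() == 0");
	return 0;
}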
Signed-off-by: Nanyong Sun <sunnanyong@huawei.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Diffstat (limited to 'arch/riscv')
-rw-r--r--   arch/riscv/include/asm/pgtable-bits.h    2
-rw-r--r--   arch/riscv/include/asm/pgtable.h        11
2 files changed, 7 insertions, 6 deletions
diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
index 2ee413912926..a6b0c89824c2 100644
--- a/arch/riscv/include/asm/pgtable-bits.h
+++ b/arch/riscv/include/asm/pgtable-bits.h
@@ -31,7 +31,7 @@
  * _PAGE_PROT_NONE is set on not-present pages (and ignored by the hardware) to
  * distinguish them from swapped out pages
  */
-#define _PAGE_PROT_NONE	_PAGE_READ
+#define _PAGE_PROT_NONE	_PAGE_GLOBAL
 
 #define _PAGE_PFN_SHIFT 10
 
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index db3f73931af6..34230c277358 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -119,7 +119,7 @@
 /* Page protection bits */
 #define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER)
 
-#define PAGE_NONE		__pgprot(_PAGE_PROT_NONE)
+#define PAGE_NONE		__pgprot(_PAGE_PROT_NONE | _PAGE_READ)
 #define PAGE_READ		__pgprot(_PAGE_BASE | _PAGE_READ)
 #define PAGE_WRITE		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_WRITE)
 #define PAGE_EXEC		__pgprot(_PAGE_BASE | _PAGE_EXEC)
@@ -628,11 +628,12 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
  *
  * Format of swap PTE:
  *	bit            0:	_PAGE_PRESENT (zero)
- *	bit            1:	_PAGE_PROT_NONE (zero)
- *	bits      2 to 6:	swap type
- *	bits 7 to XLEN-1:	swap offset
+ *	bit       1 to 3:	_PAGE_LEAF (zero)
+ *	bit            5:	_PAGE_PROT_NONE (zero)
+ *	bits      6 to 10:	swap type
+ *	bits 10 to XLEN-1:	swap offset
  */
-#define __SWP_TYPE_SHIFT	2
+#define __SWP_TYPE_SHIFT	6
 #define __SWP_TYPE_BITS	5
 #define __SWP_TYPE_MASK		((1UL << __SWP_TYPE_BITS) - 1)
 #define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)