author    | Will Deacon <will@kernel.org> | 2019-10-30 17:15:01 +0000
committer | Will Deacon <will@kernel.org> | 2020-07-21 10:50:36 +0100
commit    | bb7cdd38185a4f9fa32e62db115c2c6dceb2b621 (patch)
tree      | e595c58c6d7cdb11ba37afcf5e1b35dc3708b0ad /mm
parent    | 71c0b9a65cefa8c34eab83d337a1e3ec61fb7cc2 (diff)
download  | linux-bb7cdd38185a4f9fa32e62db115c2c6dceb2b621.tar.bz2
alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb()
In preparation for removing smp_read_barrier_depends() altogether,
move the Alpha code over to using smp_rmb() and smp_mb() directly.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
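For context, a minimal sketch of the publish/consume pattern behind the change, assuming kernel context; `struct foo`, `shared`, `publish()` and `consume()` are illustrative names, not taken from this commit. smp_read_barrier_depends() was a no-op on every architecture except Alpha, where it was implemented with the same "mb" barrier instruction that smp_rmb() uses there, so the substitution keeps Alpha correct while simplifying the barrier API.

```c
/*
 * Illustrative publish/consume sketch (kernel context assumed; all names
 * are made up for the example).
 */
struct foo {
	int val;
};

static struct foo *shared;

/* Writer: initialise the object, then publish the pointer to it. */
static void publish(struct foo *p)
{
	p->val = 1;
	smp_wmb();		/* order the init before the store below */
	WRITE_ONCE(shared, p);
}

/* Reader: load the pointer, then load through it. */
static int consume(void)
{
	struct foo *p = READ_ONCE(shared);

	if (!p)
		return 0;

	/*
	 * Historically this was smp_read_barrier_depends(), a no-op
	 * everywhere except Alpha.  After this series the Alpha code
	 * uses smp_rmb() instead, which on Alpha expands to the same
	 * "mb" instruction.
	 */
	smp_rmb();

	return p->val;
}
```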
Diffstat (limited to 'mm')
-rw-r--r-- | mm/memory.c | 2
1 file changed, 1 insertion, 1 deletion
diff --git a/mm/memory.c b/mm/memory.c
index 87ec87cdc1ff..e1f2c730d8bb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -437,7 +437,7 @@ int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
 	 * of a chain of data-dependent loads, meaning most CPUs (alpha
 	 * being the notable exception) will already guarantee loads are
 	 * seen in-order. See the alpha page table accessors for the
-	 * smp_read_barrier_depends() barriers in page table walking code.
+	 * smp_rmb() barriers in page table walking code.
 	 */
 	smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
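The hunk only reworks a comment, but the pairing it documents is worth spelling out: __pte_alloc() issues smp_wmb() before pmd_populate() publishes the new pte page, and the matching read-side ordering in the page-table walk comes for free from the address dependency on most CPUs, while Alpha supplies it explicitly in its page-table accessors (now via smp_rmb()). Below is a rough reader-side sketch with the barrier hoisted out of the accessors for clarity; `lookup_pte()` is an illustration, not the actual Alpha code.

```c
/*
 * Reader-side sketch of one page-table walk step (illustrative only; in
 * the real tree the barrier lives inside the Alpha page-table accessors).
 */
static pte_t lookup_pte(pmd_t *pmd, unsigned long addr)
{
	/* First load: the pmd entry, which points at the pte page. */
	pte_t *ptep = pte_offset_kernel(pmd, addr);

	/*
	 * The second load goes through the pointer obtained above.  The
	 * address dependency orders the two loads on every architecture
	 * except Alpha, which needs an explicit barrier here: formerly
	 * smp_read_barrier_depends(), now smp_rmb().
	 */
	smp_rmb();

	return *ptep;
}
```

The write side is what the code surrounding this hunk already does: smp_wmb() before pmd_populate(), so a walker that observes the new pmd entry also observes the initialised pte page it points to.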