author     Palmer Dabbelt <palmerdabbelt@google.com>    2020-07-16 12:38:20 -0700
committer  Michael Ellerman <mpe@ellerman.id.au>        2020-07-23 17:43:23 +1000
commit     147c13413c04bc6a2bd76f2503402905e5e98cff
tree       6b8a24635c515f9c41f46a9699abb5531b68f567     /arch/powerpc/kernel/entry_64.S
parent     e93ad65e3611b06288efdf0cfd76c012df3feec1
download   linux-147c13413c04bc6a2bd76f2503402905e5e98cff.tar.bz2
powerpc/64: Fix an out of date comment about MMIO ordering
This primitive has been renamed, but because it was misspelled here in the first place the rename fixup must have missed this comment. As far as I can tell the logic is still correct: smp_mb__after_spinlock() maps to the default smp_mb() implementation, which is "sync" rather than "hwsync", but those are the same instruction (though I'm not that familiar with PowerPC).

Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200716193820.1141936-1-palmer@dabbelt.com
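For readers unfamiliar with the powerpc barrier plumbing, here is a condensed, paraphrased sketch of how smp_mb__after_spinlock() ends up as a sync instruction on powerpc. It is not a verbatim copy of the kernel headers; the exact file locations and spellings (arch/powerpc/include/asm/barrier.h, asm/spinlock.h, asm-generic/barrier.h) can differ between kernel versions.

/*
 * Condensed, paraphrased sketch of the barrier definitions the commit
 * message refers to; approximate, not copied from any specific release.
 */

/* arch/powerpc/include/asm/barrier.h (approximate): the full barrier is
 * the "sync" instruction. "hwsync" is merely an extended mnemonic for
 * the same opcode, which is why the commit message treats them as equal. */
#define mb()		__asm__ __volatile__ ("sync" : : : "memory")
#define __smp_mb()	mb()

/* asm-generic/barrier.h maps smp_mb() to __smp_mb() on SMP builds. */
#define smp_mb()	__smp_mb()

/* arch/powerpc/include/asm/spinlock.h (approximate): powerpc makes
 * smp_mb__after_spinlock() a full sync, which orders MMIO as well,
 * so the reasoning in the comment fixed below still holds. */
#define smp_mb__after_spinlock()	smp_mb()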
Diffstat (limited to 'arch/powerpc/kernel/entry_64.S')
-rw-r--r--   arch/powerpc/kernel/entry_64.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index da85c2511e57..2547c5dac07a 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -354,7 +354,7 @@ _GLOBAL(_switch)
* kernel/sched/core.c).
*
* Uncacheable stores in the case of involuntary preemption must
- * be taken care of. The smp_mb__before_spin_lock() in __schedule()
+ * be taken care of. The smp_mb__after_spinlock() in __schedule()
* is implemented as hwsync on powerpc, which orders MMIO too. So
* long as there is an hwsync in the context switch path, it will
* be executed on the source CPU after the task has performed