From 7904ba8a66f400182a204893c92098994e22a88d Mon Sep 17 00:00:00 2001
From: Peter Zijlstra
Date: Wed, 19 Sep 2018 10:50:24 +0200
Subject: x86/mm/cpa: Optimize __cpa_flush_range()

If we IPI for WBINVD, then we might as well kill the entire TLB too.
But if we don't have to invalidate cache, there is no reason not to
use a range TLB flush.

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Thomas Gleixner
Reviewed-by: Dave Hansen
Cc: Bin Yang
Cc: Mark Gross
Link: https://lkml.kernel.org/r/20180919085948.195633798@infradead.org
---
 arch/x86/mm/pageattr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'arch/x86/mm')

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index dc552824e86a..62bb30b4bd2a 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -291,7 +291,7 @@ static bool __cpa_flush_range(unsigned long start, int numpages, int cache)
 
 	WARN_ON(PAGE_ALIGN(start) != start);
 
-	if (!static_cpu_has(X86_FEATURE_CLFLUSH)) {
+	if (cache && !static_cpu_has(X86_FEATURE_CLFLUSH)) {
 		cpa_flush_all(cache);
 		return true;
 	}
-- 
cgit v1.2.3