author	Andy Lutomirski <luto@kernel.org>	2019-06-21 08:43:04 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2019-06-21 13:31:52 -0700
commit	ff17bbe0bb405ad8b36e55815d381841f9fdeebc (patch)
tree	cadee7583ae06afc80ada88cb3950cab2d86903c /arch
parent	a4c33bbb660b89fc7f21957386fb3a0b38e43f98 (diff)
download	linux-ff17bbe0bb405ad8b36e55815d381841f9fdeebc.tar.bz2
x86/vdso: Prevent segfaults due to hoisted vclock reads
GCC 5.5.0 sometimes cleverly hoists reads of the pvclock and/or hvclock pages before the vclock mode checks. This creates a path through vclock_gettime() in which no vclock is enabled at all (due to disabled TSC on old CPUs, for example) but the pvclock or hvclock page is nevertheless read. This will segfault on bare metal.

This fixes commit 459e3a21535a ("gcc-9: properly declare the {pv,hv}clock_page storage") in the sense that, before that commit, GCC didn't seem to generate the offending code. There was nothing wrong with that commit per se, and -stable maintainers should backport this to all supported kernels regardless of whether the offending commit was present, since the same crash could just as easily be triggered by the phase of the moon.

On GCC 9.1.1, this doesn't seem to affect the generated code at all, so I'm not too concerned about performance regressions from this fix.

Cc: stable@vger.kernel.org
Cc: x86@kernel.org
Cc: Borislav Petkov <bp@alien8.de>
Reported-by: Duncan Roe <duncan_roe@optusnet.com.au>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
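For illustration, here is a minimal user-space sketch of the pattern described above. The clock_page array, vclock_mode variable and read_clock() function are hypothetical stand-ins for the real vDSO data and vgetcyc(); barrier() is written out as the empty asm with a "memory" clobber that the kernel macro of the same name also expands to.

/*
 * Illustrative sketch only; hypothetical names, not kernel code.
 * clock_page models a page that is mapped only while the corresponding
 * vclock is enabled; because it is declared as ordinary storage, the
 * compiler may assume it is always readable and hoist the load.
 */
#include <stdint.h>

#define VCLOCK_PVCLOCK	2

extern const uint64_t clock_page[];	/* may not actually be mapped */
extern int vclock_mode;

/* Empty asm with a "memory" clobber: no instructions are emitted, but the
 * compiler may not move memory accesses across it. */
#define barrier()	__asm__ __volatile__("" : : : "memory")

uint64_t read_clock(void)
{
	/*
	 * Without the barrier, the compiler is free to load clock_page[0]
	 * before testing vclock_mode. If the check would have failed because
	 * the page is not mapped, that hoisted load segfaults.
	 */
	if (vclock_mode == VCLOCK_PVCLOCK) {
		barrier();		/* keep the load after the check */
		return clock_page[0];
	}
	return UINT64_MAX;
}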
Diffstat (limited to 'arch')
-rw-r--r--	arch/x86/entry/vdso/vclock_gettime.c | 15
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
index 0f82a70c7682..4aed41f638bb 100644
--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -128,13 +128,24 @@ notrace static inline u64 vgetcyc(int mode)
 {
 	if (mode == VCLOCK_TSC)
 		return (u64)rdtsc_ordered();
+
+	/*
+	 * For any memory-mapped vclock type, we need to make sure that gcc
+	 * doesn't cleverly hoist a load before the mode check. Otherwise we
+	 * might end up touching the memory-mapped page even if the vclock in
+	 * question isn't enabled, which will segfault. Hence the barriers.
+	 */
 #ifdef CONFIG_PARAVIRT_CLOCK
-	else if (mode == VCLOCK_PVCLOCK)
+	if (mode == VCLOCK_PVCLOCK) {
+		barrier();
 		return vread_pvclock();
+	}
 #endif
 #ifdef CONFIG_HYPERV_TSCPAGE
-	else if (mode == VCLOCK_HVCLOCK)
+	if (mode == VCLOCK_HVCLOCK) {
+		barrier();
 		return vread_hvclock();
+	}
 #endif
 	return U64_MAX;
 }
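A note on the shape of the fix: barrier() in the kernel expands to an empty asm statement with a "memory" clobber, so it emits no instructions of its own. The change therefore only constrains compile-time code motion, keeping the memory-mapped page read after the mode check, which is exactly the hoisting that caused the segfault; the runtime cost on the vDSO fast path should be nil, consistent with the observation above that the code generated by GCC 9.1.1 is unchanged.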