| author | Maciej W. Rozycki <macro@imgtec.com> | 2016-05-17 06:12:27 +0100 |
|---|---|---|
| committer | Ralf Baechle <ralf@linux-mips.org> | 2016-05-17 11:02:45 +0200 |
| commit | e49d38488515057dba8f0c2ba4cfde5be4a7281f (patch) | |
| tree | cc141fabd9a4622c560e71beaaddffd0dd0ca5be /arch | |
| parent | 0868971a8dad52e726711a2a4b161ace9ed011fa (diff) | |
| download | linux-e49d38488515057dba8f0c2ba4cfde5be4a7281f.tar.bz2 | |
MIPS: MSA: Fix a link error on `_init_msa_upper' with older GCC
Fix a build regression from commit c9017757c532 ("MIPS: init upper 64b
of vector registers when MSA is first used"):
arch/mips/built-in.o: In function `enable_restore_fp_context':
traps.c:(.text+0xbb90): undefined reference to `_init_msa_upper'
traps.c:(.text+0xbb90): relocation truncated to fit: R_MIPS_26 against `_init_msa_upper'
traps.c:(.text+0xbef0): undefined reference to `_init_msa_upper'
traps.c:(.text+0xbef0): relocation truncated to fit: R_MIPS_26 against `_init_msa_upper'
seen with !CONFIG_CPU_HAS_MSA configurations built with older GCC versions, which are unable to figure out that the calls to `_init_msa_upper' are indeed dead.
Of the many ways to tackle this failure, choose the approach we have already taken in `thread_msa_context_live': wrap the call in an inline helper that checks `cpu_has_msa' where it is a compile-time constant, letting the compiler optimise the call away.
[ralf@linux-mips.org: Drop patch segment to junk file.]
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: stable@vger.kernel.org # v3.16+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13271/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
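
For context, the pattern borrowed from `thread_msa_context_live' can be demonstrated outside the kernel. The sketch below is illustrative only: the file name, `init_msa_upper_hw' and the hard-coded `cpu_has_msa' are hypothetical stand-ins for `_init_msa_upper' and the `cpu_has_msa' macro of a !CONFIG_CPU_HAS_MSA build, not kernel code. With optimisation enabled (the kernel always builds with -O2), the guarded call is folded away, so the deliberately undefined helper never produces an `undefined reference' at link time.

/*
 * sketch.c: illustrative only; every name below is a hypothetical
 * stand-in for the kernel symbols quoted in the log above.
 * Build with optimisation, e.g. "gcc -O2 sketch.c".
 */

/* Mirrors a !CONFIG_CPU_HAS_MSA build, where cpu_has_msa folds to 0. */
#define cpu_has_msa 0

/* Stand-in for _init_msa_upper(): declared but deliberately never defined. */
void init_msa_upper_hw(void);

static inline void init_msa_upper(void)
{
	/*
	 * Because cpu_has_msa is a compile-time constant 0 here, the early
	 * return is taken unconditionally and the optimiser discards the
	 * call below, so the object file carries no reference to the
	 * undefined init_msa_upper_hw() and the link succeeds.
	 */
	if (__builtin_constant_p(cpu_has_msa) && !cpu_has_msa)
		return;

	init_msa_upper_hw();
}

int main(void)
{
	init_msa_upper();	/* links despite the missing helper */
	return 0;
}

Building this with "gcc -O2 sketch.c" links cleanly; removing the `__builtin_constant_p' guard brings back a link failure analogous to the one quoted above, since the unconditional call to the undefined helper can no longer be eliminated.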
Diffstat (limited to 'arch')
-rw-r--r-- | arch/mips/include/asm/msa.h | 13 |
-rw-r--r-- | arch/mips/kernel/traps.c | 6 |
2 files changed, 16 insertions, 3 deletions
diff --git a/arch/mips/include/asm/msa.h b/arch/mips/include/asm/msa.h
index bbb85fe21642..6e4effa6f626 100644
--- a/arch/mips/include/asm/msa.h
+++ b/arch/mips/include/asm/msa.h
@@ -147,6 +147,19 @@ static inline void restore_msa(struct task_struct *t)
 		_restore_msa(t);
 }
 
+static inline void init_msa_upper(void)
+{
+	/*
+	 * Check cpu_has_msa only if it's a constant.  This will allow the
+	 * compiler to optimise out code for CPUs without MSA without adding
+	 * an extra redundant check for CPUs with MSA.
+	 */
+	if (__builtin_constant_p(cpu_has_msa) && !cpu_has_msa)
+		return;
+
+	_init_msa_upper();
+}
+
 #ifdef TOOLCHAIN_SUPPORTS_MSA
 
 #define __BUILD_MSA_CTL_REG(name, cs)				\
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index 56f4161b98b9..4a1712b5abdf 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -1246,7 +1246,7 @@ static int enable_restore_fp_context(int msa)
 		err = init_fpu();
 		if (msa && !err) {
 			enable_msa();
-			_init_msa_upper();
+			init_msa_upper();
 			set_thread_flag(TIF_USEDMSA);
 			set_thread_flag(TIF_MSA_CTX_LIVE);
 		}
@@ -1309,7 +1309,7 @@ static int enable_restore_fp_context(int msa)
 	 */
 	prior_msa = test_and_set_thread_flag(TIF_MSA_CTX_LIVE);
 	if (!prior_msa && was_fpu_owner) {
-		_init_msa_upper();
+		init_msa_upper();
 
 		goto out;
 	}
@@ -1326,7 +1326,7 @@ static int enable_restore_fp_context(int msa)
 		 * of each vector register such that it cannot see data left
 		 * behind by another task.
 		 */
-		_init_msa_upper();
+		init_msa_upper();
 	} else {
 		/* We need to restore the vector context. */
 		restore_msa(current);