If platform code returns a NULL pointer to the FDT, initial_boot_params
will not get set to a valid pointer and attempting to find the /chosen
node in it will cause a NULL pointer dereference and the kernel to crash
immediately on startup - with no output to the console.
Fix this by checking that initial_boot_params is valid before using it.
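As a rough sketch of the guard (illustrative only; the fdt_path_offset()
lookup and the surrounding relocation helper are assumed):

	int node;

	if (!initial_boot_params)
		return 0;	/* platform gave us no FDT, skip relocation */

	node = fdt_path_offset(initial_boot_params, "/chosen");
	if (node < 0)
		return 0;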
Fixes: 405bc8fd12f5 ("MIPS: Kernel: Implement KASLR using CONFIG_RELOCATABLE")
Cc: stable@vger.kernel.org # 4.7+
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14414/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Provide a default implementation of mips_cpc_default_phys_base() which
simply returns 0, and adjust mips_cpc_phys_base() to allow for
mips_cpc_default_phys_base() returning 0. This allows kernels which
include CPC support to be built without platform code & simply ignore
the CPC if it wasn't already enabled by the bootloader.
This fixes link failures such as the following from generic defconfigs:
arch/mips/built-in.o: In function `mips_cpc_phys_base':
arch/mips/kernel/mips-cpc.c:47: undefined reference to `mips_cpc_default_phys_base'
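A hedged sketch of the new default (the exact prototype was later adjusted,
see the note below):

	phys_addr_t __weak mips_cpc_default_phys_base(void)
	{
		return 0;
	}

mips_cpc_phys_base() then treats a zero return as "no CPC configured by the
platform" and bails out instead of mapping the region.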
[ralf@linux-mips.org: changed prototype for coding style compliance.]
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14401/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Merge the gup_flags cleanups from Lorenzo Stoakes:
"This patch series adjusts functions in the get_user_pages* family such
that desired FOLL_* flags are passed as an argument rather than
implied by flags.
The purpose of this change is to make the use of FOLL_FORCE explicit
so it is easier to grep for and clearer to callers that this flag is
being used. The use of FOLL_FORCE is an issue as it overrides missing
VM_READ/VM_WRITE flags for the VMA whose pages we are reading
from/writing to, which can result in surprising behaviour.
The patch series came out of the discussion around commit 38e088546522
("mm: check VMA flags to avoid invalid PROT_NONE NUMA balancing"),
which addressed a BUG_ON() being triggered when a page was faulted in
with PROT_NONE set but having been overridden by FOLL_FORCE.
do_numa_page() was run on the assumption that the page _must_ be one
marked for NUMA node migration, as an actual PROT_NONE page would have
been dealt with prior to this code path; however, FOLL_FORCE introduced
a situation where this assumption did not hold.
See
https://marc.info/?l=linux-mm&m=147585445805166
for the patch proposal"
Additionally, there's a fix for an ancient bug related to FOLL_FORCE and
FOLL_WRITE by me.
[ This branch was rebased recently to add a few more acked-by's and
reviewed-by's ]
* gup_flag-cleanups:
mm: replace access_process_vm() write parameter with gup_flags
mm: replace access_remote_vm() write parameter with gup_flags
mm: replace __access_remote_vm() write parameter with gup_flags
mm: replace get_user_pages_remote() write/force parameters with gup_flags
mm: replace get_user_pages() write/force parameters with gup_flags
mm: replace get_vaddr_frames() write/force parameters with gup_flags
mm: replace get_user_pages_locked() write/force parameters with gup_flags
mm: replace get_user_pages_unlocked() write/force parameters with gup_flags
mm: remove write/force parameters from __get_user_pages_unlocked()
mm: remove write/force parameters from __get_user_pages_locked()
mm: remove gup_flags FOLL_WRITE games from __get_user_pages()
|
|
This removes the 'write' argument from access_process_vm() and replaces
it with 'gup_flags' as use of this function previously silently implied
FOLL_FORCE, whereas after this patch callers explicitly pass this flag.
We make this explicit as use of FOLL_FORCE can result in surprising
behaviour (and hence bugs) within the mm subsystem.
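An illustrative before/after for a caller that previously forced a write
(the variables here are hypothetical; real call sites vary):

	/* before: write=1, with FOLL_FORCE silently implied */
	access_process_vm(child, addr, &val, sizeof(val), 1);

	/* after: the forced write is spelled out by the caller */
	access_process_vm(child, addr, &val, sizeof(val),
			  FOLL_FORCE | FOLL_WRITE);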
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Pull MIPS updates from Ralf Baechle:
"This is the main MIPS pull request for 4.9:
MIPS core arch code:
- traps: 64bit kernels should read CP0_EBase 64bit
- traps: Convert ebase to KSEG0
- c-r4k: Drop bc_wback_inv() from icache flush
- c-r4k: Split user/kernel flush_icache_range()
- cacheflush: Use __flush_icache_user_range()
- uprobes: Flush icache via kernel address
- KVM: Use __local_flush_icache_user_range()
- c-r4k: Fix flush_icache_range() for EVA
- Fix -mabi=64 build of vdso.lds
- VDSO: Drop duplicated -I*/-E* aflags
- tracing: move insn_has_delay_slot to a shared header
- tracing: disable uprobe/kprobe on compact branch instructions
- ptrace: Fix regs_return_value for kernel context
- Squash lines for simple wrapper functions
- Move identification of VP(E) into proc.c from smp-mt.c
- Add definitions of SYNC barrier stype values
- traps: Ensure full EBase is written
- tlb-r4k: If there are wired entries, don't use TLBINVF
- Sanitise coherentio semantics
- dma-default: Don't check hw_coherentio if device is non-coherent
- Support per-device DMA coherence
- Adjust MIPS64 CAC_BASE to reflect Config.K0
- Support generating Flattened Image Trees (.itb)
- generic: Introduce generic DT-based board support
- generic: Convert SEAD-3 to a generic board
- Enable hardened usercopy
- Don't specify STACKPROTECTOR in defconfigs
Octeon:
- Delete dead code and files across the platform.
- Take all memory into use by default.
- Rename upper case variables in setup code to lowercase.
- Delete legacy hack for broken bootloaders.
- Leave maintaining the link state to the actual ethernet/PHY drivers.
- Add DTS for D-Link DSR-500N.
- Fix PCI interrupt routing on D-Link DSR-500N.
Pistachio:
- Remove ANDROID_TIMED_OUTPUT from defconfig
TX39xx:
- Move GPIO setup from .mem_setup() to .arch_init()
- Convert to Common Clock Framework
TX49xx:
- Move GPIO setup from .mem_setup() to .arch_init()
- Convert to Common Clock Framework
txx9wdt:
- Add missing clock (un)prepare calls for CCF
BMIPS:
- Add PW, GPIO, SDHCI and NAND device node names
- Support APPENDED_DTB
- Add missing bcm97435svmb to DT_NONE
- Rename bcm96358nb4ser to bcm6358-neufbox4-sercom
- Add DT examples for BCM63268, BCM3368 and BCM6362
- Add support for BCM3368 and BCM6362
PCI
- Reduce stack frame usage
- Use struct list_head lists
- Support for CONFIG_PCI_DOMAINS_GENERIC
- Make pcibios_set_cache_line_size an initcall
- Inline pcibios_assign_all_busses
- Split pci.c into pci.c & pci-legacy.c
- Introduce CONFIG_PCI_DRIVERS_LEGACY
- Support generic drivers
CPC
- Convert bare 'unsigned' to 'unsigned int'
- Avoid lock when MIPS CM >= 3 is present
GIC:
- Delete unused file smp-gic.c
mt7620:
- Delete unnecessary assignment for the field "owner" from PCI
BCM63xx:
- Let clk_disable() return immediately if clk is NULL
pm-cps:
- Change FSB workaround to CPU blacklist
- Update comments on barrier instructions
- Use MIPS standard lightweight ordering barrier
- Use MIPS standard completion barrier
- Remove selection of sync types
- Add MIPSr6 CPU support
- Support CM3 changes to Coherence Enable Register
SMP:
- Wrap call to mips_cpc_lock_other in mips_cm_lock_other
- Introduce mechanism for freeing and allocating IPIs
cpuidle:
- cpuidle-cps: Enable use with MIPSr6 CPUs.
SEAD3:
- Rewrite to use DT and generic kernel feature.
USB:
- host: ehci-sead3: Remove SEAD-3 EHCI code
FBDEV:
- cobalt_lcdfb: Drop SEAD3 support
dt-bindings:
- Document a binding for simple ASCII LCDs
auxdisplay:
- img-ascii-lcd: driver for simple ASCII LCD displays
irqchip i8259:
- i8259: Add domain before mapping parent irq
- i8259: Allow platforms to override poll function
- i8259: Remove unused i8259A_irq_pending
Malta:
- Rewrite to use DT
of/platform:
- Probe "isa" busses by default
CM:
- Print CM error reports upon bus errors
Module:
- Migrate exception table users off module.h and onto extable.h
- Make various drivers explicitly non-modular:
- Audit and remove any unnecessary uses of module.h
mailmap:
- Canonicalize to Qais' current email address.
Documentation:
- MIPS supports HAVE_REGS_AND_STACK_ACCESS_API
Loongson1C:
- Add CPU support for Loongson1C
- Add board support
- Add defconfig
- Add RTC support for Loongson1C board
All this except one Documentation fix has sat in linux-next and has
survived Imagination's automated build test system"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (127 commits)
Documentation: MIPS supports HAVE_REGS_AND_STACK_ACCESS_API
MIPS: ptrace: Fix regs_return_value for kernel context
MIPS: VDSO: Drop duplicated -I*/-E* aflags
MIPS: Fix -mabi=64 build of vdso.lds
MIPS: Enable hardened usercopy
MIPS: generic: Convert SEAD-3 to a generic board
MIPS: generic: Introduce generic DT-based board support
MIPS: Support generating Flattened Image Trees (.itb)
MIPS: Adjust MIPS64 CAC_BASE to reflect Config.K0
MIPS: Print CM error reports upon bus errors
MIPS: Support per-device DMA coherence
MIPS: dma-default: Don't check hw_coherentio if device is non-coherent
MIPS: Sanitise coherentio semantics
MIPS: PCI: Support generic drivers
MIPS: PCI: Introduce CONFIG_PCI_DRIVERS_LEGACY
MIPS: PCI: Split pci.c into pci.c & pci-legacy.c
MIPS: PCI: Inline pcibios_assign_all_busses
MIPS: PCI: Make pcibios_set_cache_line_size an initcall
MIPS: PCI: Support for CONFIG_PCI_DOMAINS_GENERIC
MIPS: PCI: Use struct list_head lists
...
|
|
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14380/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Daniel Walker reported problems which happen when the
crash_kexec_post_notifiers kernel option is enabled
(https://lkml.org/lkml/2015/6/24/44).
In that case, smp_send_stop() is called before entering kdump routines
which assume other CPUs are still online. As a result, the kdump
routines fail to save other CPUs' registers. Additionally, on MIPS
OCTEON it fails to stop the watchdog timer.
To fix this problem, call a new kdump-friendly function,
crash_smp_send_stop(), instead of smp_send_stop(), when
crash_kexec_post_notifiers is enabled. crash_smp_send_stop() is a
weak function that just calls smp_send_stop(). Architecture code
should override it so that kdump can work appropriately.
This patch provides the MIPS version.
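The generic fallback is essentially a one-line weak wrapper (sketch):

	void __weak crash_smp_send_stop(void)
	{
		/*
		 * Without an architecture override this is no better than
		 * smp_send_stop(); arch code (MIPS here) can replace it with
		 * a kdump-safe variant that also saves the other CPUs' state.
		 */
		smp_send_stop();
	}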
Fixes: f06e5153f4ae (kernel/panic.c: add "crash_kexec_post_notifiers" option)
Link: http://lkml.kernel.org/r/20160810080950.11028.28000.stgit@sysi4-13.yrl.intra.hitachi.co.jp
Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Reported-by: Daniel Walker <dwalker@fifo99.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Daniel Walker <dwalker@fifo99.com>
Cc: Xunlei Pang <xpang@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Toshi Kani <toshi.kani@hpe.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Daney <david.daney@cavium.com>
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: "Steven J. Hill" <steven.hill@cavium.com>
Cc: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
When doing an nmi backtrace of many cores, most of which are idle, the
output is a little overwhelming and very uninformative. Suppress
messages for cpus that are idling when they are interrupted and just
emit one line, "NMI backtrace for N skipped: idling at pc 0xNNN".
We do this by grouping all the cpuidle code together into a new
.cpuidle.text section, and then checking the address of the interrupted
PC to see if it lies within that section.
This commit suitably tags x86 and tile idle routines, and only adds in
the minimal framework for other architectures.
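Conceptually the check is just a PC-range test against the new section
(a sketch; the linker symbol names are assumed):

	extern char __cpuidle_text_start[], __cpuidle_text_end[];

	static bool cpu_in_idle(unsigned long pc)
	{
		return pc >= (unsigned long)__cpuidle_text_start &&
		       pc < (unsigned long)__cpuidle_text_end;
	}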
Link: http://lkml.kernel.org/r/1472487169-14923-5-git-send-email-cmetcalf@mellanox.com
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
Tested-by: Petr Mladek <pmladek@suse.com>
Cc: Aaron Tomlin <atomlin@redhat.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Patch series "improvements to the nmi_backtrace code" v9.
This patch series modifies the trigger_xxx_backtrace() NMI-based remote
backtracing code to make it more flexible, and makes a few small
improvements along the way.
The motivation comes from the task isolation code, where there are
scenarios where we want to be able to diagnose a case where some cpu is
about to interrupt a task-isolated cpu. It can be helpful to see both
where the interrupting cpu is, and also an approximation of where the
cpu that is being interrupted is. The nmi_backtrace framework allows us
to discover the stack of the interrupted cpu.
I've tested that the change works as desired on tile, and build-tested
x86, arm, mips, and sparc64. For x86 I confirmed that the generic
cpuidle stuff as well as the architecture-specific routines are in the
new cpuidle section. For arm, mips, and sparc I just build-tested it
and made sure the generic cpuidle routines were in the new cpuidle
section, but I didn't attempt to figure out what the platform-specific
idle routines might be. That might be more usefully done by someone
with platform experience in follow-up patches.
This patch (of 4):
Currently you can only request a backtrace of either all cpus, or all
cpus but yourself. It can also be helpful to request a remote backtrace
of a single cpu, and since we want that, the logical extension is to
support a cpumask as the underlying primitive.
This change modifies the existing lib/nmi_backtrace.c code to take a
cpumask as its basic primitive, and modifies the linux/nmi.h code to use
the new "cpumask" method instead.
The existing clients of nmi_backtrace (arm and x86) are converted to
using the new cpumask approach in this change.
The other users of the backtracing API (sparc64 and mips) are converted
to use the cpumask approach rather than the all/allbutself approach.
The mips code ignored the "include_self" boolean but with this change it
will now also dump a local backtrace if requested.
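With the cpumask primitive in place, the existing entry points reduce to
thin wrappers along these lines (a sketch of the linux/nmi.h side; the
wrapper names follow the existing API and return values/fallbacks are
omitted):

	static inline void trigger_all_cpu_backtrace(void)
	{
		arch_trigger_cpumask_backtrace(cpu_online_mask, false);
	}

	static inline void trigger_allbutself_cpu_backtrace(void)
	{
		arch_trigger_cpumask_backtrace(cpu_online_mask, true);
	}

	static inline void trigger_single_cpu_backtrace(int cpu)
	{
		arch_trigger_cpumask_backtrace(cpumask_of(cpu), false);
	}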
Link: http://lkml.kernel.org/r/1472487169-14923-2-git-send-email-cmetcalf@mellanox.com
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
Reviewed-by: Aaron Tomlin <atomlin@redhat.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
If a bus error occurs on a system with a MIPS Coherence Manager (CM)
then the CM may hold useful diagnostic information. Printing this out
has so far been left up to boards, with the requirement that they
register a board_be_handler function & call mips_cm_error_decode() from
there.
In order to avoid boards other than Malta needing to duplicate this
code, call mips_cm_error_decode() automatically if the board registers
no board_be_handler, and remove the Malta implementation of that.
This patch results in no functional change, but removes a further piece
of platform-specific code.
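A sketch of the resulting dispatch in the bus error handler (the helper
name is taken from this description; the fixup argument is illustrative):

	int action = MIPS_BE_FATAL;

	if (board_be_handler)
		action = board_be_handler(regs, is_fixup);
	else
		mips_cm_error_decode();	/* dump CM diagnostics, if any */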
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14350/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The current instruction decoder for the uprobe/kprobe handlers only handles
branches with delay slots. For compact branches the behaviour is rather
unpredictable - and depending on the encoding of a compact branch
instruction may result in one (or more) of:
- executing an instruction that follows a branch which wasn't in a delay
slot and shouldn't have been executed
- incorrectly emulating a branch leading to a jump to a wrong location
- unexpected branching out of the single-stepped code and never reaching
the breakpoint that should terminate the probe handler
Results of these actions are generally unpredictable, but can end up
with a probed application or kernel crash, so disable placing probes on
compact branches until they are handled properly.
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14336/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Currently both kprobes and uprobes code have definitions of the
insn_has_delay_slot method. Move it to a separate header as an inline
method that each probe-specific method can later use.
No functional change intended, although the two implementations varied
slightly in the constraints they applied - the uprobes one was chosen as
it is slightly more specific when filtering opcode fields.
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14335/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The MIPS Coherent Processing System (CPS) power management code has
previously generated code used to enter low power idle states once
during boot for all CPUs. This has the drawback that if a CPU is present
in the system but not being used (for example due to the maxcpus kernel
parameter) then we encounter problems due to not having probed that CPU
for information about its type & properties. The result of this is that
we generate entry code which is unused, potentially entirely
invalid & likely to be unsuitable for the CPU in question anyway.
Avoid this by generating idle state entry code only when a CPU is
brought online. This way we only ever generate code for CPUs that we
know we've probed the properties of, and that will actually be used.
[ralf@linux-mips.org: Resolve merge conflict.]
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14259/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Historically a lot of these existed because we did not have
a distinction between what was modular code and what was providing
support to modules via EXPORT_SYMBOL and friends. That changed
when we forked out support for the latter into the export.h file.
This means we should be able to reduce the usage of module.h
in code that is obj-y Makefile or bool Kconfig. The advantage
in doing so is that module.h itself sources about 15 other headers;
adding significantly to what we feed cpp, and it can obscure what
headers we are effectively using.
Since module.h was the source for init.h (for __init) and for
export.h (for EXPORT_SYMBOL) we consider each obj-y/bool instance
for the presence of either and replace as needed.
In the case of the n32/o32 files, we have to get rid of a couple of
no-op MODULE_ tags to facilitate the module.h removal. They piggyback
off the fs/ elf binary support, which is also a bool Kconfig.
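For a typical obj-y file the conversion is simply swapping the catch-all
include for the specific headers that are actually needed, e.g.:

	-#include <linux/module.h>
	+#include <linux/export.h>	/* EXPORT_SYMBOL() */
	+#include <linux/init.h>		/* __init, __initdata */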
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14032/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
For the MIPS remote processor implementation, we need additional IPIs to
talk to the remote processor. Since MIPS GIC reserves exactly the right
number of IPI IRQs required by Linux for the number of VPs in the
system, this is not possible without releasing some resources.
This commit introduces mips_smp_ipi_allocate() which allocates IPIs to a
given cpumask. It is called as normal with the cpu_possible_mask at
bootup to initialise IPIs to all CPUs. mips_smp_ipi_free() may then be
used to free IPIs to a subset of those CPUs so that their hardware
resources can be reused.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Ohad Ben-Cohen <ohad@wizery.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Lisa Parratt <Lisa.Parratt@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Qais Yousef <qsyousef@gmail.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-remoteproc@vger.kernel.org
Cc: lisa.parratt@imgtec.com
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14285/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Update arch_uprobe_copy_ixol() to use the kmap_atomic() based kernel
address to flush the icache with flush_icache_range(), rather than the
user mapping. We have the kernel mapping available anyway and this
avoids having to switch to using the new __flush_icache_user_range() for
the sake of Enhanced Virtual Addressing (EVA) where flush_icache_range()
will become ineffective on user addresses.
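A sketch of the resulting helper, flushing through the kmap_atomic()
mapping rather than the user address (offset handling simplified):

	void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
				   void *src, unsigned long len)
	{
		unsigned long kaddr = (unsigned long)kmap_atomic(page);
		unsigned long dst = kaddr + (vaddr & ~PAGE_MASK);

		memcpy((void *)dst, src, len);

		/* flush the icache via the kernel mapping we just wrote */
		flush_icache_range(dst, dst + len);

		kunmap_atomic((void *)kaddr);
	}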
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14154/
Patchwork: https://patchwork.linux-mips.org/patch/14308/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
On CPUs which support the EBase WG (write gate) flag, the most
significant bits of the exception base can be changed. Firmware running
on a VP(E) using MIPS rproc may change EBase to point into the user
segment where the firmware is located such that it can service
interrupts. When control is transferred back to the kernel the EBase
must be switched back into the kernel segment, such that the kernel's
exception vectors are used.
Similarly when vectored interrupts (vint) or vectored external interrupt
controllers (veic) are enabled an exception vector is allocated from
bootmem, and written to the EBase register. Due to the WG flag being
clear, only bits 29:12 will be written. Aside from the rproc case above
this is normally fine (as it will usually be a low allocation within the
KSeg0 range), however when Enhanced Virtual Addressing (EVA) is enabled
the allocation may be outside of the traditional KSeg0/KSeg1 address
range, resulting in the wrong EBase being written.
Correct both cases (configure_exception_vector() for the boot CPU, and
per_cpu_trap_init() for secondary CPUs) to write EBase with the WG flag
first if supported.
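The write then looks roughly like this (64-bit case shown; the accessor
and flag names follow the usual arch/mips conventions and are assumed
here):

	if (cpu_has_ebase_wg) {
		/* setting the write gate lets the upper bits latch too */
		write_c0_ebase_64(ebase | MIPS_EBASE_WG);
	}
	write_c0_ebase(ebase);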
On the Malta EVA configuration, KSeg0 is mapped to physical address 0,
and memory is allocated from the KUSeg segment which is mapped to
physical address 0x80000000, which physically aliases the RAM at 0. This
only worked due to the exception base address aliasing the same
underlying RAM that was written to & cache flushed, and due to
flush_icache_range() going beyond the call of duty and flushing from the
L2 cache too (due to the differing physical addresses).
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14150/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
When allocating boot memory for the exception vector when vectored
interrupts (vint) or vectored external interrupt controllers (veic) are
enabled, try to ensure that the virtual address resides in KSeg0 (and
WARN should that not be possible).
This will be helpful on MIPS64 cores supporting the CP0_EBase Write Gate
(WG) bit once we start using the WG bit to write the full ebase into
CP0_EBase, as we ideally need to avoid hitting the architecturally
poorly defined exception base for Cache Errors when CP0_EBase is in
XKPhys.
An exception is made for Enhanced Virtual Addressing (EVA) kernels which
allow segments to be rearranged and to become uncached during cache
error handling, making it valid for ebase to be elsewhere.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14149/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
When reading the CP0_EBase register containing the WG (write gate) bit,
the ebase variable should be set to the full value of the register, i.e.
on a 64-bit kernel the full 64-bit width of the register via
read_cp0_ebase_64(), and on a 32-bit kernel the full 32-bit width
including bits 31:30 which may be writeable.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14148/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
All calls to mips_cpc_lock_other should be wrapped in
mips_cm_lock_other. This only matters if the system has CM3 and is using
cpu idle, since otherwise a) the CPC lock is sufficient for CM < 3 and b)
any systems with CM >= 3 have not been able to use cpu idle until now.
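In other words, CPC accesses to another core end up nested like this
(a sketch; the unlock counterparts are assumed):

	mips_cm_lock_other(core, vp);
	mips_cpc_lock_other(core);
	/* ... access the CPC_OTHER region ... */
	mips_cpc_unlock_other();
	mips_cm_unlock_other();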
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Qais Yousef <qsyousef@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/14227/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
MIPS CM3 changed the management of coherence. Instead of a coherence
control register with a bitmask of coherent domains, CM3 simply has a
coherence enable register with a single bit to enable coherence of the
local core. Support this by clearing and setting this single bit to
disable / enable coherence.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Tony Wu <tung7970@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Nikolay Martynov <mar.kolya@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14226/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
This patch adds support for CPUs implementing the MIPSr6 ISA to the CPS
power management code. Three changes are necessary:
1. In MIPSr6, coupled coherence is necessary when CPUS implement multiple
Virtual Processors (VPs).
2. MIPSr6 virtual processors are more like real cores and cannot yield
to other VPs on the same core, so drop the MT ASE yield instruction.
3. To halt a MIPSr6 VP, the CPC VP_STOP register is used rather than the
MT ASE TCHalt CP0 register.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14225/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Instead of selecting an implementation or vendor specific sync type for
the required sync operations, always use the architecturally mandated
sync types which previous patches have put in place. The selection of
special sync types is now redundant and can be removed.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14223/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
SYNC type 0 is defined in the MIPS architecture as a completion barrier
where all loads/stores in the pipeline before the sync instruction must
complete before any loads/stores subsequent to the sync instruction.
In places where we require loads / stores be globally completed, use the
standard completion sync stype.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14224/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Since R2 of the MIPS architecture, SYNC(0x10) has been an optional but
architecturally defined ordering barrier. If a CPU does not implement it,
the arch specifies that it must fall back to SYNC(0).
In places where we require that the instruction stream not be reordered,
but do not require that loads / stores are globally completed, use the
defined standard sync stype.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14221/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
This code makes extensive use of barriers, which had quite vague
descriptions. Update the comments to make the choice of barrier, and the
reason for it, clearer.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14220/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The check for whether a CPU needs the FSB flush workaround previously
required every CPU not needing it to be whitelisted. That approach does
not scale well as new CPUs are introduced, so change the default from a
WARN and returning an error to just returning 0. Any CPUs
requiring the workaround can then be added to the blacklist.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14218/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
MIPS CM version 3 removed the CPC_CL_OTHER register and instead the
CM_CL_OTHER register is used to redirect the CPC_OTHER region. As such,
we should not write the unimplemented register and can avoid the
spinlock as well.
These lock functions should already be called within the context of a
mips_cm_{lock,unlock}_other pair, ensuring the correct CPC_OTHER region
will be accessed.
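The lock/unlock helpers can therefore bail out early on CM3, along these
lines (a sketch of the guard only; the revision helper and constant names
are assumed):

	/* at the top of mips_cpc_lock_other() / mips_cpc_unlock_other() */
	if (mips_cm_revision() >= CM_REV_CM3)
		return;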
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14219/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Checkpatch complains about use of bare unsigned type.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14217/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
These files were only including module.h for exception table
related functions. We've now separated that content out into its
own file "extable.h" so now move over to that and avoid all the
extra header content in module.h that we don't really need to compile
these files.
In the case of traps.c we can't dump the module.h include since it is
also used to provide "print_modules".
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13934/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Commit 7eb8c99db26c ("MIPS: Delete smp-gic.c") removed the file from the
Makefile and the option to build it from Kconfig, but left the file
itself floating in the tree.
Remove the unused source file.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Qais Yousef <qsyousef@gmail.com>
Cc: paul.burton@imgtec.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13883/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The addition of VPE information to /proc/cpuinfo used to be in smp-mt.c.
This file is not used by MIPS r6 kernels, so the Virtual Processor
information was not present for these CPU types.
Move the code to print VPE information into proc.c, add a case for MIPS
r6 CPUs, and remove the block from smp-mt.c.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Qais Yousef <qsyousef@gmail.com>
Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13847/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Merge branch 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull low-level x86 updates from Ingo Molnar:
"In this cycle this topic tree has become one of those 'super topics'
that accumulated a lot of changes:
- Add CONFIG_VMAP_STACK=y support to the core kernel and enable it on
x86 - preceded by an array of changes. v4.8 saw preparatory changes
in this area already - this is the rest of the work. Includes the
thread stack caching performance optimization. (Andy Lutomirski)
- switch_to() cleanups and all around enhancements. (Brian Gerst)
- A large number of dumpstack infrastructure enhancements and an
unwinder abstraction. The secret long term plan is safe(r) live
patching plus maybe another attempt at debuginfo based unwinding -
but all these current bits are standalone enhancements in a frame
pointer based debug environment as well. (Josh Poimboeuf)
- More __ro_after_init and const annotations. (Kees Cook)
- Enable KASLR for the vmemmap memory region. (Thomas Garnier)"
[ The virtually mapped stack changes are pretty fundamental, and not
x86-specific per se, even if they are only used on x86 right now. ]
* 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (70 commits)
x86/asm: Get rid of __read_cr4_safe()
thread_info: Use unsigned long for flags
x86/alternatives: Add stack frame dependency to alternative_call_2()
x86/dumpstack: Fix show_stack() task pointer regression
x86/dumpstack: Remove dump_trace() and related callbacks
x86/dumpstack: Convert show_trace_log_lvl() to use the new unwinder
oprofile/x86: Convert x86_backtrace() to use the new unwinder
x86/stacktrace: Convert save_stack_trace_*() to use the new unwinder
perf/x86: Convert perf_callchain_kernel() to use the new unwinder
x86/unwind: Add new unwind interface and implementations
x86/dumpstack: Remove NULL task pointer convention
fork: Optimize task creation by caching two thread stacks per CPU if CONFIG_VMAP_STACK=y
sched/core: Free the stack early if CONFIG_THREAD_INFO_IN_TASK
lib/syscall: Pin the task stack in collect_syscall()
x86/process: Pin the target stack in get_wchan()
x86/dumpstack: Pin the target stack when dumping it
kthread: Pin the stack via try_get_task_stack()/put_task_stack() in to_live_kthread() function
sched/core: Add try_get_task_stack() and put_task_stack()
x86/entry/64: Fix a minor comment rebase error
iommu/amd: Don't put completion-wait semaphore on stack
...
|
|
Get the cr4 fixes so we can apply the final cleanup
|
|
The paging_init() function contains code which detects that highmem is
in use but unsupported due to dcache aliasing. However this code was
ineffective because it was being run before the caches are probed,
meaning that cpu_has_dc_aliases would always evaluate to false (unless a
platform overrides it to a compile-time constant) and the detection of
the unsupported case is never triggered. The kernel would then go on to
attempt to use highmem & either hit coherency issues or trigger the
BUG_ON in flush_kernel_dcache_page().
Fix this by running paging_init() later than cpu_cache_init(), such that
the cpu_has_dc_aliases macro will evaluate correctly & the unsupported
highmem case will be detected successfully.
This then leads to a formerly hidden issue in that
mem_init_free_highmem() will attempt to free all highmem pages, even
though we're avoiding use of them & don't have valid page structs for
them. This leads to an invalid pointer dereference & a TLB exception.
Avoid this by skipping the loop in mem_init_free_highmem() if
cpu_has_dc_aliases evaluates true.
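The second half of the fix is essentially an early return (a sketch; the
loop body is simplified and the highmem helpers are assumed):

	static void __init mem_init_free_highmem(void)
	{
	#ifdef CONFIG_HIGHMEM
		unsigned long tmp;

		/* highmem was rejected earlier; its page structs are invalid */
		if (cpu_has_dc_aliases)
			return;

		for (tmp = highstart_pfn; tmp < highend_pfn; tmp++)
			free_highmem_page(pfn_to_page(tmp));
	#endif
	}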
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Rabin Vincent <rabinv@axis.com>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Alexander Sverdlin <alexander.sverdlin@gmail.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Jaedon Shin <jaedon.shin@gmail.com>
Cc: Toshi Kani <toshi.kani@hpe.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14184/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
When the kernel is built for microMIPS, branch targets need to be
known to be microMIPS code in order to result in bit 0 of the PC being
set. The branch target in the BUILD_ROLLBACK_PROLOGUE macro was simply
the end of the macro, which may be pointing at padding rather than at
code. This results in recent enough GNU linkers complaining like so:
mips-img-linux-gnu-ld: arch/mips/built-in.o: .text+0x3e3c: Unsupported branch between ISA modes.
mips-img-linux-gnu-ld: final link failed: Bad value
Makefile:936: recipe for target 'vmlinux' failed
make: *** [vmlinux] Error 1
Fix this by changing the branch target to be the start of the
appropriate handler, skipping over any padding.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14019/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
On current P-series cores from Imagination the FTLB can be enabled or
disabled via a bit in the Config6 register, and an execution hazard is
created by changing the value of that bit. The ftlb_disable function already
cleared that hazard but that does no good for other callers. Clear the
hazard in the set_ftlb_enable function that creates it, and only for the
cores where it applies.
This has the effect of reverting c982c6d6c48b ("MIPS: cpu-probe: Remove
cp0 hazard barrier when enabling the FTLB") which was incorrect.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: c982c6d6c48b ("MIPS: cpu-probe: Remove cp0 hazard barrier when enabling the FTLB")
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14023/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
On some cores (proAptiv, P5600) we make use of the sizes of the TLBs
to determine the desired FTLB:VTLB write ratio. However set_ftlb_enable
& thus calculate_ftlb_probability is called before decode_config4. This
results in us calculating a probability based on zero sizes, and we end
up setting FTLBP=3 for a 3:1 FTLB:VTLB write ratio in all cases. This
will make abysmal use of the available FTLB resources in the affected
cores.
Fix this by configuring the FTLB probability after having decoded
config4. However we do need to have enabled the FTLB before that point
such that fields in config4 actually reflect that an FTLB is present. So
set_ftlb_enable is now called twice, with flags indicating that it
should configure the write probability only the second time.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: cf0a8aa0226d ("MIPS: cpu-probe: Set the FTLB probability bit on supported cores")
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14022/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The FTLBP field in Config7 for the I6400 is intended as chicken bits for
debugging rather than as a field that software actually makes use of.
For best performance, FTLBP should be left at its default value of 0
with all TLB writes hitting the FTLB by default.
Additionally, since set_ftlb_enable is called from decode_configs before
decode_config4 which determines the size of the TLBs, this was
previously always setting FTLBP=3 for a 3:1 FTLB:VTLB write ratio which
makes abysmal use of the available FTLB resources.
This effectively reverts b0c4e1b79d8a ("MIPS: Set up FTLB probability
for I6400").
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: b0c4e1b79d8a ("MIPS: Set up FTLB probability for I6400")
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14021/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
arch_uprobe_pre_xol needs to emulate a branch if a branch instruction
has been replaced with a breakpoint, but in fact an uninitialised local
variable was passed to the emulator routine instead of the original
instruction
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Fixes: 40e084a506eb ('MIPS: Add uprobes support.')
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14300/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Generic kernel code implements a weak version of set_orig_insn that
moves cached 'insn' from arch_uprobe to the original code location when
the trap is removed.
The MIPS variant used arch_uprobe->orig_inst, which was never initialised
properly, so this code only inserted a nop instead of the original
instruction. With this change orig_inst can also be safely removed.
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Fixes: 40e084a506eb ('MIPS: Add uprobes support.')
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14299/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
arch_uretprobe_hijack_return_addr should replace the return address for
a call with a trampoline address.
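On MIPS the return address lives in $ra (regs->regs[31]), so the fix
amounts to something like (sketch):

	unsigned long
	arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr,
					  struct pt_regs *regs)
	{
		unsigned long ra = regs->regs[31];

		/* divert the return to the uprobes trampoline */
		regs->regs[31] = trampoline_vaddr;

		/* hand the original return address back to the core code */
		return ra;
	}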
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Fixes: 40e084a506eb ('MIPS: Add uprobes support.')
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14298/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Commit 0d2808f338c7 ("MIPS: smp-cps: Add support for CPU hotplug of
MIPSr6 processors") added a call to mips_cm_lock_other in order to lock
the CPC in CPUs containing a version 3 or higher Coherence Manager,
which use the general CM core other register, where previous CMs had a
dedicated core other register for the CPC.
A kernel BUG() is triggered, however, if mips_cm_lock_other is called
with a VP other than 0 on a CPU with CM < 3, a condition introduced by
0d2808f338c7.
Avoid the BUG() by always locking VP0 when locking the CPC, since the
required register, cpc_stat_conf, is shared by all vps in a core.
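i.e. the power-sequencing code now does something like (sketch):

	/* cpc_stat_conf is per-core, so VP0 is always a safe target */
	mips_cm_lock_other(core, 0);
	mips_cpc_lock_other(core);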
Fixes: 0d2808f338c7 ("MIPS: smp-cps: Add support for CPU hotplug of MIPSr6 processors")
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Qais Yousef <qsyousef@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14297/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
This patch fixes the possibility of a deadlock when bringing up
secondary CPUs.
The deadlock occurs because the set_cpu_online() is called before
synchronise_count_slave(). This can cause a deadlock if the boot CPU,
having scheduled another thread, attempts to send an IPI to the
secondary CPU, which it sees has been marked online. The secondary is
blocked in synchronise_count_slave() waiting for the boot CPU to enter
synchronise_count_master(), but the boot CPU is blocked in
smp_call_function_many() waiting for the secondary to respond to its
IPI request.
Fix this by marking the CPU online in cpu_callin_map and synchronising
counters before declaring the CPU online and calculating the maps for
IPIs.
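The resulting bring-up order on the secondary is roughly (a sketch;
helper names approximate and details omitted):

	cpumask_set_cpu(cpu, &cpu_callin_map);	/* tell the boot CPU we arrived */
	synchronise_count_slave(cpu);		/* sync counters while still invisible */

	set_cpu_online(cpu, true);		/* only now can IPIs target this CPU */
	calculate_cpu_foreign_map();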
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reported-by: Justin Chen <justinpopo6@gmail.com>
Tested-by: Justin Chen <justinpopo6@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org # v4.1+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14302/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
In the mipsr2_decoder() function, used to emulate pre-MIPSr6
instructions that were removed in MIPSr6, the init_fpu() function is
called if a removed pre-MIPSr6 floating point instruction is the first
floating point instruction used by the task. However, init_fpu()
performs various actions that rely upon not being migrated. For example
in the most basic case it sets the coprocessor 0 Status.CU1 bit to
enable the FPU & then loads FP register context into the FPU registers.
If the task were to migrate during this time, it may end up attempting
to load FP register context on a different CPU where it hasn't set the
CU1 bit, leading to errors such as:
do_cpu invoked from kernel context![#2]:
CPU: 2 PID: 7338 Comm: fp-prctl Tainted: G D 4.7.0-00424-g49b0c82 #2
task: 838e4000 ti: 88d38000 task.ti: 88d38000
$ 0 : 00000000 00000001 ffffffff 88d3fef8
$ 4 : 838e4000 88d38004 00000000 00000001
$ 8 : 3400fc01 801f8020 808e9100 24000000
$12 : dbffffff 807b69d8 807b0000 00000000
$16 : 00000000 80786150 00400fc4 809c0398
$20 : 809c0338 0040273c 88d3ff28 808e9d30
$24 : 808e9d30 00400fb4
$28 : 88d38000 88d3fe88 00000000 8011a2ac
Hi : 0040273c
Lo : 88d3ff28
epc : 80114178 _restore_fp+0x10/0xa0
ra : 8011a2ac mipsr2_decoder+0xd5c/0x1660
Status: 1400fc03 KERNEL EXL IE
Cause : 1080002c (ExcCode 0b)
PrId : 0001a920 (MIPS I6400)
Modules linked in:
Process fp-prctl (pid: 7338, threadinfo=88d38000, task=838e4000, tls=766527d0)
Stack : 00000000 00000000 00000000 88d3fe98 00000000 00000000 809c0398 809c0338
808e9100 00000000 88d3ff28 00400fc4 00400fc4 0040273c 7fb69e18 004a0000
004a0000 004a0000 7664add0 8010de18 00000000 00000000 88d3fef8 88d3ff28
808e9100 00000000 766527d0 8010e534 000c0000 85755000 8181d580 00000000
00000000 00000000 004a0000 00000000 766527d0 7fb69e18 004a0000 80105c20
...
Call Trace:
[<80114178>] _restore_fp+0x10/0xa0
[<8011a2ac>] mipsr2_decoder+0xd5c/0x1660
[<8010de18>] do_ri+0x90/0x6b8
[<80105c20>] ret_from_exception+0x0/0x10
Fix this by disabling preemption around the call to init_fpu(), ensuring
that it starts & completes on one CPU.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: b0a668fb2038 ("MIPS: kernel: mips-r2-to-r6-emul: Add R2 emulator for MIPS R6")
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.0+
Patchwork: https://patchwork.linux-mips.org/patch/14305/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The page structures associated with the vDSO pages in the kernel image
are calculated using virt_to_page(), which uses __pa() under the hood to
find the pfn associated with the virtual address. The vDSO data pointers
however point to kernel symbols, so __pa_symbol() should really be used
instead.
Since there is no equivalent to virt_to_page() which uses __pa_symbol(),
fix init_vdso_image() to work directly with pfns, calculated with
__phys_to_pfn(__pa_symbol(...)).
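A sketch of the pfn-based setup (struct field names assumed from context):

	unsigned long i, data_pfn;

	data_pfn = __phys_to_pfn(__pa_symbol(image->data));
	for (i = 0; i < image->size / PAGE_SIZE; i++)
		image->mapping.pages[i] = pfn_to_page(data_pfn + i);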
This issue broke the Malta Enhanced Virtual Addressing (EVA)
configuration which has a non-default implementation of __pa_symbol().
This is because it uses a physical alias so that the kernel executes
from KSeg0 (VA 0x80000000 -> PA 0x00000000), while RAM is provided to
the kernel in the KUSeg range (VA 0x00000000 -> PA 0x80000000) which
uses the same underlying RAM.
Since there are no page structures associated with the low physical
address region, some arbitrary kernel memory would be interpreted as a
page structure for the vDSO pages and badness ensues.
Fixes: ebb5e78cc634 ("MIPS: Initial implementation of a VDSO")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.4.x-
Patchwork: https://patchwork.linux-mips.org/patch/14229/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The cpu_has_fpu macro uses smp_processor_id() and is currently executed
with preemption enabled, which triggers a warning at runtime.
It is assumed throughout the kernel that if any CPU has an FPU, then all
CPUs would have an FPU as well, so it is safe to perform the check with
preemption enabled - change the code to use the raw_ variant of the check
to avoid the warning.
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 4.0+
Patchwork: https://patchwork.linux-mips.org/patch/14125/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Memory regions added with add_memory_region() at the top of the physical
address space will have their end address overflow to 0. This causes
them to be rejected as invalid, and would cause various other issues
later on.
This causes issues on Malta and Boston platforms when wanting to use all
2GB of RAM on a 32-bit kernel, either via highmem (using physical
addresses 0x90000000..0xFFFFFFFF), or with the Malta Enhanced Virtual
Addressing (EVA) layout which exposes the whole 0x80000000..0xFFFFFFFF
physical address range to kernel mode at 0x00000000..0x7FFFFFFF.
Due to the abundance of these non-overflow assumptions and the fact that
memblock already avoids the arithmetic overflow by limiting the size of
new memory regions without the arch code knowing it (in particular
mem_init_free_highmem() will trigger a page dump due to nonzero mapcount
on the last page), it is simpler and safer to just limit the size of the
region in a similar way to memblock but at the arch level to allow most
of the RAM to be used without arithmetic overflows.
Therefore we detect this case specifically and reduce the size of the
region slightly, so as to avoid the arithmetic overflows, at the cost of
the last page being ignored.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13857/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
When a uprobe-replacement breakpoint instruction is handled, a notifier
is called with the DIE_UPROBE argument, but the corresponding exception
notify handler for MIPS attempts to handle DIE_BREAK instead. As a result
the breakpoint instruction isn't handled by the uprobe code and the probed
application terminates with SIGTRAP.
Fix this by changing arch_uprobe_exception_notify code to handle
DIE_UPROBE as a pre-singlestep condition instead of DIE_BREAK.
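The notifier switch then keys on the uprobe-specific events, roughly like
this (a sketch; the core-uprobes helper names are assumed):

	switch (val) {
	case DIE_UPROBE:
		if (uprobe_pre_sstep_notifier(regs))
			return NOTIFY_STOP;
		break;
	case DIE_UPROBE_XOL:
		if (uprobe_post_sstep_notifier(regs))
			return NOTIFY_STOP;
		break;
	default:
		break;
	}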
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13884/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Storing this value will help prevent unwinders from getting out of sync
with the function graph tracer ret_stack. Now instead of needing a
stateful iterator, they can compare the return address pointer to find
the right ret_stack entry.
Note that an array of 50 ftrace_ret_stack structs is allocated for every
task. So when an arch implements this, it will add either 200 or 400
bytes of memory usage per task (depending on whether it's a 32-bit or
64-bit platform).
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Byungchul Park <byungchul.park@lge.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nilay Vaish <nilayvaish@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/a95cfcc39e8f26b89a430c56926af0bb217bc0a1.1471607358.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|