path: root/drivers/perf/arm_pmu.c
Age  Commit message  Author  Files  Lines
2018-03-19  Merge tag 'v4.16-rc6' into perf/core, to pick up fixes  (Ingo Molnar)  1 file, -1/+1
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-03-16  perf: Fix sibling iteration  (Peter Zijlstra)  1 file, -1/+1
Mark noticed that the change to sibling_list changed some iteration semantics; because previously we used group_list as list entry, sibling events would always have an empty sibling_list. But because we now use sibling_list for both list head and list entry, siblings will report as having siblings. Fix this with a custom for_each_sibling_event() iterator. Fixes: 8343aae66167 ("perf/core: Remove perf_event::group_entry") Reported-by: Mark Rutland <mark.rutland@arm.com> Suggested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: vincent.weaver@maine.edu Cc: alexander.shishkin@linux.intel.com Cc: torvalds@linux-foundation.org Cc: alexey.budankov@linux.intel.com Cc: valery.cherepennikov@intel.com Cc: eranian@google.com Cc: acme@redhat.com Cc: linux-tip-commits@vger.kernel.org Cc: davidcc@google.com Cc: kan.liang@intel.com Cc: Dmitry.Prohorov@intel.com Cc: jolsa@redhat.com Link: https://lkml.kernel.org/r/20180315170129.GX4043@hirez.programming.kicks-ass.net
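For illustration, the iterator described above has roughly this shape (a sketch of the idea rather than a verbatim copy of the kernel macro): only a group leader walks its sibling_list, so a sibling event no longer appears to have siblings of its own.
/* Sketch: iterate a group's siblings only from the group leader. */
#define for_each_sibling_event(sibling, event)                          \
        if ((event)->group_leader == (event))                           \
                list_for_each_entry((sibling), &(event)->sibling_list,  \
                                    sibling_list)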
2018-03-12  perf/core: Remove perf_event::group_entry  (Peter Zijlstra)  1 file, -1/+1
Now that all the grouping is done with RB trees, we no longer need group_entry and can replace the whole thing with sibling_list. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: David Carrillo-Cisneros <davidcc@google.com> Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kan Liang <kan.liang@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Valery Cherepennikov <valery.cherepennikov@intel.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-02-28  arm_pmu: Use disable_irq_nosync when disabling SPI in CPU teardown hook  (Will Deacon)  1 file, -1/+1
Commit 6de3f79112cc ("arm_pmu: explicitly enable/disable SPIs at hotplug") moved all of the arm_pmu IRQ enable/disable calls to the CPU hotplug hooks, regardless of whether they are implemented as PPIs or SPIs. This can lead to us sleeping from atomic context due to disable_irq blocking: | BUG: sleeping function called from invalid context at kernel/irq/manage.c:112 | in_atomic(): 1, irqs_disabled(): 128, pid: 15, name: migration/1 | no locks held by migration/1/15. | irq event stamp: 192 | hardirqs last enabled at (191): [<00000000803c2507>] | _raw_spin_unlock_irq+0x2c/0x4c | hardirqs last disabled at (192): [<000000007f57ad28>] multi_cpu_stop+0x9c/0x140 | softirqs last enabled at (0): [<0000000004ee1b58>] | copy_process.isra.77.part.78+0x43c/0x1504 | softirqs last disabled at (0): [< (null)>] (null) | CPU: 1 PID: 15 Comm: migration/1 Not tainted 4.16.0-rc3-salvator-x #1651 | Hardware name: Renesas Salvator-X board based on r8a7796 (DT) | Call trace: | dump_backtrace+0x0/0x140 | show_stack+0x14/0x1c | dump_stack+0xb4/0xf0 | ___might_sleep+0x1fc/0x218 | __might_sleep+0x70/0x80 | synchronize_irq+0x40/0xa8 | disable_irq+0x20/0x2c | arm_perf_teardown_cpu+0x80/0xac Since the interrupt is always CPU-affine and this code is running with interrupts disabled, we can just use disable_irq_nosync as we know there isn't a concurrent invocation of the handler to worry about. Fixes: 6de3f79112cc ("arm_pmu: explicitly enable/disable SPIs at hotplug") Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
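A rough sketch of the teardown-hook shape this implies (illustrative only; the per-CPU IRQ lookup helper is assumed from the surrounding arm_pmu code): the hook runs on the dying CPU with interrupts off, so the sleeping disable_irq()/synchronize_irq() pair must be avoided.
static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
{
        struct arm_pmu *pmu = hlist_entry_safe(node, struct arm_pmu, node);
        int irq = armpmu_get_cpu_irq(pmu, cpu);  /* assumed per-CPU IRQ lookup */

        if (irq) {
                if (irq_is_percpu_devid(irq))
                        disable_percpu_irq(irq);
                else
                        disable_irq_nosync(irq); /* no synchronize_irq(), so no sleep */
        }
        return 0;
}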
2018-02-20  arm_pmu: acpi: request IRQs up-front  (Mark Rutland)  1 file, -20/+2
We can't request IRQs in atomic context, so for ACPI systems we'll have to request them up-front, and later associate them with CPUs. This patch reorganises the arm_pmu code to do so. As we no longer have the arm_pmu structure at probe time, a number of prototypes need to be adjusted, requiring changes to the common arm_pmu code and arm_pmu platform code. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-02-20  arm_pmu: note IRQs and PMUs per-cpu  (Mark Rutland)  1 file, -17/+52
To support ACPI systems, we need to request IRQs before we know the associated PMU, and thus we need some percpu variable that the IRQ handler can find the PMU from. As we're going to request IRQs without the PMU, we can't rely on the arm_pmu::active_irqs mask, and similarly need to track requested IRQs with a percpu variable. Signed-off-by: Mark Rutland <mark.rutland@arm.com> [will: made armpmu_count_irq_users static] Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-02-20  arm_pmu: explicitly enable/disable SPIs at hotplug  (Mark Rutland)  1 file, -5/+10
To support ACPI systems, we need to request IRQs before CPUs are hotplugged, and thus we need to request IRQs before we know their associated PMU. This is problematic if a PMU IRQ is pending out of reset, as it may be taken before we know the PMU, and thus the IRQ handler won't be able to handle it, leaving it screaming. To avoid such problems, let's request all IRQs in a disabled state, and explicitly enable/disable them at hotplug time, when we're sure the PMU has been probed. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
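As a hedged sketch of that pattern (flag choice and per-CPU bookkeeping here are illustrative, not the exact driver code): the IRQ is requested but left masked, and only unmasked from the hotplug callback once the PMU has been probed.
static DEFINE_PER_CPU(int, cpu_irq);                     /* assumed per-CPU IRQ table */
static irqreturn_t armpmu_dispatch_irq(int irq, void *dev); /* common handler */

static int armpmu_request_irq(int irq, int cpu)
{
        per_cpu(cpu_irq, cpu) = irq;
        irq_set_status_flags(irq, IRQ_NOAUTOEN);         /* stay disabled after request */
        return request_irq(irq, armpmu_dispatch_irq,
                           IRQF_NOBALANCING | IRQF_NO_THREAD, "arm-pmu",
                           per_cpu_ptr(&cpu_irq, cpu));
}

static int arm_perf_starting_cpu(unsigned int cpu, struct hlist_node *node)
{
        if (per_cpu(cpu_irq, cpu))
                enable_irq(per_cpu(cpu_irq, cpu));       /* PMU is probed by now */
        return 0;
}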
2018-02-20  arm_pmu: acpi: check for mismatched PPIs  (Mark Rutland)  1 file, -13/+4
The arm_pmu platform code explicitly checks for mismatched PPIs at probe time, while the ACPI code leaves this to the core code. Future refactoring will make this difficult for the core code to check, so let's have the ACPI code check this explicitly. As before, upon a failure we'll continue on without an interrupt. Ho hum. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-02-20  arm_pmu: add armpmu_alloc_atomic()  (Mark Rutland)  1 file, -3/+14
In ACPI systems, we don't know the makeup of CPUs until we hotplug them on, and thus have to allocate the PMU data structures at hotplug time. This means we must use GFP_ATOMIC allocations. Let's add an armpmu_alloc_atomic() that we can use in this case. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
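A minimal sketch of the resulting split (field initialisation is elided; the shapes are assumed from the commit message rather than copied from the driver):
static struct arm_pmu *__armpmu_alloc(gfp_t flags)
{
        struct arm_pmu *pmu = kzalloc(sizeof(*pmu), flags);

        if (!pmu)
                return NULL;
        /* per-cpu state must honour the same gfp constraints */
        pmu->hw_events = alloc_percpu_gfp(struct pmu_hw_events, flags);
        if (!pmu->hw_events) {
                kfree(pmu);
                return NULL;
        }
        return pmu;
}

struct arm_pmu *armpmu_alloc(void)        { return __armpmu_alloc(GFP_KERNEL); }
struct arm_pmu *armpmu_alloc_atomic(void) { return __armpmu_alloc(GFP_ATOMIC); }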
2018-02-20  arm_pmu: fold platform helpers into platform code  (Mark Rutland)  1 file, -21/+0
The armpmu_{request,free}_irqs() helpers are only used by arm_pmu_platform.c, so let's fold them in and make them static. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-02-20  arm_pmu: kill arm_pmu_platdata  (Mark Rutland)  1 file, -23/+4
Now that we have no platforms passing platform data to the arm_pmu code, we can get rid of the platdata and associated hooks, paving the way for rework of our IRQ handling. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-10-24  arm/arm64: pmu: Distinguish percpu irq and percpu_devid irq  (Julien Thierry)  1 file, -5/+5
arm_pmu interrupts are marked as PERCPU even when they are not local physical interrupts to a single CPU. When using non-local interrupts, interrupts marked as PERCPU will not get freed nor disabled properly by the PMU driver. Check if interrupts are local to a single CPU with PERCPU_DEVID, since this is what the PMU driver really needs to know. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
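Illustratively, the distinction matters at request time, roughly as below (a sketch assuming a common dispatch handler; not the literal driver code): percpu_devid interrupts go through the request_percpu_irq() family, everything else through plain request_irq().
static irqreturn_t armpmu_dispatch_irq(int irq, void *dev);

static int armpmu_request_one_irq(int irq, int cpu, void __percpu *dev)
{
        if (irq_is_percpu_devid(irq))           /* true per-CPU (PPI-style) IRQ */
                return request_percpu_irq(irq, armpmu_dispatch_irq,
                                          "arm-pmu", dev);
        return request_irq(irq, armpmu_dispatch_irq,
                           IRQF_NOBALANCING | IRQF_NO_THREAD, "arm-pmu",
                           per_cpu_ptr(dev, cpu));
}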
2017-08-08  arm64: perf: Allow standard PMUv3 events to be extended by the CPU type  (Will Deacon)  1 file, -0/+6
Rather than continue adding CPU-specific event maps, instead look up by default in the PMUv3 event map and only fallback to the CPU-specific maps if either the event isn't described by PMUv3, or it is described but the PMCEID registers say that it is unsupported by the current CPU. Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-07-27  drivers/perf: arm_pmu: Request PMU SPIs with IRQF_PER_CPU  (Will Deacon)  1 file, -14/+27
Since the PMU register interface is banked per CPU, CPU PMU interrupts cannot be handled by a CPU other than the one with the PMU asserting the interrupt. This means that migrating PMU SPIs, as we do during a CPU hotplug operation, doesn't make any sense and can lead to the IRQ being disabled entirely if we route a spurious IRQ to the new affinity target. This has been observed in practice on AMD Seattle, where CPUs on the non-boot cluster appear to take a spurious PMU IRQ when coming online, which is routed to CPU0 where it cannot be handled. This patch passes IRQF_PERCPU for PMU SPIs and forcefully sets their affinity prior to requesting them, ensuring that they cannot be migrated during hotplug events. This interacts badly with the DB8500 erratum workaround that ping-pongs the interrupt affinity from the handler, so we avoid passing IRQF_PERCPU in that case by allowing the IRQ flags to be overridden in the platdata. Fixes: 3cf7ee98b848 ("drivers/perf: arm_pmu: move irq request/free into probe") Cc: Mark Rutland <mark.rutland@arm.com> Cc: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
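A hedged sketch of the SPI path this describes (the DB8500 platdata override is left out, and names are illustrative): the affinity is forced before request_irq() so the interrupt can never be migrated off its CPU.
static int armpmu_request_spi(int irq, int cpu, void *dev)
{
        unsigned long flags = IRQF_PERCPU | IRQF_NOBALANCING | IRQF_NO_THREAD;
        int err;

        err = irq_force_affinity(irq, cpumask_of(cpu)); /* pin before requesting */
        if (err)
                return err;
        return request_irq(irq, armpmu_dispatch_irq, flags, "arm-pmu", dev);
}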
2017-04-11  drivers/perf: arm_pmu: add ACPI framework  (Mark Rutland)  1 file, -2/+2
This patch adds framework code to handle parsing PMU data out of the MADT, sanity checking this, and managing the association of CPUs (and their interrupts) with appropriate logical PMUs. For the time being, we expect that only one PMU driver (PMUv3) will make use of this, and we simply pass in a single probe function. This is based on an earlier patch from Jeremy Linton. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-04-11  drivers/perf: arm_pmu: split out platform device probe logic  (Mark Rutland)  1 file, -222/+4
Now that we've split the pdev and DT probing logic from the runtime management, let's move the former into its own file. We gain a few lines due to the copyright header and includes, but this should keep the logic clearly separated, and paves the way for adding ACPI support in a similar fashion. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Jeremy Linton <jeremy.linton@arm.com> [will: rename nr_irqs to avoid conflict with global variable] Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-04-11  drivers/perf: arm_pmu: move irq request/free into probe  (Mark Rutland)  1 file, -5/+6
Currently we request (and potentially free) all IRQs for a given PMU in cpu_pmu_init(). This works for platform/DT probing today, but it doesn't fit ACPI well as we don't have all our affinity data up-front. In preparation for ACPI support, fold the IRQ request/free into arm_pmu_device_probe(), which will remain specific to platform/DT probing. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-04-11  drivers/perf: arm_pmu: split cpu-local irq request/free  (Mark Rutland)  1 file, -36/+52
Currently we have functions to request/free all IRQs for a given PMU. While this works today, this won't work for ACPI, where we don't know the full set of IRQs up front, and need to request them separately. To enable supporting ACPI, this patch splits out the cpu-local request/free into new functions, allowing us to request/free individual IRQs. As this makes it possible/necessary to request a PPI once per cpu, an additional check is added to detect mismatched PPIs. This shouldn't matter for the DT / platform case, as we check this when parsing. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-04-11  drivers/perf: arm_pmu: rename irq request/free functions  (Mark Rutland)  1 file, -10/+10
For historical reasons, portions of the arm_pmu code use a cpu_pmu_ prefix rather than an armpmu_ prefix. While a minor annoyance, this hasn't been a problem thus far. However, to enable ACPI support, we'll need to expose a few things in header files, and we should aim to keep those consistently namespaced. In preparation for exporting our IRQ request/free functions, rename these to have an armpmu_ prefix. For consistency, the 'cpu_pmu' parameter is also renamed to 'armpmu'. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-04-11  drivers/perf: arm_pmu: handle no platform_device  (Mark Rutland)  1 file, -3/+9
In armpmu_dispatch_irq() we look at arm_pmu::plat_device to acquire platdata, so that we can defer to platform-specific IRQ handling, required on some 32-bit parts. With the advent of ACPI we won't always have a platform_device, and so we must avoid trying to dereference fields from it. This patch fixes up armpmu_dispatch_irq() to avoid doing so, introducing a new armpmu_get_platdata() helper. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-04-11  drivers/perf: arm_pmu: simplify cpu_pmu_request_irqs()  (Mark Rutland)  1 file, -2/+3
The ARM PMU framework code always uses armpmu_dispatch_irq as its common IRQ handler. Passing this down from cpu_pmu_init() is somewhat pointless, and gets in the way of refactoring. This patch makes cpu_pmu_request_irqs() always use armpmu_dispatch_irq as the handler when requesting IRQs, and removes the handler parameter from its prototype. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-04-11  drivers/perf: arm_pmu: factor out pmu registration  (Mark Rutland)  1 file, -14/+26
Currently arm_pmu_device_probe contains probing logic specific to the platform_device infrastructure, and some logic required to safely register the PMU with various systems. This patch factors out the logic relating to the registration of the PMU. This makes arm_pmu_device_probe a little easier to read, and will make it easier to reuse the logic for an ACPI-specific probing mechanism. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-04-11  drivers/perf: arm_pmu: fold init into alloc  (Mark Rutland)  1 file, -28/+24
Given we always want to initialise common fields on an allocated PMU, this patch folds this common initialisation into armpmu_alloc(). This will make it simpler to reuse this code for an ACPI-specific probe path. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-04-11  drivers/perf: arm_pmu: define armpmu_init_fn  (Mark Rutland)  1 file, -1/+1
We expect an ARM PMU's init function to have a particular prototype, which we open-code in a few places. This is less than ideal, considering that we cast a void value to this type in one location, and a mismatch could easily be missed. Add a typedef so that we can ensure this is consistent. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
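Roughly, the typedef and its use look like this (the table entry below is illustrative; armv8_pmuv3_init stands in for whatever backend init function a driver registers):
typedef int (*armpmu_init_fn)(struct arm_pmu *);

static int armv8_pmuv3_init(struct arm_pmu *cpu_pmu);   /* assumed backend init */

static const struct of_device_id armv8_pmu_of_device_ids[] = {
        /* .data is stored as void *, so the typedef keeps the cast honest */
        { .compatible = "arm,armv8-pmuv3", .data = armv8_pmuv3_init },
        { /* sentinel */ },
};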
2017-04-11  drivers/perf: arm_pmu: remove pointless PMU disabling  (Mark Rutland)  1 file, -10/+3
We currently disable the PMU temporarily in armpmu_add(). We may have required this historically, but the perf core always disables an event's PMU when calling event::pmu::add(), so this is not necessary. We don't do similarly in armpmu_del(), or elsewhere, so this is unnecessary and inconsistent, and only serves to confuse the reader. Remove the pointless disable, simplifying armpmu_add() in the process. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-03-31  drivers/perf: arm_pmu: split irq request from enable  (Mark Rutland)  1 file, -103/+50
For historical reasons, we lazily request and free interrupts in the arm pmu driver. This requires us to refcount use of the pmu (by way of counting the active events) in order to request/free interrupts at the correct times, which complicates the driver somewhat. The existing logic is flawed, as it only considers currently online CPUs when requesting, freeing, or managing the affinity of interrupts. Intervening hotplug events can result in erroneous IRQ affinity, online CPUs for which interrupts have not been requested, or offline CPUs whose interrupts are still requested. To fix this, this patch splits the requesting of interrupts from any per-cpu management (i.e. per-cpu enable/disable, and configuration of cpu affinity). We now request all interrupts up-front at probe time (and never free them, since we never unregister PMUs). The management of affinity, and per-cpu enable/disable now happens in our cpu hotplug callback, ensuring it occurs consistently. This means that we must now invoke the CPU hotplug callback at boot time in order to configure IRQs, and since the callback also resets the PMU hardware, we can remove the duplicate reset in the probe path. This rework renders our event refcounting unnecessary, so this is removed. Signed-off-by: Mark Rutland <mark.rutland@arm.com> [will: make armpmu_get_cpu_irq static] Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-03-31  drivers/perf: arm_pmu: manage interrupts per-cpu  (Mark Rutland)  1 file, -150/+164
When requesting or freeing interrupts, we use platform_get_irq() to find relevant irqs, backing this up with additional information in an optional irq_affinity table. This means that our irq request and free paths are tied to a platform_device, and our request path must jump through a number of hoops in order to determine the required affinity of each interrupt. Given that the affinity must be static, we can compute the affinity once up-front at probe time, simplifying the irq request and free paths. By recording interrupts in a per-cpu data structure, we simplify a few paths, and permit a subsequent rework of the request and free paths. Signed-off-by: Mark Rutland <mark.rutland@arm.com> [will: rename local nr_irqs variable to avoid conflict with global] Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-03-31  drivers/perf: arm_pmu: rework per-cpu allocation  (Mark Rutland)  1 file, -22/+44
For historical reasons, we allocate per-cpu data associated with a PMU rather late, in cpu_pmu_init, after we've parsed whatever hardware information we were provided with. In order to allow us to store some per-cpu data early in the probe path, we need to allocate (and initialise) the per-cpu data earlier. This patch reworks the way we allocate the pmu and associated per-cpu data in order to make that possible. Signed-off-by: Mark Rutland <mark.rutland@arm.com> [will: make armpmu_{alloc,free} static] Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-03-02  sched/headers: Prepare for new header dependencies before moving code to <linux/sched/clock.h>  (Ingo Molnar)  1 file, -0/+1
We are going to split <linux/sched/clock.h> out of <linux/sched.h>, which will have to be picked up from other headers and .c files. Create a trivial placeholder <linux/sched/clock.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-12-25  cpu/hotplug: Cleanup state names  (Thomas Gleixner)  1 file, -1/+1
When the state names got added a script was used to add the extra argument to the calls. The script basically converted the state constant to a string, but the cleanup to convert these strings into meaningful ones did not happen. Replace all the useless strings with 'subsys/xxx/yyy:state' strings which are used in all the other places already. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sebastian Siewior <bigeasy@linutronix.de> Link: http://lkml.kernel.org/r/20161221192112.085444152@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-10-03  Merge branch 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)  1 file, -26/+18
Pull CPU hotplug updates from Thomas Gleixner: "Yet another batch of cpu hotplug core updates and conversions:
 - Provide core infrastructure for multi instance drivers so the drivers do not have to keep custom lists.
 - Convert custom lists to the new infrastructure. The block-mq custom list conversion comes through the block tree and makes the diffstat tip over to more lines removed than added.
 - Handle unbalanced hotplug enable/disable calls more gracefully.
 - Remove the obsolete CPU_STARTING/DYING notifier support.
 - Convert another batch of notifier users.
 The relayfs changes which conflicted with the conversion have been shipped to me by Andrew. The remaining lot is targeted for 4.10 so that we finally can remove the rest of the notifiers"
* 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (46 commits)
  cpufreq: Fix up conversion to hotplug state machine
  blk/mq: Reserve hotplug states for block multiqueue
  x86/apic/uv: Convert to hotplug state machine
  s390/mm/pfault: Convert to hotplug state machine
  mips/loongson/smp: Convert to hotplug state machine
  mips/octeon/smp: Convert to hotplug state machine
  fault-injection/cpu: Convert to hotplug state machine
  padata: Convert to hotplug state machine
  cpufreq: Convert to hotplug state machine
  ACPI/processor: Convert to hotplug state machine
  virtio scsi: Convert to hotplug state machine
  oprofile/timer: Convert to hotplug state machine
  block/softirq: Convert to hotplug state machine
  lib/irq_poll: Convert to hotplug state machine
  x86/microcode: Convert to hotplug state machine
  sh/SH-X3 SMP: Convert to hotplug state machine
  ia64/mca: Convert to hotplug state machine
  ARM/OMAP/wakeupgen: Convert to hotplug state machine
  ARM/shmobile: Convert to hotplug state machine
  arm64/FP/SIMD: Convert to hotplug state machine
  ...
2016-10-03  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)  1 file, -6/+28
Pull arm64 updates from Will Deacon: "It's a bit all over the place this time with no "killer feature" to speak of. Support for mismatched cache line sizes should help people seeing whacky JIT failures on some SoCs, and the big.LITTLE perf updates have been a long time coming, but a lot of the changes here are cleanups. We stray outside arch/arm64 in a few areas: the arch/arm/ arch_timer workaround is acked by Russell, the DT/OF bits are acked by Rob, the arch_timer clocksource changes acked by Marc, CPU hotplug by tglx and jump_label by Peter (all CC'd). Summary:
 - Support for execute-only page permissions
 - Support for hibernate and DEBUG_PAGEALLOC
 - Support for heterogeneous systems with mismatched cache line sizes
 - Errata workarounds (A53 843419 update and QorIQ A-008585 timer bug)
 - arm64 PMU perf updates, including cpumasks for heterogeneous systems
 - Set UTS_MACHINE for building rpm packages
 - Yet another head.S tidy-up
 - Some cleanups and refactoring, particularly in the NUMA code
 - Lots of random, non-critical fixes across the board"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (100 commits)
  arm64: tlbflush.h: add __tlbi() macro
  arm64: Kconfig: remove SMP dependence for NUMA
  arm64: Kconfig: select OF/ACPI_NUMA under NUMA config
  arm64: fix dump_backtrace/unwind_frame with NULL tsk
  arm/arm64: arch_timer: Use archdata to indicate vdso suitability
  arm64: arch_timer: Work around QorIQ Erratum A-008585
  arm64: arch_timer: Add device tree binding for A-008585 erratum
  arm64: Correctly bounds check virt_addr_valid
  arm64: migrate exception table users off module.h and onto extable.h
  arm64: pmu: Hoist pmu platform device name
  arm64: pmu: Probe default hw/cache counters
  arm64: pmu: add fallback probe table
  MAINTAINERS: Update ARM PMU PROFILING AND DEBUGGING entry
  arm64: Improve kprobes test for atomic sequence
  arm64/kvm: use alternative auto-nop
  arm64: use alternative auto-nop
  arm64: alternative: add auto-nop infrastructure
  arm64: lse: convert lse alternatives NOP padding to use __nops
  arm64: barriers: introduce nops and __nops macros for NOP sequences
  arm64: sysreg: replace open-coded mrs_s/msr_s with {read,write}_sysreg_s
  ...
2016-09-16  arm64: pmu: add fallback probe table  (Mark Salter)  1 file, -1/+1
In preparation for ACPI support, add a pmu_probe_info table to the arm_pmu_device_probe() call. This table gets used when probing in the absence of a devicetree node for PMU. Signed-off-by: Mark Salter <msalter@redhat.com> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  drivers/perf: arm_pmu: expose a cpumask in sysfs  (Mark Rutland)  1 file, -0/+20
In systems with heterogeneous CPUs, there are multiple logical CPU PMUs, each of which covers a subset of CPUs in the system. In some cases userspace needs to know which CPUs a given logical PMU covers, so we'd like to expose a cpumask under sysfs, similar to what is done for uncore PMUs. Unfortunately, prior to commit 00e727bb389359c8 ("perf stat: Balance opening and reading events"), perf stat only correctly handled a cpumask holding a single CPU, and only when profiling in system-wide mode. In other cases, the presence of a cpumask file could cause perf stat to behave erratically. Thus, exposing a cpumask file would break older perf binaries in cases where they would otherwise work. To avoid this issue while still providing userspace with the information it needs, this patch exposes a differently-named file (cpus) under sysfs. New tools can look for this and operate correctly, while older tools will not be adversely affected by its presence. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
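A sketch of the resulting sysfs attribute (shown for illustration; it follows the usual cpumask-show pattern rather than being a verbatim copy of the driver):
static ssize_t cpus_show(struct device *dev, struct device_attribute *attr,
                         char *buf)
{
        struct pmu *pmu = dev_get_drvdata(dev);
        struct arm_pmu *armpmu = container_of(pmu, struct arm_pmu, pmu);

        /* print as a CPU list, e.g. "0-3", into the sysfs page buffer */
        return cpumap_print_to_pagebuf(true, buf, &armpmu->supported_cpus);
}
static DEVICE_ATTR_RO(cpus);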
2016-09-09  drivers/perf: arm_pmu: only use common attr_groups  (Mark Rutland)  1 file, -2/+1
Now that the 32-bit and 64-bit perf backends use the common groups directly, remove the fallback and no longer allow the groups array to be overridden. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09  drivers/perf: arm_pmu: add common attr group fields  (Mark Rutland)  1 file, -0/+3
In preparation for adding common attribute groups, add an array of attribute group pointers to arm_pmu, which will be used if the backend hasn't already set pmu::attr_groups. Subsequent patches will move backends over to using these, before adding common fields. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-06  drivers/perf: arm_pmu: Always consider IRQ0 as an error  (Marc Zyngier)  1 file, -6/+5
As declared by the chief penguin, and enforced by the NO_IRQ brigade, IRQ0 doesn't exist, and is considered as an error (no irq). Unfortunately, the arm_pmu driver still considers it as valid in a large number of cases. Let's fix this. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
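Concretely, the convention being enforced looks roughly like this (an illustrative probe fragment, not the exact patch): platform_get_irq() failures are negative, and 0 is likewise not a usable IRQ, so both collapse to "no interrupt".
static int armpmu_probe_irq(struct platform_device *pdev)
{
        int irq = platform_get_irq(pdev, 0);

        if (irq <= 0)     /* negative errno, or the non-existent IRQ0 */
                return 0; /* caller treats 0 as "no irq, don't request anything" */
        return irq;
}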
2016-09-02  arm/perf: Use multi instance instead of custom list  (Sebastian Andrzej Siewior)  1 file, -26/+18
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: rt@linutronix.de Link: http://lkml.kernel.org/r/20160817171420.sdwk2qivxunzryz4@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-09-02  drivers/perf: arm_pmu: Fix NULL pointer dereference during probe  (Stefan Wahren)  1 file, -1/+1
Patch 7f1d642fbb5c ("drivers/perf: arm-pmu: Fix handling of SPI lacking interrupt-affinity property") unintentionally also fixed perf_event support for bcm2835, which doesn't have PMU interrupts. Unfortunately this change introduced a NULL pointer dereference on bcm2835, because irq_is_percpu is always expected to be called with a valid IRQ. So fix this regression by validating the IRQ first. Tested-by: Kevin Hilman <khilman@baylibre.com> Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com> Fixes: 7f1d642fbb5c ("drivers/perf: arm-pmu: Fix handling of SPI lacking "interrupt-affinity" property") Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-09-02  drivers/perf: arm_pmu: Fix leak in error path  (Stefan Wahren)  1 file, -0/+1
In case of an IRQ type mismatch in of_pmu_irq_cfg(), the device node for interrupt affinity isn't freed. So fix this issue by calling of_node_put(). Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com> Fixes: fa8ad7889d83 ("arm: perf: factor arm_pmu core out to drivers") Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-08-09  drivers/perf: arm-pmu: Fix handling of SPI lacking "interrupt-affinity" property  (Marc Zyngier)  1 file, -3/+4
Patch 19a469a58720 ("drivers/perf: arm-pmu: Handle per-interrupt affinity mask") added support for partitioned PPI setups, but inadvertently broke setups using SPIs without the "interrupt-affinity" property (which is the case for UP platforms). This patch restores the broken functionality by testing whether the interrupt is percpu or not, instead of relying on the using_spi flag that really means "SPI *and* interrupt-affinity property". Acked-by: Mark Rutland <mark.rutland@arm.com> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Tested-by: Geert Uytterhoeven <geert@linux-m68k.org> Fixes: 19a469a58720 ("drivers/perf: arm-pmu: Handle per-interrupt affinity mask") Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-08-09  drivers/perf: arm-pmu: convert arm_pmu_mutex to spinlock  (Sudeep Holla)  1 file, -9/+9
arm_pmu_mutex is never held long and we don't want to sleep while the lock is being held, as it's executed in the context of hotplug notifiers. So it can be converted to a simple spinlock instead. Without this patch we get the following warning: BUG: sleeping function called from invalid context at kernel/locking/mutex.c:620 in_atomic(): 1, irqs_disabled(): 128, pid: 0, name: swapper/2 no locks held by swapper/2/0. irq event stamp: 381314 hardirqs last enabled at (381313): _raw_spin_unlock_irqrestore+0x7c/0x88 hardirqs last disabled at (381314): cpu_die+0x28/0x48 softirqs last enabled at (381294): _local_bh_enable+0x28/0x50 softirqs last disabled at (381293): irq_enter+0x58/0x78 CPU: 2 PID: 0 Comm: swapper/2 Not tainted 4.7.0 #12 Call trace: dump_backtrace+0x0/0x220 show_stack+0x24/0x30 dump_stack+0xb4/0xf0 ___might_sleep+0x1d8/0x1f0 __might_sleep+0x5c/0x98 mutex_lock_nested+0x54/0x400 arm_perf_starting_cpu+0x34/0xb0 cpuhp_invoke_callback+0x88/0x3d8 notify_cpu_starting+0x78/0x98 secondary_start_kernel+0x108/0x1a8 This patch converts the mutex to a spinlock to eliminate the above warning. This constrains pmu->reset to be a non-blocking call, which is the case with all the ARM PMU backends. Cc: Stephen Boyd <sboyd@codeaurora.org> Fixes: 37b502f121ad ("arm/perf: Fix hotplug state machine conversion") Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
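A before/after flavour of the change, as a hedged sketch (the list linkage and callback body are simplified; supported_cpus and reset are real arm_pmu fields, the rest is illustrative): the STARTING callback runs with interrupts off, so only a non-sleeping lock may be taken there.
static DEFINE_SPINLOCK(arm_pmu_lock);   /* was: static DEFINE_MUTEX(arm_pmu_mutex); */
static LIST_HEAD(arm_pmu_list);

static int arm_perf_starting_cpu(unsigned int cpu)
{
        struct arm_pmu *pmu;

        spin_lock(&arm_pmu_lock);        /* mutex_lock() here could sleep */
        list_for_each_entry(pmu, &arm_pmu_list, entry) {
                if (!cpumask_test_cpu(cpu, &pmu->supported_cpus))
                        continue;
                if (pmu->reset)
                        pmu->reset(pmu); /* must therefore be non-blocking */
        }
        spin_unlock(&arm_pmu_lock);
        return 0;
}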
2016-07-29  Merge branch 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)  1 file, -22/+37
Pull smp hotplug updates from Thomas Gleixner: "This is the next part of the hotplug rework.
 - Convert all notifiers with a priority assigned
 - Convert all CPU_STARTING/DYING notifiers
 The final removal of the STARTING/DYING infrastructure will happen when the merge window closes. Another 700 hundred line of unpenetrable maze gone :)"
* 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (70 commits)
  timers/core: Correct callback order during CPU hot plug
  leds/trigger/cpu: Move from CPU_STARTING to ONLINE level
  powerpc/numa: Convert to hotplug state machine
  arm/perf: Fix hotplug state machine conversion
  irqchip/armada: Avoid unused function warnings
  ARC/time: Convert to hotplug state machine
  clocksource/atlas7: Convert to hotplug state machine
  clocksource/armada-370-xp: Convert to hotplug state machine
  clocksource/exynos_mct: Convert to hotplug state machine
  clocksource/arm_global_timer: Convert to hotplug state machine
  rcu: Convert rcutree to hotplug state machine
  KVM/arm/arm64/vgic-new: Convert to hotplug state machine
  smp/cfd: Convert core to hotplug state machine
  x86/x2apic: Convert to CPU hotplug state machine
  profile: Convert to hotplug state machine
  timers/core: Convert to hotplug state machine
  hrtimer: Convert to hotplug state machine
  x86/tboot: Convert to hotplug state machine
  arm64/armv8 deprecated: Convert to hotplug state machine
  hwtracing/coresight-etm4x: Convert to hotplug state machine
  ...
2016-07-20  arm/perf: Fix hotplug state machine conversion  (Sebastian Andrzej Siewior)  1 file, -16/+37
Mark Rutland pointed out that this commit is incomplete: 7d88eb695a1f ("arm/perf: Convert to hotplug state machine") The problem is that: > We may have multiple PMUs (e.g. two in big.LITTLE systems), and > __oprofile_cpu_pmu only contains one of these. So this conversion is not > correct. > > We were relying on the notifier list implicitly containing a list of > those PMUs. It seems like we need an explicit list here. > > We keep __oprofile_cpu_pmu around for legacy 32-bit users of OProfile > (on non-heterogeneous systems), and that's all that the variable should > be used for. Introduce arm_pmu_list to correctly handle multiple PMUs in the system. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Anna-Maria Gleixner <anna-maria@linutronix.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-tip-commits@vger.kernel.org Cc: rt@linutronix.de Link: http://lkml.kernel.org/r/20160719111733.GA22911@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-07-15  arm/perf: Convert to hotplug state machine  (Thomas Gleixner)  1 file, -21/+15
Straightforward conversion w/o bells and whistles. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will.deacon@arm.com> Cc: rt@linutronix.de Link: http://lkml.kernel.org/r/20160713153335.794097159@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-07-08  drivers/perf: arm-pmu: Handle per-interrupt affinity mask  (Marc Zyngier)  1 file, -5/+22
On a big.LITTLE system, PMUs can be wired to CPUs using per-CPU interrupts (PPIs). In this case, it is important to make sure that enabling/disabling happens on the right set of CPUs. So instead of relying on the interrupt-affinity property, we can use the actual percpu affinity that DT exposes as part of the interrupt specifier. The DT binding is also updated to reflect the fact that the interrupt-affinity property shouldn't be used in that case. Acked-by: Rob Herring <robh@kernel.org> Tested-by: Caesar Wang <wxt@rock-chips.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-06-15  arm: pmu: Fix non-devicetree probing  (Mark Salter)  1 file, -1/+1
There is a problem in the non-devicetree PMU probing where some probe functions may get the number of supported events through smp_call_function_any() using the arm_pmu supported_cpus mask. But at the time the probe function is called, the supported_cpus mask is empty so the call fails. This patch makes sure the mask is set before calling the init function rather than after. Signed-off-by: Mark Salter <msalter@redhat.com> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-06-03  drivers/perf: arm_pmu: Avoid leaking pmu->irq_affinity on error  (Julien Grall)  1 file, -0/+1
pmu->irq_affinity will not be freed if an error occurs within arm_pmu_device_probe after of_pmu_irq_cfg has been called. Note that in the case where of_pmu_irq_cfg returns an error, pmu->irq_affinity will not be set, but it should be NULL as pmu was kzalloc'd. Therefore the resulting kfree(NULL) is benign. Signed-off-by: Julien Grall <julien.grall@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-06-03  drivers/perf: arm_pmu: Defer the setting of __oprofile_cpu_pmu  (Julien Grall)  1 file, -3/+3
The global variable __oprofile_cpu_pmu is set before the PMU is fully initialized. If an error occurs before the end of the initialization, the PMU will be freed and the variable will contain an invalid pointer. This will result in a kernel crash when perf will be used. Fix it by moving the setting of __oprofile_cpu_pmu when the PMU is fully initialized (i.e when it is no longer possible to fail). Cc: <stable@vger.kernel.org> Signed-off-by: Julien Grall <julien.grall@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-06-03  drivers/perf: arm_pmu: Fix reference count of a device_node in of_pmu_irq_cfg  (Julien Grall)  1 file, -4/+1
The only function called by of_pmu_irq_cfg that will increment the reference count on dn is of_parse_phandle. Each time we successfully parse a possible CPU from an interrupt-affinity property, we increment the refcount of that CPU node once via of_parse_phandle. After validating the CPU is possible, we decrement the refcount once. Subsequently, we decrement the refcount again, either as part of an early break if we don't have a matching SPI, or as part of the end of the loop body. This leads to decrementing the refcount twice. Remove the second pair of calls to of_node_put, as nobody is using dn between the first and second call to of_node_put. Signed-off-by: Julien Grall <julien.grall@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
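A hedged sketch of the balanced pattern that results (of_cpu_node_to_id() is used here purely to illustrate the CPU lookup; it is not the code in this driver): each of_parse_phandle() get is matched by exactly one of_node_put(), whether the iteration continues or stops.
static void armpmu_parse_irq_affinity(struct device_node *node, struct arm_pmu *pmu)
{
        struct device_node *dn;
        int i, cpu;

        for (i = 0; (dn = of_parse_phandle(node, "interrupt-affinity", i)); i++) {
                cpu = of_cpu_node_to_id(dn);   /* illustrative CPU lookup */
                of_node_put(dn);               /* one put balances the one get above */
                if (cpu < 0)
                        continue;
                cpumask_set_cpu(cpu, &pmu->supported_cpus);
        }
}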