path: root/arch/arm/mach-zynq/platsmp.c
Age  Commit message  (Author, files changed, lines -removed/+added)
2020-01-08  ARM: zynq: use physical cpuid in zynq_slcr_cpu_stop/start  (Quanyang Wang, 1 file, -2/+4)
When the kernel boots, it creates a cpuid map between logical and physical CPUs. In a normal boot the map is:

    Physical    Logical
    0     ==>   0
    1     ==>   1

But in kdump there is a case where the crash happens on physical cpu1 and the crash kernel then also runs on physical cpu1, so the cpuid map in the crash kernel becomes:

    Physical    Logical
    1     ==>   0
    0     ==>   1

The functions zynq_slcr_cpu_stop/start stop/start the physical CPUs, so their cpu parameter must be a physical cpuid. Use cpu_logical_map to translate the logical cpuid to the physical cpuid; otherwise the logical cpu0 (physical cpu1) will stop itself and the processor will hang.

Signed-off-by: Quanyang Wang <quanyang.wang@windriver.com>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
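A minimal sketch of the translation, assuming the SLCR helpers take a physical CPU number as described above (the wrapper functions below are illustrative, not the file's actual code):

    #include <asm/smp_plat.h>   /* cpu_logical_map() */
    #include "common.h"         /* zynq_slcr_cpu_stop/start() (mach-zynq) */

    /* Illustrative wrappers: the SMP core hands out logical ids, while the
     * SLCR identifies CPUs by their physical id, so translate at the
     * boundary. */
    static void zynq_stop_cpu(unsigned int logical_cpu)
    {
            zynq_slcr_cpu_stop(cpu_logical_map(logical_cpu));
    }

    static void zynq_start_cpu(unsigned int logical_cpu)
    {
            zynq_slcr_cpu_start(cpu_logical_map(logical_cpu));
    }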
2019-08-14  ARM: zynq: Use memcpy_toio instead of memcpy on smp bring-up  (Luis Araneda, 1 file, -1/+1)
This fixes a kernel panic in memcpy when FORTIFY_SOURCE is enabled.

The initial SMP implementation in commit aa7eb2bb4e4a ("arm: zynq: Add smp support") used memcpy, which worked fine until commit ee333554fed5 ("ARM: 8749/1: Kconfig: Add ARCH_HAS_FORTIFY_SOURCE") enabled overflow checks at runtime, producing a read-overflow panic.

The computed sizes of the memcpy arguments are:
- p_size (dst): 4294967295 = (size_t) -1
- q_size (src): 1
- size (len): 8

Additionally, the memory is marked as __iomem, so one of the memcpy_* I/O accessors should be used for reads/writes.

Fixes: aa7eb2bb4e4a ("arm: zynq: Add smp support")
Signed-off-by: Luis Araneda <luaraneda@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
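A sketch of the kind of change involved, assuming an __iomem mapping for the zero page / OCM and a small trampoline blob (the names below are illustrative, not the file's actual identifiers):

    #include <linux/io.h>       /* memcpy_toio() */

    /* Illustrative: copy the secondary-boot trampoline into an __iomem area.
     * A plain memcpy() on an __iomem pointer is both wrong for MMIO and, with
     * FORTIFY_SOURCE, trips the runtime overflow check; memcpy_toio() is the
     * accessor intended for this. */
    static void copy_trampoline(void __iomem *zero_page, const void *code,
                                size_t len)
    {
            memcpy_toio(zero_page, code, len);
    }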
2019-08-14  ARM: zynq: Support smp in thumb mode  (Luis Araneda, 1 file, -1/+1)
Add a .arm directive to headsmp.S to ensure that the CPU starts in 32-bit ARM mode and that the correct code size is copied on SMP bring-up. This is related to the fix applied to SoCFPGA by commit 5616f36713ea ("ARM: SoCFPGA: Fix secondary CPU startup in thumb2 kernel").

Additionally, start secondary CPUs at secondary_startup_arm to automatically switch from ARM to Thumb on a Thumb kernel.

Signed-off-by: Luis Araneda <luaraneda@gmail.com>
Suggested-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
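On the platsmp.c side this amounts to handing the start routine the address of secondary_startup_arm instead of secondary_startup; a hedged sketch (the helper name zynq_cpu_start() and its prototype are assumptions here):

    #include <linux/sched.h>
    #include <linux/types.h>
    #include <asm/memory.h>     /* __pa_symbol() */

    extern void secondary_startup_arm(void);          /* arch/arm/kernel/head.S */
    extern int zynq_cpu_start(u32 address, int cpu);  /* assumed prototype */

    /* Illustrative boot hook: on a Thumb-2 kernel, secondary_startup_arm is
     * entered in ARM state and switches to Thumb before reaching
     * secondary_startup. */
    static int zynq_boot_secondary(unsigned int cpu, struct task_struct *idle)
    {
            return zynq_cpu_start(__pa_symbol(secondary_startup_arm), cpu);
    }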
2019-06-05  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 282  (Thomas Gleixner, 1 file, -9/+1)
Based on 1 normalized pattern(s):

    this software is licensed under the terms of the gnu general public
    license version 2 as published by the free software foundation and may
    be copied distributed and modified under those terms this program is
    distributed in the hope that it will be useful but without any warranty
    without even the implied warranty of merchantability or fitness for a
    particular purpose see the gnu general public license for more details

extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

has been chosen to replace the boilerplate/reference in 285 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190529141900.642774971@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-02-28  ARM: 8641/1: treewide: Replace uses of virt_to_phys with __pa_symbol  (Florian Fainelli, 1 file, -1/+1)
All low-level PM/SMP code using virt_to_phys() should actually use __pa_symbol() against kernel symbols. Update code where relevant to move away from virt_to_phys().

Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
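The pattern of the change, sketched on a hypothetical call site (the surrounding function is illustrative; only the virt_to_phys() to __pa_symbol() swap is the point):

    #include <asm/memory.h>     /* __pa_symbol(), virt_to_phys() */

    extern void secondary_startup(void);

    /* Hypothetical example: a platform needs the physical address of a
     * kernel-image symbol to program a boot register. __pa_symbol() is the
     * correct primitive for symbols; virt_to_phys() is only valid for
     * addresses in the linear mapping. */
    static unsigned long boot_entry_phys(void)
    {
            /* was: return virt_to_phys(secondary_startup); */
            return __pa_symbol(secondary_startup);
    }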
2015-12-01  ARM: use const and __initconst for smp_operations  (Masahiro Yamada, 1 file, -1/+1)
These smp_operations structures are not over-written, so add the "const" qualifier and replace __initdata with __initconst. Also, add "static" where it is possible.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Moritz Fischer <moritz.fischer@ettus.com>
Acked-by: Stephen Boyd <sboyd@codeaurora.org> # qcom part
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Patrice Chotard <patrice.chotard@st.com>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Acked-by: Wei Xu <xuwei5@hisilicon.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Acked-by: Matthias Brugger <matthias.bgg@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Liviu Dudau <Liviu.Dudau@arm.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
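The resulting shape of such a structure, sketched with illustrative zynq callback names (the struct smp_operations member names are real; whether the zynq instance can also be static depends on how it is referenced elsewhere):

    #include <asm/smp.h>        /* struct smp_operations */

    static const struct smp_operations zynq_smp_ops __initconst = {
            .smp_prepare_cpus       = zynq_smp_prepare_cpus,
            .smp_boot_secondary     = zynq_boot_secondary,
    #ifdef CONFIG_HOTPLUG_CPU
            .cpu_die                = zynq_cpu_die,
            .cpu_kill               = zynq_cpu_kill,
    #endif
    };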
2015-06-01  ARM: v7 setup function should invalidate L1 cache  (Russell King, 1 file, -3/+2)
All ARMv5 and older CPUs invalidate their caches in the early assembly setup function, prior to enabling the MMU. This is because the L1 cache should not contain any data relevant to the execution of the kernel at this point; all data should have been flushed out to memory.

This requirement should also be true for ARMv6 and ARMv7 CPUs - indeed, these typically do not search their caches when caching is disabled (as it needs to be when the MMU is disabled) so this change should be safe.

ARMv7 allows there to be CPUs which search their caches while caching is disabled, and it's permitted that the cache is uninitialised at boot; for these, the architecture reference manual requires that an implementation specific code sequence is used immediately after reset to ensure that the cache is placed into a sane state. Such functionality is definitely outside the remit of the Linux kernel, and must be done by the SoC's firmware before _any_ CPU gets to the Linux kernel.

Changing the data cache clean+invalidate to a mere invalidate allows us to get rid of a lot of platform specific hacks around this issue for their secondary CPU bringup paths - some of which were buggy.

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com>
Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-09-16  ARM: zynq: Rename 'zynq_platform_cpu_die'  (Soren Brinkmann, 1 file, -5/+7)
Match the naming pattern of all other SMP ops and rename zynq_platform_cpu_die --> zynq_cpu_die.

Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2014-09-16  ARM: zynq: Remove hotplug.c  (Soren Brinkmann, 1 file, -0/+18)
The hotplug code contains only a single function, which is an SMP function. Move that to platsmp.c, where all other SMP functions reside. That allows removing hotplug.c and declaring the cpu_die function static.

Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2014-09-16  ARM: zynq: Synchronise zynq_cpu_die/kill  (Soren Brinkmann, 1 file, -0/+6)
Avoid races and add synchronisation between the arch specific kill and die routines. The same synchronisation issue was fixed on IMX platform by this commit:
"ARM: imx: fix sync issue between imx_cpu_die and imx_cpu_kill"
(sha1: 2f3edfd7e27ad4206acbc2ae99c9df5f46353024)

Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
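One common way to implement such a handshake, sketched with a plain flag (the real zynq code goes through SLCR state helpers; every name below other than udelay()/cpu_do_idle() and zynq_slcr_cpu_stop() is illustrative):

    #include <linux/atomic.h>
    #include <linux/delay.h>
    #include <linux/threads.h>  /* NR_CPUS */
    #include <asm/proc-fns.h>   /* cpu_do_idle() */
    #include "common.h"         /* zynq_slcr_cpu_stop() (mach-zynq) */

    static atomic_t zynq_cpu_dead[NR_CPUS];

    /* Runs on the dying CPU: publish "I am done", then idle forever. */
    static void zynq_cpu_die(unsigned int cpu)
    {
            atomic_set(&zynq_cpu_dead[cpu], 1);
            while (1)
                    cpu_do_idle();
    }

    /* Runs on a surviving CPU: wait for the signal before the SLCR pulls
     * the plug, so the dying CPU is never stopped mid-flight. */
    static int zynq_cpu_kill(unsigned int cpu)
    {
            while (!atomic_read(&zynq_cpu_dead[cpu]))
                    udelay(10);
            zynq_slcr_cpu_stop(cpu);
            return 1;
    }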
2014-09-16  ARM: zynq: PM: Enable A9 internal clock gating feature  (Soren Brinkmann, 1 file, -0/+13)
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-12-10  ARM: zynq: remove unnecessary setting of cpu_present_mask  (Sudeep KarkadaNagesha, 1 file, -9/+0)
This removes the setting of cpu_present_mask, as platforms should only re-initialize it in smp_prepare_cpus() if present != possible.

Cc: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-12-10  arm: zynq: Add support for zynq_cpu_kill function  (Michal Simek, 1 file, -0/+9)
Use a simple hook into the SLCR to stop the CPU.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
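Roughly what such a hook looks like (zynq_slcr_cpu_stop() is the SLCR helper this entry refers to; the surrounding details are a sketch):

    #include "common.h"         /* zynq_slcr_cpu_stop() (mach-zynq) */

    #ifdef CONFIG_HOTPLUG_CPU
    /* Called by the SMP core on a surviving CPU to finish taking 'cpu' down. */
    static int zynq_cpu_kill(unsigned int cpu)
    {
            zynq_slcr_cpu_stop(cpu);        /* gate/reset the CPU via the SLCR */
            return 1;                       /* non-zero means success */
    }
    #endif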
2013-12-10  arm: zynq: Invalidate L1 in secondary boot  (Soren Brinkmann, 1 file, -1/+1)
During boot, Linux initiates a clean-invalidate operation only, resulting in faulty data being written to the memory system during resume. Therefore, invalidate the L1 in the secondary boot path to avoid these issues.

Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-12-10  arm: zynq: platsmp: Remove CPU presence check  (Soren Brinkmann, 1 file, -5/+0)
The generic code already checks that the CPU being requested is legal if the cpu possible/present masks are set correctly.

Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-07-14  arm: delete __cpuinit/__CPUINIT usage from all ARM users  (Paul Gortmaker, 1 file, -3/+3)
The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.

Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from the arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit related content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless.

This removes all the ARM uses of the __cpuinit macros from C code, and all __CPUINIT from assembly code. It also had two ".previous" section statements that were paired off against __CPUINIT (aka .section ".cpuinit.text") that also get removed here.

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2013-06-17  ARM: zynq: Not to rewrite jump code when starting address is 0x0  (Michal Simek, 1 file, -26/+26)
This configuration is used by remoteproc.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
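The guard is conceptually small; a sketch with assumed names (install_trampoline() is hypothetical, standing in for the code that copies the jump stub to address 0x0, and the function signature is an assumption):

    #include <linux/types.h>
    #include "common.h"         /* zynq_slcr_cpu_start() (mach-zynq) */

    /* Illustrative: when remoteproc already owns the code at address 0x0,
     * the requested entry address is 0x0 and the jump code must be left
     * untouched; otherwise install the trampoline as usual. */
    static int zynq_cpu_start_sketch(u32 address, int cpu)
    {
            if (address)
                    install_trampoline(address);    /* hypothetical helper */
            zynq_slcr_cpu_start(cpu);
            return 0;
    }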
2013-04-08  Merge branch 'gic/cleanup' into next/soc2  (Arnd Bergmann, 1 file, -16/+0)
Both zynq and shmobile have conflicts against the gic cleanup series, resolved here.

Conflicts:
	arch/arm/mach-shmobile/smp-emev2.c
	arch/arm/mach-shmobile/smp-r8a7779.c
	arch/arm/mach-shmobile/smp-sh73a0.c
	arch/arm/mach-zynq/platsmp.c
	drivers/gpio/gpio-pl061.c

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2013-04-04  arm: zynq: Add hotplug support  (Michal Simek, 1 file, -0/+3)
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-04-04  arm: zynq: Add smp support  (Michal Simek, 1 file, -0/+149)
Zynq is a dual-core Cortex-A9 which always starts executing at address zero. Using a simple trampoline ensures a long jump to the secondary_startup code.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
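In broad strokes, the bring-up added here works as sketched below (write_trampoline_at_zero() is a hypothetical helper standing in for the copy-and-flush of the jump stub, and the function signature is an assumption; zynq_slcr_cpu_stop()/start() are the SLCR helpers this series relies on):

    #include <linux/types.h>
    #include "common.h"         /* zynq_slcr_cpu_stop/start() (mach-zynq) */

    /* 1. Once released from reset by the SLCR, CPU1 starts executing at
     *    physical address 0x0.
     * 2. The kernel therefore plants a tiny trampoline at 0x0 whose only job
     *    is a long jump to the physical address of the secondary entry point.
     * 3. The SLCR stops the CPU, the trampoline is written and flushed, and
     *    the CPU is started again so it falls through the trampoline into the
     *    kernel's secondary_startup path. */
    static int zynq_boot_cpu_sketch(u32 address, int cpu)
    {
            zynq_slcr_cpu_stop(cpu);                /* hold the CPU */
            write_trampoline_at_zero(address);      /* hypothetical helper */
            zynq_slcr_cpu_start(cpu);               /* release it */
            return 0;
    }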