2020-08-04  powerpc: Fix circular dependency between percpu.h and mmu.h  (Michael Ellerman)  [1 file, -2/+2]

Recently random.h started including percpu.h (see commit f227e3ec3b5c ("random32: update the net random state on interrupt and activity")), which broke corenet64_smp_defconfig:

  In file included from /linux/arch/powerpc/include/asm/paca.h:18,
                   from /linux/arch/powerpc/include/asm/percpu.h:13,
                   from /linux/include/linux/random.h:14,
                   from /linux/lib/uuid.c:14:
  /linux/arch/powerpc/include/asm/mmu.h:139:22: error: unknown type name 'next_tlbcam_idx'
    139 | DECLARE_PER_CPU(int, next_tlbcam_idx);

This is due to a circular header dependency:

  asm/mmu.h includes asm/percpu.h, which includes asm/paca.h, which includes asm/mmu.h

Which means DECLARE_PER_CPU() isn't defined when mmu.h needs it.

We can fix it by moving the include of paca.h below the include of asm-generic/percpu.h. This moves the include of paca.h out of the #ifdef __powerpc64__, but that is OK because paca.h is almost entirely inside #ifdef CONFIG_PPC64 anyway. It also moves the include of paca.h out of the #ifdef CONFIG_SMP, which could possibly break something, but seems to have no ill effects.

Fixes: f227e3ec3b5c ("random32: update the net random state on interrupt and activity")
Cc: stable@vger.kernel.org # v5.8
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200804130558.292328-1-mpe@ellerman.id.au

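For illustration, the reordering described above boils down to something like the following sketch of asm/percpu.h (simplified; the real header contains more than these lines):

  /* arch/powerpc/include/asm/percpu.h -- simplified sketch of the fix */
  #ifndef _ASM_POWERPC_PERCPU_H_
  #define _ASM_POWERPC_PERCPU_H_

  #ifdef __powerpc64__
  /* References local_paca only inside a macro body, so no include needed yet. */
  #define __my_cpu_offset local_paca->data_offset
  #endif

  #include <asm-generic/percpu.h>  /* defines DECLARE_PER_CPU() and friends */

  /* Moved below asm-generic/percpu.h: when paca.h pulls in mmu.h,
   * DECLARE_PER_CPU() is now already defined. */
  #include <asm/paca.h>

  #endif /* _ASM_POWERPC_PERCPU_H_ */
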
2020-08-03  powerpc/powernv/sriov: Fix use of uninitialised variable  (Oliver O'Halloran)  [1 file, -3/+1]

Initialising the value before using it is generally regarded as a good idea so do that.

Fixes: 4c51f3e1e870 ("powerpc/powernv/sriov: Make single PE mode a per-BAR setting")
Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200803075408.132601-1-oohall@gmail.com

2020-08-03  selftests/powerpc: Skip vmx/vsx/tar/etc tests on older CPUs  (Michael Ellerman)  [10 files, -6/+34]

Some of our tests use VSX or newer VMX instructions, so need to be skipped on older CPUs to avoid SIGILL'ing.

Similarly TAR was added in v2.07, and the PMU event used in the stcx fail test only works on Power8 or later.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200803020719.96114-1-mpe@ellerman.id.au

2020-08-03  powerpc/40x: Fix assembler warning about r0  (Michael Ellerman)  [1 file, -1/+1]

The assembler says:

  arch/powerpc/kernel/head_40x.S:623: Warning: invalid register expression

It's objecting to the use of r0 as the RA argument. That's because when RA = 0 the literal value 0 is used, rather than the content of r0, making the use of r0 in the source potentially confusing.

Fix it to use a literal 0; the generated code is identical.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200722022422.825197-1-mpe@ellerman.id.au

2020-07-31  powerpc/papr_scm: Add support for fetching nvdimm 'fuel-gauge' metric  (Vaibhav Jain)  [2 files, -0/+58]

We add support for reporting the 'fuel-gauge' NVDIMM metric via the PAPR_PDSM_HEALTH pdsm payload. The 'fuel-gauge' metric indicates the usage life remaining of a papr-scm compatible NVDIMM. PHYP exposes this metric via H_SCM_PERFORMANCE_STATS.

The metric value is returned from the pdsm by extending the return payload 'struct nd_papr_pdsm_health' without breaking the ABI. A new field 'dimm_fuel_gauge' to hold the metric value is introduced at the end of the payload struct and its presence is indicated by the extension flag PDSM_DIMM_HEALTH_RUN_GAUGE_VALID.

The patch introduces a new function papr_pdsm_fuel_gauge() that is called from papr_pdsm_health(). If fetching NVDIMM performance stats is supported, then papr_pdsm_fuel_gauge() allocates an output buffer large enough to hold the performance stat and passes it to drc_pmem_query_stats(), which issues the HCALL to PHYP. The returned value of the stat is then populated in the 'struct nd_papr_pdsm_health.dimm_fuel_gauge' field, with the extension flag 'PDSM_DIMM_HEALTH_RUN_GAUGE_VALID' set in 'struct nd_papr_pdsm_health.extension_flags'.

Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200731064153.182203-3-vaibhav@linux.ibm.com

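The ABI-safe extension pattern described above would look roughly like this sketch; only the field and flag names come from the text, the union layout and flag value are assumptions:

  /* Sketch only: layout details beyond the commit text are assumed. */
  #define PDSM_DIMM_HEALTH_RUN_GAUGE_VALID 1

  struct nd_papr_pdsm_health {
          union {
                  struct {
                          __u32 extension_flags;
                          /* ... pre-existing health fields ... */

                          /* New field, appended at the end so older
                           * userspace keeps working; only valid when
                           * PDSM_DIMM_HEALTH_RUN_GAUGE_VALID is set. */
                          __u16 dimm_fuel_gauge;
                  };
                  __u8 buf[ND_PDSM_PAYLOAD_MAX_SIZE];
          };
  };
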
2020-07-31  powerpc/papr_scm: Fetch nvdimm performance stats from PHYP  (Vaibhav Jain)  [2 files, -0/+177]

Update papr_scm.c to query dimm performance statistics from PHYP via the H_SCM_PERFORMANCE_STATS hcall and export them to user-space as the PAPR specific NVDIMM attribute 'perf_stats' in sysfs. The patch also provides sysfs ABI documentation for the stats being reported and their meanings.

During NVDIMM probe time in papr_scm_nvdimm_init(), a special variant of the H_SCM_PERFORMANCE_STATS hcall is issued to check whether collection of performance statistics is supported. If successful, PHYP returns the maximum possible buffer length needed to read all performance stats. This returned value is stored in a per-nvdimm attribute 'stat_buffer_len'.

The layout of the request buffer for reading NVDIMM performance stats from PHYP is defined in 'struct papr_scm_perf_stats' and 'struct papr_scm_perf_stat'. These structs are used in the newly introduced drc_pmem_query_stats() that issues the H_SCM_PERFORMANCE_STATS hcall.

The sysfs access function perf_stats_show() uses the value 'stat_buffer_len' to allocate a buffer large enough to hold all possible NVDIMM performance stats and passes it to drc_pmem_query_stats() to populate. Finally the statistics reported in the buffer are formatted into the sysfs access function's output buffer.

Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200731064153.182203-2-vaibhav@linux.ibm.com

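Based on the description, the buffer layout plausibly looks like this sketch; the struct names are from the commit, but the field sizes and the eye-catcher are assumptions:

  struct papr_scm_perf_stat {
          u8 stat_id[8];          /* ASCII name of the statistic */
          __be64 stat_val;        /* value reported by PHYP */
  };

  struct papr_scm_perf_stats {
          u8 eye_catcher[8];      /* identifies a valid stats buffer */
          __be32 stats_version;
          __be32 num_statistics;  /* 0 on request => fetch all stats */
          struct papr_scm_perf_stat scm_statistic[]; /* stats follow */
  };
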
2020-07-30  cpuidle: pseries: Fixup exit latency for CEDE(0)  (Gautham R. Shenoy)  [1 file, -3/+42]

We are currently assuming that CEDE(0) has exit latency 10us, since there is no way for us to query from the platform. However, if the wakeup latency of an Extended CEDE state is smaller than 10us, then we can be sure that the exit latency of CEDE(0) cannot be more than that.

In this patch, we fix the exit latency of CEDE(0) if we discover an Extended CEDE state with wakeup latency smaller than 10us.

Benchmark results:

On POWER8, this patch does not have any impact since the advertised latency of Extended CEDE (1) is 30us, which is higher than the default latency of CEDE (0) which is 10us.

On POWER9 we see improvement in the single-threaded performance of ebizzy, and no regression in the wakeup latency or the number of context-switches.

ebizzy: 2 ebizzy threads bound to the same big-core. 25% improvement in the avg records/s with patch.

  x without_patch
  * with_patch
      N       Min       Max    Median        Avg     Stddev
  x  10   2491089   5834307   5398375    4244335  1596244.9
  *  10   2893813   5834474   5832448  5327281.3  1055941.4

context_switch2: There is no major regression observed with this patch as seen from the context_switch2 benchmark.

context_switch2 across CPU0 CPU1 (both belong to the same big-core, but different small cores). We observe a minor 0.14% regression in the number of context-switches (higher is better).

  x without_patch
  * with_patch
      N      Min      Max   Median        Avg     Stddev
  x 500   348872   362236   354712  354745.69   2711.827
  * 500   349422   361452   353942   354215.4  2576.9258

  Difference at 99.0% confidence
          -530.288 +/- 430.963
          -0.149484% +/- 0.121485%
          (Student's t, pooled s = 2645.24)

context_switch2 across CPU0 CPU8 (different big-cores). We observe a 0.37% improvement in the number of context-switches (higher is better).

  x without_patch
  * with_patch
      N      Min      Max   Median        Avg     Stddev
  x 500   287956   294940   288896  288977.23  646.59295
  * 500   288300   294646   289582  290064.76  1161.9992

  Difference at 99.0% confidence
          1087.53 +/- 153.194
          0.376337% +/- 0.0530125%
          (Student's t, pooled s = 940.299)

schbench: No major difference could be seen until the 99.9th percentile.

  Without-patch:
  Latency percentiles (usec)
          50.0th: 29
          75.0th: 39
          90.0th: 49
          95.0th: 59
          *99.0th: 13104
          99.5th: 14672
          99.9th: 15824
          min=0, max=17993

  With-patch:
  Latency percentiles (usec)
          50.0th: 29
          75.0th: 40
          90.0th: 50
          95.0th: 61
          *99.0th: 13648
          99.5th: 14768
          99.9th: 15664
          min=0, max=29812

Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
[mpe: Minor formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1596087177-30329-4-git-send-email-ego@linux.vnet.ibm.com

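Conceptually the fixup reduces to a min() over the discovered wakeup latencies, along these lines; this is a sketch, the variable and array names are assumed, and tb_ticks_per_usec is the kernel's timebase-to-microseconds divisor:

  static void fixup_cede0_latency(void)
  {
          /* dedicated_states[1] assumed to be CEDE(0), default 10us */
          u64 min_latency_us = dedicated_states[1].exit_latency;
          int i;

          for (i = 0; i < nr_xcede_records; i++) {
                  u64 latency_us = be64_to_cpu(xcede_records[i].latency_ticks) /
                                   tb_ticks_per_usec;

                  /* CEDE(0) cannot be slower than any extended CEDE state. */
                  if (latency_us < min_latency_us)
                          min_latency_us = latency_us;
          }

          dedicated_states[1].exit_latency = min_latency_us;
  }
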
2020-07-30  cpuidle: pseries: Add function to parse extended CEDE records  (Gautham R. Shenoy)  [1 file, -1/+135]

Currently we use CEDE with latency-hint 0 as the only other idle state on a dedicated LPAR apart from the polling "snooze" state.

The platform might support additional extended CEDE idle states, which can be discovered through the "ibm,get-system-parameter" rtas-call made with CEDE_LATENCY_TOKEN.

This patch adds a function to obtain information about the extended CEDE idle states from the platform and parse the contents to populate an array of extended CEDE states. These idle states thus discovered will be added to the cpuidle framework in the next patch.

dmesg on a POWER8 and POWER9 LPAR, demonstrating the output of parsing the extended CEDE latency parameters, is as follows:

  POWER8
  [   10.093279] xcede : xcede_record_size = 10
  [   10.093285] xcede : Record 0 : hint = 1, latency = 0x3c00 tb ticks, Wake-on-irq = 1
  [   10.093291] xcede : Record 1 : hint = 2, latency = 0x4e2000 tb ticks, Wake-on-irq = 0
  [   10.093297] cpuidle : Skipping the 2 Extended CEDE idle states

  POWER9
  [    5.913180] xcede : xcede_record_size = 10
  [    5.913183] xcede : Record 0 : hint = 1, latency = 0x400 tb ticks, Wake-on-irq = 1
  [    5.913188] xcede : Record 1 : hint = 2, latency = 0x3e8000 tb ticks, Wake-on-irq = 0
  [    5.913193] cpuidle : Skipping the 2 Extended CEDE idle states

Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
[mpe: Make space for 16 records, drop memset, minor cleanup & formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1596087177-30329-3-git-send-email-ego@linux.vnet.ibm.com

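Given "xcede_record_size = 10" in the logs above, each record plausibly packs into 10 bytes, e.g. this sketch (field names inferred from the dmesg output, layout assumed):

  struct xcede_latency_record {
          u8      hint;           /* latency-hint to pass to H_CEDE */
          __be64  latency_ticks;  /* wakeup latency in timebase ticks */
          u8      wake_on_irq;    /* 1 if an interrupt exits this state */
  } __packed;                     /* 1 + 8 + 1 = 10 bytes */
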
2020-07-30  cpuidle: pseries: Set the latency-hint before entering CEDE  (Gautham R. Shenoy)  [1 file, -2/+10]

As per the PAPR, each H_CEDE call is associated with a latency-hint to be passed in the VPA field "cede_latency_hint". The CEDE state that we have been entering implicitly so far is CEDE with latency-hint = 0.

This patch explicitly sets the latency hint corresponding to the CEDE state that we are currently entering. While at it, we save the previous hint, to be restored once we wake up from CEDE. This will be required in the future when we expose extended-cede states through the cpuidle framework, where each of them will have a different cede-latency hint.

Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
[mpe: Make cede_latency_hint static]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1596087177-30329-2-git-send-email-ego@linux.vnet.ibm.com

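The save/set/restore around H_CEDE boils down to poking the VPA field through the lppaca; a sketch (get_lppaca() is the real accessor, the wrapper names and usage are assumed):

  static inline u8 get_cede_latency_hint(void)
  {
          return get_lppaca()->cede_latency_hint;
  }

  static inline void set_cede_latency_hint(u8 latency_hint)
  {
          get_lppaca()->cede_latency_hint = latency_hint;
  }

  /* usage around idle entry */
  old_hint = get_cede_latency_hint();
  set_cede_latency_hint(new_hint);
  cede_processor();               /* makes the H_CEDE hcall */
  set_cede_latency_hint(old_hint);
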
2020-07-30  selftests/powerpc: Fix online CPU selection  (Sandipan Das)  [1 file, -12/+25]

The size of the CPU affinity mask must be large enough for systems with a very large number of CPUs. Otherwise, tests which try to determine the first online CPU by calling sched_getaffinity() will fail. This makes sure that the size of the allocated affinity mask is dependent on the number of CPUs as reported by get_nprocs_conf().

Fixes: 3752e453f6ba ("selftests/powerpc: Add tests of PMU EBBs")
Reported-by: Shirisha Ganta <shiganta@in.ibm.com>
Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a408c4b8e9a23bb39b539417a21eb0ff47bb5127.1596084858.git.sandipan@linux.ibm.com

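The dynamically sized mask pattern uses the glibc CPU_*_S macros; a minimal userspace sketch of picking the first online CPU this way (the helper name is illustrative):

  #include <sched.h>
  #include <sys/sysinfo.h>

  static int pick_online_cpu(void)
  {
          int ncpus = get_nprocs_conf();  /* configured, not just online */
          cpu_set_t *mask = CPU_ALLOC(ncpus);
          size_t size = CPU_ALLOC_SIZE(ncpus);
          int cpu = -1, i;

          if (!mask)
                  return -1;

          CPU_ZERO_S(size, mask);
          if (sched_getaffinity(0, size, mask) == 0) {
                  for (i = 0; i < ncpus; i++) {
                          if (CPU_ISSET_S(i, size, mask)) {
                                  cpu = i;
                                  break;
                          }
                  }
          }

          CPU_FREE(mask);
          return cpu;
  }
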
2020-07-30  powerpc/perf: Consolidate perf_callchain_user_[64|32]()  (Michal Suchanek)  [3 files, -30/+29]

perf_callchain_user_64() and perf_callchain_user_32() are nearly identical. Consolidate into one function with thin wrappers.

Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
[mpe: Adapt to copy_from_user_nofault(), minor formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200406210022.32265-1-msuchanek@suse.de

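The shape of the change is the usual "one body, thin wrappers" pattern; a sketch under the assumption that the only difference between the two paths is the word size being read from the user stack:

  static int __read_user_stack(const void __user *ptr, void *ret, size_t size)
  {
          if ((unsigned long)ptr > TASK_SIZE - size ||
              ((unsigned long)ptr & (size - 1)))
                  return -EFAULT;

          return copy_from_user_nofault(ret, ptr, size);
  }

  static int read_user_stack_64(const unsigned long __user *ptr, unsigned long *ret)
  {
          return __read_user_stack(ptr, ret, sizeof(*ret));
  }

  static int read_user_stack_32(const unsigned int __user *ptr, unsigned int *ret)
  {
          return __read_user_stack(ptr, ret, sizeof(*ret));
  }
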
2020-07-30  powerpc/pseries/hotplug-cpu: Remove double free in error path  (Nathan Lynch)  [1 file, -1/+0]

In the unlikely event that the device tree lacks a /cpus node, find_dlpar_cpus_to_add() oddly frees the cpu_drcs buffer it has been passed before returning an error. Its only caller also frees the buffer on error.

Remove the less conventional kfree() of a caller-supplied buffer from find_dlpar_cpus_to_add().

Fixes: 90edf184b9b7 ("powerpc/pseries: Add CPU dlpar add functionality")
Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190919231633.1344-1-nathanl@linux.ibm.com

2020-07-30  powerpc/pseries/mobility: Add pr_debug() for device tree changes  (Nathan Lynch)  [1 file, -0/+5]

When investigating issues with partition migration or resource reassignments it is helpful to have a log of which nodes and properties in the device tree have changed. Use pr_debug() so it's easy to enable these at runtime with the dynamic debug facility.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190627053044.9238-3-nathanl@linux.ibm.com

2020-07-30  powerpc/pseries/mobility: Set pr_fmt()  (Nathan Lynch)  [1 file, -2/+5]

The pr_err() callsites in mobility.c already manually include a "mobility:" prefix; let's make it official for the benefit of messages to be added later.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190627053044.9238-2-nathanl@linux.ibm.com

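The idiom is a one-liner placed before any printk-related include, so every pr_*() call in the file picks up the prefix automatically:

  #define pr_fmt(fmt) "mobility: " fmt

  #include <linux/kernel.h>
  #include <linux/printk.h>

  /* pr_err("event failed\n") now prints "mobility: event failed" */
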
2020-07-30  powerpc/cacheinfo: Warn if cache object chain becomes unordered  (Nathan Lynch)  [1 file, -0/+9]

This can catch cases where the device tree has gotten mishandled into an inconsistent state at runtime, e.g. the cache nodes for both the source and the destination are present after a migration.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190627051537.7298-5-nathanl@linux.ibm.com

2020-07-30  powerpc/cacheinfo: Improve diagnostics about malformed cache lists  (Nathan Lynch)  [1 file, -2/+8]

If we have a bug which causes us to start with the wrong kind of OF node when linking up the cache tree, it's helpful for debugging to print information about what we found vs what we expected. So replace uses of WARN_ON_ONCE with WARN_ONCE, which lets us include an informative message instead of a contentless backtrace.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190627051537.7298-4-nathanl@linux.ibm.com

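The before/after shape of such a change, with an illustrative condition rather than the literal diff:

  /* before: backtrace with no explanation */
  WARN_ON_ONCE(!of_node_is_type(node, "cpu"));

  /* after: says what was found vs what was expected */
  WARN_ONCE(!of_node_is_type(node, "cpu"),
            "Unexpected node type for %pOF\n", node);
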
2020-07-30  powerpc/cacheinfo: Use name@unit instead of full DT path in debug messages  (Nathan Lynch)  [1 file, -8/+8]

We know that every OF node we deal with in this code is under /cpus, so we can make the debug messages a little less verbose without losing information. E.g.

  cacheinfo: creating L1 dcache and icache for /cpus/PowerPC,POWER8@0
  cacheinfo: creating L2 ucache for /cpus/l2-cache@2006
  cacheinfo: creating L3 ucache for /cpus/l3-cache@3106

becomes

  cacheinfo: creating L1 dcache and icache for PowerPC,POWER8@0
  cacheinfo: creating L2 ucache for l2-cache@2006
  cacheinfo: creating L3 ucache for l3-cache@3106

Replace all '%pOF' specifiers with '%pOFP'.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190627051537.7298-3-nathanl@linux.ibm.com

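For reference, '%pOFP' prints just the name@unit component of a device node, so a callsite along these lines produces the shorter output shown above (illustrative callsite, not the literal diff):

  pr_debug("creating L%d ucache for %pOFP\n", level, node);
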
2020-07-30  powerpc/cacheinfo: Set pr_fmt()  (Nathan Lynch)  [1 file, -0/+2]

Set pr_fmt() so we get a nice prefix on messages.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190627051537.7298-2-nathanl@linux.ibm.com

2020-07-30  powerpc: fix function annotations to avoid section mismatch warnings with gcc-10  (Vladis Dronov)  [3 files, -5/+5]

Certain warnings are emitted for powerpc code when building with a gcc-10 toolset:

  WARNING: modpost: vmlinux.o(.text.unlikely+0x377c): Section mismatch
  in reference from the function remove_pmd_table() to the function
  .meminit.text:split_kernel_mapping()
  The function remove_pmd_table() references
  the function __meminit split_kernel_mapping().
  This is often because remove_pmd_table lacks a __meminit
  annotation or the annotation of split_kernel_mapping is wrong.

Add the appropriate __init and __meminit annotations to make modpost not complain. In all cases there is just a single callsite from another __init or __meminit function:

  __meminit remove_pagetable() -> remove_pud_table() -> remove_pmd_table()
  __init prom_init() -> setup_secure_guest()
  __init xive_spapr_init() -> xive_spapr_disabled()

Signed-off-by: Vladis Dronov <vdronov@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200729133741.62789-1-vdronov@redhat.com

2020-07-29  powerpc/kexec_file: Enable early kernel OPAL calls  (Hari Bathini)  [2 files, -0/+36]

Kernels built with CONFIG_PPC_EARLY_DEBUG_OPAL enabled expect r8 & r9 to be filled with the OPAL base & entry addresses respectively. Setting these registers allows the kernel to perform OPAL calls before the device tree is parsed.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/159602303975.575379.5032301944162937479.stgit@hbathini

2020-07-29  powerpc/kexec_file: Fix kexec load failure with lack of memory hole  (Hari Bathini)  [1 file, -19/+14]

The kexec purgatory has to run in real mode. Only the first memory block may be accessible in real mode. And, unlike the case with the panic kernel, no memory is set aside for regular kexec load. Another thing to note is that the memory for crashkernel is reserved at an offset of 128MB. So, when crashkernel memory is reserved, the memory ranges to load kexec segments shrink further, as the generic code only looks for memblock free memory ranges, and in all likelihood only a tiny bit of memory from 0 to 128MB would be available to load kexec segments.

With kdump being used by default in general, kexec file load is likely to fail almost always. This can be fixed by changing the memory hole lookup logic for regular kexec to use the same method as kdump. This would mean that most kexec segments will overlap with the crashkernel memory region. That should still be ok, as the pages whose destination address isn't available while loading are placed in an intermediate location until a flush to the actual destination address happens during the kexec boot sequence.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Pingfan Liu <piliu@redhat.com>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/159602302326.575379.14038896654942043093.stgit@hbathini

2020-07-29  powerpc/kexec_file: Add appropriate regions for memory reserve map  (Hari Bathini)  [1 file, -5/+53]

While initrd, elfcorehdr and backup regions are already added to the reserve map, there are a few missing regions that need to be added to the memory reserve map. Add them here. And now that all the changes to load the panic kernel are in place, claim likewise.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Pingfan Liu <piliu@redhat.com>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/159602300473.575379.4218568032039284448.stgit@hbathini

2020-07-29  powerpc/kexec_file: Prepare elfcore header for crashing kernel  (Hari Bathini)  [4 files, -0/+234]

Prepare elf headers for the crashing kernel's core file using crash_prepare_elf64_headers() and pass on this info to the kdump kernel by updating its command line with the elfcorehdr parameter. Also, add the elfcorehdr location to the reserve map to avoid it being stomped on while booting.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Pingfan Liu <piliu@redhat.com>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
[mpe: Ensure cmdline is nul terminated]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/159602298855.575379.15819225623219909517.stgit@hbathini

2020-07-29  powerpc/kexec_file: Setup backup region for kdump kernel  (Hari Bathini)  [5 files, -7/+159]

Though the kdump kernel boots from the loaded address, the first 64KB of it is copied down to real 0. So, setup a backup region and let purgatory copy the first 64KB of the crashed kernel into this backup region before booting into the kdump kernel. Update the reserve map with the backup region and the crashed kernel's memory to prevent the kdump kernel from accidentally using that memory.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/159602294718.575379.16216507537038008623.stgit@hbathini

2020-07-29  powerpc/kexec_file: Restrict memory usage of kdump kernel  (Hari Bathini)  [1 file, -1/+385]

The kdump kernel, used for capturing the kernel core image, is supposed to use only specific memory regions to avoid corrupting the image to be captured. The regions are the crashkernel range - the memory reserved explicitly for the kdump kernel, the memory used for the tce-table, the OPAL region and the RTAS region as applicable.

Restrict kdump kernel memory to use only these regions by setting up the usable-memory DT property. Also, tell the kdump kernel to run at the loaded address by setting the magic word at 0x5c.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Pingfan Liu <piliu@redhat.com>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/159602284284.575379.6962016255404325493.stgit@hbathini

2020-07-29  powerpc/drmem: Make LMB walk a bit more flexible  (Hari Bathini)  [4 files, -44/+78]

Currently, numa & prom are the only users of the drmem LMB walk code. Loading kdump with kexec_file also needs to walk the drmem LMBs to setup the usable memory ranges for the kdump kernel. But there are a couple of issues in using the code as is. One, the walk_drmem_lmb() code is built into the .init section currently, while kexec_file needs it later. Two, there is no scope to pass data to the callback function for processing and/or erroring out on certain conditions.

Fix that by moving the drmem LMB walk code out of the .init section, adding scope to pass data to the callback function, and bailing out when an error is encountered in the callback function.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Pingfan Liu <piliu@redhat.com>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/159602282727.575379.3979857013827701828.stgit@hbathini

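After this change the walk interface plausibly looks like the following sketch; it matches the description above, but the exact prototype may differ:

  /* The callback gets the LMB, the usable-memory property cursor, and
   * the caller's data pointer; a non-zero return stops the walk early. */
  int walk_drmem_lmbs(struct device_node *dn, void *data,
                      int (*func)(struct drmem_lmb *lmb,
                                  const __be32 **usm, void *data));
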
2020-07-29  powerpc/kexec_file: Avoid stomping memory used by special regions  (Hari Bathini)  [5 files, -4/+539]

The crashkernel region could overlap with special memory regions like OPAL, RTAS, the TCE table & such. These regions are referred to as excluded memory ranges. Setup these ranges during image probe in order to avoid them while finding the buffer for different kdump segments. Override arch_kexec_locate_mem_hole() to locate a memory hole taking these ranges into account.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/159602281047.575379.6636807148335160795.stgit@hbathini

2020-07-29  powerpc/kexec_file: Add helper functions for getting memory ranges  (Hari Bathini)  [3 files, -1/+247]

In the kexec case, the kernel to be loaded uses the same memory layout as the running kernel. So, passing on the DT of the running kernel would be good enough.

But in the case of kdump, different memory ranges are needed to manage loading the kdump kernel, booting into it and exporting the elfcore of the crashing kernel. The ranges are: exclude memory ranges, usable memory ranges, reserved memory ranges and crash memory ranges.

Exclude memory ranges specify the list of memory ranges to avoid while loading kdump segments. Usable memory ranges list the memory ranges that could be used for booting the kdump kernel. Reserved memory ranges list the memory regions for the loading kernel's reserve map. Crash memory ranges list the memory ranges to be exported as the crashing kernel's elfcore.

Add helper functions for setting up the above mentioned memory ranges. These helpers make the subsequent changes easier to follow and make it easy to set up the different memory ranges listed above, as and when appropriate.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Pingfan Liu <piliu@redhat.com>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/159602279194.575379.8526552316948643550.stgit@hbathini

2020-07-29  powerpc/kexec_file: Mark PPC64 specific code  (Hari Bathini)  [7 files, -23/+105]

Some of the kexec_file_load code isn't PPC64 specific. Move PPC64 specific code from kexec/file_load.c to kexec/file_load_64.c. Also, rename purgatory/trampoline.S to purgatory/trampoline_64.S in the same spirit. No functional changes.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Pingfan Liu <piliu@redhat.com>
Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/159602276920.575379.10390965946438306388.stgit@hbathini

2020-07-29  kexec_file: Allow archs to handle special regions while locating memory hole  (Hari Bathini)  [2 files, -13/+32]

Some architectures may have special memory regions, within the given memory range, which can't be used for the buffer in a kexec segment. Implement a weak arch_kexec_locate_mem_hole() definition which arch code may override, to take care of special regions, while trying to locate a memory hole.

Also, add the missing declarations for arch overridable functions and drop the __weak descriptors in the declarations to avoid non-weak definitions from becoming weak.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Pingfan Liu <piliu@redhat.com>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Acked-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/159602273603.575379.17665852963340380839.stgit@hbathini

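Per the commit message, the weak default simply falls through to the generic hole-finding logic, so only architectures that override it change behaviour:

  int __weak arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
  {
          return kexec_locate_mem_hole(kbuf);
  }
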
2020-07-29  powerpc/configs: Add BLK_DEV_NVME to pseries_defconfig  (Anton Blanchard)  [1 file, -0/+1]

I've forgotten to manually enable NVME when building pseries kernels for machines with NVME adapters. Since it's a reasonably common configuration, enable it by default.

Signed-off-by: Anton Blanchard <anton@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200729040828.2312966-1-anton@ozlabs.org

2020-07-29  powerpc/64s: Move HMI IRQ stat from percpu variable to paca  (Mahesh Salgaonkar)  [5 files, -6/+9]

With the proposed change in the percpu bootmem allocator to use page mapping [1], the percpu first chunk memory area can come from vmalloc ranges. This makes the HMI (Hypervisor Maintenance Interrupt) handler crash the kernel whenever a percpu variable is accessed in real mode. This patch fixes the issue by moving the HMI IRQ stat into the paca for safe access in real mode.

[1] https://lore.kernel.org/linuxppc-dev/20200608070904.387440-1-aneesh.kumar@linux.ibm.com/

Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/159290806973.3642154.5244613424529764050.stgit@jupiter

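The essence of the fix: the counter lives in the paca, which is always mapped and therefore realmode-safe, instead of percpu memory, which may now be vmalloc'ed. A sketch, with the field name assumed:

  struct paca_struct {
          /* ... */
          u64 hmi_irqs;   /* HMI count; safe to bump in real mode */
  };

  /* in the realmode HMI handler */
  local_paca->hmi_irqs++;
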
2020-07-29  powerpc/fsl/dts: add missing P4080DS I2C devices  (David Lamparter)  [1 file, -2/+41]

This just adds the zl2006 voltage regulators / power monitors and the onboard I2C eeproms. The ICS9FG108 clock chip doesn't seem to have a driver, so it is left in the DTS as a comment. And for good measure, the SPD eeproms are tagged as such.

Signed-off-by: David Lamparter <equinox@diac24.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20180920230422.GK487685@eidolon.nox.tf

2020-07-29  ocxl: Address kernel doc errors & warnings  (Alastair D'Silva)  [3 files, -74/+55]

This patch addresses warnings and errors from the kernel doc scripts for the OpenCAPI driver. It also makes minor tweaks to make the docs more consistent.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Acked-by: Andrew Donnellan <ajd@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200415012343.919255-3-alastair@d-silva.org

2020-07-29  ocxl: Remove unnecessary externs  (Alastair D'Silva)  [2 files, -24/+22]

Function declarations don't need externs; remove the existing ones so they are consistent with newer code.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Acked-by: Andrew Donnellan <ajd@linux.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200415012343.919255-2-alastair@d-silva.org

2020-07-29  selftests/powerpc: Return skip code for spectre_v2  (Thadeu Lima de Souza Cascardo)  [1 file, -0/+10]

When running under older versions of qemu, or under newer versions with old machine types, some security features will not be reported to the guest. This will lead the guest OS to consider itself Vulnerable to spectre_v2.

So, the spectre_v2 test fails in such cases when the host is mitigated and mispredictions cannot be detected as expected by the test.

Make it return the skip code instead, for this particular case. We don't want to miss the case when the test fails and the system reports as mitigated or not affected. But it is not a problem to miss failures when the system reports as Vulnerable.

Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200728155039.401445-1-cascardo@canonical.com

2020-07-29  powerpc/test_emulate_step: Add testcases for divde[.] and divdeu[.] instructions  (Balamuruhan S)  [1 file, -0/+156]

Add testcases for the divde, divde., divdeu and divdeu. emulated instructions to cover a few scenarios:

- with the same dividend and divisor, giving an undefined RT, for divdeu[.]
- with divide by zero, giving an undefined RT, for both divde[.] and divdeu[.]
- with a negative dividend, to cover -|divisor| < r <= 0 when the dividend is negative, for divde[.]
- the normal case with a proper dividend and divisor, for both divde[.] and divdeu[.]

Signed-off-by: Balamuruhan S <bala24@linux.ibm.com>
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200728130308.1790982-4-bala24@linux.ibm.com

2020-07-29  powerpc/sstep: Add support for divde[.] and divdeu[.] instructions  (Balamuruhan S)  [1 file, -1/+12]

This patch adds emulation support for the divde and divdeu instructions:

- Divide Doubleword Extended (divde[.])
- Divide Doubleword Extended Unsigned (divdeu[.])

Signed-off-by: Balamuruhan S <bala24@linux.ibm.com>
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200728130308.1790982-3-bala24@linux.ibm.com

2020-07-29  powerpc/ppc-opcode: Add divde and divdeu opcodes  (Balamuruhan S)  [1 file, -0/+6]

Include instruction opcodes for divde and divdeu as macros.

Signed-off-by: Balamuruhan S <bala24@linux.ibm.com>
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200728130308.1790982-2-bala24@linux.ibm.com

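These would be XO-form encodings under primary opcode 31, with extended opcodes 425 (divde) and 393 (divdeu); treat the exact values below as assumptions to check against the ISA rather than the literal diff:

  /* 0x7c000000 = primary opcode 31; the 9-bit XO field sits shifted
   * left by 1, so 425 << 1 = 0x352 and 393 << 1 = 0x312 (assumed). */
  #define PPC_INST_DIVDE   0x7c000352
  #define PPC_INST_DIVDEU  0x7c000312
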
2020-07-29  selftests/powerpc: Fix CPU affinity for child process  (Harish)  [1 file, -5/+16]

On systems with a large number of cpus, the test fails trying to set affinity by calling sched_setaffinity() with too small a size for the affinity mask. This patch fixes it by making sure that the size of the allocated affinity mask is dependent on the number of CPUs as reported by get_nprocs().

Fixes: 00b7ec5c9cf3 ("selftests/powerpc: Import Anton's context_switch2 benchmark")
Reported-by: Shirisha Ganta <shiganta@in.ibm.com>
Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Harish <harish@linux.ibm.com>
Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Reviewed-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200609081423.529664-1-harish@linux.ibm.com

2020-07-29  powerpc/powernv/sriov: Remove unused but set variable 'phb'  (Wei Yongjun)  [1 file, -2/+0]

GCC reports the following warning:

  arch/powerpc/platforms/powernv/pci-sriov.c:602:25: warning:
   variable 'phb' set but not used [-Wunused-but-set-variable]
    602 |         struct pnv_phb *phb;
        |                         ^~~

The variable is not used, so remove it.

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Acked-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200727171112.2781-1-weiyongjun1@huawei.com

2020-07-29  powerpc: use for_each_child_of_node() macro  (Qinglang Miao)  [4 files, -9/+6]

Use the for_each_child_of_node() macro instead of open coding it.

Signed-off-by: Qinglang Miao <miaoqinglang@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200728022807.87815-1-miaoqinglang@huawei.com

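For context, the macro expands to the same of_get_next_child() loop people tend to open code, with the node reference counting handled for you:

  /* open-coded */
  for (child = of_get_next_child(parent, NULL); child != NULL;
       child = of_get_next_child(parent, child)) {
          /* ... */
  }

  /* becomes */
  for_each_child_of_node(parent, child) {
          /* ... */
  }
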
2020-07-29  powerpc/build: vdso linker warning for orphan sections  (Nicholas Piggin)  [4 files, -3/+5]

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200303012748.4190929-1-npiggin@gmail.com

2020-07-29  powerpc: Use fallthrough pseudo-keyword  (Gustavo A. R. Silva)  [5 files, -8/+8]

Replace the existing /* fall through */ comments and their variants with the new pseudo-keyword macro fallthrough [1]. Also, remove unnecessary fall-through markings when it is the case.

[1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200727224201.GA10133@embeddedor

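The mechanical shape of the conversion, with illustrative case labels:

  switch (opcode) {
  case OP_LOAD:
          prepare_load();
          fallthrough;    /* replaces the old fall-through comment */
  case OP_STORE:
          finish_access();
          break;
  }
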
2020-07-29  powerpc/book3s64/radix: Add kernel command line option to disable radix GTSE  (Aneesh Kumar K.V)  [6 files, -6/+22]

This adds a kernel command line option that can be used to disable GTSE support. Disabling GTSE implies the kernel will make hcalls to invalidate TLB entries.

This was done so that we can do VM migration between configs that enable/disable GTSE support via the hypervisor. To migrate a VM from a system that supports GTSE to a system that doesn't, we can boot the guest with radix_hcall_invalidate=on, thereby forcing the guest to use hcalls for TLB invalidates.

The check for hcall availability is done in pSeries_setup_arch so that the panic message appears on the console. This should only happen on a hypervisor that doesn't force the guest to hash translation even though it can't handle the radix GTSE=0 request via CAS. With radix_hcall_invalidate=on, if the hypervisor doesn't support the hcall_rpt_invalidate hcall it should force the LPAR to hash translation.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Tested-by: Bharata B Rao <bharata@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200727085908.420806-1-aneesh.kumar@linux.ibm.com

2020-07-29  powerpc/kvm/cma: Improve kernel log during boot  (Aneesh Kumar K.V)  [1 file, -1/+1]

The current kernel gives:

  [    0.000000] cma: Reserved 26224 MiB at 0x0000007959000000
  [    0.000000] hugetlb_cma: reserve 65536 MiB, up to 16384 MiB per node
  [    0.000000] cma: Reserved 16384 MiB at 0x0000001800000000

With the fix:

  [    0.000000] kvm_cma_reserve: reserving 26214 MiB for global area
  [    0.000000] cma: Reserved 26224 MiB at 0x0000007959000000
  [    0.000000] hugetlb_cma: reserve 65536 MiB, up to 16384 MiB per node
  [    0.000000] cma: Reserved 16384 MiB at 0x0000001800000000

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200713150749.25245-2-aneesh.kumar@linux.ibm.com

2020-07-29  powerpc/hugetlb/cma: Allocate gigantic hugetlb pages using CMA  (Aneesh Kumar K.V)  [3 files, -0/+28]

Commit cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma") added support for allocating gigantic hugepages using CMA. This patch enables the same for powerpc.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200713150749.25245-1-aneesh.kumar@linux.ibm.com

2020-07-29  powerpc/xmon: Use `dcbf` inplace of `dcbi` instruction for 64bit Book3S  (Balamuruhan S)  [1 file, -1/+1]

The Data Cache Block Invalidate (dcbi) instruction was implemented back in PowerPC architecture version 2.03. But as per the Power Processor User's Manual it is obsolete and not supported by the POWER8/POWER9 cores. Attempting to use this illegal instruction results in a hypervisor emulation assistance interrupt. So, ifdef out the option `i` in xmon for 64bit Book3S:

  0:mon> fi
  cpu 0x0: Vector: 700 (Program Check) at [c000000003be74a0]
      pc: c000000000102030: cacheflush+0x180/0x1a0
      lr: c000000000101f3c: cacheflush+0x8c/0x1a0
      sp: c000000003be7730
     msr: 8000000000081033
    current = 0xc0000000035e5c00
    paca    = 0xc000000001910000   irqmask: 0x03   irq_happened: 0x01
      pid   = 1025, comm = bash
  Linux version 5.6.0-rc5-g5aa19adac (root@ltc-wspoon6) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #1 SMP Tue Mar 10 04:38:41 CDT 2020
  cpu 0x0: Exception 700 (Program Check) in xmon, returning to main loop
  [c000000003be7c50] c00000000084abb0 __handle_sysrq+0xf0/0x2a0
  [c000000003be7d00] c00000000084b3c0 write_sysrq_trigger+0xb0/0xe0
  [c000000003be7d30] c0000000004d1edc proc_reg_write+0x8c/0x130
  [c000000003be7d60] c00000000040dc7c __vfs_write+0x3c/0x70
  [c000000003be7d80] c000000000410e70 vfs_write+0xd0/0x210
  [c000000003be7dd0] c00000000041126c ksys_write+0xdc/0x130
  [c000000003be7e20] c00000000000b9d0 system_call+0x5c/0x68
  --- Exception: c01 (System Call) at 00007fffa345e420
  SP (7ffff0b08ab0) is in userspace

Signed-off-by: Balamuruhan S <bala24@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200330075954.538773-1-bala24@linux.ibm.com

2020-07-29  powerpc: Drop old comment about CONFIG_POWER  (Michael Ellerman)  [1 file, -1/+0]

There's a comment in time.h referring to CONFIG_POWER, which doesn't exist. That confuses scripts/checkkconfigsymbols.py.

Presumably the comment was referring to a CONFIG_POWER vs CONFIG_PPC, in which case for CONFIG_POWER we would #define __USE_RTC to 1. But instead we have CONFIG_PPC_BOOK3S_601, and these days we have IS_ENABLED(). So the comment is no longer relevant, drop it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724131728.1643966-9-mpe@ellerman.id.au

2020-07-29  powerpc/kvm: Use correct CONFIG symbol in comment  (Michael Ellerman)  [1 file, -1/+1]

This comment refers to the non-existent CONFIG_PPC_BOOK3S_XX, which confuses scripts/checkkconfigsymbols.py. Change it to use the correct symbol.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724131728.1643966-8-mpe@ellerman.id.au