path: root/arch/powerpc
2019-11-13  powerpc/spufs: remove set but not used variable 'ctx'  [YueHaibing; 1 file, -2/+0]
  arch/powerpc/platforms/cell/spufs/inode.c:201:22: warning: variable ctx set but not used [-Wunused-but-set-variable]

It is not used since commit 67cba9fd6456 ("move spu_forget() into spufs_rmdir()").

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191023134423.15052-1-yuehaibing@huawei.com
2019-11-13  powerpc/powernv/ioda: using kfree_rcu() to simplify the code  [YueHaibing; 1 file, -9/+1]
The callback function of call_rcu() just calls a kfree(), so we can use kfree_rcu() instead of call_rcu() + callback function.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190711141818.18044-1-yuehaibing@huawei.com
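As a rough illustration of this kind of cleanup (the structure and field names below are made up, not the actual IODA code):

  struct foo {
          int data;
          struct rcu_head rcu;
  };

  /* Before: an explicit RCU callback whose only job is the kfree() */
  static void foo_free_rcu(struct rcu_head *head)
  {
          kfree(container_of(head, struct foo, rcu));
  }
  /* ... call_rcu(&f->rcu, foo_free_rcu); */

  /* After: let the RCU core free the object once the grace period ends */
  /* ... kfree_rcu(f, rcu); */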
2019-11-13  powerpc/powernv: Make some symbols static  [YueHaibing; 3 files, -4/+4]
Fix sparse warnings:

  arch/powerpc/platforms/powernv/opal-psr.c:20:1: warning: symbol 'psr_mutex' was not declared. Should it be static?
  arch/powerpc/platforms/powernv/opal-psr.c:27:3: warning: symbol 'psr_attrs' was not declared. Should it be static?
  arch/powerpc/platforms/powernv/opal-powercap.c:20:1: warning: symbol 'powercap_mutex' was not declared. Should it be static?
  arch/powerpc/platforms/powernv/opal-sensor-groups.c:20:1: warning: symbol 'sg_mutex' was not declared. Should it be static?

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190702131733.44100-1-yuehaibing@huawei.com
2019-11-13  powerpc/configs: remove obsolete CONFIG_INET_XFRM_MODE_* and CONFIG_INET6_XFRM_MODE_*  [YueHaibing; 43 files, -126/+0]
These Kconfig options have been removed in commit 4c145dce2601 ("xfrm: make xfrm modes builtin"), so there is no point in keeping them in the defconfigs any longer.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
[mpe: Extract from cross arch patch]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190612071901.21736-1-yuehaibing@huawei.com
2019-11-13  powerpc/pseries: Fix platform_no_drv_owner.cocci warnings  [YueHaibing; 1 file, -1/+0]
Remove .owner field if calls are used which set it automatically.

Generated by: scripts/coccinelle/api/platform_no_drv_owner.cocci

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190218133950.95225-1-yuehaibing@huawei.com
2019-11-13  powerpc/pseries: Drop pointless static qualifier in vpa_debugfs_init()  [YueHaibing; 1 file, -1/+1]
There is no need for the 'struct dentry *vpa_dir' variable to be static, since a new value is always assigned to it before it is used.

Fixes: c6c26fb55e8e ("powerpc/pseries: Export raw per-CPU VPA data via debugfs")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190218125644.87448-1-yuehaibing@huawei.com
2019-11-13  powerpc/powernv/npu: Fix debugfs_simple_attr.cocci warnings  [YueHaibing; 1 file, -4/+4]
Use DEFINE_DEBUGFS_ATTRIBUTE rather than DEFINE_SIMPLE_ATTRIBUTE for debugfs files.

Semantic patch information:
Rationale: DEFINE_SIMPLE_ATTRIBUTE + debugfs_create_file() imposes some significant overhead as compared to DEFINE_DEBUGFS_ATTRIBUTE + debugfs_create_file_unsafe().

Generated by: scripts/coccinelle/api/debugfs/debugfs_simple_attr.cocci

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1545705876-63132-1-git-send-email-yuehaibing@huawei.com
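A minimal sketch of the pattern this Coccinelle rule rewrites (the attribute name, getter/setter and parent dentry below are hypothetical, not the NPU code):

  static int foo_get(void *data, u64 *val) { *val = *(u64 *)data; return 0; }
  static int foo_set(void *data, u64 val)  { *(u64 *)data = val;  return 0; }

  /* Before */
  DEFINE_SIMPLE_ATTRIBUTE(fops_foo, foo_get, foo_set, "%llu\n");
  /* debugfs_create_file("foo", 0600, parent, &foo_val, &fops_foo); */

  /* After: skips the lifetime-proxying wrapper that debugfs_create_file()
   * adds around simple attributes */
  DEFINE_DEBUGFS_ATTRIBUTE(fops_foo, foo_get, foo_set, "%llu\n");
  /* debugfs_create_file_unsafe("foo", 0600, parent, &foo_val, &fops_foo); */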
2019-11-13  powerpc/64s: Fix debugfs_simple_attr.cocci warnings  [YueHaibing; 1 file, -10/+14]
Use DEFINE_DEBUGFS_ATTRIBUTE rather than DEFINE_SIMPLE_ATTRIBUTE for debugfs files.

Semantic patch information:
Rationale: DEFINE_SIMPLE_ATTRIBUTE + debugfs_create_file() imposes some significant overhead as compared to DEFINE_DEBUGFS_ATTRIBUTE + debugfs_create_file_unsafe().

Generated by: scripts/coccinelle/api/debugfs/debugfs_simple_attr.cocci

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1543498518-107601-1-git-send-email-yuehaibing@huawei.com
2019-11-13  powerpc/pseries: Use correct event modifier in rtas_parse_epow_errlog()  [YueHaibing; 1 file, -1/+1]
rtas_parse_epow_errlog() should pass 'modifier' to handle_system_shutdown(), because the event modifier only uses the bottom 4 bits.

Reviewed-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191023134838.21280-1-yuehaibing@huawei.com
2019-11-13  powerpc/watchpoint: Don't ignore extraneous exceptions blindly  [Ravi Bangoria; 1 file, -21/+31]
On powerpc, the watchpoint match range is double-word granular. On a watchpoint hit, DAR is set to the first byte of overlap between the actual access and the watched range, so it's quite possible that DAR does not point inside the user-specified range.

For example, say the user creates a watchpoint with address range 0x1004 to 0x1007, so the hardware is configured to watch from 0x1000 to 0x1007. If there is a 4-byte access from 0x1002 to 0x1005, DAR will point to 0x1002 and the interrupt handler considers it extraneous, but it actually isn't, because part of the access belongs to what the user asked for.

Instead of blindly ignoring the exception, get the actual address range by analysing the instruction, and ignore the exception only if the actual range does not overlap with the user-specified range.

Note: the behaviour is unchanged for 8xx.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191017093204.7511-5-ravi.bangoria@linux.ibm.com
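A minimal sketch of the overlap test being described (the helper name and types are illustrative):

  #include <linux/types.h>

  /* True if the access [ea, ea + access_len - 1] overlaps the user range
   * [wp_start, wp_end], i.e. the hit is not extraneous. */
  static bool wp_ranges_overlap(unsigned long ea, int access_len,
                                unsigned long wp_start, unsigned long wp_end)
  {
          unsigned long access_end = ea + access_len - 1;

          return ea <= wp_end && wp_start <= access_end;
  }

  /* Example from above: ea = 0x1002, access_len = 4 gives access_end = 0x1005;
   * with the user range 0x1004..0x1007 the ranges overlap, so the exception
   * must not be ignored. */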
2019-11-13  powerpc/watchpoint: Fix ptrace code that mucks around with address/len  [Ravi Bangoria; 2 files, -8/+5]
ptrace_set_debugreg() does not consider the new length while overwriting the watchpoint. Fix that.

ppc_set_hwdebug() aligns the watchpoint address to a doubleword boundary but does not change the length. If the address range crosses a doubleword boundary and the length is less than 8, we will lose samples from the second doubleword. So fix that as well.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191017093204.7511-4-ravi.bangoria@linux.ibm.com
2019-11-13  powerpc/watchpoint: Fix length calculation for unaligned target  [Ravi Bangoria; 5 files, -23/+56]
The watchpoint match range is always doubleword (8 bytes) aligned on powerpc. If the given range crosses a doubleword boundary, we need to increase the length such that the next doubleword also gets covered. For example:

            address    len = 6 bytes
                  |=========.
     |------------v--|------v--------|
     | | | | | | | | | | | | | | | | |
     |---------------|---------------|
      <---8 bytes--->

In such a case, the current code configures the hardware as:

  start_addr = address & ~HW_BREAKPOINT_ALIGN
  len = 8 bytes

and thus reads/writes in the last 4 bytes of the given range are ignored. Fix this by including the next doubleword in the length.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191017093204.7511-3-ravi.bangoria@linux.ibm.com
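A small arithmetic sketch of the rounding described above (variable and function names are illustrative, not the actual patch):

  /* Length to program into the hardware so the whole user range is covered. */
  static unsigned long hw_watch_len(unsigned long addr, unsigned long len)
  {
          const unsigned long dw = 8;   /* doubleword size */
          unsigned long start = addr & ~(dw - 1);
          unsigned long end = (addr + len + dw - 1) & ~(dw - 1);

          return end - start;   /* 16 for the 6-byte example above that crosses a boundary */
  }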
2019-11-13  powerpc/watchpoint: Introduce macros for watchpoint length  [Ravi Bangoria; 4 files, -6/+9]
We are hardcoding the length everywhere in the watchpoint code. Introduce macros for the length and use them.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191017093204.7511-2-ravi.bangoria@linux.ibm.com
2019-11-13  powerpc/security: Fix wrong message when RFI Flush is disabled  [Gustavo L. F. Walbon; 1 file, -10/+6]
The issue was that sysfs showed a "Mitigation" message regardless of the state of "RFI Flush", but it should show "Vulnerable" when RFI Flush is disabled.

If you have the "L1D private" feature enabled but not "RFI Flush" you are vulnerable to meltdown attacks. "RFI Flush" is the key feature to mitigate meltdown regardless of the "L1D private" state. SEC_FTR_L1D_THREAD_PRIV is a feature for Power9 only.

So the message should be as the truth table shows:

  CPU | L1D private | RFI Flush | sysfs
  ----|-------------|-----------|----------------------------------------------
  P9  | False       | False     | Vulnerable
  P9  | False       | True      | Mitigation: RFI Flush
  P9  | True        | False     | Vulnerable: L1D private per thread
  P9  | True        | True      | Mitigation: RFI Flush, L1D private per thread
  P8  | False       | False     | Vulnerable
  P8  | False       | True      | Mitigation: RFI Flush

Output before this fix:

  # cat /sys/devices/system/cpu/vulnerabilities/meltdown
  Mitigation: RFI Flush, L1D private per thread
  # echo 0 > /sys/kernel/debug/powerpc/rfi_flush
  # cat /sys/devices/system/cpu/vulnerabilities/meltdown
  Mitigation: L1D private per thread

Output after the fix:

  # cat /sys/devices/system/cpu/vulnerabilities/meltdown
  Mitigation: RFI Flush, L1D private per thread
  # echo 0 > /sys/kernel/debug/powerpc/rfi_flush
  # cat /sys/devices/system/cpu/vulnerabilities/meltdown
  Vulnerable: L1D private per thread

Signed-off-by: Gustavo L. F. Walbon <gwalbon@linux.ibm.com>
Signed-off-by: Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190502210907.42375-1-gwalbon@linux.ibm.com
2019-11-13  powerpc/crypto: Add cond_resched() in crc-vpmsum self-test  [Chris Smart; 1 file, -0/+1]
The stress test for the vpmsum implementations executes a long for loop in the kernel. This blocks the scheduler, which prevents other tasks from running, resulting in a warning. This fix adds a call to cond_resched() at the end of each loop iteration, which allows the scheduler to run other tasks as required.

Signed-off-by: Chris Smart <chris.smart@humanservices.gov.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191103233356.5472-1-chris.smart@humanservices.gov.au
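A rough sketch of the pattern (the loop body here is a stand-in, not the actual self-test):

  #include <linux/sched.h>

  /* Run 'iterations' rounds of 'one_round', yielding between rounds. */
  static void crc_selftest_loop(void (*one_round)(void), unsigned long iterations)
  {
          unsigned long i;

          for (i = 0; i < iterations; i++) {
                  one_round();      /* one CRC computation/verification */
                  cond_resched();   /* let the scheduler run other tasks */
          }
  }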
2019-11-13  powerpc/pseries/cmm: Simulation mode  [David Hildenbrand; 1 file, -8/+30]
Let's allow testing the implementation without needing HW support. When "simulate=1" is specified when loading the module, we bypass all HW checks and HW calls. The sysfs file "simulate_loan_target_kb" can be used to simulate HW requests.

The simulation mode can be activated using:

  modprobe cmm debug=1 simulate=1

And the requested loan target can be changed using:

  echo X > /sys/devices/system/cmm/cmm0/simulate_loan_target_kb

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191031142933.10779-11-david@redhat.com
2019-11-13  powerpc/pseries/cmm: Switch to balloon_page_alloc()  [David Hildenbrand; 1 file, -2/+1]
balloon_page_alloc() will use GFP_HIGHUSER_MOVABLE in case we have CONFIG_BALLOON_COMPACTION. This is now possible, as balloon pages are movable with CONFIG_BALLOON_COMPACTION. Without CONFIG_BALLOON_COMPACTION, GFP_HIGHUSER is used.

Note that apart from that, balloon_page_alloc() uses the following flags:

  __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN

And current code used:

  GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC

GFP_HIGHUSER/GFP_HIGHUSER_MOVABLE include:

  __GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL | __GFP_HIGHMEM

GFP_NOIO is __GFP_RECLAIM.

With CONFIG_BALLOON_COMPACTION, we essentially add:

  __GFP_IO | __GFP_FS | __GFP_HARDWALL | __GFP_HIGHMEM | __GFP_MOVABLE

Without CONFIG_BALLOON_COMPACTION, we essentially add:

  __GFP_IO | __GFP_FS | __GFP_HARDWALL | __GFP_HIGHMEM

I assume this is fine, as this is what all other balloon compaction users use. If it turns out to be a problem, we could add __GFP_MOVABLE manually if we have CONFIG_BALLOON_COMPACTION.

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191031142933.10779-10-david@redhat.com
2019-11-13  powerpc/pseries/cmm: Implement balloon compaction  [David Hildenbrand; 2 files, -14/+120]
We can now get rid of the cmm_lock and completely rely on the balloon compaction internals, which now also manage the page list and the lock. Inflated/"loaned" pages are now movable. Memory blocks that contain such pages can get offlined. Also, all such pages will be marked PageOffline() and can therefore be excluded in memory dumps using recent versions of makedumpfile.

Don't switch to balloon_page_alloc() yet (due to the GFP_NOIO). Will do that separately to discuss this change in detail.

Signed-off-by: David Hildenbrand <david@redhat.com>
[mpe: Add isolated_pages-- in cmm_migratepage() as suggested by David]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191031142933.10779-9-david@redhat.com
2019-11-13  powerpc/pseries/cmm: Convert loaned_pages to an atomic_long_t  [David Hildenbrand; 1 file, -16/+19]
When switching to balloon compaction, we want to drop the cmm_lock and completely rely on the balloon compaction list lock internally. loaned_pages is currently protected under the cmm_lock.

Note: Right now cmm_alloc_pages() and cmm_free_pages() can be called at the same time, e.g., via the thread and a concurrent OOM notifier.

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191031142933.10779-8-david@redhat.com
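A minimal sketch of what such a conversion looks like (simplified, not the actual cmm.c code):

  #include <linux/atomic.h>

  static atomic_long_t loaned_pages = ATOMIC_LONG_INIT(0);

  static void note_page_loaned(void)
  {
          atomic_long_inc(&loaned_pages);   /* counter no longer needs the cmm_lock */
  }

  static long loaned_pages_count(void)
  {
          return atomic_long_read(&loaned_pages);
  }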
2019-11-13  powerpc/pseries/cmm: Rip out memory isolate notifier  [David Hildenbrand; 1 file, -96/+1]
The memory isolate notifier was added to allow offlining memory blocks that contain inflated/"loaned" pages. We can achieve the same using the balloon compaction framework, so get rid of the memory isolate notifier.

Also, we can get rid of cmm_mem_going_offline(), as we will never reach that code path now when we have allocated memory in the balloon (allocated pages are unmovable and will no longer be special-cased using the memory isolation notifier).

Leave the memory notifier in place, so we can still back off in case memory gets offlined.

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191031142933.10779-7-david@redhat.com
2019-11-13  powerpc/pseries/cmm: Use adjust_managed_page_count() instead of totalram_pages_*  [David Hildenbrand; 1 file, -3/+3]
adjust_managed_page_count() performs a totalram_pages_add(), but also adjusts the managed pages of the zone. Let's use that instead, similar to virtio-balloon. Use it before freeing a page.

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191031142933.10779-6-david@redhat.com
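A hedged sketch of the before/after pattern (the surrounding function is illustrative, not the actual driver code):

  #include <linux/mm.h>

  /* Return a previously "loaned" page to the buddy allocator. */
  static void cmm_return_page_sketch(struct page *page)
  {
          /* Before: totalram_pages_inc();  -- leaves the zone's managed count stale */
          adjust_managed_page_count(page, 1);   /* updates totalram and the zone's managed pages */
          __free_page(page);
  }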
2019-11-13  powerpc/pseries/cmm: Drop page array  [David Hildenbrand; 1 file, -127/+36]
We can simply store the pages in a list (page->lru), no need for a separate data structure (+ complicated handling). This is how most other balloon drivers store allocated pages without additional tracking data.

For the notifiers, use page_to_pfn() to check if a page is in the applicable range. Use page_to_phys() in plpar_page_set_loaned() and plpar_page_set_active() (I assume due to the __pa() that's the right thing to do).

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191031142933.10779-5-david@redhat.com
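A simplified sketch of tracking balloon pages via page->lru as described above (the list name and range check are hypothetical):

  #include <linux/list.h>
  #include <linux/mm.h>

  static LIST_HEAD(loaned_page_list);   /* hypothetical list of inflated pages */

  static void track_loaned_page(struct page *page)
  {
          list_add(&page->lru, &loaned_page_list);   /* no separate page array needed */
  }

  /* Memory notifier check: is this tracked page inside [start_pfn, end_pfn)? */
  static bool page_in_notifier_range(struct page *page, unsigned long start_pfn,
                                     unsigned long end_pfn)
  {
          unsigned long pfn = page_to_pfn(page);

          return pfn >= start_pfn && pfn < end_pfn;
  }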
2019-11-13  powerpc/pseries/cmm: Cleanup rc handling in cmm_init()  [David Hildenbrand; 1 file, -4/+3]
No need to initialize rc. Also, let's return 0 directly when succeeding.

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191031142933.10779-4-david@redhat.com
2019-11-13  powerpc/pseries/cmm: Report errors when registering notifiers fails  [David Hildenbrand; 1 file, -2/+6]
If we don't set the rc, we will return "0", making it look like we succeeded.

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191031142933.10779-3-david@redhat.com
2019-11-13  powerpc/pseries/cmm: Implement release() function for sysfs device  [David Hildenbrand; 1 file, -0/+5]
When unloading the module, one gets:

  ------------[ cut here ]------------
  Device 'cmm0' does not have a release() function, it is broken and must be fixed. See Documentation/kobject.txt.
  WARNING: CPU: 0 PID: 19308 at drivers/base/core.c:1244 .device_release+0xcc/0xf0
  ...

We only have one static fake device. There is nothing to do when releasing the device (via cmm_exit()).

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191031142933.10779-2-david@redhat.com
2019-11-13  powerpc/pseries: Enable support for ibm,drc-info property  [Tyrel Datwyler; 1 file, -1/+1]
Advertise client support for the PAPR architected ibm,drc-info device tree property during CAS handshake.

Fixes: c7a3275e0f9e ("powerpc/pseries: Revert support for ibm,drc-info devtree property")
Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1573449697-5448-11-git-send-email-tyreld@linux.ibm.com
2019-11-13  powerpc/pseries: Add cpu DLPAR support for drc-info property  [Tyrel Datwyler; 1 file, -15/+112]
Older firmwares provided information about Dynamic Reconfig Connectors (DRC) through several device tree properties, namely ibm,drc-types, ibm,drc-indexes, ibm,drc-names, and ibm,drc-power-domains. New firmwares have the ability to present this same information in a much condensed format through a device tree property called ibm,drc-info.

The existing cpu DLPAR hotplug code only understands the older DRC property format when validating the drc-index of a cpu during a hotplug add. This updates those code paths to use the ibm,drc-info property, when present, for validation instead.

Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1573449697-5448-4-git-send-email-tyreld@linux.ibm.com
2019-11-13  powerpc/pseries: Fix drc-info mappings of logical cpus to drc-index  [Tyrel Datwyler; 1 file, -13/+10]
There are a couple of subtle errors in the mapping between cpu-ids and a cpu's associated drc-index when using the new ibm,drc-info property.

The first is that, while drc-info may have been a supported firmware feature at boot, it is possible we have migrated to a CEC with older firmware that doesn't support the ibm,drc-info property. In that case the device tree would have been updated after migration to remove the ibm,drc-info property and replace it with the older style ibm,drc-* properties for types, indexes, names, and power-domains. PAPR even goes as far as dictating that if we advertise support for drc-info we are capable of supporting either property type at runtime.

The second is that the first value of the ibm,drc-info property is the int-encoded count of drc-info entries. As such, the "value" returned by of_prop_next_u32() is pointing at that count, and not at the first element of the first drc-info entry as is expected by the of_read_drc_info_cell() helper.

Fix the first by ignoring the DRC-INFO firmware feature and instead testing directly for ibm,drc-info, then falling back to the old style ibm,drc-indexes in the case it doesn't exist. Fix the second by incrementing value to the next element prior to parsing drc-info entries.

Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1573449697-5448-3-git-send-email-tyreld@linux.ibm.com
2019-11-13  powerpc/pseries: Fix bad drc_index_start value parsing of drc-info entry  [Tyrel Datwyler; 1 file, -5/+3]
The ibm,drc-info property is an array property that contains drc-info entries such that each entry is made up of 2 string-encoded elements followed by 5 int-encoded elements. The of_read_drc_info_cell() helper contains comments that correctly name the expected elements and their encoding.

However, the usage of of_prop_next_string() and of_prop_next_u32() introduced a subtle skipping of the first u32. This is a result of of_prop_next_string() returning a pointer to the next property value, which is not a string but actually a (__be32 *). As a result, the following call to of_prop_next_u32() passes over the current int-encoded value and wrongly stores the next one.

Simply endian-swap the current value in place after reading the first two string values. The remaining int-encoded values can then be read correctly using of_prop_next_u32().

Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1573449697-5448-2-git-send-email-tyreld@linux.ibm.com
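A hedged sketch of the parsing fix being described (pointer and helper names are illustrative):

  #include <linux/of.h>

  /* of_prop_next_string() returns a pointer just past the string it read;
   * in a drc-info entry that pointer already sits on an int-encoded
   * (__be32) cell, not on another string. */
  static u32 read_current_be32_cell(const void *value)
  {
          const __be32 *cell = value;

          /* Endian-swap the cell we are already pointing at; calling
           * of_prop_next_u32() here would skip over it. */
          return be32_to_cpu(*cell);
  }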
2019-11-13  Merge branch 'topic/secureboot' into next  [Michael Ellerman; 14 files, -1/+640]
Merge the secureboot support, as well as the IMA changes needed to support it.

From Nayna's cover letter:

In order to verify the OS kernel on PowerNV systems, secure boot requires X.509 certificates trusted by the platform. These are stored in secure variables controlled by OPAL, called OPAL secure variables. In order to enable users to manage the keys, the secure variables need to be exposed to userspace.

OPAL provides the runtime services for the kernel to be able to access the secure variables. This patchset defines the kernel interface for the OPAL APIs. These APIs are used by the hooks, which load these variables to the keyring and expose them to the userspace for reading/writing.

Overall, this patchset adds the following support:

  * expose secure variables to the kernel via OPAL Runtime API interface
  * expose secure variables to the userspace via kernel sysfs interface
  * load kernel verification and revocation keys to .platform and .blacklist keyring respectively

The secure variables can be read/written using simple linux utilities cat/hexdump. For example, the path to the secure variables is /sys/firmware/secvar/vars, and each secure variable is listed as a directory:

  $ ls -l
  total 0
  drwxr-xr-x. 2 root root 0 Aug 20 21:20 db
  drwxr-xr-x. 2 root root 0 Aug 20 21:20 KEK
  drwxr-xr-x. 2 root root 0 Aug 20 21:20 PK

The attributes of each of the secure variables are (for example: PK):

  $ ls -l
  total 0
  -r--r--r--. 1 root root  4096 Oct 1 15:10 data
  -r--r--r--. 1 root root 65536 Oct 1 15:10 size
  --w-------. 1 root root  4096 Oct 1 15:12 update

The "data" is used to read the existing variable value using hexdump. The data is stored in ESL format. The "update" is used to write a new value using cat. The update is to be submitted as an AUTH file.
2019-11-13  powerpc: expose secure variables to userspace via sysfs  [Nayna Jain; 3 files, -0/+260]
PowerNV secure variables, which store the keys used for OS kernel verification, are managed by the firmware. These secure variables need to be accessed by userspace for addition/deletion of the certificates.

This patch adds the sysfs interface to expose secure variables for PowerNV secureboot. Users shall use this interface for manipulating the keys stored in the secure variables.

Signed-off-by: Nayna Jain <nayna@linux.ibm.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Eric Richter <erichte@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1573441836-3632-3-git-send-email-nayna@linux.ibm.com
2019-11-13  powerpc/powernv: Add OPAL API interface to access secure variable  [Nayna Jain; 9 files, -2/+211]
The X.509 certificates trusted by the platform and required to secure boot the OS kernel are wrapped in secure variables, which are controlled by OPAL.

This patch adds a firmware/kernel interface to read and write OPAL secure variables based on the unique key. This support can be enabled using CONFIG_OPAL_SECVAR.

Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com>
Signed-off-by: Nayna Jain <nayna@linux.ibm.com>
Signed-off-by: Eric Richter <erichte@linux.ibm.com>
[mpe: Make secvar_ops __ro_after_init, only build opal-secvar.c if PPC_SECURE_BOOT=y]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1573441836-3632-2-git-send-email-nayna@linux.ibm.com
2019-11-12  powerpc/ima: Indicate kernel modules appended signatures are enforced  [Mimi Zohar; 1 file, -2/+6]
The arch-specific kernel module policy rule requires kernel modules to be signed, either as an IMA signature stored as an xattr, or as an appended signature. As a result, kernel module appended signatures could be enforced without "sig_enforce" being set or reflected in /sys/module/module/parameters/sig_enforce. This patch sets "sig_enforce".

Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1572492694-6520-10-git-send-email-zohar@linux.ibm.com
2019-11-12  powerpc/ima: Update ima arch policy to check for blacklist  [Nayna Jain; 1 file, -4/+4]
This patch updates the arch-specific policies for PowerNV systems to make sure that the binary hash is not blacklisted.

Signed-off-by: Nayna Jain <nayna@linux.ibm.com>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1572492694-6520-9-git-send-email-zohar@linux.ibm.com
2019-11-12  powerpc/ima: Define trusted boot policy  [Nayna Jain; 1 file, -1/+32]
This patch defines an arch-specific trusted boot only policy and a combined secure and trusted boot policy.

Signed-off-by: Nayna Jain <nayna@linux.ibm.com>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1572492694-6520-5-git-send-email-zohar@linux.ibm.com
2019-11-12  powerpc: Detect the trusted boot state of the system  [Nayna Jain; 2 files, -0/+21]
While secure boot permits only properly verified signed kernels to be booted, trusted boot calculates the file hash of the kernel image and stores the measurement prior to boot, which can subsequently be compared against known good values via attestation services.

This patch reads the trusted boot state of a PowerNV system. The state is used to conditionally enable additional measurement rules in the IMA arch-specific policies.

Signed-off-by: Nayna Jain <nayna@linux.ibm.com>
Signed-off-by: Eric Richter <erichte@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e9eeee6b-b9bf-1e41-2954-61dbd6fbfbcf@linux.ibm.com
2019-11-12  powerpc/ima: Add support to initialize ima policy rules  [Nayna Jain; 3 files, -1/+45]
PowerNV systems use a Linux-based bootloader, which relies on the IMA subsystem to enforce different secure boot modes. Since the verification policy may differ based on the secure boot mode of the system, the policies must be defined at runtime.

This patch implements arch-specific support to define IMA policy rules based on the runtime secure boot mode of the system. The arch-specific IMA policies are provided if the PPC_SECURE_BOOT config is enabled.

Signed-off-by: Nayna Jain <nayna@linux.ibm.com>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1572492694-6520-3-git-send-email-zohar@linux.ibm.com
2019-11-12  powerpc: Detect the secure boot mode of the system  [Nayna Jain; 4 files, -0/+70]
This patch defines a function to detect the secure boot state of a PowerNV system. The PPC_SECURE_BOOT config represents the base enablement of secure boot for powerpc.

Signed-off-by: Nayna Jain <nayna@linux.ibm.com>
Signed-off-by: Eric Richter <erichte@linux.ibm.com>
[mpe: Fold in change from Nayna to add "ibm,secureboot" to ids]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/46b003b9-3225-6bf7-9101-ed6580bb748c@linux.ibm.com
2019-11-07  powerpc: Don't flush caches when adding memory  [Alastair D'Silva; 1 file, -2/+0]
This operation takes a significant amount of time when hotplugging large amounts of memory (~50 seconds with 890GB of persistent memory). The flush was originally added in commit fb5924fddf9e ("powerpc/mm: Flush cache on memory hot(un)plug") to support memtrace, but the flush on add is not needed, as the range is flushed on remove.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191104023305.9581-7-alastair@au1.ibm.com
2019-11-07  powerpc: Chunk calls to flush_dcache_range in arch_*_memory  [Alastair D'Silva; 1 file, -2/+25]
When presented with large amounts of memory being hotplugged (in my test case, ~890GB), the call to flush_dcache_range takes a while (~50 seconds), triggering RCU stalls.

This patch breaks up the call into 1GB chunks, calling cond_resched() in between to allow the scheduler to run.

Fixes: fb5924fddf9e ("powerpc/mm: Flush cache on memory hot(un)plug")
Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191104023305.9581-6-alastair@au1.ibm.com
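A rough sketch of the chunking pattern described above (the chunk size comes from the description; the helper name and exact details of the patch are illustrative):

  #include <linux/kernel.h>
  #include <linux/sched.h>
  #include <asm/cacheflush.h>

  #define FLUSH_CHUNK_SIZE (1UL << 30)   /* flush 1GB at a time */

  static void flush_dcache_range_chunked(unsigned long start, unsigned long stop)
  {
          unsigned long i;

          for (i = start; i < stop; i += FLUSH_CHUNK_SIZE) {
                  flush_dcache_range(i, min(stop, i + FLUSH_CHUNK_SIZE));
                  cond_resched();   /* avoid RCU stalls on very large ranges */
          }
  }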
2019-11-07  powerpc: Convert flush_icache_range & friends to C  [Alastair D'Silva; 5 files, -252/+170]
Similar to commit 22e9c88d486a ("powerpc/64: reuse PPC32 static inline flush_dcache_range()"), this patch converts the following ASM symbols to C:

  flush_icache_range()
  __flush_dcache_icache()
  __flush_dcache_icache_phys()

This was done as we discovered a long-standing bug where the length of the range was truncated due to using a 32 bit shift instead of a 64 bit one. By converting these functions to C, it becomes easier to maintain.

flush_dcache_icache_phys() retains a critical assembler section as we must ensure there are no memory accesses while the data MMU is disabled (authored by Christophe Leroy). Since this has no external callers, it has also been made static, allowing the compiler to inline it within flush_dcache_icache_page().

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
[mpe: Minor fixups, don't export __flush_dcache_icache()]
Link: https://lore.kernel.org/r/20191104023305.9581-5-alastair@au1.ibm.com
2019-11-07  powerpc: define helpers to get L1 icache sizes  [Alastair D'Silva; 2 files, -10/+31]
This patch adds helpers to retrieve icache sizes, and renames the existing helpers to make it clear that they are for dcache.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191104023305.9581-4-alastair@au1.ibm.com
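A hedged sketch of what such helpers might look like (the exact names and the cache-geometry source used by the real patch may differ):

  #include <asm/cputable.h>

  /* Illustrative only: cache block sizes as reported by the CPU spec. */
  static inline u32 l1_dcache_bytes_sketch(void)
  {
          return cur_cpu_spec->dcache_bsize;
  }

  static inline u32 l1_icache_bytes_sketch(void)
  {
          return cur_cpu_spec->icache_bsize;
  }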
2019-11-07  powerpc: Allow 64bit VDSO __kernel_sync_dicache to work across ranges >4GB  [Alastair D'Silva; 1 file, -2/+2]
When calling __kernel_sync_dicache with a size >4GB, we were masking off the upper 32 bits, so we would incorrectly flush a range smaller than intended.

This patch replaces the 32 bit shifts with 64 bit ones, so that the full size is accounted for.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Cc: stable@vger.kernel.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191104023305.9581-3-alastair@au1.ibm.com
2019-11-07  powerpc: Allow flush_icache_range to work across ranges >4GB  [Alastair D'Silva; 1 file, -2/+2]
When calling flush_icache_range with a size >4GB, we were masking off the upper 32 bits, so we would incorrectly flush a range smaller than intended.

This patch replaces the 32 bit shifts with 64 bit ones, so that the full size is accounted for.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Cc: stable@vger.kernel.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191104023305.9581-2-alastair@au1.ibm.com
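A small C illustration of the truncation both of these fixes address (the real code is assembly; this just shows why 32-bit arithmetic undercounts cache lines for ranges over 4GB):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t size = 6ULL << 30;   /* a 6GB range */
          unsigned int shift = 7;       /* e.g. 128-byte cache lines */

          uint32_t lines_32bit = (uint32_t)size >> shift;   /* size truncated to its low 32 bits first */
          uint64_t lines_64bit = size >> shift;             /* full 64-bit arithmetic */

          /* 32-bit: 16777216 lines (only 2GB flushed); 64-bit: 50331648 lines (the full 6GB) */
          printf("%u vs %llu\n", lines_32bit, (unsigned long long)lines_64bit);
          return 0;
  }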
2019-11-07  powerpc: Support CMDLINE_EXTEND  [Chris Packham; 2 files, -13/+43]
Bring powerpc in line with other architectures that support extending or overriding the bootloader-provided command line. The current behaviour is most like CMDLINE_FROM_BOOTLOADER, where the bootloader command line is preferred but the kernel config can provide a fallback, so CMDLINE_FROM_BOOTLOADER is the default. CMDLINE_EXTEND can be used to append the CMDLINE from the kernel config to the one provided by the bootloader.

Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190801225006.21952-1-chris.packham@alliedtelesis.co.nz
2019-11-06  powerpc/64s: Always disable branch profiling for prom_init.o  [Michael Ellerman; 1 file, -1/+1]
Otherwise the build fails because prom_init is calling symbols it's not allowed to, eg:

  Error: External symbol 'ftrace_likely_update' referenced from prom_init.c
  make[3]: *** [arch/powerpc/kernel/Makefile:197: arch/powerpc/kernel/prom_init_check] Error 1

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191106051129.7626-1-mpe@ellerman.id.au
2019-11-05  powerpc/security: Fix debugfs data leak on 32-bit  [Geert Uytterhoeven; 2 files, -6/+6]
"powerpc_security_features" is "unsigned long", i.e. 32-bit or 64-bit depending on the platform (PPC_FSL_BOOK3E or PPC_BOOK3S_64). Hence casting its address to "u64 *" and calling debugfs_create_x64() is wrong, and leaks 32 bits of nearby data to userspace on 32-bit platforms.

While all currently defined SEC_FTR_* security feature flags fit in 32 bits, they all have "ULL" suffixes to make them 64-bit constants. Hence fix the leak by changing the type of "powerpc_security_features" (and the parameter types of its accessors) to "u64". This also allows the cast to be dropped.

Fixes: 398af571128fe75f ("powerpc/security: Show powerpc_security_features in debugfs")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191021142309.28105-1-geert+renesas@glider.be
2019-11-05  powerpc/book3s64/hash: Add cond_resched to avoid soft lockup warning  [Aneesh Kumar K.V; 1 file, -0/+1]
With large memory (8TB and more) hotplug, we can get soft lockup warnings as below. These were caused by a long loop without any explicit cond_resched, which is a problem for !PREEMPT kernels. Avoid this using cond_resched() while inserting hash page table entries. We already do similar cond_resched() in __add_pages(), see commit f64ac5e6e306 ("mm, memory_hotplug: add scheduling point to __add_pages").

  rcu: 3-....: (24002 ticks this GP) idle=13e/1/0x4000000000000002 softirq=722/722 fqs=12001 (t=24003 jiffies g=4285 q=2002)
  NMI backtrace for cpu 3
  CPU: 3 PID: 3870 Comm: ndctl Not tainted 5.3.0-197.18-default+ #2
  Call Trace:
    dump_stack+0xb0/0xf4 (unreliable)
    nmi_cpu_backtrace+0x124/0x130
    nmi_trigger_cpumask_backtrace+0x1ac/0x1f0
    arch_trigger_cpumask_backtrace+0x28/0x3c
    rcu_dump_cpu_stacks+0xf8/0x154
    rcu_sched_clock_irq+0x878/0xb40
    update_process_times+0x48/0x90
    tick_sched_handle.isra.16+0x4c/0x80
    tick_sched_timer+0x68/0xe0
    __hrtimer_run_queues+0x180/0x430
    hrtimer_interrupt+0x110/0x300
    timer_interrupt+0x108/0x2f0
    decrementer_common+0x114/0x120
  --- interrupt: 901 at arch_add_memory+0xc0/0x130
      LR = arch_add_memory+0x74/0x130
    memremap_pages+0x494/0x650
    devm_memremap_pages+0x3c/0xa0
    pmem_attach_disk+0x188/0x750
    nvdimm_bus_probe+0xac/0x2c0
    really_probe+0x148/0x570
    driver_probe_device+0x19c/0x1d0
    device_driver_attach+0xcc/0x100
    bind_store+0x134/0x1c0
    drv_attr_store+0x44/0x60
    sysfs_kf_write+0x64/0x90
    kernfs_fop_write+0x1a0/0x270
    __vfs_write+0x3c/0x70
    vfs_write+0xd0/0x260
    ksys_write+0xdc/0x130
    system_call+0x5c/0x68

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191001084656.31277-1-aneesh.kumar@linux.ibm.com
2019-11-05  powerpc/mm/book3s64/radix: Flush the full mm even when need_flush_all is set  [Aneesh Kumar K.V; 1 file, -2/+1]
With the previous patch, we should now not be using need_flush_all for powerpc. But then make sure we force a PID tlbie flush with RIC=2 if we ever find need_flush_all set. Also don't reset it after a mmu gather flush.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191024075801.22434-3-aneesh.kumar@linux.ibm.com
2019-11-05  powerpc/mm/book3s64/radix: Use freed_tables instead of need_flush_all  [Aneesh Kumar K.V; 3 files, -39/+3]
With commit 22a61c3c4f13 ("asm-generic/tlb: Track freeing of page-table directories in struct mmu_gather") we now track whether we freed page tables in mmu_gather. Use that to decide whether to flush the Page Walk Cache.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191024075801.22434-2-aneesh.kumar@linux.ibm.com