path: root/virt
Age  Commit message  (Author, files changed, lines -deleted/+added)
2018-01-16  KVM: arm64: Handle RAS SErrors from EL1 on guest exit  (James Morse, 1 file, -0/+3)
We expect to have firmware-first handling of RAS SErrors, with errors notified via an APEI method. For systems without firmware-first, add some minimal handling to KVM. There are two ways KVM can take an SError due to a guest, and either may be a RAS error: we exit the guest due to an SError routed to EL2 by HCR_EL2.AMO, or we take an SError from EL2 when we unmask PSTATE.A from __guest_exit. For SErrors that interrupt a guest and are routed to EL2, the existing behaviour is to inject an impdef SError into the guest. Add code to handle RAS SErrors based on the ESR. For uncontained and uncategorized errors arm64_is_fatal_ras_serror() will panic(), as these errors compromise the host too. All other error types are contained: for the fatal errors the vCPU can't make progress, so we inject a virtual SError. We ignore contained errors where we can make progress; if we're lucky, we may not hit them again. If only some of the CPUs support RAS the guest will see the cpufeature-sanitised version of the id registers, but we may still take a RAS SError on this CPU. Move the SError handling out of handle_exit() into a new handler that runs before we can be preempted. This allows us to use this_cpu_has_cap(), via arm64_is_ras_serror(). Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
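A minimal sketch of the dispatch described above, using only the helpers named in the text (everything else is assumed rather than taken from the actual patch):

    /* Runs on guest exit, before we can be preempted, so
     * this_cpu_has_cap() via arm64_is_ras_serror() is safe to use. */
    static void handle_guest_serror(struct kvm_vcpu *vcpu, u32 esr)
    {
        /* Uncontained/uncategorized errors make arm64_is_fatal_ras_serror()
         * panic() the host. Anything that is not a recoverable RAS error
         * becomes a virtual SError for the guest. */
        if (!arm64_is_ras_serror(esr) || arm64_is_fatal_ras_serror(NULL, esr))
            kvm_inject_vabt(vcpu);
        /* else: contained and survivable, ignore it and carry on */
    }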
2018-01-16  KVM: arm/arm64: mask/unmask daif around VHE guests  (James Morse, 1 file, -0/+4)
Non-VHE systems take an exception to EL2 in order to world-switch into the guest. When returning from the guest, KVM implicitly restores the DAIF flags when it returns to the kernel at EL1. With VHE none of this exception-level jumping happens, so KVM's world-switch code is exposed to the host kernel's DAIF values, and KVM spills the guest-exit DAIF values back into the host kernel. On entry to a guest we have Debug and SError exceptions unmasked; KVM has switched VBAR but isn't prepared to handle these. On guest exit Debug exceptions are left disabled once we return to the host and will stay this way until we enter user space. Add a helper to mask/unmask DAIF around VHE guests. The unmask can only happen after the host's VBAR value has been synchronised by the isb in __vhe_hyp_call (via kvm_call_hyp()). Masking could be as late as setting KVM's VBAR value, but is kept here for symmetry. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
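A sketch of the helper pair described above (helper names and the DAIF constant are assumptions based on the description):

    static inline void kvm_arm_vhe_guest_enter(void)
    {
        local_daif_mask();      /* mask Debug, SError, IRQ, FIQ around the world-switch */
    }

    static inline void kvm_arm_vhe_guest_exit(void)
    {
        /* The host VBAR is back in place, so Debug and SError can be
         * unmasked; IRQs stay masked until the normal exit path runs. */
        local_daif_restore(DAIF_PROCCTX_NOIRQ);
    }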
2018-01-15  KVM: arm/arm64: fix HYP ID map extension to 52 bits  (Kristina Martsenko, 1 file, -12/+6)
Commit fa2a8445b1d3 incorrectly masks the index of the HYP ID map pgd entry, causing a non-VHE kernel to hang during boot. This happens when VA_BITS=48 and the ID map text is in 52-bit physical memory. In this case we don't need an extra table level but need more entries in the top-level table, so we need to map into hyp_pgd and need to use __kvm_idmap_ptrs_per_pgd to mask in the extra bits. However, __create_hyp_mappings currently masks by PTRS_PER_PGD instead. Fix it so that we always use __kvm_idmap_ptrs_per_pgd for the HYP ID map. This ensures that we use the larger mask for the top-level ID map table when it has more entries. In all other cases, PTRS_PER_PGD is used as normal. Fixes: fa2a8445b1d3 ("arm64: allow ID map to be extended to 52 bits") Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-01-13  KVM: arm/arm64: Convert kvm_host_cpu_state to a static per-cpu allocation  (James Morse, 1 file, -15/+3)
kvm_host_cpu_state is a per-cpu allocation made from kvm_arch_init() used to store the host EL1 registers when KVM switches to a guest. Make it easier for assembly code to generate pointers into this per-cpu memory by making it a static allocation. Signed-off-by: James Morse <james.morse@arm.com> Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
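In practice the change amounts to something like the following (context type name assumed):

    /* One host context per CPU at a fixed per-cpu offset, so assembly can
     * reach it with an adr_this_cpu-style sequence instead of loading a
     * pointer produced by a runtime allocation. */
    static DEFINE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state);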
2018-01-08  arm64: KVM: Use per-CPU vector when BP hardening is enabled  (Marc Zyngier, 1 file, -1/+7)
Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-12-22  arm64: allow ID map to be extended to 52 bits  (Kristina Martsenko, 1 file, -2/+8)
Currently, when using VA_BITS < 48, if the ID map text happens to be placed in physical memory above VA_BITS, we increase the VA size (up to 48) and create a new table level, in order to map in the ID map text. This is okay because the system always supports 48 bits of VA. This patch extends the code such that if the system supports 52 bits of VA, and the ID map text is placed that high up, then we increase the VA size accordingly, up to 52. One difference from the current implementation is that so far the condition of VA_BITS < 48 has meant that the top level table is always "full", with the maximum number of entries, and an extra table level is always needed. Now, when VA_BITS = 48 (and using 64k pages), the top level table is not full, and we simply need to increase the number of entries in it, instead of creating a new table level. Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Bob Picco <bob.picco@oracle.com> Reviewed-by: Bob Picco <bob.picco@oracle.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> [catalin.marinas@arm.com: reduce arguments to __create_hyp_mappings()] [catalin.marinas@arm.com: reworked/renamed __cpu_uses_extended_idmap_level()] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-12-22  arm64: handle 52-bit addresses in TTBR  (Kristina Martsenko, 1 file, -1/+1)
The top 4 bits of a 52-bit physical address are positioned at bits 2..5 in the TTBR registers. Introduce a couple of macros to move the bits there, and change all TTBR writers to use them. Leave TTBR0 PAN code unchanged, to avoid complicating it. A system with 52-bit PA will have PAN anyway (because it's ARMv8.1 or later), and a system without 52-bit PA can only use up to 48-bit PAs. A later patch in this series will add a kconfig dependency to ensure PAN is configured. In addition, when using 52-bit PA there is a special alignment requirement on the top-level table. We don't currently have any VA_BITS configuration that would violate the requirement, but one could be added in the future, so add a compile-time BUG_ON to check for it. Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Bob Picco <bob.picco@oracle.com> Reviewed-by: Bob Picco <bob.picco@oracle.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> [catalin.marinas@arm.com: added TTBR_BADD_MASK_52 comment] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
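A sketch of the bit movement being described, written as C macros (macro names and the exact field mask are assumptions):

    /* PA[51:48] live in TTBRx[5:2]; shifting right by 46 moves bit 48 to
     * bit 2. The mask keeps the (possibly extended) BADDR field. */
    #define TTBR_BADDR_MASK_52  (((1UL << 46) - 1) << 2)
    #define phys_to_ttbr(pa)    (((pa) | ((pa) >> 46)) & TTBR_BADDR_MASK_52)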
2017-12-06  KVM: x86: fix APIC page invalidation  (Radim Krčmář, 1 file, -0/+8)
Implementation of the unpinned APIC page didn't update the VMCS address cache when invalidation was done through range mmu notifiers. This became a problem when the page notifier was removed. Re-introduce the arch-specific helper and call it from ...range_start. Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com> Fixes: 38b9917350cb ("kvm: vmx: Implement set_apic_access_page_addr") Fixes: 369ea8242c0f ("mm/rmap: update to new mmu_notifier semantic v2") Cc: <stable@vger.kernel.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Andrea Arcangeli <aarcange@redhat.com> Tested-by: Wanpeng Li <wanpeng.li@hotmail.com> Tested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
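A sketch of the re-introduced arch hook on the x86 side (function and request names assumed from the description, not copied from the patch):

    void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
                                                unsigned long start,
                                                unsigned long end)
    {
        /* The APIC access page's physical address is cached in the VMCS,
         * so ask every vCPU to reload it when its hva is invalidated. */
        unsigned long apic_address = gfn_to_hva(kvm,
                                                APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);

        if (start <= apic_address && apic_address < end)
            kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
    }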
2017-12-05  Merge tag 'kvm-arm-fixes-for-v4.15-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm  (Radim Krčmář, 9 files, -50/+43)
KVM/ARM Fixes for v4.15.
Fixes:
- A number of issues in the vgic discovered using SMATCH
- A one-off bit calculation in our stage base address mask (32-bit and 64-bit)
- Fixes to single-step debugging instructions that trap for other reasons such as MMIO aborts
- Printing unavailable hyp mode as error
- Potential spinlock deadlock in the vgic
- Avoid calling vgic vcpu free more than once
- Broken bit calculation for big endian systems
2017-12-04  KVM: arm/arm64: Fix broken GICH_ELRSR big endian conversion  (Christoffer Dall, 1 file, -4/+0)
We are incorrectly rearranging 32-bit words inside a 64-bit typed value for big endian systems, which would result in never marking a virtual interrupt as inactive on big endian systems (assuming 32 or fewer LRs on the hardware). Fix this by not doing any word order manipulation for the typed values. Cc: <stable@vger.kernel.org> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
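The corrected construction looks roughly like this (accessor and field names assumed): the two 32-bit reads always combine the same way because the destination is a typed u64, not a byte buffer:

    cpu_if->vgic_elrsr = readl_relaxed(base + GICH_ELRSR0) |
                         ((u64)readl_relaxed(base + GICH_ELRSR1) << 32);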
2017-12-01  KVM: arm/arm64: kvm_arch_destroy_vm cleanups  (Andrew Jones, 1 file, -1/+1)
kvm_vgic_vcpu_destroy already gets called from kvm_vgic_destroy for each vcpu, so we don't have to call it from kvm_arch_vcpu_free. Additionally the other architectures set kvm->online_vcpus to zero after freeing them. We might as well do that for ARM too. Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-12-01  KVM: arm/arm64: Fix spinlock acquisition in vgic_set_owner  (Marc Zyngier, 1 file, -2/+3)
vgic_set_owner acquires the irq lock without disabling interrupts, resulting in a lockdep splat (an interrupt could fire and result in the same lock being taken if the same virtual irq is to be injected). In practice, it is almost impossible to trigger this bug, but better safe than sorry. Convert the lock acquisition to a spin_lock_irqsave() and keep lockdep happy. Reported-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
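The resulting locking pattern, sketched with assumed field names:

    unsigned long flags;
    int ret = 0;

    /* IRQs off while holding irq_lock: the injection path may take the
     * same lock from interrupt context. */
    spin_lock_irqsave(&irq->irq_lock, flags);
    if (irq->owner && irq->owner != owner)
        ret = -EEXIST;
    else
        irq->owner = owner;
    spin_unlock_irqrestore(&irq->irq_lock, flags);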
2017-11-30  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds, 2 files, -5/+26)
Pull KVM fixes from Paolo Bonzini:
- x86 bugfixes: APIC, nested virtualization, IOAPIC
- PPC bugfix: HPT guests on a POWER9 radix host

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (26 commits)
  KVM: Let KVM_SET_SIGNAL_MASK work as advertised
  KVM: VMX: Fix vmx->nested freeing when no SMI handler
  KVM: VMX: Fix rflags cache during vCPU reset
  KVM: X86: Fix softlockup when get the current kvmclock
  KVM: lapic: Fixup LDR on load in x2apic
  KVM: lapic: Split out x2apic ldr calculation
  KVM: PPC: Book3S HV: Fix migration and HPT resizing of HPT guests on radix hosts
  KVM: vmx: use X86_CR4_UMIP and X86_FEATURE_UMIP
  KVM: x86: Fix CPUID function for word 6 (80000001_ECX)
  KVM: nVMX: Fix vmx_check_nested_events() return value in case an event was reinjected to L2
  KVM: x86: ioapic: Preserve read-only values in the redirection table
  KVM: x86: ioapic: Clear Remote IRR when entry is switched to edge-triggered
  KVM: x86: ioapic: Remove redundant check for Remote IRR in ioapic_set_irq
  KVM: x86: ioapic: Don't fire level irq when Remote IRR set
  KVM: x86: ioapic: Fix level-triggered EOI and IOAPIC reconfigure race
  KVM: x86: inject exceptions produced by x86_decode_insn
  KVM: x86: Allow suppressing prints on RDMSR/WRMSR of unhandled MSRs
  KVM: x86: fix em_fxstor() sleeping while in atomic
  KVM: nVMX: Fix mmu context after VMLAUNCH/VMRESUME failure
  KVM: nVMX: Validate the IA32_BNDCFGS on nested VM-entry
  ...
2017-11-29  kvm: arm: don't treat unavailable HYP mode as an error  (Ard Biesheuvel, 1 file, -1/+1)
Since it is perfectly legal to run the kernel at EL1, it is not actually an error if HYP mode is not available when attempting to initialize KVM, given that KVM support cannot be built as a module. So demote the kvm_err() to kvm_info(), which prevents the error from appearing on an otherwise 'quiet' console. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-29  KVM: arm/arm64: Avoid attempting to load timer vgic state without a vgic  (Christoffer Dall, 1 file, -1/+4)
The timer optimization patches inadvertently changed the logic to always load the timer state as if we have a vgic, even if we don't have a vgic. Fix this by doing the usual irqchip_in_kernel() check and calling the appropriate load function. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
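A sketch of the guard described above (the two load-helper names are placeholders, not the actual function names):

    if (likely(irqchip_in_kernel(vcpu->kvm)))
        kvm_timer_vcpu_load_vgic(vcpu);   /* needs the vgic for mapped-IRQ state */
    else
        kvm_timer_vcpu_load_user(vcpu);   /* userspace irqchip: no vgic to touch */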
2017-11-29  kvm: arm64: handle single-step of userspace mmio instructions  (Alex Bennée, 1 file, -0/+3)
The system state of KVM when using userspace emulation is not complete until we return into KVM_RUN. To handle mmio-related updates we wait until they have been committed and then schedule our KVM_EXIT_DEBUG. The kvm_arm_handle_step_debug() helper tells us if we need to return and sets up the exit_reason for us. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
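Roughly where the helper slots into the KVM_RUN path (the surrounding flow is an assumption):

    /* Back in KVM_RUN: first commit the MMIO result that userspace
     * emulated, then let single-step turn it into a debug exit. */
    if (run->exit_reason == KVM_EXIT_MMIO) {
        ret = kvm_handle_mmio_return(vcpu, run);
        if (ret)
            return ret;
        if (kvm_arm_handle_step_debug(vcpu, run))
            return 0;   /* run->exit_reason is now KVM_EXIT_DEBUG */
    }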
2017-11-29  KVM: arm/arm64: vgic-v4: Only perform an unmap for valid vLPIs  (Marc Zyngier, 1 file, -2/+4)
Before performing an unmap, let's check that what we have was really mapped in the first place. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-29  KVM: arm/arm64: vgic-its: Check result of allocation before use  (Marc Zyngier, 1 file, -0/+2)
We miss a test against NULL after allocation. Fixes: 6d03a68f8054 ("KVM: arm64: vgic-its: Turn device_id validation into generic ID validation") Cc: stable@vger.kernel.org # 4.8 Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-29  KVM: arm/arm64: vgic-its: Preserve the previous read from the pending table  (Marc Zyngier, 1 file, -1/+1)
The current pending table parsing code assumes that we keep the previous read of the pending bits, but keeps that variable in the current block, making sure it is discarded on each loop iteration. We end up using whatever is on the stack. Who knows, it might just be the right thing... Fixes: 33d3bc9556a7d ("KVM: arm64: vgic-its: Read initial LPI pending table") Cc: stable@vger.kernel.org # 4.8 Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
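The shape of the loop with the fix applied (variable names assumed): the cached byte has to outlive the loop body so iterations that land in the same byte reuse the previous read instead of stack garbage:

    u8 pendmask = 0;
    int last_byte_offset = -1;

    for (i = 0; i < nr_lpis; i++) {
        int byte_offset = intids[i] / BITS_PER_BYTE;
        int bit_nr = intids[i] % BITS_PER_BYTE;

        if (byte_offset != last_byte_offset) {
            ret = kvm_read_guest(kvm, pendbase + byte_offset, &pendmask, 1);
            if (ret)
                return ret;
            last_byte_offset = byte_offset;
        }

        if (pendmask & (1U << bit_nr)) {
            /* mark intids[i] as pending (details elided) */
        }
    }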
2017-11-29  KVM: arm/arm64: vgic: Preserve the previous read from the pending table  (Marc Zyngier, 1 file, -1/+1)
The current pending table parsing code assumes that we keep the previous read of the pending bits, but keeps that variable in the current block, making sure it is discarded on each loop iteration. We end up using whatever is on the stack. Who knows, it might just be the right thing... Fixes: 280771252c1ba ("KVM: arm64: vgic-v3: KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES") Cc: stable@vger.kernel.org # 4.12 Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-29  KVM: arm/arm64: vgic-irqfd: Fix MSI entry allocation  (Marc Zyngier, 1 file, -2/+1)
Using the size of the structure we're allocating is a good idea and avoids any surprise... In this case, we're happily confusing kvm_kernel_irq_routing_entry and kvm_irq_routing_entry... Fixes: 95b110ab9a09 ("KVM: arm/arm64: Enable irqchip routing") Cc: stable@vger.kernel.org # 4.8 Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
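The usual kernel idiom that avoids this class of bug, sketched for the entry in question:

    struct kvm_kernel_irq_routing_entry *entry;

    /* sizeof(*entry) always follows the type of what we actually store,
     * unlike spelling out a similarly named struct by hand. */
    entry = kzalloc(sizeof(*entry), GFP_KERNEL);
    if (!entry)
        return -ENOMEM;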
2017-11-29  KVM: arm/arm64: VGIC: extend !vgic_is_initialized guard  (Andre Przywara, 1 file, -1/+2)
Commit f39d16cbabf9 ("KVM: arm/arm64: Guard kvm_vgic_map_is_active against !vgic_initialized") introduced a check whether the VGIC has been initialized before accessing the spinlock and the VGIC data structure. However the vgic_get_irq() call in the variable declaration sneaked through the net, so let's make sure that this also gets called only after we have actually allocated the arrays this function accesses. Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
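A sketch of the reordering described above (function body trimmed and partly assumed):

    bool kvm_vgic_map_is_active(struct kvm_vcpu *vcpu, unsigned int vintid)
    {
        struct vgic_irq *irq;
        bool map_is_active;

        /* Bail out before touching any vgic data structure... */
        if (!vgic_initialized(vcpu->kvm))
            return false;

        /* ...including the lookup that used to hide in the declaration. */
        irq = vgic_get_irq(vcpu->kvm, vcpu, vintid);
        spin_lock(&irq->irq_lock);
        map_is_active = irq->hw && irq->active;
        spin_unlock(&irq->irq_lock);
        vgic_put_irq(vcpu->kvm, irq);

        return map_is_active;
    }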
2017-11-29  KVM: arm/arm64: Don't enable/disable physical timer access on VHE  (Christoffer Dall, 2 files, -34/+20)
After the timer optimization rework we accidentally end up calling the physical timer enable/disable functions on VHE systems, which is neither needed nor correct, since the CNTHCTL_EL2 register format is different when HCR_EL2.E2H is set. CNTHCTL_EL2 is initialized when CPUs come online in kvm_timer_init_vhe(), and we don't have to call these functions on VHE systems, which also allows us to inline the non-VHE functionality. Reported-by: Jintack Lim <jintack@cs.columbia.edu> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-27  KVM: Let KVM_SET_SIGNAL_MASK work as advertised  (Jan H. Schönherr, 2 files, -5/+26)
KVM API says for the signal mask you set via KVM_SET_SIGNAL_MASK, that "any unblocked signal received [...] will cause KVM_RUN to return with -EINTR" and that "the signal will only be delivered if not blocked by the original signal mask". This, however, is only true, when the calling task has a signal handler registered for a signal. If not, signal evaluation is short-circuited for SIG_IGN and SIG_DFL, and the signal is either ignored without KVM_RUN returning or the whole process is terminated. Make KVM_SET_SIGNAL_MASK behave as advertised by utilizing logic similar to that in do_sigtimedwait() to avoid short-circuiting of signals. Signed-off-by: Jan H. Schönherr <jschoenh@amazon.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-11-24  Merge tag 'kvm-4.15-2' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds, 10 files, -111/+641)
Pull KVM updates from Radim Krčmář: "Trimmed second batch of KVM changes for Linux 4.15:
- GICv4 Support for KVM/ARM
- re-introduce support for CPUs without virtual NMI (cc stable) and allow testing of KVM without virtual NMI on available CPUs
- fix long-standing performance issues with assigned devices on AMD (cc stable)"

* tag 'kvm-4.15-2' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (30 commits)
  kvm: vmx: Allow disabling virtual NMI support
  kvm: vmx: Reinstate support for CPUs without virtual NMI
  KVM: SVM: obey guest PAT
  KVM: arm/arm64: Don't queue VLPIs on INV/INVALL
  KVM: arm/arm64: Fix GICv4 ITS initialization issues
  KVM: arm/arm64: GICv4: Theory of operations
  KVM: arm/arm64: GICv4: Enable VLPI support
  KVM: arm/arm64: GICv4: Prevent userspace from changing doorbell affinity
  KVM: arm/arm64: GICv4: Prevent a VM using GICv4 from being saved
  KVM: arm/arm64: GICv4: Enable virtual cpuif if VLPIs can be delivered
  KVM: arm/arm64: GICv4: Hook vPE scheduling into vgic flush/sync
  KVM: arm/arm64: GICv4: Use the doorbell interrupt as an unblocking source
  KVM: arm/arm64: GICv4: Add doorbell interrupt handling
  KVM: arm/arm64: GICv4: Use pending_last as a scheduling hint
  KVM: arm/arm64: GICv4: Handle INVALL applied to a vPE
  KVM: arm/arm64: GICv4: Propagate property updates to VLPIs
  KVM: arm/arm64: GICv4: Handle MOVALL applied to a vPE
  KVM: arm/arm64: GICv4: Handle CLEAR applied to a VLPI
  KVM: arm/arm64: GICv4: Propagate affinity changes to the physical ITS
  KVM: arm/arm64: GICv4: Unmap VLPI when freeing an LPI
  ...
2017-11-17  Merge branch 'misc.compat' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds, 1 file, -5/+2)
Pull compat and uaccess updates from Al Viro:
- {get,put}_compat_sigset() series
- assorted compat ioctl stuff
- more set_fs() elimination
- a few more timespec64 conversions
- several removals of pointless access_ok() in places where it was followed only by non-__ variants of primitives

* 'misc.compat' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (24 commits)
  coredump: call do_unlinkat directly instead of sys_unlink
  fs: expose do_unlinkat for built-in callers
  ext4: take handling of EXT4_IOC_GROUP_ADD into a helper, get rid of set_fs()
  ipmi: get rid of pointless access_ok()
  pi433: sanitize ioctl
  cxlflash: get rid of pointless access_ok()
  mtdchar: get rid of pointless access_ok()
  r128: switch compat ioctls to drm_ioctl_kernel()
  selection: get rid of field-by-field copyin
  VT_RESIZEX: get rid of field-by-field copyin
  i2c compat ioctls: move to ->compat_ioctl()
  sched_rr_get_interval(): move compat to native, get rid of set_fs()
  mips: switch to {get,put}_compat_sigset()
  sparc: switch to {get,put}_compat_sigset()
  s390: switch to {get,put}_compat_sigset()
  ppc: switch to {get,put}_compat_sigset()
  parisc: switch to {get,put}_compat_sigset()
  get_compat_sigset()
  get rid of {get,put}_compat_itimerspec()
  io_getevents: Use timespec64 to represent timeouts
  ...
2017-11-17  Merge tag 'kvm-arm-gicv4-for-v4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD  (Paolo Bonzini, 10 files, -111/+641)
GICv4 Support for KVM/ARM for v4.15
2017-11-16  Merge tag 'kvm-4.15-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds, 13 files, -365/+673)
Pull KVM updates from Radim Krčmář: "First batch of KVM changes for 4.15

Common:
- Python 3 support in kvm_stat
- Accounting of slabs to kmemcg

ARM:
- Optimized arch timer handling for KVM/ARM
- Improvements to the VGIC ITS code and introduction of an ITS reset ioctl
- Unification of the 32-bit fault injection logic
- More exact external abort matching logic

PPC:
- Support for running hashed page table (HPT) MMU mode on a host that is using the radix MMU mode; single threaded mode on POWER 9 is added as a pre-requisite
- Resolution of merge conflicts with the last second 4.14 HPT fixes
- Fixes and cleanups

s390:
- Some initial preparation patches for exitless interrupts and crypto
- New capability for AIS migration
- Fixes

x86:
- Improved emulation of LAPIC timer mode changes, MCi_STATUS MSRs, and after-reset state
- Refined dependencies for VMX features
- Fixes for nested SMI injection
- A lot of cleanups"

* tag 'kvm-4.15-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (89 commits)
  KVM: s390: provide a capability for AIS state migration
  KVM: s390: clear_io_irq() requests are not expected for adapter interrupts
  KVM: s390: abstract conversion between isc and enum irq_types
  KVM: s390: vsie: use common code functions for pinning
  KVM: s390: SIE considerations for AP Queue virtualization
  KVM: s390: document memory ordering for kvm_s390_vcpu_wakeup
  KVM: PPC: Book3S HV: Cosmetic post-merge cleanups
  KVM: arm/arm64: fix the incompatible matching for external abort
  KVM: arm/arm64: Unify 32bit fault injection
  KVM: arm/arm64: vgic-its: Implement KVM_DEV_ARM_ITS_CTRL_RESET
  KVM: arm/arm64: Document KVM_DEV_ARM_ITS_CTRL_RESET
  KVM: arm/arm64: vgic-its: Free caches when GITS_BASER Valid bit is cleared
  KVM: arm/arm64: vgic-its: New helper functions to free the caches
  KVM: arm/arm64: vgic-its: Remove kvm_its_unmap_device
  arm/arm64: KVM: Load the timer state when enabling the timer
  KVM: arm/arm64: Rework kvm_timer_should_fire
  KVM: arm/arm64: Get rid of kvm_timer_flush_hwstate
  KVM: arm/arm64: Avoid phys timer emulation in vcpu entry/exit
  KVM: arm/arm64: Move phys_timer_emulate function
  KVM: arm/arm64: Use kvm_arm_timer_set/get_reg for guest register traps
  ...
2017-11-16  Merge tag 'kvm-s390-next-4.15-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux  (Radim Krčmář, 1 file, -2/+2)
KVM: s390: fixes and improvements for 4.15
- Some initial preparation patches for exitless interrupts and crypto
- New capability for AIS migration
- Fixes
- merge of the sthyi tree from the base s390 team, which moves the sthyi out of KVM into a shared function also for non-KVM
2017-11-15  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds, 1 file, -0/+3)
Pull arm64 updates from Will Deacon: "The big highlight is support for the Scalable Vector Extension (SVE) which required extensive ABI work to ensure we don't break existing applications by blowing away their signal stack with the rather large new vector context (<= 2 kbit per vector register). There's further work to be done optimising things like exception return, but the ABI is solid now. Much of the line count comes from some new PMU drivers we have, but they're pretty self-contained and I suspect we'll have more of them in future. Plenty of acronym soup here:
- initial support for the Scalable Vector Extension (SVE)
- improved handling for SError interrupts (required to handle RAS events)
- enable GCC support for 128-bit integer types
- remove kernel text addresses from backtraces and register dumps
- use of WFE to implement long delay()s
- ACPI IORT updates from Lorenzo Pieralisi
- perf PMU driver for the Statistical Profiling Extension (SPE)
- perf PMU driver for Hisilicon's system PMUs
- misc cleanups and non-critical fixes"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (97 commits)
  arm64: Make ARMV8_DEPRECATED depend on SYSCTL
  arm64: Implement __lshrti3 library function
  arm64: support __int128 on gcc 5+
  arm64/sve: Add documentation
  arm64/sve: Detect SVE and activate runtime support
  arm64/sve: KVM: Hide SVE from CPU features exposed to guests
  arm64/sve: KVM: Treat guest SVE use as undefined instruction execution
  arm64/sve: KVM: Prevent guests from using SVE
  arm64/sve: Add sysctl to set the default vector length for new processes
  arm64/sve: Add prctl controls for userspace vector length management
  arm64/sve: ptrace and ELF coredump support
  arm64/sve: Preserve SVE registers around EFI runtime service calls
  arm64/sve: Preserve SVE registers around kernel-mode NEON use
  arm64/sve: Probe SVE capabilities and usable vector lengths
  arm64: cpufeature: Move sys_caps_initialised declarations
  arm64/sve: Backend logic for setting the vector length
  arm64/sve: Signal handling support
  arm64/sve: Support vector length resetting for new processes
  arm64/sve: Core task context handling
  arm64/sve: Low-level CPU setup
  ...
2017-11-10  KVM: arm/arm64: Don't queue VLPIs on INV/INVALL  (Christoffer Dall, 1 file, -3/+6)
Since VLPIs are injected directly by the hardware there's no need to mark these as pending in software and queue them on the AP list. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
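A sketch of the check this adds to the LPI update path (field names and surrounding flow assumed):

    /* A forwarded VLPI is delivered by the redistributor itself, so it must
     * never be latched as pending in software or queued on the AP list. */
    if (irq->hw) {
        spin_unlock(&irq->irq_lock);
        return;
    }

    irq->pending_latch = true;
    vgic_queue_irq_unlock(kvm, irq);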
2017-11-10  KVM: arm/arm64: Fix GICv4 ITS initialization issues  (Christoffer Dall, 3 files, -6/+7)
We should only try to initialize GICv4 data structures on a GICv4 capable system. Move the vgic_supports_direct_msis() check into vgic_v4_init() so that any KVM VGIC initialization path does not fail on non-GICv4 systems. Also be slightly more strict in the checking of the return value in vgic_its_create, and only error out on negative return values from the vgic_v4_init() function. This is important because the kvm device code only treats negative values as errors and only cleans up in this case. Erroneously treating a positive return value as an error from the vgic_v4_init() function can lead to NULL pointer dereferences, as has recently been observed. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
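A sketch of the two changes, with shapes assumed from the description rather than the patch:

    /* In vgic_v4_init(): quietly succeed on non-GICv4 systems. */
    if (!vgic_supports_direct_msis(kvm))
        return 0;

    /* In vgic_its_create(): only negative values are errors, because the
     * kvm device code cleans up only in that case. */
    ret = vgic_v4_init(kvm);
    if (ret < 0)
        return ret;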
2017-11-10  KVM: arm/arm64: GICv4: Theory of operations  (Marc Zyngier, 1 file, -0/+67)
Yet another braindump so I can free some cells... Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10  KVM: arm/arm64: GICv4: Enable VLPI support  (Marc Zyngier, 1 file, -0/+14)
All it takes is the has_v4 flag to be set in gic_kvm_info as well as "kvm-arm.vgic_v4_enable=1" being passed on the command line for GICv4 to be enabled in KVM. Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10  KVM: arm/arm64: GICv4: Prevent userspace from changing doorbell affinity  (Marc Zyngier, 1 file, -2/+4)
We so far allocate the doorbell interrupts without taking any special measure regarding the affinity of these interrupts. We simply move them around as required when the vcpu gets scheduled on a different CPU. But that's counting without userspace (and the evil irqbalance) that can try and move the VPE interrupt around, causing the ITS code to emit VMOVP commands and remap the doorbell to another redistributor. Worse, this can happen while the vcpu is running, causing all kinds of trouble if the VPE is already resident, and we end up in UNPRED territory. So let's take definitive action and prevent userspace from messing with us. This is just a matter of adding IRQ_NO_BALANCING to the set of flags we already have, leaving the kernel in sole control of the affinity. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10  KVM: arm/arm64: GICv4: Prevent a VM using GICv4 from being saved  (Marc Zyngier, 1 file, -0/+9)
The GICv4 architecture doesn't make it easy for save/restore to work, as it doesn't give any guarantee that the pending state is written into the pending table. So let's not take any chance, and let's return an error if we encounter any LPI that has the HW bit set. In order for userspace to distinguish this error from other failure modes, use -EACCES as an error code. Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
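The check reduces to something like this in the pending-table save path (field name assumed):

    /* The HW bit means the pending state lives in the redistributor, and
     * GICv4 gives no guarantee it has been written back to memory. */
    if (irq->hw)
        return -EACCES;     /* distinguishable from other save failures */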
2017-11-10  KVM: arm/arm64: GICv4: Enable virtual cpuif if VLPIs can be delivered  (Marc Zyngier, 1 file, -3/+6)
In order for VLPIs to be delivered to the guest, we must make sure that the virtual cpuif is always enabled, irrespective of the presence of virtual interrupts in the LRs. Acked-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10  KVM: arm/arm64: GICv4: Hook vPE scheduling into vgic flush/sync  (Marc Zyngier, 3 files, -0/+45)
The redistributor needs to be told which vPE is about to be run, and tells us whether there is any pending VLPI on exit. Let's add the scheduling calls to the vgic flush/sync functions, allowing the VLPIs to be delivered to the guest. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10  KVM: arm/arm64: GICv4: Use the doorbell interrupt as an unblocking source  (Marc Zyngier, 2 files, -0/+20)
The doorbell interrupt is only useful if the vcpu is blocked on WFI. In all other cases, receiving a doorbell interrupt is just a waste of cycles. So let's only enable the doorbell if a vcpu is getting blocked, and disable it when it is unblocked. This is very similar to what we're doing for the background timer. Reviewed-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10  KVM: arm/arm64: GICv4: Add doorbell interrupt handling  (Marc Zyngier, 1 file, -0/+48)
When a vPE is not running, a VLPI being made pending results in a doorbell interrupt being delivered. Let's handle this interrupt and update the pending_last flag that indicates that VLPIs are pending. The corresponding vcpu is also kicked into action. Special care is taken to prevent the doorbell from being enabled at request time (this is controlled separately), and to make the disabling of the interrupt non-lazy. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
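A sketch of the handler described above (field and request names assumed):

    static irqreturn_t vgic_v4_doorbell_handler(int irq, void *info)
    {
        struct kvm_vcpu *vcpu = info;

        /* Remember that VLPIs are pending and get the vcpu out of WFI. */
        vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last = true;
        kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
        kvm_vcpu_kick(vcpu);

        return IRQ_HANDLED;
    }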
2017-11-10  KVM: arm/arm64: GICv4: Use pending_last as a scheduling hint  (Marc Zyngier, 1 file, -0/+3)
When a vPE exits, the pending_last flag is set when there are pending VLPIs stored in the pending table. Similarly, this flag will be set when a doorbell interrupt fires, as it indicates the same condition. Let's update kvm_vgic_vcpu_pending_irq() to account for that flag as well, making a vcpu runnable when it is set. Acked-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10  KVM: arm/arm64: GICv4: Handle INVALL applied to a vPE  (Marc Zyngier, 1 file, -6/+9)
There is no need to perform an INV for each interrupt when updating multiple interrupts. Instead, we can rely on the final VINVALL that gets sent to the ITS to do the work for all of them. Acked-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10  KVM: arm/arm64: GICv4: Propagate property updates to VLPIs  (Marc Zyngier, 1 file, -0/+3)
Upon updating a property, we propagate it all the way to the physical ITS, and ask for an INV command to be executed there. Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10  KVM: arm/arm64: GICv4: Handle MOVALL applied to a vPE  (Marc Zyngier, 1 file, -9/+10)
The current implementation of MOVALL doesn't allow us to call into the core ITS code, as we hold a number of spinlocks. Let's try a method used in other parts of the code, where we copy the intids of the candidate interrupts, and then do whatever we need to do with them outside of the critical section. This allows us to move the interrupts one by one, at the expense of a bit of CPU time. Who cares? MOVALL is such a stupid command anyway... Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10  KVM: arm/arm64: GICv4: Handle CLEAR applied to a VLPI  (Marc Zyngier, 1 file, -0/+4)
Handling CLEAR is pretty easy. Just ask the ITS driver to clear the corresponding pending bit (which will turn into a CLEAR command on the physical side). Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10  KVM: arm/arm64: GICv4: Propagate affinity changes to the physical ITS  (Marc Zyngier, 1 file, -1/+15)
When the guest issues an affinity change, we need to tell the physical ITS that we're now targeting a new vcpu. This is done by extracting the current mapping, updating the target, and reapplying the mapping. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
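A sketch of the extract/update/reapply sequence (GICv4 layer entry points as described; field names assumed):

    struct its_vlpi_map map;
    int ret;

    ret = its_get_vlpi(irq->host_irq, &map);        /* extract the current mapping */
    if (!ret) {
        map.vpe = &target_vcpu->arch.vgic_cpu.vgic_v3.its_vpe;  /* retarget */
        ret = its_map_vlpi(irq->host_irq, &map);    /* reapply */
    }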
2017-11-10  KVM: arm/arm64: GICv4: Unmap VLPI when freeing an LPI  (Marc Zyngier, 1 file, -1/+5)
When freeing an LPI (on a DISCARD command, for example), we need to unmap the VLPI down to the physical ITS level. Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2017-11-10  KVM: arm/arm64: GICv4: Handle INT command applied to a VLPI  (Marc Zyngier, 1 file, -0/+4)
If the guest issues an INT command targeting a VLPI, let's call into the irq_set_irqchip_state() helper to make it pending on the physical side. This works just as well if userspace decides to inject an interrupt using the normal userspace API... Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
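For a forwarded VLPI this reduces to something like (field name assumed):

    if (irq->hw)    /* forwarded: make it pending on the physical side */
        return irq_set_irqchip_state(irq->host_irq,
                                     IRQCHIP_STATE_PENDING, true);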
2017-11-10  KVM: arm/arm64: GICv4: Wire mapping/unmapping of VLPIs in VFIO irq bypass  (Marc Zyngier, 2 files, -2/+108)
Let's use the irq bypass mechanism also used for x86 posted interrupts to intercept the virtual PCIe endpoint configuration and establish our LPI->VLPI mapping. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
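A sketch of the producer-side hook, using the generic irq bypass and irqfd plumbing (the forwarding helper name is an assumption):

    int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
                                         struct irq_bypass_producer *prod)
    {
        struct kvm_kernel_irqfd *irqfd =
            container_of(cons, struct kvm_kernel_irqfd, consumer);

        /* prod->irq is the host LPI behind the VFIO MSI; tie it to the
         * guest's MSI routing entry so the ITS can map LPI -> VLPI. */
        return kvm_vgic_v4_set_forwarding(irqfd->kvm, prod->irq,
                                          &irqfd->irq_entry);
    }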
2017-11-10  KVM: arm/arm64: GICv4: Add init/teardown of the per-VM vPE irq domain  (Marc Zyngier, 4 files, -0/+102)
In order to control the GICv4 view of virtual CPUs, we rely on an irqdomain allocated for that purpose. Let's add a couple of helpers to that effect. At the same time, the vgic data structures gain new fields to track all this... erm... wonderful stuff. The way we hook into the vgic init is slightly convoluted. We need the vgic to be initialized (in order to guarantee that the number of vcpus is now fixed), and we must have a vITS (otherwise this is all very pointless). So we end up calling the init from both vgic_init and vgic_its_create. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>