author    Linus Torvalds <torvalds@linux-foundation.org>  2022-12-12 11:21:29 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>  2022-12-12 11:21:29 -0800
commit    9d33edb20f7e6943250d6bb96ceaf2368f674d51 (patch)
tree      4f31a59b262b5cca1905b3f74a43cfab09bed416 /drivers/pci
parent    f10bc40168032962ebee26894bdbdc972cde35bf (diff)
parent    6132a490f9c81d621fdb4e8c12f617dc062130a2 (diff)
download  linux-9d33edb20f7e6943250d6bb96ceaf2368f674d51.tar.bz2
Merge tag 'irq-core-2022-12-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq updates from Thomas Gleixner:
 "Updates for the interrupt core and driver subsystem:

  The bulk is the rework of the MSI subsystem to support per device MSI
  interrupt domains. This solves conceptual problems of the current
  PCI/MSI design, which are in the way of providing support for
  PCI/MSI[-X] and the upcoming PCI/IMS mechanism on the same device.

  IMS (Interrupt Message Store) is a new specification which allows
  device manufacturers to provide implementation defined storage for
  MSI messages (as opposed to PCI/MSI and PCI/MSI-X, which have a
  specified message store that is uniform across all devices). The
  PCI/MSI[-X] uniformity allowed us to get away with "global" PCI/MSI
  domains.

  IMS not only allows overcoming the size limitations of the MSI-X
  table, but also gives the device manufacturer the freedom to store
  the message in arbitrary places, even in host memory which is shared
  with the device.

  There have been several attempts to glue this into the current MSI
  code, but after lengthy discussions it turned out that there is a
  fundamental design problem in the current PCI/MSI-X implementation.
  This needs some historical background.

  When PCI/MSI[-X] support was added around 2003, interrupt management
  was completely different from what we have today in the actively
  developed architectures. Interrupt management was completely
  architecture specific, and while there were attempts to create common
  infrastructure, the commonalities were rudimentary: just shared data
  structures and interfaces so that drivers could be written in an
  architecture agnostic way.

  The initial PCI/MSI[-X] support obviously plugged into this model,
  which resulted in some basic shared infrastructure in the PCI core
  code for setting up MSI descriptors. These descriptors are a pure
  software construct for holding data relevant for a particular MSI
  interrupt, but the actual association to Linux interrupts was
  completely architecture specific. This model is still supported today
  to keep museum architectures and notorious stragglers alive.

  In 2013 Intel tried to add support for hot-pluggable IO/APICs to the
  kernel, which created yet another architecture specific mechanism and
  resulted in an unholy mess on top of the existing horrors of x86
  interrupt handling. The x86 interrupt management code was already an
  incomprehensible maze of indirections between the CPU vector
  management, interrupt remapping, and the actual IO/APIC and
  PCI/MSI[-X] implementation.

  At roughly the same time ARM struggled with the ever growing SoC
  specific extensions which were glued on top of the architected GIC
  interrupt controller.

  This resulted in a fundamental redesign of interrupt management and
  provided the now prevailing concept of hierarchical interrupt
  domains. This allowed disentangling the interactions between the x86
  vector domain and interrupt remapping, and also allowed ARM to handle
  the zoo of SoC specific interrupt components in a sane way.

  The concept of hierarchical interrupt domains aims to encapsulate the
  functionality of particular IP blocks which are involved in interrupt
  delivery so that they become extensible and pluggable. The x86
  encapsulation looks like this:

                                         |--- device 1
   [Vector]---[Remapping]---[PCI/MSI]----|...
                                         |--- device N

  where the remapping domain is an optional component, and in case it
  is not available the PCI/MSI[-X] domains have the vector domain as
  their parent. This reduced the required interaction between the
  domains pretty much to the initialization phase, where it is
  obviously required to establish the proper parent relationship in the
  components of the hierarchy.

  While in most cases the model strictly represents the chain of IP
  blocks and abstracts them so they can be plugged together to form a
  hierarchy, the design stopped short on PCI/MSI[-X]. Looking at the
  hardware, it's clear that the actual PCI/MSI[-X] interrupt controller
  is not a global entity, but strictly a per PCI device entity.

  Here we took a shortcut on the hierarchical model and went for the
  easy solution of providing "global" PCI/MSI domains, which was
  possible because the PCI/MSI[-X] handling is uniform across the
  devices. This also allowed keeping the existing PCI/MSI[-X]
  infrastructure mostly unchanged, which in turn made it simple to keep
  the existing architecture specific management alive.

  A similar problem was created in the ARM world with support for IP
  block specific message storage. Instead of going all the way and
  stacking an IP block specific domain on top of the generic MSI
  domain, this ended in a construct which provides a "global" platform
  MSI domain that allows overriding the irq_write_msi_msg() callback
  per allocation.

  In the course of the lengthy discussions we identified other abuse of
  the MSI infrastructure in wireless drivers, NTB, etc., where support
  for implementation specific message storage was just mindlessly glued
  into the existing infrastructure. Some of this just works by chance
  on particular platforms, but will fail in hard to diagnose ways when
  the driver is used on platforms where the underlying MSI interrupt
  management code does not expect the creative abuse.

  Another shortcoming of today's PCI/MSI-X support is the inability to
  allocate or free individual vectors after the initial enablement of
  MSI-X. This results in a works-by-chance implementation of VFIO (PCI
  pass-through), where interrupts on the host side are not set up
  upfront to avoid resource exhaustion. They are expanded at run-time
  when the guest actually tries to use them. This is implemented by the
  host disabling MSI-X and then re-enabling it with a larger number of
  vectors again. That works by chance because most device drivers set
  up all interrupts before the device actually will utilize them.

  But that's not universally true, because some drivers allocate a
  large enough number of vectors but do not utilize them until it's
  actually required, e.g. for acceleration support. At that point,
  other interrupts of the device might be in active use, and the MSI-X
  disable/enable dance can just result in losing interrupts and
  therefore hard to diagnose subtle problems.

  Last but not least, the "global" PCI/MSI-X domain approach prevents
  utilizing PCI/MSI[-X] and PCI/IMS on the same device, because IMS no
  longer provides a uniform storage and configuration model.

  The solution to this is to implement the missing step and switch from
  global PCI/MSI domains to per device PCI/MSI domains. The resulting
  hierarchy then looks like this:

                               |--- [PCI/MSI] device 1
   [Vector]---[Remapping]------|...
                               |--- [PCI/MSI] device N

  which in turn allows providing support for multiple domains per
  device:

                               |--- [PCI/MSI] device 1
                               |--- [PCI/IMS] device 1
   [Vector]---[Remapping]------|...
                               |--- [PCI/MSI] device N
                               |--- [PCI/IMS] device N

  This work converts the MSI and PCI/MSI core and the x86 interrupt
  domains to the new model, provides new interfaces for post-enable
  allocation/free of MSI-X interrupts, and the base framework for
  PCI/IMS. PCI/IMS has been verified with the work-in-progress IDXD
  driver.

  There is work in progress to convert ARM over, which will replace the
  platform MSI train-wreck. The cleanups of VFIO, NTB and other
  creative "solutions" are in the works as well.

  Drivers:

   - Updates for the LoongArch interrupt chip drivers

   - Support for MTK CIRQv2

   - The usual small fixes and updates all over the place"

* tag 'irq-core-2022-12-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (134 commits)
  irqchip/ti-sci-inta: Fix kernel doc
  irqchip/gic-v2m: Mark a few functions __init
  irqchip/gic-v2m: Include arm-gic-common.h
  irqchip/irq-mvebu-icu: Fix works by chance pointer assignment
  iommu/amd: Enable PCI/IMS
  iommu/vt-d: Enable PCI/IMS
  x86/apic/msi: Enable PCI/IMS
  PCI/MSI: Provide pci_ims_alloc/free_irq()
  PCI/MSI: Provide IMS (Interrupt Message Store) support
  genirq/msi: Provide constants for PCI/IMS support
  x86/apic/msi: Enable MSI_FLAG_PCI_MSIX_ALLOC_DYN
  PCI/MSI: Provide post-enable dynamic allocation interfaces for MSI-X
  PCI/MSI: Provide prepare_desc() MSI domain op
  PCI/MSI: Split MSI-X descriptor setup
  genirq/msi: Provide MSI_FLAG_MSIX_ALLOC_DYN
  genirq/msi: Provide msi_domain_alloc_irq_at()
  genirq/msi: Provide msi_domain_ops::prepare_desc()
  genirq/msi: Provide msi_desc::msi_data
  genirq/msi: Provide struct msi_map
  x86/apic/msi: Remove arch_create_remap_msi_irq_domain()
  ...
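To make the new post-enable MSI-X interfaces concrete, here is a minimal
sketch of how a driver could use them. All names (my_queue_handler,
"my-queue") are hypothetical, and it assumes MSI-X was already enabled,
e.g. via pci_alloc_irq_vectors() with PCI_IRQ_MSIX:

    #include <linux/pci.h>
    #include <linux/msi.h>
    #include <linux/interrupt.h>

    static irqreturn_t my_queue_handler(int irq, void *data)
    {
            /* hypothetical per-queue interrupt handler */
            return IRQ_HANDLED;
    }

    static int my_add_queue_irq(struct pci_dev *pdev)
    {
            struct msi_map map;
            int ret;

            /* Post-enable allocation requires irq domain support */
            if (!pci_msix_can_alloc_dyn(pdev))
                    return -EOPNOTSUPP;

            /* Grab any free MSI-X table index, no affinity hint */
            map = pci_msix_alloc_irq_at(pdev, MSI_ANY_INDEX, NULL);
            if (map.index < 0)
                    return map.index;       /* index carries the error code */

            ret = request_irq(map.virq, my_queue_handler, 0, "my-queue", pdev);
            if (ret)
                    pci_msix_free_irq(pdev, map);   /* undo on failure */
            return ret;
    }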
Diffstat (limited to 'drivers/pci')
-rw-r--r--  drivers/pci/Kconfig                         7
-rw-r--r--  drivers/pci/controller/Kconfig             30
-rw-r--r--  drivers/pci/controller/dwc/Kconfig         48
-rw-r--r--  drivers/pci/controller/mobiveil/Kconfig     6
-rw-r--r--  drivers/pci/controller/pci-hyperv.c        15
-rw-r--r--  drivers/pci/msi/Makefile                    3
-rw-r--r--  drivers/pci/msi/api.c                     458
-rw-r--r--  drivers/pci/msi/irqdomain.c               369
-rw-r--r--  drivers/pci/msi/msi.c                    1078
-rw-r--r--  drivers/pci/msi/msi.h                     114
-rw-r--r--  drivers/pci/probe.c                         2
11 files changed, 1316 insertions, 814 deletions
diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index 55c028af4bd9..9309f2469b41 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -51,11 +51,6 @@ config PCI_MSI
If you don't know what to do here, say Y.
-config PCI_MSI_IRQ_DOMAIN
- def_bool y
- depends on PCI_MSI
- select GENERIC_MSI_IRQ_DOMAIN
-
config PCI_MSI_ARCH_FALLBACKS
bool
@@ -192,7 +187,7 @@ config PCI_LABEL
config PCI_HYPERV
tristate "Hyper-V PCI Frontend"
- depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && SYSFS
+ depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI && SYSFS
select PCI_HYPERV_INTERFACE
help
The PCI device frontend driver allows the kernel to import arbitrary
diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
index bfd9bac37e24..1569d9a3ada0 100644
--- a/drivers/pci/controller/Kconfig
+++ b/drivers/pci/controller/Kconfig
@@ -19,7 +19,7 @@ config PCI_AARDVARK
tristate "Aardvark PCIe controller"
depends on (ARCH_MVEBU && ARM64) || COMPILE_TEST
depends on OF
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCI_BRIDGE_EMUL
help
Add support for Aardvark 64bit PCIe Host Controller. This
@@ -29,7 +29,7 @@ config PCI_AARDVARK
config PCIE_XILINX_NWL
bool "NWL PCIe Core"
depends on ARCH_ZYNQMP || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
help
Say 'Y' here if you want kernel support for Xilinx
NWL PCIe controller. The controller can act as Root Port
@@ -53,7 +53,7 @@ config PCI_IXP4XX
config PCI_TEGRA
bool "NVIDIA Tegra PCIe controller"
depends on ARCH_TEGRA || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
help
Say Y here if you want support for the PCIe host controller found
on NVIDIA Tegra SoCs.
@@ -70,7 +70,7 @@ config PCI_RCAR_GEN2
config PCIE_RCAR_HOST
bool "Renesas R-Car PCIe host controller"
depends on ARCH_RENESAS || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
help
Say Y here if you want PCIe controller support on R-Car SoCs in host
mode.
@@ -99,7 +99,7 @@ config PCI_HOST_GENERIC
config PCIE_XILINX
bool "Xilinx AXI PCIe host bridge support"
depends on OF || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
help
Say 'Y' here if you want kernel to support the Xilinx AXI PCIe
Host Bridge driver.
@@ -124,7 +124,7 @@ config PCI_XGENE
config PCI_XGENE_MSI
bool "X-Gene v1 PCIe MSI feature"
depends on PCI_XGENE
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
default y
help
Say Y here if you want PCIe MSI support for the APM X-Gene v1 SoC.
@@ -170,7 +170,7 @@ config PCIE_IPROC_BCMA
config PCIE_IPROC_MSI
bool "Broadcom iProc PCIe MSI support"
depends on PCIE_IPROC_PLATFORM || PCIE_IPROC_BCMA
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
default ARCH_BCM_IPROC
help
Say Y here if you want to enable MSI support for Broadcom's iProc
@@ -186,7 +186,7 @@ config PCIE_ALTERA
config PCIE_ALTERA_MSI
tristate "Altera PCIe MSI feature"
depends on PCIE_ALTERA
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
help
Say Y here if you want PCIe MSI support for the Altera FPGA.
This MSI driver supports Altera MSI to GIC controller IP.
@@ -215,7 +215,7 @@ config PCIE_ROCKCHIP_HOST
tristate "Rockchip PCIe host controller"
depends on ARCH_ROCKCHIP || COMPILE_TEST
depends on OF
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select MFD_SYSCON
select PCIE_ROCKCHIP
help
@@ -239,7 +239,7 @@ config PCIE_MEDIATEK
tristate "MediaTek PCIe controller"
depends on ARCH_AIROHA || ARCH_MEDIATEK || COMPILE_TEST
depends on OF
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
help
Say Y here if you want to enable PCIe controller support on
MediaTek SoCs.
@@ -247,7 +247,7 @@ config PCIE_MEDIATEK
config PCIE_MEDIATEK_GEN3
tristate "MediaTek Gen3 PCIe controller"
depends on ARCH_MEDIATEK || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
help
Adds support for PCIe Gen3 MAC controller for MediaTek SoCs.
This PCIe controller is compatible with Gen3, Gen2 and Gen1 speed,
@@ -277,7 +277,7 @@ config PCIE_BRCMSTB
depends on ARCH_BRCMSTB || ARCH_BCM2835 || ARCH_BCMBCA || \
BMIPS_GENERIC || COMPILE_TEST
depends on OF
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
default ARCH_BRCMSTB || BMIPS_GENERIC
help
Say Y here to enable PCIe host controller support for
@@ -285,7 +285,7 @@ config PCIE_BRCMSTB
config PCI_HYPERV_INTERFACE
tristate "Hyper-V PCI Interface"
- depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN
+ depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI
help
The Hyper-V PCI Interface is a helper driver allows other drivers to
have a common interface with the Hyper-V PCI frontend driver.
@@ -303,8 +303,6 @@ config PCI_LOONGSON
config PCIE_MICROCHIP_HOST
bool "Microchip AXI PCIe host bridge support"
depends on PCI_MSI && OF
- select PCI_MSI_IRQ_DOMAIN
- select GENERIC_MSI_IRQ_DOMAIN
select PCI_HOST_COMMON
help
Say Y here if you want kernel to support the Microchip AXI PCIe
@@ -326,7 +324,7 @@ config PCIE_APPLE
tristate "Apple PCIe controller"
depends on ARCH_APPLE || COMPILE_TEST
depends on OF
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCI_HOST_COMMON
help
Say Y here if you want to enable PCIe controller support on Apple
diff --git a/drivers/pci/controller/dwc/Kconfig b/drivers/pci/controller/dwc/Kconfig
index 62ce3abf0f19..f3c462130627 100644
--- a/drivers/pci/controller/dwc/Kconfig
+++ b/drivers/pci/controller/dwc/Kconfig
@@ -21,7 +21,7 @@ config PCI_DRA7XX_HOST
tristate "TI DRA7xx PCIe controller Host Mode"
depends on SOC_DRA7XX || COMPILE_TEST
depends on OF && HAS_IOMEM && TI_PIPE3
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
select PCI_DRA7XX
default y if SOC_DRA7XX
@@ -53,7 +53,7 @@ config PCIE_DW_PLAT
config PCIE_DW_PLAT_HOST
bool "Platform bus based DesignWare PCIe Controller - Host mode"
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
select PCIE_DW_PLAT
help
@@ -67,7 +67,7 @@ config PCIE_DW_PLAT_HOST
config PCIE_DW_PLAT_EP
bool "Platform bus based DesignWare PCIe Controller - Endpoint mode"
- depends on PCI && PCI_MSI_IRQ_DOMAIN
+ depends on PCI && PCI_MSI
depends on PCI_ENDPOINT
select PCIE_DW_EP
select PCIE_DW_PLAT
@@ -83,7 +83,7 @@ config PCIE_DW_PLAT_EP
config PCI_EXYNOS
tristate "Samsung Exynos PCIe controller"
depends on ARCH_EXYNOS || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
help
Enables support for the PCIe controller in the Samsung Exynos SoCs
@@ -94,13 +94,13 @@ config PCI_EXYNOS
config PCI_IMX6
bool "Freescale i.MX6/7/8 PCIe controller"
depends on ARCH_MXC || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
config PCIE_SPEAR13XX
bool "STMicroelectronics SPEAr PCIe controller"
depends on ARCH_SPEAR13XX || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
help
Say Y here if you want PCIe support on SPEAr13XX SoCs.
@@ -111,7 +111,7 @@ config PCI_KEYSTONE
config PCI_KEYSTONE_HOST
bool "PCI Keystone Host Mode"
depends on ARCH_KEYSTONE || ARCH_K3 || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
select PCI_KEYSTONE
help
@@ -135,7 +135,7 @@ config PCI_KEYSTONE_EP
config PCI_LAYERSCAPE
bool "Freescale Layerscape PCIe controller - Host mode"
depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST)
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
select MFD_SYSCON
help
@@ -160,7 +160,7 @@ config PCI_LAYERSCAPE_EP
config PCI_HISI
depends on OF && (ARM64 || COMPILE_TEST)
bool "HiSilicon Hip05 and Hip06 SoCs PCIe controllers"
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
select PCI_HOST_COMMON
help
@@ -170,7 +170,7 @@ config PCI_HISI
config PCIE_QCOM
bool "Qualcomm PCIe controller"
depends on OF && (ARCH_QCOM || COMPILE_TEST)
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
select CRC8
help
@@ -191,7 +191,7 @@ config PCIE_QCOM_EP
config PCIE_ARMADA_8K
bool "Marvell Armada-8K PCIe controller"
depends on ARCH_MVEBU || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
help
Say Y here if you want to enable PCIe controller support on
@@ -205,7 +205,7 @@ config PCIE_ARTPEC6
config PCIE_ARTPEC6_HOST
bool "Axis ARTPEC-6 PCIe controller Host Mode"
depends on MACH_ARTPEC6 || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
select PCIE_ARTPEC6
help
@@ -226,7 +226,7 @@ config PCIE_ROCKCHIP_DW_HOST
bool "Rockchip DesignWare PCIe controller"
select PCIE_DW
select PCIE_DW_HOST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
depends on ARCH_ROCKCHIP || COMPILE_TEST
depends on OF
help
@@ -236,7 +236,7 @@ config PCIE_ROCKCHIP_DW_HOST
config PCIE_INTEL_GW
bool "Intel Gateway PCIe host controller support"
depends on OF && (X86 || COMPILE_TEST)
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
help
Say 'Y' here to enable PCIe Host controller support on Intel
@@ -250,7 +250,7 @@ config PCIE_KEEMBAY
config PCIE_KEEMBAY_HOST
bool "Intel Keem Bay PCIe controller - Host mode"
depends on ARCH_KEEMBAY || COMPILE_TEST
- depends on PCI && PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
select PCIE_KEEMBAY
help
@@ -262,7 +262,7 @@ config PCIE_KEEMBAY_HOST
config PCIE_KEEMBAY_EP
bool "Intel Keem Bay PCIe controller - Endpoint mode"
depends on ARCH_KEEMBAY || COMPILE_TEST
- depends on PCI && PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
depends on PCI_ENDPOINT
select PCIE_DW_EP
select PCIE_KEEMBAY
@@ -275,7 +275,7 @@ config PCIE_KEEMBAY_EP
config PCIE_KIRIN
depends on OF && (ARM64 || COMPILE_TEST)
tristate "HiSilicon Kirin series SoCs PCIe controllers"
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
help
Say Y here if you want PCIe controller support
@@ -284,7 +284,7 @@ config PCIE_KIRIN
config PCIE_HISI_STB
bool "HiSilicon STB SoCs PCIe controllers"
depends on ARCH_HISI || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
help
Say Y here if you want PCIe controller support on HiSilicon STB SoCs
@@ -292,7 +292,7 @@ config PCIE_HISI_STB
config PCI_MESON
tristate "MESON PCIe controller"
default m if ARCH_MESON
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
help
Say Y here if you want to enable PCI controller support on Amlogic
@@ -306,7 +306,7 @@ config PCIE_TEGRA194
config PCIE_TEGRA194_HOST
tristate "NVIDIA Tegra194 (and later) PCIe controller - Host Mode"
depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
select PHY_TEGRA194_P2U
select PCIE_TEGRA194
@@ -336,7 +336,7 @@ config PCIE_TEGRA194_EP
config PCIE_VISCONTI_HOST
bool "Toshiba Visconti PCIe controllers"
depends on ARCH_VISCONTI || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
help
Say Y here if you want PCIe controller support on Toshiba Visconti SoC.
@@ -346,7 +346,7 @@ config PCIE_UNIPHIER
bool "Socionext UniPhier PCIe host controllers"
depends on ARCH_UNIPHIER || COMPILE_TEST
depends on OF && HAS_IOMEM
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
help
Say Y here if you want PCIe host controller support on UniPhier SoCs.
@@ -365,7 +365,7 @@ config PCIE_UNIPHIER_EP
config PCIE_AL
bool "Amazon Annapurna Labs PCIe controller"
depends on OF && (ARM64 || COMPILE_TEST)
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_DW_HOST
select PCI_ECAM
help
@@ -377,7 +377,7 @@ config PCIE_AL
config PCIE_FU740
bool "SiFive FU740 PCIe host controller"
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
depends on SOC_SIFIVE || COMPILE_TEST
select PCIE_DW_HOST
help
diff --git a/drivers/pci/controller/mobiveil/Kconfig b/drivers/pci/controller/mobiveil/Kconfig
index e4643fb94e78..1d7a07ba9ccd 100644
--- a/drivers/pci/controller/mobiveil/Kconfig
+++ b/drivers/pci/controller/mobiveil/Kconfig
@@ -8,14 +8,14 @@ config PCIE_MOBIVEIL
config PCIE_MOBIVEIL_HOST
bool
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_MOBIVEIL
config PCIE_MOBIVEIL_PLAT
bool "Mobiveil AXI PCIe controller"
depends on ARCH_ZYNQMP || COMPILE_TEST
depends on OF
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_MOBIVEIL_HOST
help
Say Y here if you want to enable support for the Mobiveil AXI PCIe
@@ -25,7 +25,7 @@ config PCIE_MOBIVEIL_PLAT
config PCIE_LAYERSCAPE_GEN4
bool "Freescale Layerscape PCIe Gen4 controller"
depends on ARCH_LAYERSCAPE || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
+ depends on PCI_MSI
select PCIE_MOBIVEIL_HOST
help
Say Y here if you want PCIe Gen4 controller support on
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index 583d3aad6908..084f5313895c 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -611,20 +611,7 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data)
return cfg->vector;
}
-static int hv_msi_prepare(struct irq_domain *domain, struct device *dev,
- int nvec, msi_alloc_info_t *info)
-{
- int ret = pci_msi_prepare(domain, dev, nvec, info);
-
- /*
- * By using the interrupt remapper in the hypervisor IOMMU, contiguous
- * CPU vectors is not needed for multi-MSI
- */
- if (info->type == X86_IRQ_ALLOC_TYPE_PCI_MSI)
- info->flags &= ~X86_IRQ_ALLOC_CONTIGUOUS_VECTORS;
-
- return ret;
-}
+#define hv_msi_prepare pci_msi_prepare
/**
* hv_arch_irq_unmask() - "Unmask" the IRQ by setting its current
diff --git a/drivers/pci/msi/Makefile b/drivers/pci/msi/Makefile
index 93ef7b9e404d..839ff72d72a8 100644
--- a/drivers/pci/msi/Makefile
+++ b/drivers/pci/msi/Makefile
@@ -2,6 +2,5 @@
#
# Makefile for the PCI/MSI
obj-$(CONFIG_PCI) += pcidev_msi.o
-obj-$(CONFIG_PCI_MSI) += msi.o
-obj-$(CONFIG_PCI_MSI_IRQ_DOMAIN) += irqdomain.o
+obj-$(CONFIG_PCI_MSI) += api.o msi.o irqdomain.o
obj-$(CONFIG_PCI_MSI_ARCH_FALLBACKS) += legacy.o
diff --git a/drivers/pci/msi/api.c b/drivers/pci/msi/api.c
new file mode 100644
index 000000000000..b8009aa11f3c
--- /dev/null
+++ b/drivers/pci/msi/api.c
@@ -0,0 +1,458 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PCI MSI/MSI-X — Exported APIs for device drivers
+ *
+ * Copyright (C) 2003-2004 Intel
+ * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
+ * Copyright (C) 2016 Christoph Hellwig.
+ * Copyright (C) 2022 Linutronix GmbH
+ */
+
+#include <linux/export.h>
+#include <linux/irq.h>
+
+#include "msi.h"
+
+/**
+ * pci_enable_msi() - Enable MSI interrupt mode on device
+ * @dev: the PCI device to operate on
+ *
+ * Legacy device driver API to enable MSI interrupts mode on device and
+ * allocate a single interrupt vector. On success, the allocated vector
+ * Linux IRQ will be saved at @dev->irq. The driver must invoke
+ * pci_disable_msi() on cleanup.
+ *
+ * NOTE: The newer pci_alloc_irq_vectors() / pci_free_irq_vectors() API
+ * pair should, in general, be used instead.
+ *
+ * Return: 0 on success, errno otherwise
+ */
+int pci_enable_msi(struct pci_dev *dev)
+{
+ int rc = __pci_enable_msi_range(dev, 1, 1, NULL);
+ if (rc < 0)
+ return rc;
+ return 0;
+}
+EXPORT_SYMBOL(pci_enable_msi);
+
+/**
+ * pci_disable_msi() - Disable MSI interrupt mode on device
+ * @dev: the PCI device to operate on
+ *
+ * Legacy device driver API to disable MSI interrupt mode on device,
+ * free earlier allocated interrupt vectors, and restore INTx emulation.
+ * The PCI device Linux IRQ (@dev->irq) is restored to its default
+ * pin-assertion IRQ. This is the cleanup pair of pci_enable_msi().
+ *
+ * NOTE: The newer pci_alloc_irq_vectors() / pci_free_irq_vectors() API
+ * pair should, in general, be used instead.
+ */
+void pci_disable_msi(struct pci_dev *dev)
+{
+ if (!pci_msi_enabled() || !dev || !dev->msi_enabled)
+ return;
+
+ msi_lock_descs(&dev->dev);
+ pci_msi_shutdown(dev);
+ pci_free_msi_irqs(dev);
+ msi_unlock_descs(&dev->dev);
+}
+EXPORT_SYMBOL(pci_disable_msi);
+
+/**
+ * pci_msix_vec_count() - Get number of MSI-X interrupt vectors on device
+ * @dev: the PCI device to operate on
+ *
+ * Return: number of MSI-X interrupt vectors available on this device
+ * (i.e., the device's MSI-X capability structure "table size"), -EINVAL
+ * if the device is not MSI-X capable, other errnos otherwise.
+ */
+int pci_msix_vec_count(struct pci_dev *dev)
+{
+ u16 control;
+
+ if (!dev->msix_cap)
+ return -EINVAL;
+
+ pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control);
+ return msix_table_size(control);
+}
+EXPORT_SYMBOL(pci_msix_vec_count);
+
+/**
+ * pci_enable_msix_range() - Enable MSI-X interrupt mode on device
+ * @dev: the PCI device to operate on
+ * @entries: input/output parameter, array of MSI-X configuration entries
+ * @minvec: minimum required number of MSI-X vectors
+ * @maxvec: maximum desired number of MSI-X vectors
+ *
+ * Legacy device driver API to enable MSI-X interrupt mode on device and
+ * configure its MSI-X capability structure as appropriate. The passed
+ * @entries array must have each of its members "entry" field set to a
+ * desired (valid) MSI-X vector number, where the range of valid MSI-X
+ * vector numbers can be queried through pci_msix_vec_count(). If
+ * successful, the driver must invoke pci_disable_msix() on cleanup.
+ *
+ * NOTE: The newer pci_alloc_irq_vectors() / pci_free_irq_vectors() API
+ * pair should, in general, be used instead.
+ *
+ * Return: number of MSI-X vectors allocated (which might be smaller
+ * than @maxvec), where Linux IRQ numbers for such allocated vectors
+ * are saved back in the @entries array elements' "vector" field. Return
+ * -ENOSPC if less than @minvec interrupt vectors are available.
+ * Return -EINVAL if one of the passed @entries members "entry" field
+ * was invalid or a duplicate, or if plain MSI interrupts mode was
+ * earlier enabled on device. Return other errnos otherwise.
+ */
+int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
+ int minvec, int maxvec)
+{
+ return __pci_enable_msix_range(dev, entries, minvec, maxvec, NULL, 0);
+}
+EXPORT_SYMBOL(pci_enable_msix_range);
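A minimal sketch of the entries-array usage described in the kernel-doc
above (hypothetical driver code; the slot choices and counts are
arbitrary):

    static int my_enable_msix(struct pci_dev *pdev)
    {
            struct msix_entry entries[3];
            int i, nvec;

            /* Request specific MSI-X table slots via the "entry" field */
            for (i = 0; i < ARRAY_SIZE(entries); i++)
                    entries[i].entry = i;

            /* Need at least 2 vectors, would like 3 */
            nvec = pci_enable_msix_range(pdev, entries, 2, 3);
            if (nvec < 0)
                    return nvec;

            /* Linux IRQ numbers were written back to entries[i].vector */
            return nvec;
    }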
+
+/**
+ * pci_msix_can_alloc_dyn - Query whether dynamic allocation after enabling
+ * MSI-X is supported
+ *
+ * @dev: PCI device to operate on
+ *
+ * Return: True if supported, false otherwise
+ */
+bool pci_msix_can_alloc_dyn(struct pci_dev *dev)
+{
+ if (!dev->msix_cap)
+ return false;
+
+ return pci_msi_domain_supports(dev, MSI_FLAG_PCI_MSIX_ALLOC_DYN, DENY_LEGACY);
+}
+EXPORT_SYMBOL_GPL(pci_msix_can_alloc_dyn);
+
+/**
+ * pci_msix_alloc_irq_at - Allocate an MSI-X interrupt after enabling MSI-X
+ * at a given MSI-X vector index or any free vector index
+ *
+ * @dev: PCI device to operate on
+ * @index: Index to allocate. If @index == MSI_ANY_INDEX this allocates
+ * the next free index in the MSI-X table
+ * @affdesc: Optional pointer to an affinity descriptor structure. NULL otherwise
+ *
+ * Return: A struct msi_map
+ *
+ * On success msi_map::index contains the allocated index (>= 0) and
+ * msi_map::virq contains the allocated Linux interrupt number (> 0).
+ *
+ * On fail msi_map::index contains the error code and msi_map::virq
+ * is set to 0.
+ */
+struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,
+ const struct irq_affinity_desc *affdesc)
+{
+ struct msi_map map = { .index = -ENOTSUPP };
+
+ if (!dev->msix_enabled)
+ return map;
+
+ if (!pci_msix_can_alloc_dyn(dev))
+ return map;
+
+ return msi_domain_alloc_irq_at(&dev->dev, MSI_DEFAULT_DOMAIN, index, affdesc, NULL);
+}
+EXPORT_SYMBOL_GPL(pci_msix_alloc_irq_at);
+
+/**
+ * pci_msix_free_irq - Free an interrupt on a PCI/MSIX interrupt domain
+ * which was allocated via pci_msix_alloc_irq_at()
+ *
+ * @dev: The PCI device to operate on
+ * @map: A struct msi_map describing the interrupt to free
+ * as returned from the allocation function.
+ */
+void pci_msix_free_irq(struct pci_dev *dev, struct msi_map map)
+{
+ if (WARN_ON_ONCE(map.index < 0 || map.virq <= 0))
+ return;
+ if (WARN_ON_ONCE(!pci_msix_can_alloc_dyn(dev)))
+ return;
+ msi_domain_free_irqs_range(&dev->dev, MSI_DEFAULT_DOMAIN, map.index, map.index);
+}
+EXPORT_SYMBOL_GPL(pci_msix_free_irq);
+
+/**
+ * pci_disable_msix() - Disable MSI-X interrupt mode on device
+ * @dev: the PCI device to operate on
+ *
+ * Legacy device driver API to disable MSI-X interrupt mode on device,
+ * free earlier-allocated interrupt vectors, and restore INTx.
+ * The PCI device Linux IRQ (@dev->irq) is restored to its default pin
+ * assertion IRQ. This is the cleanup pair of pci_enable_msix_range().
+ *
+ * NOTE: The newer pci_alloc_irq_vectors() / pci_free_irq_vectors() API
+ * pair should, in general, be used instead.
+ */
+void pci_disable_msix(struct pci_dev *dev)
+{
+ if (!pci_msi_enabled() || !dev || !dev->msix_enabled)
+ return;
+
+ msi_lock_descs(&dev->dev);
+ pci_msix_shutdown(dev);
+ pci_free_msi_irqs(dev);
+ msi_unlock_descs(&dev->dev);
+}
+EXPORT_SYMBOL(pci_disable_msix);
+
+/**
+ * pci_alloc_irq_vectors() - Allocate multiple device interrupt vectors
+ * @dev: the PCI device to operate on
+ * @min_vecs: minimum required number of vectors (must be >= 1)
+ * @max_vecs: maximum desired number of vectors
+ * @flags: One or more of:
+ *
+ * * %PCI_IRQ_MSIX Allow trying MSI-X vector allocations
+ * * %PCI_IRQ_MSI Allow trying MSI vector allocations
+ *
+ * * %PCI_IRQ_LEGACY Allow trying legacy INTx interrupts, if
+ * and only if @min_vecs == 1
+ *
+ * * %PCI_IRQ_AFFINITY Auto-manage IRQs affinity by spreading
+ * the vectors around available CPUs
+ *
+ * Allocate up to @max_vecs interrupt vectors on device. MSI-X irq
+ * vector allocation has a higher precedence over plain MSI, which has a
+ * higher precedence over legacy INTx emulation.
+ *
+ * Upon a successful allocation, the caller should use pci_irq_vector()
+ * to get the Linux IRQ number to be passed to request_threaded_irq().
+ * The driver must call pci_free_irq_vectors() on cleanup.
+ *
+ * Return: number of allocated vectors (which might be smaller than
+ * @max_vecs), -ENOSPC if less than @min_vecs interrupt vectors are
+ * available, other errnos otherwise.
+ */
+int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
+ unsigned int max_vecs, unsigned int flags)
+{
+ return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,
+ flags, NULL);
+}
+EXPORT_SYMBOL(pci_alloc_irq_vectors);
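A sketch of the typical modern usage pattern per the kernel-doc above
(my_handler and the vector counts are hypothetical):

    static irqreturn_t my_handler(int irq, void *data)
    {
            return IRQ_HANDLED;     /* hypothetical handler */
    }

    static int my_probe_irqs(struct pci_dev *pdev)
    {
            int nvec, i, ret;

            /* Prefer MSI-X, fall back to MSI, then to legacy INTx */
            nvec = pci_alloc_irq_vectors(pdev, 1, 8,
                            PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_LEGACY);
            if (nvec < 0)
                    return nvec;

            for (i = 0; i < nvec; i++) {
                    ret = request_irq(pci_irq_vector(pdev, i), my_handler,
                                      0, "my-dev", pdev);
                    if (ret) {
                            while (--i >= 0)
                                    free_irq(pci_irq_vector(pdev, i), pdev);
                            pci_free_irq_vectors(pdev);
                            return ret;
                    }
            }
            return 0;
    }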
+
+/**
+ * pci_alloc_irq_vectors_affinity() - Allocate multiple device interrupt
+ * vectors with affinity requirements
+ * @dev: the PCI device to operate on
+ * @min_vecs: minimum required number of vectors (must be >= 1)
+ * @max_vecs: maximum desired number of vectors
+ * @flags: allocation flags, as in pci_alloc_irq_vectors()
+ * @affd: affinity requirements (can be %NULL).
+ *
+ * Same as pci_alloc_irq_vectors(), but with the extra @affd parameter.
+ * Check that function docs, and &struct irq_affinity, for more details.
+ */
+int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
+ unsigned int max_vecs, unsigned int flags,
+ struct irq_affinity *affd)
+{
+ struct irq_affinity msi_default_affd = {0};
+ int nvecs = -ENOSPC;
+
+ if (flags & PCI_IRQ_AFFINITY) {
+ if (!affd)
+ affd = &msi_default_affd;
+ } else {
+ if (WARN_ON(affd))
+ affd = NULL;
+ }
+
+ if (flags & PCI_IRQ_MSIX) {
+ nvecs = __pci_enable_msix_range(dev, NULL, min_vecs, max_vecs,
+ affd, flags);
+ if (nvecs > 0)
+ return nvecs;
+ }
+
+ if (flags & PCI_IRQ_MSI) {
+ nvecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, affd);
+ if (nvecs > 0)
+ return nvecs;
+ }
+
+ /* use legacy IRQ if allowed */
+ if (flags & PCI_IRQ_LEGACY) {
+ if (min_vecs == 1 && dev->irq) {
+ /*
+ * Invoke the affinity spreading logic to ensure that
+ * the device driver can adjust queue configuration
+ * for the single interrupt case.
+ */
+ if (affd)
+ irq_create_affinity_masks(1, affd);
+ pci_intx(dev, 1);
+ return 1;
+ }
+ }
+
+ return nvecs;
+}
+EXPORT_SYMBOL(pci_alloc_irq_vectors_affinity);
+
+/**
+ * pci_irq_vector() - Get Linux IRQ number of a device interrupt vector
+ * @dev: the PCI device to operate on
+ * @nr: device-relative interrupt vector index (0-based); has different
+ * meanings, depending on interrupt mode:
+ *
+ * * MSI-X the index in the MSI-X vector table
+ * * MSI the index of the enabled MSI vectors
+ * * INTx must be 0
+ *
+ * Return: the Linux IRQ number, or -EINVAL if @nr is out of range
+ */
+int pci_irq_vector(struct pci_dev *dev, unsigned int nr)
+{
+ unsigned int irq;
+
+ if (!dev->msi_enabled && !dev->msix_enabled)
+ return !nr ? dev->irq : -EINVAL;
+
+ irq = msi_get_virq(&dev->dev, nr);
+ return irq ? irq : -EINVAL;
+}
+EXPORT_SYMBOL(pci_irq_vector);
+
+/**
+ * pci_irq_get_affinity() - Get a device interrupt vector affinity
+ * @dev: the PCI device to operate on
+ * @nr: device-relative interrupt vector index (0-based); has different
+ * meanings, depending on interrupt mode:
+ *
+ * * MSI-X the index in the MSI-X vector table
+ * * MSI the index of the enabled MSI vectors
+ * * INTx must be 0
+ *
+ * Return: MSI/MSI-X vector affinity, NULL if @nr is out of range or if
+ * the MSI(-X) vector was allocated without explicit affinity
+ * requirements (e.g., by pci_enable_msi(), pci_enable_msix_range(), or
+ * pci_alloc_irq_vectors() without the %PCI_IRQ_AFFINITY flag). Return a
+ * generic set of CPU IDs representing all possible CPUs available
+ * during system boot if the device is in legacy INTx mode.
+ */
+const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
+{
+ int idx, irq = pci_irq_vector(dev, nr);
+ struct msi_desc *desc;
+
+ if (WARN_ON_ONCE(irq <= 0))
+ return NULL;
+
+ desc = irq_get_msi_desc(irq);
+ /* Non-MSI does not have the information handy */
+ if (!desc)
+ return cpu_possible_mask;
+
+ /* MSI[X] interrupts can be allocated without affinity descriptor */
+ if (!desc->affinity)
+ return NULL;
+
+ /*
+ * MSI has a mask array in the descriptor.
+ * MSI-X has a single mask.
+ */
+ idx = dev->msi_enabled ? nr : 0;
+ return &desc->affinity[idx].mask;
+}
+EXPORT_SYMBOL(pci_irq_get_affinity);
+
+/**
+ * pci_ims_alloc_irq - Allocate an interrupt on a PCI/IMS interrupt domain
+ * @dev: The PCI device to operate on
+ * @icookie: Pointer to an IMS implementation specific cookie for this
+ * IMS instance (PASID, queue ID, pointer...).
+ * The cookie content is copied into the MSI descriptor for the
+ * interrupt chip callbacks or domain specific setup functions.
+ * @affdesc: Optional pointer to an interrupt affinity descriptor
+ *
+ * There is no index for IMS allocations as IMS is an implementation
+ * specific storage and does not have any direct associations between
+ * index, which might be a pure software construct, and device
+ * functionality. This association is established by the driver either via
+ * the index - if there is a hardware table - or in case of purely software
+ * managed IMS implementation the association happens via the
+ * irq_write_msi_msg() callback of the implementation specific interrupt
+ * chip, which utilizes the provided @icookie to store the MSI message in
+ * the appropriate place.
+ *
+ * Return: A struct msi_map
+ *
+ * On success msi_map::index contains the allocated index (>= 0) and
+ * msi_map::virq the allocated Linux interrupt number (> 0).
+ *
+ * On fail msi_map::index contains the error code and msi_map::virq
+ * is set to 0.
+ */
+struct msi_map pci_ims_alloc_irq(struct pci_dev *dev, union msi_instance_cookie *icookie,
+ const struct irq_affinity_desc *affdesc)
+{
+ return msi_domain_alloc_irq_at(&dev->dev, MSI_SECONDARY_DOMAIN, MSI_ANY_INDEX,
+ affdesc, icookie);
+}
+EXPORT_SYMBOL_GPL(pci_ims_alloc_irq);
+
+/**
+ * pci_ims_free_irq - Free an interrupt on a PCI/IMS interrupt domain
+ * which was allocated via pci_ims_alloc_irq()
+ * @dev: The PCI device to operate on
+ * @map: A struct msi_map describing the interrupt to free as
+ * returned from pci_ims_alloc_irq()
+ */
+void pci_ims_free_irq(struct pci_dev *dev, struct msi_map map)
+{
+ if (WARN_ON_ONCE(map.index < 0 || map.virq <= 0))
+ return;
+ msi_domain_free_irqs_range(&dev->dev, MSI_SECONDARY_DOMAIN, map.index, map.index);
+}
+EXPORT_SYMBOL_GPL(pci_ims_free_irq);
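A sketch of the IMS allocation flow. It assumes the driver already
created an IMS domain via pci_create_ims_domain(); using a queue id as
the cookie is a made-up example of the implementation specific content
the kernel-doc above describes:

    static int my_setup_ims_irq(struct pci_dev *pdev, u64 queue_id)
    {
            union msi_instance_cookie icookie = { .value = queue_id };
            struct msi_map map;

            /*
             * The cookie is copied into the MSI descriptor; the IMS irq
             * chip's irq_write_msi_msg() uses it to locate the message
             * storage slot for this instance.
             */
            map = pci_ims_alloc_irq(pdev, &icookie, NULL);
            if (map.index < 0)
                    return map.index;

            /* map.virq is the Linux interrupt number for request_irq() */
            return 0;
    }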
+
+/**
+ * pci_free_irq_vectors() - Free previously allocated IRQs for a device
+ * @dev: the PCI device to operate on
+ *
+ * Undo the interrupt vector allocations and possible device MSI/MSI-X
+ * enablement earlier done through pci_alloc_irq_vectors_affinity() or
+ * pci_alloc_irq_vectors().
+ */
+void pci_free_irq_vectors(struct pci_dev *dev)
+{
+ pci_disable_msix(dev);
+ pci_disable_msi(dev);
+}
+EXPORT_SYMBOL(pci_free_irq_vectors);
+
+/**
+ * pci_restore_msi_state() - Restore cached MSI(-X) state on device
+ * @dev: the PCI device to operate on
+ *
+ * Write the Linux-cached MSI(-X) state back on device. This is
+ * typically useful upon system resume, or after an error-recovery PCI
+ * adapter reset.
+ */
+void pci_restore_msi_state(struct pci_dev *dev)
+{
+ __pci_restore_msi_state(dev);
+ __pci_restore_msix_state(dev);
+}
+EXPORT_SYMBOL_GPL(pci_restore_msi_state);
+
+/**
+ * pci_msi_enabled() - Are MSI(-X) interrupts enabled system-wide?
+ *
+ * Return: true if MSI has not been globally disabled through ACPI FADT,
+ * PCI bridge quirks, or the "pci=nomsi" kernel command-line option.
+ */
+int pci_msi_enabled(void)
+{
+ return pci_msi_enable;
+}
+EXPORT_SYMBOL(pci_msi_enabled);
diff --git a/drivers/pci/msi/irqdomain.c b/drivers/pci/msi/irqdomain.c
index e9cf318e6670..e33bcc872699 100644
--- a/drivers/pci/msi/irqdomain.c
+++ b/drivers/pci/msi/irqdomain.c
@@ -14,7 +14,7 @@ int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
domain = dev_get_msi_domain(&dev->dev);
if (domain && irq_domain_is_hierarchy(domain))
- return msi_domain_alloc_irqs_descs_locked(domain, &dev->dev, nvec);
+ return msi_domain_alloc_irqs_all_locked(&dev->dev, MSI_DEFAULT_DOMAIN, nvec);
return pci_msi_legacy_setup_msi_irqs(dev, nvec, type);
}
@@ -24,11 +24,12 @@ void pci_msi_teardown_msi_irqs(struct pci_dev *dev)
struct irq_domain *domain;
domain = dev_get_msi_domain(&dev->dev);
- if (domain && irq_domain_is_hierarchy(domain))
- msi_domain_free_irqs_descs_locked(domain, &dev->dev);
- else
+ if (domain && irq_domain_is_hierarchy(domain)) {
+ msi_domain_free_irqs_all_locked(&dev->dev, MSI_DEFAULT_DOMAIN);
+ } else {
pci_msi_legacy_teardown_msi_irqs(dev);
- msi_free_msi_descs(&dev->dev);
+ msi_free_msi_descs(&dev->dev);
+ }
}
/**
@@ -63,51 +64,6 @@ static irq_hw_number_t pci_msi_domain_calc_hwirq(struct msi_desc *desc)
(pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 27;
}
-static inline bool pci_msi_desc_is_multi_msi(struct msi_desc *desc)
-{
- return !desc->pci.msi_attrib.is_msix && desc->nvec_used > 1;
-}
-
-/**
- * pci_msi_domain_check_cap - Verify that @domain supports the capabilities
- * for @dev
- * @domain: The interrupt domain to check
- * @info: The domain info for verification
- * @dev: The device to check
- *
- * Returns:
- * 0 if the functionality is supported
- * 1 if Multi MSI is requested, but the domain does not support it
- * -ENOTSUPP otherwise
- */
-static int pci_msi_domain_check_cap(struct irq_domain *domain,
- struct msi_domain_info *info,
- struct device *dev)
-{
- struct msi_desc *desc = msi_first_desc(dev, MSI_DESC_ALL);
-
- /* Special handling to support __pci_enable_msi_range() */
- if (pci_msi_desc_is_multi_msi(desc) &&
- !(info->flags & MSI_FLAG_MULTI_PCI_MSI))
- return 1;
-
- if (desc->pci.msi_attrib.is_msix) {
- if (!(info->flags & MSI_FLAG_PCI_MSIX))
- return -ENOTSUPP;
-
- if (info->flags & MSI_FLAG_MSIX_CONTIGUOUS) {
- unsigned int idx = 0;
-
- /* Check for gaps in the entry indices */
- msi_for_each_desc(desc, dev, MSI_DESC_ALL) {
- if (desc->msi_index != idx++)
- return -ENOTSUPP;
- }
- }
- }
- return 0;
-}
-
static void pci_msi_domain_set_desc(msi_alloc_info_t *arg,
struct msi_desc *desc)
{
@@ -117,7 +73,6 @@ static void pci_msi_domain_set_desc(msi_alloc_info_t *arg,
static struct msi_domain_ops pci_msi_domain_ops_default = {
.set_desc = pci_msi_domain_set_desc,
- .msi_check = pci_msi_domain_check_cap,
};
static void pci_msi_domain_update_dom_ops(struct msi_domain_info *info)
@@ -129,8 +84,6 @@ static void pci_msi_domain_update_dom_ops(struct msi_domain_info *info)
} else {
if (ops->set_desc == NULL)
ops->set_desc = pci_msi_domain_set_desc;
- if (ops->msi_check == NULL)
- ops->msi_check = pci_msi_domain_check_cap;
}
}
@@ -162,8 +115,6 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
struct msi_domain_info *info,
struct irq_domain *parent)
{
- struct irq_domain *domain;
-
if (WARN_ON(info->flags & MSI_FLAG_LEVEL_CAPABLE))
info->flags &= ~MSI_FLAG_LEVEL_CAPABLE;
@@ -172,23 +123,298 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
if (info->flags & MSI_FLAG_USE_DEF_CHIP_OPS)
pci_msi_domain_update_chip_ops(info);
+ /* Let the core code free MSI descriptors when freeing interrupts */
+ info->flags |= MSI_FLAG_FREE_MSI_DESCS;
+
info->flags |= MSI_FLAG_ACTIVATE_EARLY | MSI_FLAG_DEV_SYSFS;
if (IS_ENABLED(CONFIG_GENERIC_IRQ_RESERVATION_MODE))
info->flags |= MSI_FLAG_MUST_REACTIVATE;
/* PCI-MSI is oneshot-safe */
info->chip->flags |= IRQCHIP_ONESHOT_SAFE;
+ /* Let the core update the bus token */
+ info->bus_token = DOMAIN_BUS_PCI_MSI;
- domain = msi_create_irq_domain(fwnode, info, parent);
- if (!domain)
- return NULL;
-
- irq_domain_update_bus_token(domain, DOMAIN_BUS_PCI_MSI);
- return domain;
+ return msi_create_irq_domain(fwnode, info, parent);
}
EXPORT_SYMBOL_GPL(pci_msi_create_irq_domain);
/*
+ * Per device MSI[-X] domain functionality
+ */
+static void pci_device_domain_set_desc(msi_alloc_info_t *arg, struct msi_desc *desc)
+{
+ arg->desc = desc;
+ arg->hwirq = desc->msi_index;
+}
+
+static void pci_irq_mask_msi(struct irq_data *data)
+{
+ struct msi_desc *desc = irq_data_get_msi_desc(data);
+
+ pci_msi_mask(desc, BIT(data->irq - desc->irq));
+}
+
+static void pci_irq_unmask_msi(struct irq_data *data)
+{
+ struct msi_desc *desc = irq_data_get_msi_desc(data);
+
+ pci_msi_unmask(desc, BIT(data->irq - desc->irq));
+}
+
+#ifdef CONFIG_GENERIC_IRQ_RESERVATION_MODE
+# define MSI_REACTIVATE MSI_FLAG_MUST_REACTIVATE
+#else
+# define MSI_REACTIVATE 0
+#endif
+
+#define MSI_COMMON_FLAGS (MSI_FLAG_FREE_MSI_DESCS | \
+ MSI_FLAG_ACTIVATE_EARLY | \
+ MSI_FLAG_DEV_SYSFS | \
+ MSI_REACTIVATE)
+
+static const struct msi_domain_template pci_msi_template = {
+ .chip = {
+ .name = "PCI-MSI",
+ .irq_mask = pci_irq_mask_msi,
+ .irq_unmask = pci_irq_unmask_msi,
+ .irq_write_msi_msg = pci_msi_domain_write_msg,
+ .flags = IRQCHIP_ONESHOT_SAFE,
+ },
+
+ .ops = {
+ .set_desc = pci_device_domain_set_desc,
+ },
+
+ .info = {
+ .flags = MSI_COMMON_FLAGS | MSI_FLAG_MULTI_PCI_MSI,
+ .bus_token = DOMAIN_BUS_PCI_DEVICE_MSI,
+ },
+};
+
+static void pci_irq_mask_msix(struct irq_data *data)
+{
+ pci_msix_mask(irq_data_get_msi_desc(data));
+}
+
+static void pci_irq_unmask_msix(struct irq_data *data)
+{
+ pci_msix_unmask(irq_data_get_msi_desc(data));
+}
+
+static void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
+ struct msi_desc *desc)
+{
+ /* Don't fiddle with preallocated MSI descriptors */
+ if (!desc->pci.mask_base)
+ msix_prepare_msi_desc(to_pci_dev(desc->dev), desc);
+}
+
+static const struct msi_domain_template pci_msix_template = {
+ .chip = {
+ .name = "PCI-MSIX",
+ .irq_mask = pci_irq_mask_msix,
+ .irq_unmask = pci_irq_unmask_msix,
+ .irq_write_msi_msg = pci_msi_domain_write_msg,
+ .flags = IRQCHIP_ONESHOT_SAFE,
+ },
+
+ .ops = {
+ .prepare_desc = pci_msix_prepare_desc,
+ .set_desc = pci_device_domain_set_desc,
+ },
+
+ .info = {
+ .flags = MSI_COMMON_FLAGS | MSI_FLAG_PCI_MSIX |
+ MSI_FLAG_PCI_MSIX_ALLOC_DYN,
+ .bus_token = DOMAIN_BUS_PCI_DEVICE_MSIX,
+ },
+};
+
+static bool pci_match_device_domain(struct pci_dev *pdev, enum irq_domain_bus_token bus_token)
+{
+ return msi_match_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN, bus_token);
+}
+
+static bool pci_create_device_domain(struct pci_dev *pdev, const struct msi_domain_template *tmpl,
+ unsigned int hwsize)
+{
+ struct irq_domain *domain = dev_get_msi_domain(&pdev->dev);
+
+ if (!domain || !irq_domain_is_msi_parent(domain))
+ return true;
+
+ return msi_create_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN, tmpl,
+ hwsize, NULL, NULL);
+}
+
+/**
+ * pci_setup_msi_device_domain - Setup a device MSI interrupt domain
+ * @pdev: The PCI device to create the domain on
+ *
+ * Return:
+ * True when:
+ * - The device does not have a MSI parent irq domain associated,
+ * which keeps the legacy architecture specific and the global
+ * PCI/MSI domain models working
+ * - The MSI domain exists already
+ * - The MSI domain was successfully allocated
+ * False when:
+ * - MSI-X is enabled
+ * - The domain creation fails.
+ *
+ * The created MSI domain is preserved until:
+ * - The device is removed
+ * - MSI is disabled and a MSI-X domain is created
+ */
+bool pci_setup_msi_device_domain(struct pci_dev *pdev)
+{
+ if (WARN_ON_ONCE(pdev->msix_enabled))
+ return false;
+
+ if (pci_match_device_domain(pdev, DOMAIN_BUS_PCI_DEVICE_MSI))
+ return true;
+ if (pci_match_device_domain(pdev, DOMAIN_BUS_PCI_DEVICE_MSIX))
+ msi_remove_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN);
+
+ return pci_create_device_domain(pdev, &pci_msi_template, 1);
+}
+
+/**
+ * pci_setup_msix_device_domain - Setup a device MSI-X interrupt domain
+ * @pdev: The PCI device to create the domain on
+ * @hwsize: The size of the MSI-X vector table
+ *
+ * Return:
+ * True when:
+ * - The device does not have a MSI parent irq domain associated,
+ * which keeps the legacy architecture specific and the global
+ * PCI/MSI domain models working
+ * - The MSI-X domain exists already
+ * - The MSI-X domain was successfully allocated
+ * False when:
+ * - MSI is enabled
+ * - The domain creation fails.
+ *
+ * The created MSI-X domain is preserved until:
+ * - The device is removed
+ * - MSI-X is disabled and a MSI domain is created
+ */
+bool pci_setup_msix_device_domain(struct pci_dev *pdev, unsigned int hwsize)
+{
+ if (WARN_ON_ONCE(pdev->msi_enabled))
+ return false;
+
+ if (pci_match_device_domain(pdev, DOMAIN_BUS_PCI_DEVICE_MSIX))
+ return true;
+ if (pci_match_device_domain(pdev, DOMAIN_BUS_PCI_DEVICE_MSI))
+ msi_remove_device_irq_domain(&pdev->dev, MSI_DEFAULT_DOMAIN);
+
+ return pci_create_device_domain(pdev, &pci_msix_template, hwsize);
+}
+
+/**
+ * pci_msi_domain_supports - Check for support of a particular feature flag
+ * @pdev: The PCI device to operate on
+ * @feature_mask: The feature mask to check for (full match)
+ * @mode: If ALLOW_LEGACY this grants the feature when there is no irq domain
+ * associated to the device. If DENY_LEGACY the lack of an irq domain
+ * makes the feature unsupported
+ */
+bool pci_msi_domain_supports(struct pci_dev *pdev, unsigned int feature_mask,
+ enum support_mode mode)
+{
+ struct msi_domain_info *info;
+ struct irq_domain *domain;
+ unsigned int supported;
+
+ domain = dev_get_msi_domain(&pdev->dev);
+
+ if (!domain || !irq_domain_is_hierarchy(domain))
+ return mode == ALLOW_LEGACY;
+
+ if (!irq_domain_is_msi_parent(domain)) {
+ /*
+ * For "global" PCI/MSI interrupt domains the associated
+ * msi_domain_info::flags is the authoritative source of
+ * information.
+ */
+ info = domain->host_data;
+ supported = info->flags;
+ } else {
+ /*
+ * For MSI parent domains the supported feature set
+ * is available in the parent ops. This makes checks
+ * possible before actually instantiating the
+ * per device domain because the parent is never
+ * expanding the PCI/MSI functionality.
+ */
+ supported = domain->msi_parent_ops->supported_flags;
+ }
+
+ return (supported & feature_mask) == feature_mask;
+}
+
+/**
+ * pci_create_ims_domain - Create a secondary IMS domain for a PCI device
+ * @pdev: The PCI device to operate on
+ * @template: The MSI info template which describes the domain
+ * @hwsize: The size of the hardware entry table or 0 if the domain
+ * is purely software managed
+ * @data: Optional pointer to domain specific data to be stored
+ * in msi_domain_info::data
+ *
+ * Return: True on success, false otherwise
+ *
+ * An IMS domain is expected to have the following constraints:
+ * - The index space is managed by the core code
+ *
+ * - There is no requirement for consecutive index ranges
+ *
+ * - The interrupt chip must provide the following callbacks:
+ * - irq_mask()
+ * - irq_unmask()
+ * - irq_write_msi_msg()
+ *
+ * - The interrupt chip must provide the following optional callbacks
+ * when the irq_mask(), irq_unmask() and irq_write_msi_msg() callbacks
+ * cannot operate directly on hardware, e.g. in the case that the
+ * interrupt message store is in queue memory:
+ * - irq_bus_lock()
+ * - irq_bus_unlock()
+ *
+ * These callbacks are invoked from preemptible task context and are
+ * allowed to sleep. In this case the mandatory callbacks above just
+ * store the information. The irq_bus_unlock() callback is supposed
+ * to make the change effective before returning.
+ *
+ * - Interrupt affinity setting is handled by the underlying parent
+ * interrupt domain and communicated to the IMS domain via
+ * irq_write_msi_msg().
+ *
+ * The domain is automatically destroyed when the PCI device is removed.
+ */
+bool pci_create_ims_domain(struct pci_dev *pdev, const struct msi_domain_template *template,
+ unsigned int hwsize, void *data)
+{
+ struct irq_domain *domain = dev_get_msi_domain(&pdev->dev);
+
+ if (!domain || !irq_domain_is_msi_parent(domain))
+ return false;
+
+ if (template->info.bus_token != DOMAIN_BUS_PCI_DEVICE_IMS ||
+ !(template->info.flags & MSI_FLAG_ALLOC_SIMPLE_MSI_DESCS) ||
+ !(template->info.flags & MSI_FLAG_FREE_MSI_DESCS) ||
+ !template->chip.irq_mask || !template->chip.irq_unmask ||
+ !template->chip.irq_write_msi_msg || template->chip.irq_set_affinity)
+ return false;
+
+ return msi_create_device_irq_domain(&pdev->dev, MSI_SECONDARY_DOMAIN, template,
+ hwsize, data, NULL);
+}
+EXPORT_SYMBOL_GPL(pci_create_ims_domain);
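A minimal sketch of a template satisfying the constraints listed in the
kernel-doc above. The chip callbacks are hypothetical device specific
stubs; a real implementation would operate on its own message store:

    static void my_ims_mask(struct irq_data *data)
    {
            /* mask the vector in the device's IMS storage */
    }

    static void my_ims_unmask(struct irq_data *data)
    {
            /* unmask the vector in the device's IMS storage */
    }

    static void my_ims_write_msg(struct irq_data *data, struct msi_msg *msg)
    {
            /* store msg->address_lo/hi and msg->data in the IMS slot */
    }

    static const struct msi_domain_template my_ims_template = {
            .chip = {
                    .name                   = "my-ims",
                    .irq_mask               = my_ims_mask,
                    .irq_unmask             = my_ims_unmask,
                    .irq_write_msi_msg      = my_ims_write_msg,
            },
            .info = {
                    /* Flags required by the checks in pci_create_ims_domain() */
                    .flags          = MSI_FLAG_ALLOC_SIMPLE_MSI_DESCS |
                                      MSI_FLAG_FREE_MSI_DESCS,
                    .bus_token      = DOMAIN_BUS_PCI_DEVICE_IMS,
            },
    };

    /* In probe(): hwsize == 0 means a purely software managed index space */
    if (!pci_create_ims_domain(pdev, &my_ims_template, 0, NULL))
            return -ENODEV;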
+
+/*
* Users of the generic MSI infrastructure expect a device to have a single ID,
* so with DMA aliases we have to pick the least-worst compromise. Devices with
* DMA phantom functions tend to still emit MSIs from the real function number,
@@ -257,24 +483,3 @@ struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev)
DOMAIN_BUS_PCI_MSI);
return dom;
}
-
-/**
- * pci_dev_has_special_msi_domain - Check whether the device is handled by
- * a non-standard PCI-MSI domain
- * @pdev: The PCI device to check.
- *
- * Returns: True if the device irqdomain or the bus irqdomain is
- * non-standard PCI/MSI.
- */
-bool pci_dev_has_special_msi_domain(struct pci_dev *pdev)
-{
- struct irq_domain *dom = dev_get_msi_domain(&pdev->dev);
-
- if (!dom)
- dom = dev_get_msi_domain(&pdev->bus->dev);
-
- if (!dom)
- return true;
-
- return dom->bus_token != DOMAIN_BUS_PCI_MSI;
-}
diff --git a/drivers/pci/msi/msi.c b/drivers/pci/msi/msi.c
index fdd2ec09651e..1f716624ca56 100644
--- a/drivers/pci/msi/msi.c
+++ b/drivers/pci/msi/msi.c
@@ -13,82 +13,114 @@
#include "../pci.h"
#include "msi.h"
-static int pci_msi_enable = 1;
+int pci_msi_enable = 1;
int pci_msi_ignore_mask;
-static noinline void pci_msi_update_mask(struct msi_desc *desc, u32 clear, u32 set)
+/**
+ * pci_msi_supported - check whether MSI may be enabled on a device
+ * @dev: pointer to the pci_dev data structure of MSI device function
+ * @nvec: how many MSIs have been requested?
+ *
+ * Look at global flags, the device itself, and its parent buses
+ * to determine if MSI/-X are supported for the device. If MSI/-X is
+ * supported return 1, else return 0.
+ **/
+static int pci_msi_supported(struct pci_dev *dev, int nvec)
{
- raw_spinlock_t *lock = &to_pci_dev(desc->dev)->msi_lock;
- unsigned long flags;
+ struct pci_bus *bus;
- if (!desc->pci.msi_attrib.can_mask)
- return;
+ /* MSI must be globally enabled and supported by the device */
+ if (!pci_msi_enable)
+ return 0;
- raw_spin_lock_irqsave(lock, flags);
- desc->pci.msi_mask &= ~clear;
- desc->pci.msi_mask |= set;
- pci_write_config_dword(msi_desc_to_pci_dev(desc), desc->pci.mask_pos,
- desc->pci.msi_mask);
- raw_spin_unlock_irqrestore(lock, flags);
-}
+ if (!dev || dev->no_msi)
+ return 0;
-static inline void pci_msi_mask(struct msi_desc *desc, u32 mask)
-{
- pci_msi_update_mask(desc, 0, mask);
-}
+ /*
+ * You can't ask to have 0 or less MSIs configured.
+ * a) it's stupid ..
+ * b) the list manipulation code assumes nvec >= 1.
+ */
+ if (nvec < 1)
+ return 0;
-static inline void pci_msi_unmask(struct msi_desc *desc, u32 mask)
-{
- pci_msi_update_mask(desc, mask, 0);
+ /*
+ * Any bridge which does NOT route MSI transactions from its
+ * secondary bus to its primary bus must set NO_MSI flag on
+ * the secondary pci_bus.
+ *
+ * The NO_MSI flag can either be set directly by:
+ * - arch-specific PCI host bus controller drivers (deprecated)
+ * - quirks for specific PCI bridges
+ *
+ * or indirectly by platform-specific PCI host bridge drivers by
+ * advertising the 'msi_domain' property, which results in
+ * the NO_MSI flag when no MSI domain is found for this bridge
+ * at probe time.
+ */
+ for (bus = dev->bus; bus; bus = bus->parent)
+ if (bus->bus_flags & PCI_BUS_FLAGS_NO_MSI)
+ return 0;
+
+ return 1;
}
-static inline void __iomem *pci_msix_desc_addr(struct msi_desc *desc)
+static void pcim_msi_release(void *pcidev)
{
- return desc->pci.mask_base + desc->msi_index * PCI_MSIX_ENTRY_SIZE;
+ struct pci_dev *dev = pcidev;
+
+ dev->is_msi_managed = false;
+ pci_free_irq_vectors(dev);
}
/*
- * This internal function does not flush PCI writes to the device. All
- * users must ensure that they read from the device before either assuming
- * that the device state is up to date, or returning out of this file.
- * It does not affect the msi_desc::msix_ctrl cache either. Use with care!
+ * Needs to be separate from pcim_release to prevent an ordering problem
+ * vs. msi_device_data_release() in the MSI core code.
*/
-static void pci_msix_write_vector_ctrl(struct msi_desc *desc, u32 ctrl)
+static int pcim_setup_msi_release(struct pci_dev *dev)
{
- void __iomem *desc_addr = pci_msix_desc_addr(desc);
+ int ret;
- if (desc->pci.msi_attrib.can_mask)
- writel(ctrl, desc_addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
-}
+ if (!pci_is_managed(dev) || dev->is_msi_managed)
+ return 0;
-static inline void pci_msix_mask(struct msi_desc *desc)
-{
- desc->pci.msix_ctrl |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
- pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl);
- /* Flush write to device */
- readl(desc->pci.mask_base);
+ ret = devm_add_action(&dev->dev, pcim_msi_release, dev);
+ if (!ret)
+ dev->is_msi_managed = true;
+ return ret;
}
-static inline void pci_msix_unmask(struct msi_desc *desc)
+/*
+ * Ordering vs. devres: msi device data has to be installed first so that
+ * pcim_msi_release() is invoked before it on device release.
+ */
+static int pci_setup_msi_context(struct pci_dev *dev)
{
- desc->pci.msix_ctrl &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT;
- pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl);
-}
+ int ret = msi_setup_device_data(&dev->dev);
-static void __pci_msi_mask_desc(struct msi_desc *desc, u32 mask)
-{
- if (desc->pci.msi_attrib.is_msix)
- pci_msix_mask(desc);
- else
- pci_msi_mask(desc, mask);
+ if (!ret)
+ ret = pcim_setup_msi_release(dev);
+ return ret;
}
-static void __pci_msi_unmask_desc(struct msi_desc *desc, u32 mask)
+/*
+ * Helper functions for mask/unmask and MSI message handling
+ */
+
+void pci_msi_update_mask(struct msi_desc *desc, u32 clear, u32 set)
{
- if (desc->pci.msi_attrib.is_msix)
- pci_msix_unmask(desc);
- else
- pci_msi_unmask(desc, mask);
+ raw_spinlock_t *lock = &to_pci_dev(desc->dev)->msi_lock;
+ unsigned long flags;
+
+ if (!desc->pci.msi_attrib.can_mask)
+ return;
+
+ raw_spin_lock_irqsave(lock, flags);
+ desc->pci.msi_mask &= ~clear;
+ desc->pci.msi_mask |= set;
+ pci_write_config_dword(msi_desc_to_pci_dev(desc), desc->pci.mask_pos,
+ desc->pci.msi_mask);
+ raw_spin_unlock_irqrestore(lock, flags);
}
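With clear/set as separate arguments this one primitive serves both directions; the thin wrappers added to msi.h further down in this patch reduce the call sites to:

	/* Illustrative: mask MSI vectors 0-3 of a multi-MSI device,
	 * then unmask them again. */
	pci_msi_mask(desc, 0x0f);	/* pci_msi_update_mask(desc, 0, 0x0f) */
	pci_msi_unmask(desc, 0x0f);	/* pci_msi_update_mask(desc, 0x0f, 0) */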
/**
@@ -148,6 +180,58 @@ void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
}
}
+static inline void pci_write_msg_msi(struct pci_dev *dev, struct msi_desc *desc,
+ struct msi_msg *msg)
+{
+ int pos = dev->msi_cap;
+ u16 msgctl;
+
+ pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl);
+ msgctl &= ~PCI_MSI_FLAGS_QSIZE;
+ msgctl |= desc->pci.msi_attrib.multiple << 4;
+ pci_write_config_word(dev, pos + PCI_MSI_FLAGS, msgctl);
+
+ pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, msg->address_lo);
+ if (desc->pci.msi_attrib.is_64) {
+ pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_HI, msg->address_hi);
+ pci_write_config_word(dev, pos + PCI_MSI_DATA_64, msg->data);
+ } else {
+ pci_write_config_word(dev, pos + PCI_MSI_DATA_32, msg->data);
+ }
+ /* Ensure that the writes are visible in the device */
+ pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl);
+}
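The QSIZE update relies on msi_attrib.multiple holding log2 of the enabled vector count, computed at descriptor setup time in msi_setup_msi_desc(). Worked values for the encoding (nvec is rounded up to a power of two):

	/* nvec = 1  -> multiple = 0 -> QSIZE bits 000
	 * nvec = 4  -> multiple = 2 -> QSIZE bits 010
	 * nvec = 32 -> multiple = 5 -> QSIZE bits 101 */
	desc.pci.msi_attrib.multiple = ilog2(__roundup_pow_of_two(nvec));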
+
+static inline void pci_write_msg_msix(struct msi_desc *desc, struct msi_msg *msg)
+{
+ void __iomem *base = pci_msix_desc_addr(desc);
+ u32 ctrl = desc->pci.msix_ctrl;
+ bool unmasked = !(ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT);
+
+ if (desc->pci.msi_attrib.is_virtual)
+ return;
+ /*
+ * The specification mandates that the entry is masked
+ * when the message is modified:
+ *
+ * "If software changes the Address or Data value of an
+ * entry while the entry is unmasked, the result is
+ * undefined."
+ */
+ if (unmasked)
+ pci_msix_write_vector_ctrl(desc, ctrl | PCI_MSIX_ENTRY_CTRL_MASKBIT);
+
+ writel(msg->address_lo, base + PCI_MSIX_ENTRY_LOWER_ADDR);
+ writel(msg->address_hi, base + PCI_MSIX_ENTRY_UPPER_ADDR);
+ writel(msg->data, base + PCI_MSIX_ENTRY_DATA);
+
+ if (unmasked)
+ pci_msix_write_vector_ctrl(desc, ctrl);
+
+ /* Ensure that the writes are visible in the device */
+ readl(base + PCI_MSIX_ENTRY_DATA);
+}
+
void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
{
struct pci_dev *dev = msi_desc_to_pci_dev(entry);
@@ -155,63 +239,15 @@ void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
if (dev->current_state != PCI_D0 || pci_dev_is_disconnected(dev)) {
/* Don't touch the hardware now */
} else if (entry->pci.msi_attrib.is_msix) {
- void __iomem *base = pci_msix_desc_addr(entry);
- u32 ctrl = entry->pci.msix_ctrl;
- bool unmasked = !(ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT);
-
- if (entry->pci.msi_attrib.is_virtual)
- goto skip;
-
- /*
- * The specification mandates that the entry is masked
- * when the message is modified:
- *
- * "If software changes the Address or Data value of an
- * entry while the entry is unmasked, the result is
- * undefined."
- */
- if (unmasked)
- pci_msix_write_vector_ctrl(entry, ctrl | PCI_MSIX_ENTRY_CTRL_MASKBIT);
-
- writel(msg->address_lo, base + PCI_MSIX_ENTRY_LOWER_ADDR);
- writel(msg->address_hi, base + PCI_MSIX_ENTRY_UPPER_ADDR);
- writel(msg->data, base + PCI_MSIX_ENTRY_DATA);
-
- if (unmasked)
- pci_msix_write_vector_ctrl(entry, ctrl);
-
- /* Ensure that the writes are visible in the device */
- readl(base + PCI_MSIX_ENTRY_DATA);
+ pci_write_msg_msix(entry, msg);
} else {
- int pos = dev->msi_cap;
- u16 msgctl;
-
- pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl);
- msgctl &= ~PCI_MSI_FLAGS_QSIZE;
- msgctl |= entry->pci.msi_attrib.multiple << 4;
- pci_write_config_word(dev, pos + PCI_MSI_FLAGS, msgctl);
-
- pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_LO,
- msg->address_lo);
- if (entry->pci.msi_attrib.is_64) {
- pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_HI,
- msg->address_hi);
- pci_write_config_word(dev, pos + PCI_MSI_DATA_64,
- msg->data);
- } else {
- pci_write_config_word(dev, pos + PCI_MSI_DATA_32,
- msg->data);
- }
- /* Ensure that the writes are visible in the device */
- pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl);
+ pci_write_msg_msi(dev, entry, msg);
}
-skip:
entry->msg = *msg;
if (entry->write_msi_msg)
entry->write_msi_msg(entry, entry->write_msi_msg_data);
-
}
void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg)
@@ -222,15 +258,8 @@ void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg)
}
EXPORT_SYMBOL_GPL(pci_write_msi_msg);
-static void free_msi_irqs(struct pci_dev *dev)
-{
- pci_msi_teardown_msi_irqs(dev);
- if (dev->msix_base) {
- iounmap(dev->msix_base);
- dev->msix_base = NULL;
- }
-}
+/* PCI/MSI specific functionality */
static void pci_intx_for_msi(struct pci_dev *dev, int enable)
{
@@ -249,118 +278,6 @@ static void pci_msi_set_enable(struct pci_dev *dev, int enable)
pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control);
}
-/*
- * Architecture override returns true when the PCI MSI message should be
- * written by the generic restore function.
- */
-bool __weak arch_restore_msi_irqs(struct pci_dev *dev)
-{
- return true;
-}
-
-static void __pci_restore_msi_state(struct pci_dev *dev)
-{
- struct msi_desc *entry;
- u16 control;
-
- if (!dev->msi_enabled)
- return;
-
- entry = irq_get_msi_desc(dev->irq);
-
- pci_intx_for_msi(dev, 0);
- pci_msi_set_enable(dev, 0);
- if (arch_restore_msi_irqs(dev))
- __pci_write_msi_msg(entry, &entry->msg);
-
- pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control);
- pci_msi_update_mask(entry, 0, 0);
- control &= ~PCI_MSI_FLAGS_QSIZE;
- control |= (entry->pci.msi_attrib.multiple << 4) | PCI_MSI_FLAGS_ENABLE;
- pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control);
-}
-
-static void pci_msix_clear_and_set_ctrl(struct pci_dev *dev, u16 clear, u16 set)
-{
- u16 ctrl;
-
- pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &ctrl);
- ctrl &= ~clear;
- ctrl |= set;
- pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, ctrl);
-}
-
-static void __pci_restore_msix_state(struct pci_dev *dev)
-{
- struct msi_desc *entry;
- bool write_msg;
-
- if (!dev->msix_enabled)
- return;
-
- /* route the table */
- pci_intx_for_msi(dev, 0);
- pci_msix_clear_and_set_ctrl(dev, 0,
- PCI_MSIX_FLAGS_ENABLE | PCI_MSIX_FLAGS_MASKALL);
-
- write_msg = arch_restore_msi_irqs(dev);
-
- msi_lock_descs(&dev->dev);
- msi_for_each_desc(entry, &dev->dev, MSI_DESC_ALL) {
- if (write_msg)
- __pci_write_msi_msg(entry, &entry->msg);
- pci_msix_write_vector_ctrl(entry, entry->pci.msix_ctrl);
- }
- msi_unlock_descs(&dev->dev);
-
- pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
-}
-
-void pci_restore_msi_state(struct pci_dev *dev)
-{
- __pci_restore_msi_state(dev);
- __pci_restore_msix_state(dev);
-}
-EXPORT_SYMBOL_GPL(pci_restore_msi_state);
-
-static void pcim_msi_release(void *pcidev)
-{
- struct pci_dev *dev = pcidev;
-
- dev->is_msi_managed = false;
- pci_free_irq_vectors(dev);
-}
-
-/*
- * Needs to be separate from pcim_release to prevent an ordering problem
- * vs. msi_device_data_release() in the MSI core code.
- */
-static int pcim_setup_msi_release(struct pci_dev *dev)
-{
- int ret;
-
- if (!pci_is_managed(dev) || dev->is_msi_managed)
- return 0;
-
- ret = devm_add_action(&dev->dev, pcim_msi_release, dev);
- if (!ret)
- dev->is_msi_managed = true;
- return ret;
-}
-
-/*
- * Ordering vs. devres: msi device data has to be installed first so that
- * pcim_msi_release() is invoked before it on device release.
- */
-static int pci_setup_msi_context(struct pci_dev *dev)
-{
- int ret = msi_setup_device_data(&dev->dev);
-
- if (!ret)
- ret = pcim_setup_msi_release(dev);
- return ret;
-}
-
static int msi_setup_msi_desc(struct pci_dev *dev, int nvec,
struct irq_affinity_desc *masks)
{
@@ -395,7 +312,7 @@ static int msi_setup_msi_desc(struct pci_dev *dev, int nvec,
if (desc.pci.msi_attrib.can_mask)
pci_read_config_dword(dev, desc.pci.mask_pos, &desc.pci.msi_mask);
- return msi_add_msi_desc(&dev->dev, &desc);
+ return msi_insert_msi_desc(&dev->dev, &desc);
}
static int msi_verify_entries(struct pci_dev *dev)
@@ -434,6 +351,10 @@ static int msi_capability_init(struct pci_dev *dev, int nvec,
struct msi_desc *entry;
int ret;
+ /* Reject multi-MSI early on irq domain enabled architectures */
+ if (nvec > 1 && !pci_msi_domain_supports(dev, MSI_FLAG_MULTI_PCI_MSI, ALLOW_LEGACY))
+ return 1;
+
/*
* Disable MSI during setup in the hardware, but mark it enabled
* so that setup code can evaluate it.
@@ -472,7 +393,7 @@ static int msi_capability_init(struct pci_dev *dev, int nvec,
err:
pci_msi_unmask(entry, msi_multi_mask(entry));
- free_msi_irqs(dev);
+ pci_free_msi_irqs(dev);
fail:
dev->msi_enabled = 0;
unlock:
@@ -481,6 +402,152 @@ unlock:
return ret;
}
+int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
+ struct irq_affinity *affd)
+{
+ int nvec;
+ int rc;
+
+ if (!pci_msi_supported(dev, minvec) || dev->current_state != PCI_D0)
+ return -EINVAL;
+
+ /* Check whether driver already requested MSI-X IRQs */
+ if (dev->msix_enabled) {
+ pci_info(dev, "can't enable MSI (MSI-X already enabled)\n");
+ return -EINVAL;
+ }
+
+ if (maxvec < minvec)
+ return -ERANGE;
+
+ if (WARN_ON_ONCE(dev->msi_enabled))
+ return -EINVAL;
+
+ nvec = pci_msi_vec_count(dev);
+ if (nvec < 0)
+ return nvec;
+ if (nvec < minvec)
+ return -ENOSPC;
+
+ if (nvec > maxvec)
+ nvec = maxvec;
+
+ rc = pci_setup_msi_context(dev);
+ if (rc)
+ return rc;
+
+ if (!pci_setup_msi_device_domain(dev))
+ return -ENODEV;
+
+ for (;;) {
+ if (affd) {
+ nvec = irq_calc_affinity_vectors(minvec, nvec, affd);
+ if (nvec < minvec)
+ return -ENOSPC;
+ }
+
+ rc = msi_capability_init(dev, nvec, affd);
+ if (rc == 0)
+ return nvec;
+
+ if (rc < 0)
+ return rc;
+ if (rc < minvec)
+ return -ENOSPC;
+
+ nvec = rc;
+ }
+}
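From the driver side the negotiation loop above is invisible; a typical probe fragment (hypothetical handler, device cookie, and unwind label) only states the acceptable range:

	int i, nvec = pci_alloc_irq_vectors(pdev, 1, 8, PCI_IRQ_MSI);

	if (nvec < 0)
		return nvec;
	for (i = 0; i < nvec; i++) {
		/* pci_irq_vector() maps the index to a Linux IRQ number */
		if (request_irq(pci_irq_vector(pdev, i), my_handler, 0,
				"mydev", mydev))
			goto err_free_vectors;	/* unwind elided */
	}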
+
+/**
+ * pci_msi_vec_count - Return the number of MSI vectors a device can send
+ * @dev: device to report about
+ *
+ * This function returns the number of MSI vectors a device requested via
+ * the Multiple Message Capable register. It returns a negative errno if
+ * the device is not capable of sending MSI interrupts. Otherwise, the
+ * call succeeds and returns a power of two, up to a maximum of 2^5 (32),
+ * according to the MSI specification.
+ **/
+int pci_msi_vec_count(struct pci_dev *dev)
+{
+ int ret;
+ u16 msgctl;
+
+ if (!dev->msi_cap)
+ return -EINVAL;
+
+ pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);
+ ret = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1);
+
+ return ret;
+}
+EXPORT_SYMBOL(pci_msi_vec_count);
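A worked example of the extraction, with an illustrative Message Control value:

	u16 msgctl = 0x008a;	/* Multiple Message Capable field = 0b101 */
	int vecs = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1);
	/* (0x008a & 0x000e) >> 1 == 5, so vecs == 1 << 5 == 32 */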
+
+/*
+ * Architecture override returns true when the PCI MSI message should be
+ * written by the generic restore function.
+ */
+bool __weak arch_restore_msi_irqs(struct pci_dev *dev)
+{
+ return true;
+}
+
+void __pci_restore_msi_state(struct pci_dev *dev)
+{
+ struct msi_desc *entry;
+ u16 control;
+
+ if (!dev->msi_enabled)
+ return;
+
+ entry = irq_get_msi_desc(dev->irq);
+
+ pci_intx_for_msi(dev, 0);
+ pci_msi_set_enable(dev, 0);
+ if (arch_restore_msi_irqs(dev))
+ __pci_write_msi_msg(entry, &entry->msg);
+
+ pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control);
+ pci_msi_update_mask(entry, 0, 0);
+ control &= ~PCI_MSI_FLAGS_QSIZE;
+ control |= (entry->pci.msi_attrib.multiple << 4) | PCI_MSI_FLAGS_ENABLE;
+ pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control);
+}
+
+void pci_msi_shutdown(struct pci_dev *dev)
+{
+ struct msi_desc *desc;
+
+ if (!pci_msi_enable || !dev || !dev->msi_enabled)
+ return;
+
+ pci_msi_set_enable(dev, 0);
+ pci_intx_for_msi(dev, 1);
+ dev->msi_enabled = 0;
+
+ /* Return the device with MSI unmasked, as in its initial state */
+ desc = msi_first_desc(&dev->dev, MSI_DESC_ALL);
+ if (!WARN_ON_ONCE(!desc))
+ pci_msi_unmask(desc, msi_multi_mask(desc));
+
+ /* Restore dev->irq to its default pin-assertion IRQ */
+ dev->irq = desc->pci.msi_attrib.default_irq;
+ pcibios_alloc_irq(dev);
+}
+
+/* PCI/MSI-X specific functionality */
+
+static void pci_msix_clear_and_set_ctrl(struct pci_dev *dev, u16 clear, u16 set)
+{
+ u16 ctrl;
+
+ pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &ctrl);
+ ctrl &= ~clear;
+ ctrl |= set;
+ pci_write_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, ctrl);
+}
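The restore path later in this patch brackets its table reprogramming with two calls to this helper, which illustrates the intended usage:

	/* Enable MSI-X, but keep delivery globally masked ... */
	pci_msix_clear_and_set_ctrl(dev, 0,
			PCI_MSIX_FLAGS_ENABLE | PCI_MSIX_FLAGS_MASKALL);
	/* ... rewrite the table entries, then lift the global mask
	 * once the table is consistent again */
	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);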
+
static void __iomem *msix_map_region(struct pci_dev *dev,
unsigned int nr_entries)
{
@@ -502,36 +569,58 @@ static void __iomem *msix_map_region(struct pci_dev *dev,
return ioremap(phys_addr, nr_entries * PCI_MSIX_ENTRY_SIZE);
}
-static int msix_setup_msi_descs(struct pci_dev *dev, void __iomem *base,
- struct msix_entry *entries, int nvec,
- struct irq_affinity_desc *masks)
+/**
+ * msix_prepare_msi_desc - Prepare a half-initialized MSI descriptor for operation
+ * @dev: The PCI device for which the descriptor is prepared
+ * @desc: The MSI descriptor for preparation
+ *
+ * This is separate from msix_setup_msi_descs() below to handle dynamic
+ * allocations for MSI-X after initial enablement.
+ *
+ * Ideally the whole MSI-X setup would work that way, but there is no way to
+ * support this for the legacy arch_setup_msi_irqs() mechanism and for the
+ * fake irq domains like the x86 XEN one. Sigh...
+ *
+ * The descriptor is zeroed and only @desc::msi_index and @desc::affinity
+ * are set. When called from msix_setup_msi_descs(), the is_virtual
+ * attribute is initialized as well.
+ *
+ * Fill in the rest.
+ */
+void msix_prepare_msi_desc(struct pci_dev *dev, struct msi_desc *desc)
+{
+ desc->nvec_used = 1;
+ desc->pci.msi_attrib.is_msix = 1;
+ desc->pci.msi_attrib.is_64 = 1;
+ desc->pci.msi_attrib.default_irq = dev->irq;
+ desc->pci.mask_base = dev->msix_base;
+ desc->pci.msi_attrib.can_mask = !pci_msi_ignore_mask &&
+ !desc->pci.msi_attrib.is_virtual;
+
+ if (desc->pci.msi_attrib.can_mask) {
+ void __iomem *addr = pci_msix_desc_addr(desc);
+
+ desc->pci.msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
+ }
+}
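A sketch of the dynamic post-enable path this helper is split out for (hypothetical fragment; 'index' is an assumed free table slot, descriptor locking and error handling elided):

	struct msi_desc desc;

	memset(&desc, 0, sizeof(desc));
	desc.msi_index = index;		/* assumed free MSI-X table slot */
	msix_prepare_msi_desc(dev, &desc);
	ret = msi_insert_msi_desc(&dev->dev, &desc);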
+
+static int msix_setup_msi_descs(struct pci_dev *dev, struct msix_entry *entries,
+ int nvec, struct irq_affinity_desc *masks)
{
int ret = 0, i, vec_count = pci_msix_vec_count(dev);
struct irq_affinity_desc *curmsk;
struct msi_desc desc;
- void __iomem *addr;
memset(&desc, 0, sizeof(desc));
- desc.nvec_used = 1;
- desc.pci.msi_attrib.is_msix = 1;
- desc.pci.msi_attrib.is_64 = 1;
- desc.pci.msi_attrib.default_irq = dev->irq;
- desc.pci.mask_base = base;
-
for (i = 0, curmsk = masks; i < nvec; i++, curmsk++) {
desc.msi_index = entries ? entries[i].entry : i;
desc.affinity = masks ? curmsk : NULL;
desc.pci.msi_attrib.is_virtual = desc.msi_index >= vec_count;
- desc.pci.msi_attrib.can_mask = !pci_msi_ignore_mask &&
- !desc.pci.msi_attrib.is_virtual;
- if (desc.pci.msi_attrib.can_mask) {
- addr = pci_msix_desc_addr(&desc);
- desc.pci.msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
- }
+ msix_prepare_msi_desc(dev, &desc);
- ret = msi_add_msi_desc(&dev->dev, &desc);
+ ret = msi_insert_msi_desc(&dev->dev, &desc);
if (ret)
break;
}
@@ -562,9 +651,8 @@ static void msix_mask_all(void __iomem *base, int tsize)
writel(ctrl, base + PCI_MSIX_ENTRY_VECTOR_CTRL);
}
-static int msix_setup_interrupts(struct pci_dev *dev, void __iomem *base,
- struct msix_entry *entries, int nvec,
- struct irq_affinity *affd)
+static int msix_setup_interrupts(struct pci_dev *dev, struct msix_entry *entries,
+ int nvec, struct irq_affinity *affd)
{
struct irq_affinity_desc *masks = NULL;
int ret;
@@ -573,7 +661,7 @@ static int msix_setup_interrupts(struct pci_dev *dev, void __iomem *base,
masks = irq_create_affinity_masks(nvec, affd);
msi_lock_descs(&dev->dev);
- ret = msix_setup_msi_descs(dev, base, entries, nvec, masks);
+ ret = msix_setup_msi_descs(dev, entries, nvec, masks);
if (ret)
goto out_free;
@@ -590,7 +678,7 @@ static int msix_setup_interrupts(struct pci_dev *dev, void __iomem *base,
goto out_unlock;
out_free:
- free_msi_irqs(dev);
+ pci_free_msi_irqs(dev);
out_unlock:
msi_unlock_descs(&dev->dev);
kfree(masks);
@@ -611,7 +699,6 @@ out_unlock:
static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
int nvec, struct irq_affinity *affd)
{
- void __iomem *base;
int ret, tsize;
u16 control;
@@ -629,15 +716,13 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control);
/* Request & Map MSI-X table region */
tsize = msix_table_size(control);
- base = msix_map_region(dev, tsize);
- if (!base) {
+ dev->msix_base = msix_map_region(dev, tsize);
+ if (!dev->msix_base) {
ret = -ENOMEM;
goto out_disable;
}
- dev->msix_base = base;
-
- ret = msix_setup_interrupts(dev, base, entries, nvec, affd);
+ ret = msix_setup_interrupts(dev, entries, nvec, affd);
if (ret)
goto out_disable;
@@ -652,7 +737,7 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
* which takes the MSI-X mask bits into account even
* when MSI-X is disabled, which prevents MSI delivery.
*/
- msix_mask_all(base, tsize);
+ msix_mask_all(dev->msix_base, tsize);
pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
pcibios_free_irq(dev);
@@ -665,236 +750,82 @@ out_disable:
return ret;
}
-/**
- * pci_msi_supported - check whether MSI may be enabled on a device
- * @dev: pointer to the pci_dev data structure of MSI device function
- * @nvec: how many MSIs have been requested?
- *
- * Look at global flags, the device itself, and its parent buses
- * to determine if MSI/-X are supported for the device. If MSI/-X is
- * supported return 1, else return 0.
- **/
-static int pci_msi_supported(struct pci_dev *dev, int nvec)
-{
- struct pci_bus *bus;
-
- /* MSI must be globally enabled and supported by the device */
- if (!pci_msi_enable)
- return 0;
-
- if (!dev || dev->no_msi)
- return 0;
-
- /*
- * You can't ask to have 0 or less MSIs configured.
- * a) it's stupid ..
- * b) the list manipulation code assumes nvec >= 1.
- */
- if (nvec < 1)
- return 0;
-
- /*
- * Any bridge which does NOT route MSI transactions from its
- * secondary bus to its primary bus must set NO_MSI flag on
- * the secondary pci_bus.
- *
- * The NO_MSI flag can either be set directly by:
- * - arch-specific PCI host bus controller drivers (deprecated)
- * - quirks for specific PCI bridges
- *
- * or indirectly by platform-specific PCI host bridge drivers by
- * advertising the 'msi_domain' property, which results in
- * the NO_MSI flag when no MSI domain is found for this bridge
- * at probe time.
- */
- for (bus = dev->bus; bus; bus = bus->parent)
- if (bus->bus_flags & PCI_BUS_FLAGS_NO_MSI)
- return 0;
-
- return 1;
-}
-
-/**
- * pci_msi_vec_count - Return the number of MSI vectors a device can send
- * @dev: device to report about
- *
- * This function returns the number of MSI vectors a device requested via
- * Multiple Message Capable register. It returns a negative errno if the
- * device is not capable sending MSI interrupts. Otherwise, the call succeeds
- * and returns a power of two, up to a maximum of 2^5 (32), according to the
- * MSI specification.
- **/
-int pci_msi_vec_count(struct pci_dev *dev)
-{
- int ret;
- u16 msgctl;
-
- if (!dev->msi_cap)
- return -EINVAL;
-
- pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);
- ret = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1);
-
- return ret;
-}
-EXPORT_SYMBOL(pci_msi_vec_count);
-
-static void pci_msi_shutdown(struct pci_dev *dev)
-{
- struct msi_desc *desc;
-
- if (!pci_msi_enable || !dev || !dev->msi_enabled)
- return;
-
- pci_msi_set_enable(dev, 0);
- pci_intx_for_msi(dev, 1);
- dev->msi_enabled = 0;
-
- /* Return the device with MSI unmasked as initial states */
- desc = msi_first_desc(&dev->dev, MSI_DESC_ALL);
- if (!WARN_ON_ONCE(!desc))
- pci_msi_unmask(desc, msi_multi_mask(desc));
-
- /* Restore dev->irq to its default pin-assertion IRQ */
- dev->irq = desc->pci.msi_attrib.default_irq;
- pcibios_alloc_irq(dev);
-}
-
-void pci_disable_msi(struct pci_dev *dev)
-{
- if (!pci_msi_enable || !dev || !dev->msi_enabled)
- return;
-
- msi_lock_descs(&dev->dev);
- pci_msi_shutdown(dev);
- free_msi_irqs(dev);
- msi_unlock_descs(&dev->dev);
-}
-EXPORT_SYMBOL(pci_disable_msi);
-
-/**
- * pci_msix_vec_count - return the number of device's MSI-X table entries
- * @dev: pointer to the pci_dev data structure of MSI-X device function
- * This function returns the number of device's MSI-X table entries and
- * therefore the number of MSI-X vectors device is capable of sending.
- * It returns a negative errno if the device is not capable of sending MSI-X
- * interrupts.
- **/
-int pci_msix_vec_count(struct pci_dev *dev)
-{
- u16 control;
-
- if (!dev->msix_cap)
- return -EINVAL;
-
- pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control);
- return msix_table_size(control);
-}
-EXPORT_SYMBOL(pci_msix_vec_count);
-
-static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
- int nvec, struct irq_affinity *affd, int flags)
+static bool pci_msix_validate_entries(struct pci_dev *dev, struct msix_entry *entries,
+ int nvec, int hwsize)
{
- int nr_entries;
+ bool nogap;
int i, j;
- if (!pci_msi_supported(dev, nvec) || dev->current_state != PCI_D0)
- return -EINVAL;
+ if (!entries)
+ return true;
- nr_entries = pci_msix_vec_count(dev);
- if (nr_entries < 0)
- return nr_entries;
- if (nvec > nr_entries && !(flags & PCI_IRQ_VIRTUAL))
- return nr_entries;
+ nogap = pci_msi_domain_supports(dev, MSI_FLAG_MSIX_CONTIGUOUS, DENY_LEGACY);
- if (entries) {
- /* Check for any invalid entries */
- for (i = 0; i < nvec; i++) {
- if (entries[i].entry >= nr_entries)
- return -EINVAL; /* invalid entry */
- for (j = i + 1; j < nvec; j++) {
- if (entries[i].entry == entries[j].entry)
- return -EINVAL; /* duplicate entry */
- }
- }
- }
+ for (i = 0; i < nvec; i++) {
+ /* Entry within hardware limit? */
+ if (entries[i].entry >= hwsize)
+ return false;
- /* Check whether driver already requested for MSI IRQ */
- if (dev->msi_enabled) {
- pci_info(dev, "can't enable MSI-X (MSI IRQ already assigned)\n");
- return -EINVAL;
+ /* Check for duplicate entries */
+ for (j = i + 1; j < nvec; j++) {
+ if (entries[i].entry == entries[j].entry)
+ return false;
+ }
+ /* Check for unsupported gaps */
+ if (nogap && entries[i].entry != i)
+ return false;
}
- return msix_capability_init(dev, entries, nvec, affd);
+ return true;
}
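Illustrative entry arrays checked against a hardware table of hwsize == 8:

	struct msix_entry ok[]  = { {.entry = 0}, {.entry = 1}, {.entry = 2} };
	struct msix_entry gap[] = { {.entry = 0}, {.entry = 5} };
	struct msix_entry dup[] = { {.entry = 1}, {.entry = 1} };
	struct msix_entry oob[] = { {.entry = 9} };

	/* ok:  valid everywhere (in range, unique, contiguous from 0)
	 * gap: valid only when the domain does not enforce
	 *      MSI_FLAG_MSIX_CONTIGUOUS
	 * dup: always rejected (duplicate entry)
	 * oob: always rejected (entry >= hwsize) */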
-static void pci_msix_shutdown(struct pci_dev *dev)
+int __pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries, int minvec,
+ int maxvec, struct irq_affinity *affd, int flags)
{
- struct msi_desc *desc;
+ int hwsize, rc, nvec = maxvec;
- if (!pci_msi_enable || !dev || !dev->msix_enabled)
- return;
+ if (maxvec < minvec)
+ return -ERANGE;
- if (pci_dev_is_disconnected(dev)) {
- dev->msix_enabled = 0;
- return;
+ if (dev->msi_enabled) {
+ pci_info(dev, "can't enable MSI-X (MSI already enabled)\n");
+ return -EINVAL;
}
- /* Return the device with MSI-X masked as initial states */
- msi_for_each_desc(desc, &dev->dev, MSI_DESC_ALL)
- pci_msix_mask(desc);
+ if (WARN_ON_ONCE(dev->msix_enabled))
+ return -EINVAL;
- pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
- pci_intx_for_msi(dev, 1);
- dev->msix_enabled = 0;
- pcibios_alloc_irq(dev);
-}
+ /* Check MSI-X early on irq domain enabled architectures */
+ if (!pci_msi_domain_supports(dev, MSI_FLAG_PCI_MSIX, ALLOW_LEGACY))
+ return -ENOTSUPP;
-void pci_disable_msix(struct pci_dev *dev)
-{
- if (!pci_msi_enable || !dev || !dev->msix_enabled)
- return;
-
- msi_lock_descs(&dev->dev);
- pci_msix_shutdown(dev);
- free_msi_irqs(dev);
- msi_unlock_descs(&dev->dev);
-}
-EXPORT_SYMBOL(pci_disable_msix);
+ if (!pci_msi_supported(dev, nvec) || dev->current_state != PCI_D0)
+ return -EINVAL;
-static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
- struct irq_affinity *affd)
-{
- int nvec;
- int rc;
+ hwsize = pci_msix_vec_count(dev);
+ if (hwsize < 0)
+ return hwsize;
- if (!pci_msi_supported(dev, minvec) || dev->current_state != PCI_D0)
+ if (!pci_msix_validate_entries(dev, entries, nvec, hwsize))
return -EINVAL;
- /* Check whether driver already requested MSI-X IRQs */
- if (dev->msix_enabled) {
- pci_info(dev, "can't enable MSI (MSI-X already enabled)\n");
- return -EINVAL;
+ if (hwsize < nvec) {
+ /* Keep the IRQ virtual hackery working */
+ if (flags & PCI_IRQ_VIRTUAL)
+ hwsize = nvec;
+ else
+ nvec = hwsize;
}
- if (maxvec < minvec)
- return -ERANGE;
-
- if (WARN_ON_ONCE(dev->msi_enabled))
- return -EINVAL;
-
- nvec = pci_msi_vec_count(dev);
- if (nvec < 0)
- return nvec;
if (nvec < minvec)
return -ENOSPC;
- if (nvec > maxvec)
- nvec = maxvec;
-
rc = pci_setup_msi_context(dev);
if (rc)
return rc;
+ if (!pci_setup_msix_device_domain(dev, hwsize))
+ return -ENODEV;
+
for (;;) {
if (affd) {
nvec = irq_calc_affinity_vectors(minvec, nvec, affd);
@@ -902,7 +833,7 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
return -ENOSPC;
}
- rc = msi_capability_init(dev, nvec, affd);
+ rc = msix_capability_init(dev, entries, nvec, affd);
if (rc == 0)
return nvec;
@@ -915,214 +846,67 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
}
}
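A hypothetical caller of the exported wrapper built on top of this function (pci_enable_msix_range(), whose old body is removed further down):

	struct msix_entry entries[] = {
		{ .entry = 0 }, { .entry = 1 }, { .entry = 2 }, { .entry = 3 },
	};
	int nvec = pci_enable_msix_range(pdev, entries, 2, ARRAY_SIZE(entries));

	if (nvec < 0)
		return nvec;	/* not even the minimum of 2 was available */
	/* entries[i].vector now holds the Linux IRQ for table slot i */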
-/* deprecated, don't use */
-int pci_enable_msi(struct pci_dev *dev)
-{
- int rc = __pci_enable_msi_range(dev, 1, 1, NULL);
- if (rc < 0)
- return rc;
- return 0;
-}
-EXPORT_SYMBOL(pci_enable_msi);
-
-static int __pci_enable_msix_range(struct pci_dev *dev,
- struct msix_entry *entries, int minvec,
- int maxvec, struct irq_affinity *affd,
- int flags)
+void __pci_restore_msix_state(struct pci_dev *dev)
{
- int rc, nvec = maxvec;
-
- if (maxvec < minvec)
- return -ERANGE;
-
- if (WARN_ON_ONCE(dev->msix_enabled))
- return -EINVAL;
-
- rc = pci_setup_msi_context(dev);
- if (rc)
- return rc;
+ struct msi_desc *entry;
+ bool write_msg;
- for (;;) {
- if (affd) {
- nvec = irq_calc_affinity_vectors(minvec, nvec, affd);
- if (nvec < minvec)
- return -ENOSPC;
- }
+ if (!dev->msix_enabled)
+ return;
- rc = __pci_enable_msix(dev, entries, nvec, affd, flags);
- if (rc == 0)
- return nvec;
+ /* route the table */
+ pci_intx_for_msi(dev, 0);
+ pci_msix_clear_and_set_ctrl(dev, 0,
+ PCI_MSIX_FLAGS_ENABLE | PCI_MSIX_FLAGS_MASKALL);
- if (rc < 0)
- return rc;
- if (rc < minvec)
- return -ENOSPC;
+ write_msg = arch_restore_msi_irqs(dev);
- nvec = rc;
+ msi_lock_descs(&dev->dev);
+ msi_for_each_desc(entry, &dev->dev, MSI_DESC_ALL) {
+ if (write_msg)
+ __pci_write_msi_msg(entry, &entry->msg);
+ pci_msix_write_vector_ctrl(entry, entry->pci.msix_ctrl);
}
-}
+ msi_unlock_descs(&dev->dev);
-/**
- * pci_enable_msix_range - configure device's MSI-X capability structure
- * @dev: pointer to the pci_dev data structure of MSI-X device function
- * @entries: pointer to an array of MSI-X entries
- * @minvec: minimum number of MSI-X IRQs requested
- * @maxvec: maximum number of MSI-X IRQs requested
- *
- * Setup the MSI-X capability structure of device function with a maximum
- * possible number of interrupts in the range between @minvec and @maxvec
- * upon its software driver call to request for MSI-X mode enabled on its
- * hardware device function. It returns a negative errno if an error occurs.
- * If it succeeds, it returns the actual number of interrupts allocated and
- * indicates the successful configuration of MSI-X capability structure
- * with new allocated MSI-X interrupts.
- **/
-int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
- int minvec, int maxvec)
-{
- return __pci_enable_msix_range(dev, entries, minvec, maxvec, NULL, 0);
+ pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
}
-EXPORT_SYMBOL(pci_enable_msix_range);
-/**
- * pci_alloc_irq_vectors_affinity - allocate multiple IRQs for a device
- * @dev: PCI device to operate on
- * @min_vecs: minimum number of vectors required (must be >= 1)
- * @max_vecs: maximum (desired) number of vectors
- * @flags: flags or quirks for the allocation
- * @affd: optional description of the affinity requirements
- *
- * Allocate up to @max_vecs interrupt vectors for @dev, using MSI-X or MSI
- * vectors if available, and fall back to a single legacy vector
- * if neither is available. Return the number of vectors allocated,
- * (which might be smaller than @max_vecs) if successful, or a negative
- * error code on error. If less than @min_vecs interrupt vectors are
- * available for @dev the function will fail with -ENOSPC.
- *
- * To get the Linux IRQ number used for a vector that can be passed to
- * request_irq() use the pci_irq_vector() helper.
- */
-int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
- unsigned int max_vecs, unsigned int flags,
- struct irq_affinity *affd)
+void pci_msix_shutdown(struct pci_dev *dev)
{
- struct irq_affinity msi_default_affd = {0};
- int nvecs = -ENOSPC;
-
- if (flags & PCI_IRQ_AFFINITY) {
- if (!affd)
- affd = &msi_default_affd;
- } else {
- if (WARN_ON(affd))
- affd = NULL;
- }
+ struct msi_desc *desc;
- if (flags & PCI_IRQ_MSIX) {
- nvecs = __pci_enable_msix_range(dev, NULL, min_vecs, max_vecs,
- affd, flags);
- if (nvecs > 0)
- return nvecs;
- }
+ if (!pci_msi_enable || !dev || !dev->msix_enabled)
+ return;
- if (flags & PCI_IRQ_MSI) {
- nvecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, affd);
- if (nvecs > 0)
- return nvecs;
+ if (pci_dev_is_disconnected(dev)) {
+ dev->msix_enabled = 0;
+ return;
}
- /* use legacy IRQ if allowed */
- if (flags & PCI_IRQ_LEGACY) {
- if (min_vecs == 1 && dev->irq) {
- /*
- * Invoke the affinity spreading logic to ensure that
- * the device driver can adjust queue configuration
- * for the single interrupt case.
- */
- if (affd)
- irq_create_affinity_masks(1, affd);
- pci_intx(dev, 1);
- return 1;
- }
- }
+ /* Return the device with MSI-X masked, as in its initial state */
+ msi_for_each_desc(desc, &dev->dev, MSI_DESC_ALL)
+ pci_msix_mask(desc);
- return nvecs;
+ pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
+ pci_intx_for_msi(dev, 1);
+ dev->msix_enabled = 0;
+ pcibios_alloc_irq(dev);
}
-EXPORT_SYMBOL(pci_alloc_irq_vectors_affinity);
-/**
- * pci_free_irq_vectors - free previously allocated IRQs for a device
- * @dev: PCI device to operate on
- *
- * Undoes the allocations and enabling in pci_alloc_irq_vectors().
- */
-void pci_free_irq_vectors(struct pci_dev *dev)
-{
- pci_disable_msix(dev);
- pci_disable_msi(dev);
-}
-EXPORT_SYMBOL(pci_free_irq_vectors);
+/* Common interfaces */
-/**
- * pci_irq_vector - return Linux IRQ number of a device vector
- * @dev: PCI device to operate on
- * @nr: Interrupt vector index (0-based)
- *
- * @nr has the following meanings depending on the interrupt mode:
- * MSI-X: The index in the MSI-X vector table
- * MSI: The index of the enabled MSI vectors
- * INTx: Must be 0
- *
- * Return: The Linux interrupt number or -EINVAl if @nr is out of range.
- */
-int pci_irq_vector(struct pci_dev *dev, unsigned int nr)
+void pci_free_msi_irqs(struct pci_dev *dev)
{
- unsigned int irq;
-
- if (!dev->msi_enabled && !dev->msix_enabled)
- return !nr ? dev->irq : -EINVAL;
+ pci_msi_teardown_msi_irqs(dev);
- irq = msi_get_virq(&dev->dev, nr);
- return irq ? irq : -EINVAL;
+ if (dev->msix_base) {
+ iounmap(dev->msix_base);
+ dev->msix_base = NULL;
+ }
}
-EXPORT_SYMBOL(pci_irq_vector);
-/**
- * pci_irq_get_affinity - return the affinity of a particular MSI vector
- * @dev: PCI device to operate on
- * @nr: device-relative interrupt vector index (0-based).
- *
- * @nr has the following meanings depending on the interrupt mode:
- * MSI-X: The index in the MSI-X vector table
- * MSI: The index of the enabled MSI vectors
- * INTx: Must be 0
- *
- * Return: A cpumask pointer or NULL if @nr is out of range
- */
-const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
-{
- int idx, irq = pci_irq_vector(dev, nr);
- struct msi_desc *desc;
-
- if (WARN_ON_ONCE(irq <= 0))
- return NULL;
-
- desc = irq_get_msi_desc(irq);
- /* Non-MSI does not have the information handy */
- if (!desc)
- return cpu_possible_mask;
-
- /* MSI[X] interrupts can be allocated without affinity descriptor */
- if (!desc->affinity)
- return NULL;
-
- /*
- * MSI has a mask array in the descriptor.
- * MSI-X has a single mask.
- */
- idx = dev->msi_enabled ? nr : 0;
- return &desc->affinity[idx].mask;
-}
-EXPORT_SYMBOL(pci_irq_get_affinity);
+/* Misc. infrastructure */
struct pci_dev *msi_desc_to_pci_dev(struct msi_desc *desc)
{
@@ -1134,15 +918,3 @@ void pci_no_msi(void)
{
pci_msi_enable = 0;
}
-
-/**
- * pci_msi_enabled - is MSI enabled?
- *
- * Returns true if MSI has not been disabled by the command-line option
- * pci=nomsi.
- **/
-int pci_msi_enabled(void)
-{
- return pci_msi_enable;
-}
-EXPORT_SYMBOL(pci_msi_enabled);
diff --git a/drivers/pci/msi/msi.h b/drivers/pci/msi/msi.h
index dbeff066bedd..ee53cf079f4e 100644
--- a/drivers/pci/msi/msi.h
+++ b/drivers/pci/msi/msi.h
@@ -5,24 +5,70 @@
#define msix_table_size(flags) ((flags & PCI_MSIX_FLAGS_QSIZE) + 1)
-extern int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
-extern void pci_msi_teardown_msi_irqs(struct pci_dev *dev);
+int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
+void pci_msi_teardown_msi_irqs(struct pci_dev *dev);
-#ifdef CONFIG_PCI_MSI_ARCH_FALLBACKS
-extern int pci_msi_legacy_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
-extern void pci_msi_legacy_teardown_msi_irqs(struct pci_dev *dev);
-#else
-static inline int pci_msi_legacy_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
+/* Mask/unmask helpers */
+void pci_msi_update_mask(struct msi_desc *desc, u32 clear, u32 set);
+
+static inline void pci_msi_mask(struct msi_desc *desc, u32 mask)
{
- WARN_ON_ONCE(1);
- return -ENODEV;
+ pci_msi_update_mask(desc, 0, mask);
}
-static inline void pci_msi_legacy_teardown_msi_irqs(struct pci_dev *dev)
+static inline void pci_msi_unmask(struct msi_desc *desc, u32 mask)
{
- WARN_ON_ONCE(1);
+ pci_msi_update_mask(desc, mask, 0);
+}
+
+static inline void __iomem *pci_msix_desc_addr(struct msi_desc *desc)
+{
+ return desc->pci.mask_base + desc->msi_index * PCI_MSIX_ENTRY_SIZE;
+}
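Each MSI-X table entry is PCI_MSIX_ENTRY_SIZE (16) bytes, so the lookup above is plain pointer arithmetic:

	/* e.g. msi_index == 3 */
	void __iomem *entry3 = desc->pci.mask_base + 3 * PCI_MSIX_ENTRY_SIZE;
	/* == mask_base + 0x30; entry layout: addr_lo, addr_hi, data, ctrl */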
+
+/*
+ * This internal function does not flush PCI writes to the device. All
+ * users must ensure that they read from the device before either assuming
+ * that the device state is up to date, or returning out of this file.
+ * It does not affect the msi_desc::msix_ctrl cache either. Use with care!
+ */
+static inline void pci_msix_write_vector_ctrl(struct msi_desc *desc, u32 ctrl)
+{
+ void __iomem *desc_addr = pci_msix_desc_addr(desc);
+
+ if (desc->pci.msi_attrib.can_mask)
+ writel(ctrl, desc_addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
+}
+
+static inline void pci_msix_mask(struct msi_desc *desc)
+{
+ desc->pci.msix_ctrl |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
+ pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl);
+ /* Flush write to device */
+ readl(desc->pci.mask_base);
+}
+
+static inline void pci_msix_unmask(struct msi_desc *desc)
+{
+ desc->pci.msix_ctrl &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT;
+ pci_msix_write_vector_ctrl(desc, desc->pci.msix_ctrl);
+}
+
+static inline void __pci_msi_mask_desc(struct msi_desc *desc, u32 mask)
+{
+ if (desc->pci.msi_attrib.is_msix)
+ pci_msix_mask(desc);
+ else
+ pci_msi_mask(desc, mask);
+}
+
+static inline void __pci_msi_unmask_desc(struct msi_desc *desc, u32 mask)
+{
+ if (desc->pci.msi_attrib.is_msix)
+ pci_msix_unmask(desc);
+ else
+ pci_msi_unmask(desc, mask);
}
-#endif
/*
* PCI 2.3 does not specify mask bits for each MSI interrupt. Attempting to
@@ -37,3 +83,47 @@ static inline __attribute_const__ u32 msi_multi_mask(struct msi_desc *desc)
return 0xffffffff;
return (1 << (1 << desc->pci.msi_attrib.multi_cap)) - 1;
}
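Worked values for the mask computation (the early 0xffffffff return above covers the multi_cap >= 5 case, where 1 << 32 would overflow a 32-bit value):

	/* multi_cap == 0 -> 1 vector   -> mask 0x00000001
	 * multi_cap == 3 -> 8 vectors  -> mask 0x000000ff
	 * multi_cap >= 5 -> 32 vectors -> 0xffffffff via the special case */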
+
+void msix_prepare_msi_desc(struct pci_dev *dev, struct msi_desc *desc);
+
+/* Subsystem variables */
+extern int pci_msi_enable;
+
+/* MSI internal functions invoked from the public APIs */
+void pci_msi_shutdown(struct pci_dev *dev);
+void pci_msix_shutdown(struct pci_dev *dev);
+void pci_free_msi_irqs(struct pci_dev *dev);
+int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec, struct irq_affinity *affd);
+int __pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries, int minvec,
+ int maxvec, struct irq_affinity *affd, int flags);
+void __pci_restore_msi_state(struct pci_dev *dev);
+void __pci_restore_msix_state(struct pci_dev *dev);
+
+/* irq_domain related functionality */
+
+enum support_mode {
+ ALLOW_LEGACY,
+ DENY_LEGACY,
+};
+
+bool pci_msi_domain_supports(struct pci_dev *dev, unsigned int feature_mask, enum support_mode mode);
+bool pci_setup_msi_device_domain(struct pci_dev *pdev);
+bool pci_setup_msix_device_domain(struct pci_dev *pdev, unsigned int hwsize);
+
+/* Legacy (!IRQDOMAIN) fallbacks */
+
+#ifdef CONFIG_PCI_MSI_ARCH_FALLBACKS
+int pci_msi_legacy_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
+void pci_msi_legacy_teardown_msi_irqs(struct pci_dev *dev);
+#else
+static inline int pci_msi_legacy_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
+{
+ WARN_ON_ONCE(1);
+ return -ENODEV;
+}
+
+static inline void pci_msi_legacy_teardown_msi_irqs(struct pci_dev *dev)
+{
+ WARN_ON_ONCE(1);
+}
+#endif
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index b66fa42c4b1f..fdd7e56ddf40 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -842,7 +842,6 @@ static struct irq_domain *pci_host_bridge_msi_domain(struct pci_bus *bus)
if (!d)
d = pci_host_bridge_acpi_msi_domain(bus);
-#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
/*
* If no IRQ domain was found via the OF tree, try looking it up
* directly through the fwnode_handle.
@@ -854,7 +853,6 @@ static struct irq_domain *pci_host_bridge_msi_domain(struct pci_bus *bus)
d = irq_find_matching_fwnode(fwnode,
DOMAIN_BUS_PCI_MSI);
}
-#endif
return d;
}