author     Linus Torvalds <torvalds@linux-foundation.org>  2021-02-25 09:56:08 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>  2021-02-25 09:56:08 -0800
commit     5b47b10e8fb92f8beca6aa8a7d97fc84e090384c (patch)
tree       74689585f56f51dadcd621d040f7b03c7cafdd86 /drivers
parent     6c15f9e805f22566d7547551f359aba04b611f9d (diff)
parent     e18fb64b79860cf5f381208834b8fbc493ef7cbc (diff)
Merge tag 'pci-v5.12-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:

 "Enumeration:
   - Remove unnecessary locking around _OSC (Bjorn Helgaas)
   - Clarify message about _OSC failure (Bjorn Helgaas)
   - Remove notification of PCIe bandwidth changes (Bjorn Helgaas)
   - Tidy checking of syscall user config accessors (Heiner Kallweit)

  Resource management:
   - Decline to resize resources if boot config must be preserved (Ard Biesheuvel)
   - Fix pci_register_io_range() memory leak (Geert Uytterhoeven)

  Error handling (Keith Busch):
   - Clear error status from the correct device
   - Retain error recovery status so drivers can use it after reset
   - Log the type of Port (Root or Switch Downstream) that we reset
   - Always request a reset for Downstream Ports in frozen state

  Endpoint framework and NTB (Kishon Vijay Abraham I):
   - Make *_get_first_free_bar() take into account 64 bit BAR
   - Add helper API to get the 'next' unreserved BAR
   - Make *_free_bar() return error codes on failure
   - Remove unused pci_epf_match_device()
   - Add support to associate secondary EPC with EPF
   - Add support in configfs to associate two EPCs with EPF
   - Add pci_epc_ops to map MSI IRQ
   - Add pci_epf_ops to expose function-specific attrs
   - Allow user to create sub-directory of 'EPF Device' directory
   - Implement ->msi_map_irq() ops for cadence
   - Configure LM_EP_FUNC_CFG based on epc->function_num_map for cadence
   - Add EP function driver to provide NTB functionality
   - Add support for EPF PCI Non-Transparent Bridge
   - Add specification for PCI NTB function device
   - Add PCI endpoint NTB function user guide
   - Add configfs binding documentation for pci-ntb endpoint function

  Broadcom STB PCIe controller driver:
   - Add support for BCM4908 and external PERST# signal controller (Rafał Miłecki)

  Cadence PCIe controller driver:
   - Retrain Link to work around Gen2 training defect (Nadeem Athani)
   - Fix merge botch in cdns_pcie_host_map_dma_ranges() (Krzysztof Wilczyński)

  Freescale Layerscape PCIe controller driver:
   - Add LX2160A rev2 EP mode support (Hou Zhiqiang)
   - Convert to builtin_platform_driver() (Michael Walle)

  MediaTek PCIe controller driver:
   - Fix OF node reference leak (Krzysztof Wilczyński)

  Microchip PolarFire PCIe controller driver:
   - Add Microchip PolarFire PCIe controller driver (Daire McNamara)

  Qualcomm PCIe controller driver:
   - Use PHY_REFCLK_USE_PAD only for ipq8064 (Ansuel Smith)
   - Add support for ddrss_sf_tbu clock for sm8250 (Dmitry Baryshkov)

  Renesas R-Car PCIe controller driver:
   - Drop PCIE_RCAR config option (Lad Prabhakar)
   - Always allocate MSI addresses in 32bit space (Marek Vasut)

  Rockchip PCIe controller driver:
   - Add FriendlyARM NanoPi M4B DT binding (Chen-Yu Tsai)
   - Make 'ep-gpios' DT property optional (Chen-Yu Tsai)

  Synopsys DesignWare PCIe controller driver:
   - Work around ECRC configuration hardware defect (Vidya Sagar)
   - Drop support for config space in DT 'ranges' (Rob Herring)
   - Change size to u64 for EP outbound iATU (Shradha Todi)
   - Add upper limit address for outbound iATU (Shradha Todi)
   - Make dw_pcie ops optional (Jisheng Zhang)
   - Remove unnecessary dw_pcie_ops from al driver (Jisheng Zhang)

  Xilinx Versal CPM PCIe controller driver:
   - Fix OF node reference leak (Pan Bian)

  Miscellaneous:
   - Remove tango host controller driver (Arnd Bergmann)
   - Remove IRQ handler & data together (altera-msi, brcmstb, dwc) (Martin Kaiser)
   - Fix xgene-msi race in installing chained IRQ handler (Martin Kaiser)
   - Apply CONFIG_PCI_DEBUG to entire drivers/pci hierarchy (Junhao He)
   - Fix pci-bridge-emul array overruns (Russell King)
   - Remove obsolete uses of WARN_ON(in_interrupt()) (Sebastian Andrzej Siewior)"

* tag 'pci-v5.12-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (69 commits)
  PCI: qcom: Use PHY_REFCLK_USE_PAD only for ipq8064
  PCI: qcom: Add support for ddrss_sf_tbu clock
  dt-bindings: PCI: qcom: Document ddrss_sf_tbu clock for sm8250
  PCI: al: Remove useless dw_pcie_ops
  PCI: dwc: Don't assume the ops in dw_pcie always exist
  PCI: dwc: Add upper limit address for outbound iATU
  PCI: dwc: Change size to u64 for EP outbound iATU
  PCI: dwc: Drop support for config space in 'ranges'
  PCI: layerscape: Convert to builtin_platform_driver()
  PCI: layerscape: Add LX2160A rev2 EP mode support
  dt-bindings: PCI: layerscape: Add LX2160A rev2 compatible strings
  PCI: dwc: Work around ECRC configuration issue
  PCI/portdrv: Report reset for frozen channel
  PCI/AER: Specify the type of Port that was reset
  PCI/ERR: Retain status from error notification
  PCI/AER: Clear AER status from Root Port when resetting Downstream Port
  PCI/ERR: Clear status of the reporting device
  dt-bindings: arm: rockchip: Add FriendlyARM NanoPi M4B
  PCI: rockchip: Make 'ep-gpios' DT property optional
  Documentation: PCI: Add PCI endpoint NTB function user guide
  ...
Diffstat (limited to 'drivers')
-rw-r--r--  drivers/acpi/pci_root.c | 40
-rw-r--r--  drivers/gpu/drm/qxl/qxl_drv.c | 2
-rw-r--r--  drivers/misc/pci_endpoint_test.c | 1
-rw-r--r--  drivers/net/wireless/intel/iwlwifi/fw/file.h | 2
-rw-r--r--  drivers/ntb/hw/Kconfig | 1
-rw-r--r--  drivers/ntb/hw/Makefile | 1
-rw-r--r--  drivers/ntb/hw/epf/Kconfig | 6
-rw-r--r--  drivers/ntb/hw/epf/Makefile | 1
-rw-r--r--  drivers/ntb/hw/epf/ntb_hw_epf.c | 753
-rw-r--r--  drivers/pci/Makefile | 2
-rw-r--r--  drivers/pci/controller/Kconfig | 35
-rw-r--r--  drivers/pci/controller/Makefile | 2
-rw-r--r--  drivers/pci/controller/cadence/pci-j721e.c | 3
-rw-r--r--  drivers/pci/controller/cadence/pcie-cadence-ep.c | 60
-rw-r--r--  drivers/pci/controller/cadence/pcie-cadence-host.c | 86
-rw-r--r--  drivers/pci/controller/cadence/pcie-cadence.h | 11
-rw-r--r--  drivers/pci/controller/dwc/pci-layerscape-ep.c | 7
-rw-r--r--  drivers/pci/controller/dwc/pci-layerscape.c | 5
-rw-r--r--  drivers/pci/controller/dwc/pcie-al.c | 4
-rw-r--r--  drivers/pci/controller/dwc/pcie-designware-ep.c | 8
-rw-r--r--  drivers/pci/controller/dwc/pcie-designware-host.c | 53
-rw-r--r--  drivers/pci/controller/dwc/pcie-designware.c | 70
-rw-r--r--  drivers/pci/controller/dwc/pcie-designware.h | 4
-rw-r--r--  drivers/pci/controller/dwc/pcie-qcom.c | 22
-rw-r--r--  drivers/pci/controller/pci-host-common.c | 4
-rw-r--r--  drivers/pci/controller/pci-hyperv.c | 2
-rw-r--r--  drivers/pci/controller/pci-xgene-msi.c | 10
-rw-r--r--  drivers/pci/controller/pci-xgene.c | 13
-rw-r--r--  drivers/pci/controller/pcie-altera-msi.c | 3
-rw-r--r--  drivers/pci/controller/pcie-brcmstb.c | 35
-rw-r--r--  drivers/pci/controller/pcie-mediatek.c | 7
-rw-r--r--  drivers/pci/controller/pcie-microchip-host.c | 1138
-rw-r--r--  drivers/pci/controller/pcie-rcar-host.c | 2
-rw-r--r--  drivers/pci/controller/pcie-rockchip.c | 12
-rw-r--r--  drivers/pci/controller/pcie-tango.c | 341
-rw-r--r--  drivers/pci/controller/pcie-xilinx-cpm.c | 1
-rw-r--r--  drivers/pci/endpoint/functions/Kconfig | 13
-rw-r--r--  drivers/pci/endpoint/functions/Makefile | 1
-rw-r--r--  drivers/pci/endpoint/functions/pci-epf-ntb.c | 2128
-rw-r--r--  drivers/pci/endpoint/functions/pci-epf-test.c | 13
-rw-r--r--  drivers/pci/endpoint/pci-ep-cfs.c | 176
-rw-r--r--  drivers/pci/endpoint/pci-epc-core.c | 130
-rw-r--r--  drivers/pci/endpoint/pci-epf-core.c | 105
-rw-r--r--  drivers/pci/hotplug/acpiphp.h | 3
-rw-r--r--  drivers/pci/pci-bridge-emul.c | 11
-rw-r--r--  drivers/pci/pci.c | 4
-rw-r--r--  drivers/pci/pcie/Kconfig | 8
-rw-r--r--  drivers/pci/pcie/Makefile | 1
-rw-r--r--  drivers/pci/pcie/aer.c | 5
-rw-r--r--  drivers/pci/pcie/bw_notification.c | 138
-rw-r--r--  drivers/pci/pcie/err.c | 16
-rw-r--r--  drivers/pci/pcie/portdrv.h | 6
-rw-r--r--  drivers/pci/pcie/portdrv_pci.c | 4
-rw-r--r--  drivers/pci/search.c | 4
-rw-r--r--  drivers/pci/setup-res.c | 6
-rw-r--r--  drivers/pci/syscall.c | 10
56 files changed, 4772 insertions, 757 deletions
diff --git a/drivers/acpi/pci_root.c b/drivers/acpi/pci_root.c
index 0bf072cef6cf..dcd593766a64 100644
--- a/drivers/acpi/pci_root.c
+++ b/drivers/acpi/pci_root.c
@@ -56,8 +56,6 @@ static struct acpi_scan_handler pci_root_handler = {
},
};
-static DEFINE_MUTEX(osc_lock);
-
/**
* acpi_is_root_bridge - determine whether an ACPI CA node is a PCI root bridge
* @handle: the ACPI CA node in question.
@@ -223,12 +221,7 @@ static acpi_status acpi_pci_query_osc(struct acpi_pci_root *root,
static acpi_status acpi_pci_osc_support(struct acpi_pci_root *root, u32 flags)
{
- acpi_status status;
-
- mutex_lock(&osc_lock);
- status = acpi_pci_query_osc(root, flags, NULL);
- mutex_unlock(&osc_lock);
- return status;
+ return acpi_pci_query_osc(root, flags, NULL);
}
struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle)
@@ -353,10 +346,10 @@ EXPORT_SYMBOL_GPL(acpi_get_pci_dev);
* _OSC bits the BIOS has granted control of, but its contents are meaningless
* on failure.
**/
-acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
+static acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
{
struct acpi_pci_root *root;
- acpi_status status = AE_OK;
+ acpi_status status;
u32 ctrl, capbuf[3];
if (!mask)
@@ -370,18 +363,16 @@ acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
if (!root)
return AE_NOT_EXIST;
- mutex_lock(&osc_lock);
-
*mask = ctrl | root->osc_control_set;
/* No need to evaluate _OSC if the control was already granted. */
if ((root->osc_control_set & ctrl) == ctrl)
- goto out;
+ return AE_OK;
/* Need to check the available controls bits before requesting them. */
while (*mask) {
status = acpi_pci_query_osc(root, root->osc_support_set, mask);
if (ACPI_FAILURE(status))
- goto out;
+ return status;
if (ctrl == *mask)
break;
decode_osc_control(root, "platform does not support",
@@ -392,21 +383,19 @@ acpi_status acpi_pci_osc_control_set(acpi_handle handle, u32 *mask, u32 req)
if ((ctrl & req) != req) {
decode_osc_control(root, "not requesting control; platform does not support",
req & ~(ctrl));
- status = AE_SUPPORT;
- goto out;
+ return AE_SUPPORT;
}
capbuf[OSC_QUERY_DWORD] = 0;
capbuf[OSC_SUPPORT_DWORD] = root->osc_support_set;
capbuf[OSC_CONTROL_DWORD] = ctrl;
status = acpi_pci_run_osc(handle, capbuf, mask);
- if (ACPI_SUCCESS(status))
- root->osc_control_set = *mask;
-out:
- mutex_unlock(&osc_lock);
- return status;
+ if (ACPI_FAILURE(status))
+ return status;
+
+ root->osc_control_set = *mask;
+ return AE_OK;
}
-EXPORT_SYMBOL(acpi_pci_osc_control_set);
static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
bool is_pcie)
@@ -452,9 +441,8 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
if ((status == AE_NOT_FOUND) && !is_pcie)
return;
- dev_info(&device->dev, "_OSC failed (%s)%s\n",
- acpi_format_exception(status),
- pcie_aspm_support_enabled() ? "; disabling ASPM" : "");
+ dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n",
+ acpi_format_exception(status));
return;
}
@@ -510,7 +498,7 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
} else {
decode_osc_control(root, "OS requested", requested);
decode_osc_control(root, "platform willing to grant", control);
- dev_info(&device->dev, "_OSC failed (%s); disabling ASPM\n",
+ dev_info(&device->dev, "_OSC: platform retains control of PCIe features (%s)\n",
acpi_format_exception(status));
/*
diff --git a/drivers/gpu/drm/qxl/qxl_drv.c b/drivers/gpu/drm/qxl/qxl_drv.c
index fb5f6a5e81d7..1864467f1063 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.c
+++ b/drivers/gpu/drm/qxl/qxl_drv.c
@@ -141,7 +141,7 @@ static void qxl_drm_release(struct drm_device *dev)
/*
* TODO: qxl_device_fini() call should be in qxl_pci_remove(),
- * reodering qxl_modeset_fini() + qxl_device_fini() calls is
+ * reordering qxl_modeset_fini() + qxl_device_fini() calls is
* non-trivial though.
*/
qxl_modeset_fini(qdev);
diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
index eff481ce08ee..1b2868ca4f2a 100644
--- a/drivers/misc/pci_endpoint_test.c
+++ b/drivers/misc/pci_endpoint_test.c
@@ -68,7 +68,6 @@
#define PCI_ENDPOINT_TEST_FLAGS 0x2c
#define FLAG_USE_DMA BIT(0)
-#define PCI_DEVICE_ID_TI_J721E 0xb00d
#define PCI_DEVICE_ID_TI_AM654 0xb00c
#define PCI_DEVICE_ID_LS1088A 0x80c0
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/file.h b/drivers/net/wireless/intel/iwlwifi/fw/file.h
index f2e7b735d211..35dffcaf5aba 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/file.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/file.h
@@ -870,7 +870,7 @@ struct iwl_fw_dbg_trigger_time_event {
* tx_bar: tid bitmap to configure on what tid the trigger should occur
* when a BAR is send (for an Rx BlocAck session).
* frame_timeout: tid bitmap to configure on what tid the trigger should occur
- * when a frame times out in the reodering buffer.
+ * when a frame times out in the reordering buffer.
*/
struct iwl_fw_dbg_trigger_ba {
__le16 rx_ba_start;
diff --git a/drivers/ntb/hw/Kconfig b/drivers/ntb/hw/Kconfig
index e77c587060ff..c325be526b80 100644
--- a/drivers/ntb/hw/Kconfig
+++ b/drivers/ntb/hw/Kconfig
@@ -2,4 +2,5 @@
source "drivers/ntb/hw/amd/Kconfig"
source "drivers/ntb/hw/idt/Kconfig"
source "drivers/ntb/hw/intel/Kconfig"
+source "drivers/ntb/hw/epf/Kconfig"
source "drivers/ntb/hw/mscc/Kconfig"
diff --git a/drivers/ntb/hw/Makefile b/drivers/ntb/hw/Makefile
index 4714d6238845..223ca592b5f9 100644
--- a/drivers/ntb/hw/Makefile
+++ b/drivers/ntb/hw/Makefile
@@ -2,4 +2,5 @@
obj-$(CONFIG_NTB_AMD) += amd/
obj-$(CONFIG_NTB_IDT) += idt/
obj-$(CONFIG_NTB_INTEL) += intel/
+obj-$(CONFIG_NTB_EPF) += epf/
obj-$(CONFIG_NTB_SWITCHTEC) += mscc/
diff --git a/drivers/ntb/hw/epf/Kconfig b/drivers/ntb/hw/epf/Kconfig
new file mode 100644
index 000000000000..6197d1aab344
--- /dev/null
+++ b/drivers/ntb/hw/epf/Kconfig
@@ -0,0 +1,6 @@
+config NTB_EPF
+ tristate "Generic EPF Non-Transparent Bridge support"
+ depends on m
+ help
+ This driver supports EPF NTB on configurable endpoint.
+ If unsure, say N.
diff --git a/drivers/ntb/hw/epf/Makefile b/drivers/ntb/hw/epf/Makefile
new file mode 100644
index 000000000000..2f560a422bc6
--- /dev/null
+++ b/drivers/ntb/hw/epf/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_NTB_EPF) += ntb_hw_epf.o
diff --git a/drivers/ntb/hw/epf/ntb_hw_epf.c b/drivers/ntb/hw/epf/ntb_hw_epf.c
new file mode 100644
index 000000000000..b019755e4e21
--- /dev/null
+++ b/drivers/ntb/hw/epf/ntb_hw_epf.c
@@ -0,0 +1,753 @@
+// SPDX-License-Identifier: GPL-2.0
+/**
+ * Host side endpoint driver to implement Non-Transparent Bridge functionality
+ *
+ * Copyright (C) 2020 Texas Instruments
+ * Author: Kishon Vijay Abraham I <kishon@ti.com>
+ */
+
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/ntb.h>
+
+#define NTB_EPF_COMMAND 0x0
+#define CMD_CONFIGURE_DOORBELL 1
+#define CMD_TEARDOWN_DOORBELL 2
+#define CMD_CONFIGURE_MW 3
+#define CMD_TEARDOWN_MW 4
+#define CMD_LINK_UP 5
+#define CMD_LINK_DOWN 6
+
+#define NTB_EPF_ARGUMENT 0x4
+#define MSIX_ENABLE BIT(16)
+
+#define NTB_EPF_CMD_STATUS 0x8
+#define COMMAND_STATUS_OK 1
+#define COMMAND_STATUS_ERROR 2
+
+#define NTB_EPF_LINK_STATUS 0x0A
+#define LINK_STATUS_UP BIT(0)
+
+#define NTB_EPF_TOPOLOGY 0x0C
+#define NTB_EPF_LOWER_ADDR 0x10
+#define NTB_EPF_UPPER_ADDR 0x14
+#define NTB_EPF_LOWER_SIZE 0x18
+#define NTB_EPF_UPPER_SIZE 0x1C
+#define NTB_EPF_MW_COUNT 0x20
+#define NTB_EPF_MW1_OFFSET 0x24
+#define NTB_EPF_SPAD_OFFSET 0x28
+#define NTB_EPF_SPAD_COUNT 0x2C
+#define NTB_EPF_DB_ENTRY_SIZE 0x30
+#define NTB_EPF_DB_DATA(n) (0x34 + (n) * 4)
+#define NTB_EPF_DB_OFFSET(n) (0xB4 + (n) * 4)
+
+#define NTB_EPF_MIN_DB_COUNT 3
+#define NTB_EPF_MAX_DB_COUNT 31
+#define NTB_EPF_MW_OFFSET 2
+
+#define NTB_EPF_COMMAND_TIMEOUT 1000 /* 1 Sec */
+
+enum pci_barno {
+ BAR_0,
+ BAR_1,
+ BAR_2,
+ BAR_3,
+ BAR_4,
+ BAR_5,
+};
+
+struct ntb_epf_dev {
+ struct ntb_dev ntb;
+ struct device *dev;
+ /* Mutex to protect providing commands to NTB EPF */
+ struct mutex cmd_lock;
+
+ enum pci_barno ctrl_reg_bar;
+ enum pci_barno peer_spad_reg_bar;
+ enum pci_barno db_reg_bar;
+
+ unsigned int mw_count;
+ unsigned int spad_count;
+ unsigned int db_count;
+
+ void __iomem *ctrl_reg;
+ void __iomem *db_reg;
+ void __iomem *peer_spad_reg;
+
+ unsigned int self_spad;
+ unsigned int peer_spad;
+
+ int db_val;
+ u64 db_valid_mask;
+};
+
+#define ntb_ndev(__ntb) container_of(__ntb, struct ntb_epf_dev, ntb)
+
+struct ntb_epf_data {
+ /* BAR that contains both control region and self spad region */
+ enum pci_barno ctrl_reg_bar;
+ /* BAR that contains peer spad region */
+ enum pci_barno peer_spad_reg_bar;
+ /* BAR that contains Doorbell region and Memory window '1' */
+ enum pci_barno db_reg_bar;
+};
+
+static int ntb_epf_send_command(struct ntb_epf_dev *ndev, u32 command,
+ u32 argument)
+{
+ ktime_t timeout;
+ bool timedout;
+ int ret = 0;
+ u32 status;
+
+ mutex_lock(&ndev->cmd_lock);
+ writel(argument, ndev->ctrl_reg + NTB_EPF_ARGUMENT);
+ writel(command, ndev->ctrl_reg + NTB_EPF_COMMAND);
+
+ timeout = ktime_add_ms(ktime_get(), NTB_EPF_COMMAND_TIMEOUT);
+ while (1) {
+ timedout = ktime_after(ktime_get(), timeout);
+ status = readw(ndev->ctrl_reg + NTB_EPF_CMD_STATUS);
+
+ if (status == COMMAND_STATUS_ERROR) {
+ ret = -EINVAL;
+ break;
+ }
+
+ if (status == COMMAND_STATUS_OK)
+ break;
+
+ if (WARN_ON(timedout)) {
+ ret = -ETIMEDOUT;
+ break;
+ }
+
+ usleep_range(5, 10);
+ }
+
+ writew(0, ndev->ctrl_reg + NTB_EPF_CMD_STATUS);
+ mutex_unlock(&ndev->cmd_lock);
+
+ return ret;
+}
+
+static int ntb_epf_mw_to_bar(struct ntb_epf_dev *ndev, int idx)
+{
+ struct device *dev = ndev->dev;
+
+ if (idx < 0 || idx > ndev->mw_count) {
+ dev_err(dev, "Unsupported Memory Window index %d\n", idx);
+ return -EINVAL;
+ }
+
+ return idx + 2;
+}
+
+static int ntb_epf_mw_count(struct ntb_dev *ntb, int pidx)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+
+ if (pidx != NTB_DEF_PEER_IDX) {
+ dev_err(dev, "Unsupported Peer ID %d\n", pidx);
+ return -EINVAL;
+ }
+
+ return ndev->mw_count;
+}
+
+static int ntb_epf_mw_get_align(struct ntb_dev *ntb, int pidx, int idx,
+ resource_size_t *addr_align,
+ resource_size_t *size_align,
+ resource_size_t *size_max)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ int bar;
+
+ if (pidx != NTB_DEF_PEER_IDX) {
+ dev_err(dev, "Unsupported Peer ID %d\n", pidx);
+ return -EINVAL;
+ }
+
+ bar = ntb_epf_mw_to_bar(ndev, idx);
+ if (bar < 0)
+ return bar;
+
+ if (addr_align)
+ *addr_align = SZ_4K;
+
+ if (size_align)
+ *size_align = 1;
+
+ if (size_max)
+ *size_max = pci_resource_len(ndev->ntb.pdev, bar);
+
+ return 0;
+}
+
+static u64 ntb_epf_link_is_up(struct ntb_dev *ntb,
+ enum ntb_speed *speed,
+ enum ntb_width *width)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ u32 status;
+
+ status = readw(ndev->ctrl_reg + NTB_EPF_LINK_STATUS);
+
+ return status & LINK_STATUS_UP;
+}
+
+static u32 ntb_epf_spad_read(struct ntb_dev *ntb, int idx)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ u32 offset;
+
+ if (idx < 0 || idx >= ndev->spad_count) {
+ dev_err(dev, "READ: Invalid ScratchPad Index %d\n", idx);
+ return 0;
+ }
+
+ offset = readl(ndev->ctrl_reg + NTB_EPF_SPAD_OFFSET);
+ offset += (idx << 2);
+
+ return readl(ndev->ctrl_reg + offset);
+}
+
+static int ntb_epf_spad_write(struct ntb_dev *ntb,
+ int idx, u32 val)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ u32 offset;
+
+ if (idx < 0 || idx >= ndev->spad_count) {
+ dev_err(dev, "WRITE: Invalid ScratchPad Index %d\n", idx);
+ return -EINVAL;
+ }
+
+ offset = readl(ndev->ctrl_reg + NTB_EPF_SPAD_OFFSET);
+ offset += (idx << 2);
+ writel(val, ndev->ctrl_reg + offset);
+
+ return 0;
+}
+
+static u32 ntb_epf_peer_spad_read(struct ntb_dev *ntb, int pidx, int idx)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ u32 offset;
+
+ if (pidx != NTB_DEF_PEER_IDX) {
+ dev_err(dev, "Unsupported Peer ID %d\n", pidx);
+ return -EINVAL;
+ }
+
+ if (idx < 0 || idx >= ndev->spad_count) {
+ dev_err(dev, "WRITE: Invalid Peer ScratchPad Index %d\n", idx);
+ return -EINVAL;
+ }
+
+ offset = (idx << 2);
+ return readl(ndev->peer_spad_reg + offset);
+}
+
+static int ntb_epf_peer_spad_write(struct ntb_dev *ntb, int pidx,
+ int idx, u32 val)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ u32 offset;
+
+ if (pidx != NTB_DEF_PEER_IDX) {
+ dev_err(dev, "Unsupported Peer ID %d\n", pidx);
+ return -EINVAL;
+ }
+
+ if (idx < 0 || idx >= ndev->spad_count) {
+ dev_err(dev, "WRITE: Invalid Peer ScratchPad Index %d\n", idx);
+ return -EINVAL;
+ }
+
+ offset = (idx << 2);
+ writel(val, ndev->peer_spad_reg + offset);
+
+ return 0;
+}
+
+static int ntb_epf_link_enable(struct ntb_dev *ntb,
+ enum ntb_speed max_speed,
+ enum ntb_width max_width)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ int ret;
+
+ ret = ntb_epf_send_command(ndev, CMD_LINK_UP, 0);
+ if (ret) {
+ dev_err(dev, "Fail to enable link\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int ntb_epf_link_disable(struct ntb_dev *ntb)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ int ret;
+
+ ret = ntb_epf_send_command(ndev, CMD_LINK_DOWN, 0);
+ if (ret) {
+ dev_err(dev, "Fail to disable link\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static irqreturn_t ntb_epf_vec_isr(int irq, void *dev)
+{
+ struct ntb_epf_dev *ndev = dev;
+ int irq_no;
+
+ irq_no = irq - pci_irq_vector(ndev->ntb.pdev, 0);
+ ndev->db_val = irq_no + 1;
+
+ if (irq_no == 0)
+ ntb_link_event(&ndev->ntb);
+ else
+ ntb_db_event(&ndev->ntb, irq_no);
+
+ return IRQ_HANDLED;
+}
+
+static int ntb_epf_init_isr(struct ntb_epf_dev *ndev, int msi_min, int msi_max)
+{
+ struct pci_dev *pdev = ndev->ntb.pdev;
+ struct device *dev = ndev->dev;
+ u32 argument = MSIX_ENABLE;
+ int irq;
+ int ret;
+ int i;
+
+ irq = pci_alloc_irq_vectors(pdev, msi_min, msi_max, PCI_IRQ_MSIX);
+ if (irq < 0) {
+ dev_dbg(dev, "Failed to get MSIX interrupts\n");
+ irq = pci_alloc_irq_vectors(pdev, msi_min, msi_max,
+ PCI_IRQ_MSI);
+ if (irq < 0) {
+ dev_err(dev, "Failed to get MSI interrupts\n");
+ return irq;
+ }
+ argument &= ~MSIX_ENABLE;
+ }
+
+ for (i = 0; i < irq; i++) {
+ ret = request_irq(pci_irq_vector(pdev, i), ntb_epf_vec_isr,
+ 0, "ntb_epf", ndev);
+ if (ret) {
+ dev_err(dev, "Failed to request irq\n");
+ goto err_request_irq;
+ }
+ }
+
+ ndev->db_count = irq - 1;
+
+ ret = ntb_epf_send_command(ndev, CMD_CONFIGURE_DOORBELL,
+ argument | irq);
+ if (ret) {
+ dev_err(dev, "Failed to configure doorbell\n");
+ goto err_configure_db;
+ }
+
+ return 0;
+
+err_configure_db:
+ for (i = 0; i < ndev->db_count + 1; i++)
+ free_irq(pci_irq_vector(pdev, i), ndev);
+
+err_request_irq:
+ pci_free_irq_vectors(pdev);
+
+ return ret;
+}
+
+static int ntb_epf_peer_mw_count(struct ntb_dev *ntb)
+{
+ return ntb_ndev(ntb)->mw_count;
+}
+
+static int ntb_epf_spad_count(struct ntb_dev *ntb)
+{
+ return ntb_ndev(ntb)->spad_count;
+}
+
+static u64 ntb_epf_db_valid_mask(struct ntb_dev *ntb)
+{
+ return ntb_ndev(ntb)->db_valid_mask;
+}
+
+static int ntb_epf_db_set_mask(struct ntb_dev *ntb, u64 db_bits)
+{
+ return 0;
+}
+
+static int ntb_epf_mw_set_trans(struct ntb_dev *ntb, int pidx, int idx,
+ dma_addr_t addr, resource_size_t size)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ resource_size_t mw_size;
+ int bar;
+
+ if (pidx != NTB_DEF_PEER_IDX) {
+ dev_err(dev, "Unsupported Peer ID %d\n", pidx);
+ return -EINVAL;
+ }
+
+ bar = idx + NTB_EPF_MW_OFFSET;
+
+ mw_size = pci_resource_len(ntb->pdev, bar);
+
+ if (size > mw_size) {
+ dev_err(dev, "Size:%pa is greater than the MW size %pa\n",
+ &size, &mw_size);
+ return -EINVAL;
+ }
+
+ writel(lower_32_bits(addr), ndev->ctrl_reg + NTB_EPF_LOWER_ADDR);
+ writel(upper_32_bits(addr), ndev->ctrl_reg + NTB_EPF_UPPER_ADDR);
+ writel(lower_32_bits(size), ndev->ctrl_reg + NTB_EPF_LOWER_SIZE);
+ writel(upper_32_bits(size), ndev->ctrl_reg + NTB_EPF_UPPER_SIZE);
+ ntb_epf_send_command(ndev, CMD_CONFIGURE_MW, idx);
+
+ return 0;
+}
+
+static int ntb_epf_mw_clear_trans(struct ntb_dev *ntb, int pidx, int idx)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ struct device *dev = ndev->dev;
+ int ret = 0;
+
+ ntb_epf_send_command(ndev, CMD_TEARDOWN_MW, idx);
+ if (ret)
+ dev_err(dev, "Failed to teardown memory window\n");
+
+ return ret;
+}
+
+static int ntb_epf_peer_mw_get_addr(struct ntb_dev *ntb, int idx,
+ phys_addr_t *base, resource_size_t *size)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ u32 offset = 0;
+ int bar;
+
+ if (idx == 0)
+ offset = readl(ndev->ctrl_reg + NTB_EPF_MW1_OFFSET);
+
+ bar = idx + NTB_EPF_MW_OFFSET;
+
+ if (base)
+ *base = pci_resource_start(ndev->ntb.pdev, bar) + offset;
+
+ if (size)
+ *size = pci_resource_len(ndev->ntb.pdev, bar) - offset;
+
+ return 0;
+}
+
+static int ntb_epf_peer_db_set(struct ntb_dev *ntb, u64 db_bits)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+ u32 interrupt_num = ffs(db_bits) + 1;
+ struct device *dev = ndev->dev;
+ u32 db_entry_size;
+ u32 db_offset;
+ u32 db_data;
+
+ if (interrupt_num > ndev->db_count) {
+ dev_err(dev, "DB interrupt %d greater than Max Supported %d\n",
+ interrupt_num, ndev->db_count);
+ return -EINVAL;
+ }
+
+ db_entry_size = readl(ndev->ctrl_reg + NTB_EPF_DB_ENTRY_SIZE);
+
+ db_data = readl(ndev->ctrl_reg + NTB_EPF_DB_DATA(interrupt_num));
+ db_offset = readl(ndev->ctrl_reg + NTB_EPF_DB_OFFSET(interrupt_num));
+ writel(db_data, ndev->db_reg + (db_entry_size * interrupt_num) +
+ db_offset);
+
+ return 0;
+}
+
+static u64 ntb_epf_db_read(struct ntb_dev *ntb)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+
+ return ndev->db_val;
+}
+
+static int ntb_epf_db_clear_mask(struct ntb_dev *ntb, u64 db_bits)
+{
+ return 0;
+}
+
+static int ntb_epf_db_clear(struct ntb_dev *ntb, u64 db_bits)
+{
+ struct ntb_epf_dev *ndev = ntb_ndev(ntb);
+
+ ndev->db_val = 0;
+
+ return 0;
+}
+
+static const struct ntb_dev_ops ntb_epf_ops = {
+ .mw_count = ntb_epf_mw_count,
+ .spad_count = ntb_epf_spad_count,
+ .peer_mw_count = ntb_epf_peer_mw_count,
+ .db_valid_mask = ntb_epf_db_valid_mask,
+ .db_set_mask = ntb_epf_db_set_mask,
+ .mw_set_trans = ntb_epf_mw_set_trans,
+ .mw_clear_trans = ntb_epf_mw_clear_trans,
+ .peer_mw_get_addr = ntb_epf_peer_mw_get_addr,
+ .link_enable = ntb_epf_link_enable,
+ .spad_read = ntb_epf_spad_read,
+ .spad_write = ntb_epf_spad_write,
+ .peer_spad_read = ntb_epf_peer_spad_read,
+ .peer_spad_write = ntb_epf_peer_spad_write,
+ .peer_db_set = ntb_epf_peer_db_set,
+ .db_read = ntb_epf_db_read,
+ .mw_get_align = ntb_epf_mw_get_align,
+ .link_is_up = ntb_epf_link_is_up,
+ .db_clear_mask = ntb_epf_db_clear_mask,
+ .db_clear = ntb_epf_db_clear,
+ .link_disable = ntb_epf_link_disable,
+};
+
+static inline void ntb_epf_init_struct(struct ntb_epf_dev *ndev,
+ struct pci_dev *pdev)
+{
+ ndev->ntb.pdev = pdev;
+ ndev->ntb.topo = NTB_TOPO_NONE;
+ ndev->ntb.ops = &ntb_epf_ops;
+}
+
+static int ntb_epf_init_dev(struct ntb_epf_dev *ndev)
+{
+ struct device *dev = ndev->dev;
+ int ret;
+
+ /* One Link interrupt and rest doorbell interrupt */
+ ret = ntb_epf_init_isr(ndev, NTB_EPF_MIN_DB_COUNT + 1,
+ NTB_EPF_MAX_DB_COUNT + 1);
+ if (ret) {
+ dev_err(dev, "Failed to init ISR\n");
+ return ret;
+ }
+
+ ndev->db_valid_mask = BIT_ULL(ndev->db_count) - 1;
+ ndev->mw_count = readl(ndev->ctrl_reg + NTB_EPF_MW_COUNT);
+ ndev->spad_count = readl(ndev->ctrl_reg + NTB_EPF_SPAD_COUNT);
+
+ return 0;
+}
+
+static int ntb_epf_init_pci(struct ntb_epf_dev *ndev,
+ struct pci_dev *pdev)
+{
+ struct device *dev = ndev->dev;
+ int ret;
+
+ pci_set_drvdata(pdev, ndev);
+
+ ret = pci_enable_device(pdev);
+ if (ret) {
+ dev_err(dev, "Cannot enable PCI device\n");
+ goto err_pci_enable;
+ }
+
+ ret = pci_request_regions(pdev, "ntb");
+ if (ret) {
+ dev_err(dev, "Cannot obtain PCI resources\n");
+ goto err_pci_regions;
+ }
+
+ pci_set_master(pdev);
+
+ ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ if (ret) {
+ ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+ if (ret) {
+ dev_err(dev, "Cannot set DMA mask\n");
+ goto err_dma_mask;
+ }
+ dev_warn(&pdev->dev, "Cannot DMA highmem\n");
+ }
+
+ ndev->ctrl_reg = pci_iomap(pdev, ndev->ctrl_reg_bar, 0);
+ if (!ndev->ctrl_reg) {
+ ret = -EIO;
+ goto err_dma_mask;
+ }
+
+ ndev->peer_spad_reg = pci_iomap(pdev, ndev->peer_spad_reg_bar, 0);
+ if (!ndev->peer_spad_reg) {
+ ret = -EIO;
+ goto err_dma_mask;
+ }
+
+ ndev->db_reg = pci_iomap(pdev, ndev->db_reg_bar, 0);
+ if (!ndev->db_reg) {
+ ret = -EIO;
+ goto err_dma_mask;
+ }
+
+ return 0;
+
+err_dma_mask:
+ pci_clear_master(pdev);
+
+err_pci_regions:
+ pci_disable_device(pdev);
+
+err_pci_enable:
+ pci_set_drvdata(pdev, NULL);
+
+ return ret;
+}
+
+static void ntb_epf_deinit_pci(struct ntb_epf_dev *ndev)
+{
+ struct pci_dev *pdev = ndev->ntb.pdev;
+
+ pci_iounmap(pdev, ndev->ctrl_reg);
+ pci_iounmap(pdev, ndev->peer_spad_reg);
+ pci_iounmap(pdev, ndev->db_reg);
+
+ pci_clear_master(pdev);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+}
+
+static void ntb_epf_cleanup_isr(struct ntb_epf_dev *ndev)
+{
+ struct pci_dev *pdev = ndev->ntb.pdev;
+ int i;
+
+ ntb_epf_send_command(ndev, CMD_TEARDOWN_DOORBELL, ndev->db_count + 1);
+
+ for (i = 0; i < ndev->db_count + 1; i++)
+ free_irq(pci_irq_vector(pdev, i), ndev);
+ pci_free_irq_vectors(pdev);
+}
+
+static int ntb_epf_pci_probe(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+{
+ enum pci_barno peer_spad_reg_bar = BAR_1;
+ enum pci_barno ctrl_reg_bar = BAR_0;
+ enum pci_barno db_reg_bar = BAR_2;
+ struct device *dev = &pdev->dev;
+ struct ntb_epf_data *data;
+ struct ntb_epf_dev *ndev;
+ int ret;
+
+ if (pci_is_bridge(pdev))
+ return -ENODEV;
+
+ ndev = devm_kzalloc(dev, sizeof(*ndev), GFP_KERNEL);
+ if (!ndev)
+ return -ENOMEM;
+
+ data = (struct ntb_epf_data *)id->driver_data;
+ if (data) {
+ if (data->peer_spad_reg_bar)
+ peer_spad_reg_bar = data->peer_spad_reg_bar;
+ if (data->ctrl_reg_bar)
+ ctrl_reg_bar = data->ctrl_reg_bar;
+ if (data->db_reg_bar)
+ db_reg_bar = data->db_reg_bar;
+ }
+
+ ndev->peer_spad_reg_bar = peer_spad_reg_bar;
+ ndev->ctrl_reg_bar = ctrl_reg_bar;
+ ndev->db_reg_bar = db_reg_bar;
+ ndev->dev = dev;
+
+ ntb_epf_init_struct(ndev, pdev);
+ mutex_init(&ndev->cmd_lock);
+
+ ret = ntb_epf_init_pci(ndev, pdev);
+ if (ret) {
+ dev_err(dev, "Failed to init PCI\n");
+ return ret;
+ }
+
+ ret = ntb_epf_init_dev(ndev);
+ if (ret) {
+ dev_err(dev, "Failed to init device\n");
+ goto err_init_dev;
+ }
+
+ ret = ntb_register_device(&ndev->ntb);
+ if (ret) {
+ dev_err(dev, "Failed to register NTB device\n");
+ goto err_register_dev;
+ }
+
+ return 0;
+
+err_register_dev:
+ ntb_epf_cleanup_isr(ndev);
+
+err_init_dev:
+ ntb_epf_deinit_pci(ndev);
+
+ return ret;
+}
+
+static void ntb_epf_pci_remove(struct pci_dev *pdev)
+{
+ struct ntb_epf_dev *ndev = pci_get_drvdata(pdev);
+
+ ntb_unregister_device(&ndev->ntb);
+ ntb_epf_cleanup_isr(ndev);
+ ntb_epf_deinit_pci(ndev);
+}
+
+static const struct ntb_epf_data j721e_data = {
+ .ctrl_reg_bar = BAR_0,
+ .peer_spad_reg_bar = BAR_1,
+ .db_reg_bar = BAR_2,
+};
+
+static const struct pci_device_id ntb_epf_pci_tbl[] = {
+ {
+ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E),
+ .class = PCI_CLASS_MEMORY_RAM << 8, .class_mask = 0xffff00,
+ .driver_data = (kernel_ulong_t)&j721e_data,
+ },
+ { },
+};
+
+static struct pci_driver ntb_epf_pci_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = ntb_epf_pci_tbl,
+ .probe = ntb_epf_pci_probe,
+ .remove = ntb_epf_pci_remove,
+};
+module_pci_driver(ntb_epf_pci_driver);
+
+MODULE_DESCRIPTION("PCI ENDPOINT NTB HOST DRIVER");
+MODULE_AUTHOR("Kishon Vijay Abraham I <kishon@ti.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index 11cc79411e2d..d62c4ac4ae1b 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -36,4 +36,4 @@ obj-$(CONFIG_PCI_ENDPOINT) += endpoint/
obj-y += controller/
obj-y += switch/
-ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG
+subdir-ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG
diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
index 64e2f5e379aa..5aa8977d7b0f 100644
--- a/drivers/pci/controller/Kconfig
+++ b/drivers/pci/controller/Kconfig
@@ -55,15 +55,6 @@ config PCI_RCAR_GEN2
There are 3 internal PCI controllers available with a single
built-in EHCI/OHCI host controller present on each one.
-config PCIE_RCAR
- bool "Renesas R-Car PCIe controller"
- depends on ARCH_RENESAS || COMPILE_TEST
- depends on PCI_MSI_IRQ_DOMAIN
- select PCIE_RCAR_HOST
- help
- Say Y here if you want PCIe controller support on R-Car SoCs.
- This option will be removed after arm64 defconfig is updated.
-
config PCIE_RCAR_HOST
bool "Renesas R-Car PCIe host controller"
depends on ARCH_RENESAS || COMPILE_TEST
@@ -242,20 +233,6 @@ config PCIE_MEDIATEK
Say Y here if you want to enable PCIe controller support on
MediaTek SoCs.
-config PCIE_TANGO_SMP8759
- bool "Tango SMP8759 PCIe controller (DANGEROUS)"
- depends on ARCH_TANGO && PCI_MSI && OF
- depends on BROKEN
- select PCI_HOST_COMMON
- help
- Say Y here to enable PCIe controller support for Sigma Designs
- Tango SMP8759-based systems.
-
- Note: The SMP8759 controller multiplexes PCI config and MMIO
- accesses, and Linux doesn't provide a way to serialize them.
- This can lead to data corruption if drivers perform concurrent
- config and MMIO accesses.
-
config VMD
depends on PCI_MSI && X86_64 && SRCU
tristate "Intel Volume Management Device Driver"
@@ -273,7 +250,7 @@ config VMD
config PCIE_BRCMSTB
tristate "Broadcom Brcmstb PCIe host controller"
- depends on ARCH_BRCMSTB || ARCH_BCM2835 || COMPILE_TEST
+ depends on ARCH_BRCMSTB || ARCH_BCM2835 || ARCH_BCM4908 || COMPILE_TEST
depends on OF
depends on PCI_MSI_IRQ_DOMAIN
default ARCH_BRCMSTB
@@ -298,6 +275,16 @@ config PCI_LOONGSON
Say Y here if you want to enable PCI controller support on
Loongson systems.
+config PCIE_MICROCHIP_HOST
+ bool "Microchip AXI PCIe host bridge support"
+ depends on PCI_MSI && OF
+ select PCI_MSI_IRQ_DOMAIN
+ select GENERIC_MSI_IRQ_DOMAIN
+ select PCI_HOST_COMMON
+ help
+ Say Y here if you want kernel to support the Microchip AXI PCIe
+ Host Bridge driver.
+
config PCIE_HISI_ERR
depends on ACPI_APEI_GHES && (ARM64 || COMPILE_TEST)
bool "HiSilicon HIP PCIe controller error handling driver"
diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile
index 04c6edc285c5..e4559f2182f2 100644
--- a/drivers/pci/controller/Makefile
+++ b/drivers/pci/controller/Makefile
@@ -27,7 +27,7 @@ obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
-obj-$(CONFIG_PCIE_TANGO_SMP8759) += pcie-tango.o
+obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o
obj-$(CONFIG_VMD) += vmd.o
obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o
obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
index dac1ac8a7615..849f1e416ea5 100644
--- a/drivers/pci/controller/cadence/pci-j721e.c
+++ b/drivers/pci/controller/cadence/pci-j721e.c
@@ -64,6 +64,7 @@ enum j721e_pcie_mode {
struct j721e_pcie_data {
enum j721e_pcie_mode mode;
+ bool quirk_retrain_flag;
};
static inline u32 j721e_pcie_user_readl(struct j721e_pcie *pcie, u32 offset)
@@ -280,6 +281,7 @@ static struct pci_ops cdns_ti_pcie_host_ops = {
static const struct j721e_pcie_data j721e_pcie_rc_data = {
.mode = PCI_MODE_RC,
+ .quirk_retrain_flag = true,
};
static const struct j721e_pcie_data j721e_pcie_ep_data = {
@@ -388,6 +390,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
bridge->ops = &cdns_ti_pcie_host_ops;
rc = pci_host_bridge_priv(bridge);
+ rc->quirk_retrain_flag = data->quirk_retrain_flag;
cdns_pcie = &rc->pcie;
cdns_pcie->dev = dev;
diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
index 9e2b024d32f2..897cdde02bd8 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
@@ -382,6 +382,57 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
return 0;
}
+static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn,
+ phys_addr_t addr, u8 interrupt_num,
+ u32 entry_size, u32 *msi_data,
+ u32 *msi_addr_offset)
+{
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
+ struct cdns_pcie *pcie = &ep->pcie;
+ u64 pci_addr, pci_addr_mask = 0xff;
+ u16 flags, mme, data, data_mask;
+ u8 msi_count;
+ int ret;
+ int i;
+
+ /* Check whether the MSI feature has been enabled by the PCI host. */
+ flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_FLAGS);
+ if (!(flags & PCI_MSI_FLAGS_ENABLE))
+ return -EINVAL;
+
+ /* Get the number of enabled MSIs */
+ mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4;
+ msi_count = 1 << mme;
+ if (!interrupt_num || interrupt_num > msi_count)
+ return -EINVAL;
+
+ /* Compute the data value to be written. */
+ data_mask = msi_count - 1;
+ data = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_MSI_DATA_64);
+ data = data & ~data_mask;
+
+ /* Get the PCI address where to write the data into. */
+ pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_HI);
+ pci_addr <<= 32;
+ pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + PCI_MSI_ADDRESS_LO);
+ pci_addr &= GENMASK_ULL(63, 2);
+
+ for (i = 0; i < interrupt_num; i++) {
+ ret = cdns_pcie_ep_map_addr(epc, fn, addr,
+ pci_addr & ~pci_addr_mask,
+ entry_size);
+ if (ret)
+ return ret;
+ addr = addr + entry_size;
+ }
+
+ *msi_data = data;
+ *msi_addr_offset = pci_addr & pci_addr_mask;
+
+ return 0;
+}
+
static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn,
u16 interrupt_num)
{
@@ -455,18 +506,13 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
struct cdns_pcie *pcie = &ep->pcie;
struct device *dev = pcie->dev;
- struct pci_epf *epf;
- u32 cfg;
int ret;
/*
* BIT(0) is hardwired to 1, hence function 0 is always enabled
* and can't be disabled anyway.
*/
- cfg = BIT(0);
- list_for_each_entry(epf, &epc->pci_epf, list)
- cfg |= BIT(epf->func_no);
- cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, cfg);
+ cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map);
ret = cdns_pcie_start_link(pcie);
if (ret) {
@@ -481,6 +527,7 @@ static const struct pci_epc_features cdns_pcie_epc_features = {
.linkup_notifier = false,
.msi_capable = true,
.msix_capable = true,
+ .align = 256,
};
static const struct pci_epc_features*
@@ -500,6 +547,7 @@ static const struct pci_epc_ops cdns_pcie_epc_ops = {
.set_msix = cdns_pcie_ep_set_msix,
.get_msix = cdns_pcie_ep_get_msix,
.raise_irq = cdns_pcie_ep_raise_irq,
+ .map_msi_irq = cdns_pcie_ep_map_msi_irq,
.start = cdns_pcie_ep_start,
.get_features = cdns_pcie_ep_get_features,
};
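The new ->map_msi_irq() op above is reached through pci_epc_map_msi_irq(), added
to the EPC core elsewhere in this series. A hedged sketch of how an endpoint
function driver might use it to raise the host's MSI from the EP side; epc,
func_no, phys_addr, mw_addr and the entry size are placeholders:

	u32 msi_data, msi_addr_offset;
	int ret;

	/* Map the host's MSI address region through an outbound window. */
	ret = pci_epc_map_msi_irq(epc, func_no, phys_addr, 1 /* vector */,
				  SZ_4K /* per-vector entry size */,
				  &msi_data, &msi_addr_offset);
	if (ret)
		return ret;

	/* Writing the returned data into the mapped window raises MSI 1;
	 * with multiple-message MSI, vector N uses msi_data + (N - 1). */
	writel(msi_data, mw_addr + msi_addr_offset);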
diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
index 811c1cb2e8de..73dcf8cf98fb 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
@@ -77,6 +77,68 @@ static struct pci_ops cdns_pcie_host_ops = {
.write = pci_generic_config_write,
};
+static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
+{
+ struct device *dev = pcie->dev;
+ int retries;
+
+ /* Check if the link is up or not */
+ for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
+ if (cdns_pcie_link_up(pcie)) {
+ dev_info(dev, "Link up\n");
+ return 0;
+ }
+ usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
+ }
+
+ return -ETIMEDOUT;
+}
+
+static int cdns_pcie_retrain(struct cdns_pcie *pcie)
+{
+ u32 lnk_cap_sls, pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
+ u16 lnk_stat, lnk_ctl;
+ int ret = 0;
+
+ /*
+ * Set retrain bit if current speed is 2.5 GB/s,
+ * but the PCIe root port support is > 2.5 GB/s.
+ */
+
+ lnk_cap_sls = cdns_pcie_readl(pcie, (CDNS_PCIE_RP_BASE + pcie_cap_off +
+ PCI_EXP_LNKCAP));
+ if ((lnk_cap_sls & PCI_EXP_LNKCAP_SLS) <= PCI_EXP_LNKCAP_SLS_2_5GB)
+ return ret;
+
+ lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
+ if ((lnk_stat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB) {
+ lnk_ctl = cdns_pcie_rp_readw(pcie,
+ pcie_cap_off + PCI_EXP_LNKCTL);
+ lnk_ctl |= PCI_EXP_LNKCTL_RL;
+ cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
+ lnk_ctl);
+
+ ret = cdns_pcie_host_wait_for_link(pcie);
+ }
+ return ret;
+}
+
+static int cdns_pcie_host_start_link(struct cdns_pcie_rc *rc)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ int ret;
+
+ ret = cdns_pcie_host_wait_for_link(pcie);
+
+ /*
+ * Retrain link for Gen2 training defect
+ * if quirk flag is set.
+ */
+ if (!ret && rc->quirk_retrain_flag)
+ ret = cdns_pcie_retrain(pcie);
+
+ return ret;
+}
static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
{
@@ -321,9 +383,10 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc)
resource_list_for_each_entry(entry, &bridge->dma_ranges) {
err = cdns_pcie_host_bar_config(rc, entry);
- if (err)
+ if (err) {
dev_err(dev, "Fail to configure IB using dma-ranges\n");
- return err;
+ return err;
+ }
}
return 0;
@@ -398,23 +461,6 @@ static int cdns_pcie_host_init(struct device *dev,
return cdns_pcie_host_init_address_translation(rc);
}
-static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
-{
- struct device *dev = pcie->dev;
- int retries;
-
- /* Check if the link is up or not */
- for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
- if (cdns_pcie_link_up(pcie)) {
- dev_info(dev, "Link up\n");
- return 0;
- }
- usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
- }
-
- return -ETIMEDOUT;
-}
-
int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
{
struct device *dev = rc->pcie.dev;
@@ -457,7 +503,7 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
return ret;
}
- ret = cdns_pcie_host_wait_for_link(pcie);
+ ret = cdns_pcie_host_start_link(rc);
if (ret)
dev_dbg(dev, "PCIe link never came up\n");
diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
index 30eba6cafe2c..254d2570f8c9 100644
--- a/drivers/pci/controller/cadence/pcie-cadence.h
+++ b/drivers/pci/controller/cadence/pcie-cadence.h
@@ -119,7 +119,7 @@
* Root Port Registers (PCI configuration space for the root port function)
*/
#define CDNS_PCIE_RP_BASE 0x00200000
-
+#define CDNS_PCIE_RP_CAP_OFFSET 0xc0
/*
* Address Translation Registers
@@ -291,6 +291,7 @@ struct cdns_pcie {
* @device_id: PCI device ID
* @avail_ib_bar: Satus of RP_BAR0, RP_BAR1 and RP_NO_BAR if it's free or
* available
+ * @quirk_retrain_flag: Retrain link as quirk for PCIe Gen2
*/
struct cdns_pcie_rc {
struct cdns_pcie pcie;
@@ -299,6 +300,7 @@ struct cdns_pcie_rc {
u32 vendor_id;
u32 device_id;
bool avail_ib_bar[CDNS_PCIE_RP_MAX_IB];
+ bool quirk_retrain_flag;
};
/**
@@ -414,6 +416,13 @@ static inline void cdns_pcie_rp_writew(struct cdns_pcie *pcie,
cdns_pcie_write_sz(addr, 0x2, value);
}
+static inline u16 cdns_pcie_rp_readw(struct cdns_pcie *pcie, u32 reg)
+{
+ void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg;
+
+ return cdns_pcie_read_sz(addr, 0x2);
+}
+
/* Endpoint Function register access */
static inline void cdns_pcie_ep_fn_writeb(struct cdns_pcie *pcie, u8 fn,
u32 reg, u8 value)
diff --git a/drivers/pci/controller/dwc/pci-layerscape-ep.c b/drivers/pci/controller/dwc/pci-layerscape-ep.c
index 4d12efdacd2f..39fe2ed5a6a2 100644
--- a/drivers/pci/controller/dwc/pci-layerscape-ep.c
+++ b/drivers/pci/controller/dwc/pci-layerscape-ep.c
@@ -115,10 +115,17 @@ static const struct ls_pcie_ep_drvdata ls2_ep_drvdata = {
.dw_pcie_ops = &dw_ls_pcie_ep_ops,
};
+static const struct ls_pcie_ep_drvdata lx2_ep_drvdata = {
+ .func_offset = 0x8000,
+ .ops = &ls_pcie_ep_ops,
+ .dw_pcie_ops = &dw_ls_pcie_ep_ops,
+};
+
static const struct of_device_id ls_pcie_ep_of_match[] = {
{ .compatible = "fsl,ls1046a-pcie-ep", .data = &ls1_ep_drvdata },
{ .compatible = "fsl,ls1088a-pcie-ep", .data = &ls2_ep_drvdata },
{ .compatible = "fsl,ls2088a-pcie-ep", .data = &ls2_ep_drvdata },
+ { .compatible = "fsl,lx2160ar2-pcie-ep", .data = &lx2_ep_drvdata },
{ },
};
diff --git a/drivers/pci/controller/dwc/pci-layerscape.c b/drivers/pci/controller/dwc/pci-layerscape.c
index 44ad34cdc3bc..5b9c625df7b8 100644
--- a/drivers/pci/controller/dwc/pci-layerscape.c
+++ b/drivers/pci/controller/dwc/pci-layerscape.c
@@ -232,7 +232,7 @@ static const struct of_device_id ls_pcie_of_match[] = {
{ },
};
-static int __init ls_pcie_probe(struct platform_device *pdev)
+static int ls_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct dw_pcie *pci;
@@ -271,10 +271,11 @@ static int __init ls_pcie_probe(struct platform_device *pdev)
}
static struct platform_driver ls_pcie_driver = {
+ .probe = ls_pcie_probe,
.driver = {
.name = "layerscape-pcie",
.of_match_table = ls_pcie_of_match,
.suppress_bind_attrs = true,
},
};
-builtin_platform_driver_probe(ls_pcie_driver, ls_pcie_probe);
+builtin_platform_driver(ls_pcie_driver);
diff --git a/drivers/pci/controller/dwc/pcie-al.c b/drivers/pci/controller/dwc/pcie-al.c
index abf37aa68e51..e8afa50129a8 100644
--- a/drivers/pci/controller/dwc/pcie-al.c
+++ b/drivers/pci/controller/dwc/pcie-al.c
@@ -314,9 +314,6 @@ static const struct dw_pcie_host_ops al_pcie_host_ops = {
.host_init = al_pcie_host_init,
};
-static const struct dw_pcie_ops dw_pcie_ops = {
-};
-
static int al_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@@ -334,7 +331,6 @@ static int al_pcie_probe(struct platform_device *pdev)
return -ENOMEM;
pci->dev = dev;
- pci->ops = &dw_pcie_ops;
pci->pp.ops = &al_pcie_host_ops;
al_pcie->pci = pci;
diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
index bcd1cd9ba8c8..1c25d8337151 100644
--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
+++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
@@ -434,10 +434,8 @@ static void dw_pcie_ep_stop(struct pci_epc *epc)
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
- if (!pci->ops->stop_link)
- return;
-
- pci->ops->stop_link(pci);
+ if (pci->ops && pci->ops->stop_link)
+ pci->ops->stop_link(pci);
}
static int dw_pcie_ep_start(struct pci_epc *epc)
@@ -445,7 +443,7 @@ static int dw_pcie_ep_start(struct pci_epc *epc)
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
- if (!pci->ops->start_link)
+ if (!pci->ops || !pci->ops->start_link)
return -EINVAL;
return pci->ops->start_link(pci);
diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index 8a84c005f32b..7e55b2b66182 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -258,10 +258,8 @@ int dw_pcie_allocate_domains(struct pcie_port *pp)
static void dw_pcie_free_msi(struct pcie_port *pp)
{
- if (pp->msi_irq) {
- irq_set_chained_handler(pp->msi_irq, NULL);
- irq_set_handler_data(pp->msi_irq, NULL);
- }
+ if (pp->msi_irq)
+ irq_set_chained_handler_and_data(pp->msi_irq, NULL, NULL);
irq_domain_remove(pp->msi_domain);
irq_domain_remove(pp->irq_domain);
@@ -305,8 +303,13 @@ int dw_pcie_host_init(struct pcie_port *pp)
if (cfg_res) {
pp->cfg0_size = resource_size(cfg_res);
pp->cfg0_base = cfg_res->start;
- } else if (!pp->va_cfg0_base) {
+
+ pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, cfg_res);
+ if (IS_ERR(pp->va_cfg0_base))
+ return PTR_ERR(pp->va_cfg0_base);
+ } else {
dev_err(dev, "Missing *config* reg space\n");
+ return -ENODEV;
}
if (!pci->dbi_base) {
@@ -322,38 +325,12 @@ int dw_pcie_host_init(struct pcie_port *pp)
pp->bridge = bridge;
- /* Get the I/O and memory ranges from DT */
- resource_list_for_each_entry(win, &bridge->windows) {
- switch (resource_type(win->res)) {
- case IORESOURCE_IO:
- pp->io_size = resource_size(win->res);
- pp->io_bus_addr = win->res->start - win->offset;
- pp->io_base = pci_pio_to_address(win->res->start);
- break;
- case 0:
- dev_err(dev, "Missing *config* reg space\n");
- pp->cfg0_size = resource_size(win->res);
- pp->cfg0_base = win->res->start;
- if (!pci->dbi_base) {
- pci->dbi_base = devm_pci_remap_cfgspace(dev,
- pp->cfg0_base,
- pp->cfg0_size);
- if (!pci->dbi_base) {
- dev_err(dev, "Error with ioremap\n");
- return -ENOMEM;
- }
- }
- break;
- }
- }
-
- if (!pp->va_cfg0_base) {
- pp->va_cfg0_base = devm_pci_remap_cfgspace(dev,
- pp->cfg0_base, pp->cfg0_size);
- if (!pp->va_cfg0_base) {
- dev_err(dev, "Error with ioremap in function\n");
- return -ENOMEM;
- }
+ /* Get the I/O range from DT */
+ win = resource_list_first_type(&bridge->windows, IORESOURCE_IO);
+ if (win) {
+ pp->io_size = resource_size(win->res);
+ pp->io_bus_addr = win->res->start - win->offset;
+ pp->io_base = pci_pio_to_address(win->res->start);
}
if (pci->link_gen < 1)
@@ -425,7 +402,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
dw_pcie_setup_rc(pp);
dw_pcie_msi_init(pp);
- if (!dw_pcie_link_up(pci) && pci->ops->start_link) {
+ if (!dw_pcie_link_up(pci) && pci->ops && pci->ops->start_link) {
ret = pci->ops->start_link(pci);
if (ret)
goto err_free_msi;
diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
index 645fa1892375..004cb860e266 100644
--- a/drivers/pci/controller/dwc/pcie-designware.c
+++ b/drivers/pci/controller/dwc/pcie-designware.c
@@ -141,7 +141,7 @@ u32 dw_pcie_read_dbi(struct dw_pcie *pci, u32 reg, size_t size)
int ret;
u32 val;
- if (pci->ops->read_dbi)
+ if (pci->ops && pci->ops->read_dbi)
return pci->ops->read_dbi(pci, pci->dbi_base, reg, size);
ret = dw_pcie_read(pci->dbi_base + reg, size, &val);
@@ -156,7 +156,7 @@ void dw_pcie_write_dbi(struct dw_pcie *pci, u32 reg, size_t size, u32 val)
{
int ret;
- if (pci->ops->write_dbi) {
+ if (pci->ops && pci->ops->write_dbi) {
pci->ops->write_dbi(pci, pci->dbi_base, reg, size, val);
return;
}
@@ -171,7 +171,7 @@ void dw_pcie_write_dbi2(struct dw_pcie *pci, u32 reg, size_t size, u32 val)
{
int ret;
- if (pci->ops->write_dbi2) {
+ if (pci->ops && pci->ops->write_dbi2) {
pci->ops->write_dbi2(pci, pci->dbi_base2, reg, size, val);
return;
}
@@ -186,7 +186,7 @@ static u32 dw_pcie_readl_atu(struct dw_pcie *pci, u32 reg)
int ret;
u32 val;
- if (pci->ops->read_dbi)
+ if (pci->ops && pci->ops->read_dbi)
return pci->ops->read_dbi(pci, pci->atu_base, reg, 4);
ret = dw_pcie_read(pci->atu_base + reg, 4, &val);
@@ -200,7 +200,7 @@ static void dw_pcie_writel_atu(struct dw_pcie *pci, u32 reg, u32 val)
{
int ret;
- if (pci->ops->write_dbi) {
+ if (pci->ops && pci->ops->write_dbi) {
pci->ops->write_dbi(pci, pci->atu_base, reg, 4, val);
return;
}
@@ -225,6 +225,47 @@ static void dw_pcie_writel_ob_unroll(struct dw_pcie *pci, u32 index, u32 reg,
dw_pcie_writel_atu(pci, offset + reg, val);
}
+static inline u32 dw_pcie_enable_ecrc(u32 val)
+{
+ /*
+ * DesignWare core version 4.90A has a design issue where the 'TD'
+ * bit in the Control register-1 of the ATU outbound region acts
+ * like an override for the ECRC setting, i.e., the presence of TLP
+ * Digest (ECRC) in the outgoing TLPs is solely determined by this
+ * bit. This is contrary to the PCIe spec which says that the
+ * enablement of the ECRC is solely determined by the AER
+ * registers.
+ *
+ * Because of this, even when the ECRC is enabled through AER
+ * registers, the transactions going through ATU won't have TLP
+ * Digest as there is no way the PCI core AER code could program
+ * the TD bit which is specific to the DesignWare core.
+ *
+ * The best way to handle this scenario is to program the TD bit
+ * always. It affects only the traffic from root port to downstream
+ * devices.
+ *
+ * At this point,
+ * When ECRC is enabled in AER registers, everything works normally
+ * When ECRC is NOT enabled in AER registers, then,
+ * on Root Port:- TLP Digest (DWord size) gets appended to each packet
+ * even through it is not required. Since downstream
+ * TLPs are mostly for configuration accesses and BAR
+ * accesses, they are not in critical path and won't
+ * have much negative effect on the performance.
+ * on End Point:- TLP Digest is received for some/all the packets coming
+ * from the root port. TLP Digest is ignored because,
+ * as per the PCIe Spec r5.0 v1.0 section 2.2.3
+ * "TLP Digest Rules", when an endpoint receives TLP
+ * Digest when its ECRC check functionality is disabled
+ * in AER registers, received TLP Digest is just ignored.
+ * Since there is no issue or error reported either side, best way to
+ * handle the scenario is to program TD bit by default.
+ */
+
+ return val | PCIE_ATU_TD;
+}
+
static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, u8 func_no,
int index, int type,
u64 cpu_addr, u64 pci_addr,
@@ -248,6 +289,8 @@ static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, u8 func_no,
val = type | PCIE_ATU_FUNC_NUM(func_no);
val = upper_32_bits(size - 1) ?
val | PCIE_ATU_INCREASE_REGION_SIZE : val;
+ if (pci->version == 0x490A)
+ val = dw_pcie_enable_ecrc(val);
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1, val);
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
PCIE_ATU_ENABLE);
@@ -273,7 +316,7 @@ static void __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no,
{
u32 retries, val;
- if (pci->ops->cpu_addr_fixup)
+ if (pci->ops && pci->ops->cpu_addr_fixup)
cpu_addr = pci->ops->cpu_addr_fixup(pci, cpu_addr);
if (pci->iatu_unroll_enabled) {
@@ -290,12 +333,19 @@ static void __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no,
upper_32_bits(cpu_addr));
dw_pcie_writel_dbi(pci, PCIE_ATU_LIMIT,
lower_32_bits(cpu_addr + size - 1));
+ if (pci->version >= 0x460A)
+ dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_LIMIT,
+ upper_32_bits(cpu_addr + size - 1));
dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_TARGET,
lower_32_bits(pci_addr));
dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_TARGET,
upper_32_bits(pci_addr));
- dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, type |
- PCIE_ATU_FUNC_NUM(func_no));
+ val = type | PCIE_ATU_FUNC_NUM(func_no);
+ val = ((upper_32_bits(size - 1)) && (pci->version >= 0x460A)) ?
+ val | PCIE_ATU_INCREASE_REGION_SIZE : val;
+ if (pci->version == 0x490A)
+ val = dw_pcie_enable_ecrc(val);
+ dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, val);
dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, PCIE_ATU_ENABLE);
/*
@@ -321,7 +371,7 @@ void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
int type, u64 cpu_addr, u64 pci_addr,
- u32 size)
+ u64 size)
{
__dw_pcie_prog_outbound_atu(pci, func_no, index, type,
cpu_addr, pci_addr, size);
@@ -481,7 +531,7 @@ int dw_pcie_link_up(struct dw_pcie *pci)
{
u32 val;
- if (pci->ops->link_up)
+ if (pci->ops && pci->ops->link_up)
return pci->ops->link_up(pci);
val = readl(pci->dbi_base + PCIE_PORT_DEBUG1);
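For reference, the AER-side ECRC enablement the long comment above refers to
follows the pattern of pcie_set_ecrc_checking() in drivers/pci/pcie/aer.c,
sketched here from the PCI_ERR_CAP bits; the workaround exists because none of
this generic code can reach the DWC-private TD bit:

	u32 reg32;
	int aer = dev->aer_cap;	/* assumes 'dev' is a struct pci_dev with AER */

	pci_read_config_dword(dev, aer + PCI_ERR_CAP, &reg32);
	if (reg32 & PCI_ERR_CAP_ECRC_GENC)	/* generation capable? */
		reg32 |= PCI_ERR_CAP_ECRC_GENE;	/* then enable generation */
	if (reg32 & PCI_ERR_CAP_ECRC_CHKC)	/* checking capable? */
		reg32 |= PCI_ERR_CAP_ECRC_CHKE;	/* then enable checking */
	pci_write_config_dword(dev, aer + PCI_ERR_CAP, reg32);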
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index 0207840756c4..7247c8b01f04 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -86,6 +86,7 @@
#define PCIE_ATU_TYPE_IO 0x2
#define PCIE_ATU_TYPE_CFG0 0x4
#define PCIE_ATU_TYPE_CFG1 0x5
+#define PCIE_ATU_TD BIT(8)
#define PCIE_ATU_FUNC_NUM(pf) ((pf) << 20)
#define PCIE_ATU_CR2 0x908
#define PCIE_ATU_ENABLE BIT(31)
@@ -99,6 +100,7 @@
#define PCIE_ATU_DEV(x) FIELD_PREP(GENMASK(23, 19), x)
#define PCIE_ATU_FUNC(x) FIELD_PREP(GENMASK(18, 16), x)
#define PCIE_ATU_UPPER_TARGET 0x91C
+#define PCIE_ATU_UPPER_LIMIT 0x924
#define PCIE_MISC_CONTROL_1_OFF 0x8BC
#define PCIE_DBI_RO_WR_EN BIT(0)
@@ -297,7 +299,7 @@ void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index,
u64 size);
void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
int type, u64 cpu_addr, u64 pci_addr,
- u32 size);
+ u64 size);
int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
int bar, u64 cpu_addr,
enum dw_pcie_as_type as_type);
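With the size parameter widened to u64, an endpoint outbound window larger than
4 GiB becomes expressible. A minimal sketch; the index, type and addresses are
placeholders:

	/* 8 GiB window: for core versions >= 4.60a the ATU code sets
	 * PCIE_ATU_INCREASE_REGION_SIZE and programs the upper limit
	 * bits through the new PCIE_ATU_UPPER_LIMIT register. */
	dw_pcie_prog_ep_outbound_atu(pci, func_no, 0, PCIE_ATU_TYPE_MEM,
				     cpu_addr, pci_addr,
				     0x200000000ULL /* 8 GiB */);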
diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index affa2713bf80..8a7a300163e5 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -159,8 +159,10 @@ struct qcom_pcie_resources_2_3_3 {
struct reset_control *rst[7];
};
+/* 6 clocks typically, 7 for sm8250 */
struct qcom_pcie_resources_2_7_0 {
- struct clk_bulk_data clks[6];
+ struct clk_bulk_data clks[7];
+ int num_clks;
struct regulator_bulk_data supplies[2];
struct reset_control *pci_reset;
struct clk *pipe_clk;
@@ -398,7 +400,9 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
/* enable external reference clock */
val = readl(pcie->parf + PCIE20_PARF_PHY_REFCLK);
- val &= ~PHY_REFCLK_USE_PAD;
+ /* USE_PAD is required only for ipq806x */
+ if (!of_device_is_compatible(node, "qcom,pcie-apq8064"))
+ val &= ~PHY_REFCLK_USE_PAD;
val |= PHY_REFCLK_SSP_EN;
writel(val, pcie->parf + PCIE20_PARF_PHY_REFCLK);
@@ -1152,8 +1156,14 @@ static int qcom_pcie_get_resources_2_7_0(struct qcom_pcie *pcie)
res->clks[3].id = "bus_slave";
res->clks[4].id = "slave_q2a";
res->clks[5].id = "tbu";
+ if (of_device_is_compatible(dev->of_node, "qcom,pcie-sm8250")) {
+ res->clks[6].id = "ddrss_sf_tbu";
+ res->num_clks = 7;
+ } else {
+ res->num_clks = 6;
+ }
- ret = devm_clk_bulk_get(dev, ARRAY_SIZE(res->clks), res->clks);
+ ret = devm_clk_bulk_get(dev, res->num_clks, res->clks);
if (ret < 0)
return ret;
@@ -1175,7 +1185,7 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
return ret;
}
- ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
+ ret = clk_bulk_prepare_enable(res->num_clks, res->clks);
if (ret < 0)
goto err_disable_regulators;
@@ -1227,7 +1237,7 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
return 0;
err_disable_clocks:
- clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
+ clk_bulk_disable_unprepare(res->num_clks, res->clks);
err_disable_regulators:
regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
@@ -1238,7 +1248,7 @@ static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
- clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
+ clk_bulk_disable_unprepare(res->num_clks, res->clks);
regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
}
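[Editor's note] The clks[] array is now sized for the largest case (7) while num_clks records how many entries were actually populated; keeping ARRAY_SIZE(res->clks) would have requested a seventh, never-initialized clock on the six-clock SoCs. A sketch of the runtime-sized clk_bulk pattern, with placeholder clock ids and compatible string:

    res->clks[0].id = "core";              /* placeholder ids */
    res->clks[1].id = "bus";
    res->num_clks = 2;
    if (of_device_is_compatible(dev->of_node, "vendor,variant-soc"))
        res->clks[res->num_clks++].id = "extra";
    ret = devm_clk_bulk_get(dev, res->num_clks, res->clks);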
diff --git a/drivers/pci/controller/pci-host-common.c b/drivers/pci/controller/pci-host-common.c
index 6ce34a1deecb..6ab694f8d283 100644
--- a/drivers/pci/controller/pci-host-common.c
+++ b/drivers/pci/controller/pci-host-common.c
@@ -64,6 +64,8 @@ int pci_host_common_probe(struct platform_device *pdev)
if (!bridge)
return -ENOMEM;
+ platform_set_drvdata(pdev, bridge);
+
of_pci_check_probe_only();
/* Parse and map our Configuration Space windows */
@@ -78,8 +80,6 @@ int pci_host_common_probe(struct platform_device *pdev)
bridge->sysdata = cfg;
bridge->ops = (struct pci_ops *)&ops->pci_ops;
- platform_set_drvdata(pdev, bridge);
-
return pci_host_probe(bridge);
}
EXPORT_SYMBOL_GPL(pci_host_common_probe);
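[Editor's note] Setting drvdata before the ECAM setup matters because a pci_ecam_ops ->init() callback runs during that setup and may want the bridge; the new Microchip driver later in this merge does exactly that in mc_pcie_setup_windows(). What an init callback can now rely on:

    /* Inside a pci_ecam_ops ->init() callback; cfg->parent is the
     * platform device's struct device.
     */
    struct platform_device *pdev = to_platform_device(cfg->parent);
    struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
    /* bridge->windows (parsed from DT ranges at allocation time) is
     * already usable here thanks to the reorder above.
     */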
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index 87aa62ee0368..27a17a1e4a7c 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -1714,7 +1714,7 @@ static void prepopulate_bars(struct hv_pcibus_device *hbus)
* resumed and suspended again: see hibernation_snapshot() and
* hibernation_platform_enter().
*
- * If the memory enable bit is already set, Hyper-V sliently ignores
+ * If the memory enable bit is already set, Hyper-V silently ignores
* the below BAR updates, and the related PCI device driver can not
* work, because reading from the device register(s) always returns
* 0xFFFFFFFF.
diff --git a/drivers/pci/controller/pci-xgene-msi.c b/drivers/pci/controller/pci-xgene-msi.c
index 2470782cb01a..1c34c897a7e2 100644
--- a/drivers/pci/controller/pci-xgene-msi.c
+++ b/drivers/pci/controller/pci-xgene-msi.c
@@ -384,13 +384,9 @@ static int xgene_msi_hwirq_alloc(unsigned int cpu)
if (!msi_group->gic_irq)
continue;
- irq_set_chained_handler(msi_group->gic_irq,
- xgene_msi_isr);
- err = irq_set_handler_data(msi_group->gic_irq, msi_group);
- if (err) {
- pr_err("failed to register GIC IRQ handler\n");
- return -EINVAL;
- }
+ irq_set_chained_handler_and_data(msi_group->gic_irq,
+ xgene_msi_isr, msi_group);
+
/*
* Statically allocate MSI GIC IRQs to each CPU core.
* With 8-core X-Gene v1, 2 MSI GIC IRQs are allocated
diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c
index 85e7c98265e8..2afdc865253e 100644
--- a/drivers/pci/controller/pci-xgene.c
+++ b/drivers/pci/controller/pci-xgene.c
@@ -173,12 +173,13 @@ static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn,
/*
* The v1 controller has a bug in its Configuration Request
- * Retry Status (CRS) logic: when CRS is enabled and we read the
- * Vendor and Device ID of a non-existent device, the controller
- * fabricates return data of 0xFFFF0001 ("device exists but is not
- * ready") instead of 0xFFFFFFFF ("device does not exist"). This
- * causes the PCI core to retry the read until it times out.
- * Avoid this by not claiming to support CRS.
+ * Retry Status (CRS) logic: when CRS Software Visibility is
+ * enabled and we read the Vendor and Device ID of a non-existent
+ * device, the controller fabricates return data of 0xFFFF0001
+ * ("device exists but is not ready") instead of 0xFFFFFFFF
+ * ("device does not exist"). This causes the PCI core to retry
+ * the read until it times out. Avoid this by not claiming to
+ * support CRS SV.
*/
if (pci_is_root_bus(bus) && (port->version == XGENE_PCIE_IP_VER_1) &&
((where & ~0x3) == XGENE_V1_PCI_EXP_CAP + PCI_EXP_RTCTL))
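[Editor's note] The hunk ends before the statement this condition guards. In the driver, the workaround is to mask the CRS Software Visibility capability bit out of reads of the Root Capabilities register, which shares this dword with Root Control, so the PCI core never enables CRS SV. A sketch of that guarded statement, shown here for context rather than taken from the hunk:

    *val &= ~(PCI_EXP_RTCAP_CRSVIS << 16);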
diff --git a/drivers/pci/controller/pcie-altera-msi.c b/drivers/pci/controller/pcie-altera-msi.c
index e1636f7714ca..42691dd8ebef 100644
--- a/drivers/pci/controller/pcie-altera-msi.c
+++ b/drivers/pci/controller/pcie-altera-msi.c
@@ -204,8 +204,7 @@ static int altera_msi_remove(struct platform_device *pdev)
struct altera_msi *msi = platform_get_drvdata(pdev);
msi_writel(msi, 0, MSI_INTMASK);
- irq_set_chained_handler(msi->irq, NULL);
- irq_set_handler_data(msi->irq, NULL);
+ irq_set_chained_handler_and_data(msi->irq, NULL, NULL);
altera_free_domains(msi);
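[Editor's note] Here and in the brcmstb hunk below, the two-step install (irq_set_chained_handler() then irq_set_handler_data()) is collapsed into irq_set_chained_handler_and_data(), which updates both under the descriptor lock; the two-step form leaves a window in which the chained handler can fire before its data is set, the race fixed in xgene-msi above. The pattern, with placeholder handler and data:

    /* install: handler and its data become visible together */
    irq_set_chained_handler_and_data(irq, my_chained_isr, my_data);
    /* ... later, on teardown, detach both atomically ... */
    irq_set_chained_handler_and_data(irq, NULL, NULL);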
diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
index d41257f43a8f..e330e6811f0b 100644
--- a/drivers/pci/controller/pcie-brcmstb.c
+++ b/drivers/pci/controller/pcie-brcmstb.c
@@ -97,6 +97,7 @@
#define PCIE_MISC_REVISION 0x406c
#define BRCM_PCIE_HW_REV_33 0x0303
+#define BRCM_PCIE_HW_REV_3_20 0x0320
#define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_BASE_LIMIT 0x4070
#define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_BASE_LIMIT_LIMIT_MASK 0xfff00000
@@ -187,6 +188,7 @@
struct brcm_pcie;
static inline void brcm_pcie_bridge_sw_init_set_7278(struct brcm_pcie *pcie, u32 val);
static inline void brcm_pcie_bridge_sw_init_set_generic(struct brcm_pcie *pcie, u32 val);
+static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val);
static inline void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val);
static inline void brcm_pcie_perst_set_generic(struct brcm_pcie *pcie, u32 val);
@@ -203,6 +205,7 @@ enum {
enum pcie_type {
GENERIC,
+ BCM4908,
BCM7278,
BCM2711,
};
@@ -227,6 +230,13 @@ static const struct pcie_cfg_data generic_cfg = {
.bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic,
};
+static const struct pcie_cfg_data bcm4908_cfg = {
+ .offsets = pcie_offsets,
+ .type = BCM4908,
+ .perst_set = brcm_pcie_perst_set_4908,
+ .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic,
+};
+
static const int pcie_offset_bcm7278[] = {
[RGR1_SW_INIT_1] = 0xc010,
[EXT_CFG_INDEX] = 0x9000,
@@ -279,6 +289,7 @@ struct brcm_pcie {
const int *reg_offsets;
enum pcie_type type;
struct reset_control *rescal;
+ struct reset_control *perst_reset;
int num_memc;
u64 memc_size[PCIE_BRCM_MAX_MEMC];
u32 hw_rev;
@@ -603,8 +614,7 @@ static void brcm_msi_remove(struct brcm_pcie *pcie)
if (!msi)
return;
- irq_set_chained_handler(msi->irq, NULL);
- irq_set_handler_data(msi->irq, NULL);
+ irq_set_chained_handler_and_data(msi->irq, NULL, NULL);
brcm_free_domains(msi);
}
@@ -735,6 +745,17 @@ static inline void brcm_pcie_bridge_sw_init_set_7278(struct brcm_pcie *pcie, u32
writel(tmp, pcie->base + PCIE_RGR1_SW_INIT_1(pcie));
}
+static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val)
+{
+ if (WARN_ONCE(!pcie->perst_reset, "missing PERST# reset controller\n"))
+ return;
+
+ if (val)
+ reset_control_assert(pcie->perst_reset);
+ else
+ reset_control_deassert(pcie->perst_reset);
+}
+
static inline void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val)
{
u32 tmp;
@@ -1194,6 +1215,7 @@ static int brcm_pcie_remove(struct platform_device *pdev)
static const struct of_device_id brcm_pcie_match[] = {
{ .compatible = "brcm,bcm2711-pcie", .data = &bcm2711_cfg },
+ { .compatible = "brcm,bcm4908-pcie", .data = &bcm4908_cfg },
{ .compatible = "brcm,bcm7211-pcie", .data = &generic_cfg },
{ .compatible = "brcm,bcm7278-pcie", .data = &bcm7278_cfg },
{ .compatible = "brcm,bcm7216-pcie", .data = &bcm7278_cfg },
@@ -1250,6 +1272,11 @@ static int brcm_pcie_probe(struct platform_device *pdev)
clk_disable_unprepare(pcie->clk);
return PTR_ERR(pcie->rescal);
}
+ pcie->perst_reset = devm_reset_control_get_optional_exclusive(&pdev->dev, "perst");
+ if (IS_ERR(pcie->perst_reset)) {
+ clk_disable_unprepare(pcie->clk);
+ return PTR_ERR(pcie->perst_reset);
+ }
ret = reset_control_deassert(pcie->rescal);
if (ret)
@@ -1267,6 +1294,10 @@ static int brcm_pcie_probe(struct platform_device *pdev)
goto fail;
pcie->hw_rev = readl(pcie->base + PCIE_MISC_REVISION);
+ if (pcie->type == BCM4908 && pcie->hw_rev >= BRCM_PCIE_HW_REV_3_20) {
+ dev_err(pcie->dev, "hardware revision with unsupported PERST# setup\n");
+ goto fail;
+ }
msi_np = of_parse_phandle(pcie->np, "msi-parent", 0);
if (pci_msi_enabled() && msi_np == pcie->np) {
diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
index cf4c18f0c25a..23548b517e4b 100644
--- a/drivers/pci/controller/pcie-mediatek.c
+++ b/drivers/pci/controller/pcie-mediatek.c
@@ -1035,14 +1035,14 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
err = of_pci_get_devfn(child);
if (err < 0) {
dev_err(dev, "failed to parse devfn: %d\n", err);
- return err;
+ goto error_put_node;
}
slot = PCI_SLOT(err);
err = mtk_pcie_parse_port(pcie, child, slot);
if (err)
- return err;
+ goto error_put_node;
}
err = mtk_pcie_subsys_powerup(pcie);
@@ -1058,6 +1058,9 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
mtk_pcie_subsys_powerdown(pcie);
return 0;
+error_put_node:
+ of_node_put(child);
+ return err;
}
static int mtk_pcie_probe(struct platform_device *pdev)
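[Editor's note] The leak being fixed: for_each_available_child_of_node() takes a reference on each child it yields and only drops it when the loop advances, so returning from inside the loop leaks the current child. The shape of the fix, sketched with a placeholder helper:

    for_each_available_child_of_node(node, child) {
        err = parse_one(child);        /* parse_one() is hypothetical */
        if (err) {
            of_node_put(child);        /* drop the iterator's reference */
            return err;
        }
    }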
diff --git a/drivers/pci/controller/pcie-microchip-host.c b/drivers/pci/controller/pcie-microchip-host.c
new file mode 100644
index 000000000000..04c19ff81aff
--- /dev/null
+++ b/drivers/pci/controller/pcie-microchip-host.c
@@ -0,0 +1,1138 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Microchip AXI PCIe Bridge host controller driver
+ *
+ * Copyright (c) 2018 - 2020 Microchip Corporation. All rights reserved.
+ *
+ * Author: Daire McNamara <daire.mcnamara@microchip.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/irqchip/chained_irq.h>
+#include <linux/module.h>
+#include <linux/msi.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_pci.h>
+#include <linux/pci-ecam.h>
+#include <linux/platform_device.h>
+
+#include "../pci.h"
+
+/* Number of MSI IRQs */
+#define MC_NUM_MSI_IRQS 32
+#define MC_NUM_MSI_IRQS_CODED 5
+
+/* PCIe Bridge Phy and Controller Phy offsets */
+#define MC_PCIE1_BRIDGE_ADDR 0x00008000u
+#define MC_PCIE1_CTRL_ADDR 0x0000a000u
+
+#define MC_PCIE_BRIDGE_ADDR (MC_PCIE1_BRIDGE_ADDR)
+#define MC_PCIE_CTRL_ADDR (MC_PCIE1_CTRL_ADDR)
+
+/* PCIe Controller Phy Regs */
+#define SEC_ERROR_CNT 0x20
+#define DED_ERROR_CNT 0x24
+#define SEC_ERROR_INT 0x28
+#define SEC_ERROR_INT_TX_RAM_SEC_ERR_INT GENMASK(3, 0)
+#define SEC_ERROR_INT_RX_RAM_SEC_ERR_INT GENMASK(7, 4)
+#define SEC_ERROR_INT_PCIE2AXI_RAM_SEC_ERR_INT GENMASK(11, 8)
+#define SEC_ERROR_INT_AXI2PCIE_RAM_SEC_ERR_INT GENMASK(15, 12)
+#define NUM_SEC_ERROR_INTS (4)
+#define SEC_ERROR_INT_MASK 0x2c
+#define DED_ERROR_INT 0x30
+#define DED_ERROR_INT_TX_RAM_DED_ERR_INT GENMASK(3, 0)
+#define DED_ERROR_INT_RX_RAM_DED_ERR_INT GENMASK(7, 4)
+#define DED_ERROR_INT_PCIE2AXI_RAM_DED_ERR_INT GENMASK(11, 8)
+#define DED_ERROR_INT_AXI2PCIE_RAM_DED_ERR_INT GENMASK(15, 12)
+#define NUM_DED_ERROR_INTS (4)
+#define DED_ERROR_INT_MASK 0x34
+#define ECC_CONTROL 0x38
+#define ECC_CONTROL_TX_RAM_INJ_ERROR_0 BIT(0)
+#define ECC_CONTROL_TX_RAM_INJ_ERROR_1 BIT(1)
+#define ECC_CONTROL_TX_RAM_INJ_ERROR_2 BIT(2)
+#define ECC_CONTROL_TX_RAM_INJ_ERROR_3 BIT(3)
+#define ECC_CONTROL_RX_RAM_INJ_ERROR_0 BIT(4)
+#define ECC_CONTROL_RX_RAM_INJ_ERROR_1 BIT(5)
+#define ECC_CONTROL_RX_RAM_INJ_ERROR_2 BIT(6)
+#define ECC_CONTROL_RX_RAM_INJ_ERROR_3 BIT(7)
+#define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_0 BIT(8)
+#define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_1 BIT(9)
+#define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_2 BIT(10)
+#define ECC_CONTROL_PCIE2AXI_RAM_INJ_ERROR_3 BIT(11)
+#define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_0 BIT(12)
+#define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_1 BIT(13)
+#define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_2 BIT(14)
+#define ECC_CONTROL_AXI2PCIE_RAM_INJ_ERROR_3 BIT(15)
+#define ECC_CONTROL_TX_RAM_ECC_BYPASS BIT(24)
+#define ECC_CONTROL_RX_RAM_ECC_BYPASS BIT(25)
+#define ECC_CONTROL_PCIE2AXI_RAM_ECC_BYPASS BIT(26)
+#define ECC_CONTROL_AXI2PCIE_RAM_ECC_BYPASS BIT(27)
+#define LTSSM_STATE 0x5c
+#define LTSSM_L0_STATE 0x10
+#define PCIE_EVENT_INT 0x14c
+#define PCIE_EVENT_INT_L2_EXIT_INT BIT(0)
+#define PCIE_EVENT_INT_HOTRST_EXIT_INT BIT(1)
+#define PCIE_EVENT_INT_DLUP_EXIT_INT BIT(2)
+#define PCIE_EVENT_INT_MASK GENMASK(2, 0)
+#define PCIE_EVENT_INT_L2_EXIT_INT_MASK BIT(16)
+#define PCIE_EVENT_INT_HOTRST_EXIT_INT_MASK BIT(17)
+#define PCIE_EVENT_INT_DLUP_EXIT_INT_MASK BIT(18)
+#define PCIE_EVENT_INT_ENB_MASK GENMASK(18, 16)
+#define PCIE_EVENT_INT_ENB_SHIFT 16
+#define NUM_PCIE_EVENTS (3)
+
+/* PCIe Bridge Phy Regs */
+#define PCIE_PCI_IDS_DW1 0x9c
+
+/* PCIe Config space MSI capability structure */
+#define MC_MSI_CAP_CTRL_OFFSET 0xe0u
+#define MC_MSI_MAX_Q_AVAIL (MC_NUM_MSI_IRQS_CODED << 1)
+#define MC_MSI_Q_SIZE (MC_NUM_MSI_IRQS_CODED << 4)
+
+#define IMASK_LOCAL 0x180
+#define DMA_END_ENGINE_0_MASK 0x00000000u
+#define DMA_END_ENGINE_0_SHIFT 0
+#define DMA_END_ENGINE_1_MASK 0x00000000u
+#define DMA_END_ENGINE_1_SHIFT 1
+#define DMA_ERROR_ENGINE_0_MASK 0x00000100u
+#define DMA_ERROR_ENGINE_0_SHIFT 8
+#define DMA_ERROR_ENGINE_1_MASK 0x00000200u
+#define DMA_ERROR_ENGINE_1_SHIFT 9
+#define A_ATR_EVT_POST_ERR_MASK 0x00010000u
+#define A_ATR_EVT_POST_ERR_SHIFT 16
+#define A_ATR_EVT_FETCH_ERR_MASK 0x00020000u
+#define A_ATR_EVT_FETCH_ERR_SHIFT 17
+#define A_ATR_EVT_DISCARD_ERR_MASK 0x00040000u
+#define A_ATR_EVT_DISCARD_ERR_SHIFT 18
+#define A_ATR_EVT_DOORBELL_MASK 0x00000000u
+#define A_ATR_EVT_DOORBELL_SHIFT 19
+#define P_ATR_EVT_POST_ERR_MASK 0x00100000u
+#define P_ATR_EVT_POST_ERR_SHIFT 20
+#define P_ATR_EVT_FETCH_ERR_MASK 0x00200000u
+#define P_ATR_EVT_FETCH_ERR_SHIFT 21
+#define P_ATR_EVT_DISCARD_ERR_MASK 0x00400000u
+#define P_ATR_EVT_DISCARD_ERR_SHIFT 22
+#define P_ATR_EVT_DOORBELL_MASK 0x00000000u
+#define P_ATR_EVT_DOORBELL_SHIFT 23
+#define PM_MSI_INT_INTA_MASK 0x01000000u
+#define PM_MSI_INT_INTA_SHIFT 24
+#define PM_MSI_INT_INTB_MASK 0x02000000u
+#define PM_MSI_INT_INTB_SHIFT 25
+#define PM_MSI_INT_INTC_MASK 0x04000000u
+#define PM_MSI_INT_INTC_SHIFT 26
+#define PM_MSI_INT_INTD_MASK 0x08000000u
+#define PM_MSI_INT_INTD_SHIFT 27
+#define PM_MSI_INT_INTX_MASK 0x0f000000u
+#define PM_MSI_INT_INTX_SHIFT 24
+#define PM_MSI_INT_MSI_MASK 0x10000000u
+#define PM_MSI_INT_MSI_SHIFT 28
+#define PM_MSI_INT_AER_EVT_MASK 0x20000000u
+#define PM_MSI_INT_AER_EVT_SHIFT 29
+#define PM_MSI_INT_EVENTS_MASK 0x40000000u
+#define PM_MSI_INT_EVENTS_SHIFT 30
+#define PM_MSI_INT_SYS_ERR_MASK 0x80000000u
+#define PM_MSI_INT_SYS_ERR_SHIFT 31
+#define NUM_LOCAL_EVENTS 15
+#define ISTATUS_LOCAL 0x184
+#define IMASK_HOST 0x188
+#define ISTATUS_HOST 0x18c
+#define MSI_ADDR 0x190
+#define ISTATUS_MSI 0x194
+
+/* PCIe Master table init defines */
+#define ATR0_PCIE_WIN0_SRCADDR_PARAM 0x600u
+#define ATR0_PCIE_ATR_SIZE 0x25
+#define ATR0_PCIE_ATR_SIZE_SHIFT 1
+#define ATR0_PCIE_WIN0_SRC_ADDR 0x604u
+#define ATR0_PCIE_WIN0_TRSL_ADDR_LSB 0x608u
+#define ATR0_PCIE_WIN0_TRSL_ADDR_UDW 0x60cu
+#define ATR0_PCIE_WIN0_TRSL_PARAM 0x610u
+
+/* PCIe AXI slave table init defines */
+#define ATR0_AXI4_SLV0_SRCADDR_PARAM 0x800u
+#define ATR_SIZE_SHIFT 1
+#define ATR_IMPL_ENABLE 1
+#define ATR0_AXI4_SLV0_SRC_ADDR 0x804u
+#define ATR0_AXI4_SLV0_TRSL_ADDR_LSB 0x808u
+#define ATR0_AXI4_SLV0_TRSL_ADDR_UDW 0x80cu
+#define ATR0_AXI4_SLV0_TRSL_PARAM 0x810u
+#define PCIE_TX_RX_INTERFACE 0x00000000u
+#define PCIE_CONFIG_INTERFACE 0x00000001u
+
+#define ATR_ENTRY_SIZE 32
+
+#define EVENT_PCIE_L2_EXIT 0
+#define EVENT_PCIE_HOTRST_EXIT 1
+#define EVENT_PCIE_DLUP_EXIT 2
+#define EVENT_SEC_TX_RAM_SEC_ERR 3
+#define EVENT_SEC_RX_RAM_SEC_ERR 4
+#define EVENT_SEC_AXI2PCIE_RAM_SEC_ERR 5
+#define EVENT_SEC_PCIE2AXI_RAM_SEC_ERR 6
+#define EVENT_DED_TX_RAM_DED_ERR 7
+#define EVENT_DED_RX_RAM_DED_ERR 8
+#define EVENT_DED_AXI2PCIE_RAM_DED_ERR 9
+#define EVENT_DED_PCIE2AXI_RAM_DED_ERR 10
+#define EVENT_LOCAL_DMA_END_ENGINE_0 11
+#define EVENT_LOCAL_DMA_END_ENGINE_1 12
+#define EVENT_LOCAL_DMA_ERROR_ENGINE_0 13
+#define EVENT_LOCAL_DMA_ERROR_ENGINE_1 14
+#define EVENT_LOCAL_A_ATR_EVT_POST_ERR 15
+#define EVENT_LOCAL_A_ATR_EVT_FETCH_ERR 16
+#define EVENT_LOCAL_A_ATR_EVT_DISCARD_ERR 17
+#define EVENT_LOCAL_A_ATR_EVT_DOORBELL 18
+#define EVENT_LOCAL_P_ATR_EVT_POST_ERR 19
+#define EVENT_LOCAL_P_ATR_EVT_FETCH_ERR 20
+#define EVENT_LOCAL_P_ATR_EVT_DISCARD_ERR 21
+#define EVENT_LOCAL_P_ATR_EVT_DOORBELL 22
+#define EVENT_LOCAL_PM_MSI_INT_INTX 23
+#define EVENT_LOCAL_PM_MSI_INT_MSI 24
+#define EVENT_LOCAL_PM_MSI_INT_AER_EVT 25
+#define EVENT_LOCAL_PM_MSI_INT_EVENTS 26
+#define EVENT_LOCAL_PM_MSI_INT_SYS_ERR 27
+#define NUM_EVENTS 28
+
+#define PCIE_EVENT_CAUSE(x, s) \
+ [EVENT_PCIE_ ## x] = { __stringify(x), s }
+
+#define SEC_ERROR_CAUSE(x, s) \
+ [EVENT_SEC_ ## x] = { __stringify(x), s }
+
+#define DED_ERROR_CAUSE(x, s) \
+ [EVENT_DED_ ## x] = { __stringify(x), s }
+
+#define LOCAL_EVENT_CAUSE(x, s) \
+ [EVENT_LOCAL_ ## x] = { __stringify(x), s }
+
+#define PCIE_EVENT(x) \
+ .base = MC_PCIE_CTRL_ADDR, \
+ .offset = PCIE_EVENT_INT, \
+ .mask_offset = PCIE_EVENT_INT, \
+ .mask_high = 1, \
+ .mask = PCIE_EVENT_INT_ ## x ## _INT, \
+ .enb_mask = PCIE_EVENT_INT_ENB_MASK
+
+#define SEC_EVENT(x) \
+ .base = MC_PCIE_CTRL_ADDR, \
+ .offset = SEC_ERROR_INT, \
+ .mask_offset = SEC_ERROR_INT_MASK, \
+ .mask = SEC_ERROR_INT_ ## x ## _INT, \
+ .mask_high = 1, \
+ .enb_mask = 0
+
+#define DED_EVENT(x) \
+ .base = MC_PCIE_CTRL_ADDR, \
+ .offset = DED_ERROR_INT, \
+ .mask_offset = DED_ERROR_INT_MASK, \
+ .mask_high = 1, \
+ .mask = DED_ERROR_INT_ ## x ## _INT, \
+ .enb_mask = 0
+
+#define LOCAL_EVENT(x) \
+ .base = MC_PCIE_BRIDGE_ADDR, \
+ .offset = ISTATUS_LOCAL, \
+ .mask_offset = IMASK_LOCAL, \
+ .mask_high = 0, \
+ .mask = x ## _MASK, \
+ .enb_mask = 0
+
+#define PCIE_EVENT_TO_EVENT_MAP(x) \
+ { PCIE_EVENT_INT_ ## x ## _INT, EVENT_PCIE_ ## x }
+
+#define SEC_ERROR_TO_EVENT_MAP(x) \
+ { SEC_ERROR_INT_ ## x ## _INT, EVENT_SEC_ ## x }
+
+#define DED_ERROR_TO_EVENT_MAP(x) \
+ { DED_ERROR_INT_ ## x ## _INT, EVENT_DED_ ## x }
+
+#define LOCAL_STATUS_TO_EVENT_MAP(x) \
+ { x ## _MASK, EVENT_LOCAL_ ## x }
+
+struct event_map {
+ u32 reg_mask;
+ u32 event_bit;
+};
+
+struct mc_msi {
+ struct mutex lock; /* Protect used bitmap */
+ struct irq_domain *msi_domain;
+ struct irq_domain *dev_domain;
+ u32 num_vectors;
+ u64 vector_phy;
+ DECLARE_BITMAP(used, MC_NUM_MSI_IRQS);
+};
+
+struct mc_port {
+ void __iomem *axi_base_addr;
+ struct device *dev;
+ struct irq_domain *intx_domain;
+ struct irq_domain *event_domain;
+ raw_spinlock_t lock;
+ struct mc_msi msi;
+};
+
+struct cause {
+ const char *sym;
+ const char *str;
+};
+
+static const struct cause event_cause[NUM_EVENTS] = {
+ PCIE_EVENT_CAUSE(L2_EXIT, "L2 exit event"),
+ PCIE_EVENT_CAUSE(HOTRST_EXIT, "Hot reset exit event"),
+ PCIE_EVENT_CAUSE(DLUP_EXIT, "DLUP exit event"),
+ SEC_ERROR_CAUSE(TX_RAM_SEC_ERR, "sec error in tx buffer"),
+ SEC_ERROR_CAUSE(RX_RAM_SEC_ERR, "sec error in rx buffer"),
+ SEC_ERROR_CAUSE(PCIE2AXI_RAM_SEC_ERR, "sec error in pcie2axi buffer"),
+ SEC_ERROR_CAUSE(AXI2PCIE_RAM_SEC_ERR, "sec error in axi2pcie buffer"),
+ DED_ERROR_CAUSE(TX_RAM_DED_ERR, "ded error in tx buffer"),
+ DED_ERROR_CAUSE(RX_RAM_DED_ERR, "ded error in rx buffer"),
+ DED_ERROR_CAUSE(PCIE2AXI_RAM_DED_ERR, "ded error in pcie2axi buffer"),
+ DED_ERROR_CAUSE(AXI2PCIE_RAM_DED_ERR, "ded error in axi2pcie buffer"),
+ LOCAL_EVENT_CAUSE(DMA_ERROR_ENGINE_0, "dma engine 0 error"),
+ LOCAL_EVENT_CAUSE(DMA_ERROR_ENGINE_1, "dma engine 1 error"),
+ LOCAL_EVENT_CAUSE(A_ATR_EVT_POST_ERR, "axi write request error"),
+ LOCAL_EVENT_CAUSE(A_ATR_EVT_FETCH_ERR, "axi read request error"),
+ LOCAL_EVENT_CAUSE(A_ATR_EVT_DISCARD_ERR, "axi read timeout"),
+ LOCAL_EVENT_CAUSE(P_ATR_EVT_POST_ERR, "pcie write request error"),
+ LOCAL_EVENT_CAUSE(P_ATR_EVT_FETCH_ERR, "pcie read request error"),
+ LOCAL_EVENT_CAUSE(P_ATR_EVT_DISCARD_ERR, "pcie read timeout"),
+ LOCAL_EVENT_CAUSE(PM_MSI_INT_AER_EVT, "aer event"),
+ LOCAL_EVENT_CAUSE(PM_MSI_INT_EVENTS, "pm/ltr/hotplug event"),
+ LOCAL_EVENT_CAUSE(PM_MSI_INT_SYS_ERR, "system error"),
+};
+
+struct event_map pcie_event_to_event[] = {
+ PCIE_EVENT_TO_EVENT_MAP(L2_EXIT),
+ PCIE_EVENT_TO_EVENT_MAP(HOTRST_EXIT),
+ PCIE_EVENT_TO_EVENT_MAP(DLUP_EXIT),
+};
+
+struct event_map sec_error_to_event[] = {
+ SEC_ERROR_TO_EVENT_MAP(TX_RAM_SEC_ERR),
+ SEC_ERROR_TO_EVENT_MAP(RX_RAM_SEC_ERR),
+ SEC_ERROR_TO_EVENT_MAP(PCIE2AXI_RAM_SEC_ERR),
+ SEC_ERROR_TO_EVENT_MAP(AXI2PCIE_RAM_SEC_ERR),
+};
+
+struct event_map ded_error_to_event[] = {
+ DED_ERROR_TO_EVENT_MAP(TX_RAM_DED_ERR),
+ DED_ERROR_TO_EVENT_MAP(RX_RAM_DED_ERR),
+ DED_ERROR_TO_EVENT_MAP(PCIE2AXI_RAM_DED_ERR),
+ DED_ERROR_TO_EVENT_MAP(AXI2PCIE_RAM_DED_ERR),
+};
+
+struct event_map local_status_to_event[] = {
+ LOCAL_STATUS_TO_EVENT_MAP(DMA_END_ENGINE_0),
+ LOCAL_STATUS_TO_EVENT_MAP(DMA_END_ENGINE_1),
+ LOCAL_STATUS_TO_EVENT_MAP(DMA_ERROR_ENGINE_0),
+ LOCAL_STATUS_TO_EVENT_MAP(DMA_ERROR_ENGINE_1),
+ LOCAL_STATUS_TO_EVENT_MAP(A_ATR_EVT_POST_ERR),
+ LOCAL_STATUS_TO_EVENT_MAP(A_ATR_EVT_FETCH_ERR),
+ LOCAL_STATUS_TO_EVENT_MAP(A_ATR_EVT_DISCARD_ERR),
+ LOCAL_STATUS_TO_EVENT_MAP(A_ATR_EVT_DOORBELL),
+ LOCAL_STATUS_TO_EVENT_MAP(P_ATR_EVT_POST_ERR),
+ LOCAL_STATUS_TO_EVENT_MAP(P_ATR_EVT_FETCH_ERR),
+ LOCAL_STATUS_TO_EVENT_MAP(P_ATR_EVT_DISCARD_ERR),
+ LOCAL_STATUS_TO_EVENT_MAP(P_ATR_EVT_DOORBELL),
+ LOCAL_STATUS_TO_EVENT_MAP(PM_MSI_INT_INTX),
+ LOCAL_STATUS_TO_EVENT_MAP(PM_MSI_INT_MSI),
+ LOCAL_STATUS_TO_EVENT_MAP(PM_MSI_INT_AER_EVT),
+ LOCAL_STATUS_TO_EVENT_MAP(PM_MSI_INT_EVENTS),
+ LOCAL_STATUS_TO_EVENT_MAP(PM_MSI_INT_SYS_ERR),
+};
+
+struct {
+ u32 base;
+ u32 offset;
+ u32 mask;
+ u32 shift;
+ u32 enb_mask;
+ u32 mask_high;
+ u32 mask_offset;
+} event_descs[] = {
+ { PCIE_EVENT(L2_EXIT) },
+ { PCIE_EVENT(HOTRST_EXIT) },
+ { PCIE_EVENT(DLUP_EXIT) },
+ { SEC_EVENT(TX_RAM_SEC_ERR) },
+ { SEC_EVENT(RX_RAM_SEC_ERR) },
+ { SEC_EVENT(PCIE2AXI_RAM_SEC_ERR) },
+ { SEC_EVENT(AXI2PCIE_RAM_SEC_ERR) },
+ { DED_EVENT(TX_RAM_DED_ERR) },
+ { DED_EVENT(RX_RAM_DED_ERR) },
+ { DED_EVENT(PCIE2AXI_RAM_DED_ERR) },
+ { DED_EVENT(AXI2PCIE_RAM_DED_ERR) },
+ { LOCAL_EVENT(DMA_END_ENGINE_0) },
+ { LOCAL_EVENT(DMA_END_ENGINE_1) },
+ { LOCAL_EVENT(DMA_ERROR_ENGINE_0) },
+ { LOCAL_EVENT(DMA_ERROR_ENGINE_1) },
+ { LOCAL_EVENT(A_ATR_EVT_POST_ERR) },
+ { LOCAL_EVENT(A_ATR_EVT_FETCH_ERR) },
+ { LOCAL_EVENT(A_ATR_EVT_DISCARD_ERR) },
+ { LOCAL_EVENT(A_ATR_EVT_DOORBELL) },
+ { LOCAL_EVENT(P_ATR_EVT_POST_ERR) },
+ { LOCAL_EVENT(P_ATR_EVT_FETCH_ERR) },
+ { LOCAL_EVENT(P_ATR_EVT_DISCARD_ERR) },
+ { LOCAL_EVENT(P_ATR_EVT_DOORBELL) },
+ { LOCAL_EVENT(PM_MSI_INT_INTX) },
+ { LOCAL_EVENT(PM_MSI_INT_MSI) },
+ { LOCAL_EVENT(PM_MSI_INT_AER_EVT) },
+ { LOCAL_EVENT(PM_MSI_INT_EVENTS) },
+ { LOCAL_EVENT(PM_MSI_INT_SYS_ERR) },
+};
+
+static char poss_clks[][5] = { "fic0", "fic1", "fic2", "fic3" };
+
+static void mc_pcie_enable_msi(struct mc_port *port, void __iomem *base)
+{
+ struct mc_msi *msi = &port->msi;
+ u32 cap_offset = MC_MSI_CAP_CTRL_OFFSET;
+ u16 msg_ctrl = readw_relaxed(base + cap_offset + PCI_MSI_FLAGS);
+
+ msg_ctrl |= PCI_MSI_FLAGS_ENABLE;
+ msg_ctrl &= ~PCI_MSI_FLAGS_QMASK;
+ msg_ctrl |= MC_MSI_MAX_Q_AVAIL;
+ msg_ctrl &= ~PCI_MSI_FLAGS_QSIZE;
+ msg_ctrl |= MC_MSI_Q_SIZE;
+ msg_ctrl |= PCI_MSI_FLAGS_64BIT;
+
+ writew_relaxed(msg_ctrl, base + cap_offset + PCI_MSI_FLAGS);
+
+ writel_relaxed(lower_32_bits(msi->vector_phy),
+ base + cap_offset + PCI_MSI_ADDRESS_LO);
+ writel_relaxed(upper_32_bits(msi->vector_phy),
+ base + cap_offset + PCI_MSI_ADDRESS_HI);
+}
+
+static void mc_handle_msi(struct irq_desc *desc)
+{
+ struct mc_port *port = irq_desc_get_handler_data(desc);
+ struct device *dev = port->dev;
+ struct mc_msi *msi = &port->msi;
+ void __iomem *bridge_base_addr =
+ port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ unsigned long status;
+ u32 bit;
+ u32 virq;
+
+ status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
+ if (status & PM_MSI_INT_MSI_MASK) {
+ status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
+ for_each_set_bit(bit, &status, msi->num_vectors) {
+ virq = irq_find_mapping(msi->dev_domain, bit);
+ if (virq)
+ generic_handle_irq(virq);
+ else
+ dev_err_ratelimited(dev, "bad MSI IRQ %d\n",
+ bit);
+ }
+ }
+}
+
+static void mc_msi_bottom_irq_ack(struct irq_data *data)
+{
+ struct mc_port *port = irq_data_get_irq_chip_data(data);
+ void __iomem *bridge_base_addr =
+ port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ u32 bitpos = data->hwirq;
+ unsigned long status;
+
+ writel_relaxed(BIT(bitpos), bridge_base_addr + ISTATUS_MSI);
+ status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
+ if (!status)
+ writel_relaxed(BIT(PM_MSI_INT_MSI_SHIFT),
+ bridge_base_addr + ISTATUS_LOCAL);
+}
+
+static void mc_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+{
+ struct mc_port *port = irq_data_get_irq_chip_data(data);
+ phys_addr_t addr = port->msi.vector_phy;
+
+ msg->address_lo = lower_32_bits(addr);
+ msg->address_hi = upper_32_bits(addr);
+ msg->data = data->hwirq;
+
+ dev_dbg(port->dev, "msi#%x address_hi %#x address_lo %#x\n",
+ (int)data->hwirq, msg->address_hi, msg->address_lo);
+}
+
+static int mc_msi_set_affinity(struct irq_data *irq_data,
+ const struct cpumask *mask, bool force)
+{
+ return -EINVAL;
+}
+
+static struct irq_chip mc_msi_bottom_irq_chip = {
+ .name = "Microchip MSI",
+ .irq_ack = mc_msi_bottom_irq_ack,
+ .irq_compose_msi_msg = mc_compose_msi_msg,
+ .irq_set_affinity = mc_msi_set_affinity,
+};
+
+static int mc_irq_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
+ unsigned int nr_irqs, void *args)
+{
+ struct mc_port *port = domain->host_data;
+ struct mc_msi *msi = &port->msi;
+ void __iomem *bridge_base_addr =
+ port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ unsigned long bit;
+ u32 val;
+
+ mutex_lock(&msi->lock);
+ bit = find_first_zero_bit(msi->used, msi->num_vectors);
+ if (bit >= msi->num_vectors) {
+ mutex_unlock(&msi->lock);
+ return -ENOSPC;
+ }
+
+ set_bit(bit, msi->used);
+
+ irq_domain_set_info(domain, virq, bit, &mc_msi_bottom_irq_chip,
+ domain->host_data, handle_edge_irq, NULL, NULL);
+
+ /* Enable MSI interrupts */
+ val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
+ val |= PM_MSI_INT_MSI_MASK;
+ writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
+
+ mutex_unlock(&msi->lock);
+
+ return 0;
+}
+
+static void mc_irq_msi_domain_free(struct irq_domain *domain, unsigned int virq,
+ unsigned int nr_irqs)
+{
+ struct irq_data *d = irq_domain_get_irq_data(domain, virq);
+ struct mc_port *port = irq_data_get_irq_chip_data(d);
+ struct mc_msi *msi = &port->msi;
+
+ mutex_lock(&msi->lock);
+
+ if (test_bit(d->hwirq, msi->used))
+ __clear_bit(d->hwirq, msi->used);
+ else
+ dev_err(port->dev, "trying to free unused MSI%lu\n", d->hwirq);
+
+ mutex_unlock(&msi->lock);
+}
+
+static const struct irq_domain_ops msi_domain_ops = {
+ .alloc = mc_irq_msi_domain_alloc,
+ .free = mc_irq_msi_domain_free,
+};
+
+static struct irq_chip mc_msi_irq_chip = {
+ .name = "Microchip PCIe MSI",
+ .irq_ack = irq_chip_ack_parent,
+ .irq_mask = pci_msi_mask_irq,
+ .irq_unmask = pci_msi_unmask_irq,
+};
+
+static struct msi_domain_info mc_msi_domain_info = {
+ .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+ MSI_FLAG_PCI_MSIX),
+ .chip = &mc_msi_irq_chip,
+};
+
+static int mc_allocate_msi_domains(struct mc_port *port)
+{
+ struct device *dev = port->dev;
+ struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
+ struct mc_msi *msi = &port->msi;
+
+ mutex_init(&port->msi.lock);
+
+ msi->dev_domain = irq_domain_add_linear(NULL, msi->num_vectors,
+ &msi_domain_ops, port);
+ if (!msi->dev_domain) {
+ dev_err(dev, "failed to create IRQ domain\n");
+ return -ENOMEM;
+ }
+
+ msi->msi_domain = pci_msi_create_irq_domain(fwnode, &mc_msi_domain_info,
+ msi->dev_domain);
+ if (!msi->msi_domain) {
+ dev_err(dev, "failed to create MSI domain\n");
+ irq_domain_remove(msi->dev_domain);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void mc_handle_intx(struct irq_desc *desc)
+{
+ struct mc_port *port = irq_desc_get_handler_data(desc);
+ struct device *dev = port->dev;
+ void __iomem *bridge_base_addr =
+ port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ unsigned long status;
+ u32 bit;
+ u32 virq;
+
+ status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
+ if (status & PM_MSI_INT_INTX_MASK) {
+ status &= PM_MSI_INT_INTX_MASK;
+ status >>= PM_MSI_INT_INTX_SHIFT;
+ for_each_set_bit(bit, &status, PCI_NUM_INTX) {
+ virq = irq_find_mapping(port->intx_domain, bit);
+ if (virq)
+ generic_handle_irq(virq);
+ else
+ dev_err_ratelimited(dev, "bad INTx IRQ %d\n",
+ bit);
+ }
+ }
+}
+
+static void mc_ack_intx_irq(struct irq_data *data)
+{
+ struct mc_port *port = irq_data_get_irq_chip_data(data);
+ void __iomem *bridge_base_addr =
+ port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
+
+ writel_relaxed(mask, bridge_base_addr + ISTATUS_LOCAL);
+}
+
+static void mc_mask_intx_irq(struct irq_data *data)
+{
+ struct mc_port *port = irq_data_get_irq_chip_data(data);
+ void __iomem *bridge_base_addr =
+ port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ unsigned long flags;
+ u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
+ u32 val;
+
+ raw_spin_lock_irqsave(&port->lock, flags);
+ val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
+ val &= ~mask;
+ writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
+ raw_spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static void mc_unmask_intx_irq(struct irq_data *data)
+{
+ struct mc_port *port = irq_data_get_irq_chip_data(data);
+ void __iomem *bridge_base_addr =
+ port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ unsigned long flags;
+ u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
+ u32 val;
+
+ raw_spin_lock_irqsave(&port->lock, flags);
+ val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
+ val |= mask;
+ writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
+ raw_spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static struct irq_chip mc_intx_irq_chip = {
+ .name = "Microchip PCIe INTx",
+ .irq_ack = mc_ack_intx_irq,
+ .irq_mask = mc_mask_intx_irq,
+ .irq_unmask = mc_unmask_intx_irq,
+};
+
+static int mc_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
+ irq_hw_number_t hwirq)
+{
+ irq_set_chip_and_handler(irq, &mc_intx_irq_chip, handle_level_irq);
+ irq_set_chip_data(irq, domain->host_data);
+
+ return 0;
+}
+
+static const struct irq_domain_ops intx_domain_ops = {
+ .map = mc_pcie_intx_map,
+};
+
+static inline u32 reg_to_event(u32 reg, struct event_map field)
+{
+ return (reg & field.reg_mask) ? BIT(field.event_bit) : 0;
+}
+
+static u32 pcie_events(void __iomem *addr)
+{
+ u32 reg = readl_relaxed(addr);
+ u32 val = 0;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(pcie_event_to_event); i++)
+ val |= reg_to_event(reg, pcie_event_to_event[i]);
+
+ return val;
+}
+
+static u32 sec_errors(void __iomem *addr)
+{
+ u32 reg = readl_relaxed(addr);
+ u32 val = 0;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(sec_error_to_event); i++)
+ val |= reg_to_event(reg, sec_error_to_event[i]);
+
+ return val;
+}
+
+static u32 ded_errors(void __iomem *addr)
+{
+ u32 reg = readl_relaxed(addr);
+ u32 val = 0;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(ded_error_to_event); i++)
+ val |= reg_to_event(reg, ded_error_to_event[i]);
+
+ return val;
+}
+
+static u32 local_events(void __iomem *addr)
+{
+ u32 reg = readl_relaxed(addr);
+ u32 val = 0;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(local_status_to_event); i++)
+ val |= reg_to_event(reg, local_status_to_event[i]);
+
+ return val;
+}
+
+static u32 get_events(struct mc_port *port)
+{
+ void __iomem *bridge_base_addr =
+ port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+ u32 events = 0;
+
+ events |= pcie_events(ctrl_base_addr + PCIE_EVENT_INT);
+ events |= sec_errors(ctrl_base_addr + SEC_ERROR_INT);
+ events |= ded_errors(ctrl_base_addr + DED_ERROR_INT);
+ events |= local_events(bridge_base_addr + ISTATUS_LOCAL);
+
+ return events;
+}
+
+static irqreturn_t mc_event_handler(int irq, void *dev_id)
+{
+ struct mc_port *port = dev_id;
+ struct device *dev = port->dev;
+ struct irq_data *data;
+
+ data = irq_domain_get_irq_data(port->event_domain, irq);
+
+ if (event_cause[data->hwirq].str)
+ dev_err_ratelimited(dev, "%s\n", event_cause[data->hwirq].str);
+ else
+ dev_err_ratelimited(dev, "bad event IRQ %ld\n", data->hwirq);
+
+ return IRQ_HANDLED;
+}
+
+static void mc_handle_event(struct irq_desc *desc)
+{
+ struct mc_port *port = irq_desc_get_handler_data(desc);
+ unsigned long events;
+ u32 bit;
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+
+ chained_irq_enter(chip, desc);
+
+ events = get_events(port);
+
+ for_each_set_bit(bit, &events, NUM_EVENTS)
+ generic_handle_irq(irq_find_mapping(port->event_domain, bit));
+
+ chained_irq_exit(chip, desc);
+}
+
+static void mc_ack_event_irq(struct irq_data *data)
+{
+ struct mc_port *port = irq_data_get_irq_chip_data(data);
+ u32 event = data->hwirq;
+ void __iomem *addr;
+ u32 mask;
+
+ addr = port->axi_base_addr + event_descs[event].base +
+ event_descs[event].offset;
+ mask = event_descs[event].mask;
+ mask |= event_descs[event].enb_mask;
+
+ writel_relaxed(mask, addr);
+}
+
+static void mc_mask_event_irq(struct irq_data *data)
+{
+ struct mc_port *port = irq_data_get_irq_chip_data(data);
+ u32 event = data->hwirq;
+ void __iomem *addr;
+ u32 mask;
+ u32 val;
+
+ addr = port->axi_base_addr + event_descs[event].base +
+ event_descs[event].mask_offset;
+ mask = event_descs[event].mask;
+ if (event_descs[event].enb_mask) {
+ mask <<= PCIE_EVENT_INT_ENB_SHIFT;
+ mask &= PCIE_EVENT_INT_ENB_MASK;
+ }
+
+ if (!event_descs[event].mask_high)
+ mask = ~mask;
+
+ raw_spin_lock(&port->lock);
+ val = readl_relaxed(addr);
+ if (event_descs[event].mask_high)
+ val |= mask;
+ else
+ val &= mask;
+
+ writel_relaxed(val, addr);
+ raw_spin_unlock(&port->lock);
+}
+
+static void mc_unmask_event_irq(struct irq_data *data)
+{
+ struct mc_port *port = irq_data_get_irq_chip_data(data);
+ u32 event = data->hwirq;
+ void __iomem *addr;
+ u32 mask;
+ u32 val;
+
+ addr = port->axi_base_addr + event_descs[event].base +
+ event_descs[event].mask_offset;
+ mask = event_descs[event].mask;
+
+ if (event_descs[event].enb_mask)
+ mask <<= PCIE_EVENT_INT_ENB_SHIFT;
+
+ if (event_descs[event].mask_high)
+ mask = ~mask;
+
+ if (event_descs[event].enb_mask)
+ mask &= PCIE_EVENT_INT_ENB_MASK;
+
+ raw_spin_lock(&port->lock);
+ val = readl_relaxed(addr);
+ if (event_descs[event].mask_high)
+ val &= mask;
+ else
+ val |= mask;
+ writel_relaxed(val, addr);
+ raw_spin_unlock(&port->lock);
+}
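[Editor's note] The mask_offset/mask_high pair in event_descs (below) encodes two opposite register conventions, which is why mc_mask_event_irq()/mc_unmask_event_irq() conditionally invert the mask. A worked example of each, following the descriptors and defines above:

    /* PCIE_EVENT(L2_EXIT): offset == mask_offset == PCIE_EVENT_INT and
     * mask_high = 1. The per-event mask bits live in the high half
     * (bits 18:16), so masking shifts the mask by
     * PCIE_EVENT_INT_ENB_SHIFT and ORs it in: a set bit masks the event.
     *
     * LOCAL_EVENT(PM_MSI_INT_MSI): mask_offset == IMASK_LOCAL and
     * mask_high = 0. IMASK_LOCAL bits are enables, so masking inverts
     * the mask and ANDs it in: a cleared bit masks the event.
     */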
+
+static struct irq_chip mc_event_irq_chip = {
+ .name = "Microchip PCIe EVENT",
+ .irq_ack = mc_ack_event_irq,
+ .irq_mask = mc_mask_event_irq,
+ .irq_unmask = mc_unmask_event_irq,
+};
+
+static int mc_pcie_event_map(struct irq_domain *domain, unsigned int irq,
+ irq_hw_number_t hwirq)
+{
+ irq_set_chip_and_handler(irq, &mc_event_irq_chip, handle_level_irq);
+ irq_set_chip_data(irq, domain->host_data);
+
+ return 0;
+}
+
+static const struct irq_domain_ops event_domain_ops = {
+ .map = mc_pcie_event_map,
+};
+
+static inline struct clk *mc_pcie_init_clk(struct device *dev, const char *id)
+{
+ struct clk *clk;
+ int ret;
+
+ clk = devm_clk_get_optional(dev, id);
+ if (IS_ERR(clk))
+ return clk;
+ if (!clk)
+ return clk;
+
+ ret = clk_prepare_enable(clk);
+ if (ret)
+ return ERR_PTR(ret);
+
+ devm_add_action_or_reset(dev, (void (*) (void *))clk_disable_unprepare,
+ clk);
+
+ return clk;
+}
+
+static int mc_pcie_init_clks(struct device *dev)
+{
+ int i;
+ struct clk *fic;
+
+ /*
+ * PCIe may be clocked via Fabric Interface using between 1 and 4
+ * clocks. Scan DT for clocks and enable them if present.
+ */
+ for (i = 0; i < ARRAY_SIZE(poss_clks); i++) {
+ fic = mc_pcie_init_clk(dev, poss_clks[i]);
+ if (IS_ERR(fic))
+ return PTR_ERR(fic);
+ }
+
+ return 0;
+}
+
+static int mc_pcie_init_irq_domains(struct mc_port *port)
+{
+ struct device *dev = port->dev;
+ struct device_node *node = dev->of_node;
+ struct device_node *pcie_intc_node;
+
+ /* Setup INTx */
+ pcie_intc_node = of_get_next_child(node, NULL);
+ if (!pcie_intc_node) {
+ dev_err(dev, "failed to find PCIe Intc node\n");
+ return -EINVAL;
+ }
+
+ port->event_domain = irq_domain_add_linear(pcie_intc_node, NUM_EVENTS,
+ &event_domain_ops, port);
+ if (!port->event_domain) {
+ dev_err(dev, "failed to get event domain\n");
+ return -ENOMEM;
+ }
+
+ irq_domain_update_bus_token(port->event_domain, DOMAIN_BUS_NEXUS);
+
+ port->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
+ &intx_domain_ops, port);
+ if (!port->intx_domain) {
+ dev_err(dev, "failed to get an INTx IRQ domain\n");
+ return -ENOMEM;
+ }
+
+ irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);
+
+ of_node_put(pcie_intc_node);
+ raw_spin_lock_init(&port->lock);
+
+ return mc_allocate_msi_domains(port);
+}
+
+static void mc_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
+ phys_addr_t axi_addr, phys_addr_t pci_addr,
+ size_t size)
+{
+ u32 atr_sz = ilog2(size) - 1;
+ u32 val;
+
+ if (index == 0)
+ val = PCIE_CONFIG_INTERFACE;
+ else
+ val = PCIE_TX_RX_INTERFACE;
+
+ writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+ ATR0_AXI4_SLV0_TRSL_PARAM);
+
+ val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) |
+ ATR_IMPL_ENABLE;
+ writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+ ATR0_AXI4_SLV0_SRCADDR_PARAM);
+
+ val = upper_32_bits(axi_addr);
+ writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+ ATR0_AXI4_SLV0_SRC_ADDR);
+
+ val = lower_32_bits(pci_addr);
+ writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+ ATR0_AXI4_SLV0_TRSL_ADDR_LSB);
+
+ val = upper_32_bits(pci_addr);
+ writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+ ATR0_AXI4_SLV0_TRSL_ADDR_UDW);
+
+ val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+ val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT);
+ writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+ writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR);
+}
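[Editor's note] A worked example of the translation math above, with hypothetical values: for a 256 MiB window, atr_sz = ilog2(SZ_256M) - 1 = 27, and the low source-address register packs the address with the size code (shifted left by ATR_SIZE_SHIFT) and the enable bit:

    /* Hypothetical 1:1 window: 256 MiB of AXI space at 0x60000000 */
    mc_pcie_setup_window(bridge_base_addr, 1, 0x60000000, 0x60000000,
                         SZ_256M);
    /* writes 0x60000000 | (27 << 1) | 1, i.e. 0x60000037, to
     * ATR0_AXI4_SLV0_SRCADDR_PARAM
     */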
+
+static int mc_pcie_setup_windows(struct platform_device *pdev,
+ struct mc_port *port)
+{
+ void __iomem *bridge_base_addr =
+ port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
+ struct resource_entry *entry;
+ u64 pci_addr;
+ u32 index = 1;
+
+ resource_list_for_each_entry(entry, &bridge->windows) {
+ if (resource_type(entry->res) == IORESOURCE_MEM) {
+ pci_addr = entry->res->start - entry->offset;
+ mc_pcie_setup_window(bridge_base_addr, index,
+ entry->res->start, pci_addr,
+ resource_size(entry->res));
+ index++;
+ }
+ }
+
+ return 0;
+}
+
+static int mc_platform_init(struct pci_config_window *cfg)
+{
+ struct device *dev = cfg->parent;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct mc_port *port;
+ void __iomem *bridge_base_addr;
+ void __iomem *ctrl_base_addr;
+ int ret;
+ int irq;
+ int i, intx_irq, msi_irq, event_irq;
+ u32 val;
+ int err;
+
+ port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
+ if (!port)
+ return -ENOMEM;
+ port->dev = dev;
+
+ ret = mc_pcie_init_clks(dev);
+ if (ret) {
+ dev_err(dev, "failed to get clock resources, error %d\n", ret);
+ return -ENODEV;
+ }
+
+ port->axi_base_addr = devm_platform_ioremap_resource(pdev, 1);
+ if (IS_ERR(port->axi_base_addr))
+ return PTR_ERR(port->axi_base_addr);
+
+ bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+
+ port->msi.vector_phy = MSI_ADDR;
+ port->msi.num_vectors = MC_NUM_MSI_IRQS;
+ ret = mc_pcie_init_irq_domains(port);
+ if (ret) {
+ dev_err(dev, "failed creating IRQ domains\n");
+ return ret;
+ }
+
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0) {
+ dev_err(dev, "unable to request IRQ%d\n", irq);
+ return -ENODEV;
+ }
+
+ for (i = 0; i < NUM_EVENTS; i++) {
+ event_irq = irq_create_mapping(port->event_domain, i);
+ if (!event_irq) {
+ dev_err(dev, "failed to map hwirq %d\n", i);
+ return -ENXIO;
+ }
+
+ err = devm_request_irq(dev, event_irq, mc_event_handler,
+ 0, event_cause[i].sym, port);
+ if (err) {
+ dev_err(dev, "failed to request IRQ %d\n", event_irq);
+ return err;
+ }
+ }
+
+ intx_irq = irq_create_mapping(port->event_domain,
+ EVENT_LOCAL_PM_MSI_INT_INTX);
+ if (!intx_irq) {
+ dev_err(dev, "failed to map INTx interrupt\n");
+ return -ENXIO;
+ }
+
+ /* Plug the INTx chained handler */
+ irq_set_chained_handler_and_data(intx_irq, mc_handle_intx, port);
+
+ msi_irq = irq_create_mapping(port->event_domain,
+ EVENT_LOCAL_PM_MSI_INT_MSI);
+ if (!msi_irq)
+ return -ENXIO;
+
+ /* Plug the MSI chained handler */
+ irq_set_chained_handler_and_data(msi_irq, mc_handle_msi, port);
+
+ /* Plug the main event chained handler */
+ irq_set_chained_handler_and_data(irq, mc_handle_event, port);
+
+ /* Hardware doesn't set up MSI by default */
+ mc_pcie_enable_msi(port, cfg->win);
+
+ val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
+ val |= PM_MSI_INT_INTX_MASK;
+ writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
+
+ writel_relaxed(val, ctrl_base_addr + ECC_CONTROL);
+
+ val = PCIE_EVENT_INT_L2_EXIT_INT |
+ PCIE_EVENT_INT_HOTRST_EXIT_INT |
+ PCIE_EVENT_INT_DLUP_EXIT_INT;
+ writel_relaxed(val, ctrl_base_addr + PCIE_EVENT_INT);
+
+ val = SEC_ERROR_INT_TX_RAM_SEC_ERR_INT |
+ SEC_ERROR_INT_RX_RAM_SEC_ERR_INT |
+ SEC_ERROR_INT_PCIE2AXI_RAM_SEC_ERR_INT |
+ SEC_ERROR_INT_AXI2PCIE_RAM_SEC_ERR_INT;
+ writel_relaxed(val, ctrl_base_addr + SEC_ERROR_INT);
+ writel_relaxed(0, ctrl_base_addr + SEC_ERROR_INT_MASK);
+ writel_relaxed(0, ctrl_base_addr + SEC_ERROR_CNT);
+
+ val = DED_ERROR_INT_TX_RAM_DED_ERR_INT |
+ DED_ERROR_INT_RX_RAM_DED_ERR_INT |
+ DED_ERROR_INT_PCIE2AXI_RAM_DED_ERR_INT |
+ DED_ERROR_INT_AXI2PCIE_RAM_DED_ERR_INT;
+ writel_relaxed(val, ctrl_base_addr + DED_ERROR_INT);
+ writel_relaxed(0, ctrl_base_addr + DED_ERROR_INT_MASK);
+ writel_relaxed(0, ctrl_base_addr + DED_ERROR_CNT);
+
+ writel_relaxed(0, bridge_base_addr + IMASK_HOST);
+ writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_HOST);
+
+ /* Configure Address Translation Table 0 for PCIe config space */
+ mc_pcie_setup_window(bridge_base_addr, 0, cfg->res.start & 0xffffffff,
+ cfg->res.start, resource_size(&cfg->res));
+
+ return mc_pcie_setup_windows(pdev, port);
+}
+
+static const struct pci_ecam_ops mc_ecam_ops = {
+ .init = mc_platform_init,
+ .pci_ops = {
+ .map_bus = pci_ecam_map_bus,
+ .read = pci_generic_config_read,
+ .write = pci_generic_config_write,
+ }
+};
+
+static const struct of_device_id mc_pcie_of_match[] = {
+ {
+ .compatible = "microchip,pcie-host-1.0",
+ .data = &mc_ecam_ops,
+ },
+ {},
+};
+
+MODULE_DEVICE_TABLE(of, mc_pcie_of_match);
+
+static struct platform_driver mc_pcie_driver = {
+ .probe = pci_host_common_probe,
+ .driver = {
+ .name = "microchip-pcie",
+ .of_match_table = mc_pcie_of_match,
+ .suppress_bind_attrs = true,
+ },
+};
+
+builtin_platform_driver(mc_pcie_driver);
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Microchip PCIe host controller driver");
+MODULE_AUTHOR("Daire McNamara <daire.mcnamara@microchip.com>");
diff --git a/drivers/pci/controller/pcie-rcar-host.c b/drivers/pci/controller/pcie-rcar-host.c
index 4d1c4b24e537..a728e8f9ad3c 100644
--- a/drivers/pci/controller/pcie-rcar-host.c
+++ b/drivers/pci/controller/pcie-rcar-host.c
@@ -735,7 +735,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)
}
/* setup MSI data target */
- msi->pages = __get_free_pages(GFP_KERNEL, 0);
+ msi->pages = __get_free_pages(GFP_KERNEL | GFP_DMA32, 0);
rcar_pcie_hw_enable_msi(host);
return 0;
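[Editor's note] The physical address of msi->pages is later programmed as the MSI target, and GFP_KERNEL alone can return memory above 4 GiB on 64-bit systems, out of reach of devices issuing 32-bit MSI writes; GFP_DMA32 pins the allocation below 4 GiB. A hedged sketch, with a check that is illustrative rather than from the driver:

    unsigned long pages = __get_free_pages(GFP_KERNEL | GFP_DMA32, 0);
    /* GFP_DMA32 guarantees a sub-4 GiB physical address: */
    WARN_ON(upper_32_bits(virt_to_phys((void *)pages)));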
diff --git a/drivers/pci/controller/pcie-rockchip.c b/drivers/pci/controller/pcie-rockchip.c
index 904dec0d3a88..990a00e08bc5 100644
--- a/drivers/pci/controller/pcie-rockchip.c
+++ b/drivers/pci/controller/pcie-rockchip.c
@@ -82,7 +82,7 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
}
rockchip->mgmt_sticky_rst = devm_reset_control_get_exclusive(dev,
- "mgmt-sticky");
+ "mgmt-sticky");
if (IS_ERR(rockchip->mgmt_sticky_rst)) {
if (PTR_ERR(rockchip->mgmt_sticky_rst) != -EPROBE_DEFER)
dev_err(dev, "missing mgmt-sticky reset property in node\n");
@@ -118,11 +118,11 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
}
if (rockchip->is_rc) {
- rockchip->ep_gpio = devm_gpiod_get(dev, "ep", GPIOD_OUT_HIGH);
- if (IS_ERR(rockchip->ep_gpio)) {
- dev_err(dev, "missing ep-gpios property in node\n");
- return PTR_ERR(rockchip->ep_gpio);
- }
+ rockchip->ep_gpio = devm_gpiod_get_optional(dev, "ep",
+ GPIOD_OUT_HIGH);
+ if (IS_ERR(rockchip->ep_gpio))
+ return dev_err_probe(dev, PTR_ERR(rockchip->ep_gpio),
+ "failed to get ep GPIO\n");
}
rockchip->aclk_pcie = devm_clk_get(dev, "aclk");
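[Editor's note] Two idioms meet in the hunk above: devm_gpiod_get_optional() returns NULL, not an error, when the 'ep' GPIO is absent, making the DT property truly optional, and dev_err_probe() logs and returns the error in one call while staying quiet for -EPROBE_DEFER. A sketch of the combined pattern; the dev_dbg() line is illustrative, not the driver's:

    gpiod = devm_gpiod_get_optional(dev, "ep", GPIOD_OUT_HIGH);
    if (IS_ERR(gpiod))
        return dev_err_probe(dev, PTR_ERR(gpiod),
                             "failed to get ep GPIO\n");
    if (!gpiod)
        dev_dbg(dev, "no ep GPIO, skipping PERST# control\n");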
diff --git a/drivers/pci/controller/pcie-tango.c b/drivers/pci/controller/pcie-tango.c
deleted file mode 100644
index 62a061f1d62e..000000000000
--- a/drivers/pci/controller/pcie-tango.c
+++ /dev/null
@@ -1,341 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <linux/irqchip/chained_irq.h>
-#include <linux/irqdomain.h>
-#include <linux/pci-ecam.h>
-#include <linux/delay.h>
-#include <linux/msi.h>
-#include <linux/of_address.h>
-
-#define MSI_MAX 256
-
-#define SMP8759_MUX 0x48
-#define SMP8759_TEST_OUT 0x74
-#define SMP8759_DOORBELL 0x7c
-#define SMP8759_STATUS 0x80
-#define SMP8759_ENABLE 0xa0
-
-struct tango_pcie {
- DECLARE_BITMAP(used_msi, MSI_MAX);
- u64 msi_doorbell;
- spinlock_t used_msi_lock;
- void __iomem *base;
- struct irq_domain *dom;
-};
-
-static void tango_msi_isr(struct irq_desc *desc)
-{
- struct irq_chip *chip = irq_desc_get_chip(desc);
- struct tango_pcie *pcie = irq_desc_get_handler_data(desc);
- unsigned long status, base, virq, idx, pos = 0;
-
- chained_irq_enter(chip, desc);
- spin_lock(&pcie->used_msi_lock);
-
- while ((pos = find_next_bit(pcie->used_msi, MSI_MAX, pos)) < MSI_MAX) {
- base = round_down(pos, 32);
- status = readl_relaxed(pcie->base + SMP8759_STATUS + base / 8);
- for_each_set_bit(idx, &status, 32) {
- virq = irq_find_mapping(pcie->dom, base + idx);
- generic_handle_irq(virq);
- }
- pos = base + 32;
- }
-
- spin_unlock(&pcie->used_msi_lock);
- chained_irq_exit(chip, desc);
-}
-
-static void tango_ack(struct irq_data *d)
-{
- struct tango_pcie *pcie = d->chip_data;
- u32 offset = (d->hwirq / 32) * 4;
- u32 bit = BIT(d->hwirq % 32);
-
- writel_relaxed(bit, pcie->base + SMP8759_STATUS + offset);
-}
-
-static void update_msi_enable(struct irq_data *d, bool unmask)
-{
- unsigned long flags;
- struct tango_pcie *pcie = d->chip_data;
- u32 offset = (d->hwirq / 32) * 4;
- u32 bit = BIT(d->hwirq % 32);
- u32 val;
-
- spin_lock_irqsave(&pcie->used_msi_lock, flags);
- val = readl_relaxed(pcie->base + SMP8759_ENABLE + offset);
- val = unmask ? val | bit : val & ~bit;
- writel_relaxed(val, pcie->base + SMP8759_ENABLE + offset);
- spin_unlock_irqrestore(&pcie->used_msi_lock, flags);
-}
-
-static void tango_mask(struct irq_data *d)
-{
- update_msi_enable(d, false);
-}
-
-static void tango_unmask(struct irq_data *d)
-{
- update_msi_enable(d, true);
-}
-
-static int tango_set_affinity(struct irq_data *d, const struct cpumask *mask,
- bool force)
-{
- return -EINVAL;
-}
-
-static void tango_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
-{
- struct tango_pcie *pcie = d->chip_data;
- msg->address_lo = lower_32_bits(pcie->msi_doorbell);
- msg->address_hi = upper_32_bits(pcie->msi_doorbell);
- msg->data = d->hwirq;
-}
-
-static struct irq_chip tango_chip = {
- .irq_ack = tango_ack,
- .irq_mask = tango_mask,
- .irq_unmask = tango_unmask,
- .irq_set_affinity = tango_set_affinity,
- .irq_compose_msi_msg = tango_compose_msi_msg,
-};
-
-static void msi_ack(struct irq_data *d)
-{
- irq_chip_ack_parent(d);
-}
-
-static void msi_mask(struct irq_data *d)
-{
- pci_msi_mask_irq(d);
- irq_chip_mask_parent(d);
-}
-
-static void msi_unmask(struct irq_data *d)
-{
- pci_msi_unmask_irq(d);
- irq_chip_unmask_parent(d);
-}
-
-static struct irq_chip msi_chip = {
- .name = "MSI",
- .irq_ack = msi_ack,
- .irq_mask = msi_mask,
- .irq_unmask = msi_unmask,
-};
-
-static struct msi_domain_info msi_dom_info = {
- .flags = MSI_FLAG_PCI_MSIX
- | MSI_FLAG_USE_DEF_DOM_OPS
- | MSI_FLAG_USE_DEF_CHIP_OPS,
- .chip = &msi_chip,
-};
-
-static int tango_irq_domain_alloc(struct irq_domain *dom, unsigned int virq,
- unsigned int nr_irqs, void *args)
-{
- struct tango_pcie *pcie = dom->host_data;
- unsigned long flags;
- int pos;
-
- spin_lock_irqsave(&pcie->used_msi_lock, flags);
- pos = find_first_zero_bit(pcie->used_msi, MSI_MAX);
- if (pos >= MSI_MAX) {
- spin_unlock_irqrestore(&pcie->used_msi_lock, flags);
- return -ENOSPC;
- }
- __set_bit(pos, pcie->used_msi);
- spin_unlock_irqrestore(&pcie->used_msi_lock, flags);
- irq_domain_set_info(dom, virq, pos, &tango_chip,
- pcie, handle_edge_irq, NULL, NULL);
-
- return 0;
-}
-
-static void tango_irq_domain_free(struct irq_domain *dom, unsigned int virq,
- unsigned int nr_irqs)
-{
- unsigned long flags;
- struct irq_data *d = irq_domain_get_irq_data(dom, virq);
- struct tango_pcie *pcie = d->chip_data;
-
- spin_lock_irqsave(&pcie->used_msi_lock, flags);
- __clear_bit(d->hwirq, pcie->used_msi);
- spin_unlock_irqrestore(&pcie->used_msi_lock, flags);
-}
-
-static const struct irq_domain_ops dom_ops = {
- .alloc = tango_irq_domain_alloc,
- .free = tango_irq_domain_free,
-};
-
-static int smp8759_config_read(struct pci_bus *bus, unsigned int devfn,
- int where, int size, u32 *val)
-{
- struct pci_config_window *cfg = bus->sysdata;
- struct tango_pcie *pcie = dev_get_drvdata(cfg->parent);
- int ret;
-
- /* Reads in configuration space outside devfn 0 return garbage */
- if (devfn != 0)
- return PCIBIOS_FUNC_NOT_SUPPORTED;
-
- /*
- * PCI config and MMIO accesses are muxed. Linux doesn't have a
- * mutual exclusion mechanism for config vs. MMIO accesses, so
- * concurrent accesses may cause corruption.
- */
- writel_relaxed(1, pcie->base + SMP8759_MUX);
- ret = pci_generic_config_read(bus, devfn, where, size, val);
- writel_relaxed(0, pcie->base + SMP8759_MUX);
-
- return ret;
-}
-
-static int smp8759_config_write(struct pci_bus *bus, unsigned int devfn,
- int where, int size, u32 val)
-{
- struct pci_config_window *cfg = bus->sysdata;
- struct tango_pcie *pcie = dev_get_drvdata(cfg->parent);
- int ret;
-
- writel_relaxed(1, pcie->base + SMP8759_MUX);
- ret = pci_generic_config_write(bus, devfn, where, size, val);
- writel_relaxed(0, pcie->base + SMP8759_MUX);
-
- return ret;
-}
-
-static const struct pci_ecam_ops smp8759_ecam_ops = {
- .pci_ops = {
- .map_bus = pci_ecam_map_bus,
- .read = smp8759_config_read,
- .write = smp8759_config_write,
- }
-};
-
-static int tango_pcie_link_up(struct tango_pcie *pcie)
-{
- void __iomem *test_out = pcie->base + SMP8759_TEST_OUT;
- int i;
-
- writel_relaxed(16, test_out);
- for (i = 0; i < 10; ++i) {
- u32 ltssm_state = readl_relaxed(test_out) >> 8;
- if ((ltssm_state & 0x1f) == 0xf) /* L0 */
- return 1;
- usleep_range(3000, 4000);
- }
-
- return 0;
-}
-
-static int tango_pcie_probe(struct platform_device *pdev)
-{
- struct device *dev = &pdev->dev;
- struct tango_pcie *pcie;
- struct resource *res;
- struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
- struct irq_domain *msi_dom, *irq_dom;
- struct of_pci_range_parser parser;
- struct of_pci_range range;
- int virq, offset;
-
- dev_warn(dev, "simultaneous PCI config and MMIO accesses may cause data corruption\n");
- add_taint(TAINT_CRAP, LOCKDEP_STILL_OK);
-
- pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
- if (!pcie)
- return -ENOMEM;
-
- res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
- pcie->base = devm_ioremap_resource(dev, res);
- if (IS_ERR(pcie->base))
- return PTR_ERR(pcie->base);
-
- platform_set_drvdata(pdev, pcie);
-
- if (!tango_pcie_link_up(pcie))
- return -ENODEV;
-
- if (of_pci_dma_range_parser_init(&parser, dev->of_node) < 0)
- return -ENOENT;
-
- if (of_pci_range_parser_one(&parser, &range) == NULL)
- return -ENOENT;
-
- range.pci_addr += range.size;
- pcie->msi_doorbell = range.pci_addr + res->start + SMP8759_DOORBELL;
-
- for (offset = 0; offset < MSI_MAX / 8; offset += 4)
- writel_relaxed(0, pcie->base + SMP8759_ENABLE + offset);
-
- virq = platform_get_irq(pdev, 1);
- if (virq < 0)
- return virq;
-
- irq_dom = irq_domain_create_linear(fwnode, MSI_MAX, &dom_ops, pcie);
- if (!irq_dom) {
- dev_err(dev, "Failed to create IRQ domain\n");
- return -ENOMEM;
- }
-
- msi_dom = pci_msi_create_irq_domain(fwnode, &msi_dom_info, irq_dom);
- if (!msi_dom) {
- dev_err(dev, "Failed to create MSI domain\n");
- irq_domain_remove(irq_dom);
- return -ENOMEM;
- }
-
- pcie->dom = irq_dom;
- spin_lock_init(&pcie->used_msi_lock);
- irq_set_chained_handler_and_data(virq, tango_msi_isr, pcie);
-
- return pci_host_common_probe(pdev);
-}
-
-static const struct of_device_id tango_pcie_ids[] = {
- {
- .compatible = "sigma,smp8759-pcie",
- .data = &smp8759_ecam_ops,
- },
- { },
-};
-
-static struct platform_driver tango_pcie_driver = {
- .probe = tango_pcie_probe,
- .driver = {
- .name = KBUILD_MODNAME,
- .of_match_table = tango_pcie_ids,
- .suppress_bind_attrs = true,
- },
-};
-builtin_platform_driver(tango_pcie_driver);
-
-/*
- * The root complex advertises the wrong device class.
- * Header Type 1 is for PCI-to-PCI bridges.
- */
-static void tango_fixup_class(struct pci_dev *dev)
-{
- dev->class = PCI_CLASS_BRIDGE_PCI << 8;
-}
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0024, tango_fixup_class);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0028, tango_fixup_class);
-
-/*
- * The root complex exposes a "fake" BAR, which is used to filter
- * bus-to-system accesses. Only accesses within the range defined by this
- * BAR are forwarded to the host, others are ignored.
- *
- * By default, the DMA framework expects an identity mapping, and DRAM0 is
- * mapped at 0x80000000.
- */
-static void tango_fixup_bar(struct pci_dev *dev)
-{
- dev->non_compliant_bars = true;
- pci_write_config_dword(dev, PCI_BASE_ADDRESS_0, 0x80000000);
-}
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0024, tango_fixup_bar);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0028, tango_fixup_bar);
diff --git a/drivers/pci/controller/pcie-xilinx-cpm.c b/drivers/pci/controller/pcie-xilinx-cpm.c
index f92e0152e65e..67937facd90c 100644
--- a/drivers/pci/controller/pcie-xilinx-cpm.c
+++ b/drivers/pci/controller/pcie-xilinx-cpm.c
@@ -404,6 +404,7 @@ static int xilinx_cpm_pcie_init_irq_domain(struct xilinx_cpm_pcie_port *port)
return 0;
out:
xilinx_cpm_free_irq_domains(port);
+ of_node_put(pcie_intc_node);
dev_err(dev, "Failed to allocate IRQ domains\n");
return -ENOMEM;
diff --git a/drivers/pci/endpoint/functions/Kconfig b/drivers/pci/endpoint/functions/Kconfig
index 8820d0f7ec77..5f1242ca2f4e 100644
--- a/drivers/pci/endpoint/functions/Kconfig
+++ b/drivers/pci/endpoint/functions/Kconfig
@@ -12,3 +12,16 @@ config PCI_EPF_TEST
for PCI Endpoint.
If in doubt, say "N" to disable Endpoint test driver.
+
+config PCI_EPF_NTB
+ tristate "PCI Endpoint NTB driver"
+ depends on PCI_ENDPOINT
+ select CONFIGFS_FS
+ help
+ Select this configuration option to enable the Non-Transparent
+ Bridge (NTB) driver for PCI Endpoint. The NTB driver implements NTB
+ controller functionality using multiple PCIe endpoint instances.
+ It can support NTB endpoint function devices created using
+ device tree.
+
+ If in doubt, say "N" to disable Endpoint NTB driver.
diff --git a/drivers/pci/endpoint/functions/Makefile b/drivers/pci/endpoint/functions/Makefile
index d6fafff080e2..96ab932a537a 100644
--- a/drivers/pci/endpoint/functions/Makefile
+++ b/drivers/pci/endpoint/functions/Makefile
@@ -4,3 +4,4 @@
#
obj-$(CONFIG_PCI_EPF_TEST) += pci-epf-test.o
+obj-$(CONFIG_PCI_EPF_NTB) += pci-epf-ntb.o
diff --git a/drivers/pci/endpoint/functions/pci-epf-ntb.c b/drivers/pci/endpoint/functions/pci-epf-ntb.c
new file mode 100644
index 000000000000..338148cf56f5
--- /dev/null
+++ b/drivers/pci/endpoint/functions/pci-epf-ntb.c
@@ -0,0 +1,2128 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Endpoint Function Driver to implement Non-Transparent Bridge functionality
+ *
+ * Copyright (C) 2020 Texas Instruments
+ * Author: Kishon Vijay Abraham I <kishon@ti.com>
+ */
+
+/*
+ * The PCI NTB function driver configures the SoC with multiple PCIe Endpoint
+ * (EP) controller instances (see diagram below) in such a way that
+ * transactions from one EP controller are routed to the other EP controller.
+ * Once the PCI NTB function driver has configured the SoC with multiple EP
+ * instances, HOST1 and HOST2 can communicate with each other using the SoC
+ * as a bridge.
+ *
+ * +-------------+ +-------------+
+ * | | | |
+ * | HOST1 | | HOST2 |
+ * | | | |
+ * +------^------+ +------^------+
+ * | |
+ * | |
+ * +---------|-------------------------------------------------|---------+
+ * | +------v------+ +------v------+ |
+ * | | | | | |
+ * | | EP | | EP | |
+ * | | CONTROLLER1 | | CONTROLLER2 | |
+ * | | <-----------------------------------> | |
+ * | | | | | |
+ * | | | | | |
+ * | | | SoC With Multiple EP Instances | | |
+ * | | | (Configured using NTB Function) | | |
+ * | +-------------+ +-------------+ |
+ * +---------------------------------------------------------------------+
+ */
+
+#include <linux/delay.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#include <linux/pci-epc.h>
+#include <linux/pci-epf.h>
+
+static struct workqueue_struct *kpcintb_workqueue;
+
+#define COMMAND_CONFIGURE_DOORBELL 1
+#define COMMAND_TEARDOWN_DOORBELL 2
+#define COMMAND_CONFIGURE_MW 3
+#define COMMAND_TEARDOWN_MW 4
+#define COMMAND_LINK_UP 5
+#define COMMAND_LINK_DOWN 6
+
+#define COMMAND_STATUS_OK 1
+#define COMMAND_STATUS_ERROR 2
+
+#define LINK_STATUS_UP BIT(0)
+
+#define SPAD_COUNT 64
+#define DB_COUNT 4
+#define NTB_MW_OFFSET 2
+#define DB_COUNT_MASK GENMASK(15, 0)
+#define MSIX_ENABLE BIT(16)
+#define MAX_DB_COUNT 32
+#define MAX_MW 4
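+
+/*
+ * Illustrative sketch (not part of this driver): a host-side NTB client
+ * requests, for example, four MSI-X doorbells by writing the argument
+ * before the command into the control region exposed through BAR_CONFIG:
+ *
+ *	writel(4 | MSIX_ENABLE, &ctrl->argument);
+ *	writel(COMMAND_CONFIGURE_DOORBELL, &ctrl->command);
+ *
+ * and then polls ctrl->command_status for COMMAND_STATUS_OK. The command
+ * handler below clears ctrl->command once it has consumed a command.
+ */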
+
+enum epf_ntb_bar {
+ BAR_CONFIG,
+ BAR_PEER_SPAD,
+ BAR_DB_MW1,
+ BAR_MW2,
+ BAR_MW3,
+ BAR_MW4,
+};
+
+struct epf_ntb {
+ u32 num_mws;
+ u32 db_count;
+ u32 spad_count;
+ struct pci_epf *epf;
+ u64 mws_size[MAX_MW];
+ struct config_group group;
+ struct epf_ntb_epc *epc[2];
+};
+
+#define to_epf_ntb(epf_group) container_of((epf_group), struct epf_ntb, group)
+
+struct epf_ntb_epc {
+ u8 func_no;
+ bool linkup;
+ bool is_msix;
+ int msix_bar;
+ u32 spad_size;
+ struct pci_epc *epc;
+ struct epf_ntb *epf_ntb;
+ void __iomem *mw_addr[6];
+ size_t msix_table_offset;
+ struct epf_ntb_ctrl *reg;
+ struct pci_epf_bar *epf_bar;
+ enum pci_barno epf_ntb_bar[6];
+ struct delayed_work cmd_handler;
+ enum pci_epc_interface_type type;
+ const struct pci_epc_features *epc_features;
+};
+
+struct epf_ntb_ctrl {
+ u32 command;
+ u32 argument;
+ u16 command_status;
+ u16 link_status;
+ u32 topology;
+ u64 addr;
+ u64 size;
+ u32 num_mws;
+ u32 mw1_offset;
+ u32 spad_offset;
+ u32 spad_count;
+ u32 db_entry_size;
+ u32 db_data[MAX_DB_COUNT];
+ u32 db_offset[MAX_DB_COUNT];
+} __packed;
+
+static struct pci_epf_header epf_ntb_header = {
+ .vendorid = PCI_ANY_ID,
+ .deviceid = PCI_ANY_ID,
+ .baseclass_code = PCI_BASE_CLASS_MEMORY,
+ .interrupt_pin = PCI_INTERRUPT_INTA,
+};
+
+/**
+ * epf_ntb_link_up() - Raise link_up interrupt to both the hosts
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @link_up: true or false indicating Link is UP or Down
+ *
+ * Once the NTB functions in both HOST1 and HOST2 have invoked
+ * ntb_link_enable(), this NTB function driver triggers a link event to the
+ * NTB client in both hosts.
+ */
+static int epf_ntb_link_up(struct epf_ntb *ntb, bool link_up)
+{
+ enum pci_epc_interface_type type;
+ enum pci_epc_irq_type irq_type;
+ struct epf_ntb_epc *ntb_epc;
+ struct epf_ntb_ctrl *ctrl;
+ struct pci_epc *epc;
+ bool is_msix;
+ u8 func_no;
+ int ret;
+
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++) {
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+ func_no = ntb_epc->func_no;
+ is_msix = ntb_epc->is_msix;
+ ctrl = ntb_epc->reg;
+ if (link_up)
+ ctrl->link_status |= LINK_STATUS_UP;
+ else
+ ctrl->link_status &= ~LINK_STATUS_UP;
+ irq_type = is_msix ? PCI_EPC_IRQ_MSIX : PCI_EPC_IRQ_MSI;
+ ret = pci_epc_raise_irq(epc, func_no, irq_type, 1);
+ if (ret) {
+ dev_err(&epc->dev,
+ "%s intf: Failed to raise Link Up IRQ\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+ }
+
+ return 0;
+}
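+
+/*
+ * Sketch only (not part of this driver): on receiving the link event IRQ,
+ * a host-side NTB client could read the new link state from its control
+ * region, e.g.:
+ *
+ *	link_up = readw(&ctrl->link_status) & LINK_STATUS_UP;
+ */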
+
+/**
+ * epf_ntb_configure_mw() - Configure the Outbound Address Space for one host
+ * to access the memory window of the other host
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ * @mw: Index of the memory window (either 0, 1, 2 or 3)
+ *
+ * +-----------------+ +---->+----------------+-----------+-----------------+
+ * | BAR0 | | | Doorbell 1 +-----------> MSI|X ADDRESS 1 |
+ * +-----------------+ | +----------------+ +-----------------+
+ * | BAR1 | | | Doorbell 2 +---------+ | |
+ * +-----------------+----+ +----------------+ | | |
+ * | BAR2 | | Doorbell 3 +-------+ | +-----------------+
+ * +-----------------+----+ +----------------+ | +-> MSI|X ADDRESS 2 |
+ * | BAR3 | | | Doorbell 4 +-----+ | +-----------------+
+ * +-----------------+ | |----------------+ | | | |
+ * | BAR4 | | | | | | +-----------------+
+ * +-----------------+ | | MW1 +---+ | +-->+ MSI|X ADDRESS 3||
+ * | BAR5 | | | | | | +-----------------+
+ * +-----------------+ +---->-----------------+ | | | |
+ * EP CONTROLLER 1 | | | | +-----------------+
+ * | | | +---->+ MSI|X ADDRESS 4 |
+ * +----------------+ | +-----------------+
+ * (A) EP CONTROLLER 2 | | |
+ * (OB SPACE) | | |
+ * +-------> MW1 |
+ * | |
+ * | |
+ * (B) +-----------------+
+ * | |
+ * | |
+ * | |
+ * | |
+ * | |
+ * +-----------------+
+ * PCI Address Space
+ * (Managed by HOST2)
+ *
+ * This function performs stage (B) in the above diagram (see MW1), i.e., maps
+ * the OB address space of the memory window to the PCI address space.
+ *
+ * This operation requires 3 parameters
+ * 1) Address in the outbound address space
+ * 2) Address in the PCI Address space
+ * 3) Size of the address region to be mapped
+ *
+ * The address in the outbound address space (for MW1, MW2, MW3 and MW4) is
+ * stored in the epf_bar corresponding to BAR_DB_MW1 for MW1, and to BAR_MW2,
+ * BAR_MW3, BAR_MW4 for the remaining MWs, of the epf_ntb_epc that is
+ * connected to HOST1. This is populated in epf_ntb_alloc_peer_mem() in this
+ * driver.
+ *
+ * The address and size of the PCI address region that has to be mapped would
+ * be provided by HOST2 in ctrl->addr and ctrl->size of epf_ntb_epc that is
+ * connected to HOST2.
+ *
+ * Please note Memory window 1 (MW1) and the Doorbell registers together will
+ * be mapped to a single BAR (BAR2) above for 32-bit BARs. The exact BAR
+ * that's used for a Memory window (MW) can be obtained from
+ * epf_ntb_bar[BAR_DB_MW1], epf_ntb_bar[BAR_MW2], epf_ntb_bar[BAR_MW3] and
+ * epf_ntb_bar[BAR_MW4].
+ */
+static int epf_ntb_configure_mw(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type, u32 mw)
+{
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ struct pci_epf_bar *peer_epf_bar;
+ enum pci_barno peer_barno;
+ struct epf_ntb_ctrl *ctrl;
+ phys_addr_t phys_addr;
+ struct pci_epc *epc;
+ u64 addr, size;
+ int ret = 0;
+ u8 func_no;
+
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_barno = peer_ntb_epc->epf_ntb_bar[mw + NTB_MW_OFFSET];
+ peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
+
+ phys_addr = peer_epf_bar->phys_addr;
+ ctrl = ntb_epc->reg;
+ addr = ctrl->addr;
+ size = ctrl->size;
+ if (mw + NTB_MW_OFFSET == BAR_DB_MW1)
+ phys_addr += ctrl->mw1_offset;
+
+ if (size > ntb->mws_size[mw]) {
+ dev_err(&epc->dev,
+ "%s intf: MW: %d Req Sz:%llxx > Supported Sz:%llx\n",
+ pci_epc_interface_string(type), mw, size,
+ ntb->mws_size[mw]);
+ ret = -EINVAL;
+ goto err_invalid_size;
+ }
+
+ func_no = ntb_epc->func_no;
+
+ ret = pci_epc_map_addr(epc, func_no, phys_addr, addr, size);
+ if (ret)
+ dev_err(&epc->dev,
+ "%s intf: Failed to map memory window %d address\n",
+ pci_epc_interface_string(type), mw);
+
+err_invalid_size:
+
+ return ret;
+}
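+
+/*
+ * Illustrative host-side sequence (a sketch, not part of this driver): to
+ * have the endpoint map MW1, the host populates the control region and then
+ * issues COMMAND_CONFIGURE_MW with the window index (0 for MW1) as argument:
+ *
+ *	writeq(pci_addr, &ctrl->addr);
+ *	writeq(mw_size, &ctrl->size);
+ *	writel(0, &ctrl->argument);
+ *	writel(COMMAND_CONFIGURE_MW, &ctrl->command);
+ */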
+
+/**
+ * epf_ntb_teardown_mw() - Tear down the configured OB ATU
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ * @mw: Index of the memory window (either 0, 1, 2 or 3)
+ *
+ * Tear down the OB ATU configured in epf_ntb_configure_mw(), using
+ * pci_epc_unmap_addr().
+ */
+static void epf_ntb_teardown_mw(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type, u32 mw)
+{
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ struct pci_epf_bar *peer_epf_bar;
+ enum pci_barno peer_barno;
+ struct epf_ntb_ctrl *ctrl;
+ phys_addr_t phys_addr;
+ struct pci_epc *epc;
+ u8 func_no;
+
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_barno = peer_ntb_epc->epf_ntb_bar[mw + NTB_MW_OFFSET];
+ peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
+
+ phys_addr = peer_epf_bar->phys_addr;
+ ctrl = ntb_epc->reg;
+ if (mw + NTB_MW_OFFSET == BAR_DB_MW1)
+ phys_addr += ctrl->mw1_offset;
+ func_no = ntb_epc->func_no;
+
+ pci_epc_unmap_addr(epc, func_no, phys_addr);
+}
+
+/**
+ * epf_ntb_configure_msi() - Map OB address space to MSI address
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ * @db_count: Number of doorbell interrupts to map
+ *
+ *+-----------------+ +----->+----------------+-----------+-----------------+
+ *| BAR0 | | | Doorbell 1 +---+-------> MSI ADDRESS |
+ *+-----------------+ | +----------------+ | +-----------------+
+ *| BAR1 | | | Doorbell 2 +---+ | |
+ *+-----------------+----+ +----------------+ | | |
+ *| BAR2 | | Doorbell 3 +---+ | |
+ *+-----------------+----+ +----------------+ | | |
+ *| BAR3 | | | Doorbell 4 +---+ | |
+ *+-----------------+ | |----------------+ | |
+ *| BAR4 | | | | | |
+ *+-----------------+ | | MW1 | | |
+ *| BAR5 | | | | | |
+ *+-----------------+ +----->-----------------+ | |
+ * EP CONTROLLER 1 | | | |
+ * | | | |
+ * +----------------+ +-----------------+
+ * (A) EP CONTROLLER 2 | |
+ * (OB SPACE) | |
+ * | MW1 |
+ * | |
+ * | |
+ * (B) +-----------------+
+ * | |
+ * | |
+ * | |
+ * | |
+ * | |
+ * +-----------------+
+ * PCI Address Space
+ * (Managed by HOST2)
+ *
+ *
+ * This function performs stage (B) in the above diagram (see Doorbell 1,
+ * Doorbell 2, Doorbell 3, Doorbell 4), i.e., maps the OB address space
+ * corresponding to the doorbells to the MSI address in PCI address space.
+ *
+ * This operation requires 3 parameters
+ * 1) Address reserved for the doorbells in the outbound address space
+ * 2) MSI address in the PCI Address space
+ * 3) Number of MSI interrupts that have to be configured
+ *
+ * The address in the outbound address space (for the Doorbell) is stored in
+ * epf_bar corresponding to BAR_DB_MW1 of epf_ntb_epc that is connected to
+ * HOST1. This is populated in epf_ntb_alloc_peer_mem() in this driver along
+ * with address for MW1.
+ *
+ * pci_epc_map_msi_irq() takes the MSI address from MSI capability register
+ * and maps the OB address (obtained in epf_ntb_alloc_peer_mem()) to the MSI
+ * address.
+ *
+ * epf_ntb_configure_msi() also stores the MSI data to raise each interrupt
+ * in db_data of the peer's control region. This helps the peer to raise
+ * doorbell of the other host by writing db_data to the BAR corresponding to
+ * BAR_DB_MW1.
+ */
+static int epf_ntb_configure_msi(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type, u16 db_count)
+{
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ u32 db_entry_size, db_data, db_offset;
+ struct pci_epf_bar *peer_epf_bar;
+ struct epf_ntb_ctrl *peer_ctrl;
+ enum pci_barno peer_barno;
+ phys_addr_t phys_addr;
+ struct pci_epc *epc;
+ u8 func_no;
+ int ret, i;
+
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_barno = peer_ntb_epc->epf_ntb_bar[BAR_DB_MW1];
+ peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
+ peer_ctrl = peer_ntb_epc->reg;
+ db_entry_size = peer_ctrl->db_entry_size;
+
+ phys_addr = peer_epf_bar->phys_addr;
+ func_no = ntb_epc->func_no;
+
+ ret = pci_epc_map_msi_irq(epc, func_no, phys_addr, db_count,
+ db_entry_size, &db_data, &db_offset);
+ if (ret) {
+ dev_err(&epc->dev, "%s intf: Failed to map MSI IRQ\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+
+ for (i = 0; i < db_count; i++) {
+ peer_ctrl->db_data[i] = db_data | i;
+ peer_ctrl->db_offset[i] = db_offset;
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_configure_msix() - Map OB address space to MSI-X address
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ * @db_count: Number of doorbell interrupts to map
+ *
+ *+-----------------+ +----->+----------------+-----------+-----------------+
+ *| BAR0 | | | Doorbell 1 +-----------> MSI-X ADDRESS 1 |
+ *+-----------------+ | +----------------+ +-----------------+
+ *| BAR1 | | | Doorbell 2 +---------+ | |
+ *+-----------------+----+ +----------------+ | | |
+ *| BAR2 | | Doorbell 3 +-------+ | +-----------------+
+ *+-----------------+----+ +----------------+ | +-> MSI-X ADDRESS 2 |
+ *| BAR3 | | | Doorbell 4 +-----+ | +-----------------+
+ *+-----------------+ | |----------------+ | | | |
+ *| BAR4 | | | | | | +-----------------+
+ *+-----------------+ | | MW1 + | +-->+ MSI-X ADDRESS 3||
+ *| BAR5 | | | | | +-----------------+
+ *+-----------------+ +----->-----------------+ | | |
+ * EP CONTROLLER 1 | | | +-----------------+
+ * | | +---->+ MSI-X ADDRESS 4 |
+ * +----------------+ +-----------------+
+ * (A) EP CONTROLLER 2 | |
+ * (OB SPACE) | |
+ * | MW1 |
+ * | |
+ * | |
+ * (B) +-----------------+
+ * | |
+ * | |
+ * | |
+ * | |
+ * | |
+ * +-----------------+
+ * PCI Address Space
+ * (Managed by HOST2)
+ *
+ * This function performs stage (B) in the above diagram (see Doorbell 1,
+ * Doorbell 2, Doorbell 3, Doorbell 4), i.e., maps the OB address space
+ * corresponding to the doorbells to the MSI-X address in PCI address space.
+ *
+ * This operation requires 3 parameters
+ * 1) Address reserved for the doorbells in the outbound address space
+ * 2) MSI-X address in the PCIe Address space
+ * 3) Number of MSI-X interrupts that have to be configured
+ *
+ * The address in the outbound address space (for the Doorbell) is stored in
+ * epf_bar corresponding to BAR_DB_MW1 of epf_ntb_epc that is connected to
+ * HOST1. This is populated in epf_ntb_alloc_peer_mem() in this driver along
+ * with address for MW1.
+ *
+ * The MSI-X address is in the MSI-X table of EP CONTROLLER 2 and the
+ * doorbell count is in ctrl->argument of the epf_ntb_epc that is connected
+ * to HOST2. The MSI-X table is memory mapped in the BAR indicated by
+ * ntb_epc->msix_bar, at offset ntb_epc->msix_table_offset. From this,
+ * epf_ntb_configure_msix() gets the MSI-X address and data.
+ *
+ * epf_ntb_configure_msix() also stores the MSI-X data to raise each interrupt
+ * in db_data of the peer's control region. This helps the peer to raise
+ * doorbell of the other host by writing db_data to the BAR corresponding to
+ * BAR_DB_MW1.
+ */
+static int epf_ntb_configure_msix(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type,
+ u16 db_count)
+{
+ const struct pci_epc_features *epc_features;
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ struct pci_epf_bar *peer_epf_bar, *epf_bar;
+ struct pci_epf_msix_tbl *msix_tbl;
+ struct epf_ntb_ctrl *peer_ctrl;
+ u32 db_entry_size, msg_data;
+ enum pci_barno peer_barno;
+ phys_addr_t phys_addr;
+ struct pci_epc *epc;
+ size_t align;
+ u64 msg_addr;
+ u8 func_no;
+ int ret, i;
+
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+
+ epf_bar = &ntb_epc->epf_bar[ntb_epc->msix_bar];
+ msix_tbl = epf_bar->addr + ntb_epc->msix_table_offset;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_barno = peer_ntb_epc->epf_ntb_bar[BAR_DB_MW1];
+ peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
+ phys_addr = peer_epf_bar->phys_addr;
+ peer_ctrl = peer_ntb_epc->reg;
+ epc_features = ntb_epc->epc_features;
+ align = epc_features->align;
+
+ func_no = ntb_epc->func_no;
+ db_entry_size = peer_ctrl->db_entry_size;
+
+ for (i = 0; i < db_count; i++) {
+ msg_addr = ALIGN_DOWN(msix_tbl[i].msg_addr, align);
+ msg_data = msix_tbl[i].msg_data;
+ ret = pci_epc_map_addr(epc, func_no, phys_addr, msg_addr,
+ db_entry_size);
+ if (ret) {
+ dev_err(&epc->dev,
+ "%s intf: Failed to configure MSI-X IRQ\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+ phys_addr = phys_addr + db_entry_size;
+ peer_ctrl->db_data[i] = msg_data;
+ peer_ctrl->db_offset[i] = msix_tbl[i].msg_addr & (align - 1);
+ }
+ ntb_epc->is_msix = true;
+
+ return 0;
+}
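+
+/*
+ * Sketch of how the peer host rings doorbell 'i' once the doorbells have
+ * been configured (illustrative, not part of this driver): it writes the
+ * data advertised in its control region into the doorbell region of the
+ * BAR_DB_MW1 BAR:
+ *
+ *	writel(ctrl->db_data[i],
+ *	       db_base + i * ctrl->db_entry_size + ctrl->db_offset[i]);
+ *
+ * where db_base is the host's mapping of the doorbell BAR.
+ */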
+
+/**
+ * epf_ntb_configure_db() - Configure the Outbound Address Space for one host
+ * to ring the doorbell of the other host
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ * @db_count: Number of doorbells that have to be configured
+ * @msix: Indicates whether MSI-X or MSI should be used
+ *
+ * Invokes epf_ntb_configure_msix() or epf_ntb_configure_msi(), as required,
+ * for one HOST to ring the doorbell of the other HOST.
+ */
+static int epf_ntb_configure_db(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type,
+ u16 db_count, bool msix)
+{
+ struct epf_ntb_epc *ntb_epc;
+ struct pci_epc *epc;
+ int ret;
+
+ if (db_count > MAX_DB_COUNT)
+ return -EINVAL;
+
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+
+ if (msix)
+ ret = epf_ntb_configure_msix(ntb, type, db_count);
+ else
+ ret = epf_ntb_configure_msi(ntb, type, db_count);
+
+ if (ret)
+ dev_err(&epc->dev, "%s intf: Failed to configure DB\n",
+ pci_epc_interface_string(type));
+
+ return ret;
+}
+
+/**
+ * epf_ntb_teardown_db() - Unmap the OB address space mapped to the MSI/MSI-X
+ * address
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Invoke pci_epc_unmap_addr() to unmap the OB address from the MSI/MSI-X
+ * address.
+ */
+static void
+epf_ntb_teardown_db(struct epf_ntb *ntb, enum pci_epc_interface_type type)
+{
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ struct pci_epf_bar *peer_epf_bar;
+ enum pci_barno peer_barno;
+ phys_addr_t phys_addr;
+ struct pci_epc *epc;
+ u8 func_no;
+
+ ntb_epc = ntb->epc[type];
+ epc = ntb_epc->epc;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_barno = peer_ntb_epc->epf_ntb_bar[BAR_DB_MW1];
+ peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
+ phys_addr = peer_epf_bar->phys_addr;
+ func_no = ntb_epc->func_no;
+
+ pci_epc_unmap_addr(epc, func_no, phys_addr);
+}
+
+/**
+ * epf_ntb_cmd_handler() - Handle commands provided by the NTB Host
+ * @work: work_struct for the two epf_ntb_epc (PRIMARY and SECONDARY)
+ *
+ * Workqueue function that gets invoked for each of the two epf_ntb_epc
+ * periodically (once every 5ms) to check whether any command has been
+ * received from the NTB host. The host can send commands to configure
+ * doorbells, configure memory windows, or update the link status.
+ */
+static void epf_ntb_cmd_handler(struct work_struct *work)
+{
+ enum pci_epc_interface_type type;
+ struct epf_ntb_epc *ntb_epc;
+ struct epf_ntb_ctrl *ctrl;
+ u32 command, argument;
+ struct epf_ntb *ntb;
+ struct device *dev;
+ u16 db_count;
+ bool is_msix;
+ int ret;
+
+ ntb_epc = container_of(work, struct epf_ntb_epc, cmd_handler.work);
+ ctrl = ntb_epc->reg;
+ command = ctrl->command;
+ if (!command)
+ goto reset_handler;
+ argument = ctrl->argument;
+
+ ctrl->command = 0;
+ ctrl->argument = 0;
+
+ type = ntb_epc->type;
+ ntb = ntb_epc->epf_ntb;
+ dev = &ntb->epf->dev;
+
+ switch (command) {
+ case COMMAND_CONFIGURE_DOORBELL:
+ db_count = argument & DB_COUNT_MASK;
+ is_msix = argument & MSIX_ENABLE;
+ ret = epf_ntb_configure_db(ntb, type, db_count, is_msix);
+ if (ret < 0)
+ ctrl->command_status = COMMAND_STATUS_ERROR;
+ else
+ ctrl->command_status = COMMAND_STATUS_OK;
+ break;
+ case COMMAND_TEARDOWN_DOORBELL:
+ epf_ntb_teardown_db(ntb, type);
+ ctrl->command_status = COMMAND_STATUS_OK;
+ break;
+ case COMMAND_CONFIGURE_MW:
+ ret = epf_ntb_configure_mw(ntb, type, argument);
+ if (ret < 0)
+ ctrl->command_status = COMMAND_STATUS_ERROR;
+ else
+ ctrl->command_status = COMMAND_STATUS_OK;
+ break;
+ case COMMAND_TEARDOWN_MW:
+ epf_ntb_teardown_mw(ntb, type, argument);
+ ctrl->command_status = COMMAND_STATUS_OK;
+ break;
+ case COMMAND_LINK_UP:
+ ntb_epc->linkup = true;
+ if (ntb->epc[PRIMARY_INTERFACE]->linkup &&
+ ntb->epc[SECONDARY_INTERFACE]->linkup) {
+ ret = epf_ntb_link_up(ntb, true);
+ if (ret < 0)
+ ctrl->command_status = COMMAND_STATUS_ERROR;
+ else
+ ctrl->command_status = COMMAND_STATUS_OK;
+ goto reset_handler;
+ }
+ ctrl->command_status = COMMAND_STATUS_OK;
+ break;
+ case COMMAND_LINK_DOWN:
+ ntb_epc->linkup = false;
+ ret = epf_ntb_link_up(ntb, false);
+ if (ret < 0)
+ ctrl->command_status = COMMAND_STATUS_ERROR;
+ else
+ ctrl->command_status = COMMAND_STATUS_OK;
+ break;
+ default:
+ dev_err(dev, "%s intf UNKNOWN command: %d\n",
+ pci_epc_interface_string(type), command);
+ break;
+ }
+
+reset_handler:
+ queue_delayed_work(kpcintb_workqueue, &ntb_epc->cmd_handler,
+ msecs_to_jiffies(5));
+}
+
+/**
+ * epf_ntb_peer_spad_bar_clear() - Clear Peer Scratchpad BAR
+ * @ntb_epc: EPC interface (PRIMARY or SECONDARY) whose peer scratchpad BAR
+ * has to be cleared
+ *
+ *+-----------------+------->+------------------+ +-----------------+
+ *| BAR0 | | CONFIG REGION | | BAR0 |
+ *+-----------------+----+ +------------------+<-------+-----------------+
+ *| BAR1 | | |SCRATCHPAD REGION | | BAR1 |
+ *+-----------------+ +-->+------------------+<-------+-----------------+
+ *| BAR2 | Local Memory | BAR2 |
+ *+-----------------+ +-----------------+
+ *| BAR3 | | BAR3 |
+ *+-----------------+ +-----------------+
+ *| BAR4 | | BAR4 |
+ *+-----------------+ +-----------------+
+ *| BAR5 | | BAR5 |
+ *+-----------------+ +-----------------+
+ * EP CONTROLLER 1 EP CONTROLLER 2
+ *
+ * Clear BAR1 of EP CONTROLLER 2, which contains HOST2's peer scratchpad
+ * region. While BAR1 is the default peer scratchpad BAR, an NTB could have
+ * other BARs for peer scratchpad (because of 64-bit BARs or reserved BARs).
+ * This function can get the exact BAR used for peer scratchpad from
+ * epf_ntb_bar[BAR_PEER_SPAD].
+ *
+ * Since HOST2's peer scratchpad is also HOST1's self scratchpad, this function
+ * gets the address of peer scratchpad from
+ * peer_ntb_epc->epf_ntb_bar[BAR_CONFIG].
+ */
+static void epf_ntb_peer_spad_bar_clear(struct epf_ntb_epc *ntb_epc)
+{
+ struct pci_epf_bar *epf_bar;
+ enum pci_barno barno;
+ struct pci_epc *epc;
+ u8 func_no;
+
+ epc = ntb_epc->epc;
+ func_no = ntb_epc->func_no;
+ barno = ntb_epc->epf_ntb_bar[BAR_PEER_SPAD];
+ epf_bar = &ntb_epc->epf_bar[barno];
+ pci_epc_clear_bar(epc, func_no, epf_bar);
+}
+
+/**
+ * epf_ntb_peer_spad_bar_set() - Set peer scratchpad BAR
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ *+-----------------+------->+------------------+ +-----------------+
+ *| BAR0 | | CONFIG REGION | | BAR0 |
+ *+-----------------+----+ +------------------+<-------+-----------------+
+ *| BAR1 | | |SCRATCHPAD REGION | | BAR1 |
+ *+-----------------+ +-->+------------------+<-------+-----------------+
+ *| BAR2 | Local Memory | BAR2 |
+ *+-----------------+ +-----------------+
+ *| BAR3 | | BAR3 |
+ *+-----------------+ +-----------------+
+ *| BAR4 | | BAR4 |
+ *+-----------------+ +-----------------+
+ *| BAR5 | | BAR5 |
+ *+-----------------+ +-----------------+
+ * EP CONTROLLER 1 EP CONTROLLER 2
+ *
+ * Set BAR1 of EP CONTROLLER 2, which contains HOST2's peer scratchpad
+ * region. While BAR1 is the default peer scratchpad BAR, an NTB could have
+ * other BARs for peer scratchpad (because of 64-bit BARs or reserved BARs).
+ * This function can get the exact BAR used for peer scratchpad from
+ * epf_ntb_bar[BAR_PEER_SPAD].
+ *
+ * Since HOST2's peer scratchpad is also HOST1's self scratchpad, this function
+ * gets the address of peer scratchpad from
+ * peer_ntb_epc->epf_ntb_bar[BAR_CONFIG].
+ */
+static int epf_ntb_peer_spad_bar_set(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ struct pci_epf_bar *peer_epf_bar, *epf_bar;
+ enum pci_barno peer_barno, barno;
+ u32 peer_spad_offset;
+ struct pci_epc *epc;
+ struct device *dev;
+ u8 func_no;
+ int ret;
+
+ dev = &ntb->epf->dev;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_barno = peer_ntb_epc->epf_ntb_bar[BAR_CONFIG];
+ peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno];
+
+ ntb_epc = ntb->epc[type];
+ barno = ntb_epc->epf_ntb_bar[BAR_PEER_SPAD];
+ epf_bar = &ntb_epc->epf_bar[barno];
+ func_no = ntb_epc->func_no;
+ epc = ntb_epc->epc;
+
+ peer_spad_offset = peer_ntb_epc->reg->spad_offset;
+ epf_bar->phys_addr = peer_epf_bar->phys_addr + peer_spad_offset;
+ epf_bar->size = peer_ntb_epc->spad_size;
+ epf_bar->barno = barno;
+ epf_bar->flags = PCI_BASE_ADDRESS_MEM_TYPE_32;
+
+ ret = pci_epc_set_bar(epc, func_no, epf_bar);
+ if (ret) {
+ dev_err(dev, "%s intf: peer SPAD BAR set failed\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_config_sspad_bar_clear() - Clear Config + Self scratchpad BAR
+ * @ntb_epc: EPC interface (PRIMARY or SECONDARY) whose config + self
+ * scratchpad BAR has to be cleared
+ *
+ * +-----------------+------->+------------------+ +-----------------+
+ * | BAR0 | | CONFIG REGION | | BAR0 |
+ * +-----------------+----+ +------------------+<-------+-----------------+
+ * | BAR1 | | |SCRATCHPAD REGION | | BAR1 |
+ * +-----------------+ +-->+------------------+<-------+-----------------+
+ * | BAR2 | Local Memory | BAR2 |
+ * +-----------------+ +-----------------+
+ * | BAR3 | | BAR3 |
+ * +-----------------+ +-----------------+
+ * | BAR4 | | BAR4 |
+ * +-----------------+ +-----------------+
+ * | BAR5 | | BAR5 |
+ * +-----------------+ +-----------------+
+ * EP CONTROLLER 1 EP CONTROLLER 2
+ *
+ * Clear BAR0 of EP CONTROLLER 1, which contains HOST1's config and
+ * self scratchpad region (removes inbound ATU configuration). While BAR0 is
+ * the default self scratchpad BAR, an NTB could have other BARs for self
+ * scratchpad (because of reserved BARs). This function can get the exact BAR
+ * used for self scratchpad from epf_ntb_bar[BAR_CONFIG].
+ *
+ * Please note the self scratchpad region and config region are combined
+ * into a single region and mapped using the same BAR. Also note HOST2's peer
+ * scratchpad is HOST1's self scratchpad.
+ */
+static void epf_ntb_config_sspad_bar_clear(struct epf_ntb_epc *ntb_epc)
+{
+ struct pci_epf_bar *epf_bar;
+ enum pci_barno barno;
+ struct pci_epc *epc;
+ u8 func_no;
+
+ epc = ntb_epc->epc;
+ func_no = ntb_epc->func_no;
+ barno = ntb_epc->epf_ntb_bar[BAR_CONFIG];
+ epf_bar = &ntb_epc->epf_bar[barno];
+ pci_epc_clear_bar(epc, func_no, epf_bar);
+}
+
+/**
+ * epf_ntb_config_sspad_bar_set() - Set Config + Self scratchpad BAR
+ * @ntb_epc: EPC interface (PRIMARY or SECONDARY) whose config + self
+ * scratchpad BAR has to be set
+ *
+ * +-----------------+------->+------------------+ +-----------------+
+ * | BAR0 | | CONFIG REGION | | BAR0 |
+ * +-----------------+----+ +------------------+<-------+-----------------+
+ * | BAR1 | | |SCRATCHPAD REGION | | BAR1 |
+ * +-----------------+ +-->+------------------+<-------+-----------------+
+ * | BAR2 | Local Memory | BAR2 |
+ * +-----------------+ +-----------------+
+ * | BAR3 | | BAR3 |
+ * +-----------------+ +-----------------+
+ * | BAR4 | | BAR4 |
+ * +-----------------+ +-----------------+
+ * | BAR5 | | BAR5 |
+ * +-----------------+ +-----------------+
+ * EP CONTROLLER 1 EP CONTROLLER 2
+ *
+ * Map BAR0 of EP CONTROLLER 1, which contains HOST1's config and
+ * self scratchpad region. While BAR0 is the default self scratchpad BAR, an
+ * NTB could have other BARs for self scratchpad (because of reserved BARs).
+ * This function can get the exact BAR used for self scratchpad from
+ * epf_ntb_bar[BAR_CONFIG].
+ *
+ * Please note the self scratchpad region and config region are combined
+ * into a single region and mapped using the same BAR. Also note HOST2's peer
+ * scratchpad is HOST1's self scratchpad.
+ */
+static int epf_ntb_config_sspad_bar_set(struct epf_ntb_epc *ntb_epc)
+{
+ struct pci_epf_bar *epf_bar;
+ enum pci_barno barno;
+ struct epf_ntb *ntb;
+ struct pci_epc *epc;
+ struct device *dev;
+ u8 func_no;
+ int ret;
+
+ ntb = ntb_epc->epf_ntb;
+ dev = &ntb->epf->dev;
+
+ epc = ntb_epc->epc;
+ func_no = ntb_epc->func_no;
+ barno = ntb_epc->epf_ntb_bar[BAR_CONFIG];
+ epf_bar = &ntb_epc->epf_bar[barno];
+
+ ret = pci_epc_set_bar(epc, func_no, epf_bar);
+ if (ret) {
+ dev_err(dev, "%s inft: Config/Status/SPAD BAR set failed\n",
+ pci_epc_interface_string(ntb_epc->type));
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_config_spad_bar_free() - Free the physical memory associated with
+ * config + scratchpad region
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * +-----------------+------->+------------------+ +-----------------+
+ * | BAR0 | | CONFIG REGION | | BAR0 |
+ * +-----------------+----+ +------------------+<-------+-----------------+
+ * | BAR1 | | |SCRATCHPAD REGION | | BAR1 |
+ * +-----------------+ +-->+------------------+<-------+-----------------+
+ * | BAR2 | Local Memory | BAR2 |
+ * +-----------------+ +-----------------+
+ * | BAR3 | | BAR3 |
+ * +-----------------+ +-----------------+
+ * | BAR4 | | BAR4 |
+ * +-----------------+ +-----------------+
+ * | BAR5 | | BAR5 |
+ * +-----------------+ +-----------------+
+ * EP CONTROLLER 1 EP CONTROLLER 2
+ *
+ * Free the Local Memory mentioned in the above diagram. After invoking this
+ * function, neither the config + self scratchpad region of HOST1 nor the
+ * peer scratchpad region of HOST2 should be accessed.
+ */
+static void epf_ntb_config_spad_bar_free(struct epf_ntb *ntb)
+{
+ enum pci_epc_interface_type type;
+ struct epf_ntb_epc *ntb_epc;
+ enum pci_barno barno;
+ struct pci_epf *epf;
+
+ epf = ntb->epf;
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++) {
+ ntb_epc = ntb->epc[type];
+ barno = ntb_epc->epf_ntb_bar[BAR_CONFIG];
+ if (ntb_epc->reg)
+ pci_epf_free_space(epf, ntb_epc->reg, barno, type);
+ }
+}
+
+/**
+ * epf_ntb_config_spad_bar_alloc() - Allocate memory for config + scratchpad
+ * region
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * +-----------------+------->+------------------+ +-----------------+
+ * | BAR0 | | CONFIG REGION | | BAR0 |
+ * +-----------------+----+ +------------------+<-------+-----------------+
+ * | BAR1 | | |SCRATCHPAD REGION | | BAR1 |
+ * +-----------------+ +-->+------------------+<-------+-----------------+
+ * | BAR2 | Local Memory | BAR2 |
+ * +-----------------+ +-----------------+
+ * | BAR3 | | BAR3 |
+ * +-----------------+ +-----------------+
+ * | BAR4 | | BAR4 |
+ * +-----------------+ +-----------------+
+ * | BAR5 | | BAR5 |
+ * +-----------------+ +-----------------+
+ * EP CONTROLLER 1 EP CONTROLLER 2
+ *
+ * Allocate the Local Memory mentioned in the above diagram. The size of
+ * CONFIG REGION is sizeof(struct epf_ntb_ctrl) and size of SCRATCHPAD REGION
+ * is obtained from "spad-count" configfs entry.
+ *
+ * The sizes of both the config region and the scratchpad region have to be
+ * aligned, since the scratchpad region will also be mapped as the PEER
+ * SCRATCHPAD of the other host using a separate BAR.
+ */
+static int epf_ntb_config_spad_bar_alloc(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ const struct pci_epc_features *peer_epc_features, *epc_features;
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ size_t msix_table_size, pba_size, align;
+ enum pci_barno peer_barno, barno;
+ struct epf_ntb_ctrl *ctrl;
+ u32 spad_size, ctrl_size;
+ u64 size, peer_size;
+ struct pci_epf *epf;
+ struct device *dev;
+ bool msix_capable;
+ u32 spad_count;
+ void *base;
+
+ epf = ntb->epf;
+ dev = &epf->dev;
+ ntb_epc = ntb->epc[type];
+
+ epc_features = ntb_epc->epc_features;
+ barno = ntb_epc->epf_ntb_bar[BAR_CONFIG];
+ size = epc_features->bar_fixed_size[barno];
+ align = epc_features->align;
+
+ peer_ntb_epc = ntb->epc[!type];
+ peer_epc_features = peer_ntb_epc->epc_features;
+ peer_barno = ntb_epc->epf_ntb_bar[BAR_PEER_SPAD];
+ peer_size = peer_epc_features->bar_fixed_size[peer_barno];
+
+	/* Bail out if epc_features is populated incorrectly */
+	if (!IS_ALIGNED(size, align))
+ return -EINVAL;
+
+ spad_count = ntb->spad_count;
+
+ ctrl_size = sizeof(struct epf_ntb_ctrl);
+ spad_size = spad_count * 4;
+
+ msix_capable = epc_features->msix_capable;
+ if (msix_capable) {
+ msix_table_size = PCI_MSIX_ENTRY_SIZE * ntb->db_count;
+ ctrl_size = ALIGN(ctrl_size, 8);
+ ntb_epc->msix_table_offset = ctrl_size;
+ ntb_epc->msix_bar = barno;
+ /* Align to QWORD or 8 Bytes */
+ pba_size = ALIGN(DIV_ROUND_UP(ntb->db_count, 8), 8);
+ ctrl_size = ctrl_size + msix_table_size + pba_size;
+ }
+
+ if (!align) {
+ ctrl_size = roundup_pow_of_two(ctrl_size);
+ spad_size = roundup_pow_of_two(spad_size);
+ } else {
+ ctrl_size = ALIGN(ctrl_size, align);
+ spad_size = ALIGN(spad_size, align);
+ }
+
+ if (peer_size) {
+ if (peer_size < spad_size)
+ spad_count = peer_size / 4;
+ spad_size = peer_size;
+ }
+
+ /*
+ * In order to make sure SPAD offset is aligned to its size,
+ * expand control region size to the size of SPAD if SPAD size
+ * is greater than control region size.
+ */
+ if (spad_size > ctrl_size)
+ ctrl_size = spad_size;
+
+ if (!size)
+ size = ctrl_size + spad_size;
+ else if (size < ctrl_size + spad_size)
+ return -EINVAL;
+
+ base = pci_epf_alloc_space(epf, size, barno, align, type);
+ if (!base) {
+ dev_err(dev, "%s intf: Config/Status/SPAD alloc region fail\n",
+ pci_epc_interface_string(type));
+ return -ENOMEM;
+ }
+
+ ntb_epc->reg = base;
+
+ ctrl = ntb_epc->reg;
+ ctrl->spad_offset = ctrl_size;
+ ctrl->spad_count = spad_count;
+ ctrl->num_mws = ntb->num_mws;
+ ctrl->db_entry_size = align ? align : 4;
+ ntb_epc->spad_size = spad_size;
+
+ return 0;
+}
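+
+/*
+ * Resulting layout of the BAR_CONFIG region allocated above (a sketch;
+ * actual offsets depend on the controller's alignment requirement and on
+ * whether it is MSI-X capable):
+ *
+ *	0x0			struct epf_ntb_ctrl
+ *	msix_table_offset	MSI-X table followed by PBA (if MSI-X capable)
+ *	spad_offset		self scratchpad (spad_count * 4 bytes)
+ */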
+
+/**
+ * epf_ntb_config_spad_bar_alloc_interface() - Allocate memory for config +
+ * scratchpad region for each of PRIMARY and SECONDARY interface
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * Wrapper for epf_ntb_config_spad_bar_alloc(), which allocates memory for
+ * the config + scratchpad region for both the PRIMARY and the SECONDARY
+ * interface.
+ */
+static int epf_ntb_config_spad_bar_alloc_interface(struct epf_ntb *ntb)
+{
+ enum pci_epc_interface_type type;
+ struct device *dev;
+ int ret;
+
+ dev = &ntb->epf->dev;
+
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++) {
+ ret = epf_ntb_config_spad_bar_alloc(ntb, type);
+ if (ret) {
+ dev_err(dev, "%s intf: Config/SPAD BAR alloc failed\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_free_peer_mem() - Free memory allocated in the peer's outbound
+ * address space
+ * @ntb_epc: EPC associated with one of the HOSTs, which holds the peer's
+ * outbound address regions
+ *
+ * +-----------------+ +---->+----------------+-----------+-----------------+
+ * | BAR0 | | | Doorbell 1 +-----------> MSI|X ADDRESS 1 |
+ * +-----------------+ | +----------------+ +-----------------+
+ * | BAR1 | | | Doorbell 2 +---------+ | |
+ * +-----------------+----+ +----------------+ | | |
+ * | BAR2 | | Doorbell 3 +-------+ | +-----------------+
+ * +-----------------+----+ +----------------+ | +-> MSI|X ADDRESS 2 |
+ * | BAR3 | | | Doorbell 4 +-----+ | +-----------------+
+ * +-----------------+ | |----------------+ | | | |
+ * | BAR4 | | | | | | +-----------------+
+ * +-----------------+ | | MW1 +---+ | +-->+ MSI|X ADDRESS 3||
+ * | BAR5 | | | | | | +-----------------+
+ * +-----------------+ +---->-----------------+ | | | |
+ * EP CONTROLLER 1 | | | | +-----------------+
+ * | | | +---->+ MSI|X ADDRESS 4 |
+ * +----------------+ | +-----------------+
+ * (A) EP CONTROLLER 2 | | |
+ * (OB SPACE) | | |
+ * +-------> MW1 |
+ * | |
+ * | |
+ * (B) +-----------------+
+ * | |
+ * | |
+ * | |
+ * | |
+ * | |
+ * +-----------------+
+ * PCI Address Space
+ * (Managed by HOST2)
+ *
+ * Free memory allocated in EP CONTROLLER 2 (OB SPACE) in the above diagram.
+ * It'll free Doorbell 1, Doorbell 2, Doorbell 3, Doorbell 4, MW1 (and MW2, MW3,
+ * MW4).
+ */
+static void epf_ntb_free_peer_mem(struct epf_ntb_epc *ntb_epc)
+{
+ struct pci_epf_bar *epf_bar;
+ void __iomem *mw_addr;
+ phys_addr_t phys_addr;
+ enum epf_ntb_bar bar;
+ enum pci_barno barno;
+ struct pci_epc *epc;
+ size_t size;
+
+ epc = ntb_epc->epc;
+
+	for (bar = BAR_DB_MW1; bar <= BAR_MW4; bar++) {
+ barno = ntb_epc->epf_ntb_bar[bar];
+ mw_addr = ntb_epc->mw_addr[barno];
+ epf_bar = &ntb_epc->epf_bar[barno];
+ phys_addr = epf_bar->phys_addr;
+ size = epf_bar->size;
+ if (mw_addr) {
+ pci_epc_mem_free_addr(epc, phys_addr, mw_addr, size);
+ ntb_epc->mw_addr[barno] = NULL;
+ }
+ }
+}
+
+/**
+ * epf_ntb_db_mw_bar_clear() - Clear doorbell and memory BAR
+ * @ntb_epc: EPC associated with one of the HOSTs, which holds the peer's
+ * outbound address
+ *
+ * +-----------------+ +---->+----------------+-----------+-----------------+
+ * | BAR0 | | | Doorbell 1 +-----------> MSI|X ADDRESS 1 |
+ * +-----------------+ | +----------------+ +-----------------+
+ * | BAR1 | | | Doorbell 2 +---------+ | |
+ * +-----------------+----+ +----------------+ | | |
+ * | BAR2 | | Doorbell 3 +-------+ | +-----------------+
+ * +-----------------+----+ +----------------+ | +-> MSI|X ADDRESS 2 |
+ * | BAR3 | | | Doorbell 4 +-----+ | +-----------------+
+ * +-----------------+ | |----------------+ | | | |
+ * | BAR4 | | | | | | +-----------------+
+ * +-----------------+ | | MW1 +---+ | +-->+ MSI|X ADDRESS 3||
+ * | BAR5 | | | | | | +-----------------+
+ * +-----------------+ +---->-----------------+ | | | |
+ * EP CONTROLLER 1 | | | | +-----------------+
+ * | | | +---->+ MSI|X ADDRESS 4 |
+ * +----------------+ | +-----------------+
+ * (A) EP CONTROLLER 2 | | |
+ * (OB SPACE) | | |
+ * +-------> MW1 |
+ * | |
+ * | |
+ * (B) +-----------------+
+ * | |
+ * | |
+ * | |
+ * | |
+ * | |
+ * +-----------------+
+ * PCI Address Space
+ * (Managed by HOST2)
+ *
+ * Clear doorbell and memory BARs (remove inbound ATU configuration). In the
+ * above diagram it clears BAR2 to BAR5 of EP CONTROLLER 1 (Doorbell BAR,
+ * MW1 BAR, MW2 BAR, MW3 BAR and MW4 BAR).
+ */
+static void epf_ntb_db_mw_bar_clear(struct epf_ntb_epc *ntb_epc)
+{
+ struct pci_epf_bar *epf_bar;
+ enum epf_ntb_bar bar;
+ enum pci_barno barno;
+ struct pci_epc *epc;
+ u8 func_no;
+
+ epc = ntb_epc->epc;
+
+ func_no = ntb_epc->func_no;
+
+	for (bar = BAR_DB_MW1; bar <= BAR_MW4; bar++) {
+ barno = ntb_epc->epf_ntb_bar[bar];
+ epf_bar = &ntb_epc->epf_bar[barno];
+ pci_epc_clear_bar(epc, func_no, epf_bar);
+ }
+}
+
+/**
+ * epf_ntb_db_mw_bar_cleanup() - Clear doorbell/memory BAR and free memory
+ * allocated in the peer's outbound address space
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Wrapper for epf_ntb_db_mw_bar_clear() to clear HOST1's BAR and
+ * epf_ntb_free_peer_mem() which frees up HOST2 outbound memory.
+ */
+static void epf_ntb_db_mw_bar_cleanup(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+
+ ntb_epc = ntb->epc[type];
+ peer_ntb_epc = ntb->epc[!type];
+
+ epf_ntb_db_mw_bar_clear(ntb_epc);
+ epf_ntb_free_peer_mem(peer_ntb_epc);
+}
+
+/**
+ * epf_ntb_configure_interrupt() - Configure MSI/MSI-X capability
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Configure MSI/MSI-X capability for each interface with number of
+ * interrupts equal to "db_count" configfs entry.
+ */
+static int epf_ntb_configure_interrupt(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ const struct pci_epc_features *epc_features;
+ bool msix_capable, msi_capable;
+ struct epf_ntb_epc *ntb_epc;
+ struct pci_epc *epc;
+ struct device *dev;
+ u32 db_count;
+ u8 func_no;
+ int ret;
+
+ ntb_epc = ntb->epc[type];
+ dev = &ntb->epf->dev;
+
+ epc_features = ntb_epc->epc_features;
+ msix_capable = epc_features->msix_capable;
+ msi_capable = epc_features->msi_capable;
+
+ if (!(msix_capable || msi_capable)) {
+ dev_err(dev, "MSI or MSI-X is required for doorbell\n");
+ return -EINVAL;
+ }
+
+ func_no = ntb_epc->func_no;
+
+ db_count = ntb->db_count;
+ if (db_count > MAX_DB_COUNT) {
+ dev_err(dev, "DB count cannot be more than %d\n", MAX_DB_COUNT);
+ return -EINVAL;
+ }
+
+ ntb->db_count = db_count;
+ epc = ntb_epc->epc;
+
+ if (msi_capable) {
+ ret = pci_epc_set_msi(epc, func_no, db_count);
+ if (ret) {
+ dev_err(dev, "%s intf: MSI configuration failed\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+ }
+
+ if (msix_capable) {
+ ret = pci_epc_set_msix(epc, func_no, db_count,
+ ntb_epc->msix_bar,
+ ntb_epc->msix_table_offset);
+ if (ret) {
+ dev_err(dev, "MSI configuration failed\n");
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_alloc_peer_mem() - Allocate memory in peer's outbound address space
+ * @dev: PCI EPF device, used for printing error messages
+ * @ntb_epc: EPC associated with one of the HOSTs, whose BAR holds the peer's
+ * outbound address
+ * @bar: BAR of @ntb_epc for which memory has to be allocated (could be
+ * BAR_DB_MW1, BAR_MW2, BAR_MW3, BAR_MW4)
+ * @peer_ntb_epc: EPC associated with the HOST whose outbound address space is
+ * used by @ntb_epc
+ * @size: Size of the address region that has to be allocated in the peer's
+ * OB SPACE
+ *
+ *
+ * +-----------------+ +---->+----------------+-----------+-----------------+
+ * | BAR0 | | | Doorbell 1 +-----------> MSI|X ADDRESS 1 |
+ * +-----------------+ | +----------------+ +-----------------+
+ * | BAR1 | | | Doorbell 2 +---------+ | |
+ * +-----------------+----+ +----------------+ | | |
+ * | BAR2 | | Doorbell 3 +-------+ | +-----------------+
+ * +-----------------+----+ +----------------+ | +-> MSI|X ADDRESS 2 |
+ * | BAR3 | | | Doorbell 4 +-----+ | +-----------------+
+ * +-----------------+ | |----------------+ | | | |
+ * | BAR4 | | | | | | +-----------------+
+ * +-----------------+ | | MW1 +---+ | +-->+ MSI|X ADDRESS 3||
+ * | BAR5 | | | | | | +-----------------+
+ * +-----------------+ +---->-----------------+ | | | |
+ * EP CONTROLLER 1 | | | | +-----------------+
+ * | | | +---->+ MSI|X ADDRESS 4 |
+ * +----------------+ | +-----------------+
+ * (A) EP CONTROLLER 2 | | |
+ * (OB SPACE) | | |
+ * +-------> MW1 |
+ * | |
+ * | |
+ * (B) +-----------------+
+ * | |
+ * | |
+ * | |
+ * | |
+ * | |
+ * +-----------------+
+ * PCI Address Space
+ * (Managed by HOST2)
+ *
+ * Allocate memory in the OB space of EP CONTROLLER 2 in the above diagram,
+ * i.e., for Doorbell 1, Doorbell 2, Doorbell 3, Doorbell 4 and MW1 (and MW2,
+ * MW3, MW4).
+ */
+static int epf_ntb_alloc_peer_mem(struct device *dev,
+ struct epf_ntb_epc *ntb_epc,
+ enum epf_ntb_bar bar,
+ struct epf_ntb_epc *peer_ntb_epc,
+ size_t size)
+{
+ const struct pci_epc_features *epc_features;
+ struct pci_epf_bar *epf_bar;
+ struct pci_epc *peer_epc;
+ phys_addr_t phys_addr;
+ void __iomem *mw_addr;
+ enum pci_barno barno;
+ size_t align;
+
+ epc_features = ntb_epc->epc_features;
+ align = epc_features->align;
+
+ if (size < 128)
+ size = 128;
+
+ if (align)
+ size = ALIGN(size, align);
+ else
+ size = roundup_pow_of_two(size);
+
+ peer_epc = peer_ntb_epc->epc;
+ mw_addr = pci_epc_mem_alloc_addr(peer_epc, &phys_addr, size);
+ if (!mw_addr) {
+ dev_err(dev, "%s intf: Failed to allocate OB address\n",
+ pci_epc_interface_string(peer_ntb_epc->type));
+ return -ENOMEM;
+ }
+
+ barno = ntb_epc->epf_ntb_bar[bar];
+ epf_bar = &ntb_epc->epf_bar[barno];
+ ntb_epc->mw_addr[barno] = mw_addr;
+
+ epf_bar->phys_addr = phys_addr;
+ epf_bar->size = size;
+ epf_bar->barno = barno;
+ epf_bar->flags = PCI_BASE_ADDRESS_MEM_TYPE_32;
+
+ return 0;
+}
+
+/**
+ * epf_ntb_db_mw_bar_init() - Configure Doorbell and Memory window BARs
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Wrapper for epf_ntb_alloc_peer_mem() and pci_epc_set_bar() that allocates
+ * memory in the OB address space of HOST2 and configures the BARs of HOST1.
+ */
+static int epf_ntb_db_mw_bar_init(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ const struct pci_epc_features *epc_features;
+ struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+ struct pci_epf_bar *epf_bar;
+ struct epf_ntb_ctrl *ctrl;
+ u32 num_mws, db_count;
+ enum epf_ntb_bar bar;
+ enum pci_barno barno;
+ struct pci_epc *epc;
+ struct device *dev;
+ size_t align;
+ int ret, i;
+ u8 func_no;
+ u64 size;
+
+ ntb_epc = ntb->epc[type];
+ peer_ntb_epc = ntb->epc[!type];
+
+ dev = &ntb->epf->dev;
+ epc_features = ntb_epc->epc_features;
+ align = epc_features->align;
+ func_no = ntb_epc->func_no;
+ epc = ntb_epc->epc;
+ num_mws = ntb->num_mws;
+ db_count = ntb->db_count;
+
+ for (bar = BAR_DB_MW1, i = 0; i < num_mws; bar++, i++) {
+ if (bar == BAR_DB_MW1) {
+ align = align ? align : 4;
+ size = db_count * align;
+ size = ALIGN(size, ntb->mws_size[i]);
+ ctrl = ntb_epc->reg;
+ ctrl->mw1_offset = size;
+ size += ntb->mws_size[i];
+ } else {
+ size = ntb->mws_size[i];
+ }
+
+ ret = epf_ntb_alloc_peer_mem(dev, ntb_epc, bar,
+ peer_ntb_epc, size);
+ if (ret) {
+ dev_err(dev, "%s intf: DoorBell mem alloc failed\n",
+ pci_epc_interface_string(type));
+ goto err_alloc_peer_mem;
+ }
+
+ barno = ntb_epc->epf_ntb_bar[bar];
+ epf_bar = &ntb_epc->epf_bar[barno];
+
+ ret = pci_epc_set_bar(epc, func_no, epf_bar);
+ if (ret) {
+ dev_err(dev, "%s intf: DoorBell BAR set failed\n",
+ pci_epc_interface_string(type));
+ goto err_alloc_peer_mem;
+ }
+ }
+
+ return 0;
+
+err_alloc_peer_mem:
+ epf_ntb_db_mw_bar_cleanup(ntb, type);
+
+ return ret;
+}
+
+/**
+ * epf_ntb_epc_destroy_interface() - Cleanup NTB EPC interface
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Unbind the NTB function device from the EPC and relinquish the reference
+ * to the pci_epc for the given interface.
+ */
+static void epf_ntb_epc_destroy_interface(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ struct epf_ntb_epc *ntb_epc;
+ struct pci_epc *epc;
+ struct pci_epf *epf;
+
+ if (type < 0)
+ return;
+
+ epf = ntb->epf;
+ ntb_epc = ntb->epc[type];
+ if (!ntb_epc)
+ return;
+ epc = ntb_epc->epc;
+ pci_epc_remove_epf(epc, epf, type);
+ pci_epc_put(epc);
+}
+
+/**
+ * epf_ntb_epc_destroy() - Cleanup NTB EPC interfaces
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * Wrapper for epf_ntb_epc_destroy_interface() to clean up all the NTB
+ * interfaces.
+ */
+static void epf_ntb_epc_destroy(struct epf_ntb *ntb)
+{
+ enum pci_epc_interface_type type;
+
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++)
+ epf_ntb_epc_destroy_interface(ntb, type);
+}
+
+/**
+ * epf_ntb_epc_create_interface() - Create and initialize NTB EPC interface
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @epc: struct pci_epc to which a particular NTB interface should be associated
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Allocate memory for NTB EPC interface and initialize it.
+ */
+static int epf_ntb_epc_create_interface(struct epf_ntb *ntb,
+ struct pci_epc *epc,
+ enum pci_epc_interface_type type)
+{
+ const struct pci_epc_features *epc_features;
+ struct pci_epf_bar *epf_bar;
+ struct epf_ntb_epc *ntb_epc;
+ struct pci_epf *epf;
+ struct device *dev;
+ u8 func_no;
+
+ dev = &ntb->epf->dev;
+
+ ntb_epc = devm_kzalloc(dev, sizeof(*ntb_epc), GFP_KERNEL);
+ if (!ntb_epc)
+ return -ENOMEM;
+
+ epf = ntb->epf;
+ if (type == PRIMARY_INTERFACE) {
+ func_no = epf->func_no;
+ epf_bar = epf->bar;
+ } else {
+ func_no = epf->sec_epc_func_no;
+ epf_bar = epf->sec_epc_bar;
+ }
+
+ ntb_epc->linkup = false;
+ ntb_epc->epc = epc;
+ ntb_epc->func_no = func_no;
+ ntb_epc->type = type;
+ ntb_epc->epf_bar = epf_bar;
+ ntb_epc->epf_ntb = ntb;
+
+ epc_features = pci_epc_get_features(epc, func_no);
+ if (!epc_features)
+ return -EINVAL;
+ ntb_epc->epc_features = epc_features;
+
+ ntb->epc[type] = ntb_epc;
+
+ return 0;
+}
+
+/**
+ * epf_ntb_epc_create() - Create and initialize NTB EPC interface
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * Get a reference to the EPC device and bind the NTB function device to
+ * that EPC, for each of the interfaces. It is also a wrapper to
+ * epf_ntb_epc_create_interface(), which allocates memory for an NTB EPC
+ * interface and initializes it.
+ */
+static int epf_ntb_epc_create(struct epf_ntb *ntb)
+{
+ struct pci_epf *epf;
+ struct device *dev;
+ int ret;
+
+ epf = ntb->epf;
+ dev = &epf->dev;
+
+ ret = epf_ntb_epc_create_interface(ntb, epf->epc, PRIMARY_INTERFACE);
+ if (ret) {
+ dev_err(dev, "PRIMARY intf: Fail to create NTB EPC\n");
+ return ret;
+ }
+
+ ret = epf_ntb_epc_create_interface(ntb, epf->sec_epc,
+ SECONDARY_INTERFACE);
+ if (ret) {
+ dev_err(dev, "SECONDARY intf: Fail to create NTB EPC\n");
+ goto err_epc_create;
+ }
+
+ return 0;
+
+err_epc_create:
+ epf_ntb_epc_destroy_interface(ntb, PRIMARY_INTERFACE);
+
+ return ret;
+}
+
+/**
+ * epf_ntb_init_epc_bar_interface() - Identify BARs to be used for each of
+ * the NTB constructs (scratchpad region, doorbell, memory window)
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Identify the free BARs to be used for each of BAR_CONFIG, BAR_PEER_SPAD,
+ * BAR_DB_MW1, BAR_MW2, BAR_MW3 and BAR_MW4.
+ */
+static int epf_ntb_init_epc_bar_interface(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ const struct pci_epc_features *epc_features;
+ struct epf_ntb_epc *ntb_epc;
+ enum pci_barno barno;
+ enum epf_ntb_bar bar;
+ struct device *dev;
+ u32 num_mws;
+ int i;
+
+ barno = BAR_0;
+ ntb_epc = ntb->epc[type];
+ num_mws = ntb->num_mws;
+ dev = &ntb->epf->dev;
+ epc_features = ntb_epc->epc_features;
+
+ /* These are required BARs which are mandatory for NTB functionality */
+ for (bar = BAR_CONFIG; bar <= BAR_DB_MW1; bar++, barno++) {
+ barno = pci_epc_get_next_free_bar(epc_features, barno);
+ if (barno < 0) {
+ dev_err(dev, "%s intf: Fail to get NTB function BAR\n",
+ pci_epc_interface_string(type));
+ return barno;
+ }
+ ntb_epc->epf_ntb_bar[bar] = barno;
+ }
+
+ /* These are optional BARs which don't impact NTB functionality */
+ for (bar = BAR_MW2, i = 1; i < num_mws; bar++, barno++, i++) {
+ barno = pci_epc_get_next_free_bar(epc_features, barno);
+ if (barno < 0) {
+ ntb->num_mws = i;
+ dev_dbg(dev, "BAR not available for > MW%d\n", i + 1);
+ }
+ ntb_epc->epf_ntb_bar[bar] = barno;
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_init_epc_bar() - Identify BARs to be used for each of the NTB
+ * constructs (scratchpad region, doorbell, memory window)
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * Wrapper to epf_ntb_init_epc_bar_interface() to identify the free BARs
+ * to be used for each of BAR_CONFIG, BAR_PEER_SPAD, BAR_DB_MW1, BAR_MW2,
+ * BAR_MW3 and BAR_MW4 for all the interfaces.
+ */
+static int epf_ntb_init_epc_bar(struct epf_ntb *ntb)
+{
+ enum pci_epc_interface_type type;
+ struct device *dev;
+ int ret;
+
+ dev = &ntb->epf->dev;
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++) {
+ ret = epf_ntb_init_epc_bar_interface(ntb, type);
+ if (ret) {
+ dev_err(dev, "Fail to init EPC bar for %s interface\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * epf_ntb_epc_init_interface() - Initialize NTB interface
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Wrapper to initialize a particular EPC interface and start the workqueue
+ * to check for commands from the host. This function writes to the
+ * EP controller hardware to configure it.
+ */
+static int epf_ntb_epc_init_interface(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ struct epf_ntb_epc *ntb_epc;
+ struct pci_epc *epc;
+ struct pci_epf *epf;
+ struct device *dev;
+ u8 func_no;
+ int ret;
+
+ ntb_epc = ntb->epc[type];
+ epf = ntb->epf;
+ dev = &epf->dev;
+ epc = ntb_epc->epc;
+ func_no = ntb_epc->func_no;
+
+ ret = epf_ntb_config_sspad_bar_set(ntb->epc[type]);
+ if (ret) {
+ dev_err(dev, "%s intf: Config/self SPAD BAR init failed\n",
+ pci_epc_interface_string(type));
+ return ret;
+ }
+
+ ret = epf_ntb_peer_spad_bar_set(ntb, type);
+ if (ret) {
+ dev_err(dev, "%s intf: Peer SPAD BAR init failed\n",
+ pci_epc_interface_string(type));
+ goto err_peer_spad_bar_init;
+ }
+
+ ret = epf_ntb_configure_interrupt(ntb, type);
+ if (ret) {
+ dev_err(dev, "%s intf: Interrupt configuration failed\n",
+ pci_epc_interface_string(type));
+ goto err_peer_spad_bar_init;
+ }
+
+ ret = epf_ntb_db_mw_bar_init(ntb, type);
+ if (ret) {
+ dev_err(dev, "%s intf: DB/MW BAR init failed\n",
+ pci_epc_interface_string(type));
+ goto err_db_mw_bar_init;
+ }
+
+ ret = pci_epc_write_header(epc, func_no, epf->header);
+ if (ret) {
+ dev_err(dev, "%s intf: Configuration header write failed\n",
+ pci_epc_interface_string(type));
+ goto err_write_header;
+ }
+
+ INIT_DELAYED_WORK(&ntb->epc[type]->cmd_handler, epf_ntb_cmd_handler);
+ queue_work(kpcintb_workqueue, &ntb->epc[type]->cmd_handler.work);
+
+ return 0;
+
+err_write_header:
+ epf_ntb_db_mw_bar_cleanup(ntb, type);
+
+err_db_mw_bar_init:
+ epf_ntb_peer_spad_bar_clear(ntb->epc[type]);
+
+err_peer_spad_bar_init:
+ epf_ntb_config_sspad_bar_clear(ntb->epc[type]);
+
+ return ret;
+}
+
+/**
+ * epf_ntb_epc_cleanup_interface() - Cleanup NTB interface
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Wrapper to clean up a particular NTB interface.
+ */
+static void epf_ntb_epc_cleanup_interface(struct epf_ntb *ntb,
+ enum pci_epc_interface_type type)
+{
+ struct epf_ntb_epc *ntb_epc;
+
+ if (type < 0)
+ return;
+
+ ntb_epc = ntb->epc[type];
+ cancel_delayed_work(&ntb_epc->cmd_handler);
+ epf_ntb_db_mw_bar_cleanup(ntb, type);
+ epf_ntb_peer_spad_bar_clear(ntb_epc);
+ epf_ntb_config_sspad_bar_clear(ntb_epc);
+}
+
+/**
+ * epf_ntb_epc_cleanup() - Cleanup all NTB interfaces
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * Wrapper to clean up all NTB interfaces.
+ */
+static void epf_ntb_epc_cleanup(struct epf_ntb *ntb)
+{
+ enum pci_epc_interface_type type;
+
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++)
+ epf_ntb_epc_cleanup_interface(ntb, type);
+}
+
+/**
+ * epf_ntb_epc_init() - Initialize all NTB interfaces
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * Wrapper to initialize all NTB interfaces and start the workqueue
+ * to check for commands from the host.
+ */
+static int epf_ntb_epc_init(struct epf_ntb *ntb)
+{
+ enum pci_epc_interface_type type;
+ struct device *dev;
+ int ret;
+
+ dev = &ntb->epf->dev;
+
+ for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++) {
+ ret = epf_ntb_epc_init_interface(ntb, type);
+ if (ret) {
+ dev_err(dev, "%s intf: Failed to initialize\n",
+ pci_epc_interface_string(type));
+ goto err_init_type;
+ }
+ }
+
+ return 0;
+
+err_init_type:
+ epf_ntb_epc_cleanup_interface(ntb, type - 1);
+
+ return ret;
+}
+
+/**
+ * epf_ntb_bind() - Initialize endpoint controller to provide NTB functionality
+ * @epf: NTB endpoint function device
+ *
+ * Initialize both the endpoint controllers associated with the NTB function
+ * device. Invoked when a primary interface or secondary interface is bound
+ * to an EPC device. This function will succeed only when an EPC is bound to
+ * both the interfaces.
+ */
+static int epf_ntb_bind(struct pci_epf *epf)
+{
+ struct epf_ntb *ntb = epf_get_drvdata(epf);
+ struct device *dev = &epf->dev;
+ int ret;
+
+ if (!epf->epc) {
+ dev_dbg(dev, "PRIMARY EPC interface not yet bound\n");
+ return 0;
+ }
+
+ if (!epf->sec_epc) {
+ dev_dbg(dev, "SECONDARY EPC interface not yet bound\n");
+ return 0;
+ }
+
+ ret = epf_ntb_epc_create(ntb);
+ if (ret) {
+ dev_err(dev, "Failed to create NTB EPC\n");
+ return ret;
+ }
+
+ ret = epf_ntb_init_epc_bar(ntb);
+ if (ret) {
+ dev_err(dev, "Failed to create NTB EPC\n");
+ goto err_bar_init;
+ }
+
+ ret = epf_ntb_config_spad_bar_alloc_interface(ntb);
+ if (ret) {
+ dev_err(dev, "Failed to allocate BAR memory\n");
+ goto err_bar_alloc;
+ }
+
+ ret = epf_ntb_epc_init(ntb);
+ if (ret) {
+ dev_err(dev, "Failed to initialize EPC\n");
+ goto err_bar_alloc;
+ }
+
+ epf_set_drvdata(epf, ntb);
+
+ return 0;
+
+err_bar_alloc:
+ epf_ntb_config_spad_bar_free(ntb);
+
+err_bar_init:
+ epf_ntb_epc_destroy(ntb);
+
+ return ret;
+}
+
+/**
+ * epf_ntb_unbind() - Clean up the initialization done by epf_ntb_bind()
+ * @epf: NTB endpoint function device
+ *
+ * Clean up the initialization done by epf_ntb_bind().
+ */
+static void epf_ntb_unbind(struct pci_epf *epf)
+{
+ struct epf_ntb *ntb = epf_get_drvdata(epf);
+
+ epf_ntb_epc_cleanup(ntb);
+ epf_ntb_config_spad_bar_free(ntb);
+ epf_ntb_epc_destroy(ntb);
+}
+
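+/*
+ * EPF_NTB_R()/EPF_NTB_W() generate the configfs show/store handlers for a
+ * u32 field of struct epf_ntb with the same name as the attribute, e.g.
+ * EPF_NTB_R(spad_count) defines epf_ntb_spad_count_show().
+ */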
+#define EPF_NTB_R(_name) \
+static ssize_t epf_ntb_##_name##_show(struct config_item *item, \
+ char *page) \
+{ \
+ struct config_group *group = to_config_group(item); \
+ struct epf_ntb *ntb = to_epf_ntb(group); \
+ \
+ return sprintf(page, "%d\n", ntb->_name); \
+}
+
+#define EPF_NTB_W(_name) \
+static ssize_t epf_ntb_##_name##_store(struct config_item *item, \
+ const char *page, size_t len) \
+{ \
+ struct config_group *group = to_config_group(item); \
+ struct epf_ntb *ntb = to_epf_ntb(group); \
+ u32 val; \
+ int ret; \
+ \
+ ret = kstrtou32(page, 0, &val); \
+ if (ret) \
+ return ret; \
+ \
+ ntb->_name = val; \
+ \
+ return len; \
+}
+
+#define EPF_NTB_MW_R(_name) \
+static ssize_t epf_ntb_##_name##_show(struct config_item *item, \
+ char *page) \
+{ \
+ struct config_group *group = to_config_group(item); \
+ struct epf_ntb *ntb = to_epf_ntb(group); \
+ int win_no; \
+ \
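+ /* The window index is derived from the attribute name, e.g. "mw1" -> 1 */ \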
+ if (sscanf(#_name, "mw%d", &win_no) != 1) \
+ return -EINVAL; \
+ \
+ return sprintf(page, "%lld\n", ntb->mws_size[win_no - 1]); \
+}
+
+#define EPF_NTB_MW_W(_name) \
+static ssize_t epf_ntb_##_name##_store(struct config_item *item, \
+ const char *page, size_t len) \
+{ \
+ struct config_group *group = to_config_group(item); \
+ struct epf_ntb *ntb = to_epf_ntb(group); \
+ struct device *dev = &ntb->epf->dev; \
+ int win_no; \
+ u64 val; \
+ int ret; \
+ \
+ ret = kstrtou64(page, 0, &val); \
+ if (ret) \
+ return ret; \
+ \
+ if (sscanf(#_name, "mw%d", &win_no) != 1) \
+ return -EINVAL; \
+ \
+ if (ntb->num_mws < win_no) { \
+ dev_err(dev, "Invalid num_nws: %d value\n", ntb->num_mws); \
+ return -EINVAL; \
+ } \
+ \
+ ntb->mws_size[win_no - 1] = val; \
+ \
+ return len; \
+}
+
+static ssize_t epf_ntb_num_mws_store(struct config_item *item,
+ const char *page, size_t len)
+{
+ struct config_group *group = to_config_group(item);
+ struct epf_ntb *ntb = to_epf_ntb(group);
+ u32 val;
+ int ret;
+
+ ret = kstrtou32(page, 0, &val);
+ if (ret)
+ return ret;
+
+ if (val > MAX_MW)
+ return -EINVAL;
+
+ ntb->num_mws = val;
+
+ return len;
+}
+
+EPF_NTB_R(spad_count)
+EPF_NTB_W(spad_count)
+EPF_NTB_R(db_count)
+EPF_NTB_W(db_count)
+EPF_NTB_R(num_mws)
+EPF_NTB_MW_R(mw1)
+EPF_NTB_MW_W(mw1)
+EPF_NTB_MW_R(mw2)
+EPF_NTB_MW_W(mw2)
+EPF_NTB_MW_R(mw3)
+EPF_NTB_MW_W(mw3)
+EPF_NTB_MW_R(mw4)
+EPF_NTB_MW_W(mw4)
+
+CONFIGFS_ATTR(epf_ntb_, spad_count);
+CONFIGFS_ATTR(epf_ntb_, db_count);
+CONFIGFS_ATTR(epf_ntb_, num_mws);
+CONFIGFS_ATTR(epf_ntb_, mw1);
+CONFIGFS_ATTR(epf_ntb_, mw2);
+CONFIGFS_ATTR(epf_ntb_, mw3);
+CONFIGFS_ATTR(epf_ntb_, mw4);
+
+static struct configfs_attribute *epf_ntb_attrs[] = {
+ &epf_ntb_attr_spad_count,
+ &epf_ntb_attr_db_count,
+ &epf_ntb_attr_num_mws,
+ &epf_ntb_attr_mw1,
+ &epf_ntb_attr_mw2,
+ &epf_ntb_attr_mw3,
+ &epf_ntb_attr_mw4,
+ NULL,
+};
+
+static const struct config_item_type ntb_group_type = {
+ .ct_attrs = epf_ntb_attrs,
+ .ct_owner = THIS_MODULE,
+};
+
+/**
+ * epf_ntb_add_cfs() - Add configfs directory specific to NTB
+ * @epf: NTB endpoint function device
+ * @group: the parent configfs group for the EPF device
+ *
+ * Add configfs directory specific to NTB. This directory will hold
+ * NTB-specific properties such as db_count, spad_count and num_mws.
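+ *
+ * With the standard endpoint configfs layout these attributes typically
+ * appear under a path such as (the exact directory names depend on how the
+ * function device was created):
+ *
+ *   /sys/kernel/config/pci_ep/functions/pci_epf_ntb/func1/pci_epf_ntb.0/db_count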
+ */
+static struct config_group *epf_ntb_add_cfs(struct pci_epf *epf,
+ struct config_group *group)
+{
+ struct epf_ntb *ntb = epf_get_drvdata(epf);
+ struct config_group *ntb_group = &ntb->group;
+ struct device *dev = &epf->dev;
+
+ config_group_init_type_name(ntb_group, dev_name(dev), &ntb_group_type);
+
+ return ntb_group;
+}
+
+/**
+ * epf_ntb_probe() - Probe NTB function driver
+ * @epf: NTB endpoint function device
+ *
+ * Probe NTB function driver when endpoint function bus detects an NTB
+ * endpoint function.
+ */
+static int epf_ntb_probe(struct pci_epf *epf)
+{
+ struct epf_ntb *ntb;
+ struct device *dev;
+
+ dev = &epf->dev;
+
+ ntb = devm_kzalloc(dev, sizeof(*ntb), GFP_KERNEL);
+ if (!ntb)
+ return -ENOMEM;
+
+ epf->header = &epf_ntb_header;
+ ntb->epf = epf;
+ epf_set_drvdata(epf, ntb);
+
+ return 0;
+}
+
+static struct pci_epf_ops epf_ntb_ops = {
+ .bind = epf_ntb_bind,
+ .unbind = epf_ntb_unbind,
+ .add_cfs = epf_ntb_add_cfs,
+};
+
+static const struct pci_epf_device_id epf_ntb_ids[] = {
+ {
+ .name = "pci_epf_ntb",
+ },
+ {},
+};
+
+static struct pci_epf_driver epf_ntb_driver = {
+ .driver.name = "pci_epf_ntb",
+ .probe = epf_ntb_probe,
+ .id_table = epf_ntb_ids,
+ .ops = &epf_ntb_ops,
+ .owner = THIS_MODULE,
+};
+
+static int __init epf_ntb_init(void)
+{
+ int ret;
+
+ kpcintb_workqueue = alloc_workqueue("kpcintb", WQ_MEM_RECLAIM |
+ WQ_HIGHPRI, 0);
+ if (!kpcintb_workqueue) {
+ pr_err("Failed to allocate the kpcintb work queue\n");
+ return -ENOMEM;
+ }
+
+ ret = pci_epf_register_driver(&epf_ntb_driver);
+ if (ret) {
+ destroy_workqueue(kpcintb_workqueue);
+ pr_err("Failed to register pci epf ntb driver --> %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+module_init(epf_ntb_init);
+
+static void __exit epf_ntb_exit(void)
+{
+ pci_epf_unregister_driver(&epf_ntb_driver);
+ destroy_workqueue(kpcintb_workqueue);
+}
+module_exit(epf_ntb_exit);
+
+MODULE_DESCRIPTION("PCI EPF NTB DRIVER");
+MODULE_AUTHOR("Kishon Vijay Abraham I <kishon@ti.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
index e4e51d884553..c0ac4e9cbe72 100644
--- a/drivers/pci/endpoint/functions/pci-epf-test.c
+++ b/drivers/pci/endpoint/functions/pci-epf-test.c
@@ -619,7 +619,8 @@ static void pci_epf_test_unbind(struct pci_epf *epf)
if (epf_test->reg[bar]) {
pci_epc_clear_bar(epc, epf->func_no, epf_bar);
- pci_epf_free_space(epf, epf_test->reg[bar], bar);
+ pci_epf_free_space(epf, epf_test->reg[bar], bar,
+ PRIMARY_INTERFACE);
}
}
}
@@ -651,7 +652,8 @@ static int pci_epf_test_set_bar(struct pci_epf *epf)
ret = pci_epc_set_bar(epc, epf->func_no, epf_bar);
if (ret) {
- pci_epf_free_space(epf, epf_test->reg[bar], bar);
+ pci_epf_free_space(epf, epf_test->reg[bar], bar,
+ PRIMARY_INTERFACE);
dev_err(dev, "Failed to set BAR%d\n", bar);
if (bar == test_reg_bar)
return ret;
@@ -771,7 +773,7 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
}
base = pci_epf_alloc_space(epf, test_reg_size, test_reg_bar,
- epc_features->align);
+ epc_features->align, PRIMARY_INTERFACE);
if (!base) {
dev_err(dev, "Failed to allocated register space\n");
return -ENOMEM;
@@ -789,7 +791,8 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
continue;
base = pci_epf_alloc_space(epf, bar_size[bar], bar,
- epc_features->align);
+ epc_features->align,
+ PRIMARY_INTERFACE);
if (!base)
dev_err(dev, "Failed to allocate space for BAR%d\n",
bar);
@@ -834,6 +837,8 @@ static int pci_epf_test_bind(struct pci_epf *epf)
linkup_notifier = epc_features->linkup_notifier;
core_init_notifier = epc_features->core_init_notifier;
test_reg_bar = pci_epc_get_first_free_bar(epc_features);
+ if (test_reg_bar < 0)
+ return -EINVAL;
pci_epf_configure_bar(epf, epc_features);
}
diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c
index 3710adf51912..f3a8b833b479 100644
--- a/drivers/pci/endpoint/pci-ep-cfs.c
+++ b/drivers/pci/endpoint/pci-ep-cfs.c
@@ -21,6 +21,9 @@ static struct config_group *controllers_group;
struct pci_epf_group {
struct config_group group;
+ struct config_group primary_epc_group;
+ struct config_group secondary_epc_group;
+ struct delayed_work cfs_work;
struct pci_epf *epf;
int index;
};
@@ -41,6 +44,127 @@ static inline struct pci_epc_group *to_pci_epc_group(struct config_item *item)
return container_of(to_config_group(item), struct pci_epc_group, group);
}
+static int pci_secondary_epc_epf_link(struct config_item *epf_item,
+ struct config_item *epc_item)
+{
+ int ret;
+ struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+ struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+ struct pci_epc *epc = epc_group->epc;
+ struct pci_epf *epf = epf_group->epf;
+
+ ret = pci_epc_add_epf(epc, epf, SECONDARY_INTERFACE);
+ if (ret)
+ return ret;
+
+ ret = pci_epf_bind(epf);
+ if (ret) {
+ pci_epc_remove_epf(epc, epf, SECONDARY_INTERFACE);
+ return ret;
+ }
+
+ return 0;
+}
+
+static void pci_secondary_epc_epf_unlink(struct config_item *epc_item,
+ struct config_item *epf_item)
+{
+ struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+ struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+ struct pci_epc *epc;
+ struct pci_epf *epf;
+
+ WARN_ON_ONCE(epc_group->start);
+
+ epc = epc_group->epc;
+ epf = epf_group->epf;
+ pci_epf_unbind(epf);
+ pci_epc_remove_epf(epc, epf, SECONDARY_INTERFACE);
+}
+
+static struct configfs_item_operations pci_secondary_epc_item_ops = {
+ .allow_link = pci_secondary_epc_epf_link,
+ .drop_link = pci_secondary_epc_epf_unlink,
+};
+
+static const struct config_item_type pci_secondary_epc_type = {
+ .ct_item_ops = &pci_secondary_epc_item_ops,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct config_group
+*pci_ep_cfs_add_secondary_group(struct pci_epf_group *epf_group)
+{
+ struct config_group *secondary_epc_group;
+
+ secondary_epc_group = &epf_group->secondary_epc_group;
+ config_group_init_type_name(secondary_epc_group, "secondary",
+ &pci_secondary_epc_type);
+ configfs_register_group(&epf_group->group, secondary_epc_group);
+
+ return secondary_epc_group;
+}
+
+static int pci_primary_epc_epf_link(struct config_item *epf_item,
+ struct config_item *epc_item)
+{
+ int ret;
+ struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+ struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+ struct pci_epc *epc = epc_group->epc;
+ struct pci_epf *epf = epf_group->epf;
+
+ ret = pci_epc_add_epf(epc, epf, PRIMARY_INTERFACE);
+ if (ret)
+ return ret;
+
+ ret = pci_epf_bind(epf);
+ if (ret) {
+ pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
+ return ret;
+ }
+
+ return 0;
+}
+
+static void pci_primary_epc_epf_unlink(struct config_item *epc_item,
+ struct config_item *epf_item)
+{
+ struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
+ struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
+ struct pci_epc *epc;
+ struct pci_epf *epf;
+
+ WARN_ON_ONCE(epc_group->start);
+
+ epc = epc_group->epc;
+ epf = epf_group->epf;
+ pci_epf_unbind(epf);
+ pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
+}
+
+static struct configfs_item_operations pci_primary_epc_item_ops = {
+ .allow_link = pci_primary_epc_epf_link,
+ .drop_link = pci_primary_epc_epf_unlink,
+};
+
+static const struct config_item_type pci_primary_epc_type = {
+ .ct_item_ops = &pci_primary_epc_item_ops,
+ .ct_owner = THIS_MODULE,
+};
+
+static struct config_group
+*pci_ep_cfs_add_primary_group(struct pci_epf_group *epf_group)
+{
+ struct config_group *primary_epc_group = &epf_group->primary_epc_group;
+
+ config_group_init_type_name(primary_epc_group, "primary",
+ &pci_primary_epc_type);
+ configfs_register_group(&epf_group->group, primary_epc_group);
+
+ return primary_epc_group;
+}
+
static ssize_t pci_epc_start_store(struct config_item *item, const char *page,
size_t len)
{
@@ -94,13 +218,13 @@ static int pci_epc_epf_link(struct config_item *epc_item,
struct pci_epc *epc = epc_group->epc;
struct pci_epf *epf = epf_group->epf;
- ret = pci_epc_add_epf(epc, epf);
+ ret = pci_epc_add_epf(epc, epf, PRIMARY_INTERFACE);
if (ret)
return ret;
ret = pci_epf_bind(epf);
if (ret) {
- pci_epc_remove_epf(epc, epf);
+ pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
return ret;
}
@@ -120,7 +244,7 @@ static void pci_epc_epf_unlink(struct config_item *epc_item,
epc = epc_group->epc;
epf = epf_group->epf;
pci_epf_unbind(epf);
- pci_epc_remove_epf(epc, epf);
+ pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
}
static struct configfs_item_operations pci_epc_item_ops = {
@@ -366,12 +490,53 @@ static struct configfs_item_operations pci_epf_ops = {
.release = pci_epf_release,
};
+static struct config_group *pci_epf_type_make(struct config_group *group,
+ const char *name)
+{
+ struct pci_epf_group *epf_group = to_pci_epf_group(&group->cg_item);
+ struct config_group *epf_type_group;
+
+ epf_type_group = pci_epf_type_add_cfs(epf_group->epf, group);
+ return epf_type_group;
+}
+
+static void pci_epf_type_drop(struct config_group *group,
+ struct config_item *item)
+{
+ config_item_put(item);
+}
+
+static struct configfs_group_operations pci_epf_type_group_ops = {
+ .make_group = &pci_epf_type_make,
+ .drop_item = &pci_epf_type_drop,
+};
+
static const struct config_item_type pci_epf_type = {
+ .ct_group_ops = &pci_epf_type_group_ops,
.ct_item_ops = &pci_epf_ops,
.ct_attrs = pci_epf_attrs,
.ct_owner = THIS_MODULE,
};
+static void pci_epf_cfs_work(struct work_struct *work)
+{
+ struct pci_epf_group *epf_group;
+ struct config_group *group;
+
+ epf_group = container_of(work, struct pci_epf_group, cfs_work.work);
+ group = pci_ep_cfs_add_primary_group(epf_group);
+ if (IS_ERR(group)) {
+ pr_err("failed to create 'primary' EPC interface\n");
+ return;
+ }
+
+ group = pci_ep_cfs_add_secondary_group(epf_group);
+ if (IS_ERR(group)) {
+ pr_err("failed to create 'secondary' EPC interface\n");
+ return;
+ }
+}
+
static struct config_group *pci_epf_make(struct config_group *group,
const char *name)
{
@@ -410,10 +575,15 @@ static struct config_group *pci_epf_make(struct config_group *group,
goto free_name;
}
+ epf->group = &epf_group->group;
epf_group->epf = epf;
kfree(epf_name);
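+ /*
+ * Adding the "primary" and "secondary" EPC subdirectories is deferred
+ * to a workqueue so that the EPF group itself is fully registered
+ * before child groups are created under it.
+ */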
+ INIT_DELAYED_WORK(&epf_group->cfs_work, pci_epf_cfs_work);
+ queue_delayed_work(system_wq, &epf_group->cfs_work,
+ msecs_to_jiffies(1));
+
return &epf_group->group;
free_name:
diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
index cadd3db0cbb0..cc8f9eb2b177 100644
--- a/drivers/pci/endpoint/pci-epc-core.c
+++ b/drivers/pci/endpoint/pci-epc-core.c
@@ -87,24 +87,50 @@ EXPORT_SYMBOL_GPL(pci_epc_get);
* pci_epc_get_first_free_bar() - helper to get first unreserved BAR
* @epc_features: pci_epc_features structure that holds the reserved bar bitmap
*
- * Invoke to get the first unreserved BAR that can be used for endpoint
+ * Invoke to get the first unreserved BAR that can be used by the endpoint
* function. Returns NO_BAR when no unreserved BAR is available.
*/
-unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features
- *epc_features)
+enum pci_barno
+pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features)
{
- int free_bar;
+ return pci_epc_get_next_free_bar(epc_features, BAR_0);
+}
+EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar);
+
+/**
+ * pci_epc_get_next_free_bar() - helper to get unreserved BAR starting from @bar
+ * @epc_features: pci_epc_features structure that holds the reserved bar bitmap
+ * @bar: the starting BAR number from where unreserved BAR should be searched
+ *
+ * Invoke to get the next unreserved BAR starting from @bar that can be used
+ * by the endpoint function. Returns NO_BAR when no unreserved BAR is
+ * available.
+ */
+enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
+ *epc_features, enum pci_barno bar)
+{
+ unsigned long free_bar;
if (!epc_features)
- return 0;
+ return BAR_0;
+
+ /* If 'bar - 1' is a 64-bit BAR, move to the next BAR */
+ if ((epc_features->bar_fixed_64bit << 1) & (1 << bar))
+ bar++;
+
+ /* Find if the reserved BAR is also a 64-bit BAR */
+ free_bar = epc_features->reserved_bar & epc_features->bar_fixed_64bit;
- free_bar = ffz(epc_features->reserved_bar);
+ /* Set the adjacent bit if the reserved BAR is also a 64-bit BAR */
+ free_bar <<= 1;
+ free_bar |= epc_features->reserved_bar;
+
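+ /*
+ * Illustration: with reserved_bar = BIT(BAR_0) and bar_fixed_64bit =
+ * BIT(BAR_0), free_bar is 0b011 at this point, so the search below
+ * skips BAR_0 and its 64-bit upper half and returns BAR_2.
+ */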
+ free_bar = find_next_zero_bit(&free_bar, 6, bar);
if (free_bar > 5)
- return 0;
+ return NO_BAR;
return free_bar;
}
-EXPORT_SYMBOL_GPL(pci_epc_get_first_free_bar);
+EXPORT_SYMBOL_GPL(pci_epc_get_next_free_bar);
/**
* pci_epc_get_features() - get the features supported by EPC
@@ -205,6 +231,47 @@ int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
EXPORT_SYMBOL_GPL(pci_epc_raise_irq);
/**
+ * pci_epc_map_msi_irq() - Map physical address to MSI address and return
+ * MSI data
+ * @epc: the EPC device which has the MSI capability
+ * @func_no: the physical endpoint function number in the EPC device
+ * @phys_addr: the physical address of the outbound region
+ * @interrupt_num: the MSI interrupt number
+ * @entry_size: size of the outbound address region for each interrupt
+ * @msi_data: the data that should be written in order to raise MSI interrupt
+ * with interrupt number as 'interrupt_num'
+ * @msi_addr_offset: Offset of MSI address from the aligned outbound address
+ * to which the MSI address is mapped
+ *
+ * Invoke to map physical address to MSI address and return MSI data. The
+ * physical address should be an address in the outbound region. This is
+ * required to implement doorbell functionality of NTB wherein EPC on either
+ * side of the interface (primary and secondary) can directly write to the
+ * physical address (in outbound region) of the other interface to ring
+ * doorbell.
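+ *
+ * For example, the NTB endpoint function driver uses this to implement
+ * doorbells: after the mapping is set up, a write of 'msi_data' to
+ * 'phys_addr + msi_addr_offset' raises MSI 'interrupt_num' on the host
+ * behind the mapping EPC.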
+ */
+int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, phys_addr_t phys_addr,
+ u8 interrupt_num, u32 entry_size, u32 *msi_data,
+ u32 *msi_addr_offset)
+{
+ int ret;
+
+ if (IS_ERR_OR_NULL(epc))
+ return -EINVAL;
+
+ if (!epc->ops->map_msi_irq)
+ return -EINVAL;
+
+ mutex_lock(&epc->lock);
+ ret = epc->ops->map_msi_irq(epc, func_no, phys_addr, interrupt_num,
+ entry_size, msi_data, msi_addr_offset);
+ mutex_unlock(&epc->lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(pci_epc_map_msi_irq);
+
+/**
* pci_epc_get_msi() - get the number of MSI interrupt numbers allocated
* @epc: the EPC device to which MSI interrupts was requested
* @func_no: the endpoint function number in the EPC device
@@ -467,21 +534,28 @@ EXPORT_SYMBOL_GPL(pci_epc_write_header);
* pci_epc_add_epf() - bind PCI endpoint function to an endpoint controller
* @epc: the EPC device to which the endpoint function should be added
* @epf: the endpoint function to be added
+ * @type: Identifies if the EPC is connected to the primary or secondary
+ * interface of the EPF
*
* A PCI endpoint device can have one or more functions. In the case of PCIe,
* the specification allows up to 8 PCIe endpoint functions. Invoke
* pci_epc_add_epf() to add a PCI endpoint function to an endpoint controller.
*/
-int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf)
+int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf,
+ enum pci_epc_interface_type type)
{
+ struct list_head *list;
u32 func_no;
int ret = 0;
- if (epf->epc)
+ if (IS_ERR_OR_NULL(epc))
+ return -EINVAL;
+
+ if (type == PRIMARY_INTERFACE && epf->epc)
return -EBUSY;
- if (IS_ERR(epc))
- return -EINVAL;
+ if (type == SECONDARY_INTERFACE && epf->sec_epc)
+ return -EBUSY;
mutex_lock(&epc->lock);
func_no = find_first_zero_bit(&epc->function_num_map,
@@ -498,11 +572,17 @@ int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf)
}
set_bit(func_no, &epc->function_num_map);
- epf->func_no = func_no;
- epf->epc = epc;
-
- list_add_tail(&epf->list, &epc->pci_epf);
+ if (type == PRIMARY_INTERFACE) {
+ epf->func_no = func_no;
+ epf->epc = epc;
+ list = &epf->list;
+ } else {
+ epf->sec_epc_func_no = func_no;
+ epf->sec_epc = epc;
+ list = &epf->sec_epc_list;
+ }
+ list_add_tail(list, &epc->pci_epf);
ret:
mutex_unlock(&epc->lock);
@@ -517,14 +597,26 @@ EXPORT_SYMBOL_GPL(pci_epc_add_epf);
*
* Invoke to remove PCI endpoint function from the endpoint controller.
*/
-void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf)
+void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
+ enum pci_epc_interface_type type)
{
+ struct list_head *list;
+ u32 func_no = 0;
+
if (!epc || IS_ERR(epc) || !epf)
return;
+ if (type == PRIMARY_INTERFACE) {
+ func_no = epf->func_no;
+ list = &epf->list;
+ } else {
+ func_no = epf->sec_epc_func_no;
+ list = &epf->sec_epc_list;
+ }
+
mutex_lock(&epc->lock);
- clear_bit(epf->func_no, &epc->function_num_map);
- list_del(&epf->list);
+ clear_bit(func_no, &epc->function_num_map);
+ list_del(list);
epf->epc = NULL;
mutex_unlock(&epc->lock);
}
diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c
index c977cf9dce56..7646c8660d42 100644
--- a/drivers/pci/endpoint/pci-epf-core.c
+++ b/drivers/pci/endpoint/pci-epf-core.c
@@ -21,6 +21,38 @@ static struct bus_type pci_epf_bus_type;
static const struct device_type pci_epf_type;
/**
+ * pci_epf_type_add_cfs() - Help function drivers to expose function-specific
+ * attributes in configfs
+ * @epf: the EPF device that has to be configured using configfs
+ * @group: the parent configfs group (corresponding to entries in
+ * pci_epf_device_id)
+ *
+ * Invoke to expose function-specific attributes in configfs. If the function
+ * driver does not have anything to expose (no user-configurable attributes),
+ * return NULL.
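+ *
+ * The NTB endpoint function driver's epf_ntb_add_cfs() is one implementation
+ * of the underlying ->add_cfs() callback.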
+ */
+struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
+ struct config_group *group)
+{
+ struct config_group *epf_type_group;
+
+ if (!epf->driver) {
+ dev_err(&epf->dev, "epf device not bound to driver\n");
+ return NULL;
+ }
+
+ if (!epf->driver->ops->add_cfs)
+ return NULL;
+
+ mutex_lock(&epf->lock);
+ epf_type_group = epf->driver->ops->add_cfs(epf, group);
+ mutex_unlock(&epf->lock);
+
+ return epf_type_group;
+}
+EXPORT_SYMBOL_GPL(pci_epf_type_add_cfs);
+
+/**
* pci_epf_unbind() - Notify the function driver that the binding between the
* EPF device and EPC device has been lost
* @epf: the EPF device which has lost the binding with the EPC device
@@ -74,24 +106,37 @@ EXPORT_SYMBOL_GPL(pci_epf_bind);
* @epf: the EPF device from whom to free the memory
* @addr: the virtual address of the PCI EPF register space
* @bar: the BAR number corresponding to the register space
+ * @type: Identifies if the allocated space is for primary EPC or secondary EPC
*
* Invoke to free the allocated PCI EPF register space.
*/
-void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar)
+void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
+ enum pci_epc_interface_type type)
{
struct device *dev = epf->epc->dev.parent;
+ struct pci_epf_bar *epf_bar;
+ struct pci_epc *epc;
if (!addr)
return;
- dma_free_coherent(dev, epf->bar[bar].size, addr,
- epf->bar[bar].phys_addr);
+ if (type == PRIMARY_INTERFACE) {
+ epc = epf->epc;
+ epf_bar = epf->bar;
+ } else {
+ epc = epf->sec_epc;
+ epf_bar = epf->sec_epc_bar;
+ }
- epf->bar[bar].phys_addr = 0;
- epf->bar[bar].addr = NULL;
- epf->bar[bar].size = 0;
- epf->bar[bar].barno = 0;
- epf->bar[bar].flags = 0;
+ dev = epc->dev.parent;
+ dma_free_coherent(dev, epf_bar[bar].size, addr,
+ epf_bar[bar].phys_addr);
+
+ epf_bar[bar].phys_addr = 0;
+ epf_bar[bar].addr = NULL;
+ epf_bar[bar].size = 0;
+ epf_bar[bar].barno = 0;
+ epf_bar[bar].flags = 0;
}
EXPORT_SYMBOL_GPL(pci_epf_free_space);
@@ -101,15 +146,18 @@ EXPORT_SYMBOL_GPL(pci_epf_free_space);
* @size: the size of the memory that has to be allocated
* @bar: the BAR number corresponding to the allocated register space
* @align: alignment size for the allocation region
+ * @type: Identifies if the allocation is for primary EPC or secondary EPC
*
* Invoke to allocate memory for the PCI EPF register space.
*/
void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
- size_t align)
+ size_t align, enum pci_epc_interface_type type)
{
- void *space;
- struct device *dev = epf->epc->dev.parent;
+ struct pci_epf_bar *epf_bar;
dma_addr_t phys_addr;
+ struct pci_epc *epc;
+ struct device *dev;
+ void *space;
if (size < 128)
size = 128;
@@ -119,17 +167,26 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
else
size = roundup_pow_of_two(size);
+ if (type == PRIMARY_INTERFACE) {
+ epc = epf->epc;
+ epf_bar = epf->bar;
+ } else {
+ epc = epf->sec_epc;
+ epf_bar = epf->sec_epc_bar;
+ }
+
+ dev = epc->dev.parent;
space = dma_alloc_coherent(dev, size, &phys_addr, GFP_KERNEL);
if (!space) {
dev_err(dev, "failed to allocate mem space\n");
return NULL;
}
- epf->bar[bar].phys_addr = phys_addr;
- epf->bar[bar].addr = space;
- epf->bar[bar].size = size;
- epf->bar[bar].barno = bar;
- epf->bar[bar].flags |= upper_32_bits(size) ?
+ epf_bar[bar].phys_addr = phys_addr;
+ epf_bar[bar].addr = space;
+ epf_bar[bar].size = size;
+ epf_bar[bar].barno = bar;
+ epf_bar[bar].flags |= upper_32_bits(size) ?
PCI_BASE_ADDRESS_MEM_TYPE_64 :
PCI_BASE_ADDRESS_MEM_TYPE_32;
@@ -282,22 +339,6 @@ struct pci_epf *pci_epf_create(const char *name)
}
EXPORT_SYMBOL_GPL(pci_epf_create);
-const struct pci_epf_device_id *
-pci_epf_match_device(const struct pci_epf_device_id *id, struct pci_epf *epf)
-{
- if (!id || !epf)
- return NULL;
-
- while (*id->name) {
- if (strcmp(epf->name, id->name) == 0)
- return id;
- id++;
- }
-
- return NULL;
-}
-EXPORT_SYMBOL_GPL(pci_epf_match_device);
-
static void pci_epf_dev_release(struct device *dev)
{
struct pci_epf *epf = to_pci_epf(dev);
diff --git a/drivers/pci/hotplug/acpiphp.h b/drivers/pci/hotplug/acpiphp.h
index a2094c07af6a..a74b274a8c45 100644
--- a/drivers/pci/hotplug/acpiphp.h
+++ b/drivers/pci/hotplug/acpiphp.h
@@ -176,9 +176,6 @@ int acpiphp_unregister_attention(struct acpiphp_attention_info *info);
int acpiphp_register_hotplug_slot(struct acpiphp_slot *slot, unsigned int sun);
void acpiphp_unregister_hotplug_slot(struct acpiphp_slot *slot);
-/* acpiphp_glue.c */
-typedef int (*acpiphp_callback)(struct acpiphp_slot *slot, void *data);
-
int acpiphp_enable_slot(struct acpiphp_slot *slot);
int acpiphp_disable_slot(struct acpiphp_slot *slot);
u8 acpiphp_get_power_status(struct acpiphp_slot *slot);
diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
index 139869d50eb2..fdaf86a888b7 100644
--- a/drivers/pci/pci-bridge-emul.c
+++ b/drivers/pci/pci-bridge-emul.c
@@ -21,8 +21,9 @@
#include "pci-bridge-emul.h"
#define PCI_BRIDGE_CONF_END PCI_STD_HEADER_SIZEOF
+#define PCI_CAP_PCIE_SIZEOF (PCI_EXP_SLTSTA2 + 2)
#define PCI_CAP_PCIE_START PCI_BRIDGE_CONF_END
-#define PCI_CAP_PCIE_END (PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2)
+#define PCI_CAP_PCIE_END (PCI_CAP_PCIE_START + PCI_CAP_PCIE_SIZEOF)
/**
* struct pci_bridge_reg_behavior - register bits behaviors
@@ -46,7 +47,8 @@ struct pci_bridge_reg_behavior {
u32 w1c;
};
-static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
+static const
+struct pci_bridge_reg_behavior pci_regs_behavior[PCI_STD_HEADER_SIZEOF / 4] = {
[PCI_VENDOR_ID / 4] = { .ro = ~0 },
[PCI_COMMAND / 4] = {
.rw = (PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
@@ -164,7 +166,8 @@ static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
},
};
-static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+static const
+struct pci_bridge_reg_behavior pcie_cap_regs_behavior[PCI_CAP_PCIE_SIZEOF / 4] = {
[PCI_CAP_LIST_ID / 4] = {
/*
* Capability ID, Next Capability Pointer and
@@ -260,6 +263,8 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
unsigned int flags)
{
+ BUILD_BUG_ON(sizeof(bridge->conf) != PCI_BRIDGE_CONF_END);
+
bridge->conf.class_revision |= cpu_to_le32(PCI_CLASS_BRIDGE_PCI << 16);
bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
bridge->conf.cache_line_size = 0x10;
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index b67c4327d307..16a17215f633 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -4030,6 +4030,10 @@ int pci_register_io_range(struct fwnode_handle *fwnode, phys_addr_t addr,
ret = logic_pio_register_range(range);
if (ret)
kfree(range);
+
+ /* Ignore duplicates due to deferred probing */
+ if (ret == -EEXIST)
+ ret = 0;
#endif
return ret;
diff --git a/drivers/pci/pcie/Kconfig b/drivers/pci/pcie/Kconfig
index 3946555a6042..45a2ef702b45 100644
--- a/drivers/pci/pcie/Kconfig
+++ b/drivers/pci/pcie/Kconfig
@@ -133,14 +133,6 @@ config PCIE_PTM
This is only useful if you have devices that support PTM, but it
is safe to enable even if you don't.
-config PCIE_BW
- bool "PCI Express Bandwidth Change Notification"
- depends on PCIEPORTBUS
- help
- This enables PCI Express Bandwidth Change Notification. If
- you know link width or rate changes occur only to correct
- unreliable links, you may answer Y.
-
config PCIE_EDR
bool "PCI Express Error Disconnect Recover support"
depends on PCIE_DPC && ACPI
diff --git a/drivers/pci/pcie/Makefile b/drivers/pci/pcie/Makefile
index d9697892fa3e..b2980db88cc0 100644
--- a/drivers/pci/pcie/Makefile
+++ b/drivers/pci/pcie/Makefile
@@ -12,5 +12,4 @@ obj-$(CONFIG_PCIEAER_INJECT) += aer_inject.o
obj-$(CONFIG_PCIE_PME) += pme.o
obj-$(CONFIG_PCIE_DPC) += dpc.o
obj-$(CONFIG_PCIE_PTM) += ptm.o
-obj-$(CONFIG_PCIE_BW) += bw_notification.o
obj-$(CONFIG_PCIE_EDR) += edr.o
diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
index 77b0f2c45bc0..ba22388342d1 100644
--- a/drivers/pci/pcie/aer.c
+++ b/drivers/pci/pcie/aer.c
@@ -1388,7 +1388,7 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
if (type == PCI_EXP_TYPE_RC_END)
root = dev->rcec;
else
- root = dev;
+ root = pcie_find_root_port(dev);
/*
* If the platform retained control of AER, an RCiEP may not have
@@ -1414,7 +1414,8 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
}
} else {
rc = pci_bus_error_reset(dev);
- pci_info(dev, "Root Port link has been reset (%d)\n", rc);
+ pci_info(dev, "%s Port link has been reset (%d)\n",
+ pci_is_root_bus(dev->bus) ? "Root" : "Downstream", rc);
}
if ((host->native_aer || pcie_ports_native) && aer) {
diff --git a/drivers/pci/pcie/bw_notification.c b/drivers/pci/pcie/bw_notification.c
deleted file mode 100644
index 565d23cccb8b..000000000000
--- a/drivers/pci/pcie/bw_notification.c
+++ /dev/null
@@ -1,138 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0+
-/*
- * PCI Express Link Bandwidth Notification services driver
- * Author: Alexandru Gagniuc <mr.nuke.me@gmail.com>
- *
- * Copyright (C) 2019, Dell Inc
- *
- * The PCIe Link Bandwidth Notification provides a way to notify the
- * operating system when the link width or data rate changes. This
- * capability is required for all root ports and downstream ports
- * supporting links wider than x1 and/or multiple link speeds.
- *
- * This service port driver hooks into the bandwidth notification interrupt
- * and warns when links become degraded in operation.
- */
-
-#define dev_fmt(fmt) "bw_notification: " fmt
-
-#include "../pci.h"
-#include "portdrv.h"
-
-static bool pcie_link_bandwidth_notification_supported(struct pci_dev *dev)
-{
- int ret;
- u32 lnk_cap;
-
- ret = pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnk_cap);
- return (ret == PCIBIOS_SUCCESSFUL) && (lnk_cap & PCI_EXP_LNKCAP_LBNC);
-}
-
-static void pcie_enable_link_bandwidth_notification(struct pci_dev *dev)
-{
- u16 lnk_ctl;
-
- pcie_capability_write_word(dev, PCI_EXP_LNKSTA, PCI_EXP_LNKSTA_LBMS);
-
- pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl);
- lnk_ctl |= PCI_EXP_LNKCTL_LBMIE;
- pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl);
-}
-
-static void pcie_disable_link_bandwidth_notification(struct pci_dev *dev)
-{
- u16 lnk_ctl;
-
- pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl);
- lnk_ctl &= ~PCI_EXP_LNKCTL_LBMIE;
- pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl);
-}
-
-static irqreturn_t pcie_bw_notification_irq(int irq, void *context)
-{
- struct pcie_device *srv = context;
- struct pci_dev *port = srv->port;
- u16 link_status, events;
- int ret;
-
- ret = pcie_capability_read_word(port, PCI_EXP_LNKSTA, &link_status);
- events = link_status & PCI_EXP_LNKSTA_LBMS;
-
- if (ret != PCIBIOS_SUCCESSFUL || !events)
- return IRQ_NONE;
-
- pcie_capability_write_word(port, PCI_EXP_LNKSTA, events);
- pcie_update_link_speed(port->subordinate, link_status);
- return IRQ_WAKE_THREAD;
-}
-
-static irqreturn_t pcie_bw_notification_handler(int irq, void *context)
-{
- struct pcie_device *srv = context;
- struct pci_dev *port = srv->port;
- struct pci_dev *dev;
-
- /*
- * Print status from downstream devices, not this root port or
- * downstream switch port.
- */
- down_read(&pci_bus_sem);
- list_for_each_entry(dev, &port->subordinate->devices, bus_list)
- pcie_report_downtraining(dev);
- up_read(&pci_bus_sem);
-
- return IRQ_HANDLED;
-}
-
-static int pcie_bandwidth_notification_probe(struct pcie_device *srv)
-{
- int ret;
-
- /* Single-width or single-speed ports do not have to support this. */
- if (!pcie_link_bandwidth_notification_supported(srv->port))
- return -ENODEV;
-
- ret = request_threaded_irq(srv->irq, pcie_bw_notification_irq,
- pcie_bw_notification_handler,
- IRQF_SHARED, "PCIe BW notif", srv);
- if (ret)
- return ret;
-
- pcie_enable_link_bandwidth_notification(srv->port);
- pci_info(srv->port, "enabled with IRQ %d\n", srv->irq);
-
- return 0;
-}
-
-static void pcie_bandwidth_notification_remove(struct pcie_device *srv)
-{
- pcie_disable_link_bandwidth_notification(srv->port);
- free_irq(srv->irq, srv);
-}
-
-static int pcie_bandwidth_notification_suspend(struct pcie_device *srv)
-{
- pcie_disable_link_bandwidth_notification(srv->port);
- return 0;
-}
-
-static int pcie_bandwidth_notification_resume(struct pcie_device *srv)
-{
- pcie_enable_link_bandwidth_notification(srv->port);
- return 0;
-}
-
-static struct pcie_port_service_driver pcie_bandwidth_notification_driver = {
- .name = "pcie_bw_notification",
- .port_type = PCIE_ANY_PORT,
- .service = PCIE_PORT_SERVICE_BWNOTIF,
- .probe = pcie_bandwidth_notification_probe,
- .suspend = pcie_bandwidth_notification_suspend,
- .resume = pcie_bandwidth_notification_resume,
- .remove = pcie_bandwidth_notification_remove,
-};
-
-int __init pcie_bandwidth_notification_init(void)
-{
- return pcie_port_service_register(&pcie_bandwidth_notification_driver);
-}
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
index 510f31f0ef6d..b576aa890c76 100644
--- a/drivers/pci/pcie/err.c
+++ b/drivers/pci/pcie/err.c
@@ -198,8 +198,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
pci_dbg(bridge, "broadcast error_detected message\n");
if (state == pci_channel_io_frozen) {
pci_walk_bridge(bridge, report_frozen_detected, &status);
- status = reset_subordinates(bridge);
- if (status != PCI_ERS_RESULT_RECOVERED) {
+ if (reset_subordinates(bridge) != PCI_ERS_RESULT_RECOVERED) {
pci_warn(bridge, "subordinate device reset failed\n");
goto failed;
}
@@ -231,15 +230,14 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
pci_walk_bridge(bridge, report_resume, &status);
/*
- * If we have native control of AER, clear error status in the Root
- * Port or Downstream Port that signaled the error. If the
- * platform retained control of AER, it is responsible for clearing
- * this status. In that case, the signaling device may not even be
- * visible to the OS.
+ * If we have native control of AER, clear error status in the device
+ * that detected the error. If the platform retained control of AER,
+ * it is responsible for clearing this status. In that case, the
+ * signaling device may not even be visible to the OS.
*/
if (host->native_aer || pcie_ports_native) {
- pcie_clear_device_status(bridge);
- pci_aer_clear_nonfatal_status(bridge);
+ pcie_clear_device_status(dev);
+ pci_aer_clear_nonfatal_status(dev);
}
pci_info(bridge, "device recovery successful\n");
return status;
diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h
index af7cf237432a..2ff5724b8f13 100644
--- a/drivers/pci/pcie/portdrv.h
+++ b/drivers/pci/pcie/portdrv.h
@@ -53,12 +53,6 @@ int pcie_dpc_init(void);
static inline int pcie_dpc_init(void) { return 0; }
#endif
-#ifdef CONFIG_PCIE_BW
-int pcie_bandwidth_notification_init(void);
-#else
-static inline int pcie_bandwidth_notification_init(void) { return 0; }
-#endif
-
/* Port Type */
#define PCIE_ANY_PORT (~0)
diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c
index 0b250bc5f405..c7ff1eea225a 100644
--- a/drivers/pci/pcie/portdrv_pci.c
+++ b/drivers/pci/pcie/portdrv_pci.c
@@ -153,7 +153,8 @@ static void pcie_portdrv_remove(struct pci_dev *dev)
static pci_ers_result_t pcie_portdrv_error_detected(struct pci_dev *dev,
pci_channel_state_t error)
{
- /* Root Port has no impact. Always recovers. */
+ if (error == pci_channel_io_frozen)
+ return PCI_ERS_RESULT_NEED_RESET;
return PCI_ERS_RESULT_CAN_RECOVER;
}
@@ -255,7 +256,6 @@ static void __init pcie_init_services(void)
pcie_pme_init();
pcie_dpc_init();
pcie_hp_init();
- pcie_bandwidth_notification_init();
}
static int __init pcie_portdrv_init(void)
diff --git a/drivers/pci/search.c b/drivers/pci/search.c
index 2061672954ee..b4c138a6ec02 100644
--- a/drivers/pci/search.c
+++ b/drivers/pci/search.c
@@ -168,7 +168,6 @@ struct pci_bus *pci_find_next_bus(const struct pci_bus *from)
struct list_head *n;
struct pci_bus *b = NULL;
- WARN_ON(in_interrupt());
down_read(&pci_bus_sem);
n = from ? from->node.next : pci_root_buses.next;
if (n != &pci_root_buses)
@@ -196,7 +195,6 @@ struct pci_dev *pci_get_slot(struct pci_bus *bus, unsigned int devfn)
{
struct pci_dev *dev;
- WARN_ON(in_interrupt());
down_read(&pci_bus_sem);
list_for_each_entry(dev, &bus->devices, bus_list) {
@@ -274,7 +272,6 @@ static struct pci_dev *pci_get_dev_by_id(const struct pci_device_id *id,
struct device *dev_start = NULL;
struct pci_dev *pdev = NULL;
- WARN_ON(in_interrupt());
if (from)
dev_start = &from->dev;
dev = bus_find_device(&pci_bus_type, dev_start, (void *)id,
@@ -381,7 +378,6 @@ int pci_dev_present(const struct pci_device_id *ids)
{
struct pci_dev *found = NULL;
- WARN_ON(in_interrupt());
while (ids->vendor || ids->subvendor || ids->class_mask) {
found = pci_get_dev_by_id(ids, NULL);
if (found) {
diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c
index 43eda101fcf4..7f1acb3918d0 100644
--- a/drivers/pci/setup-res.c
+++ b/drivers/pci/setup-res.c
@@ -410,10 +410,16 @@ EXPORT_SYMBOL(pci_release_resource);
int pci_resize_resource(struct pci_dev *dev, int resno, int size)
{
struct resource *res = dev->resource + resno;
+ struct pci_host_bridge *host;
int old, ret;
u32 sizes;
u16 cmd;
+ /* Check if we must preserve the firmware's resource assignment */
+ host = pci_find_host_bridge(dev->bus);
+ if (host->preserve_config)
+ return -ENOTSUPP;
+
/* Make sure the resource isn't assigned before resizing it. */
if (!(res->flags & IORESOURCE_UNSET))
return -EBUSY;
diff --git a/drivers/pci/syscall.c b/drivers/pci/syscall.c
index 31e39558d49d..8b003c890b87 100644
--- a/drivers/pci/syscall.c
+++ b/drivers/pci/syscall.c
@@ -20,7 +20,7 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
u16 word;
u32 dword;
long err;
- long cfg_ret;
+ int cfg_ret;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
@@ -46,7 +46,7 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
}
err = -EIO;
- if (cfg_ret != PCIBIOS_SUCCESSFUL)
+ if (cfg_ret)
goto error;
switch (len) {
@@ -105,7 +105,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
if (err)
break;
err = pci_user_write_config_byte(dev, off, byte);
- if (err != PCIBIOS_SUCCESSFUL)
+ if (err)
err = -EIO;
break;
@@ -114,7 +114,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
if (err)
break;
err = pci_user_write_config_word(dev, off, word);
- if (err != PCIBIOS_SUCCESSFUL)
+ if (err)
err = -EIO;
break;
@@ -123,7 +123,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn,
if (err)
break;
err = pci_user_write_config_dword(dev, off, dword);
- if (err != PCIBIOS_SUCCESSFUL)
+ if (err)
err = -EIO;
break;