author     Linus Torvalds <torvalds@linux-foundation.org>   2018-04-06 18:31:06 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2018-04-06 18:31:06 -0700
commit     3c0d551e02b2590fa71a1354f2f1994551a33315 (patch)
tree       da94dc3559fe0c63fcc13852b53ba3d3b08d5292 /drivers/pci
parent     19fd08b85bc7e0502b55cd726f466df82ee7e777 (diff)
parent     5f764419098671cfffcfc44f8a5220afd3e37864 (diff)
download   linux-3c0d551e02b2590fa71a1354f2f1994551a33315.tar.bz2
Merge tag 'pci-v4.17-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
- move pci_uevent_ers() out of pci.h (Michael Ellerman)
- skip ASPM common clock warning if BIOS already configured it (Sinan
Kaya)
- fix ASPM Coverity warning about threshold_ns (Gustavo A. R. Silva)
- remove last user of pci_get_bus_and_slot() and the function itself
(Sinan Kaya)
- add decoding for 16 GT/s link speed (Jay Fang)
- add interfaces to get max link speed and width (Tal Gilboa)
- add pcie_bandwidth_capable() to compute max supported link bandwidth
(Tal Gilboa)
- add pcie_bandwidth_available() to compute bandwidth available to
device (Tal Gilboa)
- add pcie_print_link_status() to log link speed and whether it's
  limited (Tal Gilboa); a usage sketch follows this list
- use PCI core interfaces to report when device performance may be
limited by its slot instead of doing it in each driver (Tal Gilboa)
- fix possible cpqphp NULL pointer dereference (Shawn Lin)
- rescan more of the hierarchy on ACPI hotplug to fix Thunderbolt/xHCI
hotplug (Mika Westerberg)
- add support for PCI I/O port space that's neither directly accessible
via CPU in/out instructions nor directly mapped into CPU physical
memory space. This is fairly intrusive and includes minor changes to
interfaces used for I/O space on most platforms (Zhichang Yuan, John
Garry)
- add support for HiSilicon Hip06/Hip07 LPC I/O space (Zhichang Yuan,
John Garry)
- use PCI_EXP_DEVCTL2_COMP_TIMEOUT in rapidio/tsi721 (Bjorn Helgaas)
- remove possible NULL pointer dereference in of_pci_bus_find_domain_nr()
(Shawn Lin)
- report quirk timings with dev_info (Bjorn Helgaas)
- report quirks that take longer than 10ms (Bjorn Helgaas)
- add and use Altera Vendor ID (Johannes Thumshirn)
- tidy Makefiles and comments (Bjorn Helgaas)
- don't set up INTx if MSI or MSI-X is enabled to align cris, frv,
ia64, and mn10300 with x86 (Bjorn Helgaas)
- move pcieport_if.h to drivers/pci/pcie/ to encapsulate it (Frederick
Lawler)
- merge pcieport_if.h into portdrv.h (Bjorn Helgaas)
- move workaround for BIOS PME issue from portdrv to PCI core (Bjorn
Helgaas)
- completely disable portdrv with "pcie_ports=compat" (Bjorn Helgaas)
- remove portdrv link order dependency (Bjorn Helgaas)
- remove support for unused VC portdrv service (Bjorn Helgaas)
- simplify portdrv feature permission checking (Bjorn Helgaas)
- remove "pcie_hp=nomsi" parameter (use "pci=nomsi" instead) (Bjorn
Helgaas)
- remove unnecessary "pcie_ports=auto" parameter (Bjorn Helgaas)
- use cached AER capability offset (Frederick Lawler)
- don't enable DPC if BIOS hasn't granted AER control (Mika Westerberg)
- rename pcie-dpc.c to dpc.c (Bjorn Helgaas)
- use generic pci_mmap_resource_range() instead of powerpc and xtensa
arch-specific versions (David Woodhouse)
- support arbitrary PCI host bridge offsets on sparc (Yinghai Lu)
- remove System and Video ROM reservations on sparc (Bjorn Helgaas)
- probe for device reset support during enumeration instead of runtime
(Bjorn Helgaas)
- add ACS quirk for Ampere (née APM) root ports (Feng Kan)
- add function 1 DMA alias quirk for Marvell 88SE9220 (Thomas
Vincent-Cross)
- protect device restore with device lock (Sinan Kaya)
- handle failure of FLR gracefully (Sinan Kaya)
- handle CRS (config retry status) after device resets (Sinan Kaya)
- skip various config reads for SR-IOV VFs as an optimization
(KarimAllah Ahmed)
- consolidate VPD code in vpd.c (Bjorn Helgaas)
- add Tegra dependency on PCI_MSI_IRQ_DOMAIN (Arnd Bergmann)
- add DT support for R-Car r8a7743 (Biju Das)
- fix a PCI_EJECT vs PCI_BUS_RELATIONS race condition in Hyper-V host
bridge driver that causes a general protection fault (Dexuan Cui)
- fix Hyper-V host bridge hang in MSI setup on 1-vCPU VMs with SR-IOV
(Dexuan Cui)
- fix Hyper-V host bridge hang when ejecting a VF before setting up MSI
(Dexuan Cui)
- make several structures static (Fengguang Wu)
- increase number of MSI IRQs supported by Synopsys DesignWare bridges
from 32 to 256 (Gustavo Pimentel)
- implement multiplexed IRQ domain API and remove obsolete MSI IRQ
  API from DesignWare drivers (Gustavo Pimentel)
- add Tegra power management support (Manikanta Maddireddy)
- add Tegra loadable module support (Manikanta Maddireddy)
- handle 64-bit BARs correctly in endpoint support (Niklas Cassel)
- support optional regulator for HiSilicon STB (Shawn Guo)
- use regulator bulk API for Qualcomm apq8064 (Srinivas Kandagatla)
- support power supplies for Qualcomm msm8996 (Srinivas Kandagatla)
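The pcie_bandwidth_available() and pcie_print_link_status() interfaces
mentioned above are intended to be called from a device driver's probe path
(fm10k, mlx4 and mlx5 are converted to them later in this series). A minimal
sketch of that usage, under the stated assumptions, is below: the driver name,
probe function and the 8 Gb/s threshold are purely illustrative; only the two
pcie_*() calls come from this merge.

/*
 * Illustrative probe fragment (not part of this merge).  Only
 * pcie_print_link_status() and pcie_bandwidth_available() are the
 * interfaces added by this series; my_driver_probe() and the
 * 8 Gb/s threshold are hypothetical.
 */
#include <linux/pci.h>

static int my_driver_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	enum pci_bus_speed speed;
	enum pcie_link_width width;
	u32 bw_avail;	/* in Mb/s */

	/* Log the negotiated link and note if something upstream limits it */
	pcie_print_link_status(pdev);

	/* Or compute the bandwidth available to the device explicitly */
	bw_avail = pcie_bandwidth_available(pdev, NULL, &speed, &width);
	if (bw_avail < 8 * 1000)	/* illustrative threshold */
		dev_info(&pdev->dev,
			 "only %u Mb/s of PCIe bandwidth available\n",
			 bw_avail);

	return 0;
}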
* tag 'pci-v4.17-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (123 commits)
MAINTAINERS: Add John Garry as maintainer for HiSilicon LPC driver
HISI LPC: Add ACPI support
ACPI / scan: Do not enumerate Indirect IO host children
ACPI / scan: Rename acpi_is_serial_bus_slave() for more general use
HISI LPC: Support the LPC host on Hip06/Hip07 with DT bindings
of: Add missing I/O range exception for indirect-IO devices
PCI: Apply the new generic I/O management on PCI IO hosts
PCI: Add fwnode handler as input param of pci_register_io_range()
PCI: Remove __weak tag from pci_register_io_range()
MAINTAINERS: Add missing /drivers/pci/cadence directory entry
fm10k: Report PCIe link properties with pcie_print_link_status()
net/mlx5e: Use pcie_bandwidth_available() to compute bandwidth
net/mlx5: Report PCIe link properties with pcie_print_link_status()
net/mlx4_core: Report PCIe link properties with pcie_print_link_status()
PCI: Add pcie_print_link_status() to log link speed and whether it's limited
PCI: Add pcie_bandwidth_available() to compute bandwidth available to device
misc: pci_endpoint_test: Handle 64-bit BARs properly
PCI: designware-ep: Make dw_pcie_ep_reset_bar() handle 64-bit BARs properly
PCI: endpoint: Make sure that BAR_5 does not have 64-bit flag set when clearing
PCI: endpoint: Make epc->ops->clear_bar()/pci_epc_clear_bar() take struct *epf_bar
...
Diffstat (limited to 'drivers/pci')
78 files changed, 2134 insertions, 1742 deletions
diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile index 941970936840..952addc7bacf 100644 --- a/drivers/pci/Makefile +++ b/drivers/pci/Makefile @@ -1,61 +1,40 @@ # SPDX-License-Identifier: GPL-2.0 # # Makefile for the PCI bus specific drivers. -# -obj-$(CONFIG_PCI) += access.o bus.o probe.o host-bridge.o remove.o pci.o \ - pci-driver.o search.o pci-sysfs.o rom.o setup-res.o \ - irq.o vpd.o setup-bus.o vc.o mmap.o setup-irq.o +obj-$(CONFIG_PCI) += access.o bus.o probe.o host-bridge.o \ + remove.o pci.o pci-driver.o search.o \ + pci-sysfs.o rom.o setup-res.o irq.o vpd.o \ + setup-bus.o vc.o mmap.o setup-irq.o ifdef CONFIG_PCI -obj-$(CONFIG_PROC_FS) += proc.o -obj-$(CONFIG_SYSFS) += slot.o -obj-$(CONFIG_OF) += of.o +obj-$(CONFIG_PROC_FS) += proc.o +obj-$(CONFIG_SYSFS) += slot.o +obj-$(CONFIG_OF) += of.o endif -obj-$(CONFIG_PCI_QUIRKS) += quirks.o - -# Build PCI Express stuff if needed -obj-$(CONFIG_PCIEPORTBUS) += pcie/ - -# Build the PCI Hotplug drivers if we were asked to -obj-$(CONFIG_HOTPLUG_PCI) += hotplug/ - -# Build the PCI MSI interrupt support -obj-$(CONFIG_PCI_MSI) += msi.o - -obj-$(CONFIG_PCI_ATS) += ats.o -obj-$(CONFIG_PCI_IOV) += iov.o - -# -# ACPI Related PCI FW Functions -# ACPI _DSM provided firmware instance and string name -# -obj-$(CONFIG_ACPI) += pci-acpi.o - -# SMBIOS provided firmware instance and labels -obj-$(CONFIG_PCI_LABEL) += pci-label.o - -# Intel MID platform PM support -obj-$(CONFIG_X86_INTEL_MID) += pci-mid.o - -obj-$(CONFIG_PCI_SYSCALL) += syscall.o - -obj-$(CONFIG_PCI_STUB) += pci-stub.o - -obj-$(CONFIG_PCI_ECAM) += ecam.o - +obj-$(CONFIG_PCI_QUIRKS) += quirks.o +obj-$(CONFIG_PCIEPORTBUS) += pcie/ +obj-$(CONFIG_HOTPLUG_PCI) += hotplug/ +obj-$(CONFIG_PCI_MSI) += msi.o +obj-$(CONFIG_PCI_ATS) += ats.o +obj-$(CONFIG_PCI_IOV) += iov.o +obj-$(CONFIG_ACPI) += pci-acpi.o +obj-$(CONFIG_PCI_LABEL) += pci-label.o +obj-$(CONFIG_X86_INTEL_MID) += pci-mid.o +obj-$(CONFIG_PCI_SYSCALL) += syscall.o +obj-$(CONFIG_PCI_STUB) += pci-stub.o +obj-$(CONFIG_PCI_ECAM) += ecam.o obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o -ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG - -# PCI host controller drivers -obj-y += host/ -obj-y += switch/ +obj-y += host/ +obj-y += switch/ +# Endpoint library must be initialized before its users obj-$(CONFIG_PCI_ENDPOINT) += endpoint/ -# Endpoint library must be initialized before its users obj-$(CONFIG_PCIE_CADENCE) += cadence/ # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW obj-y += dwc/ + +ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG diff --git a/drivers/pci/access.c b/drivers/pci/access.c index 5e9a9822d9d4..a3ad2fe185b9 100644 --- a/drivers/pci/access.c +++ b/drivers/pci/access.c @@ -1,8 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 -#include <linux/delay.h> #include <linux/pci.h> #include <linux/module.h> -#include <linux/sched/signal.h> #include <linux/slab.h> #include <linux/ioport.h> #include <linux/wait.h> @@ -17,9 +15,9 @@ DEFINE_RAW_SPINLOCK(pci_lock); /* - * Wrappers for all PCI configuration access functions. They just check - * alignment, do locking and call the low-level functions pointed to - * by pci_dev->ops. + * Wrappers for all PCI configuration access functions. They just check + * alignment, do locking and call the low-level functions pointed to + * by pci_dev->ops. 
*/ #define PCI_byte_BAD 0 @@ -264,372 +262,6 @@ PCI_USER_WRITE_CONFIG(byte, u8) PCI_USER_WRITE_CONFIG(word, u16) PCI_USER_WRITE_CONFIG(dword, u32) -/* VPD access through PCI 2.2+ VPD capability */ - -/** - * pci_read_vpd - Read one entry from Vital Product Data - * @dev: pci device struct - * @pos: offset in vpd space - * @count: number of bytes to read - * @buf: pointer to where to store result - */ -ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf) -{ - if (!dev->vpd || !dev->vpd->ops) - return -ENODEV; - return dev->vpd->ops->read(dev, pos, count, buf); -} -EXPORT_SYMBOL(pci_read_vpd); - -/** - * pci_write_vpd - Write entry to Vital Product Data - * @dev: pci device struct - * @pos: offset in vpd space - * @count: number of bytes to write - * @buf: buffer containing write data - */ -ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf) -{ - if (!dev->vpd || !dev->vpd->ops) - return -ENODEV; - return dev->vpd->ops->write(dev, pos, count, buf); -} -EXPORT_SYMBOL(pci_write_vpd); - -/** - * pci_set_vpd_size - Set size of Vital Product Data space - * @dev: pci device struct - * @len: size of vpd space - */ -int pci_set_vpd_size(struct pci_dev *dev, size_t len) -{ - if (!dev->vpd || !dev->vpd->ops) - return -ENODEV; - return dev->vpd->ops->set_size(dev, len); -} -EXPORT_SYMBOL(pci_set_vpd_size); - -#define PCI_VPD_MAX_SIZE (PCI_VPD_ADDR_MASK + 1) - -/** - * pci_vpd_size - determine actual size of Vital Product Data - * @dev: pci device struct - * @old_size: current assumed size, also maximum allowed size - */ -static size_t pci_vpd_size(struct pci_dev *dev, size_t old_size) -{ - size_t off = 0; - unsigned char header[1+2]; /* 1 byte tag, 2 bytes length */ - - while (off < old_size && - pci_read_vpd(dev, off, 1, header) == 1) { - unsigned char tag; - - if (header[0] & PCI_VPD_LRDT) { - /* Large Resource Data Type Tag */ - tag = pci_vpd_lrdt_tag(header); - /* Only read length from known tag items */ - if ((tag == PCI_VPD_LTIN_ID_STRING) || - (tag == PCI_VPD_LTIN_RO_DATA) || - (tag == PCI_VPD_LTIN_RW_DATA)) { - if (pci_read_vpd(dev, off+1, 2, - &header[1]) != 2) { - pci_warn(dev, "invalid large VPD tag %02x size at offset %zu", - tag, off + 1); - return 0; - } - off += PCI_VPD_LRDT_TAG_SIZE + - pci_vpd_lrdt_size(header); - } - } else { - /* Short Resource Data Type Tag */ - off += PCI_VPD_SRDT_TAG_SIZE + - pci_vpd_srdt_size(header); - tag = pci_vpd_srdt_tag(header); - } - - if (tag == PCI_VPD_STIN_END) /* End tag descriptor */ - return off; - - if ((tag != PCI_VPD_LTIN_ID_STRING) && - (tag != PCI_VPD_LTIN_RO_DATA) && - (tag != PCI_VPD_LTIN_RW_DATA)) { - pci_warn(dev, "invalid %s VPD tag %02x at offset %zu", - (header[0] & PCI_VPD_LRDT) ? "large" : "short", - tag, off); - return 0; - } - } - return 0; -} - -/* - * Wait for last operation to complete. - * This code has to spin since there is no other notification from the PCI - * hardware. Since the VPD is often implemented by serial attachment to an - * EEPROM, it may take many milliseconds to complete. - * - * Returns 0 on success, negative values indicate error. 
- */ -static int pci_vpd_wait(struct pci_dev *dev) -{ - struct pci_vpd *vpd = dev->vpd; - unsigned long timeout = jiffies + msecs_to_jiffies(125); - unsigned long max_sleep = 16; - u16 status; - int ret; - - if (!vpd->busy) - return 0; - - while (time_before(jiffies, timeout)) { - ret = pci_user_read_config_word(dev, vpd->cap + PCI_VPD_ADDR, - &status); - if (ret < 0) - return ret; - - if ((status & PCI_VPD_ADDR_F) == vpd->flag) { - vpd->busy = 0; - return 0; - } - - if (fatal_signal_pending(current)) - return -EINTR; - - usleep_range(10, max_sleep); - if (max_sleep < 1024) - max_sleep *= 2; - } - - pci_warn(dev, "VPD access failed. This is likely a firmware bug on this device. Contact the card vendor for a firmware update\n"); - return -ETIMEDOUT; -} - -static ssize_t pci_vpd_read(struct pci_dev *dev, loff_t pos, size_t count, - void *arg) -{ - struct pci_vpd *vpd = dev->vpd; - int ret; - loff_t end = pos + count; - u8 *buf = arg; - - if (pos < 0) - return -EINVAL; - - if (!vpd->valid) { - vpd->valid = 1; - vpd->len = pci_vpd_size(dev, vpd->len); - } - - if (vpd->len == 0) - return -EIO; - - if (pos > vpd->len) - return 0; - - if (end > vpd->len) { - end = vpd->len; - count = end - pos; - } - - if (mutex_lock_killable(&vpd->lock)) - return -EINTR; - - ret = pci_vpd_wait(dev); - if (ret < 0) - goto out; - - while (pos < end) { - u32 val; - unsigned int i, skip; - - ret = pci_user_write_config_word(dev, vpd->cap + PCI_VPD_ADDR, - pos & ~3); - if (ret < 0) - break; - vpd->busy = 1; - vpd->flag = PCI_VPD_ADDR_F; - ret = pci_vpd_wait(dev); - if (ret < 0) - break; - - ret = pci_user_read_config_dword(dev, vpd->cap + PCI_VPD_DATA, &val); - if (ret < 0) - break; - - skip = pos & 3; - for (i = 0; i < sizeof(u32); i++) { - if (i >= skip) { - *buf++ = val; - if (++pos == end) - break; - } - val >>= 8; - } - } -out: - mutex_unlock(&vpd->lock); - return ret ? ret : count; -} - -static ssize_t pci_vpd_write(struct pci_dev *dev, loff_t pos, size_t count, - const void *arg) -{ - struct pci_vpd *vpd = dev->vpd; - const u8 *buf = arg; - loff_t end = pos + count; - int ret = 0; - - if (pos < 0 || (pos & 3) || (count & 3)) - return -EINVAL; - - if (!vpd->valid) { - vpd->valid = 1; - vpd->len = pci_vpd_size(dev, vpd->len); - } - - if (vpd->len == 0) - return -EIO; - - if (end > vpd->len) - return -EINVAL; - - if (mutex_lock_killable(&vpd->lock)) - return -EINTR; - - ret = pci_vpd_wait(dev); - if (ret < 0) - goto out; - - while (pos < end) { - u32 val; - - val = *buf++; - val |= *buf++ << 8; - val |= *buf++ << 16; - val |= *buf++ << 24; - - ret = pci_user_write_config_dword(dev, vpd->cap + PCI_VPD_DATA, val); - if (ret < 0) - break; - ret = pci_user_write_config_word(dev, vpd->cap + PCI_VPD_ADDR, - pos | PCI_VPD_ADDR_F); - if (ret < 0) - break; - - vpd->busy = 1; - vpd->flag = 0; - ret = pci_vpd_wait(dev); - if (ret < 0) - break; - - pos += sizeof(u32); - } -out: - mutex_unlock(&vpd->lock); - return ret ? 
ret : count; -} - -static int pci_vpd_set_size(struct pci_dev *dev, size_t len) -{ - struct pci_vpd *vpd = dev->vpd; - - if (len == 0 || len > PCI_VPD_MAX_SIZE) - return -EIO; - - vpd->valid = 1; - vpd->len = len; - - return 0; -} - -static const struct pci_vpd_ops pci_vpd_ops = { - .read = pci_vpd_read, - .write = pci_vpd_write, - .set_size = pci_vpd_set_size, -}; - -static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count, - void *arg) -{ - struct pci_dev *tdev = pci_get_slot(dev->bus, - PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); - ssize_t ret; - - if (!tdev) - return -ENODEV; - - ret = pci_read_vpd(tdev, pos, count, arg); - pci_dev_put(tdev); - return ret; -} - -static ssize_t pci_vpd_f0_write(struct pci_dev *dev, loff_t pos, size_t count, - const void *arg) -{ - struct pci_dev *tdev = pci_get_slot(dev->bus, - PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); - ssize_t ret; - - if (!tdev) - return -ENODEV; - - ret = pci_write_vpd(tdev, pos, count, arg); - pci_dev_put(tdev); - return ret; -} - -static int pci_vpd_f0_set_size(struct pci_dev *dev, size_t len) -{ - struct pci_dev *tdev = pci_get_slot(dev->bus, - PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); - int ret; - - if (!tdev) - return -ENODEV; - - ret = pci_set_vpd_size(tdev, len); - pci_dev_put(tdev); - return ret; -} - -static const struct pci_vpd_ops pci_vpd_f0_ops = { - .read = pci_vpd_f0_read, - .write = pci_vpd_f0_write, - .set_size = pci_vpd_f0_set_size, -}; - -int pci_vpd_init(struct pci_dev *dev) -{ - struct pci_vpd *vpd; - u8 cap; - - cap = pci_find_capability(dev, PCI_CAP_ID_VPD); - if (!cap) - return -ENODEV; - - vpd = kzalloc(sizeof(*vpd), GFP_ATOMIC); - if (!vpd) - return -ENOMEM; - - vpd->len = PCI_VPD_MAX_SIZE; - if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) - vpd->ops = &pci_vpd_f0_ops; - else - vpd->ops = &pci_vpd_ops; - mutex_init(&vpd->lock); - vpd->cap = cap; - vpd->busy = 0; - vpd->valid = 0; - dev->vpd = vpd; - return 0; -} - -void pci_vpd_release(struct pci_dev *dev) -{ - kfree(dev->vpd); -} - /** * pci_cfg_access_lock - Lock PCI config reads/writes * @dev: pci device struct @@ -686,8 +318,10 @@ void pci_cfg_access_unlock(struct pci_dev *dev) raw_spin_lock_irqsave(&pci_lock, flags); - /* This indicates a problem in the caller, but we don't need - * to kill them, unlike a double-block above. */ + /* + * This indicates a problem in the caller, but we don't need + * to kill them, unlike a double-block above. + */ WARN_ON(!dev->block_cfg_access); dev->block_cfg_access = 0; diff --git a/drivers/pci/ats.c b/drivers/pci/ats.c index 6ad80a1fd5a7..89305b569d3d 100644 --- a/drivers/pci/ats.c +++ b/drivers/pci/ats.c @@ -1,14 +1,12 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/ats.c - * - * Copyright (C) 2009 Intel Corporation, Yu Zhao <yu.zhao@intel.com> - * Copyright (C) 2011 Advanced Micro Devices, - * - * PCI Express I/O Virtualization (IOV) support. 
+ * PCI Express I/O Virtualization (IOV) support * Address Translation Service 1.0 * Page Request Interface added by Joerg Roedel <joerg.roedel@amd.com> * PASID support added by Joerg Roedel <joerg.roedel@amd.com> + * + * Copyright (C) 2009 Intel Corporation, Yu Zhao <yu.zhao@intel.com> + * Copyright (C) 2011 Advanced Micro Devices, */ #include <linux/export.h> diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c index 737d1c52f002..bc2ded4c451f 100644 --- a/drivers/pci/bus.c +++ b/drivers/pci/bus.c @@ -1,7 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/bus.c - * * From setup-res.c, by: * Dave Rusling (david.rusling@reo.mts.dec.com) * David Mosberger (davidm@cs.arizona.edu) diff --git a/drivers/pci/cadence/pcie-cadence-ep.c b/drivers/pci/cadence/pcie-cadence-ep.c index 3c3a97743453..3d8283e450a9 100644 --- a/drivers/pci/cadence/pcie-cadence-ep.c +++ b/drivers/pci/cadence/pcie-cadence-ep.c @@ -77,16 +77,19 @@ static int cdns_pcie_ep_write_header(struct pci_epc *epc, u8 fn, return 0; } -static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, enum pci_barno bar, - dma_addr_t bar_phys, size_t size, int flags) +static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, + struct pci_epf_bar *epf_bar) { struct cdns_pcie_ep *ep = epc_get_drvdata(epc); struct cdns_pcie *pcie = &ep->pcie; + dma_addr_t bar_phys = epf_bar->phys_addr; + enum pci_barno bar = epf_bar->barno; + int flags = epf_bar->flags; u32 addr0, addr1, reg, cfg, b, aperture, ctrl; u64 sz; /* BAR size is 2^(aperture + 7) */ - sz = max_t(size_t, size, CDNS_PCIE_EP_MIN_APERTURE); + sz = max_t(size_t, epf_bar->size, CDNS_PCIE_EP_MIN_APERTURE); /* * roundup_pow_of_two() returns an unsigned long, which is not suited * for 64bit values. @@ -103,6 +106,9 @@ static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, enum pci_barno bar, if (is_64bits && (bar & 1)) return -EINVAL; + if (is_64bits && !(flags & PCI_BASE_ADDRESS_MEM_TYPE_64)) + epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64; + if (is_64bits && is_prefetch) ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_PREFETCH_MEM_64BITS; else if (is_prefetch) @@ -139,10 +145,11 @@ static int cdns_pcie_ep_set_bar(struct pci_epc *epc, u8 fn, enum pci_barno bar, } static void cdns_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, - enum pci_barno bar) + struct pci_epf_bar *epf_bar) { struct cdns_pcie_ep *ep = epc_get_drvdata(epc); struct cdns_pcie *pcie = &ep->pcie; + enum pci_barno bar = epf_bar->barno; u32 reg, cfg, b, ctrl; if (bar < BAR_4) { diff --git a/drivers/pci/dwc/Kconfig b/drivers/pci/dwc/Kconfig index 0f666b1ce289..2f3f5c50aa48 100644 --- a/drivers/pci/dwc/Kconfig +++ b/drivers/pci/dwc/Kconfig @@ -176,6 +176,7 @@ config PCIE_ARTPEC6_EP config PCIE_KIRIN depends on OF && ARM64 bool "HiSilicon Kirin series SoCs PCIe controllers" + depends on PCI_MSI_IRQ_DOMAIN depends on PCI select PCIEPORTBUS select PCIE_DW_HOST diff --git a/drivers/pci/dwc/pci-exynos.c b/drivers/pci/dwc/pci-exynos.c index ca6278113936..4cc1e5df8c79 100644 --- a/drivers/pci/dwc/pci-exynos.c +++ b/drivers/pci/dwc/pci-exynos.c @@ -294,15 +294,6 @@ static irqreturn_t exynos_pcie_irq_handler(int irq, void *arg) return IRQ_HANDLED; } -static irqreturn_t exynos_pcie_msi_irq_handler(int irq, void *arg) -{ - struct exynos_pcie *ep = arg; - struct dw_pcie *pci = ep->pci; - struct pcie_port *pp = &pci->pp; - - return dw_handle_msi_irq(pp); -} - static void exynos_pcie_msi_init(struct exynos_pcie *ep) { struct dw_pcie *pci = ep->pci; @@ -428,15 +419,6 @@ static int __init exynos_add_pcie_port(struct exynos_pcie *ep, dev_err(dev, 
"failed to get msi irq\n"); return pp->msi_irq; } - - ret = devm_request_irq(dev, pp->msi_irq, - exynos_pcie_msi_irq_handler, - IRQF_SHARED | IRQF_NO_THREAD, - "exynos-pcie", ep); - if (ret) { - dev_err(dev, "failed to request msi irq\n"); - return ret; - } } pp->root_bus_nr = -1; diff --git a/drivers/pci/dwc/pci-imx6.c b/drivers/pci/dwc/pci-imx6.c index 4fddbd08b089..4818ef875f8a 100644 --- a/drivers/pci/dwc/pci-imx6.c +++ b/drivers/pci/dwc/pci-imx6.c @@ -542,15 +542,6 @@ static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie) return -EINVAL; } -static irqreturn_t imx6_pcie_msi_handler(int irq, void *arg) -{ - struct imx6_pcie *imx6_pcie = arg; - struct dw_pcie *pci = imx6_pcie->pci; - struct pcie_port *pp = &pci->pp; - - return dw_handle_msi_irq(pp); -} - static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie) { struct dw_pcie *pci = imx6_pcie->pci; @@ -674,15 +665,6 @@ static int imx6_add_pcie_port(struct imx6_pcie *imx6_pcie, dev_err(dev, "failed to get MSI irq\n"); return -ENODEV; } - - ret = devm_request_irq(dev, pp->msi_irq, - imx6_pcie_msi_handler, - IRQF_SHARED | IRQF_NO_THREAD, - "mx6-pcie-msi", imx6_pcie); - if (ret) { - dev_err(dev, "failed to request MSI irq\n"); - return ret; - } } pp->root_bus_nr = -1; diff --git a/drivers/pci/dwc/pci-keystone-dw.c b/drivers/pci/dwc/pci-keystone-dw.c index 99a0e7076221..0682213328e9 100644 --- a/drivers/pci/dwc/pci-keystone-dw.c +++ b/drivers/pci/dwc/pci-keystone-dw.c @@ -120,20 +120,15 @@ void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset) } } -static void ks_dw_pcie_msi_irq_ack(struct irq_data *d) +void ks_dw_pcie_msi_irq_ack(int irq, struct pcie_port *pp) { - u32 offset, reg_offset, bit_pos; + u32 reg_offset, bit_pos; struct keystone_pcie *ks_pcie; - struct msi_desc *msi; - struct pcie_port *pp; struct dw_pcie *pci; - msi = irq_data_get_msi_desc(d); - pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi); pci = to_dw_pcie_from_pp(pp); ks_pcie = to_keystone_pcie(pci); - offset = d->irq - irq_linear_revmap(pp->irq_domain, 0); - update_reg_offset_bit_pos(offset, ®_offset, &bit_pos); + update_reg_offset_bit_pos(irq, ®_offset, &bit_pos); ks_dw_app_writel(ks_pcie, MSI0_IRQ_STATUS + (reg_offset << 4), BIT(bit_pos)); @@ -162,85 +157,9 @@ void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq) BIT(bit_pos)); } -static void ks_dw_pcie_msi_irq_mask(struct irq_data *d) -{ - struct msi_desc *msi; - struct pcie_port *pp; - u32 offset; - - msi = irq_data_get_msi_desc(d); - pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi); - offset = d->irq - irq_linear_revmap(pp->irq_domain, 0); - - /* Mask the end point if PVM implemented */ - if (IS_ENABLED(CONFIG_PCI_MSI)) { - if (msi->msi_attrib.maskbit) - pci_msi_mask_irq(d); - } - - ks_dw_pcie_msi_clear_irq(pp, offset); -} - -static void ks_dw_pcie_msi_irq_unmask(struct irq_data *d) -{ - struct msi_desc *msi; - struct pcie_port *pp; - u32 offset; - - msi = irq_data_get_msi_desc(d); - pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi); - offset = d->irq - irq_linear_revmap(pp->irq_domain, 0); - - /* Mask the end point if PVM implemented */ - if (IS_ENABLED(CONFIG_PCI_MSI)) { - if (msi->msi_attrib.maskbit) - pci_msi_unmask_irq(d); - } - - ks_dw_pcie_msi_set_irq(pp, offset); -} - -static struct irq_chip ks_dw_pcie_msi_irq_chip = { - .name = "Keystone-PCIe-MSI-IRQ", - .irq_ack = ks_dw_pcie_msi_irq_ack, - .irq_mask = ks_dw_pcie_msi_irq_mask, - .irq_unmask = ks_dw_pcie_msi_irq_unmask, -}; - -static int ks_dw_pcie_msi_map(struct irq_domain *domain, unsigned 
int irq, - irq_hw_number_t hwirq) +int ks_dw_pcie_msi_host_init(struct pcie_port *pp) { - irq_set_chip_and_handler(irq, &ks_dw_pcie_msi_irq_chip, - handle_level_irq); - irq_set_chip_data(irq, domain->host_data); - - return 0; -} - -static const struct irq_domain_ops ks_dw_pcie_msi_domain_ops = { - .map = ks_dw_pcie_msi_map, -}; - -int ks_dw_pcie_msi_host_init(struct pcie_port *pp, struct msi_controller *chip) -{ - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); - struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); - struct device *dev = pci->dev; - int i; - - pp->irq_domain = irq_domain_add_linear(ks_pcie->msi_intc_np, - MAX_MSI_IRQS, - &ks_dw_pcie_msi_domain_ops, - chip); - if (!pp->irq_domain) { - dev_err(dev, "irq domain init failed\n"); - return -ENXIO; - } - - for (i = 0; i < MAX_MSI_IRQS; i++) - irq_create_mapping(pp->irq_domain, i); - - return 0; + return dw_pcie_allocate_domains(pp); } void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie) diff --git a/drivers/pci/dwc/pci-keystone.c b/drivers/pci/dwc/pci-keystone.c index d4f8ab90c018..d55ae0716adf 100644 --- a/drivers/pci/dwc/pci-keystone.c +++ b/drivers/pci/dwc/pci-keystone.c @@ -297,6 +297,7 @@ static const struct dw_pcie_host_ops keystone_pcie_host_ops = { .msi_clear_irq = ks_dw_pcie_msi_clear_irq, .get_msi_addr = ks_dw_pcie_get_msi_addr, .msi_host_init = ks_dw_pcie_msi_host_init, + .msi_irq_ack = ks_dw_pcie_msi_irq_ack, .scan_bus = ks_dw_pcie_v3_65_scan_bus, }; diff --git a/drivers/pci/dwc/pci-keystone.h b/drivers/pci/dwc/pci-keystone.h index 1dd1f3ef98e7..8a13da391543 100644 --- a/drivers/pci/dwc/pci-keystone.h +++ b/drivers/pci/dwc/pci-keystone.h @@ -49,9 +49,9 @@ int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *val); void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie); void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie); +void ks_dw_pcie_msi_irq_ack(int i, struct pcie_port *pp); void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq); void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq); void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp); -int ks_dw_pcie_msi_host_init(struct pcie_port *pp, - struct msi_controller *chip); +int ks_dw_pcie_msi_host_init(struct pcie_port *pp); int ks_dw_pcie_link_up(struct dw_pcie *pci); diff --git a/drivers/pci/dwc/pci-layerscape.c b/drivers/pci/dwc/pci-layerscape.c index a7b4159631ae..3724d3ef7008 100644 --- a/drivers/pci/dwc/pci-layerscape.c +++ b/drivers/pci/dwc/pci-layerscape.c @@ -182,8 +182,7 @@ static int ls1021_pcie_host_init(struct pcie_port *pp) return ls_pcie_host_init(pp); } -static int ls_pcie_msi_host_init(struct pcie_port *pp, - struct msi_controller *chip) +static int ls_pcie_msi_host_init(struct pcie_port *pp) { struct dw_pcie *pci = to_dw_pcie_from_pp(pp); struct device *dev = pci->dev; diff --git a/drivers/pci/dwc/pcie-artpec6.c b/drivers/pci/dwc/pcie-artpec6.c index 93b3df9ed1b5..e66cede2b5b7 100644 --- a/drivers/pci/dwc/pcie-artpec6.c +++ b/drivers/pci/dwc/pcie-artpec6.c @@ -383,15 +383,6 @@ static const struct dw_pcie_host_ops artpec6_pcie_host_ops = { .host_init = artpec6_pcie_host_init, }; -static irqreturn_t artpec6_pcie_msi_handler(int irq, void *arg) -{ - struct artpec6_pcie *artpec6_pcie = arg; - struct dw_pcie *pci = artpec6_pcie->pci; - struct pcie_port *pp = &pci->pp; - - return dw_handle_msi_irq(pp); -} - static int artpec6_add_pcie_port(struct artpec6_pcie *artpec6_pcie, struct platform_device *pdev) { @@ -406,15 +397,6 @@ static int 
artpec6_add_pcie_port(struct artpec6_pcie *artpec6_pcie, dev_err(dev, "failed to get MSI irq\n"); return pp->msi_irq; } - - ret = devm_request_irq(dev, pp->msi_irq, - artpec6_pcie_msi_handler, - IRQF_SHARED | IRQF_NO_THREAD, - "artpec6-pcie-msi", artpec6_pcie); - if (ret) { - dev_err(dev, "failed to request MSI irq\n"); - return ret; - } } pp->root_bus_nr = -1; diff --git a/drivers/pci/dwc/pcie-designware-ep.c b/drivers/pci/dwc/pcie-designware-ep.c index 3a6feeff5f5b..f07678bf7cfc 100644 --- a/drivers/pci/dwc/pcie-designware-ep.c +++ b/drivers/pci/dwc/pcie-designware-ep.c @@ -19,7 +19,8 @@ void dw_pcie_ep_linkup(struct dw_pcie_ep *ep) pci_epc_linkup(epc); } -void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar) +static void __dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar, + int flags) { u32 reg; @@ -27,9 +28,18 @@ void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar) dw_pcie_dbi_ro_wr_en(pci); dw_pcie_writel_dbi2(pci, reg, 0x0); dw_pcie_writel_dbi(pci, reg, 0x0); + if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) { + dw_pcie_writel_dbi2(pci, reg + 4, 0x0); + dw_pcie_writel_dbi(pci, reg + 4, 0x0); + } dw_pcie_dbi_ro_wr_dis(pci); } +void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar) +{ + __dw_pcie_ep_reset_bar(pci, bar, 0); +} + static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, struct pci_epf_header *hdr) { @@ -104,25 +114,28 @@ static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, phys_addr_t phys_addr, } static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no, - enum pci_barno bar) + struct pci_epf_bar *epf_bar) { struct dw_pcie_ep *ep = epc_get_drvdata(epc); struct dw_pcie *pci = to_dw_pcie_from_ep(ep); + enum pci_barno bar = epf_bar->barno; u32 atu_index = ep->bar_to_atu[bar]; - dw_pcie_ep_reset_bar(pci, bar); + __dw_pcie_ep_reset_bar(pci, bar, epf_bar->flags); dw_pcie_disable_atu(pci, atu_index, DW_PCIE_REGION_INBOUND); clear_bit(atu_index, ep->ib_window_map); } static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, - enum pci_barno bar, - dma_addr_t bar_phys, size_t size, int flags) + struct pci_epf_bar *epf_bar) { int ret; struct dw_pcie_ep *ep = epc_get_drvdata(epc); struct dw_pcie *pci = to_dw_pcie_from_ep(ep); + enum pci_barno bar = epf_bar->barno; + size_t size = epf_bar->size; + int flags = epf_bar->flags; enum dw_pcie_as_type as_type; u32 reg = PCI_BASE_ADDRESS_0 + (4 * bar); @@ -131,13 +144,20 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, else as_type = DW_PCIE_AS_IO; - ret = dw_pcie_ep_inbound_atu(ep, bar, bar_phys, as_type); + ret = dw_pcie_ep_inbound_atu(ep, bar, epf_bar->phys_addr, as_type); if (ret) return ret; dw_pcie_dbi_ro_wr_en(pci); - dw_pcie_writel_dbi2(pci, reg, size - 1); + + dw_pcie_writel_dbi2(pci, reg, lower_32_bits(size - 1)); dw_pcie_writel_dbi(pci, reg, flags); + + if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) { + dw_pcie_writel_dbi2(pci, reg + 4, upper_32_bits(size - 1)); + dw_pcie_writel_dbi(pci, reg + 4, 0); + } + dw_pcie_dbi_ro_wr_dis(pci); return 0; @@ -322,7 +342,7 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep) struct device_node *np = dev->of_node; if (!pci->dbi_base || !pci->dbi_base2) { - dev_err(dev, "dbi_base/deb_base2 is not populated\n"); + dev_err(dev, "dbi_base/dbi_base2 is not populated\n"); return -EINVAL; } diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c index dc9303abda42..6c409079d514 100644 --- a/drivers/pci/dwc/pcie-designware-host.c +++ b/drivers/pci/dwc/pcie-designware-host.c @@ -8,6 +8,7 
@@ * Author: Jingoo Han <jg1.han@samsung.com> */ +#include <linux/irqchip/chained_irq.h> #include <linux/irqdomain.h> #include <linux/of_address.h> #include <linux/of_pci.h> @@ -42,22 +43,46 @@ static int dw_pcie_wr_own_conf(struct pcie_port *pp, int where, int size, return dw_pcie_write(pci->dbi_base + where, size, val); } -static struct irq_chip dw_msi_irq_chip = { +static void dw_msi_ack_irq(struct irq_data *d) +{ + irq_chip_ack_parent(d); +} + +static void dw_msi_mask_irq(struct irq_data *d) +{ + pci_msi_mask_irq(d); + irq_chip_mask_parent(d); +} + +static void dw_msi_unmask_irq(struct irq_data *d) +{ + pci_msi_unmask_irq(d); + irq_chip_unmask_parent(d); +} + +static struct irq_chip dw_pcie_msi_irq_chip = { .name = "PCI-MSI", - .irq_enable = pci_msi_unmask_irq, - .irq_disable = pci_msi_mask_irq, - .irq_mask = pci_msi_mask_irq, - .irq_unmask = pci_msi_unmask_irq, + .irq_ack = dw_msi_ack_irq, + .irq_mask = dw_msi_mask_irq, + .irq_unmask = dw_msi_unmask_irq, +}; + +static struct msi_domain_info dw_pcie_msi_domain_info = { + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | + MSI_FLAG_PCI_MSIX | MSI_FLAG_MULTI_PCI_MSI), + .chip = &dw_pcie_msi_irq_chip, }; /* MSI int handler */ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp) { - u32 val; int i, pos, irq; + u32 val, num_ctrls; irqreturn_t ret = IRQ_NONE; - for (i = 0; i < MAX_MSI_CTRLS; i++) { + num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; + + for (i = 0; i < num_ctrls; i++) { dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12, 4, &val); if (!val) @@ -78,206 +103,216 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp) return ret; } -void dw_pcie_msi_init(struct pcie_port *pp) +/* Chained MSI interrupt service routine */ +static void dw_chained_msi_isr(struct irq_desc *desc) { - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); - struct device *dev = pci->dev; - struct page *page; - u64 msi_target; + struct irq_chip *chip = irq_desc_get_chip(desc); + struct pcie_port *pp; - page = alloc_page(GFP_KERNEL); - pp->msi_data = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE); - if (dma_mapping_error(dev, pp->msi_data)) { - dev_err(dev, "failed to map MSI data\n"); - __free_page(page); - return; - } - msi_target = (u64)pp->msi_data; + chained_irq_enter(chip, desc); - /* program the msi_data */ - dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_LO, 4, - (u32)(msi_target & 0xffffffff)); - dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_HI, 4, - (u32)(msi_target >> 32 & 0xffffffff)); + pp = irq_desc_get_handler_data(desc); + dw_handle_msi_irq(pp); + + chained_irq_exit(chip, desc); } -static void dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq) +static void dw_pci_setup_msi_msg(struct irq_data *data, struct msi_msg *msg) { - unsigned int res, bit, val; + struct pcie_port *pp = irq_data_get_irq_chip_data(data); + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + u64 msi_target; - res = (irq / 32) * 12; - bit = irq % 32; - dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val); - val &= ~(1 << bit); - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val); -} + if (pp->ops->get_msi_addr) + msi_target = pp->ops->get_msi_addr(pp); + else + msi_target = (u64)pp->msi_data; -static void clear_irq_range(struct pcie_port *pp, unsigned int irq_base, - unsigned int nvec, unsigned int pos) -{ - unsigned int i; - - for (i = 0; i < nvec; i++) { - irq_set_msi_desc_off(irq_base, i, NULL); - /* Disable corresponding interrupt on MSI controller */ - if (pp->ops->msi_clear_irq) - pp->ops->msi_clear_irq(pp, pos + i); - else - 
dw_pcie_msi_clear_irq(pp, pos + i); - } + msg->address_lo = lower_32_bits(msi_target); + msg->address_hi = upper_32_bits(msi_target); - bitmap_release_region(pp->msi_irq_in_use, pos, order_base_2(nvec)); + if (pp->ops->get_msi_data) + msg->data = pp->ops->get_msi_data(pp, data->hwirq); + else + msg->data = data->hwirq; + + dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n", + (int)data->hwirq, msg->address_hi, msg->address_lo); } -static void dw_pcie_msi_set_irq(struct pcie_port *pp, int irq) +static int dw_pci_msi_set_affinity(struct irq_data *irq_data, + const struct cpumask *mask, bool force) { - unsigned int res, bit, val; - - res = (irq / 32) * 12; - bit = irq % 32; - dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val); - val |= 1 << bit; - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val); + return -EINVAL; } -static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos) +static void dw_pci_bottom_mask(struct irq_data *data) { - int irq, pos0, i; - struct pcie_port *pp; - - pp = (struct pcie_port *)msi_desc_to_pci_sysdata(desc); - pos0 = bitmap_find_free_region(pp->msi_irq_in_use, MAX_MSI_IRQS, - order_base_2(no_irqs)); - if (pos0 < 0) - goto no_valid_irq; + struct pcie_port *pp = irq_data_get_irq_chip_data(data); + unsigned int res, bit, ctrl; + unsigned long flags; - irq = irq_find_mapping(pp->irq_domain, pos0); - if (!irq) - goto no_valid_irq; + raw_spin_lock_irqsave(&pp->lock, flags); - /* - * irq_create_mapping (called from dw_pcie_host_init) pre-allocates - * descs so there is no need to allocate descs here. We can therefore - * assume that if irq_find_mapping above returns non-zero, then the - * descs are also successfully allocated. - */ + if (pp->ops->msi_clear_irq) { + pp->ops->msi_clear_irq(pp, data->hwirq); + } else { + ctrl = data->hwirq / 32; + res = ctrl * 12; + bit = data->hwirq % 32; - for (i = 0; i < no_irqs; i++) { - if (irq_set_msi_desc_off(irq, i, desc) != 0) { - clear_irq_range(pp, irq, i, pos0); - goto no_valid_irq; - } - /*Enable corresponding interrupt in MSI interrupt controller */ - if (pp->ops->msi_set_irq) - pp->ops->msi_set_irq(pp, pos0 + i); - else - dw_pcie_msi_set_irq(pp, pos0 + i); + pp->irq_status[ctrl] &= ~(1 << bit); + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, + pp->irq_status[ctrl]); } - *pos = pos0; - desc->nvec_used = no_irqs; - desc->msi_attrib.multiple = order_base_2(no_irqs); - - return irq; - -no_valid_irq: - *pos = pos0; - return -ENOSPC; + raw_spin_unlock_irqrestore(&pp->lock, flags); } -static void dw_msi_setup_msg(struct pcie_port *pp, unsigned int irq, u32 pos) +static void dw_pci_bottom_unmask(struct irq_data *data) { - struct msi_msg msg; - u64 msi_target; + struct pcie_port *pp = irq_data_get_irq_chip_data(data); + unsigned int res, bit, ctrl; + unsigned long flags; - if (pp->ops->get_msi_addr) - msi_target = pp->ops->get_msi_addr(pp); - else - msi_target = (u64)pp->msi_data; + raw_spin_lock_irqsave(&pp->lock, flags); - msg.address_lo = (u32)(msi_target & 0xffffffff); - msg.address_hi = (u32)(msi_target >> 32 & 0xffffffff); + if (pp->ops->msi_set_irq) { + pp->ops->msi_set_irq(pp, data->hwirq); + } else { + ctrl = data->hwirq / 32; + res = ctrl * 12; + bit = data->hwirq % 32; - if (pp->ops->get_msi_data) - msg.data = pp->ops->get_msi_data(pp, pos); - else - msg.data = pos; + pp->irq_status[ctrl] |= 1 << bit; + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, + pp->irq_status[ctrl]); + } - pci_write_msi_msg(irq, &msg); + raw_spin_unlock_irqrestore(&pp->lock, flags); } -static int 
dw_msi_setup_irq(struct msi_controller *chip, struct pci_dev *pdev, - struct msi_desc *desc) +static void dw_pci_bottom_ack(struct irq_data *d) { - int irq, pos; - struct pcie_port *pp = pdev->bus->sysdata; - - if (desc->msi_attrib.is_msix) - return -EINVAL; - - irq = assign_irq(1, desc, &pos); - if (irq < 0) - return irq; + struct msi_desc *msi = irq_data_get_msi_desc(d); + struct pcie_port *pp; - dw_msi_setup_msg(pp, irq, pos); + pp = msi_desc_to_pci_sysdata(msi); - return 0; + if (pp->ops->msi_irq_ack) + pp->ops->msi_irq_ack(d->hwirq, pp); } -static int dw_msi_setup_irqs(struct msi_controller *chip, struct pci_dev *pdev, - int nvec, int type) +static struct irq_chip dw_pci_msi_bottom_irq_chip = { + .name = "DWPCI-MSI", + .irq_ack = dw_pci_bottom_ack, + .irq_compose_msi_msg = dw_pci_setup_msi_msg, + .irq_set_affinity = dw_pci_msi_set_affinity, + .irq_mask = dw_pci_bottom_mask, + .irq_unmask = dw_pci_bottom_unmask, +}; + +static int dw_pcie_irq_domain_alloc(struct irq_domain *domain, + unsigned int virq, unsigned int nr_irqs, + void *args) { -#ifdef CONFIG_PCI_MSI - int irq, pos; - struct msi_desc *desc; - struct pcie_port *pp = pdev->bus->sysdata; + struct pcie_port *pp = domain->host_data; + unsigned long flags; + u32 i; + int bit; + + raw_spin_lock_irqsave(&pp->lock, flags); - /* MSI-X interrupts are not supported */ - if (type == PCI_CAP_ID_MSIX) - return -EINVAL; + bit = bitmap_find_free_region(pp->msi_irq_in_use, pp->num_vectors, + order_base_2(nr_irqs)); - WARN_ON(!list_is_singular(&pdev->dev.msi_list)); - desc = list_entry(pdev->dev.msi_list.next, struct msi_desc, list); + raw_spin_unlock_irqrestore(&pp->lock, flags); - irq = assign_irq(nvec, desc, &pos); - if (irq < 0) - return irq; + if (bit < 0) + return -ENOSPC; - dw_msi_setup_msg(pp, irq, pos); + for (i = 0; i < nr_irqs; i++) + irq_domain_set_info(domain, virq + i, bit + i, + &dw_pci_msi_bottom_irq_chip, + pp, handle_edge_irq, + NULL, NULL); return 0; -#else - return -EINVAL; -#endif } -static void dw_msi_teardown_irq(struct msi_controller *chip, unsigned int irq) +static void dw_pcie_irq_domain_free(struct irq_domain *domain, + unsigned int virq, unsigned int nr_irqs) { - struct irq_data *data = irq_get_irq_data(irq); - struct msi_desc *msi = irq_data_get_msi_desc(data); - struct pcie_port *pp = (struct pcie_port *)msi_desc_to_pci_sysdata(msi); - - clear_irq_range(pp, irq, 1, data->hwirq); + struct irq_data *data = irq_domain_get_irq_data(domain, virq); + struct pcie_port *pp = irq_data_get_irq_chip_data(data); + unsigned long flags; + + raw_spin_lock_irqsave(&pp->lock, flags); + bitmap_release_region(pp->msi_irq_in_use, data->hwirq, + order_base_2(nr_irqs)); + raw_spin_unlock_irqrestore(&pp->lock, flags); } -static struct msi_controller dw_pcie_msi_chip = { - .setup_irq = dw_msi_setup_irq, - .setup_irqs = dw_msi_setup_irqs, - .teardown_irq = dw_msi_teardown_irq, +static const struct irq_domain_ops dw_pcie_msi_domain_ops = { + .alloc = dw_pcie_irq_domain_alloc, + .free = dw_pcie_irq_domain_free, }; -static int dw_pcie_msi_map(struct irq_domain *domain, unsigned int irq, - irq_hw_number_t hwirq) +int dw_pcie_allocate_domains(struct pcie_port *pp) { - irq_set_chip_and_handler(irq, &dw_msi_irq_chip, handle_simple_irq); - irq_set_chip_data(irq, domain->host_data); + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + struct fwnode_handle *fwnode = of_node_to_fwnode(pci->dev->of_node); + + pp->irq_domain = irq_domain_create_linear(fwnode, pp->num_vectors, + &dw_pcie_msi_domain_ops, pp); + if (!pp->irq_domain) { + dev_err(pci->dev, 
"failed to create IRQ domain\n"); + return -ENOMEM; + } + + pp->msi_domain = pci_msi_create_irq_domain(fwnode, + &dw_pcie_msi_domain_info, + pp->irq_domain); + if (!pp->msi_domain) { + dev_err(pci->dev, "failed to create MSI domain\n"); + irq_domain_remove(pp->irq_domain); + return -ENOMEM; + } return 0; } -static const struct irq_domain_ops msi_domain_ops = { - .map = dw_pcie_msi_map, -}; +void dw_pcie_free_msi(struct pcie_port *pp) +{ + irq_set_chained_handler(pp->msi_irq, NULL); + irq_set_handler_data(pp->msi_irq, NULL); + + irq_domain_remove(pp->msi_domain); + irq_domain_remove(pp->irq_domain); +} + +void dw_pcie_msi_init(struct pcie_port *pp) +{ + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + struct device *dev = pci->dev; + struct page *page; + u64 msi_target; + + page = alloc_page(GFP_KERNEL); + pp->msi_data = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE); + if (dma_mapping_error(dev, pp->msi_data)) { + dev_err(dev, "failed to map MSI data\n"); + __free_page(page); + return; + } + msi_target = (u64)pp->msi_data; + + /* program the msi_data */ + dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_LO, 4, + lower_32_bits(msi_target)); + dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_HI, 4, + upper_32_bits(msi_target)); +} int dw_pcie_host_init(struct pcie_port *pp) { @@ -285,11 +320,13 @@ int dw_pcie_host_init(struct pcie_port *pp) struct device *dev = pci->dev; struct device_node *np = dev->of_node; struct platform_device *pdev = to_platform_device(dev); + struct resource_entry *win, *tmp; struct pci_bus *bus, *child; struct pci_host_bridge *bridge; struct resource *cfg_res; - int i, ret; - struct resource_entry *win, *tmp; + int ret; + + raw_spin_lock_init(&pci->pp.lock); cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); if (cfg_res) { @@ -388,20 +425,35 @@ int dw_pcie_host_init(struct pcie_port *pp) pci->num_viewport = 2; if (IS_ENABLED(CONFIG_PCI_MSI)) { - if (!pp->ops->msi_host_init) { - pp->irq_domain = irq_domain_add_linear(dev->of_node, - MAX_MSI_IRQS, &msi_domain_ops, - &dw_pcie_msi_chip); - if (!pp->irq_domain) { - dev_err(dev, "irq domain init failed\n"); - ret = -ENXIO; + /* + * If a specific SoC driver needs to change the + * default number of vectors, it needs to implement + * the set_num_vectors callback. 
+ */ + if (!pp->ops->set_num_vectors) { + pp->num_vectors = MSI_DEF_NUM_VECTORS; + } else { + pp->ops->set_num_vectors(pp); + + if (pp->num_vectors > MAX_MSI_IRQS || + pp->num_vectors == 0) { + dev_err(dev, + "Invalid number of vectors\n"); goto error; } + } - for (i = 0; i < MAX_MSI_IRQS; i++) - irq_create_mapping(pp->irq_domain, i); + if (!pp->ops->msi_host_init) { + ret = dw_pcie_allocate_domains(pp); + if (ret) + goto error; + + if (pp->msi_irq) + irq_set_chained_handler_and_data(pp->msi_irq, + dw_chained_msi_isr, + pp); } else { - ret = pp->ops->msi_host_init(pp, &dw_pcie_msi_chip); + ret = pp->ops->msi_host_init(pp); if (ret < 0) goto error; } @@ -421,10 +473,6 @@ int dw_pcie_host_init(struct pcie_port *pp) bridge->ops = &dw_pcie_ops; bridge->map_irq = of_irq_parse_and_map_pci; bridge->swizzle_irq = pci_common_swizzle; - if (IS_ENABLED(CONFIG_PCI_MSI)) { - bridge->msi = &dw_pcie_msi_chip; - dw_pcie_msi_chip.dev = dev; - } ret = pci_scan_root_bus_bridge(bridge); if (ret) @@ -593,11 +641,17 @@ static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci) void dw_pcie_setup_rc(struct pcie_port *pp) { - u32 val; + u32 val, ctrl, num_ctrls; struct dw_pcie *pci = to_dw_pcie_from_pp(pp); dw_pcie_setup(pci); + num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; + + /* Initialize IRQ Status array */ + for (ctrl = 0; ctrl < num_ctrls; ctrl++) + dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + (ctrl * 12), 4, + &pp->irq_status[ctrl]); /* setup RC BARs */ dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0x00000004); dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000); diff --git a/drivers/pci/dwc/pcie-designware-plat.c b/drivers/pci/dwc/pcie-designware-plat.c index ebdf28bcd67d..5416aa8a07a5 100644 --- a/drivers/pci/dwc/pcie-designware-plat.c +++ b/drivers/pci/dwc/pcie-designware-plat.c @@ -25,13 +25,6 @@ struct dw_plat_pcie { struct dw_pcie *pci; }; -static irqreturn_t dw_plat_pcie_msi_irq_handler(int irq, void *arg) -{ - struct pcie_port *pp = arg; - - return dw_handle_msi_irq(pp); -} - static int dw_plat_pcie_host_init(struct pcie_port *pp) { struct dw_pcie *pci = to_dw_pcie_from_pp(pp); @@ -63,15 +56,6 @@ static int dw_plat_add_pcie_port(struct pcie_port *pp, pp->msi_irq = platform_get_irq(pdev, 0); if (pp->msi_irq < 0) return pp->msi_irq; - - ret = devm_request_irq(dev, pp->msi_irq, - dw_plat_pcie_msi_irq_handler, - IRQF_SHARED | IRQF_NO_THREAD, - "dw-plat-pcie-msi", pp); - if (ret) { - dev_err(dev, "failed to request MSI IRQ\n"); - return ret; - } } pp->root_bus_nr = -1; diff --git a/drivers/pci/dwc/pcie-designware.h b/drivers/pci/dwc/pcie-designware.h index 11b13864a406..fe811dbc12cf 100644 --- a/drivers/pci/dwc/pcie-designware.h +++ b/drivers/pci/dwc/pcie-designware.h @@ -107,13 +107,10 @@ #define MSI_MESSAGE_DATA_32 0x58 #define MSI_MESSAGE_DATA_64 0x5C -/* - * Maximum number of MSI IRQs can be 256 per controller. But keep - * it 32 as of now. Probably we will never need more than 32. If needed, - * then increment it in multiple of 32. 
- */ -#define MAX_MSI_IRQS 32 -#define MAX_MSI_CTRLS (MAX_MSI_IRQS / 32) +#define MAX_MSI_IRQS 256 +#define MAX_MSI_IRQS_PER_CTRL 32 +#define MAX_MSI_CTRLS (MAX_MSI_IRQS / MAX_MSI_IRQS_PER_CTRL) +#define MSI_DEF_NUM_VECTORS 32 /* Maximum number of inbound/outbound iATUs */ #define MAX_IATU_IN 256 @@ -149,7 +146,9 @@ struct dw_pcie_host_ops { phys_addr_t (*get_msi_addr)(struct pcie_port *pp); u32 (*get_msi_data)(struct pcie_port *pp, int pos); void (*scan_bus)(struct pcie_port *pp); - int (*msi_host_init)(struct pcie_port *pp, struct msi_controller *chip); + void (*set_num_vectors)(struct pcie_port *pp); + int (*msi_host_init)(struct pcie_port *pp); + void (*msi_irq_ack)(int irq, struct pcie_port *pp); }; struct pcie_port { @@ -174,7 +173,11 @@ struct pcie_port { const struct dw_pcie_host_ops *ops; int msi_irq; struct irq_domain *irq_domain; + struct irq_domain *msi_domain; dma_addr_t msi_data; + u32 num_vectors; + u32 irq_status[MAX_MSI_CTRLS]; + raw_spinlock_t lock; DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS); }; @@ -316,8 +319,10 @@ static inline void dw_pcie_dbi_ro_wr_dis(struct dw_pcie *pci) #ifdef CONFIG_PCIE_DW_HOST irqreturn_t dw_handle_msi_irq(struct pcie_port *pp); void dw_pcie_msi_init(struct pcie_port *pp); +void dw_pcie_free_msi(struct pcie_port *pp); void dw_pcie_setup_rc(struct pcie_port *pp); int dw_pcie_host_init(struct pcie_port *pp); +int dw_pcie_allocate_domains(struct pcie_port *pp); #else static inline irqreturn_t dw_handle_msi_irq(struct pcie_port *pp) { @@ -328,6 +333,10 @@ static inline void dw_pcie_msi_init(struct pcie_port *pp) { } +static inline void dw_pcie_free_msi(struct pcie_port *pp) +{ +} + static inline void dw_pcie_setup_rc(struct pcie_port *pp) { } @@ -336,6 +345,11 @@ static inline int dw_pcie_host_init(struct pcie_port *pp) { return 0; } + +static inline int dw_pcie_allocate_domains(struct pcie_port *pp) +{ + return 0; +} #endif #ifdef CONFIG_PCIE_DW_EP diff --git a/drivers/pci/dwc/pcie-histb.c b/drivers/pci/dwc/pcie-histb.c index 70b5c0b108bf..3611d6ce9a92 100644 --- a/drivers/pci/dwc/pcie-histb.c +++ b/drivers/pci/dwc/pcie-histb.c @@ -61,6 +61,7 @@ struct histb_pcie { struct reset_control *bus_reset; void __iomem *ctrl; int reset_gpio; + struct regulator *vpcie; }; static u32 histb_pcie_readl(struct histb_pcie *histb_pcie, u32 reg) @@ -207,13 +208,6 @@ static struct dw_pcie_host_ops histb_pcie_host_ops = { .host_init = histb_pcie_host_init, }; -static irqreturn_t histb_pcie_msi_irq_handler(int irq, void *arg) -{ - struct pcie_port *pp = arg; - - return dw_handle_msi_irq(pp); -} - static void histb_pcie_host_disable(struct histb_pcie *hipcie) { reset_control_assert(hipcie->soft_reset); @@ -227,6 +221,9 @@ static void histb_pcie_host_disable(struct histb_pcie *hipcie) if (gpio_is_valid(hipcie->reset_gpio)) gpio_set_value_cansleep(hipcie->reset_gpio, 0); + + if (hipcie->vpcie) + regulator_disable(hipcie->vpcie); } static int histb_pcie_host_enable(struct pcie_port *pp) @@ -237,6 +234,14 @@ static int histb_pcie_host_enable(struct pcie_port *pp) int ret; /* power on PCIe device if have */ + if (hipcie->vpcie) { + ret = regulator_enable(hipcie->vpcie); + if (ret) { + dev_err(dev, "failed to enable regulator: %d\n", ret); + return ret; + } + } + if (gpio_is_valid(hipcie->reset_gpio)) gpio_set_value_cansleep(hipcie->reset_gpio, 1); @@ -276,13 +281,14 @@ static int histb_pcie_host_enable(struct pcie_port *pp) return 0; err_aux_clk: - clk_disable_unprepare(hipcie->aux_clk); -err_pipe_clk: clk_disable_unprepare(hipcie->pipe_clk); -err_sys_clk: +err_pipe_clk: 
clk_disable_unprepare(hipcie->sys_clk); -err_bus_clk: +err_sys_clk: clk_disable_unprepare(hipcie->bus_clk); +err_bus_clk: + if (hipcie->vpcie) + regulator_disable(hipcie->vpcie); return ret; } @@ -332,6 +338,13 @@ static int histb_pcie_probe(struct platform_device *pdev) return PTR_ERR(pci->dbi_base); } + hipcie->vpcie = devm_regulator_get_optional(dev, "vpcie"); + if (IS_ERR(hipcie->vpcie)) { + if (PTR_ERR(hipcie->vpcie) == -EPROBE_DEFER) + return -EPROBE_DEFER; + hipcie->vpcie = NULL; + } + hipcie->reset_gpio = of_get_named_gpio_flags(np, "reset-gpios", 0, &of_flags); if (of_flags & OF_GPIO_ACTIVE_LOW) @@ -393,14 +406,6 @@ static int histb_pcie_probe(struct platform_device *pdev) dev_err(dev, "Failed to get MSI IRQ\n"); return pp->msi_irq; } - - ret = devm_request_irq(dev, pp->msi_irq, - histb_pcie_msi_irq_handler, - IRQF_SHARED, "histb-pcie-msi", pp); - if (ret) { - dev_err(dev, "cannot request MSI IRQ\n"); - return ret; - } } hipcie->phy = devm_phy_get(dev, "phy"); diff --git a/drivers/pci/dwc/pcie-kirin.c b/drivers/pci/dwc/pcie-kirin.c index 13d839bd6160..a6b88c7f6e3e 100644 --- a/drivers/pci/dwc/pcie-kirin.c +++ b/drivers/pci/dwc/pcie-kirin.c @@ -8,7 +8,6 @@ * Author: Xiaowei Song <songxiaowei@huawei.com> */ -#include <asm/compiler.h> #include <linux/compiler.h> #include <linux/clk.h> #include <linux/delay.h> @@ -505,7 +504,7 @@ static const struct of_device_id kirin_pcie_match[] = { {}, }; -struct platform_driver kirin_pcie_driver = { +static struct platform_driver kirin_pcie_driver = { .probe = kirin_pcie_probe, .driver = { .name = "kirin-pcie", diff --git a/drivers/pci/dwc/pcie-qcom.c b/drivers/pci/dwc/pcie-qcom.c index 6310c66e265c..5897af7d3355 100644 --- a/drivers/pci/dwc/pcie-qcom.c +++ b/drivers/pci/dwc/pcie-qcom.c @@ -79,6 +79,7 @@ #define PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE 0x358 #define SLV_ADDR_SPACE_SZ 0x10000000 +#define QCOM_PCIE_2_1_0_MAX_SUPPLY 3 struct qcom_pcie_resources_2_1_0 { struct clk *iface_clk; struct clk *core_clk; @@ -88,9 +89,7 @@ struct qcom_pcie_resources_2_1_0 { struct reset_control *ahb_reset; struct reset_control *por_reset; struct reset_control *phy_reset; - struct regulator *vdda; - struct regulator *vdda_phy; - struct regulator *vdda_refclk; + struct regulator_bulk_data supplies[QCOM_PCIE_2_1_0_MAX_SUPPLY]; }; struct qcom_pcie_resources_1_0_0 { @@ -102,12 +101,14 @@ struct qcom_pcie_resources_1_0_0 { struct regulator *vdda; }; +#define QCOM_PCIE_2_3_2_MAX_SUPPLY 2 struct qcom_pcie_resources_2_3_2 { struct clk *aux_clk; struct clk *master_clk; struct clk *slave_clk; struct clk *cfg_clk; struct clk *pipe_clk; + struct regulator_bulk_data supplies[QCOM_PCIE_2_3_2_MAX_SUPPLY]; }; struct qcom_pcie_resources_2_4_0 { @@ -180,13 +181,6 @@ static void qcom_ep_reset_deassert(struct qcom_pcie *pcie) usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500); } -static irqreturn_t qcom_pcie_msi_irq_handler(int irq, void *arg) -{ - struct pcie_port *pp = arg; - - return dw_handle_msi_irq(pp); -} - static int qcom_pcie_establish_link(struct qcom_pcie *pcie) { struct dw_pcie *pci = pcie->pci; @@ -216,18 +210,15 @@ static int qcom_pcie_get_resources_2_1_0(struct qcom_pcie *pcie) struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0; struct dw_pcie *pci = pcie->pci; struct device *dev = pci->dev; + int ret; - res->vdda = devm_regulator_get(dev, "vdda"); - if (IS_ERR(res->vdda)) - return PTR_ERR(res->vdda); - - res->vdda_phy = devm_regulator_get(dev, "vdda_phy"); - if (IS_ERR(res->vdda_phy)) - return PTR_ERR(res->vdda_phy); - - res->vdda_refclk = devm_regulator_get(dev, 
"vdda_refclk"); - if (IS_ERR(res->vdda_refclk)) - return PTR_ERR(res->vdda_refclk); + res->supplies[0].supply = "vdda"; + res->supplies[1].supply = "vdda_phy"; + res->supplies[2].supply = "vdda_refclk"; + ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(res->supplies), + res->supplies); + if (ret) + return ret; res->iface_clk = devm_clk_get(dev, "iface"); if (IS_ERR(res->iface_clk)) @@ -273,9 +264,7 @@ static void qcom_pcie_deinit_2_1_0(struct qcom_pcie *pcie) clk_disable_unprepare(res->iface_clk); clk_disable_unprepare(res->core_clk); clk_disable_unprepare(res->phy_clk); - regulator_disable(res->vdda); - regulator_disable(res->vdda_phy); - regulator_disable(res->vdda_refclk); + regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); } static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie) @@ -286,24 +275,12 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie) u32 val; int ret; - ret = regulator_enable(res->vdda); - if (ret) { - dev_err(dev, "cannot enable vdda regulator\n"); + ret = regulator_bulk_enable(ARRAY_SIZE(res->supplies), res->supplies); + if (ret < 0) { + dev_err(dev, "cannot enable regulators\n"); return ret; } - ret = regulator_enable(res->vdda_refclk); - if (ret) { - dev_err(dev, "cannot enable vdda_refclk regulator\n"); - goto err_refclk; - } - - ret = regulator_enable(res->vdda_phy); - if (ret) { - dev_err(dev, "cannot enable vdda_phy regulator\n"); - goto err_vdda_phy; - } - ret = reset_control_assert(res->ahb_reset); if (ret) { dev_err(dev, "cannot assert ahb reset\n"); @@ -387,11 +364,7 @@ err_clk_core: err_clk_phy: clk_disable_unprepare(res->iface_clk); err_assert_ahb: - regulator_disable(res->vdda_phy); -err_vdda_phy: - regulator_disable(res->vdda_refclk); -err_refclk: - regulator_disable(res->vdda); + regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); return ret; } @@ -521,6 +494,14 @@ static int qcom_pcie_get_resources_2_3_2(struct qcom_pcie *pcie) struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2; struct dw_pcie *pci = pcie->pci; struct device *dev = pci->dev; + int ret; + + res->supplies[0].supply = "vdda"; + res->supplies[1].supply = "vddpe-3v3"; + ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(res->supplies), + res->supplies); + if (ret) + return ret; res->aux_clk = devm_clk_get(dev, "aux"); if (IS_ERR(res->aux_clk)) @@ -550,6 +531,8 @@ static void qcom_pcie_deinit_2_3_2(struct qcom_pcie *pcie) clk_disable_unprepare(res->master_clk); clk_disable_unprepare(res->cfg_clk); clk_disable_unprepare(res->aux_clk); + + regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); } static void qcom_pcie_post_deinit_2_3_2(struct qcom_pcie *pcie) @@ -567,10 +550,16 @@ static int qcom_pcie_init_2_3_2(struct qcom_pcie *pcie) u32 val; int ret; + ret = regulator_bulk_enable(ARRAY_SIZE(res->supplies), res->supplies); + if (ret < 0) { + dev_err(dev, "cannot enable regulators\n"); + return ret; + } + ret = clk_prepare_enable(res->aux_clk); if (ret) { dev_err(dev, "cannot prepare/enable aux clock\n"); - return ret; + goto err_aux_clk; } ret = clk_prepare_enable(res->cfg_clk); @@ -621,6 +610,9 @@ err_master_clk: err_cfg_clk: clk_disable_unprepare(res->aux_clk); +err_aux_clk: + regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); + return ret; } @@ -1262,15 +1254,6 @@ static int qcom_pcie_probe(struct platform_device *pdev) pp->msi_irq = platform_get_irq_byname(pdev, "msi"); if (pp->msi_irq < 0) return pp->msi_irq; - - ret = devm_request_irq(dev, pp->msi_irq, - qcom_pcie_msi_irq_handler, - IRQF_SHARED | IRQF_NO_THREAD, - 
"qcom-pcie-msi", pp); - if (ret) { - dev_err(dev, "cannot request msi irq\n"); - return ret; - } } ret = phy_init(pcie->phy); diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c index 64d8a17f8094..7cef85124325 100644 --- a/drivers/pci/endpoint/functions/pci-epf-test.c +++ b/drivers/pci/endpoint/functions/pci-epf-test.c @@ -70,7 +70,7 @@ struct pci_epf_test_data { bool linkup_notifier; }; -static int bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 }; +static size_t bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 }; static int pci_epf_test_copy(struct pci_epf_test *epf_test) { @@ -344,21 +344,23 @@ static void pci_epf_test_unbind(struct pci_epf *epf) { struct pci_epf_test *epf_test = epf_get_drvdata(epf); struct pci_epc *epc = epf->epc; + struct pci_epf_bar *epf_bar; int bar; cancel_delayed_work(&epf_test->cmd_handler); pci_epc_stop(epc); for (bar = BAR_0; bar <= BAR_5; bar++) { + epf_bar = &epf->bar[bar]; + if (epf_test->reg[bar]) { pci_epf_free_space(epf, epf_test->reg[bar], bar); - pci_epc_clear_bar(epc, epf->func_no, bar); + pci_epc_clear_bar(epc, epf->func_no, epf_bar); } } } static int pci_epf_test_set_bar(struct pci_epf *epf) { - int flags; int bar; int ret; struct pci_epf_bar *epf_bar; @@ -367,21 +369,27 @@ static int pci_epf_test_set_bar(struct pci_epf *epf) struct pci_epf_test *epf_test = epf_get_drvdata(epf); enum pci_barno test_reg_bar = epf_test->test_reg_bar; - flags = PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_32; - if (sizeof(dma_addr_t) == 0x8) - flags |= PCI_BASE_ADDRESS_MEM_TYPE_64; - for (bar = BAR_0; bar <= BAR_5; bar++) { epf_bar = &epf->bar[bar]; - ret = pci_epc_set_bar(epc, epf->func_no, bar, - epf_bar->phys_addr, - epf_bar->size, flags); + + epf_bar->flags |= upper_32_bits(epf_bar->size) ? + PCI_BASE_ADDRESS_MEM_TYPE_64 : + PCI_BASE_ADDRESS_MEM_TYPE_32; + + ret = pci_epc_set_bar(epc, epf->func_no, epf_bar); if (ret) { pci_epf_free_space(epf, epf_test->reg[bar], bar); dev_err(dev, "failed to set BAR%d\n", bar); if (bar == test_reg_bar) return ret; } + /* + * pci_epc_set_bar() sets PCI_BASE_ADDRESS_MEM_TYPE_64 + * if the specific implementation required a 64-bit BAR, + * even if we only requested a 32-bit BAR. + */ + if (epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64) + bar++; } return 0; diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c index e245bba0ab53..b0ee42739c3c 100644 --- a/drivers/pci/endpoint/pci-epc-core.c +++ b/drivers/pci/endpoint/pci-epc-core.c @@ -276,22 +276,25 @@ EXPORT_SYMBOL_GPL(pci_epc_map_addr); * pci_epc_clear_bar() - reset the BAR * @epc: the EPC device for which the BAR has to be cleared * @func_no: the endpoint function number in the EPC device - * @bar: the BAR number that has to be reset + * @epf_bar: the struct epf_bar that contains the BAR information * * Invoke to reset the BAR of the endpoint device. 
*/ -void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no, int bar) +void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no, + struct pci_epf_bar *epf_bar) { unsigned long flags; - if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions) + if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions || + (epf_bar->barno == BAR_5 && + epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64)) return; if (!epc->ops->clear_bar) return; spin_lock_irqsave(&epc->lock, flags); - epc->ops->clear_bar(epc, func_no, bar); + epc->ops->clear_bar(epc, func_no, epf_bar); spin_unlock_irqrestore(&epc->lock, flags); } EXPORT_SYMBOL_GPL(pci_epc_clear_bar); @@ -300,26 +303,31 @@ EXPORT_SYMBOL_GPL(pci_epc_clear_bar); * pci_epc_set_bar() - configure BAR in order for host to assign PCI addr space * @epc: the EPC device on which BAR has to be configured * @func_no: the endpoint function number in the EPC device - * @bar: the BAR number that has to be configured - * @size: the size of the addr space - * @flags: specify memory allocation/io allocation/32bit address/64 bit address + * @epf_bar: the struct epf_bar that contains the BAR information * * Invoke to configure the BAR of the endpoint device. */ -int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, enum pci_barno bar, - dma_addr_t bar_phys, size_t size, int flags) +int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, + struct pci_epf_bar *epf_bar) { int ret; unsigned long irq_flags; - - if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions) + int flags = epf_bar->flags; + + if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions || + (epf_bar->barno == BAR_5 && + flags & PCI_BASE_ADDRESS_MEM_TYPE_64) || + (flags & PCI_BASE_ADDRESS_SPACE_IO && + flags & PCI_BASE_ADDRESS_IO_MASK) || + (upper_32_bits(epf_bar->size) && + !(flags & PCI_BASE_ADDRESS_MEM_TYPE_64))) return -EINVAL; if (!epc->ops->set_bar) return 0; spin_lock_irqsave(&epc->lock, irq_flags); - ret = epc->ops->set_bar(epc, func_no, bar, bar_phys, size, flags); + ret = epc->ops->set_bar(epc, func_no, epf_bar); spin_unlock_irqrestore(&epc->lock, irq_flags); return ret; diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c index 766ce1dca2ec..465b5f058b6d 100644 --- a/drivers/pci/endpoint/pci-epf-core.c +++ b/drivers/pci/endpoint/pci-epf-core.c @@ -98,6 +98,8 @@ void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar) epf->bar[bar].phys_addr = 0; epf->bar[bar].size = 0; + epf->bar[bar].barno = 0; + epf->bar[bar].flags = 0; } EXPORT_SYMBOL_GPL(pci_epf_free_space); @@ -126,6 +128,8 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar) epf->bar[bar].phys_addr = phys_addr; epf->bar[bar].size = size; + epf->bar[bar].barno = bar; + epf->bar[bar].flags = PCI_BASE_ADDRESS_SPACE_MEMORY; return space; } @@ -200,29 +204,17 @@ struct pci_epf *pci_epf_create(const char *name) int ret; struct pci_epf *epf; struct device *dev; - char *func_name; - char *buf; + int len; epf = kzalloc(sizeof(*epf), GFP_KERNEL); - if (!epf) { - ret = -ENOMEM; - goto err_ret; - } - - buf = kstrdup(name, GFP_KERNEL); - if (!buf) { - ret = -ENOMEM; - goto free_epf; - } - - func_name = buf; - buf = strchrnul(buf, '.'); - *buf = '\0'; + if (!epf) + return ERR_PTR(-ENOMEM); - epf->name = kstrdup(func_name, GFP_KERNEL); + len = strchrnul(name, '.') - name; + epf->name = kstrndup(name, len, GFP_KERNEL); if (!epf->name) { - ret = -ENOMEM; - goto free_func_name; + kfree(epf); + return ERR_PTR(-ENOMEM); } dev = &epf->dev; @@ -231,28 +223,18 @@ struct pci_epf 
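/*
 * Illustrative sketch, not from the patch: the sanity checks that
 * pci_epc_set_bar() gains above, restated as a stand-alone helper. BAR 5
 * has no following register to hold the upper address half, an I/O BAR
 * must not carry reserved flag bits, and a region larger than 4 GiB
 * requires a 64-bit BAR. The helper is hypothetical; the flag macros are
 * the standard ones from pci_regs.h.
 */
#include <linux/kernel.h>
#include <linux/pci.h>

static bool example_bar_request_valid(int barno, u64 size, int flags)
{
        if (barno == 5 && (flags & PCI_BASE_ADDRESS_MEM_TYPE_64))
                return false;   /* no BAR 6 for the upper 32 bits */

        if ((flags & PCI_BASE_ADDRESS_SPACE_IO) &&
            (flags & PCI_BASE_ADDRESS_IO_MASK))
                return false;   /* reserved bits set in an I/O BAR */

        if (upper_32_bits(size) && !(flags & PCI_BASE_ADDRESS_MEM_TYPE_64))
                return false;   /* > 4 GiB needs a 64-bit BAR */

        return true;
}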
*pci_epf_create(const char *name) dev->type = &pci_epf_type; ret = dev_set_name(dev, "%s", name); - if (ret) - goto put_dev; + if (ret) { + put_device(dev); + return ERR_PTR(ret); + } ret = device_add(dev); - if (ret) - goto put_dev; + if (ret) { + put_device(dev); + return ERR_PTR(ret); + } - kfree(func_name); return epf; - -put_dev: - put_device(dev); - kfree(epf->name); - -free_func_name: - kfree(func_name); - -free_epf: - kfree(epf); - -err_ret: - return ERR_PTR(ret); } EXPORT_SYMBOL_GPL(pci_epf_create); diff --git a/drivers/pci/host-bridge.c b/drivers/pci/host-bridge.c index ac8d81268296..e01d53f5b32f 100644 --- a/drivers/pci/host-bridge.c +++ b/drivers/pci/host-bridge.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * host bridge related code + * Host bridge related code */ #include <linux/kernel.h> diff --git a/drivers/pci/host/Kconfig b/drivers/pci/host/Kconfig index a4ed7484d127..0d0177ce436c 100644 --- a/drivers/pci/host/Kconfig +++ b/drivers/pci/host/Kconfig @@ -38,6 +38,7 @@ config PCI_FTPCI100 config PCI_TEGRA bool "NVIDIA Tegra PCIe controller" depends on ARCH_TEGRA + depends on PCI_MSI_IRQ_DOMAIN help Say Y here if you want support for the PCIe host controller found on NVIDIA Tegra SoCs. @@ -215,7 +216,6 @@ config PCIE_TANGO_SMP8759 config VMD depends on PCI_MSI && X86_64 && SRCU tristate "Intel Volume Management Device Driver" - default N ---help--- Adds support for the Intel Volume Management Device (VMD). VMD is a secondary PCI host bridge that allows PCI Express root ports, diff --git a/drivers/pci/host/pci-ftpci100.c b/drivers/pci/host/pci-ftpci100.c index b9617d1c1d48..5008fd87956a 100644 --- a/drivers/pci/host/pci-ftpci100.c +++ b/drivers/pci/host/pci-ftpci100.c @@ -586,11 +586,11 @@ static int faraday_pci_probe(struct platform_device *pdev) * We encode bridge variants here, we have at least two so it doesn't * hurt to have infrastructure to encompass future variants as well. */ -const struct faraday_pci_variant faraday_regular = { +static const struct faraday_pci_variant faraday_regular = { .cascaded_irq = true, }; -const struct faraday_pci_variant faraday_dual = { +static const struct faraday_pci_variant faraday_dual = { .cascaded_irq = false, }; diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c index 2faf38eab785..50cdefe3f6d3 100644 --- a/drivers/pci/host/pci-hyperv.c +++ b/drivers/pci/host/pci-hyperv.c @@ -447,7 +447,6 @@ struct hv_pcibus_device { spinlock_t device_list_lock; /* Protect lists below */ void __iomem *cfg_addr; - struct semaphore enum_sem; struct list_head resources_for_children; struct list_head children; @@ -461,6 +460,8 @@ struct hv_pcibus_device { struct retarget_msi_interrupt retarget_msi_interrupt_params; spinlock_t retarget_msi_interrupt_lock; + + struct workqueue_struct *wq; }; /* @@ -520,6 +521,8 @@ struct hv_pci_compl { s32 completion_status; }; +static void hv_pci_onchannelcallback(void *context); + /** * hv_pci_generic_compl() - Invoked for a completion packet * @context: Set up by the sender of the packet. @@ -653,7 +656,7 @@ static void _hv_pcifront_read_config(struct hv_pci_dev *hpdev, int where, break; } /* - * Make sure the write was done before we release the spinlock + * Make sure the read was done before we release the spinlock * allowing consecutive reads/writes. 
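/*
 * Illustrative sketch, not from the patch: the simplified name handling in
 * the pci_epf_create() rework above. Instead of duplicating the whole
 * string and truncating it in place, the length of the part before any '.'
 * is computed with strchrnul() and copied once with kstrndup().
 */
#include <linux/slab.h>
#include <linux/string.h>

static char *example_epf_name(const char *name)
{
        /* "pci_epf_test.0" -> "pci_epf_test"; with no '.', copy it all */
        size_t len = strchrnul(name, '.') - name;

        return kstrndup(name, len, GFP_KERNEL);
}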
*/ mb(); @@ -664,6 +667,31 @@ static void _hv_pcifront_read_config(struct hv_pci_dev *hpdev, int where, } } +static u16 hv_pcifront_get_vendor_id(struct hv_pci_dev *hpdev) +{ + u16 ret; + unsigned long flags; + void __iomem *addr = hpdev->hbus->cfg_addr + CFG_PAGE_OFFSET + + PCI_VENDOR_ID; + + spin_lock_irqsave(&hpdev->hbus->config_lock, flags); + + /* Choose the function to be read. (See comment above) */ + writel(hpdev->desc.win_slot.slot, hpdev->hbus->cfg_addr); + /* Make sure the function was chosen before we start reading. */ + mb(); + /* Read from that function's config space. */ + ret = readw(addr); + /* + * mb() is not required here, because the spin_unlock_irqrestore() + * is a barrier. + */ + + spin_unlock_irqrestore(&hpdev->hbus->config_lock, flags); + + return ret; +} + /** * _hv_pcifront_write_config() - Internal PCI config write * @hpdev: The PCI driver's representation of the device @@ -1106,8 +1134,37 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) * Since this function is called with IRQ locks held, can't * do normal wait for completion; instead poll. */ - while (!try_wait_for_completion(&comp.comp_pkt.host_event)) + while (!try_wait_for_completion(&comp.comp_pkt.host_event)) { + /* 0xFFFF means an invalid PCI VENDOR ID. */ + if (hv_pcifront_get_vendor_id(hpdev) == 0xFFFF) { + dev_err_once(&hbus->hdev->device, + "the device has gone\n"); + goto free_int_desc; + } + + /* + * When the higher level interrupt code calls us with + * interrupt disabled, we must poll the channel by calling + * the channel callback directly when channel->target_cpu is + * the current CPU. When the higher level interrupt code + * calls us with interrupt enabled, let's add the + * local_bh_disable()/enable() to avoid race. + */ + local_bh_disable(); + + if (hbus->hdev->channel->target_cpu == smp_processor_id()) + hv_pci_onchannelcallback(hbus); + + local_bh_enable(); + + if (hpdev->state == hv_pcichild_ejecting) { + dev_err_once(&hbus->hdev->device, + "the device is being ejected\n"); + goto free_int_desc; + } + udelay(100); + } if (comp.comp_pkt.completion_status < 0) { dev_err(&hbus->hdev->device, @@ -1590,12 +1647,8 @@ static struct hv_pci_dev *get_pcichild_wslot(struct hv_pcibus_device *hbus, * It must also treat the omission of a previously observed device as * notification that the device no longer exists. * - * Note that this function is a work item, and it may not be - * invoked in the order that it was queued. Back to back - * updates of the list of present devices may involve queuing - * multiple work items, and this one may run before ones that - * were sent later. As such, this function only does something - * if is the last one in the queue. + * Note that this function is serialized with hv_eject_device_work(), + * because both are pushed to the ordered workqueue hbus->wq. */ static void pci_devices_present_work(struct work_struct *work) { @@ -1616,11 +1669,6 @@ static void pci_devices_present_work(struct work_struct *work) INIT_LIST_HEAD(&removed); - if (down_interruptible(&hbus->enum_sem)) { - put_hvpcibus(hbus); - return; - } - /* Pull this off the queue and process it if it was the last one. 
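/*
 * Illustrative sketch, not from the patch: the shape of the polling loop
 * hv_compose_msi_msg() grows above. It may run with interrupts disabled,
 * so it cannot sleep on the completion; instead it spins, bails out if the
 * device disappears (an all-ones vendor ID) or is being ejected, and kicks
 * the channel callback so the host's reply can be consumed on this CPU.
 * example_device_present() and example_poll_channel() are hypothetical
 * stand-ins for the Hyper-V specific helpers.
 */
#include <linux/bottom_half.h>
#include <linux/completion.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/types.h>

static bool example_device_present(void) { return true; }  /* hypothetical */
static void example_poll_channel(void) { }                 /* hypothetical */

static int example_poll_for_reply(struct completion *done)
{
        while (!try_wait_for_completion(done)) {
                if (!example_device_present())
                        return -ENODEV;  /* device vanished mid-request */

                local_bh_disable();
                example_poll_channel(); /* deliver pending host messages */
                local_bh_enable();

                udelay(100);
        }

        return 0;
}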
*/ spin_lock_irqsave(&hbus->device_list_lock, flags); while (!list_empty(&hbus->dr_list)) { @@ -1637,7 +1685,6 @@ static void pci_devices_present_work(struct work_struct *work) spin_unlock_irqrestore(&hbus->device_list_lock, flags); if (!dr) { - up(&hbus->enum_sem); put_hvpcibus(hbus); return; } @@ -1724,7 +1771,6 @@ static void pci_devices_present_work(struct work_struct *work) break; } - up(&hbus->enum_sem); put_hvpcibus(hbus); kfree(dr); } @@ -1743,6 +1789,7 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus, struct hv_dr_state *dr; struct hv_dr_work *dr_wrk; unsigned long flags; + bool pending_dr; dr_wrk = kzalloc(sizeof(*dr_wrk), GFP_NOWAIT); if (!dr_wrk) @@ -1766,11 +1813,21 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus, } spin_lock_irqsave(&hbus->device_list_lock, flags); + /* + * If pending_dr is true, we have already queued a work, + * which will see the new dr. Otherwise, we need to + * queue a new work. + */ + pending_dr = !list_empty(&hbus->dr_list); list_add_tail(&dr->list_entry, &hbus->dr_list); spin_unlock_irqrestore(&hbus->device_list_lock, flags); - get_hvpcibus(hbus); - schedule_work(&dr_wrk->wrk); + if (pending_dr) { + kfree(dr_wrk); + } else { + get_hvpcibus(hbus); + queue_work(hbus->wq, &dr_wrk->wrk); + } } /** @@ -1796,10 +1853,7 @@ static void hv_eject_device_work(struct work_struct *work) hpdev = container_of(work, struct hv_pci_dev, wrk); - if (hpdev->state != hv_pcichild_ejecting) { - put_pcichild(hpdev, hv_pcidev_ref_pnp); - return; - } + WARN_ON(hpdev->state != hv_pcichild_ejecting); /* * Ejection can come before or after the PCI bus has been set up, so @@ -1848,7 +1902,7 @@ static void hv_pci_eject_device(struct hv_pci_dev *hpdev) get_pcichild(hpdev, hv_pcidev_ref_pnp); INIT_WORK(&hpdev->wrk, hv_eject_device_work); get_hvpcibus(hpdev->hbus); - schedule_work(&hpdev->wrk); + queue_work(hpdev->hbus->wq, &hpdev->wrk); } /** @@ -2461,13 +2515,18 @@ static int hv_pci_probe(struct hv_device *hdev, spin_lock_init(&hbus->config_lock); spin_lock_init(&hbus->device_list_lock); spin_lock_init(&hbus->retarget_msi_interrupt_lock); - sema_init(&hbus->enum_sem, 1); init_completion(&hbus->remove_event); + hbus->wq = alloc_ordered_workqueue("hv_pci_%x", 0, + hbus->sysdata.domain); + if (!hbus->wq) { + ret = -ENOMEM; + goto free_bus; + } ret = vmbus_open(hdev->channel, pci_ring_size, pci_ring_size, NULL, 0, hv_pci_onchannelcallback, hbus); if (ret) - goto free_bus; + goto destroy_wq; hv_set_drvdata(hdev, hbus); @@ -2536,6 +2595,8 @@ free_config: hv_free_config_window(hbus); close: vmbus_close(hdev->channel); +destroy_wq: + destroy_workqueue(hbus->wq); free_bus: free_page((unsigned long)hbus); return ret; @@ -2615,6 +2676,7 @@ static int hv_pci_remove(struct hv_device *hdev) irq_domain_free_fwnode(hbus->sysdata.fwnode); put_hvpcibus(hbus); wait_for_completion(&hbus->remove_event); + destroy_workqueue(hbus->wq); free_page((unsigned long)hbus); return 0; } diff --git a/drivers/pci/host/pci-rcar-gen2.c b/drivers/pci/host/pci-rcar-gen2.c index a28370bb2b2a..dd4f1a6b57c5 100644 --- a/drivers/pci/host/pci-rcar-gen2.c +++ b/drivers/pci/host/pci-rcar-gen2.c @@ -53,7 +53,6 @@ #define RCAR_PCI_INT_PME (1 << 19) #define RCAR_PCI_INT_ALLERRORS (RCAR_PCI_INT_SIGTABORT | \ RCAR_PCI_INT_SIGRETABORT | \ - RCAR_PCI_INT_SIGRETABORT | \ RCAR_PCI_INT_REMABORT | \ RCAR_PCI_INT_PERR | \ RCAR_PCI_INT_SIGSERR | \ diff --git a/drivers/pci/host/pci-tegra.c b/drivers/pci/host/pci-tegra.c index dd9b3bcc41c3..389e74be846c 100644 --- a/drivers/pci/host/pci-tegra.c +++ 
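/*
 * Illustrative sketch, not from the patch: the serialization scheme that
 * replaces hv_pcibus_device's enum_sem above. An ordered workqueue runs
 * one item at a time in queueing order, so device-presence work and eject
 * work can never race with each other. Names other than the workqueue API
 * are hypothetical.
 */
#include <linux/errno.h>
#include <linux/workqueue.h>

struct example_bus {
        struct workqueue_struct *wq;
        struct work_struct presence_work;
};

static void example_presence_fn(struct work_struct *work)
{
        /* strictly serialized with every other item queued on bus->wq */
}

static int example_bus_init(struct example_bus *bus, int domain)
{
        bus->wq = alloc_ordered_workqueue("example_pci_%x", 0, domain);
        if (!bus->wq)
                return -ENOMEM;

        INIT_WORK(&bus->presence_work, example_presence_fn);
        return 0;
}

static void example_bus_event(struct example_bus *bus)
{
        queue_work(bus->wq, &bus->presence_work);
}

static void example_bus_exit(struct example_bus *bus)
{
        destroy_workqueue(bus->wq);     /* drains queued work first */
}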
b/drivers/pci/host/pci-tegra.c @@ -18,10 +18,12 @@ #include <linux/delay.h> #include <linux/export.h> #include <linux/interrupt.h> +#include <linux/iopoll.h> #include <linux/irq.h> #include <linux/irqdomain.h> #include <linux/kernel.h> #include <linux/init.h> +#include <linux/module.h> #include <linux/msi.h> #include <linux/of_address.h> #include <linux/of_pci.h> @@ -139,6 +141,8 @@ #define AFI_INTR_EN_FPCI_TIMEOUT (1 << 7) #define AFI_INTR_EN_PRSNT_SENSE (1 << 8) +#define AFI_PCIE_PME 0xf0 + #define AFI_PCIE_CONFIG 0x0f8 #define AFI_PCIE_CONFIG_PCIE_DISABLE(x) (1 << ((x) + 1)) #define AFI_PCIE_CONFIG_PCIE_DISABLE_ALL 0xe @@ -219,6 +223,8 @@ #define PADS_REFCLK_CFG_PREDI_SHIFT 8 /* 11:8 */ #define PADS_REFCLK_CFG_DRVI_SHIFT 12 /* 15:12 */ +#define PME_ACK_TIMEOUT 10000 + struct tegra_msi { struct msi_controller chip; DECLARE_BITMAP(used, INT_PCI_MSI_NR); @@ -230,8 +236,16 @@ struct tegra_msi { }; /* used to differentiate between Tegra SoC generations */ +struct tegra_pcie_port_soc { + struct { + u8 turnoff_bit; + u8 ack_bit; + } pme; +}; + struct tegra_pcie_soc { unsigned int num_ports; + const struct tegra_pcie_port_soc *ports; unsigned int msi_base_shift; u32 pads_pll_ctl; u32 tx_ref_sel; @@ -549,14 +563,25 @@ static int tegra_pcie_request_resources(struct tegra_pcie *pcie) pci_add_resource(windows, &pcie->busn); err = devm_request_pci_bus_resources(dev, windows); - if (err < 0) + if (err < 0) { + pci_free_resource_list(windows); return err; + } pci_remap_iospace(&pcie->pio, pcie->io.start); return 0; } +static void tegra_pcie_free_resources(struct tegra_pcie *pcie) +{ + struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); + struct list_head *windows = &host->windows; + + pci_unmap_iospace(&pcie->pio); + pci_free_resource_list(windows); +} + static int tegra_pcie_map_irq(const struct pci_dev *pdev, u8 slot, u8 pin) { struct tegra_pcie *pcie = pdev->bus->sysdata; @@ -966,24 +991,35 @@ static int tegra_pcie_enable_controller(struct tegra_pcie *pcie) return 0; } -static void tegra_pcie_power_off(struct tegra_pcie *pcie) +static void tegra_pcie_disable_controller(struct tegra_pcie *pcie) { - struct device *dev = pcie->dev; - const struct tegra_pcie_soc *soc = pcie->soc; int err; - /* TODO: disable and unprepare clocks? 
*/ + reset_control_assert(pcie->pcie_xrst); - if (soc->program_uphy) { + if (pcie->soc->program_uphy) { err = tegra_pcie_phy_power_off(pcie); if (err < 0) - dev_err(dev, "failed to power off PHY(s): %d\n", err); + dev_err(pcie->dev, "failed to power off PHY(s): %d\n", + err); } +} + +static void tegra_pcie_power_off(struct tegra_pcie *pcie) +{ + struct device *dev = pcie->dev; + const struct tegra_pcie_soc *soc = pcie->soc; + int err; - reset_control_assert(pcie->pcie_xrst); reset_control_assert(pcie->afi_rst); reset_control_assert(pcie->pex_rst); + clk_disable_unprepare(pcie->pll_e); + if (soc->has_cml_clk) + clk_disable_unprepare(pcie->cml_clk); + clk_disable_unprepare(pcie->afi_clk); + clk_disable_unprepare(pcie->pex_clk); + if (!dev->pm_domain) tegra_powergate_power_off(TEGRA_POWERGATE_PCIE); @@ -1192,6 +1228,30 @@ static int tegra_pcie_phys_get(struct tegra_pcie *pcie) return 0; } +static void tegra_pcie_phys_put(struct tegra_pcie *pcie) +{ + struct tegra_pcie_port *port; + struct device *dev = pcie->dev; + int err, i; + + if (pcie->legacy_phy) { + err = phy_exit(pcie->phy); + if (err < 0) + dev_err(dev, "failed to teardown PHY: %d\n", err); + return; + } + + list_for_each_entry(port, &pcie->ports, list) { + for (i = 0; i < port->lanes; i++) { + err = phy_exit(port->phys[i]); + if (err < 0) + dev_err(dev, "failed to teardown PHY#%u: %d\n", + i, err); + } + } +} + + static int tegra_pcie_get_resources(struct tegra_pcie *pcie) { struct device *dev = pcie->dev; @@ -1220,31 +1280,25 @@ static int tegra_pcie_get_resources(struct tegra_pcie *pcie) } } - err = tegra_pcie_power_on(pcie); - if (err) { - dev_err(dev, "failed to power up: %d\n", err); - return err; - } - pads = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pads"); pcie->pads = devm_ioremap_resource(dev, pads); if (IS_ERR(pcie->pads)) { err = PTR_ERR(pcie->pads); - goto poweroff; + goto phys_put; } afi = platform_get_resource_byname(pdev, IORESOURCE_MEM, "afi"); pcie->afi = devm_ioremap_resource(dev, afi); if (IS_ERR(pcie->afi)) { err = PTR_ERR(pcie->afi); - goto poweroff; + goto phys_put; } /* request configuration space, but remap later, on demand */ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cs"); if (!res) { err = -EADDRNOTAVAIL; - goto poweroff; + goto phys_put; } pcie->cs = *res; @@ -1255,14 +1309,14 @@ static int tegra_pcie_get_resources(struct tegra_pcie *pcie) pcie->cfg = devm_ioremap_resource(dev, &pcie->cs); if (IS_ERR(pcie->cfg)) { err = PTR_ERR(pcie->cfg); - goto poweroff; + goto phys_put; } /* request interrupt */ err = platform_get_irq_byname(pdev, "intr"); if (err < 0) { dev_err(dev, "failed to get IRQ: %d\n", err); - goto poweroff; + goto phys_put; } pcie->irq = err; @@ -1270,36 +1324,56 @@ static int tegra_pcie_get_resources(struct tegra_pcie *pcie) err = request_irq(pcie->irq, tegra_pcie_isr, IRQF_SHARED, "PCIE", pcie); if (err) { dev_err(dev, "failed to register IRQ: %d\n", err); - goto poweroff; + goto phys_put; } return 0; -poweroff: - tegra_pcie_power_off(pcie); +phys_put: + if (soc->program_uphy) + tegra_pcie_phys_put(pcie); return err; } static int tegra_pcie_put_resources(struct tegra_pcie *pcie) { - struct device *dev = pcie->dev; const struct tegra_pcie_soc *soc = pcie->soc; - int err; if (pcie->irq > 0) free_irq(pcie->irq, pcie); - tegra_pcie_power_off(pcie); - - if (soc->program_uphy) { - err = phy_exit(pcie->phy); - if (err < 0) - dev_err(dev, "failed to teardown PHY: %d\n", err); - } + if (soc->program_uphy) + tegra_pcie_phys_put(pcie); return 0; } +static void 
tegra_pcie_pme_turnoff(struct tegra_pcie_port *port) +{ + struct tegra_pcie *pcie = port->pcie; + const struct tegra_pcie_soc *soc = pcie->soc; + int err; + u32 val; + u8 ack_bit; + + val = afi_readl(pcie, AFI_PCIE_PME); + val |= (0x1 << soc->ports[port->index].pme.turnoff_bit); + afi_writel(pcie, val, AFI_PCIE_PME); + + ack_bit = soc->ports[port->index].pme.ack_bit; + err = readl_poll_timeout(pcie->afi + AFI_PCIE_PME, val, + val & (0x1 << ack_bit), 1, PME_ACK_TIMEOUT); + if (err) + dev_err(pcie->dev, "PME Ack is not received on port: %d\n", + port->index); + + usleep_range(10000, 11000); + + val = afi_readl(pcie, AFI_PCIE_PME); + val &= ~(0x1 << soc->ports[port->index].pme.turnoff_bit); + afi_writel(pcie, val, AFI_PCIE_PME); +} + static int tegra_msi_alloc(struct tegra_msi *chip) { int msi; @@ -1436,15 +1510,13 @@ static const struct irq_domain_ops msi_domain_ops = { .map = tegra_msi_map, }; -static int tegra_pcie_enable_msi(struct tegra_pcie *pcie) +static int tegra_pcie_msi_setup(struct tegra_pcie *pcie) { struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); struct platform_device *pdev = to_platform_device(pcie->dev); - const struct tegra_pcie_soc *soc = pcie->soc; struct tegra_msi *msi = &pcie->msi; struct device *dev = pcie->dev; int err; - u32 reg; mutex_init(&msi->lock); @@ -1477,6 +1549,20 @@ static int tegra_pcie_enable_msi(struct tegra_pcie *pcie) /* setup AFI/FPCI range */ msi->pages = __get_free_pages(GFP_KERNEL, 0); msi->phys = virt_to_phys((void *)msi->pages); + host->msi = &msi->chip; + + return 0; + +err: + irq_domain_remove(msi->domain); + return err; +} + +static void tegra_pcie_enable_msi(struct tegra_pcie *pcie) +{ + const struct tegra_pcie_soc *soc = pcie->soc; + struct tegra_msi *msi = &pcie->msi; + u32 reg; afi_writel(pcie, msi->phys >> soc->msi_base_shift, AFI_MSI_FPCI_BAR_ST); afi_writel(pcie, msi->phys, AFI_MSI_AXI_BAR_ST); @@ -1497,20 +1583,29 @@ static int tegra_pcie_enable_msi(struct tegra_pcie *pcie) reg = afi_readl(pcie, AFI_INTR_MASK); reg |= AFI_INTR_MASK_MSI_MASK; afi_writel(pcie, reg, AFI_INTR_MASK); +} - host->msi = &msi->chip; +static void tegra_pcie_msi_teardown(struct tegra_pcie *pcie) +{ + struct tegra_msi *msi = &pcie->msi; + unsigned int i, irq; - return 0; + free_pages(msi->pages, 0); + + if (msi->irq > 0) + free_irq(msi->irq, pcie); + + for (i = 0; i < INT_PCI_MSI_NR; i++) { + irq = irq_find_mapping(msi->domain, i); + if (irq > 0) + irq_dispose_mapping(irq); + } -err: irq_domain_remove(msi->domain); - return err; } static int tegra_pcie_disable_msi(struct tegra_pcie *pcie) { - struct tegra_msi *msi = &pcie->msi; - unsigned int i, irq; u32 value; /* mask the MSI interrupt */ @@ -1528,19 +1623,6 @@ static int tegra_pcie_disable_msi(struct tegra_pcie *pcie) afi_writel(pcie, 0, AFI_MSI_EN_VEC6); afi_writel(pcie, 0, AFI_MSI_EN_VEC7); - free_pages(msi->pages, 0); - - if (msi->irq > 0) - free_irq(msi->irq, pcie); - - for (i = 0; i < INT_PCI_MSI_NR; i++) { - irq = irq_find_mapping(msi->domain, i); - if (irq > 0) - irq_dispose_mapping(irq); - } - - irq_domain_remove(msi->domain); - return 0; } @@ -2035,8 +2117,22 @@ static void tegra_pcie_enable_ports(struct tegra_pcie *pcie) } } +static void tegra_pcie_disable_ports(struct tegra_pcie *pcie) +{ + struct tegra_pcie_port *port, *tmp; + + list_for_each_entry_safe(port, tmp, &pcie->ports, list) + tegra_pcie_port_disable(port); +} + +static const struct tegra_pcie_port_soc tegra20_pcie_ports[] = { + { .pme.turnoff_bit = 0, .pme.ack_bit = 5 }, + { .pme.turnoff_bit = 8, .pme.ack_bit = 10 }, +}; + 
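/*
 * Illustrative sketch, not from the patch: the poll-for-acknowledge
 * pattern tegra_pcie_pme_turnoff() uses above. A turn-off bit is set in a
 * control register, then readl_poll_timeout() re-reads the register every
 * microsecond until the matching ack bit appears or 10 ms pass. The
 * register offset, bit numbers and iomem base are hypothetical.
 */
#include <linux/bitops.h>
#include <linux/io.h>
#include <linux/iopoll.h>

#define EXAMPLE_PME_REG         0xf0
#define EXAMPLE_ACK_TIMEOUT_US  10000

static int example_pme_turnoff(void __iomem *base, u8 turnoff_bit, u8 ack_bit)
{
        u32 val;

        val = readl(base + EXAMPLE_PME_REG);
        val |= BIT(turnoff_bit);
        writel(val, base + EXAMPLE_PME_REG);

        return readl_poll_timeout(base + EXAMPLE_PME_REG, val,
                                  val & BIT(ack_bit), 1,
                                  EXAMPLE_ACK_TIMEOUT_US);
}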
static const struct tegra_pcie_soc tegra20_pcie = { .num_ports = 2, + .ports = tegra20_pcie_ports, .msi_base_shift = 0, .pads_pll_ctl = PADS_PLL_CTL_TEGRA20, .tx_ref_sel = PADS_PLL_CTL_TXCLKREF_DIV10, @@ -2050,8 +2146,15 @@ static const struct tegra_pcie_soc tegra20_pcie = { .program_uphy = true, }; +static const struct tegra_pcie_port_soc tegra30_pcie_ports[] = { + { .pme.turnoff_bit = 0, .pme.ack_bit = 5 }, + { .pme.turnoff_bit = 8, .pme.ack_bit = 10 }, + { .pme.turnoff_bit = 16, .pme.ack_bit = 18 }, +}; + static const struct tegra_pcie_soc tegra30_pcie = { .num_ports = 3, + .ports = tegra30_pcie_ports, .msi_base_shift = 8, .pads_pll_ctl = PADS_PLL_CTL_TEGRA30, .tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN, @@ -2068,6 +2171,7 @@ static const struct tegra_pcie_soc tegra30_pcie = { static const struct tegra_pcie_soc tegra124_pcie = { .num_ports = 2, + .ports = tegra20_pcie_ports, .msi_base_shift = 8, .pads_pll_ctl = PADS_PLL_CTL_TEGRA30, .tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN, @@ -2083,6 +2187,7 @@ static const struct tegra_pcie_soc tegra124_pcie = { static const struct tegra_pcie_soc tegra210_pcie = { .num_ports = 2, + .ports = tegra20_pcie_ports, .msi_base_shift = 8, .pads_pll_ctl = PADS_PLL_CTL_TEGRA30, .tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN, @@ -2096,8 +2201,15 @@ static const struct tegra_pcie_soc tegra210_pcie = { .program_uphy = true, }; +static const struct tegra_pcie_port_soc tegra186_pcie_ports[] = { + { .pme.turnoff_bit = 0, .pme.ack_bit = 5 }, + { .pme.turnoff_bit = 8, .pme.ack_bit = 10 }, + { .pme.turnoff_bit = 12, .pme.ack_bit = 14 }, +}; + static const struct tegra_pcie_soc tegra186_pcie = { .num_ports = 3, + .ports = tegra186_pcie_ports, .msi_base_shift = 8, .pads_pll_ctl = PADS_PLL_CTL_TEGRA30, .tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN, @@ -2209,6 +2321,12 @@ static const struct file_operations tegra_pcie_ports_ops = { .release = seq_release, }; +static void tegra_pcie_debugfs_exit(struct tegra_pcie *pcie) +{ + debugfs_remove_recursive(pcie->debugfs); + pcie->debugfs = NULL; +} + static int tegra_pcie_debugfs_init(struct tegra_pcie *pcie) { struct dentry *file; @@ -2225,8 +2343,7 @@ static int tegra_pcie_debugfs_init(struct tegra_pcie *pcie) return 0; remove: - debugfs_remove_recursive(pcie->debugfs); - pcie->debugfs = NULL; + tegra_pcie_debugfs_exit(pcie); return -ENOMEM; } @@ -2244,6 +2361,7 @@ static int tegra_pcie_probe(struct platform_device *pdev) pcie = pci_host_bridge_priv(host); host->sysdata = pcie; + platform_set_drvdata(pdev, pcie); pcie->soc = of_device_get_match_data(dev); INIT_LIST_HEAD(&pcie->ports); @@ -2259,26 +2377,22 @@ static int tegra_pcie_probe(struct platform_device *pdev) return err; } - err = tegra_pcie_enable_controller(pcie); - if (err) - goto put_resources; - - err = tegra_pcie_request_resources(pcie); - if (err) + err = tegra_pcie_msi_setup(pcie); + if (err < 0) { + dev_err(dev, "failed to enable MSI support: %d\n", err); goto put_resources; + } - /* setup the AFI address translations */ - tegra_pcie_setup_translations(pcie); - - if (IS_ENABLED(CONFIG_PCI_MSI)) { - err = tegra_pcie_enable_msi(pcie); - if (err < 0) { - dev_err(dev, "failed to enable MSI support: %d\n", err); - goto put_resources; - } + pm_runtime_enable(pcie->dev); + err = pm_runtime_get_sync(pcie->dev); + if (err) { + dev_err(dev, "fail to enable pcie controller: %d\n", err); + goto teardown_msi; } - tegra_pcie_enable_ports(pcie); + err = tegra_pcie_request_resources(pcie); + if (err) + goto pm_runtime_put; host->busnr = pcie->busn.start; host->dev.parent = &pdev->dev; 
@@ -2289,7 +2403,7 @@ static int tegra_pcie_probe(struct platform_device *pdev) err = pci_scan_root_bus_bridge(host); if (err < 0) { dev_err(dev, "failed to register host: %d\n", err); - goto disable_msi; + goto free_resources; } pci_bus_size_bridges(host->bus); @@ -2308,20 +2422,108 @@ static int tegra_pcie_probe(struct platform_device *pdev) return 0; -disable_msi: - if (IS_ENABLED(CONFIG_PCI_MSI)) - tegra_pcie_disable_msi(pcie); +free_resources: + tegra_pcie_free_resources(pcie); +pm_runtime_put: + pm_runtime_put_sync(pcie->dev); + pm_runtime_disable(pcie->dev); +teardown_msi: + tegra_pcie_msi_teardown(pcie); put_resources: tegra_pcie_put_resources(pcie); return err; } +static int tegra_pcie_remove(struct platform_device *pdev) +{ + struct tegra_pcie *pcie = platform_get_drvdata(pdev); + struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); + struct tegra_pcie_port *port, *tmp; + + if (IS_ENABLED(CONFIG_DEBUG_FS)) + tegra_pcie_debugfs_exit(pcie); + + pci_stop_root_bus(host->bus); + pci_remove_root_bus(host->bus); + tegra_pcie_free_resources(pcie); + pm_runtime_put_sync(pcie->dev); + pm_runtime_disable(pcie->dev); + + if (IS_ENABLED(CONFIG_PCI_MSI)) + tegra_pcie_msi_teardown(pcie); + + tegra_pcie_put_resources(pcie); + + list_for_each_entry_safe(port, tmp, &pcie->ports, list) + tegra_pcie_port_free(port); + + return 0; +} + +static int __maybe_unused tegra_pcie_pm_suspend(struct device *dev) +{ + struct tegra_pcie *pcie = dev_get_drvdata(dev); + struct tegra_pcie_port *port; + + list_for_each_entry(port, &pcie->ports, list) + tegra_pcie_pme_turnoff(port); + + tegra_pcie_disable_ports(pcie); + + if (IS_ENABLED(CONFIG_PCI_MSI)) + tegra_pcie_disable_msi(pcie); + + tegra_pcie_disable_controller(pcie); + tegra_pcie_power_off(pcie); + + return 0; +} + +static int __maybe_unused tegra_pcie_pm_resume(struct device *dev) +{ + struct tegra_pcie *pcie = dev_get_drvdata(dev); + int err; + + err = tegra_pcie_power_on(pcie); + if (err) { + dev_err(dev, "tegra pcie power on fail: %d\n", err); + return err; + } + err = tegra_pcie_enable_controller(pcie); + if (err) { + dev_err(dev, "tegra pcie controller enable fail: %d\n", err); + goto poweroff; + } + tegra_pcie_setup_translations(pcie); + + if (IS_ENABLED(CONFIG_PCI_MSI)) + tegra_pcie_enable_msi(pcie); + + tegra_pcie_enable_ports(pcie); + + return 0; + +poweroff: + tegra_pcie_power_off(pcie); + + return err; +} + +static const struct dev_pm_ops tegra_pcie_pm_ops = { + SET_RUNTIME_PM_OPS(tegra_pcie_pm_suspend, tegra_pcie_pm_resume, NULL) + SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(tegra_pcie_pm_suspend, + tegra_pcie_pm_resume) +}; + static struct platform_driver tegra_pcie_driver = { .driver = { .name = "tegra-pcie", .of_match_table = tegra_pcie_of_match, .suppress_bind_attrs = true, + .pm = &tegra_pcie_pm_ops, }, .probe = tegra_pcie_probe, + .remove = tegra_pcie_remove, }; -builtin_platform_driver(tegra_pcie_driver); +module_platform_driver(tegra_pcie_driver); +MODULE_LICENSE("GPL"); diff --git a/drivers/pci/host/pci-v3-semi.c b/drivers/pci/host/pci-v3-semi.c index 7fef64869d19..0a4dea796663 100644 --- a/drivers/pci/host/pci-v3-semi.c +++ b/drivers/pci/host/pci-v3-semi.c @@ -673,7 +673,7 @@ static int v3_get_dma_range_config(struct v3_pci *v3, dev_err(v3->dev, "illegal dma memory chunk size\n"); return -EINVAL; break; - }; + } val |= V3_PCI_MAP_M_REG_EN | V3_PCI_MAP_M_ENABLE; *pci_map = val; diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c index df8e4bd5ddb2..f4c02da84e59 100644 --- a/drivers/pci/host/pci-xgene-msi.c +++ 
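/*
 * Illustrative sketch, not from the patch: the PM wiring the Tegra driver
 * gains above. The same suspend/resume pair serves both as runtime PM
 * callbacks and as noirq system-sleep callbacks, and the driver becomes a
 * module with a remove path. The callback bodies are hypothetical
 * placeholders.
 */
#include <linux/device.h>
#include <linux/pm.h>

static int __maybe_unused example_pcie_suspend(struct device *dev)
{
        /* turn off the link, MSI and controller, then drop power */
        return 0;
}

static int __maybe_unused example_pcie_resume(struct device *dev)
{
        /* power up, re-program the controller, re-enable the ports */
        return 0;
}

static const struct dev_pm_ops example_pcie_pm_ops = {
        SET_RUNTIME_PM_OPS(example_pcie_suspend, example_pcie_resume, NULL)
        SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(example_pcie_suspend,
                                      example_pcie_resume)
};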
b/drivers/pci/host/pci-xgene-msi.c @@ -456,7 +456,7 @@ static int xgene_msi_probe(struct platform_device *pdev) xgene_msi->msi_regs = devm_ioremap_resource(&pdev->dev, res); if (IS_ERR(xgene_msi->msi_regs)) { dev_err(&pdev->dev, "no reg space\n"); - rc = -EINVAL; + rc = PTR_ERR(xgene_msi->msi_regs); goto error; } xgene_msi->msi_addr = res->start; diff --git a/drivers/pci/host/pcie-altera.c b/drivers/pci/host/pcie-altera.c index 2235f4760951..a6af62e0256d 100644 --- a/drivers/pci/host/pcie-altera.c +++ b/drivers/pci/host/pcie-altera.c @@ -145,7 +145,7 @@ static bool altera_pcie_valid_device(struct altera_pcie *pcie, static int tlp_read_packet(struct altera_pcie *pcie, u32 *value) { int i; - bool sop = 0; + bool sop = false; u32 ctrl; u32 reg0, reg1; u32 comp_status = 1; diff --git a/drivers/pci/host/pcie-iproc-bcma.c b/drivers/pci/host/pcie-iproc-bcma.c index 603c83429cb3..aa55b064f64d 100644 --- a/drivers/pci/host/pcie-iproc-bcma.c +++ b/drivers/pci/host/pcie-iproc-bcma.c @@ -25,8 +25,7 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x8012, bcma_pcie2_fixup_class); static int iproc_pcie_bcma_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) { - struct pci_sys_data *sys = dev->sysdata; - struct iproc_pcie *pcie = sys->private_data; + struct iproc_pcie *pcie = dev->sysdata; struct bcma_device *bdev = container_of(pcie->dev, struct bcma_device, dev); return bcma_core_irq(bdev, 5); diff --git a/drivers/pci/host/pcie-iproc.c b/drivers/pci/host/pcie-iproc.c index cbb095481cdc..3c76c5fa4f32 100644 --- a/drivers/pci/host/pcie-iproc.c +++ b/drivers/pci/host/pcie-iproc.c @@ -377,14 +377,7 @@ static const u16 iproc_pcie_reg_paxc_v2[] = { static inline struct iproc_pcie *iproc_data(struct pci_bus *bus) { - struct iproc_pcie *pcie; -#ifdef CONFIG_ARM - struct pci_sys_data *sys = bus->sysdata; - - pcie = sys->private_data; -#else - pcie = bus->sysdata; -#endif + struct iproc_pcie *pcie = bus->sysdata; return pcie; } @@ -1331,7 +1324,6 @@ int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res) { struct device *dev; int ret; - void *sysdata; struct pci_bus *child; struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); @@ -1376,13 +1368,6 @@ int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res) goto err_power_off_phy; } -#ifdef CONFIG_ARM - pcie->sysdata.private_data = pcie; - sysdata = &pcie->sysdata; -#else - sysdata = pcie; -#endif - ret = iproc_pcie_check_link(pcie); if (ret) { dev_err(dev, "no PCIe EP device detected\n"); @@ -1399,7 +1384,7 @@ int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res) host->busnr = 0; host->dev.parent = dev; host->ops = &iproc_pcie_ops; - host->sysdata = sysdata; + host->sysdata = pcie; host->map_irq = pcie->map_irq; host->swizzle_irq = pci_common_swizzle; diff --git a/drivers/pci/host/pcie-iproc.h b/drivers/pci/host/pcie-iproc.h index d55f56a186cd..814b600b383a 100644 --- a/drivers/pci/host/pcie-iproc.h +++ b/drivers/pci/host/pcie-iproc.h @@ -54,7 +54,6 @@ struct iproc_msi; * @reg_offsets: register offsets * @base: PCIe host controller I/O register base * @base_addr: PCIe host controller register base physical address - * @sysdata: Per PCI controller data (ARM-specific) * @root_bus: pointer to root bus * @phy: optional PHY device that controls the Serdes * @map_irq: function callback to map interrupts @@ -80,9 +79,6 @@ struct iproc_pcie { u16 *reg_offsets; void __iomem *base; phys_addr_t base_addr; -#ifdef CONFIG_ARM - struct pci_sys_data sysdata; -#endif struct resource mem; struct pci_bus *root_bus; struct phy 
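/*
 * Illustrative sketch, not from the patch: the error-code fix applied to
 * pci-xgene-msi above. devm_ioremap_resource() already encodes the reason
 * for failure in an error pointer, so the probe path should propagate
 * PTR_ERR() instead of flattening everything to -EINVAL. Names are
 * hypothetical.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int example_map_regs(struct platform_device *pdev, void __iomem **regs)
{
        struct resource *res;

        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        *regs = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(*regs))
                return PTR_ERR(*regs);  /* keep the precise error code */

        return 0;
}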
*phy; diff --git a/drivers/pci/host/pcie-rcar.c b/drivers/pci/host/pcie-rcar.c index b4c4aad2cf66..6ab28f29ac6a 100644 --- a/drivers/pci/host/pcie-rcar.c +++ b/drivers/pci/host/pcie-rcar.c @@ -435,7 +435,7 @@ static void rcar_pcie_force_speedup(struct rcar_pcie *pcie) } msleep(1); - }; + } dev_err(dev, "Speed change timed out\n"); diff --git a/drivers/pci/host/pcie-xilinx-nwl.c b/drivers/pci/host/pcie-xilinx-nwl.c index 0acaf483d031..4839ae578711 100644 --- a/drivers/pci/host/pcie-xilinx-nwl.c +++ b/drivers/pci/host/pcie-xilinx-nwl.c @@ -630,7 +630,7 @@ static int nwl_pcie_enable_msi(struct nwl_pcie *pcie) * For high range MSI interrupts: disable, clear any pending, * and enable */ - nwl_bridge_writel(pcie, (u32)~MSGF_MSI_SR_HI_MASK, MSGF_MSI_MASK_HI); + nwl_bridge_writel(pcie, 0, MSGF_MSI_MASK_HI); nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, MSGF_MSI_STATUS_HI) & MSGF_MSI_SR_HI_MASK, MSGF_MSI_STATUS_HI); @@ -641,7 +641,7 @@ static int nwl_pcie_enable_msi(struct nwl_pcie *pcie) * For low range MSI interrupts: disable, clear any pending, * and enable */ - nwl_bridge_writel(pcie, (u32)~MSGF_MSI_SR_LO_MASK, MSGF_MSI_MASK_LO); + nwl_bridge_writel(pcie, 0, MSGF_MSI_MASK_LO); nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, MSGF_MSI_STATUS_LO) & MSGF_MSI_SR_LO_MASK, MSGF_MSI_STATUS_LO); diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c index e2198a2feeca..b45b375c0e6c 100644 --- a/drivers/pci/hotplug/acpiphp_glue.c +++ b/drivers/pci/hotplug/acpiphp_glue.c @@ -541,6 +541,7 @@ static unsigned int get_slot_status(struct acpiphp_slot *slot) { unsigned long long sta = 0; struct acpiphp_func *func; + u32 dvid; list_for_each_entry(func, &slot->funcs, sibling) { if (func->flags & FUNC_HAS_STA) { @@ -551,19 +552,27 @@ static unsigned int get_slot_status(struct acpiphp_slot *slot) if (ACPI_SUCCESS(status) && sta) break; } else { - u32 dvid; - - pci_bus_read_config_dword(slot->bus, - PCI_DEVFN(slot->device, - func->function), - PCI_VENDOR_ID, &dvid); - if (dvid != 0xffffffff) { + if (pci_bus_read_dev_vendor_id(slot->bus, + PCI_DEVFN(slot->device, func->function), + &dvid, 0)) { sta = ACPI_STA_ALL; break; } } } + if (!sta) { + /* + * Check for the slot itself since it may be that the + * ACPI slot is a device below PCIe upstream port so in + * that case it may not even be reachable yet. 
+ */ + if (pci_bus_read_dev_vendor_id(slot->bus, + PCI_DEVFN(slot->device, 0), &dvid, 0)) { + sta = ACPI_STA_ALL; + } + } + return (unsigned int)sta; } diff --git a/drivers/pci/hotplug/cpqphp_ctrl.c b/drivers/pci/hotplug/cpqphp_ctrl.c index b1b6e45253b2..616df442520b 100644 --- a/drivers/pci/hotplug/cpqphp_ctrl.c +++ b/drivers/pci/hotplug/cpqphp_ctrl.c @@ -2812,18 +2812,16 @@ static int configure_new_function(struct controller *ctrl, struct pci_func *func dbg("CND: length = 0x%x\n", base); io_node = get_io_resource(&(resources->io_head), base); + if (!io_node) + return -ENOMEM; dbg("Got io_node start = %8.8x, length = %8.8x next (%p)\n", io_node->base, io_node->length, io_node->next); dbg("func (%p) io_head (%p)\n", func, func->io_head); /* allocate the resource to the board */ - if (io_node) { - base = io_node->base; - - io_node->next = func->io_head; - func->io_head = io_node; - } else - return -ENOMEM; + base = io_node->base; + io_node->next = func->io_head; + func->io_head = io_node; } else if ((temp_register & 0x0BL) == 0x08) { /* Map prefetchable memory */ base = temp_register & 0xFFFFFFF0; diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h index 636ed8f4b869..88e917c9120f 100644 --- a/drivers/pci/hotplug/pciehp.h +++ b/drivers/pci/hotplug/pciehp.h @@ -20,10 +20,11 @@ #include <linux/pci_hotplug.h> #include <linux/delay.h> #include <linux/sched/signal.h> /* signal_pending() */ -#include <linux/pcieport_if.h> #include <linux/mutex.h> #include <linux/workqueue.h> +#include "../pcie/portdrv.h" + #define MY_NAME "pciehp" extern bool pciehp_poll_mode; diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c index 677924ae0350..8adf4a64f291 100644 --- a/drivers/pci/iov.c +++ b/drivers/pci/iov.c @@ -1,12 +1,10 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/iov.c - * - * Copyright (C) 2009 Intel Corporation, Yu Zhao <yu.zhao@intel.com> - * - * PCI Express I/O Virtualization (IOV) support. + * PCI Express I/O Virtualization (IOV) support * Single Root IOV 1.0 * Address Translation Service 1.0 + * + * Copyright (C) 2009 Intel Corporation, Yu Zhao <yu.zhao@intel.com> */ #include <linux/pci.h> @@ -114,6 +112,29 @@ resource_size_t pci_iov_resource_size(struct pci_dev *dev, int resno) return dev->sriov->barsz[resno - PCI_IOV_RESOURCES]; } +static void pci_read_vf_config_common(struct pci_dev *virtfn) +{ + struct pci_dev *physfn = virtfn->physfn; + + /* + * Some config registers are the same across all associated VFs. + * Read them once from VF0 so we can skip reading them from the + * other VFs. + * + * PCIe r4.0, sec 9.3.4.1, technically doesn't require all VFs to + * have the same Revision ID and Subsystem ID, but we assume they + * do. 
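/*
 * Illustrative sketch, not from the patch: the presence test acpiphp
 * switches to above. pci_bus_read_dev_vendor_id() reads the vendor/device
 * ID dword and returns false for the all-ones pattern a missing or
 * unreachable function produces; a zero timeout means no CRS retries. The
 * wrapper is hypothetical.
 */
#include <linux/pci.h>

static bool example_slot_populated(struct pci_bus *bus, unsigned int devnr)
{
        u32 id;

        /* Function 0 answers for the whole device. */
        return pci_bus_read_dev_vendor_id(bus, PCI_DEVFN(devnr, 0), &id, 0);
}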
+ */ + pci_read_config_dword(virtfn, PCI_CLASS_REVISION, + &physfn->sriov->class); + pci_read_config_byte(virtfn, PCI_HEADER_TYPE, + &physfn->sriov->hdr_type); + pci_read_config_word(virtfn, PCI_SUBSYSTEM_VENDOR_ID, + &physfn->sriov->subsystem_vendor); + pci_read_config_word(virtfn, PCI_SUBSYSTEM_ID, + &physfn->sriov->subsystem_device); +} + int pci_iov_add_virtfn(struct pci_dev *dev, int id) { int i; @@ -136,13 +157,17 @@ int pci_iov_add_virtfn(struct pci_dev *dev, int id) virtfn->devfn = pci_iov_virtfn_devfn(dev, id); virtfn->vendor = dev->vendor; virtfn->device = iov->vf_device; + virtfn->is_virtfn = 1; + virtfn->physfn = pci_dev_get(dev); + + if (id == 0) + pci_read_vf_config_common(virtfn); + rc = pci_setup_device(virtfn); if (rc) - goto failed0; + goto failed1; virtfn->dev.parent = dev->dev.parent; - virtfn->physfn = pci_dev_get(dev); - virtfn->is_virtfn = 1; virtfn->multifunction = 0; for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { @@ -163,10 +188,10 @@ int pci_iov_add_virtfn(struct pci_dev *dev, int id) sprintf(buf, "virtfn%u", id); rc = sysfs_create_link(&dev->dev.kobj, &virtfn->dev.kobj, buf); if (rc) - goto failed1; + goto failed2; rc = sysfs_create_link(&virtfn->dev.kobj, &dev->dev.kobj, "physfn"); if (rc) - goto failed2; + goto failed3; kobject_uevent(&virtfn->dev.kobj, KOBJ_CHANGE); @@ -174,11 +199,12 @@ int pci_iov_add_virtfn(struct pci_dev *dev, int id) return 0; -failed2: +failed3: sysfs_remove_link(&dev->dev.kobj, buf); +failed2: + pci_stop_and_remove_bus_device(virtfn); failed1: pci_dev_put(dev); - pci_stop_and_remove_bus_device(virtfn); failed0: virtfn_remove_bus(dev->bus, bus); failed: diff --git a/drivers/pci/mmap.c b/drivers/pci/mmap.c index 814a3ce341fc..24505b08de40 100644 --- a/drivers/pci/mmap.c +++ b/drivers/pci/mmap.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * mmap.c — generic PCI resource mmap helper + * Generic PCI resource mmap helper * * Copyright © 2017 Amazon.com, Inc. or its affiliates. * diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c index 8b0729c94bb7..30250631efe7 100644 --- a/drivers/pci/msi.c +++ b/drivers/pci/msi.c @@ -1,7 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * File: msi.c - * Purpose: PCI Message Signaled Interrupt (MSI) + * PCI Message Signaled Interrupt (MSI) * * Copyright (C) 2003-2004 Intel * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com) diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c index 78157688dcc9..1abdbf267c19 100644 --- a/drivers/pci/pci-acpi.c +++ b/drivers/pci/pci-acpi.c @@ -1,7 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * File: pci-acpi.c - * Purpose: Provide PCI support in ACPI + * PCI support in ACPI * * Copyright (C) 2005 David Shaohua Li <shaohua.li@intel.com> * Copyright (C) 2004 Tom Long Nguyen <tom.l.nguyen@intel.com> diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c index 6a67cdbd0e6a..6ace47099fc5 100644 --- a/drivers/pci/pci-driver.c +++ b/drivers/pci/pci-driver.c @@ -1,7 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/pci-driver.c - * * (C) Copyright 2002-2004, 2007 Greg Kroah-Hartman <greg@kroah.com> * (C) Copyright 2007 Novell Inc. 
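/*
 * Illustrative sketch, not from the patch: the VF enumeration shortcut
 * pci_read_vf_config_common() introduces above. Registers that are the
 * same for every VF of a physical function are read once from VF0 and
 * cached so later VFs skip those config accesses. The cache structure
 * here is hypothetical; the real code stores the values in pci_sriov.
 */
#include <linux/pci.h>

struct example_vf_cache {
        u32 class_revision;
        u16 subsystem_vendor;
        u16 subsystem_device;
        u8  hdr_type;
};

static void example_cache_vf0(struct pci_dev *vf0,
                              struct example_vf_cache *cache)
{
        pci_read_config_dword(vf0, PCI_CLASS_REVISION,
                              &cache->class_revision);
        pci_read_config_byte(vf0, PCI_HEADER_TYPE, &cache->hdr_type);
        pci_read_config_word(vf0, PCI_SUBSYSTEM_VENDOR_ID,
                             &cache->subsystem_vendor);
        pci_read_config_word(vf0, PCI_SUBSYSTEM_ID,
                             &cache->subsystem_device);
}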
*/ @@ -19,6 +17,7 @@ #include <linux/suspend.h> #include <linux/kexec.h> #include "pci.h" +#include "pcie/portdrv.h" struct pci_dynid { struct list_head node; @@ -714,6 +713,18 @@ static void pci_pm_complete(struct device *dev) #endif /* !CONFIG_PM_SLEEP */ #ifdef CONFIG_SUSPEND +static void pcie_pme_root_status_cleanup(struct pci_dev *pci_dev) +{ + /* + * Some BIOSes forget to clear Root PME Status bits after system + * wakeup, which breaks ACPI-based runtime wakeup on PCI Express. + * Clear those bits now just in case (shouldn't hurt). + */ + if (pci_is_pcie(pci_dev) && + (pci_pcie_type(pci_dev) == PCI_EXP_TYPE_ROOT_PORT || + pci_pcie_type(pci_dev) == PCI_EXP_TYPE_RC_EC)) + pcie_clear_root_pme_status(pci_dev); +} static int pci_pm_suspend(struct device *dev) { @@ -873,6 +884,8 @@ static int pci_pm_resume_noirq(struct device *dev) if (pci_has_legacy_pm_support(pci_dev)) return pci_legacy_resume_early(dev); + pcie_pme_root_status_cleanup(pci_dev); + if (drv && drv->pm && drv->pm->resume_noirq) error = drv->pm->resume_noirq(dev); @@ -1522,6 +1535,42 @@ static int pci_uevent(struct device *dev, struct kobj_uevent_env *env) return 0; } +#if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH) +/** + * pci_uevent_ers - emit a uevent during recovery path of PCI device + * @pdev: PCI device undergoing error recovery + * @err_type: type of error event + */ +void pci_uevent_ers(struct pci_dev *pdev, enum pci_ers_result err_type) +{ + int idx = 0; + char *envp[3]; + + switch (err_type) { + case PCI_ERS_RESULT_NONE: + case PCI_ERS_RESULT_CAN_RECOVER: + envp[idx++] = "ERROR_EVENT=BEGIN_RECOVERY"; + envp[idx++] = "DEVICE_ONLINE=0"; + break; + case PCI_ERS_RESULT_RECOVERED: + envp[idx++] = "ERROR_EVENT=SUCCESSFUL_RECOVERY"; + envp[idx++] = "DEVICE_ONLINE=1"; + break; + case PCI_ERS_RESULT_DISCONNECT: + envp[idx++] = "ERROR_EVENT=FAILED_RECOVERY"; + envp[idx++] = "DEVICE_ONLINE=0"; + break; + default: + break; + } + + if (idx > 0) { + envp[idx++] = NULL; + kobject_uevent_env(&pdev->dev.kobj, KOBJ_CHANGE, envp); + } +} +#endif + static int pci_bus_num_vf(struct device *dev) { return pci_num_vf(to_pci_dev(dev)); @@ -1543,8 +1592,49 @@ struct bus_type pci_bus_type = { }; EXPORT_SYMBOL(pci_bus_type); +#ifdef CONFIG_PCIEPORTBUS +static int pcie_port_bus_match(struct device *dev, struct device_driver *drv) +{ + struct pcie_device *pciedev; + struct pcie_port_service_driver *driver; + + if (drv->bus != &pcie_port_bus_type || dev->bus != &pcie_port_bus_type) + return 0; + + pciedev = to_pcie_device(dev); + driver = to_service_driver(drv); + + if (driver->service != pciedev->service) + return 0; + + if (driver->port_type != PCIE_ANY_PORT && + driver->port_type != pci_pcie_type(pciedev->port)) + return 0; + + return 1; +} + +struct bus_type pcie_port_bus_type = { + .name = "pci_express", + .match = pcie_port_bus_match, +}; +EXPORT_SYMBOL_GPL(pcie_port_bus_type); +#endif + static int __init pci_driver_init(void) { - return bus_register(&pci_bus_type); + int ret; + + ret = bus_register(&pci_bus_type); + if (ret) + return ret; + +#ifdef CONFIG_PCIEPORTBUS + ret = bus_register(&pcie_port_bus_type); + if (ret) + return ret; +#endif + + return 0; } postcore_initcall(pci_driver_init); diff --git a/drivers/pci/pci-label.c b/drivers/pci/pci-label.c index a961a71d950f..a5910f942857 100644 --- a/drivers/pci/pci-label.c +++ b/drivers/pci/pci-label.c @@ -1,7 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 /* - * Purpose: Export the firmware instance and label associated with - * a pci device to sysfs + * Export the firmware instance 
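/*
 * Illustrative sketch, not from the patch: the uevent emission pattern
 * pci_uevent_ers() uses above. A NULL-terminated array of KEY=VALUE
 * strings is handed to kobject_uevent_env() as a KOBJ_CHANGE event so
 * userspace can follow error-recovery state changes. The wrapper is
 * hypothetical.
 */
#include <linux/device.h>
#include <linux/kobject.h>

static void example_report_recovery(struct device *dev, bool recovered)
{
        char *envp[3];
        int idx = 0;

        envp[idx++] = recovered ? "ERROR_EVENT=SUCCESSFUL_RECOVERY"
                                : "ERROR_EVENT=FAILED_RECOVERY";
        envp[idx++] = recovered ? "DEVICE_ONLINE=1" : "DEVICE_ONLINE=0";
        envp[idx] = NULL;

        kobject_uevent_env(&dev->kobj, KOBJ_CHANGE, envp);
}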
and label associated with a PCI device to + * sysfs + * * Copyright (C) 2010 Dell Inc. * by Narendra K <Narendra_K@dell.com>, * Jordan Hargrave <Jordan_Hargrave@dell.com> diff --git a/drivers/pci/pci-stub.c b/drivers/pci/pci-stub.c index 10d54f939048..66f8a59fadbd 100644 --- a/drivers/pci/pci-stub.c +++ b/drivers/pci/pci-stub.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 -/* pci-stub - simple stub driver to reserve a pci device +/* + * Simple stub driver to reserve a PCI device * * Copyright (C) 2008 Red Hat, Inc. * Author: diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c index eb6bee8724cc..366d93af051d 100644 --- a/drivers/pci/pci-sysfs.c +++ b/drivers/pci/pci-sysfs.c @@ -1,7 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/pci-sysfs.c - * * (C) Copyright 2002-2004 Greg Kroah-Hartman <greg@kroah.com> * (C) Copyright 2002-2004 IBM Corp. * (C) Copyright 2003 Matthew Wilcox @@ -12,7 +10,6 @@ * File attributes for PCI devices * * Modeled after usb's driverfs.c - * */ @@ -158,45 +155,18 @@ static DEVICE_ATTR_RO(resource); static ssize_t max_link_speed_show(struct device *dev, struct device_attribute *attr, char *buf) { - struct pci_dev *pci_dev = to_pci_dev(dev); - u32 linkcap; - int err; - const char *speed; - - err = pcie_capability_read_dword(pci_dev, PCI_EXP_LNKCAP, &linkcap); - if (err) - return -EINVAL; - - switch (linkcap & PCI_EXP_LNKCAP_SLS) { - case PCI_EXP_LNKCAP_SLS_8_0GB: - speed = "8 GT/s"; - break; - case PCI_EXP_LNKCAP_SLS_5_0GB: - speed = "5 GT/s"; - break; - case PCI_EXP_LNKCAP_SLS_2_5GB: - speed = "2.5 GT/s"; - break; - default: - speed = "Unknown speed"; - } + struct pci_dev *pdev = to_pci_dev(dev); - return sprintf(buf, "%s\n", speed); + return sprintf(buf, "%s\n", PCIE_SPEED2STR(pcie_get_speed_cap(pdev))); } static DEVICE_ATTR_RO(max_link_speed); static ssize_t max_link_width_show(struct device *dev, struct device_attribute *attr, char *buf) { - struct pci_dev *pci_dev = to_pci_dev(dev); - u32 linkcap; - int err; - - err = pcie_capability_read_dword(pci_dev, PCI_EXP_LNKCAP, &linkcap); - if (err) - return -EINVAL; + struct pci_dev *pdev = to_pci_dev(dev); - return sprintf(buf, "%u\n", (linkcap & PCI_EXP_LNKCAP_MLW) >> 4); + return sprintf(buf, "%u\n", pcie_get_width_cap(pdev)); } static DEVICE_ATTR_RO(max_link_width); @@ -213,6 +183,9 @@ static ssize_t current_link_speed_show(struct device *dev, return -EINVAL; switch (linkstat & PCI_EXP_LNKSTA_CLS) { + case PCI_EXP_LNKSTA_CLS_16_0GB: + speed = "16 GT/s"; + break; case PCI_EXP_LNKSTA_CLS_8_0GB: speed = "8 GT/s"; break; @@ -982,38 +955,6 @@ static ssize_t pci_write_config(struct file *filp, struct kobject *kobj, return count; } -static ssize_t read_vpd_attr(struct file *filp, struct kobject *kobj, - struct bin_attribute *bin_attr, char *buf, - loff_t off, size_t count) -{ - struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj)); - - if (bin_attr->size > 0) { - if (off > bin_attr->size) - count = 0; - else if (count > bin_attr->size - off) - count = bin_attr->size - off; - } - - return pci_read_vpd(dev, off, count, buf); -} - -static ssize_t write_vpd_attr(struct file *filp, struct kobject *kobj, - struct bin_attribute *bin_attr, char *buf, - loff_t off, size_t count) -{ - struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj)); - - if (bin_attr->size > 0) { - if (off > bin_attr->size) - count = 0; - else if (count > bin_attr->size - off) - count = bin_attr->size - off; - } - - return pci_write_vpd(dev, off, count, buf); -} - #ifdef HAVE_PCI_LEGACY /** * pci_read_legacy_io - read byte(s) from 
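/*
 * Illustrative sketch, not from the patch: decoding the current link speed
 * that the sysfs change above extends. PCI_EXP_LNKSTA_CLS_16_0GB is the
 * constant this series adds for 16 GT/s; the others are the existing Link
 * Status encodings. The helper is hypothetical.
 */
#include <linux/pci.h>

static const char *example_current_link_speed(struct pci_dev *dev)
{
        u16 lnksta;

        if (pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta))
                return "Unknown";

        switch (lnksta & PCI_EXP_LNKSTA_CLS) {
        case PCI_EXP_LNKSTA_CLS_16_0GB: return "16 GT/s";
        case PCI_EXP_LNKSTA_CLS_8_0GB:  return "8 GT/s";
        case PCI_EXP_LNKSTA_CLS_5_0GB:  return "5 GT/s";
        case PCI_EXP_LNKSTA_CLS_2_5GB:  return "2.5 GT/s";
        default:                        return "Unknown";
        }
}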
legacy I/O port space @@ -1517,46 +1458,20 @@ static struct device_attribute reset_attr = __ATTR(reset, 0200, NULL, reset_stor static int pci_create_capabilities_sysfs(struct pci_dev *dev) { int retval; - struct bin_attribute *attr; - /* If the device has VPD, try to expose it in sysfs. */ - if (dev->vpd) { - attr = kzalloc(sizeof(*attr), GFP_ATOMIC); - if (!attr) - return -ENOMEM; - - sysfs_bin_attr_init(attr); - attr->size = 0; - attr->attr.name = "vpd"; - attr->attr.mode = S_IRUSR | S_IWUSR; - attr->read = read_vpd_attr; - attr->write = write_vpd_attr; - retval = sysfs_create_bin_file(&dev->dev.kobj, attr); - if (retval) { - kfree(attr); - return retval; - } - dev->vpd->attr = attr; - } - - /* Active State Power Management */ + pcie_vpd_create_sysfs_dev_files(dev); pcie_aspm_create_sysfs_dev_files(dev); - if (!pci_probe_reset_function(dev)) { + if (dev->reset_fn) { retval = device_create_file(&dev->dev, &reset_attr); if (retval) goto error; - dev->reset_fn = 1; } return 0; error: pcie_aspm_remove_sysfs_dev_files(dev); - if (dev->vpd && dev->vpd->attr) { - sysfs_remove_bin_file(&dev->dev.kobj, dev->vpd->attr); - kfree(dev->vpd->attr); - } - + pcie_vpd_remove_sysfs_dev_files(dev); return retval; } @@ -1630,11 +1545,7 @@ err: static void pci_remove_capabilities_sysfs(struct pci_dev *dev) { - if (dev->vpd && dev->vpd->attr) { - sysfs_remove_bin_file(&dev->dev.kobj, dev->vpd->attr); - kfree(dev->vpd->attr); - } - + pcie_vpd_remove_sysfs_dev_files(dev); pcie_aspm_remove_sysfs_dev_files(dev); if (dev->reset_fn) { device_remove_file(&dev->dev, &reset_attr); diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 99ec0ef5ba82..aa86e904f93c 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -1,11 +1,11 @@ // SPDX-License-Identifier: GPL-2.0 /* - * PCI Bus Services, see include/linux/pci.h for further explanation. + * PCI Bus Services, see include/linux/pci.h for further explanation. * - * Copyright 1993 -- 1997 Drew Eckhardt, Frederic Potter, - * David Mosberger-Tang + * Copyright 1993 -- 1997 Drew Eckhardt, Frederic Potter, + * David Mosberger-Tang * - * Copyright 1997 -- 2000 Martin Mares <mj@ucw.cz> + * Copyright 1997 -- 2000 Martin Mares <mj@ucw.cz> */ #include <linux/acpi.h> @@ -22,6 +22,7 @@ #include <linux/spinlock.h> #include <linux/string.h> #include <linux/log2.h> +#include <linux/logic_pio.h> #include <linux/pci-aspm.h> #include <linux/pm_wakeup.h> #include <linux/interrupt.h> @@ -126,6 +127,9 @@ static int __init pcie_port_pm_setup(char *str) } __setup("pcie_port_pm=", pcie_port_pm_setup); +/* Time to wait after a reset for device to become responsive */ +#define PCIE_RESET_READY_POLL_MS 60000 + /** * pci_bus_max_busnr - returns maximum PCI bus number of given bus' children * @bus: pointer to PCI bus structure to search @@ -1684,6 +1688,15 @@ int pci_set_pcie_reset_state(struct pci_dev *dev, enum pcie_reset_state state) EXPORT_SYMBOL_GPL(pci_set_pcie_reset_state); /** + * pcie_clear_root_pme_status - Clear root port PME interrupt status. + * @dev: PCIe root port or event collector. + */ +void pcie_clear_root_pme_status(struct pci_dev *dev) +{ + pcie_capability_set_dword(dev, PCI_EXP_RTSTA, PCI_EXP_RTSTA_PME); +} + +/** * pci_check_pme_status - Check if given device has generated PME. * @dev: Device to check. 
* @@ -3436,68 +3449,35 @@ int pci_request_regions_exclusive(struct pci_dev *pdev, const char *res_name) } EXPORT_SYMBOL(pci_request_regions_exclusive); -#ifdef PCI_IOBASE -struct io_range { - struct list_head list; - phys_addr_t start; - resource_size_t size; -}; - -static LIST_HEAD(io_range_list); -static DEFINE_SPINLOCK(io_range_lock); -#endif - /* * Record the PCI IO range (expressed as CPU physical address + size). * Return a negative value if an error has occured, zero otherwise */ -int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size) +int pci_register_io_range(struct fwnode_handle *fwnode, phys_addr_t addr, + resource_size_t size) { - int err = 0; - + int ret = 0; #ifdef PCI_IOBASE - struct io_range *range; - resource_size_t allocated_size = 0; - - /* check if the range hasn't been previously recorded */ - spin_lock(&io_range_lock); - list_for_each_entry(range, &io_range_list, list) { - if (addr >= range->start && addr + size <= range->start + size) { - /* range already registered, bail out */ - goto end_register; - } - allocated_size += range->size; - } - - /* range not registed yet, check for available space */ - if (allocated_size + size - 1 > IO_SPACE_LIMIT) { - /* if it's too big check if 64K space can be reserved */ - if (allocated_size + SZ_64K - 1 > IO_SPACE_LIMIT) { - err = -E2BIG; - goto end_register; - } + struct logic_pio_hwaddr *range; - size = SZ_64K; - pr_warn("Requested IO range too big, new size set to 64K\n"); - } + if (!size || addr + size < addr) + return -EINVAL; - /* add the range to the list */ range = kzalloc(sizeof(*range), GFP_ATOMIC); - if (!range) { - err = -ENOMEM; - goto end_register; - } + if (!range) + return -ENOMEM; - range->start = addr; + range->fwnode = fwnode; range->size = size; + range->hw_start = addr; + range->flags = LOGIC_PIO_CPU_MMIO; - list_add_tail(&range->list, &io_range_list); - -end_register: - spin_unlock(&io_range_lock); + ret = logic_pio_register_range(range); + if (ret) + kfree(range); #endif - return err; + return ret; } phys_addr_t pci_pio_to_address(unsigned long pio) @@ -3505,21 +3485,10 @@ phys_addr_t pci_pio_to_address(unsigned long pio) phys_addr_t address = (phys_addr_t)OF_BAD_ADDR; #ifdef PCI_IOBASE - struct io_range *range; - resource_size_t allocated_size = 0; - - if (pio > IO_SPACE_LIMIT) + if (pio >= MMIO_UPPER_LIMIT) return address; - spin_lock(&io_range_lock); - list_for_each_entry(range, &io_range_list, list) { - if (pio >= allocated_size && pio < allocated_size + range->size) { - address = range->start + pio - allocated_size; - break; - } - allocated_size += range->size; - } - spin_unlock(&io_range_lock); + address = logic_pio_to_hwaddr(pio); #endif return address; @@ -3528,21 +3497,7 @@ phys_addr_t pci_pio_to_address(unsigned long pio) unsigned long __weak pci_address_to_pio(phys_addr_t address) { #ifdef PCI_IOBASE - struct io_range *res; - resource_size_t offset = 0; - unsigned long addr = -1; - - spin_lock(&io_range_lock); - list_for_each_entry(res, &io_range_list, list) { - if (address >= res->start && address < res->start + res->size) { - addr = address - res->start + offset; - break; - } - offset += res->size; - } - spin_unlock(&io_range_lock); - - return addr; + return logic_pio_trans_cpuaddr(address); #else if (address > IO_SPACE_LIMIT) return (unsigned long)-1; @@ -4013,20 +3968,13 @@ int pci_wait_for_pending_transaction(struct pci_dev *dev) } EXPORT_SYMBOL(pci_wait_for_pending_transaction); -static void pci_flr_wait(struct pci_dev *dev) +static int pci_dev_wait(struct pci_dev *dev, 
char *reset_type, int timeout) { - int delay = 1, timeout = 60000; + int delay = 1; u32 id; /* - * Per PCIe r3.1, sec 6.6.2, a device must complete an FLR within - * 100ms, but may silently discard requests while the FLR is in - * progress. Wait 100ms before trying to access the device. - */ - msleep(100); - - /* - * After 100ms, the device should not silently discard config + * After reset, the device should not silently discard config * requests, but it may still indicate that it needs more time by * responding to them with CRS completions. The Root Port will * generally synthesize ~0 data to complete the read (except when @@ -4040,14 +3988,14 @@ static void pci_flr_wait(struct pci_dev *dev) pci_read_config_dword(dev, PCI_COMMAND, &id); while (id == ~0) { if (delay > timeout) { - pci_warn(dev, "not ready %dms after FLR; giving up\n", - 100 + delay - 1); - return; + pci_warn(dev, "not ready %dms after %s; giving up\n", + delay - 1, reset_type); + return -ENOTTY; } if (delay > 1000) - pci_info(dev, "not ready %dms after FLR; waiting\n", - 100 + delay - 1); + pci_info(dev, "not ready %dms after %s; waiting\n", + delay - 1, reset_type); msleep(delay); delay *= 2; @@ -4055,7 +4003,10 @@ static void pci_flr_wait(struct pci_dev *dev) } if (delay > 1000) - pci_info(dev, "ready %dms after FLR\n", 100 + delay - 1); + pci_info(dev, "ready %dms after %s\n", delay - 1, + reset_type); + + return 0; } /** @@ -4084,13 +4035,21 @@ static bool pcie_has_flr(struct pci_dev *dev) * device supports FLR before calling this function, e.g. by using the * pcie_has_flr() helper. */ -void pcie_flr(struct pci_dev *dev) +int pcie_flr(struct pci_dev *dev) { if (!pci_wait_for_pending_transaction(dev)) pci_err(dev, "timed out waiting for pending transaction; performing function level reset anyway\n"); pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_BCR_FLR); - pci_flr_wait(dev); + + /* + * Per PCIe r4.0, sec 6.6.2, a device must complete an FLR within + * 100ms, but may silently discard requests while the FLR is in + * progress. Wait 100ms before trying to access the device. + */ + msleep(100); + + return pci_dev_wait(dev, "FLR", PCIE_RESET_READY_POLL_MS); } EXPORT_SYMBOL_GPL(pcie_flr); @@ -4123,8 +4082,16 @@ static int pci_af_flr(struct pci_dev *dev, int probe) pci_err(dev, "timed out waiting for pending transaction; performing AF function level reset anyway\n"); pci_write_config_byte(dev, pos + PCI_AF_CTRL, PCI_AF_CTRL_FLR); - pci_flr_wait(dev); - return 0; + + /* + * Per Advanced Capabilities for Conventional PCI ECN, 13 April 2006, + * updated 27 July 2006; a device must complete an FLR within + * 100ms, but may silently discard requests while the FLR is in + * progress. Wait 100ms before trying to access the device. + */ + msleep(100); + + return pci_dev_wait(dev, "AF_FLR", PCIE_RESET_READY_POLL_MS); } /** @@ -4169,7 +4136,7 @@ static int pci_pm_reset(struct pci_dev *dev, int probe) pci_write_config_word(dev, dev->pm_cap + PCI_PM_CTRL, csr); pci_dev_d3_sleep(dev); - return 0; + return pci_dev_wait(dev, "PM D3->D0", PCIE_RESET_READY_POLL_MS); } void pci_reset_secondary_bus(struct pci_dev *dev) @@ -4179,6 +4146,7 @@ void pci_reset_secondary_bus(struct pci_dev *dev) pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &ctrl); ctrl |= PCI_BRIDGE_CTL_BUS_RESET; pci_write_config_word(dev, PCI_BRIDGE_CONTROL, ctrl); + /* * PCI spec v3.0 7.6.4.2 requires minimum Trst of 1ms. Double * this to 2ms to ensure that we meet the minimum requirement. 
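Because pcie_flr() now propagates the result of pci_dev_wait(), a caller can tell whether the function actually became responsive within the 60-second polling budget instead of assuming success. A sketch of a driver-side caller; the DEVCAP check and the function name are assumptions for illustration, not part of this patch:

        #include <linux/pci.h>

        static int example_reset_function(struct pci_dev *pdev)
        {
                u32 devcap;

                /* Drivers calling pcie_flr() directly must confirm FLR support */
                pcie_capability_read_dword(pdev, PCI_EXP_DEVCAP, &devcap);
                if (!(devcap & PCI_EXP_DEVCAP_FLR))
                        return -ENOTTY;

                /*
                 * Returns 0 once config reads succeed again, or -ENOTTY if
                 * the device still answers ~0 when the poll window expires.
                 */
                return pcie_flr(pdev);
        }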
@@ -4210,9 +4178,11 @@ void __weak pcibios_reset_secondary_bus(struct pci_dev *dev) * Use the bridge control register to assert reset on the secondary bus. * Devices on the secondary bus are left in power-on state. */ -void pci_reset_bridge_secondary_bus(struct pci_dev *dev) +int pci_reset_bridge_secondary_bus(struct pci_dev *dev) { pcibios_reset_secondary_bus(dev); + + return pci_dev_wait(dev, "bus reset", PCIE_RESET_READY_POLL_MS); } EXPORT_SYMBOL_GPL(pci_reset_bridge_secondary_bus); @@ -4375,8 +4345,9 @@ int __pci_reset_function_locked(struct pci_dev *dev) if (rc != -ENOTTY) return rc; if (pcie_has_flr(dev)) { - pcie_flr(dev); - return 0; + rc = pcie_flr(dev); + if (rc != -ENOTTY) + return rc; } rc = pci_af_flr(dev, 0); if (rc != -ENOTTY) @@ -4446,9 +4417,8 @@ int pci_reset_function(struct pci_dev *dev) { int rc; - rc = pci_probe_reset_function(dev); - if (rc) - return rc; + if (!dev->reset_fn) + return -ENOTTY; pci_dev_lock(dev); pci_dev_save_and_disable(dev); @@ -4483,9 +4453,8 @@ int pci_reset_function_locked(struct pci_dev *dev) { int rc; - rc = pci_probe_reset_function(dev); - if (rc) - return rc; + if (!dev->reset_fn) + return -ENOTTY; pci_dev_save_and_disable(dev); @@ -4507,18 +4476,17 @@ int pci_try_reset_function(struct pci_dev *dev) { int rc; - rc = pci_probe_reset_function(dev); - if (rc) - return rc; + if (!dev->reset_fn) + return -ENOTTY; if (!pci_dev_trylock(dev)) return -EAGAIN; pci_dev_save_and_disable(dev); rc = __pci_reset_function_locked(dev); + pci_dev_restore(dev); pci_dev_unlock(dev); - pci_dev_restore(dev); return rc; } EXPORT_SYMBOL_GPL(pci_try_reset_function); @@ -4726,7 +4694,9 @@ static void pci_slot_restore(struct pci_slot *slot) list_for_each_entry(dev, &slot->bus->devices, bus_list) { if (!dev->slot || dev->slot != slot) continue; + pci_dev_lock(dev); pci_dev_restore(dev); + pci_dev_unlock(dev); if (dev->subordinate) pci_bus_restore(dev->subordinate); } @@ -5143,6 +5113,180 @@ int pcie_get_minimum_link(struct pci_dev *dev, enum pci_bus_speed *speed, EXPORT_SYMBOL(pcie_get_minimum_link); /** + * pcie_bandwidth_available - determine minimum link settings of a PCIe + * device and its bandwidth limitation + * @dev: PCI device to query + * @limiting_dev: storage for device causing the bandwidth limitation + * @speed: storage for speed of limiting device + * @width: storage for width of limiting device + * + * Walk up the PCI device chain and find the point where the minimum + * bandwidth is available. Return the bandwidth available there and (if + * limiting_dev, speed, and width pointers are supplied) information about + * that point. The bandwidth returned is in Mb/s, i.e., megabits/second of + * raw bandwidth. 
+ */ +u32 pcie_bandwidth_available(struct pci_dev *dev, struct pci_dev **limiting_dev, + enum pci_bus_speed *speed, + enum pcie_link_width *width) +{ + u16 lnksta; + enum pci_bus_speed next_speed; + enum pcie_link_width next_width; + u32 bw, next_bw; + + if (speed) + *speed = PCI_SPEED_UNKNOWN; + if (width) + *width = PCIE_LNK_WIDTH_UNKNOWN; + + bw = 0; + + while (dev) { + pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta); + + next_speed = pcie_link_speed[lnksta & PCI_EXP_LNKSTA_CLS]; + next_width = (lnksta & PCI_EXP_LNKSTA_NLW) >> + PCI_EXP_LNKSTA_NLW_SHIFT; + + next_bw = next_width * PCIE_SPEED2MBS_ENC(next_speed); + + /* Check if current device limits the total bandwidth */ + if (!bw || next_bw <= bw) { + bw = next_bw; + + if (limiting_dev) + *limiting_dev = dev; + if (speed) + *speed = next_speed; + if (width) + *width = next_width; + } + + dev = pci_upstream_bridge(dev); + } + + return bw; +} +EXPORT_SYMBOL(pcie_bandwidth_available); + +/** + * pcie_get_speed_cap - query for the PCI device's link speed capability + * @dev: PCI device to query + * + * Query the PCI device speed capability. Return the maximum link speed + * supported by the device. + */ +enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev) +{ + u32 lnkcap2, lnkcap; + + /* + * PCIe r4.0 sec 7.5.3.18 recommends using the Supported Link + * Speeds Vector in Link Capabilities 2 when supported, falling + * back to Max Link Speed in Link Capabilities otherwise. + */ + pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2); + if (lnkcap2) { /* PCIe r3.0-compliant */ + if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_16_0GB) + return PCIE_SPEED_16_0GT; + else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_8_0GB) + return PCIE_SPEED_8_0GT; + else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_5_0GB) + return PCIE_SPEED_5_0GT; + else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_2_5GB) + return PCIE_SPEED_2_5GT; + return PCI_SPEED_UNKNOWN; + } + + pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap); + if (lnkcap) { + if (lnkcap & PCI_EXP_LNKCAP_SLS_16_0GB) + return PCIE_SPEED_16_0GT; + else if (lnkcap & PCI_EXP_LNKCAP_SLS_8_0GB) + return PCIE_SPEED_8_0GT; + else if (lnkcap & PCI_EXP_LNKCAP_SLS_5_0GB) + return PCIE_SPEED_5_0GT; + else if (lnkcap & PCI_EXP_LNKCAP_SLS_2_5GB) + return PCIE_SPEED_2_5GT; + } + + return PCI_SPEED_UNKNOWN; +} + +/** + * pcie_get_width_cap - query for the PCI device's link width capability + * @dev: PCI device to query + * + * Query the PCI device width capability. Return the maximum link width + * supported by the device. + */ +enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev) +{ + u32 lnkcap; + + pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap); + if (lnkcap) + return (lnkcap & PCI_EXP_LNKCAP_MLW) >> 4; + + return PCIE_LNK_WIDTH_UNKNOWN; +} + +/** + * pcie_bandwidth_capable - calculate a PCI device's link bandwidth capability + * @dev: PCI device + * @speed: storage for link speed + * @width: storage for link width + * + * Calculate a PCI device's link bandwidth by querying for its link speed + * and width, multiplying them, and applying encoding overhead. The result + * is in Mb/s, i.e., megabits/second of raw bandwidth. 
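A worked example of the walk in pcie_bandwidth_available() above: an endpoint that trains x16 at 8 GT/s contributes next_bw = 16 * 7876 = 126016 Mb/s (7876 Mb/s per lane once the 128b/130b encoding overhead is deducted, see PCIE_SPEED2MBS_ENC below), but if the switch upstream port above it only trains x8 at 8 GT/s, that hop contributes 8 * 7876 = 63008 Mb/s, so the function returns 63008 Mb/s and fills *limiting_dev, *speed and *width with the switch uplink's values.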
+ */ +u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed, + enum pcie_link_width *width) +{ + *speed = pcie_get_speed_cap(dev); + *width = pcie_get_width_cap(dev); + + if (*speed == PCI_SPEED_UNKNOWN || *width == PCIE_LNK_WIDTH_UNKNOWN) + return 0; + + return *width * PCIE_SPEED2MBS_ENC(*speed); +} + +/** + * pcie_print_link_status - Report the PCI device's link speed and width + * @dev: PCI device to query + * + * Report the available bandwidth at the device. If this is less than the + * device is capable of, report the device's maximum possible bandwidth and + * the upstream link that limits its performance to less than that. + */ +void pcie_print_link_status(struct pci_dev *dev) +{ + enum pcie_link_width width, width_cap; + enum pci_bus_speed speed, speed_cap; + struct pci_dev *limiting_dev = NULL; + u32 bw_avail, bw_cap; + + bw_cap = pcie_bandwidth_capable(dev, &speed_cap, &width_cap); + bw_avail = pcie_bandwidth_available(dev, &limiting_dev, &speed, &width); + + if (bw_avail >= bw_cap) + pci_info(dev, "%u.%03u Gb/s available bandwidth (%s x%d link)\n", + bw_cap / 1000, bw_cap % 1000, + PCIE_SPEED2STR(speed_cap), width_cap); + else + pci_info(dev, "%u.%03u Gb/s available bandwidth, limited by %s x%d link at %s (capable of %u.%03u Gb/s with %s x%d link)\n", + bw_avail / 1000, bw_avail % 1000, + PCIE_SPEED2STR(speed), width, + limiting_dev ? pci_name(limiting_dev) : "<unknown>", + bw_cap / 1000, bw_cap % 1000, + PCIE_SPEED2STR(speed_cap), width_cap); +} +EXPORT_SYMBOL(pcie_print_link_status); + +/** * pci_select_bars - Make BAR mask from the type of resource * @dev: the PCI device for which BAR mask is made * @flags: resource type mask to be selected @@ -5607,8 +5751,9 @@ static int of_pci_bus_find_domain_nr(struct device *parent) use_dt_domains = 0; domain = pci_get_new_domain_nr(); } else { - dev_err(parent, "Node %pOF has inconsistent \"linux,pci-domain\" property in DT\n", - parent->of_node); + if (parent) + pr_err("Node %pOF has ", parent->of_node); + pr_err("Inconsistent \"linux,pci-domain\" property in DT\n"); domain = -1; } diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h index fcd81911b127..023f7cf25bff 100644 --- a/drivers/pci/pci.h +++ b/drivers/pci/pci.h @@ -71,6 +71,7 @@ void pci_update_current_state(struct pci_dev *dev, pci_power_t state); void pci_power_up(struct pci_dev *dev); void pci_disable_enabled_device(struct pci_dev *dev); int pci_finish_runtime_suspend(struct pci_dev *dev); +void pcie_clear_root_pme_status(struct pci_dev *dev); int __pci_pme_wakeup(struct pci_dev *dev, void *ign); void pci_pme_restore(struct pci_dev *dev); bool pci_dev_keep_suspended(struct pci_dev *dev); @@ -104,25 +105,10 @@ static inline bool pci_power_manageable(struct pci_dev *pci_dev) return !pci_has_subordinate(pci_dev) || pci_dev->bridge_d3; } -struct pci_vpd_ops { - ssize_t (*read)(struct pci_dev *dev, loff_t pos, size_t count, void *buf); - ssize_t (*write)(struct pci_dev *dev, loff_t pos, size_t count, const void *buf); - int (*set_size)(struct pci_dev *dev, size_t len); -}; - -struct pci_vpd { - const struct pci_vpd_ops *ops; - struct bin_attribute *attr; /* Descriptor for sysfs VPD entry */ - struct mutex lock; - unsigned int len; - u16 flag; - u8 cap; - u8 busy:1; - u8 valid:1; -}; - int pci_vpd_init(struct pci_dev *dev); void pci_vpd_release(struct pci_dev *dev); +void pcie_vpd_create_sysfs_dev_files(struct pci_dev *dev); +void pcie_vpd_remove_sysfs_dev_files(struct pci_dev *dev); /* PCI /proc functions */ #ifdef CONFIG_PROC_FS @@ -253,6 +239,27 @@ bool 
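With pcie_print_link_status() exported, drivers that used to open-code this check can call it once the device is usable; a minimal hypothetical probe path (the surrounding driver is assumed, only the two PCI core calls are real):

        static int example_probe(struct pci_dev *pdev,
                                 const struct pci_device_id *id)
        {
                int ret;

                ret = pci_enable_device(pdev);
                if (ret)
                        return ret;

                /*
                 * Logs e.g. "63.008 Gb/s available bandwidth (8 GT/s x8 link)",
                 * or names the limiting upstream link when the device could
                 * do better.
                 */
                pcie_print_link_status(pdev);

                return 0;
        }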
pci_bus_clip_resource(struct pci_dev *dev, int idx); void pci_reassigndev_resource_alignment(struct pci_dev *dev); void pci_disable_bridge_window(struct pci_dev *dev); +/* PCIe link information */ +#define PCIE_SPEED2STR(speed) \ + ((speed) == PCIE_SPEED_16_0GT ? "16 GT/s" : \ + (speed) == PCIE_SPEED_8_0GT ? "8 GT/s" : \ + (speed) == PCIE_SPEED_5_0GT ? "5 GT/s" : \ + (speed) == PCIE_SPEED_2_5GT ? "2.5 GT/s" : \ + "Unknown speed") + +/* PCIe speed to Mb/s reduced by encoding overhead */ +#define PCIE_SPEED2MBS_ENC(speed) \ + ((speed) == PCIE_SPEED_16_0GT ? 16000*128/130 : \ + (speed) == PCIE_SPEED_8_0GT ? 8000*128/130 : \ + (speed) == PCIE_SPEED_5_0GT ? 5000*8/10 : \ + (speed) == PCIE_SPEED_2_5GT ? 2500*8/10 : \ + 0) + +enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev); +enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev); +u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed, + enum pcie_link_width *width); + /* Single Root I/O Virtualization */ struct pci_sriov { int pos; /* Capability position */ @@ -271,6 +278,10 @@ struct pci_sriov { u16 driver_max_VFs; /* Max num VFs driver supports */ struct pci_dev *dev; /* Lowest numbered PF */ struct pci_dev *self; /* This PF */ + u32 class; /* VF device */ + u8 hdr_type; /* VF header type */ + u16 subsystem_vendor; /* VF subsystem vendor */ + u16 subsystem_device; /* VF subsystem device */ resource_size_t barsz[PCI_SRIOV_NUM_BARS]; /* VF BAR size */ bool drivers_autoprobe; /* Auto probing of VFs by driver */ }; diff --git a/drivers/pci/pcie/Makefile b/drivers/pci/pcie/Makefile index 223e4c34c29a..800e1d404a45 100644 --- a/drivers/pci/pcie/Makefile +++ b/drivers/pci/pcie/Makefile @@ -1,20 +1,13 @@ # SPDX-License-Identifier: GPL-2.0 # -# Makefile for PCI-Express PORT Driver -# - -# Build PCI Express ASPM if needed -obj-$(CONFIG_PCIEASPM) += aspm.o +# Makefile for PCI Express features and port driver -pcieportdrv-y := portdrv_core.o portdrv_pci.o portdrv_bus.o -pcieportdrv-$(CONFIG_ACPI) += portdrv_acpi.o +pcieportdrv-y := portdrv_core.o portdrv_pci.o obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o -# Build PCI Express AER if needed +obj-$(CONFIG_PCIEASPM) += aspm.o obj-$(CONFIG_PCIEAER) += aer/ - -obj-$(CONFIG_PCIE_PME) += pme.o - -obj-$(CONFIG_PCIE_DPC) += pcie-dpc.o -obj-$(CONFIG_PCIE_PTM) += ptm.o +obj-$(CONFIG_PCIE_PME) += pme.o +obj-$(CONFIG_PCIE_DPC) += dpc.o +obj-$(CONFIG_PCIE_PTM) += ptm.o diff --git a/drivers/pci/pcie/aer/aer_inject.c b/drivers/pci/pcie/aer/aer_inject.c index 25e1feb962c5..a49090935303 100644 --- a/drivers/pci/pcie/aer/aer_inject.c +++ b/drivers/pci/pcie/aer/aer_inject.c @@ -344,7 +344,7 @@ static int aer_inject(struct aer_error_inj *einj) goto out_put; } - pos_cap_err = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); + pos_cap_err = dev->aer_cap; if (!pos_cap_err) { pci_err(dev, "aer_inject: Device doesn't support AER\n"); ret = -EPROTONOSUPPORT; @@ -355,7 +355,7 @@ static int aer_inject(struct aer_error_inj *einj) pci_read_config_dword(dev, pos_cap_err + PCI_ERR_UNCOR_MASK, &uncor_mask); - rp_pos_cap_err = pci_find_ext_capability(rpdev, PCI_EXT_CAP_ID_ERR); + rp_pos_cap_err = rpdev->aer_cap; if (!rp_pos_cap_err) { pci_err(rpdev, "aer_inject: Root port doesn't support AER\n"); ret = -EPROTONOSUPPORT; diff --git a/drivers/pci/pcie/aer/aerdrv.c b/drivers/pci/pcie/aer/aerdrv.c index da8331f5684d..779b3879b1b5 100644 --- a/drivers/pci/pcie/aer/aerdrv.c +++ b/drivers/pci/pcie/aer/aerdrv.c @@ -1,15 +1,12 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/pcie/aer/aerdrv.c - * - * This 
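Evaluating PCIE_SPEED2MBS_ENC() gives the per-lane throughput the bandwidth helpers work with: 2.5 GT/s and 5 GT/s lose 20% to 8b/10b encoding (2000 and 4000 Mb/s per lane), while 8 GT/s and 16 GT/s use 128b/130b and come out at 7876 and 15753 Mb/s per lane after the integer division.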
file implements the AER root port service driver. The driver will - * register an irq handler. When root port triggers an AER interrupt, the irq - * handler will collect root port status and schedule a work. + * Implement the AER root port service driver. The driver registers an IRQ + * handler. When a root port triggers an AER interrupt, the IRQ handler + * collects root port status and schedules work. * * Copyright (C) 2006 Intel Corp. * Tom Long Nguyen (tom.l.nguyen@intel.com) * Zhang Yanmin (yanmin.zhang@intel.com) - * */ #include <linux/pci.h> @@ -21,7 +18,6 @@ #include <linux/init.h> #include <linux/interrupt.h> #include <linux/delay.h> -#include <linux/pcieport_if.h> #include <linux/slab.h> #include "aerdrv.h" diff --git a/drivers/pci/pcie/aer/aerdrv.h b/drivers/pci/pcie/aer/aerdrv.h index 5449e5ce139d..08b4584f62fe 100644 --- a/drivers/pci/pcie/aer/aerdrv.h +++ b/drivers/pci/pcie/aer/aerdrv.h @@ -3,17 +3,17 @@ * Copyright (C) 2006 Intel Corp. * Tom Long Nguyen (tom.l.nguyen@intel.com) * Zhang Yanmin (yanmin.zhang@intel.com) - * */ #ifndef _AERDRV_H_ #define _AERDRV_H_ #include <linux/workqueue.h> -#include <linux/pcieport_if.h> #include <linux/aer.h> #include <linux/interrupt.h> +#include "../portdrv.h" + #define SYSTEM_ERROR_INTR_ON_MESG_MASK (PCI_EXP_RTCTL_SECEE| \ PCI_EXP_RTCTL_SENFEE| \ PCI_EXP_RTCTL_SEFEE) diff --git a/drivers/pci/pcie/aer/aerdrv_acpi.c b/drivers/pci/pcie/aer/aerdrv_acpi.c index b2019440e882..08c87de13cb8 100644 --- a/drivers/pci/pcie/aer/aerdrv_acpi.c +++ b/drivers/pci/pcie/aer/aerdrv_acpi.c @@ -5,7 +5,6 @@ * Copyright (C) 2006 Intel Corp. * Tom Long Nguyen (tom.l.nguyen@intel.com) * Zhang Yanmin (yanmin.zhang@intel.com) - * */ #include <linux/module.h> diff --git a/drivers/pci/pcie/aer/aerdrv_core.c b/drivers/pci/pcie/aer/aerdrv_core.c index a4bfea52e7d4..0ea5acc40323 100644 --- a/drivers/pci/pcie/aer/aerdrv_core.c +++ b/drivers/pci/pcie/aer/aerdrv_core.c @@ -1,16 +1,13 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/pcie/aer/aerdrv_core.c - * - * This file implements the core part of PCIe AER. When a PCIe - * error is delivered, an error message will be collected and printed to - * console, then, an error recovery procedure will be executed by following - * the PCI error recovery rules. + * Implement the core part of PCIe AER. When a PCIe error is delivered, an + * error message will be collected and printed to console, then an error + * recovery procedure will be executed by following the PCI error recovery + * rules. * * Copyright (C) 2006 Intel Corp. * Tom Long Nguyen (tom.l.nguyen@intel.com) * Zhang Yanmin (yanmin.zhang@intel.com) - * */ #include <linux/module.h> diff --git a/drivers/pci/pcie/aer/aerdrv_errprint.c b/drivers/pci/pcie/aer/aerdrv_errprint.c index 6a352e638699..cfc89dd57831 100644 --- a/drivers/pci/pcie/aer/aerdrv_errprint.c +++ b/drivers/pci/pcie/aer/aerdrv_errprint.c @@ -1,13 +1,10 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/pcie/aer/aerdrv_errprint.c - * * Format error messages and print them to console. * * Copyright (C) 2006 Intel Corp. * Tom Long Nguyen (tom.l.nguyen@intel.com) * Zhang Yanmin (yanmin.zhang@intel.com) - * */ #include <linux/module.h> diff --git a/drivers/pci/pcie/aer/ecrc.c b/drivers/pci/pcie/aer/ecrc.c index 26d3cac9e635..039efb606e31 100644 --- a/drivers/pci/pcie/aer/ecrc.c +++ b/drivers/pci/pcie/aer/ecrc.c @@ -1,8 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 /* - * Enables/disables PCIe ECRC checking. 
+ * Enable/disable PCIe ECRC checking * - * (C) Copyright 2009 Hewlett-Packard Development Company, L.P. + * (C) Copyright 2009 Hewlett-Packard Development Company, L.P. * Andrew Patterson <andrew.patterson@hp.com> */ @@ -40,7 +40,7 @@ static int enable_ecrc_checking(struct pci_dev *dev) if (!pci_is_pcie(dev)) return -ENODEV; - pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); + pos = dev->aer_cap; if (!pos) return -ENODEV; @@ -68,7 +68,7 @@ static int disable_ecrc_checking(struct pci_dev *dev) if (!pci_is_pcie(dev)) return -ENODEV; - pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); + pos = dev->aer_cap; if (!pos) return -ENODEV; diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c index 57feef2ecfe7..f76eb7704f64 100644 --- a/drivers/pci/pcie/aspm.c +++ b/drivers/pci/pcie/aspm.c @@ -1,7 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * File: drivers/pci/pcie/aspm.c - * Enabling PCIe link L0s/L1 state and Clock Power Management + * Enable PCIe link L0s/L1 state and Clock Power Management * * Copyright (C) 2007 Intel * Copyright (C) Zhang Yanmin (yanmin.zhang@intel.com) @@ -228,6 +227,24 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link) if (!(reg16 & PCI_EXP_LNKSTA_SLC)) same_clock = 0; + /* Port might be already in common clock mode */ + pcie_capability_read_word(parent, PCI_EXP_LNKCTL, ®16); + if (same_clock && (reg16 & PCI_EXP_LNKCTL_CCC)) { + bool consistent = true; + + list_for_each_entry(child, &linkbus->devices, bus_list) { + pcie_capability_read_word(child, PCI_EXP_LNKCTL, + ®16); + if (!(reg16 & PCI_EXP_LNKCTL_CCC)) { + consistent = false; + break; + } + } + if (consistent) + return; + pci_warn(parent, "ASPM: current common clock configuration is broken, reconfiguring\n"); + } + /* Configure downstream component, all functions */ list_for_each_entry(child, &linkbus->devices, bus_list) { pcie_capability_read_word(child, PCI_EXP_LNKCTL, ®16); @@ -322,7 +339,7 @@ static u32 calc_l1ss_pwron(struct pci_dev *pdev, u32 scale, u32 val) static void encode_l12_threshold(u32 threshold_us, u32 *scale, u32 *value) { - u64 threshold_ns = threshold_us * 1000; + u32 threshold_ns = threshold_us * 1000; /* See PCIe r3.1, sec 7.33.3 and sec 6.18 */ if (threshold_ns < 32) { diff --git a/drivers/pci/pcie/pcie-dpc.c b/drivers/pci/pcie/dpc.c index 38e40c6c576f..8c57d607e603 100644 --- a/drivers/pci/pcie/pcie-dpc.c +++ b/drivers/pci/pcie/dpc.c @@ -10,7 +10,8 @@ #include <linux/interrupt.h> #include <linux/init.h> #include <linux/pci.h> -#include <linux/pcieport_if.h> + +#include "portdrv.h" #include "../pci.h" #include "aer/aerdrv.h" diff --git a/drivers/pci/pcie/pme.c b/drivers/pci/pcie/pme.c index 5480f54f7612..3ed67676ea2a 100644 --- a/drivers/pci/pcie/pme.c +++ b/drivers/pci/pcie/pme.c @@ -14,7 +14,6 @@ #include <linux/init.h> #include <linux/interrupt.h> #include <linux/device.h> -#include <linux/pcieport_if.h> #include <linux/pm_runtime.h> #include "../pci.h" diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h index a854bc569117..d0c6783dbfe3 100644 --- a/drivers/pci/pcie/portdrv.h +++ b/drivers/pci/pcie/portdrv.h @@ -1,6 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ /* - * File: portdrv.h * Purpose: PCI Express Port Bus Driver's Internal Data Structures * * Copyright (C) 2004 Intel @@ -12,7 +11,66 @@ #include <linux/compiler.h> -#define PCIE_PORT_DEVICE_MAXSERVICES 5 +extern bool pcie_ports_native; + +/* Service Type */ +#define PCIE_PORT_SERVICE_PME_SHIFT 0 /* Power Management Event */ +#define PCIE_PORT_SERVICE_PME (1 << 
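On the encode_l12_threshold() change above: the value passed in is the LTR L1.2 threshold derived from T_POWER_ON plus the common-mode restore time, a few milliseconds at the outside, so threshold_us * 1000 fits comfortably in 32 bits and nothing is lost by dropping the u64.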
PCIE_PORT_SERVICE_PME_SHIFT) +#define PCIE_PORT_SERVICE_AER_SHIFT 1 /* Advanced Error Reporting */ +#define PCIE_PORT_SERVICE_AER (1 << PCIE_PORT_SERVICE_AER_SHIFT) +#define PCIE_PORT_SERVICE_HP_SHIFT 2 /* Native Hotplug */ +#define PCIE_PORT_SERVICE_HP (1 << PCIE_PORT_SERVICE_HP_SHIFT) +#define PCIE_PORT_SERVICE_DPC_SHIFT 3 /* Downstream Port Containment */ +#define PCIE_PORT_SERVICE_DPC (1 << PCIE_PORT_SERVICE_DPC_SHIFT) + +#define PCIE_PORT_DEVICE_MAXSERVICES 4 + +/* Port Type */ +#define PCIE_ANY_PORT (~0) + +struct pcie_device { + int irq; /* Service IRQ/MSI/MSI-X Vector */ + struct pci_dev *port; /* Root/Upstream/Downstream Port */ + u32 service; /* Port service this device represents */ + void *priv_data; /* Service Private Data */ + struct device device; /* Generic Device Interface */ +}; +#define to_pcie_device(d) container_of(d, struct pcie_device, device) + +static inline void set_service_data(struct pcie_device *dev, void *data) +{ + dev->priv_data = data; +} + +static inline void *get_service_data(struct pcie_device *dev) +{ + return dev->priv_data; +} + +struct pcie_port_service_driver { + const char *name; + int (*probe) (struct pcie_device *dev); + void (*remove) (struct pcie_device *dev); + int (*suspend) (struct pcie_device *dev); + int (*resume) (struct pcie_device *dev); + + /* Device driver may resume normal operations */ + void (*error_resume)(struct pci_dev *dev); + + /* Link Reset Capability - AER service driver specific */ + pci_ers_result_t (*reset_link) (struct pci_dev *dev); + + int port_type; /* Type of the port this driver can handle */ + u32 service; /* Port service this device represents */ + + struct device_driver driver; +}; +#define to_service_driver(d) \ + container_of(d, struct pcie_port_service_driver, driver) + +int pcie_port_service_register(struct pcie_port_service_driver *new); +void pcie_port_service_unregister(struct pcie_port_service_driver *new); + /* * The PCIe Capability Interrupt Message Number (PCIe r3.1, sec 7.8.2) must * be one of the first 32 MSI-X entries. 
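The service-driver interface that previously lived in the public <linux/pcieport_if.h> is now private to drivers/pci/pcie/, so only the in-tree port services can use it. For orientation, a skeletal service driver built on the structures above might look as follows; the names and the choice of service bit are invented, and the real services (PME, AER, hotplug, DPC) each claim exactly one of the four bits:

        static int example_service_probe(struct pcie_device *dev)
        {
                /* dev->port is the Root/Downstream Port, dev->irq its vector */
                set_service_data(dev, NULL);
                return 0;
        }

        static void example_service_remove(struct pcie_device *dev)
        {
        }

        static struct pcie_port_service_driver example_service_driver = {
                .name           = "example_service",
                .port_type      = PCIE_ANY_PORT,
                .service        = PCIE_PORT_SERVICE_PME,   /* hypothetical */
                .probe          = example_service_probe,
                .remove         = example_service_remove,
        };

        static int __init example_service_init(void)
        {
                return pcie_port_service_register(&example_service_driver);
        }
        device_initcall(example_service_init);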
Per PCI r3.0, sec 6.8.3.1, MSI @@ -34,20 +92,6 @@ void pcie_port_bus_unregister(void); struct pci_dev; -void pcie_clear_root_pme_status(struct pci_dev *dev); - -#ifdef CONFIG_HOTPLUG_PCI_PCIE -extern bool pciehp_msi_disabled; - -static inline bool pciehp_no_msi(void) -{ - return pciehp_msi_disabled; -} - -#else /* !CONFIG_HOTPLUG_PCI_PCIE */ -static inline bool pciehp_no_msi(void) { return false; } -#endif /* !CONFIG_HOTPLUG_PCI_PCIE */ - #ifdef CONFIG_PCIE_PME extern bool pcie_pme_msi_disabled; @@ -68,15 +112,4 @@ static inline bool pcie_pme_no_msi(void) { return false; } static inline void pcie_pme_interrupt_enable(struct pci_dev *dev, bool en) {} #endif /* !CONFIG_PCIE_PME */ -#ifdef CONFIG_ACPI -void pcie_port_acpi_setup(struct pci_dev *port, int *mask); - -static inline void pcie_port_platform_notify(struct pci_dev *port, int *mask) -{ - pcie_port_acpi_setup(port, mask); -} -#else /* !CONFIG_ACPI */ -static inline void pcie_port_platform_notify(struct pci_dev *port, int *mask){} -#endif /* !CONFIG_ACPI */ - #endif /* _PORTDRV_H_ */ diff --git a/drivers/pci/pcie/portdrv_acpi.c b/drivers/pci/pcie/portdrv_acpi.c index 319c94976873..8ab5d434b9c6 100644 --- a/drivers/pci/pcie/portdrv_acpi.c +++ b/drivers/pci/pcie/portdrv_acpi.c @@ -10,7 +10,6 @@ #include <linux/errno.h> #include <linux/acpi.h> #include <linux/pci-acpi.h> -#include <linux/pcieport_if.h> #include "aer/aerdrv.h" #include "../pci.h" @@ -48,11 +47,11 @@ void pcie_port_acpi_setup(struct pci_dev *port, int *srv_mask) flags = root->osc_control_set; - *srv_mask = PCIE_PORT_SERVICE_VC | PCIE_PORT_SERVICE_DPC; + *srv_mask = 0; if (flags & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL) *srv_mask |= PCIE_PORT_SERVICE_HP; if (flags & OSC_PCI_EXPRESS_PME_CONTROL) *srv_mask |= PCIE_PORT_SERVICE_PME; if (flags & OSC_PCI_EXPRESS_AER_CONTROL) - *srv_mask |= PCIE_PORT_SERVICE_AER; + *srv_mask |= PCIE_PORT_SERVICE_AER | PCIE_PORT_SERVICE_DPC; } diff --git a/drivers/pci/pcie/portdrv_bus.c b/drivers/pci/pcie/portdrv_bus.c deleted file mode 100644 index f0fba552a0e2..000000000000 --- a/drivers/pci/pcie/portdrv_bus.c +++ /dev/null @@ -1,56 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * File: portdrv_bus.c - * Purpose: PCI Express Port Bus Driver's Bus Overloading Functions - * - * Copyright (C) 2004 Intel - * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com) - */ - -#include <linux/module.h> -#include <linux/pci.h> -#include <linux/kernel.h> -#include <linux/errno.h> -#include <linux/pm.h> - -#include <linux/pcieport_if.h> -#include "portdrv.h" - -static int pcie_port_bus_match(struct device *dev, struct device_driver *drv); - -struct bus_type pcie_port_bus_type = { - .name = "pci_express", - .match = pcie_port_bus_match, -}; -EXPORT_SYMBOL_GPL(pcie_port_bus_type); - -static int pcie_port_bus_match(struct device *dev, struct device_driver *drv) -{ - struct pcie_device *pciedev; - struct pcie_port_service_driver *driver; - - if (drv->bus != &pcie_port_bus_type || dev->bus != &pcie_port_bus_type) - return 0; - - pciedev = to_pcie_device(dev); - driver = to_service_driver(drv); - - if (driver->service != pciedev->service) - return 0; - - if ((driver->port_type != PCIE_ANY_PORT) && - (driver->port_type != pci_pcie_type(pciedev->port))) - return 0; - - return 1; -} - -int pcie_port_bus_register(void) -{ - return bus_register(&pcie_port_bus_type); -} - -void pcie_port_bus_unregister(void) -{ - bus_unregister(&pcie_port_bus_type); -} diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c index ef3bad4ad010..c9c0663db282 100644 --- 
a/drivers/pci/pcie/portdrv_core.c +++ b/drivers/pci/pcie/portdrv_core.c @@ -1,6 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 /* - * File: portdrv_core.c * Purpose: PCI Express Port Bus Driver's Core Functions * * Copyright (C) 2004 Intel @@ -15,23 +14,11 @@ #include <linux/pm_runtime.h> #include <linux/string.h> #include <linux/slab.h> -#include <linux/pcieport_if.h> #include <linux/aer.h> #include "../pci.h" #include "portdrv.h" -bool pciehp_msi_disabled; - -static int __init pciehp_setup(char *str) -{ - if (!strncmp(str, "nomsi", 5)) - pciehp_msi_disabled = true; - - return 1; -} -__setup("pcie_hp=", pciehp_setup); - /** * release_pcie_device - free PCI Express port service device structure * @dev: Port service device to release @@ -52,7 +39,7 @@ static void release_pcie_device(struct device *dev) static int pcie_message_numbers(struct pci_dev *dev, int mask, u32 *pme, u32 *aer, u32 *dpc) { - u32 nvec = 0, pos, reg32; + u32 nvec = 0, pos; u16 reg16; /* @@ -68,8 +55,11 @@ static int pcie_message_numbers(struct pci_dev *dev, int mask, nvec = *pme + 1; } +#ifdef CONFIG_PCIEAER if (mask & PCIE_PORT_SERVICE_AER) { - pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); + u32 reg32; + + pos = dev->aer_cap; if (pos) { pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, ®32); @@ -77,6 +67,7 @@ static int pcie_message_numbers(struct pci_dev *dev, int mask, nvec = max(nvec, *aer + 1); } } +#endif if (mask & PCIE_PORT_SERVICE_DPC) { pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DPC); @@ -169,16 +160,13 @@ static int pcie_init_service_irqs(struct pci_dev *dev, int *irqs, int mask) irqs[i] = -1; /* - * If we support PME or hotplug, but we can't use MSI/MSI-X for - * them, we have to fall back to INTx or other interrupts, e.g., a - * system shared interrupt. + * If we support PME but can't use MSI/MSI-X for it, we have to + * fall back to INTx or other interrupts, e.g., a system shared + * interrupt. */ if ((mask & PCIE_PORT_SERVICE_PME) && pcie_pme_no_msi()) goto legacy_irq; - if ((mask & PCIE_PORT_SERVICE_HP) && pciehp_no_msi()) - goto legacy_irq; - /* Try to use MSI-X or MSI if supported */ if (pcie_port_enable_irq_vec(dev, irqs, mask) == 0) return 0; @@ -189,10 +177,8 @@ legacy_irq: if (ret < 0) return -ENODEV; - for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++) { - if (i != PCIE_PORT_SERVICE_VC_SHIFT) - irqs[i] = pci_irq_vector(dev, 0); - } + for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++) + irqs[i] = pci_irq_vector(dev, 0); return 0; } @@ -209,23 +195,13 @@ legacy_irq: */ static int get_port_device_capability(struct pci_dev *dev) { + struct pci_host_bridge *host = pci_find_host_bridge(dev->bus); int services = 0; - int cap_mask = 0; - - if (pcie_ports_disabled) - return 0; - - cap_mask = PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP - | PCIE_PORT_SERVICE_VC; - if (pci_aer_available()) - cap_mask |= PCIE_PORT_SERVICE_AER | PCIE_PORT_SERVICE_DPC; - if (pcie_ports_auto) - pcie_port_platform_notify(dev, &cap_mask); - - /* Hot-Plug Capable */ - if ((cap_mask & PCIE_PORT_SERVICE_HP) && dev->is_hotplug_bridge) { + if (dev->is_hotplug_bridge && + (pcie_ports_native || host->native_hotplug)) { services |= PCIE_PORT_SERVICE_HP; + /* * Disable hot-plug interrupts in case they have been enabled * by the BIOS and the hot-plug service driver is not loaded. 
@@ -233,23 +209,29 @@ static int get_port_device_capability(struct pci_dev *dev) pcie_capability_clear_word(dev, PCI_EXP_SLTCTL, PCI_EXP_SLTCTL_CCIE | PCI_EXP_SLTCTL_HPIE); } - /* AER capable */ - if ((cap_mask & PCIE_PORT_SERVICE_AER) - && pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR)) { + +#ifdef CONFIG_PCIEAER + if (dev->aer_cap && pci_aer_available() && + (pcie_ports_native || host->native_aer)) { services |= PCIE_PORT_SERVICE_AER; + /* * Disable AER on this port in case it's been enabled by the * BIOS (the AER service driver will enable it when necessary). */ pci_disable_pcie_error_reporting(dev); } - /* VC support */ - if (pci_find_ext_capability(dev, PCI_EXT_CAP_ID_VC)) - services |= PCIE_PORT_SERVICE_VC; - /* Root ports are capable of generating PME too */ - if ((cap_mask & PCIE_PORT_SERVICE_PME) - && pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) { +#endif + + /* + * Root ports are capable of generating PME too. Root Complex + * Event Collectors can also generate PMEs, but we don't handle + * those yet. + */ + if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT && + (pcie_ports_native || host->native_pme)) { services |= PCIE_PORT_SERVICE_PME; + /* * Disable PME interrupt on this port in case it's been enabled * by the BIOS (the PME service driver will enable it when @@ -257,7 +239,9 @@ static int get_port_device_capability(struct pci_dev *dev) */ pcie_pme_interrupt_enable(dev, false); } - if (pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DPC)) + + if (pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DPC) && + pci_aer_available() && services & PCIE_PORT_SERVICE_AER) services |= PCIE_PORT_SERVICE_DPC; return services; @@ -335,7 +319,7 @@ int pcie_port_device_register(struct pci_dev *dev) */ status = pcie_init_service_irqs(dev, irqs, capabilities); if (status) { - capabilities &= PCIE_PORT_SERVICE_VC | PCIE_PORT_SERVICE_HP; + capabilities &= PCIE_PORT_SERVICE_HP; if (!capabilities) goto error_disable; } diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c index fb1c1bb87316..973f1b80a038 100644 --- a/drivers/pci/pcie/portdrv_pci.c +++ b/drivers/pci/pcie/portdrv_pci.c @@ -1,9 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 /* - * File: portdrv_pci.c * Purpose: PCI Express Port Bus Driver * Author: Tom Nguyen <tom.l.nguyen@intel.com> - * Version: v1.0 * * Copyright (C) 2004 Intel * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com) @@ -15,10 +13,8 @@ #include <linux/pm.h> #include <linux/pm_runtime.h> #include <linux/init.h> -#include <linux/pcieport_if.h> #include <linux/aer.h> #include <linux/dmi.h> -#include <linux/pci-aspm.h> #include "../pci.h" #include "portdrv.h" @@ -27,22 +23,18 @@ bool pcie_ports_disabled; /* - * If this switch is set, ACPI _OSC will be used to determine whether or not to - * enable PCIe port native services. + * If the user specified "pcie_ports=native", use the PCIe services regardless + * of whether the platform has given us permission. On ACPI systems, this + * means we ignore _OSC. 
*/ -bool pcie_ports_auto = true; +bool pcie_ports_native; static int __init pcie_port_setup(char *str) { - if (!strncmp(str, "compat", 6)) { + if (!strncmp(str, "compat", 6)) pcie_ports_disabled = true; - } else if (!strncmp(str, "native", 6)) { - pcie_ports_disabled = false; - pcie_ports_auto = false; - } else if (!strncmp(str, "auto", 4)) { - pcie_ports_disabled = false; - pcie_ports_auto = true; - } + else if (!strncmp(str, "native", 6)) + pcie_ports_native = true; return 1; } @@ -50,15 +42,6 @@ __setup("pcie_ports=", pcie_port_setup); /* global data */ -/** - * pcie_clear_root_pme_status - Clear root port PME interrupt status. - * @dev: PCIe root port or event collector. - */ -void pcie_clear_root_pme_status(struct pci_dev *dev) -{ - pcie_capability_set_dword(dev, PCI_EXP_RTSTA, PCI_EXP_RTSTA_PME); -} - static int pcie_portdrv_restore_config(struct pci_dev *dev) { int retval; @@ -71,20 +54,6 @@ static int pcie_portdrv_restore_config(struct pci_dev *dev) } #ifdef CONFIG_PM -static int pcie_port_resume_noirq(struct device *dev) -{ - struct pci_dev *pdev = to_pci_dev(dev); - - /* - * Some BIOSes forget to clear Root PME Status bits after system wakeup - * which breaks ACPI-based runtime wakeup on PCI Express, so clear those - * bits now just in case (shouldn't hurt). - */ - if (pci_pcie_type(pdev) == PCI_EXP_TYPE_ROOT_PORT) - pcie_clear_root_pme_status(pdev); - return 0; -} - static int pcie_port_runtime_suspend(struct device *dev) { return to_pci_dev(dev)->bridge_d3 ? 0 : -EBUSY; @@ -112,7 +81,6 @@ static const struct dev_pm_ops pcie_portdrv_pm_ops = { .thaw = pcie_port_device_resume, .poweroff = pcie_port_device_suspend, .restore = pcie_port_device_resume, - .resume_noirq = pcie_port_resume_noirq, .runtime_suspend = pcie_port_runtime_suspend, .runtime_resume = pcie_port_runtime_resume, .runtime_idle = pcie_port_runtime_idle, @@ -283,22 +251,11 @@ static const struct dmi_system_id pcie_portdrv_dmi_table[] __initconst = { static int __init pcie_portdrv_init(void) { - int retval; - if (pcie_ports_disabled) - return pci_register_driver(&pcie_portdriver); + return -EACCES; dmi_check_system(pcie_portdrv_dmi_table); - retval = pcie_port_bus_register(); - if (retval) { - printk(KERN_WARNING "PCIE: bus_register error: %d\n", retval); - goto out; - } - retval = pci_register_driver(&pcie_portdriver); - if (retval) - pcie_port_bus_unregister(); - out: - return retval; + return pci_register_driver(&pcie_portdriver); } device_initcall(pcie_portdrv_init); diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c index 3c365dc996e7..ac91b6fd0bcd 100644 --- a/drivers/pci/probe.c +++ b/drivers/pci/probe.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 /* - * probe.c - PCI detection and setup code + * PCI detection and setup code */ #include <linux/kernel.h> @@ -330,6 +330,10 @@ static void pci_read_bases(struct pci_dev *dev, unsigned int howmany, int rom) if (dev->non_compliant_bars) return; + /* Per PCIe r4.0, sec 9.3.4.1.11, the VF BARs are all RO Zero */ + if (dev->is_virtfn) + return; + for (pos = 0; pos < howmany; pos++) { struct resource *res = &dev->resource[pos]; reg = PCI_BASE_ADDRESS_0 + (pos << 2); @@ -541,6 +545,16 @@ struct pci_host_bridge *pci_alloc_host_bridge(size_t priv) INIT_LIST_HEAD(&bridge->windows); bridge->dev.release = pci_release_host_bridge_dev; + /* + * We assume we can manage these PCIe features. Some systems may + * reserve these for use by the platform itself, e.g., an ACPI BIOS + * may implement its own AER handling and use _OSC to prevent the + * OS from interfering. 
+ */ + bridge->native_aer = 1; + bridge->native_hotplug = 1; + bridge->native_pme = 1; + return bridge; } EXPORT_SYMBOL(pci_alloc_host_bridge); @@ -593,7 +607,7 @@ const unsigned char pcie_link_speed[] = { PCIE_SPEED_2_5GT, /* 1 */ PCIE_SPEED_5_0GT, /* 2 */ PCIE_SPEED_8_0GT, /* 3 */ - PCI_SPEED_UNKNOWN, /* 4 */ + PCIE_SPEED_16_0GT, /* 4 */ PCI_SPEED_UNKNOWN, /* 5 */ PCI_SPEED_UNKNOWN, /* 6 */ PCI_SPEED_UNKNOWN, /* 7 */ @@ -1231,6 +1245,13 @@ static void pci_read_irq(struct pci_dev *dev) { unsigned char irq; + /* VFs are not allowed to use INTx, so skip the config reads */ + if (dev->is_virtfn) { + dev->pin = 0; + dev->irq = 0; + return; + } + pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &irq); dev->pin = irq; if (irq) @@ -1390,6 +1411,43 @@ int pci_cfg_space_size(struct pci_dev *dev) return PCI_CFG_SPACE_SIZE; } +static u32 pci_class(struct pci_dev *dev) +{ + u32 class; + +#ifdef CONFIG_PCI_IOV + if (dev->is_virtfn) + return dev->physfn->sriov->class; +#endif + pci_read_config_dword(dev, PCI_CLASS_REVISION, &class); + return class; +} + +static void pci_subsystem_ids(struct pci_dev *dev, u16 *vendor, u16 *device) +{ +#ifdef CONFIG_PCI_IOV + if (dev->is_virtfn) { + *vendor = dev->physfn->sriov->subsystem_vendor; + *device = dev->physfn->sriov->subsystem_device; + return; + } +#endif + pci_read_config_word(dev, PCI_SUBSYSTEM_VENDOR_ID, vendor); + pci_read_config_word(dev, PCI_SUBSYSTEM_ID, device); +} + +static u8 pci_hdr_type(struct pci_dev *dev) +{ + u8 hdr_type; + +#ifdef CONFIG_PCI_IOV + if (dev->is_virtfn) + return dev->physfn->sriov->hdr_type; +#endif + pci_read_config_byte(dev, PCI_HEADER_TYPE, &hdr_type); + return hdr_type; +} + #define LEGACY_IO_RESOURCE (IORESOURCE_IO | IORESOURCE_PCI_FIXED) static void pci_msi_setup_pci_dev(struct pci_dev *dev) @@ -1455,8 +1513,7 @@ int pci_setup_device(struct pci_dev *dev) struct pci_bus_region region; struct resource *res; - if (pci_read_config_byte(dev, PCI_HEADER_TYPE, &hdr_type)) - return -EIO; + hdr_type = pci_hdr_type(dev); dev->sysdata = dev->bus->sysdata; dev->dev.parent = dev->bus->bridge; @@ -1478,7 +1535,8 @@ int pci_setup_device(struct pci_dev *dev) dev->bus->number, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn)); - pci_read_config_dword(dev, PCI_CLASS_REVISION, &class); + class = pci_class(dev); + dev->revision = class & 0xff; dev->class = class >> 8; /* upper 3 bytes */ @@ -1518,8 +1576,8 @@ int pci_setup_device(struct pci_dev *dev) goto bad; pci_read_irq(dev); pci_read_bases(dev, 6, PCI_ROM_ADDRESS); - pci_read_config_word(dev, PCI_SUBSYSTEM_VENDOR_ID, &dev->subsystem_vendor); - pci_read_config_word(dev, PCI_SUBSYSTEM_ID, &dev->subsystem_device); + + pci_subsystem_ids(dev, &dev->subsystem_vendor, &dev->subsystem_device); /* * Do the ugly legacy mode stuff here rather than broken chip @@ -2122,6 +2180,9 @@ static void pci_init_capabilities(struct pci_dev *dev) /* Advanced Error Reporting */ pci_aer_init(dev); + + if (pci_probe_reset_function(dev) == 0) + dev->reset_fn = 1; } /* diff --git a/drivers/pci/proc.c b/drivers/pci/proc.c index 58a662e3c4a6..1ee8927a0635 100644 --- a/drivers/pci/proc.c +++ b/drivers/pci/proc.c @@ -1,8 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 /* - * Procfs interface for the PCI bus. 
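The pci_class(), pci_subsystem_ids() and pci_hdr_type() wrappers above, together with the VF checks added to pci_read_bases() and pci_read_irq(), mean VF enumeration no longer issues config reads for values the SR-IOV model fixes anyway: VF BARs are RO Zero, VFs may not use INTx, and the class, header type and subsystem IDs come from the fields newly cached in struct pci_sriov (see the pci.h hunk earlier). Saving those config cycles per VF adds up when a PF exposes a large number of them.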
+ * Procfs interface for the PCI bus * - * Copyright (c) 1997--1999 Martin Mares <mj@ucw.cz> + * Copyright (c) 1997--1999 Martin Mares <mj@ucw.cz> */ #include <linux/init.h> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c index 36db2098ee18..26141b1946ee 100644 --- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -1,15 +1,15 @@ // SPDX-License-Identifier: GPL-2.0 /* - * This file contains work-arounds for many known PCI hardware - * bugs. Devices present only on certain architectures (host - * bridges et cetera) should be handled in arch-specific code. + * This file contains work-arounds for many known PCI hardware bugs. + * Devices present only on certain architectures (host bridges et cetera) + * should be handled in arch-specific code. * - * Note: any quirks for hotpluggable devices must _NOT_ be declared __init. + * Note: any quirks for hotpluggable devices must _NOT_ be declared __init. * - * Copyright (c) 1999 Martin Mares <mj@ucw.cz> + * Copyright (c) 1999 Martin Mares <mj@ucw.cz> * - * Init/reset quirks for USB host controllers should be in the - * USB quirks file, where their drivers can access reuse it. + * Init/reset quirks for USB host controllers should be in the USB quirks + * file, where their drivers can use them. */ #include <linux/types.h> @@ -1968,31 +1968,6 @@ static void quirk_netmos(struct pci_dev *dev) DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_NETMOS, PCI_ANY_ID, PCI_CLASS_COMMUNICATION_SERIAL, 8, quirk_netmos); -/* - * Quirk non-zero PCI functions to route VPD access through function 0 for - * devices that share VPD resources between functions. The functions are - * expected to be identical devices. - */ -static void quirk_f0_vpd_link(struct pci_dev *dev) -{ - struct pci_dev *f0; - - if (!PCI_FUNC(dev->devfn)) - return; - - f0 = pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); - if (!f0) - return; - - if (f0->vpd && dev->class == f0->class && - dev->vendor == f0->vendor && dev->device == f0->device) - dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0; - - pci_dev_put(f0); -} -DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, - PCI_CLASS_NETWORK_ETHERNET, 8, quirk_f0_vpd_link); - static void quirk_e100_interrupt(struct pci_dev *dev) { u16 command, pmcsr; @@ -2183,83 +2158,6 @@ static void quirk_via_cx700_pci_parking_caching(struct pci_dev *dev) } DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, 0x324e, quirk_via_cx700_pci_parking_caching); -/* - * If a device follows the VPD format spec, the PCI core will not read or - * write past the VPD End Tag. But some vendors do not follow the VPD - * format spec, so we can't tell how much data is safe to access. Devices - * may behave unpredictably if we access too much. Blacklist these devices - * so we don't touch VPD at all. 
- */ -static void quirk_blacklist_vpd(struct pci_dev *dev) -{ - if (dev->vpd) { - dev->vpd->len = 0; - pci_warn(dev, FW_BUG "disabling VPD access (can't determine size of non-standard VPD format)\n"); - } -} - -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0060, quirk_blacklist_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x007c, quirk_blacklist_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0413, quirk_blacklist_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0078, quirk_blacklist_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0079, quirk_blacklist_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0073, quirk_blacklist_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0071, quirk_blacklist_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005b, quirk_blacklist_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x002f, quirk_blacklist_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005d, quirk_blacklist_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID, - quirk_blacklist_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_QLOGIC, 0x2261, quirk_blacklist_vpd); - -/* - * For Broadcom 5706, 5708, 5709 rev. A nics, any read beyond the - * VPD end tag will hang the device. This problem was initially - * observed when a vpd entry was created in sysfs - * ('/sys/bus/pci/devices/<id>/vpd'). A read to this sysfs entry - * will dump 32k of data. Reading a full 32k will cause an access - * beyond the VPD end tag causing the device to hang. Once the device - * is hung, the bnx2 driver will not be able to reset the device. - * We believe that it is legal to read beyond the end tag and - * therefore the solution is to limit the read/write length. - */ -static void quirk_brcm_570x_limit_vpd(struct pci_dev *dev) -{ - /* - * Only disable the VPD capability for 5706, 5706S, 5708, - * 5708S and 5709 rev. 
A - */ - if ((dev->device == PCI_DEVICE_ID_NX2_5706) || - (dev->device == PCI_DEVICE_ID_NX2_5706S) || - (dev->device == PCI_DEVICE_ID_NX2_5708) || - (dev->device == PCI_DEVICE_ID_NX2_5708S) || - ((dev->device == PCI_DEVICE_ID_NX2_5709) && - (dev->revision & 0xf0) == 0x0)) { - if (dev->vpd) - dev->vpd->len = 0x80; - } -} - -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, - PCI_DEVICE_ID_NX2_5706, - quirk_brcm_570x_limit_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, - PCI_DEVICE_ID_NX2_5706S, - quirk_brcm_570x_limit_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, - PCI_DEVICE_ID_NX2_5708, - quirk_brcm_570x_limit_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, - PCI_DEVICE_ID_NX2_5708S, - quirk_brcm_570x_limit_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, - PCI_DEVICE_ID_NX2_5709, - quirk_brcm_570x_limit_vpd); -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, - PCI_DEVICE_ID_NX2_5709S, - quirk_brcm_570x_limit_vpd); - static void quirk_brcm_5719_limit_mrrs(struct pci_dev *dev) { u32 rev; @@ -3086,16 +2984,10 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0e0d, quirk_intel_ntb); static ktime_t fixup_debug_start(struct pci_dev *dev, void (*fn)(struct pci_dev *dev)) { - ktime_t calltime = 0; + if (initcall_debug) + pci_info(dev, "calling %pF @ %i\n", fn, task_pid_nr(current)); - pci_dbg(dev, "calling %pF\n", fn); - if (initcall_debug) { - pr_debug("calling %pF @ %i for %s\n", - fn, task_pid_nr(current), dev_name(&dev->dev)); - calltime = ktime_get(); - } - - return calltime; + return ktime_get(); } static void fixup_debug_report(struct pci_dev *dev, ktime_t calltime, @@ -3104,13 +2996,11 @@ static void fixup_debug_report(struct pci_dev *dev, ktime_t calltime, ktime_t delta, rettime; unsigned long long duration; - if (initcall_debug) { - rettime = ktime_get(); - delta = ktime_sub(rettime, calltime); - duration = (unsigned long long) ktime_to_ns(delta) >> 10; - pr_debug("pci fixup %pF returned after %lld usecs for %s\n", - fn, duration, dev_name(&dev->dev)); - } + rettime = ktime_get(); + delta = ktime_sub(rettime, calltime); + duration = (unsigned long long) ktime_to_ns(delta) >> 10; + if (initcall_debug || duration > 10000) + pci_info(dev, "%pF took %lld usecs\n", fn, duration); } /* @@ -3399,32 +3289,6 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PORT_RIDGE, quirk_thunderbolt_hotplug_msi); -static void quirk_chelsio_extend_vpd(struct pci_dev *dev) -{ - int chip = (dev->device & 0xf000) >> 12; - int func = (dev->device & 0x0f00) >> 8; - int prod = (dev->device & 0x00ff) >> 0; - - /* - * If this is a T3-based adapter, there's a 1KB VPD area at offset - * 0xc00 which contains the preferred VPD values. If this is a T4 or - * later based adapter, the special VPD is at offset 0x400 for the - * Physical Functions (the SR-IOV Virtual Functions have no VPD - * Capabilities). The PCI VPD Access core routines will normally - * compute the size of the VPD by parsing the VPD Data Structure at - * offset 0x000. This will result in silent failures when attempting - * to accesses these other VPD areas which are beyond those computed - * limits. - */ - if (chip == 0x0 && prod >= 0x20) - pci_set_vpd_size(dev, 8192); - else if (chip >= 0x4 && func < 0x8) - pci_set_vpd_size(dev, 2048); -} - -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID, - quirk_chelsio_extend_vpd); - #ifdef CONFIG_ACPI /* * Apple: Shutdown Cactus Ridge Thunderbolt controller. 
@@ -3885,6 +3749,9 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9182, /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c46 */ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x91a0, quirk_dma_func1_alias); +/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c127 */ +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9220, + quirk_dma_func1_alias); /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c49 */ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9230, quirk_dma_func1_alias); @@ -4505,6 +4372,15 @@ static const struct pci_dev_acs_enabled { { PCI_VENDOR_ID_CAVIUM, PCI_ANY_ID, pci_quirk_cavium_acs }, /* APM X-Gene */ { PCI_VENDOR_ID_AMCC, 0xE004, pci_quirk_xgene_acs }, + /* Ampere Computing */ + { PCI_VENDOR_ID_AMPERE, 0xE005, pci_quirk_xgene_acs }, + { PCI_VENDOR_ID_AMPERE, 0xE006, pci_quirk_xgene_acs }, + { PCI_VENDOR_ID_AMPERE, 0xE007, pci_quirk_xgene_acs }, + { PCI_VENDOR_ID_AMPERE, 0xE008, pci_quirk_xgene_acs }, + { PCI_VENDOR_ID_AMPERE, 0xE009, pci_quirk_xgene_acs }, + { PCI_VENDOR_ID_AMPERE, 0xE00A, pci_quirk_xgene_acs }, + { PCI_VENDOR_ID_AMPERE, 0xE00B, pci_quirk_xgene_acs }, + { PCI_VENDOR_ID_AMPERE, 0xE00C, pci_quirk_xgene_acs }, { 0 } }; diff --git a/drivers/pci/rom.c b/drivers/pci/rom.c index 374a33443be9..a7b5c37a85ec 100644 --- a/drivers/pci/rom.c +++ b/drivers/pci/rom.c @@ -1,11 +1,9 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/rom.c + * PCI ROM access routines * * (C) Copyright 2004 Jon Smirl <jonsmirl@yahoo.com> * (C) Copyright 2004 Silicon Graphics, Inc. Jesse Barnes <jbarnes@sgi.com> - * - * PCI ROM access routines */ #include <linux/kernel.h> #include <linux/export.h> diff --git a/drivers/pci/search.c b/drivers/pci/search.c index bc1e023f1353..2b5f720862d3 100644 --- a/drivers/pci/search.c +++ b/drivers/pci/search.c @@ -1,11 +1,11 @@ // SPDX-License-Identifier: GPL-2.0 /* - * PCI searching functions. + * PCI searching functions * - * Copyright (C) 1993 -- 1997 Drew Eckhardt, Frederic Potter, + * Copyright (C) 1993 -- 1997 Drew Eckhardt, Frederic Potter, * David Mosberger-Tang - * Copyright (C) 1997 -- 2000 Martin Mares <mj@ucw.cz> - * Copyright (C) 2003 -- 2004 Greg Kroah-Hartman <greg@kroah.com> + * Copyright (C) 1997 -- 2000 Martin Mares <mj@ucw.cz> + * Copyright (C) 2003 -- 2004 Greg Kroah-Hartman <greg@kroah.com> */ #include <linux/pci.h> diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c index 3cce29a069e6..072784f55ea5 100644 --- a/drivers/pci/setup-bus.c +++ b/drivers/pci/setup-bus.c @@ -1,16 +1,12 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/setup-bus.c + * Support routines for initializing a PCI subsystem * * Extruded from code written by * Dave Rusling (david.rusling@reo.mts.dec.com) * David Mosberger (davidm@cs.arizona.edu) * David Miller (davem@redhat.com) * - * Support routines for initializing a PCI subsystem. - */ - -/* * Nov 2000, Ivan Kokshaysky <ink@jurassic.park.msu.ru> * PCI-PCI bridges cleanup, sorted resource allocation. 
* Feb 2002, Ivan Kokshaysky <ink@jurassic.park.msu.ru> diff --git a/drivers/pci/setup-irq.c b/drivers/pci/setup-irq.c index 5ad4ee7d7b1e..7129494754dd 100644 --- a/drivers/pci/setup-irq.c +++ b/drivers/pci/setup-irq.c @@ -1,13 +1,11 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/setup-irq.c + * Support routines for initializing a PCI subsystem * * Extruded from code written by * Dave Rusling (david.rusling@reo.mts.dec.com) * David Mosberger (davidm@cs.arizona.edu) * David Miller (davem@redhat.com) - * - * Support routines for initializing a PCI subsystem. */ diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c index 365447240d95..0cd83a4bd4c8 100644 --- a/drivers/pci/setup-res.c +++ b/drivers/pci/setup-res.c @@ -1,18 +1,14 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/setup-res.c + * Support routines for initializing a PCI subsystem * * Extruded from code written by * Dave Rusling (david.rusling@reo.mts.dec.com) * David Mosberger (davidm@cs.arizona.edu) * David Miller (davem@redhat.com) * - * Support routines for initializing a PCI subsystem. - */ - -/* fixed for multiple pci buses, 1999 Andrea Arcangeli <andrea@suse.de> */ - -/* + * Fixed for multiple PCI buses, 1999 Andrea Arcangeli <andrea@suse.de> + * * Nov 2000, Ivan Kokshaysky <ink@jurassic.park.msu.ru> * Resource sorting */ diff --git a/drivers/pci/slot.c b/drivers/pci/slot.c index d10f556dc03e..e634229ece89 100644 --- a/drivers/pci/slot.c +++ b/drivers/pci/slot.c @@ -1,6 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 /* - * drivers/pci/slot.c * Copyright (C) 2006 Matthew Wilcox <matthew@wil.cx> * Copyright (C) 2006-2009 Hewlett-Packard Development Company, L.P. * Alex Chiang <achiang@hp.com> @@ -76,6 +75,7 @@ static const char *pci_bus_speed_strings[] = { "2.5 GT/s PCIe", /* 0x14 */ "5.0 GT/s PCIe", /* 0x15 */ "8.0 GT/s PCIe", /* 0x16 */ + "16.0 GT/s PCIe", /* 0x17 */ }; static ssize_t bus_speed_read(enum pci_bus_speed speed, char *buf) diff --git a/drivers/pci/syscall.c b/drivers/pci/syscall.c index e725f99b5479..d96626c614f5 100644 --- a/drivers/pci/syscall.c +++ b/drivers/pci/syscall.c @@ -1,11 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 /* - * pci_syscall.c - * - * For architectures where we want to allow direct access - * to the PCI config stuff - it would probably be preferable - * on PCs too, but there people just do it by hand with the - * magic northbridge registers.. + * For architectures where we want to allow direct access to the PCI config + * stuff - it would probably be preferable on PCs too, but there people + * just do it by hand with the magic northbridge registers. */ #include <linux/errno.h> diff --git a/drivers/pci/vpd.c b/drivers/pci/vpd.c index 70fba57d6103..8617565ba561 100644 --- a/drivers/pci/vpd.c +++ b/drivers/pci/vpd.c @@ -1,13 +1,465 @@ // SPDX-License-Identifier: GPL-2.0 /* - * File: vpd.c - * Purpose: Provide PCI VPD support + * PCI VPD support * * Copyright (C) 2010 Broadcom Corporation. 
*/ #include <linux/pci.h> +#include <linux/delay.h> #include <linux/export.h> +#include <linux/sched/signal.h> +#include "pci.h" + +/* VPD access through PCI 2.2+ VPD capability */ + +struct pci_vpd_ops { + ssize_t (*read)(struct pci_dev *dev, loff_t pos, size_t count, void *buf); + ssize_t (*write)(struct pci_dev *dev, loff_t pos, size_t count, const void *buf); + int (*set_size)(struct pci_dev *dev, size_t len); +}; + +struct pci_vpd { + const struct pci_vpd_ops *ops; + struct bin_attribute *attr; /* Descriptor for sysfs VPD entry */ + struct mutex lock; + unsigned int len; + u16 flag; + u8 cap; + unsigned int busy:1; + unsigned int valid:1; +}; + +/** + * pci_read_vpd - Read one entry from Vital Product Data + * @dev: pci device struct + * @pos: offset in vpd space + * @count: number of bytes to read + * @buf: pointer to where to store result + */ +ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf) +{ + if (!dev->vpd || !dev->vpd->ops) + return -ENODEV; + return dev->vpd->ops->read(dev, pos, count, buf); +} +EXPORT_SYMBOL(pci_read_vpd); + +/** + * pci_write_vpd - Write entry to Vital Product Data + * @dev: pci device struct + * @pos: offset in vpd space + * @count: number of bytes to write + * @buf: buffer containing write data + */ +ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf) +{ + if (!dev->vpd || !dev->vpd->ops) + return -ENODEV; + return dev->vpd->ops->write(dev, pos, count, buf); +} +EXPORT_SYMBOL(pci_write_vpd); + +/** + * pci_set_vpd_size - Set size of Vital Product Data space + * @dev: pci device struct + * @len: size of vpd space + */ +int pci_set_vpd_size(struct pci_dev *dev, size_t len) +{ + if (!dev->vpd || !dev->vpd->ops) + return -ENODEV; + return dev->vpd->ops->set_size(dev, len); +} +EXPORT_SYMBOL(pci_set_vpd_size); + +#define PCI_VPD_MAX_SIZE (PCI_VPD_ADDR_MASK + 1) + +/** + * pci_vpd_size - determine actual size of Vital Product Data + * @dev: pci device struct + * @old_size: current assumed size, also maximum allowed size + */ +static size_t pci_vpd_size(struct pci_dev *dev, size_t old_size) +{ + size_t off = 0; + unsigned char header[1+2]; /* 1 byte tag, 2 bytes length */ + + while (off < old_size && + pci_read_vpd(dev, off, 1, header) == 1) { + unsigned char tag; + + if (header[0] & PCI_VPD_LRDT) { + /* Large Resource Data Type Tag */ + tag = pci_vpd_lrdt_tag(header); + /* Only read length from known tag items */ + if ((tag == PCI_VPD_LTIN_ID_STRING) || + (tag == PCI_VPD_LTIN_RO_DATA) || + (tag == PCI_VPD_LTIN_RW_DATA)) { + if (pci_read_vpd(dev, off+1, 2, + &header[1]) != 2) { + pci_warn(dev, "invalid large VPD tag %02x size at offset %zu", + tag, off + 1); + return 0; + } + off += PCI_VPD_LRDT_TAG_SIZE + + pci_vpd_lrdt_size(header); + } + } else { + /* Short Resource Data Type Tag */ + off += PCI_VPD_SRDT_TAG_SIZE + + pci_vpd_srdt_size(header); + tag = pci_vpd_srdt_tag(header); + } + + if (tag == PCI_VPD_STIN_END) /* End tag descriptor */ + return off; + + if ((tag != PCI_VPD_LTIN_ID_STRING) && + (tag != PCI_VPD_LTIN_RO_DATA) && + (tag != PCI_VPD_LTIN_RW_DATA)) { + pci_warn(dev, "invalid %s VPD tag %02x at offset %zu", + (header[0] & PCI_VPD_LRDT) ? "large" : "short", + tag, off); + return 0; + } + } + return 0; +} + +/* + * Wait for last operation to complete. + * This code has to spin since there is no other notification from the PCI + * hardware. Since the VPD is often implemented by serial attachment to an + * EEPROM, it may take many milliseconds to complete. 
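pci_read_vpd(), pci_write_vpd() and pci_set_vpd_size() stay the exported, driver-facing entry points; what changes is that the whole implementation now lives in this file. A hedged example of a consumer of the read side (the function name and the 128-byte length are arbitrary):

        #include <linux/pci.h>

        static ssize_t example_dump_vpd(struct pci_dev *pdev, u8 *buf)
        {
                ssize_t len;

                /* Returns the number of bytes read or a negative errno */
                len = pci_read_vpd(pdev, 0, 128, buf);
                if (len < 0)
                        pci_warn(pdev, "VPD read failed: %zd\n", len);

                return len;
        }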
+ * + * Returns 0 on success, negative values indicate error. + */ +static int pci_vpd_wait(struct pci_dev *dev) +{ + struct pci_vpd *vpd = dev->vpd; + unsigned long timeout = jiffies + msecs_to_jiffies(125); + unsigned long max_sleep = 16; + u16 status; + int ret; + + if (!vpd->busy) + return 0; + + while (time_before(jiffies, timeout)) { + ret = pci_user_read_config_word(dev, vpd->cap + PCI_VPD_ADDR, + &status); + if (ret < 0) + return ret; + + if ((status & PCI_VPD_ADDR_F) == vpd->flag) { + vpd->busy = 0; + return 0; + } + + if (fatal_signal_pending(current)) + return -EINTR; + + usleep_range(10, max_sleep); + if (max_sleep < 1024) + max_sleep *= 2; + } + + pci_warn(dev, "VPD access failed. This is likely a firmware bug on this device. Contact the card vendor for a firmware update\n"); + return -ETIMEDOUT; +} + +static ssize_t pci_vpd_read(struct pci_dev *dev, loff_t pos, size_t count, + void *arg) +{ + struct pci_vpd *vpd = dev->vpd; + int ret; + loff_t end = pos + count; + u8 *buf = arg; + + if (pos < 0) + return -EINVAL; + + if (!vpd->valid) { + vpd->valid = 1; + vpd->len = pci_vpd_size(dev, vpd->len); + } + + if (vpd->len == 0) + return -EIO; + + if (pos > vpd->len) + return 0; + + if (end > vpd->len) { + end = vpd->len; + count = end - pos; + } + + if (mutex_lock_killable(&vpd->lock)) + return -EINTR; + + ret = pci_vpd_wait(dev); + if (ret < 0) + goto out; + + while (pos < end) { + u32 val; + unsigned int i, skip; + + ret = pci_user_write_config_word(dev, vpd->cap + PCI_VPD_ADDR, + pos & ~3); + if (ret < 0) + break; + vpd->busy = 1; + vpd->flag = PCI_VPD_ADDR_F; + ret = pci_vpd_wait(dev); + if (ret < 0) + break; + + ret = pci_user_read_config_dword(dev, vpd->cap + PCI_VPD_DATA, &val); + if (ret < 0) + break; + + skip = pos & 3; + for (i = 0; i < sizeof(u32); i++) { + if (i >= skip) { + *buf++ = val; + if (++pos == end) + break; + } + val >>= 8; + } + } +out: + mutex_unlock(&vpd->lock); + return ret ? ret : count; +} + +static ssize_t pci_vpd_write(struct pci_dev *dev, loff_t pos, size_t count, + const void *arg) +{ + struct pci_vpd *vpd = dev->vpd; + const u8 *buf = arg; + loff_t end = pos + count; + int ret = 0; + + if (pos < 0 || (pos & 3) || (count & 3)) + return -EINVAL; + + if (!vpd->valid) { + vpd->valid = 1; + vpd->len = pci_vpd_size(dev, vpd->len); + } + + if (vpd->len == 0) + return -EIO; + + if (end > vpd->len) + return -EINVAL; + + if (mutex_lock_killable(&vpd->lock)) + return -EINTR; + + ret = pci_vpd_wait(dev); + if (ret < 0) + goto out; + + while (pos < end) { + u32 val; + + val = *buf++; + val |= *buf++ << 8; + val |= *buf++ << 16; + val |= *buf++ << 24; + + ret = pci_user_write_config_dword(dev, vpd->cap + PCI_VPD_DATA, val); + if (ret < 0) + break; + ret = pci_user_write_config_word(dev, vpd->cap + PCI_VPD_ADDR, + pos | PCI_VPD_ADDR_F); + if (ret < 0) + break; + + vpd->busy = 1; + vpd->flag = 0; + ret = pci_vpd_wait(dev); + if (ret < 0) + break; + + pos += sizeof(u32); + } +out: + mutex_unlock(&vpd->lock); + return ret ? 
ret : count; +} + +static int pci_vpd_set_size(struct pci_dev *dev, size_t len) +{ + struct pci_vpd *vpd = dev->vpd; + + if (len == 0 || len > PCI_VPD_MAX_SIZE) + return -EIO; + + vpd->valid = 1; + vpd->len = len; + + return 0; +} + +static const struct pci_vpd_ops pci_vpd_ops = { + .read = pci_vpd_read, + .write = pci_vpd_write, + .set_size = pci_vpd_set_size, +}; + +static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count, + void *arg) +{ + struct pci_dev *tdev = pci_get_slot(dev->bus, + PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); + ssize_t ret; + + if (!tdev) + return -ENODEV; + + ret = pci_read_vpd(tdev, pos, count, arg); + pci_dev_put(tdev); + return ret; +} + +static ssize_t pci_vpd_f0_write(struct pci_dev *dev, loff_t pos, size_t count, + const void *arg) +{ + struct pci_dev *tdev = pci_get_slot(dev->bus, + PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); + ssize_t ret; + + if (!tdev) + return -ENODEV; + + ret = pci_write_vpd(tdev, pos, count, arg); + pci_dev_put(tdev); + return ret; +} + +static int pci_vpd_f0_set_size(struct pci_dev *dev, size_t len) +{ + struct pci_dev *tdev = pci_get_slot(dev->bus, + PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); + int ret; + + if (!tdev) + return -ENODEV; + + ret = pci_set_vpd_size(tdev, len); + pci_dev_put(tdev); + return ret; +} + +static const struct pci_vpd_ops pci_vpd_f0_ops = { + .read = pci_vpd_f0_read, + .write = pci_vpd_f0_write, + .set_size = pci_vpd_f0_set_size, +}; + +int pci_vpd_init(struct pci_dev *dev) +{ + struct pci_vpd *vpd; + u8 cap; + + cap = pci_find_capability(dev, PCI_CAP_ID_VPD); + if (!cap) + return -ENODEV; + + vpd = kzalloc(sizeof(*vpd), GFP_ATOMIC); + if (!vpd) + return -ENOMEM; + + vpd->len = PCI_VPD_MAX_SIZE; + if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) + vpd->ops = &pci_vpd_f0_ops; + else + vpd->ops = &pci_vpd_ops; + mutex_init(&vpd->lock); + vpd->cap = cap; + vpd->busy = 0; + vpd->valid = 0; + dev->vpd = vpd; + return 0; +} + +void pci_vpd_release(struct pci_dev *dev) +{ + kfree(dev->vpd); +} + +static ssize_t read_vpd_attr(struct file *filp, struct kobject *kobj, + struct bin_attribute *bin_attr, char *buf, + loff_t off, size_t count) +{ + struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj)); + + if (bin_attr->size > 0) { + if (off > bin_attr->size) + count = 0; + else if (count > bin_attr->size - off) + count = bin_attr->size - off; + } + + return pci_read_vpd(dev, off, count, buf); +} + +static ssize_t write_vpd_attr(struct file *filp, struct kobject *kobj, + struct bin_attribute *bin_attr, char *buf, + loff_t off, size_t count) +{ + struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj)); + + if (bin_attr->size > 0) { + if (off > bin_attr->size) + count = 0; + else if (count > bin_attr->size - off) + count = bin_attr->size - off; + } + + return pci_write_vpd(dev, off, count, buf); +} + +void pcie_vpd_create_sysfs_dev_files(struct pci_dev *dev) +{ + int retval; + struct bin_attribute *attr; + + if (!dev->vpd) + return; + + attr = kzalloc(sizeof(*attr), GFP_ATOMIC); + if (!attr) + return; + + sysfs_bin_attr_init(attr); + attr->size = 0; + attr->attr.name = "vpd"; + attr->attr.mode = S_IRUSR | S_IWUSR; + attr->read = read_vpd_attr; + attr->write = write_vpd_attr; + retval = sysfs_create_bin_file(&dev->dev.kobj, attr); + if (retval) { + kfree(attr); + return; + } + + dev->vpd->attr = attr; +} + +void pcie_vpd_remove_sysfs_dev_files(struct pci_dev *dev) +{ + if (dev->vpd && dev->vpd->attr) { + sysfs_remove_bin_file(&dev->dev.kobj, dev->vpd->attr); + kfree(dev->vpd->attr); + } +} int pci_vpd_find_tag(const u8 *buf, unsigned 
int off, unsigned int len, u8 rdt) { @@ -61,3 +513,132 @@ int pci_vpd_find_info_keyword(const u8 *buf, unsigned int off, return -ENOENT; } EXPORT_SYMBOL_GPL(pci_vpd_find_info_keyword); + +#ifdef CONFIG_PCI_QUIRKS +/* + * Quirk non-zero PCI functions to route VPD access through function 0 for + * devices that share VPD resources between functions. The functions are + * expected to be identical devices. + */ +static void quirk_f0_vpd_link(struct pci_dev *dev) +{ + struct pci_dev *f0; + + if (!PCI_FUNC(dev->devfn)) + return; + + f0 = pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0)); + if (!f0) + return; + + if (f0->vpd && dev->class == f0->class && + dev->vendor == f0->vendor && dev->device == f0->device) + dev->dev_flags |= PCI_DEV_FLAGS_VPD_REF_F0; + + pci_dev_put(f0); +} +DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, + PCI_CLASS_NETWORK_ETHERNET, 8, quirk_f0_vpd_link); + +/* + * If a device follows the VPD format spec, the PCI core will not read or + * write past the VPD End Tag. But some vendors do not follow the VPD + * format spec, so we can't tell how much data is safe to access. Devices + * may behave unpredictably if we access too much. Blacklist these devices + * so we don't touch VPD at all. + */ +static void quirk_blacklist_vpd(struct pci_dev *dev) +{ + if (dev->vpd) { + dev->vpd->len = 0; + pci_warn(dev, FW_BUG "disabling VPD access (can't determine size of non-standard VPD format)\n"); + } +} +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0060, quirk_blacklist_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x007c, quirk_blacklist_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0413, quirk_blacklist_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0078, quirk_blacklist_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0079, quirk_blacklist_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0073, quirk_blacklist_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0071, quirk_blacklist_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005b, quirk_blacklist_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x002f, quirk_blacklist_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005d, quirk_blacklist_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID, + quirk_blacklist_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_QLOGIC, 0x2261, quirk_blacklist_vpd); + +/* + * For Broadcom 5706, 5708, 5709 rev. A NICs, any read beyond the + * VPD end tag will hang the device. This problem was initially + * observed when a VPD entry was created in sysfs + * ('/sys/bus/pci/devices/<id>/vpd'). A read to this sysfs entry + * will dump 32k of data. Reading a full 32k will cause an access + * beyond the VPD end tag, causing the device to hang. Once the device + * is hung, the bnx2 driver will not be able to reset the device. + * We believe that it is legal to read beyond the end tag and + * therefore the solution is to limit the read/write length. + */ +static void quirk_brcm_570x_limit_vpd(struct pci_dev *dev) +{ + /* + * Only disable the VPD capability for 5706, 5706S, 5708, + * 5708S and 5709 rev.
A + */ + if ((dev->device == PCI_DEVICE_ID_NX2_5706) || + (dev->device == PCI_DEVICE_ID_NX2_5706S) || + (dev->device == PCI_DEVICE_ID_NX2_5708) || + (dev->device == PCI_DEVICE_ID_NX2_5708S) || + ((dev->device == PCI_DEVICE_ID_NX2_5709) && + (dev->revision & 0xf0) == 0x0)) { + if (dev->vpd) + dev->vpd->len = 0x80; + } +} +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, + PCI_DEVICE_ID_NX2_5706, + quirk_brcm_570x_limit_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, + PCI_DEVICE_ID_NX2_5706S, + quirk_brcm_570x_limit_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, + PCI_DEVICE_ID_NX2_5708, + quirk_brcm_570x_limit_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, + PCI_DEVICE_ID_NX2_5708S, + quirk_brcm_570x_limit_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, + PCI_DEVICE_ID_NX2_5709, + quirk_brcm_570x_limit_vpd); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_BROADCOM, + PCI_DEVICE_ID_NX2_5709S, + quirk_brcm_570x_limit_vpd); + +static void quirk_chelsio_extend_vpd(struct pci_dev *dev) +{ + int chip = (dev->device & 0xf000) >> 12; + int func = (dev->device & 0x0f00) >> 8; + int prod = (dev->device & 0x00ff) >> 0; + + /* + * If this is a T3-based adapter, there's a 1KB VPD area at offset + * 0xc00 which contains the preferred VPD values. If this is a T4 or + * later based adapter, the special VPD is at offset 0x400 for the + * Physical Functions (the SR-IOV Virtual Functions have no VPD + * Capabilities). The PCI VPD Access core routines will normally + * compute the size of the VPD by parsing the VPD Data Structure at + * offset 0x000. This will result in silent failures when attempting + * to access these other VPD areas which are beyond those computed + * limits. + */ + if (chip == 0x0 && prod >= 0x20) + pci_set_vpd_size(dev, 8192); + else if (chip >= 0x4 && func < 0x8) + pci_set_vpd_size(dev, 2048); +} + +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID, + quirk_chelsio_extend_vpd); + +#endif diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c index 8785014f656e..eba6e33147a2 100644 --- a/drivers/pci/xen-pcifront.c +++ b/drivers/pci/xen-pcifront.c @@ -1,8 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 /* - * Xen PCI Frontend. + * Xen PCI Frontend * - * Author: Ryan Wilson <hap9@epoch.ncsc.mil> + * Author: Ryan Wilson <hap9@epoch.ncsc.mil> */ #include <linux/module.h> #include <linux/init.h>
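
Editorial note (not part of the commit above): the vpd.c hunk keeps the existing consumer-facing interfaces, pci_read_vpd()/pci_write_vpd()/pci_set_vpd_size() and the pci_vpd_find_tag()/pci_vpd_find_info_keyword() helpers. As a minimal sketch of how a driver might combine them to fetch the read-only "PN" (part number) keyword, assuming normal kernel driver context; the function name and the 256-byte buffer are illustrative choices, not anything defined by this patch:

#include <linux/errno.h>
#include <linux/pci.h>
#include <linux/string.h>

static int example_vpd_read_partno(struct pci_dev *pdev, char *out, size_t outlen)
{
	u8 buf[256];
	ssize_t len;
	int ro, kw;
	u16 ro_len;
	u8 kw_len;

	/* pci_vpd_read() clamps the request to the size found by pci_vpd_size() */
	len = pci_read_vpd(pdev, 0, sizeof(buf), buf);
	if (len <= 0)
		return len ? (int)len : -ENODEV;

	/* Locate the Read-Only Data large resource tag */
	ro = pci_vpd_find_tag(buf, 0, len, PCI_VPD_LRDT_RO_DATA);
	if (ro < 0)
		return ro;

	ro_len = pci_vpd_lrdt_size(&buf[ro]);
	ro += PCI_VPD_LRDT_TAG_SIZE;
	if (ro + ro_len > len)
		ro_len = len - ro;

	/* Look up the "PN" keyword inside the read-only section */
	kw = pci_vpd_find_info_keyword(buf, ro, ro_len, PCI_VPD_RO_KEYWORD_PARTNO);
	if (kw < 0)
		return kw;

	kw_len = pci_vpd_info_field_size(&buf[kw]);
	kw += PCI_VPD_INFO_FLD_HDR_SIZE;
	if (kw + kw_len > len || kw_len + 1 > outlen)
		return -EINVAL;

	memcpy(out, &buf[kw], kw_len);
	out[kw_len] = '\0';
	return 0;
}

Reads larger than the size discovered by pci_vpd_size() are simply truncated, and devices covered by the quirks above (zero-length or clamped VPD) will cause the read or the tag search to fail cleanly rather than hang the device.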