author		Dou Liyang <douliyangs@gmail.com>	2018-12-04 23:51:20 +0800
committer	Thomas Gleixner <tglx@linutronix.de>	2018-12-19 11:32:08 +0100
commit		bec04037e4e484f41ee4d9409e40616874169d20 (patch)
tree		5fde4bc9e597250dbdac3a4c0741b13766da8d4e /drivers/pci/msi.c
parent		c2899c3470de04823870b28646981695c0046efe (diff)
download	linux-bec04037e4e484f41ee4d9409e40616874169d20.tar.bz2
genirq/core: Introduce struct irq_affinity_desc
The interrupt affinity management uses straight cpumask pointers to convey
the automatically assigned affinity masks for managed interrupts. The core
interrupt descriptor allocation also decides, based on the pointer being
non-NULL, whether an interrupt is managed or not.
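For illustration, a condensed sketch (paraphrased, not the literal kernel
code) of that convention; IRQD_AFFINITY_MANAGED and IRQD_MANAGED_SHUTDOWN are
the existing genirq flags, the helper name is made up:

	#include <linux/cpumask.h>
	#include <linux/irq.h>

	/* Hypothetical helper mirroring the descriptor allocation logic:
	 * a non-NULL affinity pointer is what makes an interrupt managed. */
	static unsigned int desc_alloc_flags(const struct cpumask *affinity)
	{
		/* Non-NULL pointer => managed interrupt, kept shut down
		 * until one of its assigned CPUs comes online. */
		return affinity ? IRQD_AFFINITY_MANAGED | IRQD_MANAGED_SHUTDOWN : 0;
	}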
Devices which use managed interrupts usually have two classes of
interrupts:
- Interrupts for multiple device queues
- Interrupts for general device management
Currently both classes are treated the same way, i.e. as managed
interrupts. The general interrupts get the default affinity mask assigned
while the device queue interrupts are spread out over the possible CPUs.
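For context, drivers describe that split when allocating vectors via struct
irq_affinity. A hypothetical driver snippet (pdev, min_vecs and max_vecs are
placeholders) reserving the first vector for general device management while
the remaining queue vectors are spread:

	#include <linux/interrupt.h>
	#include <linux/pci.h>

	/* Hypothetical snippet: one pre-vector for device management, the
	 * remaining queue vectors get spread over the possible CPUs.
	 * Before this series, all of them end up flagged as managed. */
	static int example_alloc_vectors(struct pci_dev *pdev,
					 unsigned int min_vecs,
					 unsigned int max_vecs)
	{
		struct irq_affinity affd = { .pre_vectors = 1 };

		return pci_alloc_irq_vectors_affinity(pdev, min_vecs, max_vecs,
						      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
						      &affd);
	}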
Treating the general interrupts as managed is both a limitation and under
certain circumstances a bug. Assume the following situation:
default_irq_affinity = 4..7
So if CPUs 4-7 are offlined, then the core code will shut down the device
management interrupts because the last CPU in their affinity mask went
offline.
It's also a limitation because it's desired to allow manual placement of
the general device interrupts for various reasons. If they are marked
managed then the interrupt affinity setting from both user and kernel space
is disabled.
To remedy that situation it's required to convey more information than the
cpumasks through various interfaces related to interrupt descriptor
allocation.
Instead of adding yet another argument, create a new data structure
'irq_affinity_desc' which for now just contains the cpumask. This struct
can be expanded to convey auxiliary information in the next step.
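Based on this changelog and the '->mask' dereferences in the diff below, the
new wrapper is essentially just (sketch; the declaration itself is not part
of the hunk shown here):

	/* New wrapper around the bare cpumask; later patches can add
	 * further per-vector properties (e.g. managed/unmanaged) here. */
	struct irq_affinity_desc {
		struct cpumask	mask;
	};

Passing an array of these instead of bare cpumasks means future properties
can be attached without changing the allocation interfaces yet again.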
No functional change, just preparatory work.
[ tglx: Simplified logic and clarified changelog ]
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Suggested-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Dou Liyang <douliyangs@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-pci@vger.kernel.org
Cc: kashyap.desai@broadcom.com
Cc: shivasharan.srikanteshwara@broadcom.com
Cc: sumit.saxena@broadcom.com
Cc: ming.lei@redhat.com
Cc: hch@lst.de
Cc: douliyang1@huawei.com
Link: https://lkml.kernel.org/r/20181204155122.6327-2-douliyangs@gmail.com
Diffstat (limited to 'drivers/pci/msi.c')
-rw-r--r--	drivers/pci/msi.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 265ed3e4c920..7a1c8a09efa5 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -534,14 +534,13 @@ error_attrs:
 static struct msi_desc *
 msi_setup_entry(struct pci_dev *dev, int nvec, const struct irq_affinity *affd)
 {
-	struct cpumask *masks = NULL;
+	struct irq_affinity_desc *masks = NULL;
 	struct msi_desc *entry;
 	u16 control;
 
 	if (affd)
 		masks = irq_create_affinity_masks(nvec, affd);
 
-
 	/* MSI Entry Initialization */
 	entry = alloc_msi_entry(&dev->dev, nvec, masks);
 	if (!entry)
@@ -672,7 +671,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
 			      struct msix_entry *entries, int nvec,
 			      const struct irq_affinity *affd)
 {
-	struct cpumask *curmsk, *masks = NULL;
+	struct irq_affinity_desc *curmsk, *masks = NULL;
 	struct msi_desc *entry;
 	int ret, i;
 
@@ -1264,7 +1263,7 @@ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
 
 		for_each_pci_msi_entry(entry, dev) {
 			if (i == nr)
-				return entry->affinity;
+				return &entry->affinity->mask;
 			i++;
 		}
 		WARN_ON_ONCE(1);
@@ -1276,7 +1275,7 @@ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
 			 nr >= entry->nvec_used))
 			return NULL;
 
-		return &entry->affinity[nr];
+		return &entry->affinity[nr].mask;
 	} else {
 		return cpu_possible_mask;
 	}
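Note that pci_irq_get_affinity() still returns a plain 'const struct cpumask *',
so existing callers are unaffected by the switch to struct irq_affinity_desc.
A hypothetical caller, for illustration:

	#include <linux/pci.h>

	/* Hypothetical driver snippet: unchanged by this patch because the
	 * returned type is still a bare cpumask pointer. */
	static void example_log_affinity(struct pci_dev *pdev, int vec)
	{
		const struct cpumask *mask = pci_irq_get_affinity(pdev, vec);

		if (mask)
			dev_info(&pdev->dev, "vector %d affinity: %*pbl\n",
				 vec, cpumask_pr_args(mask));
	}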