|
'arm/msm', 'arm/rockchip', 'arm/smmu' and 'core' into next
|
|
This new call-back will be used by the iommu driver to
reserve the given dm_region in its iova space before the
mapping is created.
The call-back is temporary until the dma-ops implementation
is part of the common iommu code.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Ida handling can be much simplified by using the ida_simple_.. functions.
This change also fixes a bug: the previous checking of errors
returned by ida_get_new() was incomplete, because ida_get_new() can
return errors other than EAGAIN, e.g. ENOSPC, and that case wasn't
handled.
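As a rough illustration (a sketch of the pattern, not the exact diff),
the open-coded pre-get/retry dance collapses into a single call pair:
static DEFINE_IDA(iommu_group_ida);

/* ida_simple_get() hides the ida_pre_get()/ida_get_new() retry loop
 * and returns either a valid id or a negative errno (-ENOMEM,
 * -ENOSPC, ...), so every error case is handled in one place.
 */
id = ida_simple_get(&iommu_group_ida, 0, 0, GFP_KERNEL);
if (id < 0)
        return ERR_PTR(id);

/* ... and the matching release: */
ida_simple_remove(&iommu_group_ida, id);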
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
iommu_group_ida and iommu_group_mutex can be initialized statically.
There's no need to do this dynamically in the init function.
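For illustration, the static definitions boil down to something like:
/* Static initialization removes the need for run-time init calls
 * in the init function.
 */
static DEFINE_IDA(iommu_group_ida);
static DEFINE_MUTEX(iommu_group_mutex);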
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU updates from Joerg Roedel:
"The updates include:
- rate limiting for the VT-d fault handler
- remove statistics code from the AMD IOMMU driver. It is unused and
should be replaced by something more generic if needed
- per-domain pagesize-bitmaps in IOMMU core code to support systems
with different types of IOMMUs
- support for ACPI devices in the AMD IOMMU driver
- 4GB mode support for Mediatek IOMMU driver
- ARM-SMMU updates from Will Deacon:
- support for 64k pages with SMMUv1 implementations (e.g MMU-401)
- remove open-coded 64-bit MMIO accessors
- initial support for 16-bit VMIDs, as supported by some ThunderX
SMMU implementations
- a couple of errata workarounds for silicon in the field
- various fixes here and there"
* tag 'iommu-updates-v4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (44 commits)
iommu/arm-smmu: Use per-domain page sizes.
iommu/amd: Remove statistics code
iommu/dma: Finish optimising higher-order allocations
iommu: Allow selecting page sizes per domain
iommu: of: enforce const-ness of struct iommu_ops
iommu: remove unused priv field from struct iommu_ops
iommu/dma: Implement scatterlist segment merging
iommu/arm-smmu: Clear cache lock bit of ACR
iommu/arm-smmu: Support SMMUv1 64KB supplement
iommu/arm-smmu: Decouple context format from kernel config
iommu/arm-smmu: Tidy up 64-bit/atomic I/O accesses
io-64-nonatomic: Add relaxed accessor variants
iommu/arm-smmu: Work around MMU-500 prefetch errata
iommu/arm-smmu: Convert ThunderX workaround to new method
iommu/arm-smmu: Differentiate specific implementations
iommu/arm-smmu: Workaround for ThunderX erratum #27704
iommu/arm-smmu: Add support for 16 bit VMID
iommu/amd: Move get_device_id() and friends to beginning of file
iommu/amd: Don't use IS_ERR_VALUE to check integer values
iommu/amd: Signedness bug in acpihid_device_group()
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Refine PCI support check in pcibios_init() (Adrian-Ken Rueegsegger)
- Provide common functions for ECAM mapping (Jayachandran C)
- Allow all PCIe services on non-ACPI host bridges (Jon Derrick)
- Remove return values from pcie_port_platform_notify() and relatives (Jon Derrick)
- Widen portdrv service type from 4 bits to 8 bits (Keith Busch)
- Add Downstream Port Containment portdrv service type (Keith Busch)
- Add Downstream Port Containment driver (Keith Busch)
Resource management:
- Identify Enhanced Allocation (EA) BAR Equivalent resources in sysfs (Alex Williamson)
- Supply CPU physical address (not bus address) to iomem_is_exclusive() (Bjorn Helgaas)
- alpha: Call iomem_is_exclusive() for IORESOURCE_MEM, but not IORESOURCE_IO (Bjorn Helgaas)
- Mark Broadwell-EP Home Agent 1 as having non-compliant BARs (Prarit Bhargava)
- Disable all BAR sizing for devices with non-compliant BARs (Prarit Bhargava)
- Move PCI I/O space management from OF to PCI core code (Tomasz Nowicki)
PCI device hotplug:
- acpiphp_ibm: Avoid uninitialized variable reference (Dan Carpenter)
- Use cached copy of PCI_EXP_SLTCAP_HPC bit (Lukas Wunner)
Virtualization:
- Mark Intel i40e NIC INTx masking as broken (Alex Williamson)
- Reverse standard ACS vs device-specific ACS enabling (Alex Williamson)
- Work around Intel Sunrise Point PCH incorrect ACS capability (Alex Williamson)
IOMMU:
- Add pci_add_dma_alias() to abstract implementation (Bjorn Helgaas)
- Move informational printk to pci_add_dma_alias() (Bjorn Helgaas)
- Add support for multiple DMA aliases (Jacek Lawrynowicz)
- Add DMA alias quirk for mic_x200_dma (Jacek Lawrynowicz)
Thunderbolt:
- Fix double free of drom buffer (Andreas Noever)
- Add Intel Thunderbolt device IDs (Lukas Wunner)
- Fix typos and magic number (Lukas Wunner)
- Support 1st gen Light Ridge controller (Lukas Wunner)
Generic host bridge driver:
- Use generic ECAM API (Jayachandran C)
Cavium ThunderX host bridge driver:
- Don't clobber read-only bits in bridge config registers (David Daney)
- Use generic ECAM API (Jayachandran C)
Freescale i.MX6 host bridge driver:
- Use enum instead of bool for variant indicator (Andrey Smirnov)
- Implement reset sequence for i.MX6+ (Andrey Smirnov)
- Factor out ref clock enable (Bjorn Helgaas)
- Add initial imx6sx support (Christoph Fritz)
- Add reset-gpio-active-high boolean property to DT (Petr Štetiar)
- Add DT property for link gen, default to Gen1 (Tim Harvey)
- dts: Specify imx6qp version of PCIe core (Andrey Smirnov)
- dts: Fix PCIe reset GPIO polarity on Toradex Apalis Ixora (Petr Štetiar)
Marvell Armada host bridge driver:
- add DT binding for Marvell Armada 7K/8K PCIe controller (Thomas Petazzoni)
- Add driver for Marvell Armada 7K/8K PCIe controller (Thomas Petazzoni)
Marvell MVEBU host bridge driver:
- Constify mvebu_pcie_pm_ops structure (Jisheng Zhang)
- Use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS for mvebu_pcie_pm_ops (Jisheng Zhang)
Microsoft Hyper-V host bridge driver:
- Report resources release after stopping the bus (Vitaly Kuznetsov)
- Add explicit barriers to config space access (Vitaly Kuznetsov)
Renesas R-Car host bridge driver:
- Select PCI_MSI_IRQ_DOMAIN (Arnd Bergmann)
Synopsys DesignWare host bridge driver:
- Remove incorrect RC memory base/limit configuration (Gabriele Paoloni)
- Move Root Complex setup code to dw_pcie_setup_rc() (Jisheng Zhang)
TI Keystone host bridge driver:
- Add error IRQ handler (Murali Karicheri)
- Remove unnecessary goto statement (Murali Karicheri)
Miscellaneous:
- Fix spelling errors (Colin Ian King)"
* tag 'pci-v4.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (48 commits)
PCI: Disable all BAR sizing for devices with non-compliant BARs
x86/PCI: Mark Broadwell-EP Home Agent 1 as having non-compliant BARs
PCI: Identify Enhanced Allocation (EA) BAR Equivalent resources in sysfs
PCI, of: Move PCI I/O space management to PCI core code
PCI: generic, thunder: Use generic ECAM API
PCI: Provide common functions for ECAM mapping
PCI: hv: Add explicit barriers to config space access
PCI: Use cached copy of PCI_EXP_SLTCAP_HPC bit
PCI: Add Downstream Port Containment driver
PCI: Add Downstream Port Containment portdrv service type
PCI: Widen portdrv service type from 4 bits to 8 bits
PCI: designware: Remove incorrect RC memory base/limit configuration
PCI: hv: Report resources release after stopping the bus
ARM: dts: imx6qp: Specify imx6qp version of PCIe core
PCI: imx6: Implement reset sequence for i.MX6+
PCI: imx6: Use enum instead of bool for variant indicator
PCI: thunder: Don't clobber read-only bits in bridge config registers
thunderbolt: Fix double free of drom buffer
PCI: rcar: Select PCI_MSI_IRQ_DOMAIN
PCI: armada: Add driver for Marvell Armada 7K/8K PCIe controller
...
|
|
Many IOMMUs support multiple page table formats, meaning that any given
domain may only support a subset of the hardware page sizes presented in
iommu_ops->pgsize_bitmap. There are also certain use-cases where the
creator of a domain may want to control which page sizes are used, for
example to force the use of hugepage mappings to reduce pagetable walk
depth.
To this end, add a per-domain pgsize_bitmap to represent the subset of
page sizes actually in use, to make it possible for domains with
different requirements to coexist.
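A minimal sketch of the idea, assuming the new field starts out as a
copy of the driver-wide bitmap and may then be restricted by the
domain's creator:
/* At domain allocation: inherit the driver-wide capabilities. */
domain->pgsize_bitmap = bus->iommu_ops->pgsize_bitmap;

/* A creator that wants hugepage-only mappings could then restrict
 * the set, e.g. (illustrative only):
 */
domain->pgsize_bitmap &= SZ_2M | SZ_1G;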
Signed-off-by: Will Deacon <will.deacon@arm.com>
[rm: hijacked and rebased original patch with new commit message]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Solve IOMMU support issues with PCIe non-transparent bridges that use
Requester ID look-up tables (RID-LUT), e.g., the PEX8733.
The NTB connects devices in two independent PCI domains. Devices separated
by the NTB are not able to discover each other. A PCI packet being
forwarded from one domain to another has to have its RID modified so it
appears on the correct bus and completions are forwarded back to the
original domain through the NTB. The RID is translated using a
preprogrammed table
(LUT) and the PCI packet propagates upstream away from the NTB. If the
destination system has IOMMU enabled, the packet will be discarded because
the new RID is unknown to the IOMMU. Adding a DMA alias for the new RID
allows IOMMU to properly recognize the packet.
Each device behind the NTB has a unique RID assigned in the RID-LUT. The
current DMA alias implementation supports only a single alias, so it's not
possible to support multiple devices behind the NTB when IOMMU is enabled.
Enable all possible aliases on a given bus (256) that are stored in a
bitset. Alias devfn is directly translated to a bit number. The bitset is
not allocated for devices that have no need for DMA aliases.
More details can be found in the following article:
http://www.plxtech.com/files/pdf/technical/expresslane/RTC_Enabling%20MulitHostSystemDesigns.pdf
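A sketch of the idea behind the bitset (pci_add_dma_alias() is the
interface added in this series; the field name and the consumer loop
here are illustrative rather than the exact implementation):
/* One bit per possible devfn (8-bit value) on the device's bus. */
void pci_add_dma_alias(struct pci_dev *dev, u8 devfn)
{
        if (!dev->dma_alias_mask)
                dev->dma_alias_mask = kcalloc(BITS_TO_LONGS(256),
                                              sizeof(long), GFP_KERNEL);
        if (!dev->dma_alias_mask)
                return;         /* no mask allocated, no alias added */

        set_bit(devfn, dev->dma_alias_mask);
}

/* Consumers such as the IOMMU drivers can then walk every alias: */
for_each_set_bit(devfn, dev->dma_alias_mask, 256)
        handle_alias(dev->bus->number, devfn);  /* illustrative */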
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: David Woodhouse <David.Woodhouse@intel.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
|
|
IOMMU drivers that do not support default domains, but make
use of the group->domain pointer, can get that pointer
overwritten with NULL on device add/remove.
Make sure this can't happen by only overwriting the domain
pointer when it is NULL.
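The guard itself is essentially a one-liner; roughly:
/* Only (re)initialize group->domain when nothing is attached yet,
 * so an existing attachment is never silently overwritten.
 */
if (!group->domain)
        group->domain = group->default_domain;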
Cc: stable@vger.kernel.org # v4.4+
Fixes: 1228236de5f9 ('iommu: Move default domain allocation to iommu_group_get_for_dev()')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Since the iommu_map() code adds the pgsize value to paddr, trace_map()
used the wrong paddr. So, this patch adds an "orig_paddr" value in
iommu_map() to use for the trace_map() call.
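Roughly, the fix keeps copies of the caller's values before the
mapping loop advances them (a sketch, not the literal hunk):
unsigned long orig_iova  = iova;
phys_addr_t   orig_paddr = paddr;
size_t        orig_size  = size;

/* ... the loop below increments iova/paddr and decrements size as
 * it maps page-sized chunks, so only the saved copies are
 * meaningful for tracing afterwards ...
 */

trace_map(orig_iova, orig_paddr, orig_size);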
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
commit db0fa0cb0157 "scatterlist: use sg_phys()" did replacements of
the form:
phys_addr_t phys = page_to_phys(sg_page(s));
becoming:
phys_addr_t phys = sg_phys(s) & PAGE_MASK;
However, this breaks platforms where sizeof(phys_addr_t) >
sizeof(unsigned long). Revert for 4.3 and 4.4 to make room for a
combined helper in 4.5.
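The reason the masked form is broken there (a worked illustration, not
code from the tree): PAGE_MASK has type unsigned long, so on a 32-bit
kernel with 64-bit physical addresses (e.g. ARM LPAE) the AND silently
clears the upper bits:
phys_addr_t   phys = 0x100003000ULL;   /* a page just above 4GB      */
unsigned long mask = 0xfffff000UL;     /* what PAGE_MASK expands to  */
                                       /* with 4K pages on 32-bit    */

phys &= mask;   /* mask is zero-extended to 64 bits first, so the   */
                /* result is 0x3000 -- the top 32 bits are lost     */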
Cc: <stable@vger.kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Fixes: db0fa0cb0157 ("scatterlist: use sg_phys()")
Suggested-by: Joerg Roedel <joro@8bytes.org>
Reported-by: Vitaly Lavrov <vel21ripn@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
|
|
Now that the iommu core support for iommu groups is not
pci-centric anymore, we can move default domain allocation
to the bus independent iommu_group_get_for_dev() function.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
All callers of iommu_group_get_for_dev() provide a
device_group call-back now, so this fall-back is no longer
needed.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
This function can be used as a device_group call-back and
just allocates one iommu-group per device.
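In essence (a sketch of the shape, assuming the call-back receives the
struct device and returns the group to use):
/* One group per device: no attempt is made to share groups based on
 * bus topology or DMA aliasing.
 */
struct iommu_group *generic_device_group(struct device *dev)
{
        return iommu_group_alloc();
}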
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Rename that function to pci_device_group() and export it, so
that IOMMU drivers can use it as their device_group
call-back.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
That call-back is currently unused, change it into a
call-back function for finding the right IOMMU group for a
device.
This is a first step to remove the hard-coded PCI dependency
in the iommu-group code.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Coccinelle cleanup to replace open coded sg to physical address
translations. This is in preparation for introducing scatterlists that
reference __pfn_t.
// sg_phys.cocci: convert usage page_to_phys(sg_page(sg)) to sg_phys(sg)
// usage: make coccicheck COCCI=sg_phys.cocci MODE=patch
virtual patch
@@
struct scatterlist *sg;
@@
- page_to_phys(sg_page(sg)) + sg->offset
+ sg_phys(sg)
@@
struct scatterlist *sg;
@@
- page_to_phys(sg_page(sg))
+ sg_phys(sg) & PAGE_MASK
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
The -ENODEV error just means that the device is not
translated by an IOMMU. We shouldn't bail out of iommu
driver initialization when that happens, as this is a common
scenario on ARM.
Not returning -ENODEV in the drivers would be a bad idea, as
the IOMMU core would have no indication whether a device is
translated or not. This indication is not used at the
moment, but will probably be in the future.
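The resulting behaviour is roughly (a sketch of the relevant part of
add_iommu_group()):
ret = ops->add_device(dev);

/* -ENODEV only means "this device is not translated by the IOMMU";
 * treat it as success so bus initialization keeps going, but keep
 * propagating every real error.
 */
if (ret == -ENODEV)
        ret = 0;

return ret;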
Fixes: 19762d7 ("iommu: Propagate error in add_iommu_group")
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Eric Auger <eric.auger@linaro.org>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
'x86/amd', 'default-domains' and 'core' into next
|
|
The iommu_group_alloc() and iommu_group_get_for_dev()
functions return error pointers, they never return NULL.
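So callers have to test with IS_ERR() rather than a NULL check, along
the lines of:
group = iommu_group_get_for_dev(dev);
if (IS_ERR(group))              /* a NULL check here would never fire */
        return PTR_ERR(group);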
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
This function can be called by an IOMMU driver to request
that a device's default domain is direct mapped.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
This will be used to handle unity mappings in the iommu
drivers.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Use the information exported by the IOMMU drivers to create
direct mapped regions in the default domains.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Add two new functions to the IOMMU-API to allow the IOMMU
drivers to export the requirements for direct mapped regions
per device.
This is useful for exporting the information in Intel VT-d's
RMRR entries or AMD-Vi's unity mappings.
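Conceptually each region is a start address, a length and the
permissions to map it with; a sketch of such a descriptor and the two
entry points (the exact layout is assumed here for illustration):
struct iommu_dm_region {
        struct list_head list;
        dma_addr_t       start;    /* start address of the region   */
        size_t           length;   /* length in bytes               */
        int              prot;     /* IOMMU_READ/IOMMU_WRITE flags  */
};

void iommu_get_dm_regions(struct device *dev, struct list_head *list);
void iommu_put_dm_regions(struct device *dev, struct list_head *list);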
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
This function can be used to request the current domain a
device is attached to.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Make use of the default domain and re-attach a device to it
when it is detached from another domain. Also enforce that a
device has to be in the default domain before it can be
attached to a different domain.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
This patch changes the behavior of the iommu_attach_device
and iommu_detach_device functions. With this change these
functions only work on devices that have their own group.
For all other devices the iommu_group_attach/detach
functions must be used.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The default domain will be used (if supported by the iommu
driver) when the devices in the iommu group are not attached
to any other domain.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Do not remove the device from the IOMMU while the driver is
still attached.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Make sure we call the ->remove_device call-back on all
devices already initialized with ->add_device when the bus
initialization fails.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Make sure any errors reported from the IOMMU drivers get
propagated back to the IOMMU core.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Write a message to the kernel log when a device is added or
removed from a group and add debug messages to group
allocation and release routines.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Including the function name is only useful for debugging
messages. They don't belong in other messages from the
iommu core.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
iommu_group_alloc might be called very early in case of iommu controllers
activated from of_iommu, so ensure that this part of the subsystem is ready
when devices are being populated from the device tree (core_initcall seems to
be okay for this case).
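The change itself boils down to registering the subsystem at an
earlier initcall level, i.e. something like (the init function name
is assumed here):
/* core_initcall() runs early enough that the group infrastructure is
 * ready before devices are populated from the device tree.
 */
core_initcall(iommu_init);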
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
All drivers have been converted to the new domain_alloc and
domain_free iommu-ops. So remove the old ones and get rid of
iommu_domain->priv too, as this is no longer needed when the
struct iommu_domain is embedded in the private structures of
the iommu drivers.
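The embedding pattern the drivers use now looks roughly like this
(names made up for illustration):
struct my_iommu_domain {
        spinlock_t          lock;    /* driver-private state ...    */
        struct iommu_domain domain;  /* generic part, embedded      */
};

static struct my_iommu_domain *to_my_domain(struct iommu_domain *dom)
{
        /* Recover the private structure from the generic pointer. */
        return container_of(dom, struct my_iommu_domain, domain);
}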
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Check for the new __IOMMU_DOMAIN_PAGING flag before calling
into the iommu drivers ->map and ->unmap call-backs.
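A sketch of the shape of the check:
/* Only domains that use the page-table interface can be mapped into
 * explicitly; identity-mapped or blocked domains cannot.
 */
if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
        return -EINVAL;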
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
This allows domains to be handled differently based on their
type in the future. An IOMMU driver can implement certain
optimizations for DMA-API domains, for example.
The domain types can be extended later and some of the
existing domain attributes can be migrated to become domain
flags.
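The type scheme combines a small number of capability flags, roughly:
#define __IOMMU_DOMAIN_PAGING   (1U << 0)  /* domain supports map/unmap */
#define __IOMMU_DOMAIN_DMA_API  (1U << 1)  /* used by the DMA-API       */
#define __IOMMU_DOMAIN_PT       (1U << 2)  /* identity mapped           */

#define IOMMU_DOMAIN_BLOCKED    (0U)
#define IOMMU_DOMAIN_IDENTITY   (__IOMMU_DOMAIN_PT)
#define IOMMU_DOMAIN_UNMANAGED  (__IOMMU_DOMAIN_PAGING)
#define IOMMU_DOMAIN_DMA        (__IOMMU_DOMAIN_PAGING |    \
                                 __IOMMU_DOMAIN_DMA_API)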
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
These new call-backs defer the allocation and destruction of
'struct iommu_domain' to the iommu driver. This allows
drivers to embed this struct into their private domain
structures and to get rid of the domain_init and
domain_destroy call-backs when all drivers have been
converted.
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The AMD address has been dead for a long time already, so replace it
with a working one.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
iommu_map() calls trace_map() with iova and size. trace_map()
should report the original iova and original size as opposed to the
iova and size after they get changed during mapping: size is
always zero at the end of mapping, which is useless to report,
and iova gets incremented, so it is not as useful as the
original iova. Change iommu_map() to call trace_map() with the
original iova and original size.
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Currently map and unmap are implemented as events under a
common trace class declaration. The common class forces
trace_unmap() to require a bogus physical address argument
that it doesn't use. Changing unmap to report the unmapped size
will provide useful information for debugging. Remove the common
map_unmap trace class and turn map and unmap into separate
events, rather than events under the same class, to allow for
differences in the reported information. In addition, map and
unmap are changed to handle the size value as size_t instead of
int, to match the passed size value and avoid overflow.
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Suggested-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
iommu_unmap() calls trace_unmap() with changed iova and original
size. trace_unmap() should report original iova instead. Change
iommu_unmap() to call trace_unmap() with original iova.
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC/iommu configuration update from Arnd Bergmann:
"The iomm-config branch contains work from Will Deacon, quoting his
description:
This series adds automatic IOMMU and DMA-mapping configuration for
OF-based DMA masters described using the generic IOMMU devicetree
bindings. Although there is plenty of future work around splitting up
iommu_ops, adding default IOMMU domains and sorting out automatic IOMMU
group creation for the platform_bus, this is already useful enough for
people to port over their IOMMU drivers and start using the new probing
infrastructure (indeed, Marek has patches queued for the Exynos IOMMU).
The branch touches core ARM and IOMMU driver files, and the respective
maintainers (Russell King and Joerg Roedel) agreed to have the
contents merged through the arm-soc tree.
The final version was ready just before the merge window, so we ended
up delaying it a bit longer than the rest, but we don't expect to see
regressions because this is just additional infrastructure that will
get used in drivers starting in 3.20 but is unused so far"
* tag 'iommu-config-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
iommu: store DT-probed IOMMU data privately
arm: dma-mapping: plumb our iommu mapping ops into arch_setup_dma_ops
arm: call iommu_init before of_platform_populate
dma-mapping: detect and configure IOMMU in of_dma_configure
iommu: fix initialization without 'add_device' callback
iommu: provide helper function to configure an IOMMU for an of master
iommu: add new iommu_ops callback for adding an OF device
dma-mapping: replace set_arch_dma_coherent_ops with arch_setup_dma_ops
iommu: provide early initialisation hook for IOMMU drivers
|
|
If the IOMMU supports pages smaller than the CPU page size, segments
which lie at offsets within the CPU page may be mapped based on the
finer-grained IOMMU page boundaries. This minimises the amount of
non-buffer memory between the CPU page boundary and the start of the
segment which must be mapped and therefore exposed to the device, and
brings the default iommu_map_sg implementation in line with
iommu_map/unmap with respect to alignment.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
IOMMU drivers can be initialized from the of_iommu helpers. Such drivers don't
need to provide add_device callbacks to operate properly, so there is no
need to fail initialization if the callback is missing.
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
When some part of bus_set_iommu fails it should undo any changes it has
made and not simply leave everything as is.
This includes unregistering the bus notifier in iommu_bus_init when
add_iommu_group fails and also setting the bus->iommu_ops back to NULL.
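A sketch of the error path described above (surrounding details
omitted, call arguments illustrative):
err = bus_for_each_dev(bus, NULL, NULL, add_iommu_group);
if (err) {
        /* Roll back so the bus is left exactly as it was before
         * bus_set_iommu() was called.
         */
        bus_unregister_notifier(bus, nb);
        kfree(nb);
        bus->iommu_ops = NULL;
}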
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The IOMMU-API works on page boundaries, unlike the DMA-API
which can work with sub-page buffers. The sg->offset
field does not make sense on the IOMMU level, so force it to
be 0. Do some error-path consolidation while at it.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Mapping and unmapping are more often than not in the critical path.
map_sg allows IOMMU driver implementations to optimize the process
of mapping buffers into the IOMMU page tables.
Instead of mapping a buffer one page at a time and requiring potentially
expensive TLB operations for each page, this function allows the driver
to map all pages in one go and defer TLB maintenance until after all
pages have been mapped.
Additionally, the mapping operation would be faster in general since
clients do not have to keep calling the map API over and over again for
each physically contiguous chunk of memory that needs to be mapped to a
virtually contiguous region.
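For reference, a simplified sketch of what a generic fallback looks
like (the function name here is made up), which a driver's own map_sg
can replace with a single batched operation:
static size_t example_map_sg(struct iommu_domain *domain,
                             unsigned long iova, struct scatterlist *sg,
                             unsigned int nents, int prot)
{
        struct scatterlist *s;
        size_t mapped = 0;
        unsigned int i;

        for_each_sg(sg, s, nents, i) {
                /* One iommu_map() call -- and potentially one TLB
                 * operation -- per segment; exactly the overhead a
                 * driver's own implementation can batch away.
                 */
                if (iommu_map(domain, iova + mapped, sg_phys(s),
                              s->length, prot))
                        goto out_undo;
                mapped += s->length;
        }
        return mapped;

out_undo:
        /* Undo everything mapped so far and report failure. */
        iommu_unmap(domain, iova, mapped);
        return 0;
}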
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
iommu_bus_init() registers a bus notifier on the given bus by using
a statically defined notifier block:
static struct notifier_block iommu_bus_nb = {
.notifier_call = iommu_bus_notifier,
};
This same notifier block is used for all busses. This causes a
problem once the iommu core has registered this callback on multiple
busses: a notifier subsequently registered on a bus which has this
iommu notifier will also get linked in to the notifier lists of all
other busses which have this iommu notifier.
This patch fixes this by allocating the notifier_block at runtime.
Some error checking is also added to catch any allocation failure
or notifier registration error.
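A sketch of the runtime allocation (close to the fix described above,
surrounding code omitted):
struct notifier_block *nb;
int err;

nb = kzalloc(sizeof(*nb), GFP_KERNEL);
if (!nb)
        return -ENOMEM;

nb->notifier_call = iommu_bus_notifier;

/* Each bus now gets its own notifier_block, so registering it here
 * can no longer cross-link the notifier chains of other busses.
 */
err = bus_register_notifier(bus, nb);
if (err) {
        kfree(nb);
        return err;
}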
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
It turns out that our assumption that aliases are always to the same
slot isn't true. One particular platform reports an IVRS alias of the
SATA controller (00:11.0) for the legacy IDE controller (00:14.1).
When we hit this, we attempt to use a single IOMMU group for
everything on the same bus, which in this case is the root complex.
We already have multiple groups defined for the root complex by this
point, resulting in multiple WARN_ON hits.
This patch makes these sorts of aliases work again with IOMMU groups
by reworking how we search through the PCI address space to find
existing groups. This should also now handle looped dependencies and
all sorts of crazy inter-dependencies that we'll likely never see.
The recursion used here should never be very deep. It's unlikely to
have individual aliases and only theoretical that we'd ever see a
chain where one alias causes us to search through to yet another
alias. We're also only dealing with PCIe devices on a single bus,
which means we'll typically only see multiple slots in use on the root
complex. Loops are also a theoretical possibility, which I've
tested using fake DMA alias quirks and prevented from causing problems
using a bitmap of the devfn space that's been visited.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: stable@vger.kernel.org # 3.17
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|