path: root/drivers/iommu/dma-iommu.c
Age | Commit message | Author | Files | Lines
2020-10-15 | Merge tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 1 | -6/+40

Pull dma-mapping updates from Christoph Hellwig:

 - rework the non-coherent DMA allocator
 - move private definitions out of <linux/dma-mapping.h>
 - lower CMA_ALIGNMENT (Paul Cercueil)
 - remove the omap1 dma address translation in favor of the common code
 - make dma-direct aware of multiple dma offset ranges (Jim Quinlan)
 - support per-node DMA CMA areas (Barry Song)
 - increase the default seg boundary limit (Nicolin Chen)
 - misc fixes (Robin Murphy, Thomas Tai, Xu Wang)
 - various cleanups

* tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping: (63 commits)
  ARM/ixp4xx: add a missing include of dma-map-ops.h
  dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling
  dma-direct: factor out a dma_direct_alloc_from_pool helper
  dma-direct: check for highmem pages in dma_direct_alloc_pages
  dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h>
  dma-mapping: move large parts of <linux/dma-direct.h> to kernel/dma
  dma-mapping: move dma-debug.h to kernel/dma/
  dma-mapping: remove <asm/dma-contiguous.h>
  dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h>
  dma-contiguous: remove dma_contiguous_set_default
  dma-contiguous: remove dev_set_cma_area
  dma-contiguous: remove dma_declare_contiguous
  dma-mapping: split <linux/dma-mapping.h>
  cma: decrease CMA_ALIGNMENT lower limit to 2
  firewire-ohci: use dma_alloc_pages
  dma-iommu: implement ->alloc_noncoherent
  dma-mapping: add new {alloc,free}_noncoherent dma_map_ops methods
  dma-mapping: add a new dma_alloc_pages API
  dma-mapping: remove dma_cache_sync
  53c700: convert to dma_alloc_noncoherent
  ...
2020-10-06 | dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h> | Christoph Hellwig | 1 | -1/+0

Move more nitty-gritty DMA implementation details into the common internal header.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-10-06 | dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h> | Christoph Hellwig | 1 | -1/+0

Merge dma-contiguous.h into dma-map-ops.h, after moving the comment describing the contiguous allocator into kernel/dma/contiguous.c.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-10-06 | dma-mapping: split <linux/dma-mapping.h> | Christoph Hellwig | 1 | -0/+1

Split out all the bits that are purely for dma_map_ops implementations and related code into a new <linux/dma-map-ops.h> header so that they don't get pulled into all the drivers. That also means the architecture specific <asm/dma-mapping.h> is not pulled in by <linux/dma-mapping.h> any more, which leads to missing includes that were previously pulled in by the x86 or arm versions in a few not overly portable drivers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-09-25 | dma-iommu: implement ->alloc_noncoherent | Christoph Hellwig | 1 | -4/+37

Implement the alloc_noncoherent method to provide memory that is neither coherent nor contiguous.

Signed-off-by: Christoph Hellwig <hch@lst.de>
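The driver-side call pattern for the new allocator looks roughly like the sketch below; the "dev" pointer and the 64 KiB size are illustrative only, and explicit sync calls are needed since the memory is not coherent:

    dma_addr_t dma_handle;
    void *vaddr;

    /* sketch: allocate CPU-cached, device-addressable memory */
    vaddr = dma_alloc_noncoherent(dev, SZ_64K, &dma_handle,
                                  DMA_BIDIRECTIONAL, GFP_KERNEL);
    if (!vaddr)
            return -ENOMEM;

    /* flush CPU writes before handing the buffer to the device */
    dma_sync_single_for_device(dev, dma_handle, SZ_64K, DMA_BIDIRECTIONAL);

    /* ... run the DMA ... */

    dma_free_noncoherent(dev, SZ_64K, vaddr, dma_handle, DMA_BIDIRECTIONAL);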
2020-09-25 | dma-mapping: add a new dma_alloc_pages API | Christoph Hellwig | 1 | -0/+2

This API is the equivalent of alloc_pages, except that the returned memory is guaranteed to be DMA addressable by the passed-in device. The implementation will also be used to provide a more sensible replacement for the DMA_ATTR_NON_CONSISTENT flag. Additionally, dma_alloc_noncoherent is switched over to use dma_alloc_pages as its backend.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> (MIPS part)
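A sketch of how a driver might use the page-level API (the "dev" pointer and size are illustrative):

    dma_addr_t dma_handle;
    struct page *page;

    /* sketch: pages guaranteed to be DMA addressable by "dev" */
    page = dma_alloc_pages(dev, SZ_16K, &dma_handle, DMA_TO_DEVICE,
                           GFP_KERNEL);
    if (!page)
            return -ENOMEM;

    /* ... fill page_address(page), sync, kick off the transfer ... */

    dma_free_pages(dev, SZ_16K, page, dma_handle, DMA_TO_DEVICE);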
2020-09-18 | iommu/dma: Handle init_iova_flush_queue() failure in dma-iommu path | Tom Murphy | 1 | -2/+5

The init_iova_flush_queue() function can fail if we run out of memory. Fall back to the noflush queue if it fails.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20200910122539.3662-1-murphyt7@tcd.ie
Signed-off-by: Joerg Roedel <jroedel@suse.de>
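The shape of the fallback is roughly the following sketch: on allocation failure the domain simply stays in strict (immediate-invalidation) mode rather than recording a flush-queue domain.

    /* sketch: only switch to deferred flushing if the queue exists */
    if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all, NULL))
            pr_warn("iova flush queue initialization failed\n");
    else
            cookie->fq_domain = domain;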
2020-09-04 | iommu/dma: Remove broken huge page handling | Robin Murphy | 1 | -8/+5

The attempt to handle huge page allocations was originally added since the comments around stripping __GFP_COMP in other implementations were nonsensical, and we naively assumed that split_huge_page() could simply be called equivalently to split_page(). It turns out that this doesn't actually work correctly, so just get rid of it - there's little point going to the effort of allocating huge pages if we're only going to split them anyway.

Reported-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/e287dbe69aa0933abafd97c80631940fd188ddd1.1599132844.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-04 | iommu: Rename iommu_tlb_* functions to iommu_iotlb_* | Tom Murphy | 1 | -1/+1

To keep naming consistent, we should stick with *iotlb*. This patch renames a few remaining functions.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Link: https://lore.kernel.org/r/20200817210051.13546-1-murphyt7@tcd.ie
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-08-14 | dma-pool: fix coherent pool allocations for IOMMU mappings | Christoph Hellwig | 1 | -2/+2

When allocating coherent pool memory for an IOMMU mapping we don't care about the DMA mask. Move the guess for the initial GFP mask into dma_direct_alloc_pages and pass dma_coherent_ok as a function pointer argument so that it doesn't get applied to the IOMMU case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Amit Pundir <amit.pundir@linaro.org>
2020-04-20 | dma-pool: add additional coherent pools to map to gfp mask | David Rientjes | 1 | -2/+3

The single atomic pool is allocated from the lowest zone possible since it is guaranteed to be applicable for any DMA allocation. Devices may allocate through the DMA API but not have a strict reliance on GFP_DMA memory. Since the atomic pool will be used for all non-blockable allocations, returning all memory from ZONE_DMA may unnecessarily deplete the zone.

Provision for multiple atomic pools that will map to the optimal gfp mask of the device. When allocating non-blockable memory, determine the optimal gfp mask of the device and use the appropriate atomic pool.

The coherent DMA mask will remain the same between allocation and free and, thus, memory will be freed to the same atomic pool it was allocated from.

__dma_atomic_pool_init() will be changed to return struct gen_pool * later once dynamic expansion is added.

Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
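A sketch of the selection logic, assuming per-zone pools named atomic_pool_dma, atomic_pool_dma32 and atomic_pool_kernel:

    /* sketch: map the device's optimal gfp mask to one of the pools */
    static struct gen_pool *dev_to_pool(struct device *dev)
    {
            u64 phys_mask;
            gfp_t gfp;

            gfp = dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
                                              &phys_mask);
            if (IS_ENABLED(CONFIG_ZONE_DMA) && gfp == GFP_DMA)
                    return atomic_pool_dma;
            if (IS_ENABLED(CONFIG_ZONE_DMA32) && gfp == GFP_DMA32)
                    return atomic_pool_dma32;
            return atomic_pool_kernel;
    }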
2020-03-04 | iommu/dma: Fix MSI reservation allocation | Marc Zyngier | 1 | -8/+8

The way cookie_init_hw_msi_region() allocates the iommu_dma_msi_page structures doesn't match the way iommu_put_dma_cookie() frees them.

The former performs a single allocation of all the required structures, while the latter tries to free them one at a time. It doesn't quite work for the main use case (the GICv3 ITS where the range is 64kB) when the base granule size is 4kB.

This leads to a nice slab corruption on teardown, which is easily observable by simply creating a VF on a SRIOV-capable device, and tearing it down immediately (no need to even make use of it). Fortunately, this only affects systems where the ITS isn't translated by the SMMU, which are both rare and non-standard.

Fix it by allocating iommu_dma_msi_page structures one at a time.

Fixes: 7c1b058c8b5a3 ("iommu/dma: Handle IOMMU API reserved regions")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: stable@vger.kernel.org
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
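A sketch of the fixed loop, allocating one tracking structure per granule so that teardown's per-entry kfree() matches the allocation scheme:

    /* sketch: one iommu_dma_msi_page per granule, not one big array */
    start -= iova_offset(iovad, start);
    num_pages = iova_align(iovad, end - start) >> iova_shift(iovad);

    for (i = 0; i < num_pages; i++) {
            msi_page = kmalloc(sizeof(*msi_page), GFP_KERNEL);
            if (!msi_page)
                    return -ENOMEM;

            msi_page->phys = start;
            msi_page->iova = start;
            INIT_LIST_HEAD(&msi_page->list);
            list_add(&msi_page->list, &cookie->msi_page_list);
            start += iovad->granule;
    }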
2020-01-07 | iommu/dma: fix variable 'cookie' set but not used | Qian Cai | 1 | -3/+0

The commit c18647900ec8 ("iommu/dma: Relax locking in iommu_dma_prepare_msi()") introduced a compilation warning,

    drivers/iommu/dma-iommu.c: In function 'iommu_dma_prepare_msi':
    drivers/iommu/dma-iommu.c:1206:27: warning: variable 'cookie' set but not used [-Wunused-but-set-variable]
      struct iommu_dma_cookie *cookie;
                               ^~~~~~

Fixes: c18647900ec8 ("iommu/dma: Relax locking in iommu_dma_prepare_msi()")
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-18 | iommu/dma: Relax locking in iommu_dma_prepare_msi() | Robin Murphy | 1 | -9/+8

Since commit ece6e6f0218b ("iommu/dma-iommu: Split iommu_dma_map_msi_msg() in two parts"), iommu_dma_prepare_msi() should no longer have to worry about preempting itself, nor being called in atomic context at all. Thus we can downgrade the IRQ-safe locking to a simple mutex to avoid angering the new might_sleep() check in iommu_map().

Reported-by: Qian Cai <cai@lca.pw>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-17 | iommu/dma: Rationalise types for DMA masks | Robin Murphy | 1 | -3/+3

Since iommu_dma_alloc_iova() combines incoming masks with the u64 bus limit, it makes more sense to pass them around in their native u64 rather than converting to dma_addr_t early. Do that, and resolve the remaining type discrepancy against the domain geometry with a cheeky cast to keep things simple.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Nathan Chancellor <natechancellor@gmail.com> # build
Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-12-02 | Merge tag 'iommu-updates-v5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu | Linus Torvalds | 1 | -8/+35

Pull iommu updates from Joerg Roedel:

 - Conversion of the AMD IOMMU driver to use the dma-iommu code for implementing the DMA-API. This gets rid of quite some code in the driver itself, but also has some potential for regressions (none are known at the moment).
 - Support for the Qualcomm SMMUv2 implementation in the SDM845 SoC. This also includes some firmware interface changes, but those are acked by the respective maintainers.
 - Preparatory work to support two distinct page-tables per domain in the ARM-SMMU driver
 - Power management improvements for the ARM SMMUv2
 - Custom PASID allocator support
 - Multiple PCI DMA alias support for the AMD IOMMU driver
 - Adaptation of the Mediatek driver to the changed IO/TLB flush interface of the IOMMU core code.
 - Preparatory patches for the Renesas IOMMU driver to support future hardware.

* tag 'iommu-updates-v5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (62 commits)
  iommu/rockchip: Don't provoke WARN for harmless IRQs
  iommu/vt-d: Turn off translations at shutdown
  iommu/vt-d: Check VT-d RMRR region in BIOS is reported as reserved
  iommu/arm-smmu: Remove duplicate error message
  iommu/arm-smmu-v3: Don't display an error when IRQ lines are missing
  iommu/ipmmu-vmsa: Add utlb_offset_base
  iommu/ipmmu-vmsa: Add helper functions for "uTLB" registers
  iommu/ipmmu-vmsa: Calculate context registers' offset instead of a macro
  iommu/ipmmu-vmsa: Add helper functions for MMU "context" registers
  iommu/ipmmu-vmsa: tidyup register definitions
  iommu/ipmmu-vmsa: Remove all unused register definitions
  iommu/mediatek: Reduce the tlb flush timeout value
  iommu/mediatek: Get rid of the pgtlock
  iommu/mediatek: Move the tlb_sync into tlb_flush
  iommu/mediatek: Delete the leaf in the tlb_flush
  iommu/mediatek: Use gather to achieve the tlb range flush
  iommu/mediatek: Add a new tlb_lock for tlb_flush
  iommu/mediatek: Correct the flush_iotlb_all callback
  iommu/io-pgtable-arm: Rename IOMMU_QCOM_SYS_CACHE and improve doc
  iommu/io-pgtable-arm: Rationalise MAIR handling
  ...
2019-11-21 | dma-mapping: treat dev->bus_dma_mask as a DMA limit | Nicolas Saenz Julienne | 1 | -2/+1

Using a mask to represent bus DMA constraints has a set of limitations, the biggest one being that it can only hold a power of two (minus one). The DMA mapping code is already aware of this and treats dev->bus_dma_mask as a limit. This quirk is already used by some architectures, although still rarely.

With the introduction of the Raspberry Pi 4 we've found a new contender for the use of bus DMA limits, as its PCIe bus can only address the lower 3GB of memory (of a total of 4GB). This is impossible to represent with a mask. To make things worse, the device-tree code rounds non power of two bus DMA limits to the next power of two, which is unacceptable in this case.

In the light of this, rename dev->bus_dma_mask to dev->bus_dma_limit all over the tree and treat it as such. Note that dev->bus_dma_limit should contain the highest accessible DMA address.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
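To illustrate the representability problem (values here are illustrative only):

    /* a 3 GB bus limit cannot be a DMA mask: masks are 2^n - 1 */
    u64 limit = 0xbfffffffULL;           /* highest bus-addressable byte */
    bool ok = is_power_of_2(limit + 1);  /* false: 0xc0000000 != 2^n */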
2019-11-20 | dma-mapping: drop the dev argument to arch_sync_dma_for_* | Christoph Hellwig | 1 | -5/+5

These are pure cache maintenance routines, so drop the unused struct device argument.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2019-10-15 | iommu/dma-iommu: Use the dev->coherent_dma_mask | Tom Murphy | 1 | -5/+7

Use the dev->coherent_dma_mask when allocating in the dma-iommu ops API.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 | iommu/dma-iommu: Handle deferred devices | Tom Murphy | 1 | -1/+26

Handle devices which defer their attach to the IOMMU in the dma-iommu API.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-10-15 | iommu: Add gfp parameter to iommu_ops::map | Tom Murphy | 1 | -3/+3

Add a gfp_t parameter to the iommu_ops::map function. Remove the needless locking in the AMD iommu driver.

The iommu_ops::map function (or the iommu_map function which calls it) was always supposed to be sleepable (according to Joerg's comment in this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so should probably have had a "might_sleep()" since it was written.

However, currently the dma-iommu api can call iommu_map in an atomic context, which it shouldn't do. This doesn't cause any problems because any iommu driver which uses the dma-iommu api uses GFP_ATOMIC in its iommu_ops::map function. But doing this wastes the memory allocator's atomic pools.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
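The changed callback ends up looking like this (sketch; other iommu_ops members omitted):

    /* iommu_ops::map now takes the gfp flags for page-table allocations */
    int (*map)(struct iommu_domain *domain, unsigned long iova,
               phys_addr_t paddr, size_t size, int prot, gfp_t gfp);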
2019-09-19 | Merge tag 'dma-mapping-5.4' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 1 | -15/+14

Pull dma-mapping updates from Christoph Hellwig:

 - add dma-mapping and block layer helpers to take care of IOMMU merging for mmc plus subsequent fixups (Yoshihiro Shimoda)
 - rework handling of the pgprot bits for remapping (me)
 - take care of the dma direct infrastructure for swiotlb-xen (me)
 - improve the dma noncoherent remapping infrastructure (me)
 - better defaults for ->mmap, ->get_sgtable and ->get_required_mask (me)
 - cleanup mmaping of coherent DMA allocations (me)
 - various misc cleanups (Andy Shevchenko, me)

* tag 'dma-mapping-5.4' of git://git.infradead.org/users/hch/dma-mapping: (41 commits)
  mmc: renesas_sdhi_internal_dmac: Add MMC_CAP2_MERGE_CAPABLE
  mmc: queue: Fix bigger segments usage
  arm64: use asm-generic/dma-mapping.h
  swiotlb-xen: merge xen_unmap_single into xen_swiotlb_unmap_page
  swiotlb-xen: simplify cache maintainance
  swiotlb-xen: use the same foreign page check everywhere
  swiotlb-xen: remove xen_swiotlb_dma_mmap and xen_swiotlb_dma_get_sgtable
  xen: remove the exports for xen_{create,destroy}_contiguous_region
  xen/arm: remove xen_dma_ops
  xen/arm: simplify dma_cache_maint
  xen/arm: use dev_is_dma_coherent
  xen/arm: consolidate page-coherent.h
  xen/arm: use dma-noncoherent.h calls for xen-swiotlb cache maintainance
  arm: remove wrappers for the generic dma remap helpers
  dma-mapping: introduce a dma_common_find_pages helper
  dma-mapping: always use VM_DMA_COHERENT for generic DMA remap
  vmalloc: lift the arm flag for coherent mappings to common code
  dma-mapping: provide a better default ->get_required_mask
  dma-mapping: remove the dma_declare_coherent_memory export
  remoteproc: don't allow modular build
  ...
2019-09-11 | Merge branches 'arm/omap', 'arm/exynos', 'arm/smmu', 'arm/mediatek', 'arm/qcom', 'arm/renesas', 'x86/amd', 'x86/vt-d' and 'core' into next | Joerg Roedel | 1 | -3/+10
2019-09-04 | dma-mapping: introduce a dma_common_find_pages helper | Christoph Hellwig | 1 | -12/+3

A helper to find the backing page array based on a virtual address. This also ensures we do the same vm_flags check everywhere instead of slightly different or missing ones in a few places.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-09-04 | dma-mapping: always use VM_DMA_COHERENT for generic DMA remap | Christoph Hellwig | 1 | -3/+3

Currently the generic dma remap allocator gets a vm_flags passed by the caller that is a little confusing. We just introduced a generic vmalloc-level flag to identify the dma coherent allocations, so use that everywhere and remove the now pointless argument.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-09-03 | iommu/dma: add a new dma_map_ops of get_merge_boundary() | Yoshihiro Shimoda | 1 | -0/+8

This patch adds a new dma_map_ops callback, get_merge_boundary(), to expose the DMA merge boundary if the domain type is IOMMU_DOMAIN_DMA.

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
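The dma-iommu implementation is along the lines of this sketch: the merge boundary is the smallest IOMMU page size minus one.

    static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
    {
            struct iommu_domain *domain = iommu_get_dma_domain(dev);

            /* lowest set bit of pgsize_bitmap = minimum IOMMU page size */
            return (1UL << __ffs(domain->pgsize_bitmap)) - 1;
    }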
2019-08-30 | iommu/dma: Fix for dereferencing before null checking | Yunsheng Lin | 1 | -1/+3

The cookie is dereferenced before the null check in iommu_dma_init_domain(). This patch moves the dereference after the null check.

Fixes: fdbe574eb693 ("iommu/dma: Allow MSI-only cookies")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30 | Merge branch 'arm/smmu' into arm/mediatek | Joerg Roedel | 1 | -2/+7
2019-08-23 | Merge branch 'for-joerg/arm-smmu/updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into arm/smmu | Joerg Roedel | 1 | -2/+7
2019-08-21 | dma-direct: fix zone selection after an unaddressable CMA allocation | Christoph Hellwig | 1 | -0/+3

The new dma_alloc_contiguous hides whether we allocate CMA or regular pages, and thus fails to retry a ZONE_NORMAL allocation if the CMA allocation succeeds but isn't addressable. That means we either fail outright or dip into a small zone that might not succeed either.

Thanks to Hillf Danton for debugging this issue.

Fixes: b1d2dc009dec ("dma-contiguous: add dma_{alloc,free}_contiguous() helpers")
Reported-by: Tobias Klausmann <tobias.johannes.klausmann@mni.thm.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Tobias Klausmann <tobias.johannes.klausmann@mni.thm.de>
2019-08-14 | Merge tag 'dma-mapping-5.3-4' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 1 | -3/+3

Pull dma-mapping fixes from Christoph Hellwig:

 - fix the handling of the bus_dma_mask in dma_get_required_mask, which caused a regression in this merge window (Lucas Stach)
 - fix a regression in the handling of DMA_ATTR_NO_KERNEL_MAPPING (me)
 - fix dma_mmap_coherent to not cause page attribute mismatches on coherent architectures like x86 (me)

* tag 'dma-mapping-5.3-4' of git://git.infradead.org/users/hch/dma-mapping:
  dma-mapping: fix page attributes for dma_mmap_*
  dma-direct: don't truncate dma_required_mask to bus addressing capabilities
  dma-direct: fix DMA_ATTR_NO_KERNEL_MAPPING
2019-08-10 | dma-mapping: fix page attributes for dma_mmap_* | Christoph Hellwig | 1 | -3/+3

All the way back to introducing dma_common_mmap we've defaulted to marking the pages as uncached. But this is wrong for DMA coherent devices. Later on, DMA_ATTR_WRITE_COMBINE also got incorrect treatment, as that flag is only treated specially on the alloc side for non-coherent devices.

Introduce a new dma_pgprot helper that deals with the check for coherent devices so that only the remapping cases ever reach arch_dma_mmap_pgprot and we thus ensure no aliasing of page attributes happens, which makes the powerpc version of arch_dma_mmap_pgprot obsolete and simplifies the remaining ones.

Note that this means arch_dma_mmap_pgprot is a bit misnamed now, but we'll phase it out soon.

Fixes: 64ccc9c033c6 ("common: dma-mapping: add support for generic dma_mmap_* calls")
Reported-by: Shawn Anastasio <shawn@anastas.io>
Reported-by: Gavin Li <git@thegavinli.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> # arm64
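A simplified sketch of the helper's logic (the real version also handles the DMA_ATTR_NON_CONSISTENT case):

    pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
    {
            /* coherent devices keep normal cacheable attributes */
            if (dev_is_dma_coherent(dev))
                    return prot;
            if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_MMAP_PGPROT))
                    return arch_dma_mmap_pgprot(dev, prot, attrs);
            return pgprot_noncached(prot);
    }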
2019-08-09 | iommu/dma: Handle SG length overflow better | Robin Murphy | 1 | -1/+1

Since scatterlist dimensions are all unsigned ints, in the relatively rare cases where a device's max_segment_size is set to UINT_MAX, the "cur_len + s_length <= max_len" check in __finalise_sg() will always return true. As a result, the corner case of such a device mapping an excessively large scatterlist which is mergeable to or beyond a total length of 4GB can lead to overflow and a bogus truncated dma_length in the resulting segment. As we already assume that any single segment must be no longer than max_len to begin with, this can easily be addressed by reshuffling the comparison.

Fixes: 809eac54cdd6 ("iommu/dma: Implement scatterlist segment merging")
Reported-by: Nicolin Chen <nicoleotsuka@gmail.com>
Tested-by: Nicolin Chen <nicoleotsuka@gmail.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
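The reshuffled comparison avoids the wrap-around; a sketch of the idea:

    /* "cur_len + s_length <= max_len" can wrap when max_len == UINT_MAX;
     * since cur_len <= max_len already holds, subtract instead */
    if (s_length <= max_len - cur_len) {
            cur_len += s_length;    /* safe: cannot overflow */
    }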
2019-08-06 | iommu/dma: Handle MSI mappings separately | Robin Murphy | 1 | -7/+10

MSI pages must always be mapped into a device's *current* domain, which *might* be the default DMA domain, but might instead be a VFIO domain with its own MSI cookie. This subtlety got accidentally lost in the streamlining of __iommu_dma_map(), but rather than reintroduce more complexity and/or special-casing, it turns out neater to just split this path out entirely.

Since iommu_dma_get_msi_page() already duplicates much of what __iommu_dma_map() does, it can easily just make the allocation and mapping calls directly as well. That way we can further streamline the helper back to exclusively operating on DMA domains.

Fixes: b61d271e59d7 ("iommu/dma: Move domain lookup into __iommu_dma_{map,unmap}")
Reported-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reported-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Andre Przywara <andre.przywara@arm.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-07-24 | iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes | Will Deacon | 1 | -2/+7

To permit batching of TLB flushes across multiple calls to the IOMMU driver's ->unmap() implementation, introduce a new structure for tracking the address range to be flushed and the granularity at which the flushing is required. This is hooked into the IOMMU API and its callers are updated to make use of the new structure. Subsequent patches will plumb this into the IOMMU drivers as well, but for now the gathering information is ignored.

Signed-off-by: Will Deacon <will@kernel.org>
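The structure introduced by this change is essentially:

    struct iommu_iotlb_gather {
            unsigned long   start;
            unsigned long   end;    /* address range to be invalidated */
            size_t          pgsize; /* granularity of the pending unmaps */
    };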
2019-07-12 | Merge tag 'dma-mapping-5.3' of git://git.infradead.org/users/hch/dma-mapping | Linus Torvalds | 1 | -10/+4

Pull dma-mapping updates from Christoph Hellwig:

 - move the USB special case that bounced DMA through a device bar into the USB code instead of handling it in the common DMA code (Laurentiu Tudor and Fredrik Noring)
 - don't dip into the global CMA pool for single page allocations (Nicolin Chen)
 - fix a crash when allocating memory for the atomic pool failed during boot (Florian Fainelli)
 - move support for MIPS-style uncached segments to the common code and use that for MIPS and nios2 (me)
 - make support for DMA_ATTR_NON_CONSISTENT and DMA_ATTR_NO_KERNEL_MAPPING generic (me)
 - convert nds32 to the generic remapping allocator (me)

* tag 'dma-mapping-5.3' of git://git.infradead.org/users/hch/dma-mapping: (29 commits)
  dma-mapping: mark dma_alloc_need_uncached as __always_inline
  MIPS: only select ARCH_HAS_UNCACHED_SEGMENT for non-coherent platforms
  usb: host: Fix excessive alignment restriction for local memory allocations
  lib/genalloc.c: Add algorithm, align and zeroed family of DMA allocators
  nios2: use the generic uncached segment support in dma-direct
  nds32: use the generic remapping allocator for coherent DMA allocations
  arc: use the generic remapping allocator for coherent DMA allocations
  dma-direct: handle DMA_ATTR_NO_KERNEL_MAPPING in common code
  dma-direct: handle DMA_ATTR_NON_CONSISTENT in common code
  dma-mapping: add a dma_alloc_need_uncached helper
  openrisc: remove the partial DMA_ATTR_NON_CONSISTENT support
  arc: remove the partial DMA_ATTR_NON_CONSISTENT support
  arm-nommu: remove the partial DMA_ATTR_NON_CONSISTENT support
  ARM: dma-mapping: allow larger DMA mask than supported
  dma-mapping: truncate dma masks to what dma_addr_t can hold
  iommu/dma: Apply dma_{alloc,free}_contiguous functions
  dma-remap: Avoid de-referencing NULL atomic_pool
  MIPS: use the generic uncached segment support in dma-direct
  dma-direct: provide generic support for uncached kernel segments
  au1100fb: fix DMA API abuse
  ...
2019-07-04 | Merge branches 'x86/vt-d', 'x86/amd', 'arm/smmu', 'arm/omap', 'generic-dma-ops' and 'core' into next | Joerg Roedel | 1 | -102/+369
2019-06-24 | Merge tag 'v5.2-rc6' into generic-dma-ops | Joerg Roedel | 1 | -1/+1

Linux 5.2-rc6
2019-06-19 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234 | Thomas Gleixner | 1 | -12/+1

Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 503 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-18 | iommu: Fix integer truncation | Arnd Bergmann | 1 | -2/+2

On 32-bit architectures, phys_addr_t may be different from dma_addr_t, both smaller and bigger. This can lead to an overflow during an assignment that clang warns about:

    drivers/iommu/dma-iommu.c:230:10: error: implicit conversion from 'dma_addr_t' (aka 'unsigned long long') to 'phys_addr_t' (aka 'unsigned int') changes value from 18446744073709551615 to 4294967295 [-Werror,-Wconstant-conversion]

Use phys_addr_t here because that is the type that the variable was declared as.

Fixes: aadad097cd46 ("iommu/dma: Reserve IOVA for PCIe inaccessible DMA address")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
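A minimal illustration of the truncation clang flags here:

    /* on 32-bit, phys_addr_t may be 32 bits while dma_addr_t is 64 */
    dma_addr_t limit = ~(dma_addr_t)0;  /* 0xffffffffffffffff */
    phys_addr_t end  = limit;           /* silently truncated to 0xffffffff */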
2019-06-14 | iommu/dma: Apply dma_{alloc,free}_contiguous functions | Nicolin Chen | 1 | -10/+4

This patch replaces dma_{alloc,release}_from_contiguous() with dma_{alloc,free}_contiguous() to simplify those function calls.

Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
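Usage of the combined helpers, as a sketch (CMA is tried internally, with a fallback to the page allocator):

    struct page *page = dma_alloc_contiguous(dev, size, gfp);
    if (!page)
            return NULL;
    /* ... map and use the pages ... */
    dma_free_contiguous(dev, page, size);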
2019-06-03 | iommu/dma: Fix condition check in iommu_dma_unmap_sg | Nathan Chancellor | 1 | -1/+1

Clang warns:

    drivers/iommu/dma-iommu.c:897:6: warning: logical not is only applied to the left hand side of this comparison [-Wlogical-not-parentheses]
            if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
                ^                                 ~~
    drivers/iommu/dma-iommu.c:897:6: note: add parentheses after the '!' to evaluate the comparison first
            if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
                ^
                 (                                )
    drivers/iommu/dma-iommu.c:897:6: note: add parentheses around left hand side expression to silence this warning
            if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
                ^
                (                                )
    1 warning generated.

Judging from the rest of the commit and the conditional in iommu_dma_map_sg, either

    if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))

or

    if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)

was intended, not a combination of the two. I personally think that the former is easier to understand, so use that.

Fixes: 06d60728ff5c ("iommu/dma: move the arm64 wrappers to common code")
Link: https://github.com/ClangBuiltLinux/linux/issues/497
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27 | iommu/dma: Switch copyright boilerplate to SPDX | Christoph Hellwig | 1 | -12/+1

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27 | iommu/dma: Don't depend on CONFIG_DMA_DIRECT_REMAP | Christoph Hellwig | 1 | -7/+9

For entirely dma coherent architectures there is no requirement to ever remap dma coherent allocations. Move all the remap and pool code under IS_ENABLED() checks and drop the Kconfig dependency.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
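The pattern, sketched with a hypothetical "coherent" flag: the remap code stays compiled everywhere (so it keeps building), but is statically unreachable when the option is off, letting the compiler discard it without any #ifdef.

    if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !coherent) {
            /* non-coherent device: remap / atomic-pool path */
    } else {
            /* coherent architectures: plain cacheable allocation */
    }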
2019-05-27 | iommu/dma: Refactor iommu_dma_mmap | Christoph Hellwig | 1 | -35/+11

Inline __iommu_dma_mmap_pfn into the main function, and use the fact that __iommu_dma_get_pages returns NULL for remapped contiguous allocations to simplify the code flow a bit.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27 | iommu/dma: Refactor iommu_dma_get_sgtable | Christoph Hellwig | 1 | -28/+17

Inline __iommu_dma_get_sgtable_page into the main function, and use the fact that __iommu_dma_get_pages returns NULL for remapped contiguous allocations to simplify the code flow a bit.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27 | iommu/dma: Refactor iommu_dma_alloc, part 2 | Christoph Hellwig | 1 | -30/+35

All the logic in iommu_dma_alloc that deals with page allocation from the CMA or page allocators can be split into a self-contained helper, and we can then map the result of that or the atomic pool allocation with the iommu later. This also allows reusing __iommu_dma_free to tear down the allocations and MMU mappings when the IOMMU mapping fails.

Based on a patch from Robin Murphy.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27 | iommu/dma: Cleanup variable naming in iommu_dma_alloc | Robin Murphy | 1 | -23/+22

Most importantly, clear up the size / iosize confusion. Also rename addr to cpu_addr to match the surrounding code and make the intention a little more clear.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27 | iommu/dma: Split iommu_dma_free | Robin Murphy | 1 | -4/+8

Most of it can double up to serve the failure cleanup path for iommu_dma_alloc().

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-05-27 | iommu/dma: Merge the CMA and alloc_pages allocation paths | Christoph Hellwig | 1 | -20/+12
Instead of having a separate code path for the non-blocking alloc_pages and CMA allocations paths merge them into one. There is a slight behavior change here in that we try the page allocator if CMA fails. This matches what dma-direct and other iommu drivers do and will be needed to use the dma-iommu code on architectures without DMA remapping later on. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>