Age | Commit message | Author | Files | Lines |
|
To allow IOMMU drivers to batch up TLB flushing operations and postpone
them until ->iotlb_sync() is called, extend the prototypes for the
->unmap() and ->iotlb_sync() IOMMU ops callbacks to take a pointer to
the current iommu_iotlb_gather structure.
All affected IOMMU drivers are updated, but there should be no
functional change since the extra parameter is ignored for now.
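As a reference for the shape of the change, here is a sketch of the two
extended struct iommu_ops callbacks (illustrative, matching the description
above rather than quoting the exact hunk):

  size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
                  size_t size, struct iommu_iotlb_gather *iotlb_gather);
  void (*iotlb_sync)(struct iommu_domain *domain,
                     struct iommu_iotlb_gather *iotlb_gather);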
Signed-off-by: Will Deacon <will@kernel.org>
|
|
'generic-dma-ops' and 'core' into next
|
|
Revert "iommu/vt-d: Fix lock inversion between iommu->lock and device_domain_lock"
This reverts commit 7560cc3ca7d9d11555f80c830544e463fcdb28b8.
With 5.2.0-rc5 I can easily trigger this with lockdep and iommu=pt:
======================================================
WARNING: possible circular locking dependency detected
5.2.0-rc5 #78 Not tainted
------------------------------------------------------
swapper/0/1 is trying to acquire lock:
00000000ea2b3beb (&(&iommu->lock)->rlock){+.+.}, at: domain_context_mapping_one+0xa5/0x4e0
but task is already holding lock:
00000000a681907b (device_domain_lock){....}, at: domain_context_mapping_one+0x8d/0x4e0
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (device_domain_lock){....}:
_raw_spin_lock_irqsave+0x3c/0x50
dmar_insert_one_dev_info+0xbb/0x510
domain_add_dev_info+0x50/0x90
dev_prepare_static_identity_mapping+0x30/0x68
intel_iommu_init+0xddd/0x1422
pci_iommu_init+0x16/0x3f
do_one_initcall+0x5d/0x2b4
kernel_init_freeable+0x218/0x2c1
kernel_init+0xa/0x100
ret_from_fork+0x3a/0x50
-> #0 (&(&iommu->lock)->rlock){+.+.}:
lock_acquire+0x9e/0x170
_raw_spin_lock+0x25/0x30
domain_context_mapping_one+0xa5/0x4e0
pci_for_each_dma_alias+0x30/0x140
dmar_insert_one_dev_info+0x3b2/0x510
domain_add_dev_info+0x50/0x90
dev_prepare_static_identity_mapping+0x30/0x68
intel_iommu_init+0xddd/0x1422
pci_iommu_init+0x16/0x3f
do_one_initcall+0x5d/0x2b4
kernel_init_freeable+0x218/0x2c1
kernel_init+0xa/0x100
ret_from_fork+0x3a/0x50
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(device_domain_lock);
                               lock(&(&iommu->lock)->rlock);
                               lock(device_domain_lock);
  lock(&(&iommu->lock)->rlock);
*** DEADLOCK ***
2 locks held by swapper/0/1:
#0: 00000000033eb13d (dmar_global_lock){++++}, at: intel_iommu_init+0x1e0/0x1422
#1: 00000000a681907b (device_domain_lock){....}, at: domain_context_mapping_one+0x8d/0x4e0
stack backtrace:
CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.2.0-rc5 #78
Hardware name: LENOVO 20KGS35G01/20KGS35G01, BIOS N23ET50W (1.25 ) 06/25/2018
Call Trace:
dump_stack+0x85/0xc0
print_circular_bug.cold.57+0x15c/0x195
__lock_acquire+0x152a/0x1710
lock_acquire+0x9e/0x170
? domain_context_mapping_one+0xa5/0x4e0
_raw_spin_lock+0x25/0x30
? domain_context_mapping_one+0xa5/0x4e0
domain_context_mapping_one+0xa5/0x4e0
? domain_context_mapping_one+0x4e0/0x4e0
pci_for_each_dma_alias+0x30/0x140
dmar_insert_one_dev_info+0x3b2/0x510
domain_add_dev_info+0x50/0x90
dev_prepare_static_identity_mapping+0x30/0x68
intel_iommu_init+0xddd/0x1422
? printk+0x58/0x6f
? lockdep_hardirqs_on+0xf0/0x180
? do_early_param+0x8e/0x8e
? e820__memblock_setup+0x63/0x63
pci_iommu_init+0x16/0x3f
do_one_initcall+0x5d/0x2b4
? do_early_param+0x8e/0x8e
? rcu_read_lock_sched_held+0x55/0x60
? do_early_param+0x8e/0x8e
kernel_init_freeable+0x218/0x2c1
? rest_init+0x230/0x230
kernel_init+0xa/0x100
ret_from_fork+0x3a/0x50
domain_context_mapping_one() takes device_domain_lock first and then the
iommu lock, while dmar_insert_one_dev_info() does the reverse. That was
introduced by commit:
7560cc3ca7d9 ("iommu/vt-d: Fix lock inversion between iommu->lock and
device_domain_lock", 2019-05-27)
So far I still cannot figure out how the previous deadlock was triggered
(I cannot find the iommu lock being taken before the call to
iommu_flush_dev_iotlb()). However, that change is clearly incomplete: it
does not fix all the places, so we are still taking the locks in different
orders. Reverting it is the cleaner option, so that we always take
device_domain_lock first and then the iommu lock.
We can continue looking for the real culprit mentioned in 7560cc3ca7d9,
but for now we should revert it to fix the current breakage.
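As a minimal sketch of the ordering the revert restores (the function name
is illustrative, not the actual kernel hunk), every path nests the two
locks the same way:

  static void example_update_context(struct intel_iommu *iommu)
  {
          unsigned long flags;

          /* Consistent nesting: device_domain_lock outer, iommu->lock inner. */
          spin_lock_irqsave(&device_domain_lock, flags);
          spin_lock(&iommu->lock);

          /* ... update context/device-domain state here ... */

          spin_unlock(&iommu->lock);
          spin_unlock_irqrestore(&device_domain_lock, flags);
  }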
CC: Joerg Roedel <joro@8bytes.org>
CC: Lu Baolu <baolu.lu@linux.intel.com>
CC: dave.jiang@intel.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The commit "iommu/vt-d: Probe DMA-capable ACPI name space devices"
introduced a compilation warning due to the "iommu" variable in
for_each_active_iommu() but never used the for each element, i.e,
"drhd->iommu".
drivers/iommu/intel-iommu.c: In function 'probe_acpi_namespace_devices':
drivers/iommu/intel-iommu.c:4639:22: warning: variable 'iommu' set but
not used [-Wunused-but-set-variable]
struct intel_iommu *iommu;
Silence the warning the same way as in commit d3ed71e5cc50
("drivers/iommu/intel-iommu.c: fix variable 'iommu' set but not used").
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The linux-next commit "iommu/vt-d: Duplicate iommu_resv_region objects
per device list" [1] left behind an unused variable:
drivers/iommu/intel-iommu.c: In function 'dmar_parse_one_rmrr':
drivers/iommu/intel-iommu.c:4014:9: warning: variable 'length' set but
not used [-Wunused-but-set-variable]
[1] https://lore.kernel.org/patchwork/patch/1083073/
Signed-off-by: Qian Cai <cai@lca.pw>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Merge tag 'iommu-fixes-v5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull iommu fixes from Joerg Roedel:
- three fixes for Intel VT-d: a potential deadlock fix, a formatting
fix and a bit-setting fix
- one fix for the ARM-SMMU to make it work on some platforms with
sub-optimal SMMU emulation
* tag 'iommu-fixes-v5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
iommu/arm-smmu: Avoid constant zero in TLBI writes
iommu/vt-d: Set the right field for Page Walk Snoop
iommu/vt-d: Fix lock inversion between iommu->lock and device_domain_lock
iommu: Add missing new line for dma type
|
|
The domain_init() and md_domain_init() do almost the same job.
Consolidate them to avoid duplication.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
[No functional changes]
1. Starting with commit df4f3c603aeb ("iommu/vt-d: Remove static identity
map code") there are no callers of iommu_prepare_rmrr_dev(), but the
implementation still exists, so remove it. As a ripple effect, also remove
get_domain_for_dev() and iommu_prepare_identity_map() because they aren't
used either.
2. Remove extra blank lines in a couple of places.
Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The drhd and device scope lists should be iterated with the IOMMU
global lock held. Otherwise, a suspicious RCU usage message is
displayed.
[ 3.695886] =============================
[ 3.695917] WARNING: suspicious RCU usage
[ 3.695950] 5.2.0-rc2+ #2467 Not tainted
[ 3.695981] -----------------------------
[ 3.696014] drivers/iommu/intel-iommu.c:4569 suspicious rcu_dereference_check() usage!
[ 3.696069]
other info that might help us debug this:
[ 3.696126]
rcu_scheduler_active = 2, debug_locks = 1
[ 3.696173] no locks held by swapper/0/1.
[ 3.696204]
stack backtrace:
[ 3.696241] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.2.0-rc2+ #2467
[ 3.696370] Call Trace:
[ 3.696404] dump_stack+0x85/0xcb
[ 3.696441] intel_iommu_init+0x128c/0x13ce
[ 3.696478] ? kmem_cache_free+0x16b/0x2c0
[ 3.696516] ? __fput+0x14b/0x270
[ 3.696550] ? __call_rcu+0xb7/0x300
[ 3.696583] ? get_max_files+0x10/0x10
[ 3.696631] ? set_debug_rodata+0x11/0x11
[ 3.696668] ? e820__memblock_setup+0x60/0x60
[ 3.696704] ? pci_iommu_init+0x16/0x3f
[ 3.696737] ? set_debug_rodata+0x11/0x11
[ 3.696770] pci_iommu_init+0x16/0x3f
[ 3.696805] do_one_initcall+0x5d/0x2e4
[ 3.696844] ? set_debug_rodata+0x11/0x11
[ 3.696880] ? rcu_read_lock_sched_held+0x6b/0x80
[ 3.696924] kernel_init_freeable+0x1f0/0x27c
[ 3.696961] ? rest_init+0x260/0x260
[ 3.696997] kernel_init+0xa/0x110
[ 3.697028] ret_from_fork+0x3a/0x50
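A minimal sketch of the intended usage (the function name is illustrative):
hold the global rwsem across the RCU-protected drhd list walk so lockdep's
rcu_dereference_check() is satisfied:

  static void walk_active_iommus(void)
  {
          struct dmar_drhd_unit *drhd;
          struct intel_iommu *iommu;

          down_read(&dmar_global_lock);
          for_each_active_iommu(iommu, drhd) {
                  /* look at drhd->devices / iommu state here */
          }
          up_read(&dmar_global_lock);
  }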
Fixes: fa212a97f3a36 ("iommu/vt-d: Probe DMA-capable ACPI name space devices")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
We don't allow a device to be assigned to user level when it is locked
by any RMRRs. Hence, intel_iommu_attach_device() returns an error if a
domain of type IOMMU_DOMAIN_UNMANAGED is about to be attached to a
device locked by an RMRR. This restriction should not apply to domains
of other types, though. Add a domain-type check to fix this.
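A sketch of the kind of check described; the exact condition and message
are assumptions based on this description, not the literal hunk:

  /* In intel_iommu_attach_device(): only refuse user-level (UNMANAGED)
   * domains for RMRR-locked devices; other domain types may attach. */
  if (domain->type == IOMMU_DOMAIN_UNMANAGED &&
      device_is_rmrr_locked(dev)) {
          dev_warn(dev, "Device is ineligible for IOMMU domain attach due to platform RMRR requirement.\n");
          return -EPERM;
  }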
Fixes: fa954e6831789 ("iommu/vt-d: Delegate the dma domain to upper layer")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reported-and-tested-by: Qian Cai <cai@lca.pw>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The iommu driver will ignore some iommu units if there is no device
under their scope, or if those devices have been explicitly set to
bypass DMA translation. Don't enable those iommu units; otherwise,
the devices under their scope won't work.
Fixes: d8190dc638866 ("iommu/vt-d: Enable DMA remapping after rmrr mapped")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Otherwise, domain_get_iommu() will be broken.
Fixes: 942067f1b6b97 ("iommu/vt-d: Identify default domains replaced with private")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
If a device gets the right domain in the add_device ops, it shouldn't
return an error.
Fixes: 942067f1b6b97 ("iommu/vt-d: Identify default domains replaced with private")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Now that we have the new IOMMU_RESV_DIRECT_RELAXABLE reserved memory
region type, report USB and GFX RMRRs as relaxable ones.
Introduce a new device_rmrr_is_relaxable() helper to check whether an
RMRR belongs to the relaxable category.
This allows finer-grained reporting of reserved memory regions at the
IOMMU API level. VFIO can exploit this to define the usable IOVA range
and detect potential conflicts between the guest physical address space
and host reserved regions.
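A sketch of the helper as described; the USB/GFX checks reuse the driver's
existing IS_USB_DEVICE()/IS_GFX_DEVICE() macros and the body is
illustrative:

  static bool device_rmrr_is_relaxable(struct device *dev)
  {
          struct pci_dev *pdev;

          if (!dev_is_pci(dev))
                  return false;

          pdev = to_pci_dev(dev);
          /* USB and GFX RMRRs only matter for boot-time use, so they can
           * be reported as relaxable direct regions. */
          return IS_USB_DEVICE(pdev) || IS_GFX_DEVICE(pdev);
  }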
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
In case the RMRR device scope is a PCI-PCI bridge, check whether the
device belongs to its PCI sub-hierarchy.
Fixes: 0659b8dc45a6 ("iommu/vt-d: Implement reserved region get/put callbacks")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
When reading the VT-d specification, and especially the Reserved Memory
Region Reporting Structure chapter, it is not obvious that a device
scope element cannot be a PCI-PCI bridge, in which case all downstream
ports are likely to access the reserved memory region. Handle this case
in device_has_rmrr().
Fixes: ea2447f700ca ("intel-iommu: Prevent devices with RMRRs from being placed into SI Domain")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Several call sites are about to check whether a device belongs
to the PCI sub-hierarchy of a candidate PCI-PCI bridge.
Introduce a helper to perform that check.
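A sketch of such a helper: the device is considered downstream if its bus
number falls within the bridge's subordinate bus range (the helper name and
field usage are assumptions based on the description):

  static bool is_downstream_to_pci_bridge(struct device *dev,
                                          struct device *bridge)
  {
          struct pci_dev *pdev, *pbridge;

          if (!dev_is_pci(dev) || !dev_is_pci(bridge))
                  return false;

          pdev = to_pci_dev(dev);
          pbridge = to_pci_dev(bridge);

          return pbridge->subordinate &&
                 pbridge->subordinate->number <= pdev->bus->number &&
                 pbridge->subordinate->busn_res.end >= pdev->bus->number;
  }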
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
intel_iommu_get_resv_regions() aims to return the list of
reserved regions accessible by a given @device. However several
devices can access the same reserved memory region and when
building the list it is not safe to use a single iommu_resv_region
object, whose container is the RMRR. This iommu_resv_region must
be duplicated per device reserved region list.
Let's remove the struct iommu_resv_region from the RMRR unit
and allocate the iommu_resv_region directly in
intel_iommu_get_resv_regions(). We hold the dmar_global_lock instead
of the rcu-lock to allow sleeping.
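A sketch of the per-device duplication, assuming the 5.2-era
iommu_alloc_resv_region(start, length, prot, type) signature and the rmrr
unit's base_address/end_address fields:

  /* Inside intel_iommu_get_resv_regions(), for each matching RMRR: */
  struct iommu_resv_region *resv;
  size_t length = rmrr->end_address - rmrr->base_address + 1;

  resv = iommu_alloc_resv_region(rmrr->base_address, length,
                                 IOMMU_READ | IOMMU_WRITE,
                                 IOMMU_RESV_DIRECT);
  if (!resv)
          break;

  list_add_tail(&resv->list, head);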
Fixes: 0659b8dc45a6 ("iommu/vt-d: Implement reserved region get/put callbacks")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms and conditions of the gnu general public license
version 2 as published by the free software foundation this program
is distributed in the hope it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 263 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190529141901.208660670@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
The commit "iommu/vt-d: Delegate the dma domain to upper layer" left an
unused variable,
drivers/iommu/intel-iommu.c: In function 'disable_dmar_iommu':
drivers/iommu/intel-iommu.c:1652:23: warning: variable 'domain' set but
not used [-Wunused-but-set-variable]
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Commit cf04eee8bf0e ("iommu/vt-d: Include ACPI devices in iommu=pt")
added for_each_active_iommu() in iommu_prepare_static_identity_mapping(),
but the loop never uses each element, i.e., "drhd->iommu".
drivers/iommu/intel-iommu.c: In function
'iommu_prepare_static_identity_mapping':
drivers/iommu/intel-iommu.c:3037:22: warning: variable 'iommu' set but
not used [-Wunused-but-set-variable]
struct intel_iommu *iommu;
Fix the warning by appending the compiler attribute __maybe_unused to it.
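A minimal, hypothetical illustration of the pattern (not the actual hunk);
__maybe_unused tells the compiler the variable may be set but never read,
which silences -Wunused-but-set-variable:

  static int count_active_iommus(void)
  {
          struct dmar_drhd_unit *drhd;
          struct intel_iommu *iommu __maybe_unused;
          int num = 0;

          /* Only drhd matters here; iommu is set by the macro but unused.
           * (Locking around the list walk is elided for brevity.) */
          for_each_active_iommu(iommu, drhd)
                  num++;

          return num;
  }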
Link: http://lkml.kernel.org/r/20190523013314.2732-1-cai@lca.pw
Signed-off-by: Qian Cai <cai@lca.pw>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The code that prepares the static identity map for various reserved
memory ranges in intel_iommu_init() now duplicates the default domain
mechanism. Remove it to avoid the duplication.
Signed-off-by: James Sewart <jamessewart@arista.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The generic iommu code already handles the device hotplug cases.
Remove the duplicated code.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
It isn't used anywhere. Remove it to make the code more concise.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Previously, get_valid_domain_for_dev() was used to retrieve the DMA
domain attached to the device, or to allocate one if no domain had been
attached yet. As we have delegated DMA domain management to the upper
layer, this function is now used purely to allocate a private DMA
domain if the default domain doesn't work for this device. Clean up the
code for readability.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
As a domain is now attached to a device earlier, we should implement
the is_attach_deferred call-back and use it to defer the domain attach
from iommu driver init to device driver init when the IOMMU is
pre-enabled in a kdump kernel.
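A sketch of what the call-back amounts to; the sentinel used to mark a
deferred device is an assumption here, not the literal hunk:

  static bool intel_iommu_is_attach_deferred(struct iommu_domain *domain,
                                             struct device *dev)
  {
          /* Attach was deferred at iommu init because the IOMMU came up
           * pre-enabled (kdump); finish it at device-driver bind time. */
          return dev->archdata.iommu == DEFER_DEVICE_DOMAIN_INFO;
  }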
Suggested-by: Tom Murphy <tmurphy@arista.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Some platforms may support ACPI name-space enumerated devices that are
capable of generating DMA requests. Platforms which support DMA
remapping explicitly declare any such DMA-capable ACPI name-space
devices through the ACPI Name-space Device Declaration (ANDD) structure
and enumerate them through the Device Scope of the appropriate
remapping hardware unit.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The iommu driver doesn't know whether the DMA address width of a PCI
device is sufficient to access the whole of system memory. Hence, the
driver checks this when it calls into the DMA APIs. If a device is
using an identity domain but its address width is less than what the
system requires, we need to use a DMA domain instead. This still
applies after we delegated domain life-cycle management to the upper
layer.
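A sketch of the kind of check involved; the helper name is illustrative
and not the actual function:

  static bool device_needs_dma_domain(struct device *dev)
  {
          u64 dma_mask = min_not_zero(*dev->dma_mask, dev->coherent_dma_mask);

          /* If the device cannot reach all of system memory through the
           * identity (pass-through) mapping, use a DMA domain instead. */
          return dma_mask < dma_get_required_mask(dev);
  }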
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
When we put a device into an iommu group, the group's default domain
will be attached to the device. There are some corner cases where the
type (identity or dma) of the default domain doesn't work for the
device and the request for a new default domain fails (e.g. multiple
devices already exist in the group). For compatibility with the past,
we use a private domain in those cases. Mark the private domains and
disallow some iommu APIs (map/unmap/iova_to_phys) on them.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
This allows the iommu generic layer to allocate a dma domain and attach
it to a device through the iommu APIs. With all types of domains being
delegated to the upper layer, we can remove an internal flag which was
used to distinguish domains managed internally from those managed
externally.
Signed-off-by: James Sewart <jamessewart@arista.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
This allows the iommu generic layer to allocate an identity domain
and attach it to a device. Hence, the identity domain is delegated
to the upper layer. As a side effect, iommu_identity_mapping can't be
used to check the existence of identity domains any more.
Signed-off-by: James Sewart <jamessewart@arista.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
This helper returns the default domain type that the device
requires.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
RMRR devices require an identity map of the RMRR regions before DMA
remapping is enabled. Otherwise, there will be a window during which
DMA from/to the RMRR regions is blocked. To alleviate this, move
enabling DMA remapping to after all RMRR regions have been mapped.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
To support mapping the ISA region via
iommu_group_create_direct_mappings(), make sure it's exposed by
iommu_get_resv_regions().
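A sketch of exposing the ISA range (the first 16 MiB) as a direct-mapped
reserved region for ISA bridges; the class check and constants are
assumptions based on this description:

  /* In intel_iommu_get_resv_regions(), for legacy ISA bridges: */
  if (dev_is_pci(device)) {
          struct pci_dev *pdev = to_pci_dev(device);

          if ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA) {
                  struct iommu_resv_region *reg =
                          iommu_alloc_resv_region(0, 1UL << 24, 0,
                                                  IOMMU_RESV_DIRECT);
                  if (reg)
                          list_add_tail(&reg->list, head);
          }
  }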
Signed-off-by: James Sewart <jamessewart@arista.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Used by iommu.c before creating identity mappings for reserved
ranges to ensure dma-ops won't ever remap these ranges.
Signed-off-by: James Sewart <jamessewart@arista.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Lockdep reported a lock inversion related to the iommu code, caused by
dmar_insert_one_dev_info() grabbing the iommu->lock and the
device_domain_lock out of order with respect to the code path in
iommu_flush_dev_iotlb(). Expanding the scope of the iommu->lock and
reversing the order of lock acquisition fixes the issue.
[ 76.238180] dsa_bus wq0.0: dsa wq wq0.0 disabled
[ 76.248706]
[ 76.250486] ========================================================
[ 76.257113] WARNING: possible irq lock inversion dependency detected
[ 76.263736] 5.1.0-rc5+ #162 Not tainted
[ 76.267854] --------------------------------------------------------
[ 76.274485] systemd-journal/521 just changed the state of lock:
[ 76.280685] 0000000055b330f5 (device_domain_lock){..-.}, at: iommu_flush_dev_iotlb.part.63+0x29/0x90
[ 76.290099] but this lock took another, SOFTIRQ-unsafe lock in the past:
[ 76.297093] (&(&iommu->lock)->rlock){+.+.}
[ 76.297094]
[ 76.297094]
[ 76.297094] and interrupts could create inverse lock ordering between them.
[ 76.297094]
[ 76.314257]
[ 76.314257] other info that might help us debug this:
[ 76.321448] Possible interrupt unsafe locking scenario:
[ 76.321448]
[ 76.328907]        CPU0                    CPU1
[ 76.333777]        ----                    ----
[ 76.338642]   lock(&(&iommu->lock)->rlock);
[ 76.343165]                                local_irq_disable();
[ 76.349422]                                lock(device_domain_lock);
[ 76.356116]                                lock(&(&iommu->lock)->rlock);
[ 76.363154]   <Interrupt>
[ 76.366134]     lock(device_domain_lock);
[ 76.370548]
[ 76.370548] *** DEADLOCK ***
Fixes: 745f2586e78e ("iommu/vt-d: Simplify function get_domain_for_dev()")
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
A scalable-mode DMAR table walk involves looking at bits at each stage
of the walk, such as:
1. Is PASID enabled in the context entry?
2. What's the size of PASID directory?
3. Is the PASID directory entry present?
4. Is the PASID table entry present?
5. Number of PASID table entries?
Hence, add these macros that will later be used during this walk.
Apart from adding new macros, move existing macros (like
pasid_pde_is_present(), get_pasid_table_from_pde() and pasid_supported())
to the appropriate header files so that they can be reused.
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Sohil Mehta <sohil.mehta@intel.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
We use RCU for rarely updated lists such as the iommu, rmrr and atsr
units. I'm not sure why domain_remove_dev_info() in domain_exit() was
surrounded by rcu_read_lock. The lock was present before the
refactoring in d160aca527, but it was related to the RCU list, not to
the domain_remove_dev_info() function.
dmar_remove_one_dev_info() doesn't touch any of those lists, so it
doesn't require the lock. In fact, it is called six times without it
anyway.
Fixes: d160aca5276d ("iommu/vt-d: Unify domain->iommu attach/detachment")
Signed-off-by: Lukasz Odzioba <lukasz.odzioba@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Merge tag 'pci-v5.2-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
"Enumeration changes:
- Add _HPX Type 3 settings support, which gives firmware more
influence over device configuration (Alexandru Gagniuc)
- Support fixed bus numbers from bridge Enhanced Allocation
capabilities (Subbaraya Sundeep)
- Add "external-facing" DT property to identify cases where we
require IOMMU protection against untrusted devices (Jean-Philippe
Brucker)
- Enable PCIe services for host controller drivers that use managed
host bridge alloc (Jean-Philippe Brucker)
- Log PCIe port service messages with pci_dev, not the pcie_device
(Frederick Lawler)
- Convert pciehp from pciehp_debug module parameter to generic
dynamic debug (Frederick Lawler)
Peer-to-peer DMA:
- Add whitelist of Root Complexes that support peer-to-peer DMA
between Root Ports (Christian König)
Native controller drivers:
- Add PCI host bridge DMA ranges for bridges that can't DMA
everywhere, e.g., iProc (Srinath Mannam)
- Add Amazon Annapurna Labs PCIe host controller driver (Jonathan
Chocron)
- Fix Tegra MSI target allocation so DMA doesn't generate unwanted
MSIs (Vidya Sagar)
- Fix of_node reference leaks (Wen Yang)
- Fix Hyper-V module unload & device removal issues (Dexuan Cui)
- Cleanup R-Car driver (Marek Vasut)
- Cleanup Keystone driver (Kishon Vijay Abraham I)
- Cleanup i.MX6 driver (Andrey Smirnov)
Significant bug fixes:
- Reset Lenovo ThinkPad P50 GPU so nouveau works after reboot (Lyude
Paul)
- Fix Switchtec firmware update performance issue (Wesley Sheng)
- Work around Pericom switch link retraining erratum (Stefan Mätje)"
* tag 'pci-v5.2-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (141 commits)
MAINTAINERS: Add Karthikeyan Mitran and Hou Zhiqiang for Mobiveil PCI
PCI: pciehp: Remove pointless MY_NAME definition
PCI: pciehp: Remove pointless PCIE_MODULE_NAME definition
PCI: pciehp: Remove unused dbg/err/info/warn() wrappers
PCI: pciehp: Log messages with pci_dev, not pcie_device
PCI: pciehp: Replace pciehp_debug module param with dyndbg
PCI: pciehp: Remove pciehp_debug uses
PCI/AER: Log messages with pci_dev, not pcie_device
PCI/DPC: Log messages with pci_dev, not pcie_device
PCI/PME: Replace dev_printk(KERN_DEBUG) with dev_info()
PCI/AER: Replace dev_printk(KERN_DEBUG) with dev_info()
PCI: Replace dev_printk(KERN_DEBUG) with dev_info(), etc
PCI: Replace printk(KERN_INFO) with pr_info(), etc
PCI: Use dev_printk() when possible
PCI: Cleanup setup-bus.c comments and whitespace
PCI: imx6: Allow asynchronous probing
PCI: dwc: Save root bus for driver remove hooks
PCI: dwc: Use devm_pci_alloc_host_bridge() to simplify code
PCI: dwc: Free MSI in dw_pcie_host_init() error path
PCI: dwc: Free MSI IRQ page in dw_pcie_free_msi()
...
|
|
The kernel parameter igfx_off is used to disable DMA remapping for the
Intel integrated graphics device. It was designed for bare-metal cases
where a dedicated IOMMU is used for graphics. This doesn't apply to the
virtual IOMMU case, where an include-all IOMMU is used. Make the kernel
parameter work with a virtual IOMMU as well.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Fixes: c0771df8d5297 ("intel-iommu: Export a flag indicating that the IOMMU is used for iGFX.")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The intel_iommu_gfx_mapped flag is exported by the Intel IOMMU driver
to indicate whether an IOMMU is used for the graphics device. In a
virtualized IOMMU environment (e.g. QEMU), an include-all IOMMU is used
for the graphics device. This flag is found to be clear even when the
IOMMU is in use.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Reported-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Fixes: c0771df8d5297 ("intel-iommu: Export a flag indicating that the IOMMU is used for iGFX.")
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Replace the whitespaces at the start of a line with tabs. No
functional changes.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Use new helper pci_dev_id() to simplify the code.
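For context, pci_dev_id() folds the bus number and devfn into the 16-bit
requester ID; a before/after sketch of a typical call site:

  /* before */
  u16 sid = PCI_DEVID(pdev->bus->number, pdev->devfn);

  /* after */
  u16 sid = pci_dev_id(pdev);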
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
|
|
Requesting the page request irq under dmar_global_lock could cause a
potential lock race condition (caught by lockdep).
[ 4.100055] ======================================================
[ 4.100063] WARNING: possible circular locking dependency detected
[ 4.100072] 5.1.0-rc4+ #2169 Not tainted
[ 4.100078] ------------------------------------------------------
[ 4.100086] swapper/0/1 is trying to acquire lock:
[ 4.100094] 000000007dcbe3c3 (dmar_lock){+.+.}, at: dmar_alloc_hwirq+0x35/0x140
[ 4.100112] but task is already holding lock:
[ 4.100120] 0000000060bbe946 (dmar_global_lock){++++}, at: intel_iommu_init+0x191/0x1438
[ 4.100136] which lock already depends on the new lock.
[ 4.100146] the existing dependency chain (in reverse order) is:
[ 4.100155]
-> #2 (dmar_global_lock){++++}:
[ 4.100169] down_read+0x44/0xa0
[ 4.100178] intel_irq_remapping_alloc+0xb2/0x7b0
[ 4.100186] mp_irqdomain_alloc+0x9e/0x2e0
[ 4.100195] __irq_domain_alloc_irqs+0x131/0x330
[ 4.100203] alloc_isa_irq_from_domain.isra.4+0x9a/0xd0
[ 4.100212] mp_map_pin_to_irq+0x244/0x310
[ 4.100221] setup_IO_APIC+0x757/0x7ed
[ 4.100229] x86_late_time_init+0x17/0x1c
[ 4.100238] start_kernel+0x425/0x4e3
[ 4.100247] secondary_startup_64+0xa4/0xb0
[ 4.100254]
-> #1 (irq_domain_mutex){+.+.}:
[ 4.100265] __mutex_lock+0x7f/0x9d0
[ 4.100273] __irq_domain_add+0x195/0x2b0
[ 4.100280] irq_domain_create_hierarchy+0x3d/0x40
[ 4.100289] msi_create_irq_domain+0x32/0x110
[ 4.100297] dmar_alloc_hwirq+0x111/0x140
[ 4.100305] dmar_set_interrupt.part.14+0x1a/0x70
[ 4.100314] enable_drhd_fault_handling+0x2c/0x6c
[ 4.100323] apic_bsp_setup+0x75/0x7a
[ 4.100330] x86_late_time_init+0x17/0x1c
[ 4.100338] start_kernel+0x425/0x4e3
[ 4.100346] secondary_startup_64+0xa4/0xb0
[ 4.100352]
-> #0 (dmar_lock){+.+.}:
[ 4.100364] lock_acquire+0xb4/0x1c0
[ 4.100372] __mutex_lock+0x7f/0x9d0
[ 4.100379] dmar_alloc_hwirq+0x35/0x140
[ 4.100389] intel_svm_enable_prq+0x61/0x180
[ 4.100397] intel_iommu_init+0x1128/0x1438
[ 4.100406] pci_iommu_init+0x16/0x3f
[ 4.100414] do_one_initcall+0x5d/0x2be
[ 4.100422] kernel_init_freeable+0x1f0/0x27c
[ 4.100431] kernel_init+0xa/0x110
[ 4.100438] ret_from_fork+0x3a/0x50
[ 4.100444]
other info that might help us debug this:
[ 4.100454] Chain exists of:
dmar_lock --> irq_domain_mutex --> dmar_global_lock
[ 4.100469] Possible unsafe locking scenario:
[ 4.100476]        CPU0                    CPU1
[ 4.100483]        ----                    ----
[ 4.100488]   lock(dmar_global_lock);
[ 4.100495]                                lock(irq_domain_mutex);
[ 4.100503]                                lock(dmar_global_lock);
[ 4.100512]   lock(dmar_lock);
[ 4.100518]
*** DEADLOCK ***
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Reported-by: Dave Jiang <dave.jiang@intel.com>
Fixes: a222a7f0bb6c9 ("iommu/vt-d: Implement page request handling")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
By default, for performance reasons, the Intel IOMMU driver won't flush
the IOTLB immediately after a buffer is unmapped. It schedules a thread
and flushes the IOTLB in batched mode. This isn't suitable for an
untrusted device, since such a device can still access the memory even
when it isn't supposed to.
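A minimal sketch of how the decision can be keyed off the device's trust
state (the helper name is illustrative; pdev->untrusted is the existing
flag set for devices behind external-facing ports):

  static bool domain_needs_strict_flush(struct device *dev)
  {
          /* Untrusted devices get an immediate (strict) IOTLB flush
           * instead of the deferred, batched flush. */
          return dev_is_pci(dev) && to_pci_dev(dev)->untrusted;
  }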
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Xu Pengfei <pengfei.xu@intel.com>
Tested-by: Mika Westerberg <mika.westerberg@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
We already do this in the caller.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The intel-iommu driver currently has a partial reimplementation of the
direct mapping code for devices that use pass-through mode. Replace
that code with calls to the relevant dma_direct routines at the highest
level. This means we have exactly the same behavior as the dma-direct
code itself, and can prepare for eventually only attaching the
intel_iommu ops to devices that actually need dynamic iommu mappings.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Invert the return value to avoid double negatives, use a bool
instead of int as the return value, and reduce some indentation
after early returns.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
This adds support for returning the default PASID associated with an
auxiliary domain. The PCI device which is bound to this domain should
use this value as the PASID for all DMA requests from the subset of the
device that is isolated and protected by this domain.
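A sketch of the new aux-domain op; storing the default PASID on the
dmar_domain is assumed from the description above:

  static int intel_iommu_aux_get_pasid(struct iommu_domain *domain,
                                       struct device *dev)
  {
          struct dmar_domain *dmar_domain = to_dmar_domain(domain);

          /* The PASID the device must tag on DMA for this aux domain. */
          return dmar_domain->default_pasid;
  }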
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
When multiple domains per device have been enabled by the device
driver, the device will tag the domain's default PASID onto all DMA
traffic out of the subset of this device, and the IOMMU should
translate the DMA requests at PASID granularity.
This adds the intel_iommu_aux_attach/detach_device() ops to support
managing PASID-granular translation structures when the device driver
has enabled multiple domains per device.
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|