path: root/drivers/dma/idxd
Age | Commit message | Author | Files changed, -/+ lines
2022-12-28 | dmaengine: idxd: Do not call DMA TX callbacks during workqueue disable | Reinette Chatre | 1 file, -0/+11
On driver unload any pending descriptors are flushed and pending DMA descriptors are explicitly completed: idxd_dmaengine_drv_remove() -> drv_disable_wq() -> idxd_wq_free_irq() -> idxd_flush_pending_descs() -> idxd_dma_complete_txd() With this done during driver unload any remaining descriptor is likely stuck and can be dropped. Even so, the descriptor may still have a callback set that could no longer be accessible. An example of such a problem is when dmatest fails and the dmatest module is unloaded. The failure of dmatest leaves descriptors with dma_async_tx_descriptor::callback pointing to code that no longer exists. This causes a page fault as below at the time the IDXD driver is unloaded when it attempts to run the callback: BUG: unable to handle page fault for address: ffffffffc0665190 #PF: supervisor instruction fetch in kernel mode #PF: error_code(0x0010) - not-present page Fix this by clearing the callback pointers on the transmit descriptors only when the workqueue is disabled. Fixes: 403a2e236538 ("dmaengine: idxd: change MSIX allocation based on per wq activation") Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/37d06b772aa7f8863ca50f90930ea2fd80b38fc3.1670452419.git.reinette.chatre@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
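A minimal sketch of the kind of change described above, assuming the idxd descriptor layout (struct idxd_desc embedding a struct dma_async_tx_descriptor as txd); the helper name is hypothetical:

  /* Hypothetical helper: drop stale callbacks on a descriptor that is only
   * being flushed because the WQ is going away, so that completing it
   * cannot call into code (e.g. an unloaded dmatest module) that no
   * longer exists. */
  static void idxd_drop_stale_callbacks(struct idxd_desc *desc)
  {
          struct dma_async_tx_descriptor *tx = &desc->txd;

          tx->callback = NULL;
          tx->callback_result = NULL;
  }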
2022-12-28 | dmaengine: idxd: Prevent use after free on completion memory | Reinette Chatre | 1 file, -1/+1
On driver unload any pending descriptors are flushed at the time the interrupt is freed: idxd_dmaengine_drv_remove() -> drv_disable_wq() -> idxd_wq_free_irq() -> idxd_flush_pending_descs(). If there are any descriptors present that need to be flushed this flow triggers a "not present" page fault as below: BUG: unable to handle page fault for address: ff391c97c70c9040 #PF: supervisor read access in kernel mode #PF: error_code(0x0000) - not-present page The address that triggers the fault is the address of the descriptor that was freed moments earlier via: drv_disable_wq()->idxd_wq_free_resources() Fix the use after free by freeing the descriptors after any possible usage. This is done after idxd_wq_reset() to ensure that the memory remains accessible during possible completion writes by the device. Fixes: 63c14ae6c161 ("dmaengine: idxd: refactor wq driver enable/disable operations") Suggested-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/6c4657d9cff0a0a00501a7b928297ac966e9ec9d.1670452419.git.reinette.chatre@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
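A rough sketch of the teardown ordering this change describes (hedged: the exact set and order of calls in drv_disable_wq() is an assumption, only the placement of the free after the reset is the point):

  static void drv_disable_wq_sketch(struct idxd_wq *wq)
  {
          idxd_wq_free_irq(wq);           /* flushes/completes pending descriptors */
          idxd_wq_unmap_portal(wq);
          idxd_wq_drain(wq);
          idxd_wq_reset(wq);              /* device can no longer write completions */
          idxd_wq_free_resources(wq);     /* only now is it safe to free the memory */
  }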
2022-12-28 | dmaengine: idxd: Let probe fail when workqueue cannot be enabled | Reinette Chatre | 1 file, -2/+1
The workqueue is enabled when the appropriate driver is loaded and disabled when the driver is removed. When the driver is removed it assumes that the workqueue was enabled successfully and proceeds to free allocations made during workqueue enabling. Failure during workqueue enabling does not prevent the driver from being loaded. This is because the error path within drv_enable_wq() returns success unless a second failure is encountered during the error path. By returning success it is possible to load the driver even if the workqueue cannot be enabled, and frees of allocations that were never made are attempted during driver remove. Some examples of problematic flows: (a) idxd_dmaengine_drv_probe() -> drv_enable_wq() -> idxd_wq_request_irq(): In the above flow, if idxd_wq_request_irq() fails then idxd_wq_unmap_portal() is called on the error exit path, but drv_enable_wq() returns 0 because idxd_wq_disable() succeeds. The driver is thus loaded successfully. idxd_dmaengine_drv_remove()->drv_disable_wq()->idxd_wq_unmap_portal() The above flow on driver unload triggers the WARN in devm_iounmap() because the device resource has already been removed during the error path of drv_enable_wq(). (b) idxd_dmaengine_drv_probe() -> drv_enable_wq() -> idxd_wq_request_irq(): In the above flow, if idxd_wq_request_irq() fails then idxd_wq_init_percpu_ref() is never called to initialize the percpu counter, yet the driver loads successfully because drv_enable_wq() returns 0. idxd_dmaengine_drv_remove()->__idxd_wq_quiesce()->percpu_ref_kill(): The above flow on driver unload triggers a BUG when attempting to drop the initial ref of the uninitialized percpu ref: BUG: kernel NULL pointer dereference, address: 0000000000000010 Fix the drv_enable_wq() error path by returning the original error that indicates failure of workqueue enabling. This ensures that the probe fails when an error is encountered and the driver remove paths are only attempted when the workqueue was enabled successfully. Fixes: 1f2bb40337f0 ("dmaengine: idxd: move wq_enable() to device.c") Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/e8d8116e5efa0fd14fadc5adae6ffd319f0e5ff1.1670452419.git.reinette.chatre@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
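A hedged sketch of the error-path fix: the first failure is kept in a local and returned, instead of the return value of the best-effort cleanup. Function names follow the ones mentioned above; the surrounding structure is heavily simplified.

  static int drv_enable_wq_sketch(struct idxd_wq *wq)
  {
          struct device *dev = &wq->idxd->pdev->dev;
          int rc;

          rc = idxd_wq_request_irq(wq);
          if (rc < 0)
                  goto err_irq;
          return 0;

  err_irq:
          idxd_wq_unmap_portal(wq);
          /* best-effort cleanup; its result must not overwrite 'rc' */
          if (idxd_wq_disable(wq, false))
                  dev_dbg(dev, "wq disable failed\n");
          return rc;      /* propagate the original enable failure to probe */
  }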
2022-12-19 | Merge tag 'dmaengine-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine | Linus Torvalds | 2 files, -1/+68
Pull dmaengine updates from Vinod Koul:
 "New support:
   - Qualcomm SDM670, SM6115 and SM6375 GPI controller support
   - Ingenic JZ4755 dmaengine support
   - Removal of iop-adma driver

  Updates:
   - Tegra support for dma-channel-mask
   - at_hdmac cleanup and virt-chan support for this driver"

* tag 'dmaengine-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (46 commits)
  dmaengine: Revert "dmaengine: remove s3c24xx driver"
  dmaengine: tegra: Add support for dma-channel-mask
  dt-bindings: dmaengine: Add dma-channel-mask to Tegra GPCDMA
  dmaengine: idxd: Remove linux/msi.h include
  dt-bindings: dmaengine: qcom: gpi: add compatible for SM6375
  dmaengine: idxd: Fix crc_val field for completion record
  dmaengine: at_hdmac: Convert driver to use virt-dma
  dmaengine: at_hdmac: Remove unused member of at_dma_chan
  dmaengine: at_hdmac: Rename "chan_common" to "dma_chan"
  dmaengine: at_hdmac: Rename "dma_common" to "dma_device"
  dmaengine: at_hdmac: Use bitfield access macros
  dmaengine: at_hdmac: Keep register definitions and structures private to at_hdmac.c
  dmaengine: at_hdmac: Set include entries in alphabetic order
  dmaengine: at_hdmac: Use pm_ptr()
  dmaengine: at_hdmac: Use devm_clk_get()
  dmaengine: at_hdmac: Use devm_platform_ioremap_resource
  dmaengine: at_hdmac: Use devm_kzalloc() and struct_size()
  dmaengine: at_hdmac: Introduce atc_get_llis_residue()
  dmaengine: at_hdmac: s/atc_get_bytes_left/atc_get_residue
  dmaengine: at_hdmac: Pass residue by address to avoid unnecessary implicit casts
  ...
2022-12-02 | Merge tag 'v6.1-rc7' into iommufd.git for-next | Jason Gunthorpe | 5 files, -12/+70
Resolve conflicts in drivers/vfio/vfio_main.c by using the iommufd version. The rc fix was done a different way when iommufd patches reworked this code. Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2022-11-14 | dmaengine: idxd: Remove linux/msi.h include | Thomas Gleixner | 1 file, -1/+0
Nothing in this file needs anything from linux/msi.h Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Vinod Koul <vkoul@kernel.org> Cc: dmaengine@vger.kernel.org Link: https://lore.kernel.org/r/20221113202428.573536003@linutronix.de Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-11-11 | Merge branch 'fixes' into next | Vinod Koul | 5 files, -12/+70
Merge due to at_hdmac driver dependency
2022-11-08 | dmaengine: idxd: fix RO device state error after being disabled/reset | Fengqian Gao | 1 file, -6/+14
When IDXD is not configurable, that means its WQ, engine, and group configurations cannot be changed. But it can be disabled and its state should be set as disabled regardless of whether it is configurable or not. Fix this by setting device state IDXD_DEV_DISABLED for a read-only device as well in idxd_device_clear_state(). Fixes: cf4ac3fef338 ("dmaengine: idxd: fix lockdep warning on device driver removal") Signed-off-by: Fengqian Gao <fengqian.gao@intel.com> Reviewed-by: Xiaochen Shen <xiaochen.shen@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20220930032835.2290-1-fengqian.gao@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-11-08 | dmaengine: idxd: Fix max batch size for Intel IAA | Xiaochen Shen | 4 files, -6/+38
From the Intel IAA spec [1], Intel IAA does not support batch processing. Two batch related default values for IAA are incorrect in the current code: (1) The max batch size of the device is set during device initialization, which indicates that batch is supported. It should always be 0 on IAA. (2) The max batch size of the work queue is set to WQ_DEFAULT_MAX_BATCH (32) as the default value regardless of Intel DSA or IAA device during work queue setup and cleanup. It should always be 0 on IAA. Fix the issues by setting the max batch size of the device and the max batch size of the work queue to 0 on the IAA device, which means batch is not supported. [1]: https://cdrdv2.intel.com/v1/dl/getContent/721858 Fixes: 23084545dbb0 ("dmaengine: idxd: set max_xfer and max_batch for RO device") Fixes: 92452a72ebdf ("dmaengine: idxd: set defaults for wq configs") Fixes: bfe1d56091c1 ("dmaengine: idxd: Init and probe for Intel data accelerators") Signed-off-by: Xiaochen Shen <xiaochen.shen@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20220930201528.18621-2-xiaochen.shen@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
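A small sketch of the kind of guard described here, assuming the driver's device-type enum (IDXD_TYPE_IAX); the helper name is illustrative, not the actual one in the patch:

  /* Batch descriptors are not supported on IAA, so any max batch size
   * programmed or reported for such a device should read as 0. */
  static unsigned int idxd_clamped_max_batch_size(enum idxd_type type,
                                                  unsigned int size)
  {
          return type == IDXD_TYPE_IAX ? 0 : size;
  }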
2022-11-04 | dmaengine: idxd: Make read buffer sysfs attributes invisible for Intel IAA | Xiaochen Shen | 1 file, -0/+36
In the current code, the following sysfs attributes are exposed to the user to show or update the values: max_read_buffers (max_tokens) read_buffer_limit (token_limit) group/read_buffers_allowed (group/tokens_allowed) group/read_buffers_reserved (group/tokens_reserved) group/use_read_buffer_limit (group/use_token_limit) From the Intel IAA spec [1], Intel IAA does not support Read Buffer allocation control. So these sysfs attributes should not be supported on the IAA device. Fix this issue by making these sysfs attributes invisible through the is_visible() filter when the device is IAA. Add a description in the ABI documentation to mention that these attributes are not visible when the device does not support Read Buffer allocation control. [1]: https://cdrdv2.intel.com/v1/dl/getContent/721858 Fixes: fde212e44f45 ("dmaengine: idxd: deprecate token sysfs attributes for read buffers") Fixes: c52ca478233c ("dmaengine: idxd: add configuration component of driver") Signed-off-by: Xiaochen Shen <xiaochen.shen@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20221022074949.11719-1-xiaochen.shen@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
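A hedged sketch of hiding one such attribute through the attribute group's is_visible() hook; the attribute and helper names are assumptions for illustration, and only one attribute is shown:

  static umode_t idxd_device_attr_visible(struct kobject *kobj,
                                          struct attribute *attr, int n)
  {
          struct device *dev = kobj_to_dev(kobj);
          struct idxd_device *idxd = confdev_to_idxd(dev);

          /* Read Buffer allocation control does not exist on IAA, so do
           * not expose the related files there at all. */
          if (idxd->data->type == IDXD_TYPE_IAX &&
              attr == &dev_attr_max_read_buffers.attr)
                  return 0;

          return attr->mode;
  }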
2022-11-03 | iommu: Remove SVM_FLAG_SUPERVISOR_MODE support | Lu Baolu | 2 files, -26/+2
The current kernel DMA with PASID support is based on the SVA with a flag SVM_FLAG_SUPERVISOR_MODE. The IOMMU driver binds the kernel memory address space to a PASID of the device. The device driver programs the device with kernel virtual address (KVA) for DMA access. There have been security and functional issues with this approach: - The lack of IOTLB synchronization upon kernel page table updates. (vmalloc, module/BPF loading, CONFIG_DEBUG_PAGEALLOC etc.) - Other than slightly more protection, using kernel virtual address (KVA) has little advantage over physical address. There are also no use cases yet where DMA engines need kernel virtual addresses for in-kernel DMA. This removes SVM_FLAG_SUPERVISOR_MODE support from the IOMMU interface. The device drivers are suggested to handle kernel DMA with PASID through the kernel DMA APIs. The drvdata parameter in iommu_sva_bind_device() and all callbacks is not needed anymore. Clean them up as well. Link: https://lore.kernel.org/linux-iommu/20210511194726.GP1002214@nvidia.com/ Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org> Tested-by: Tony Zhu <tony.zhu@intel.com> Link: https://lore.kernel.org/r/20221031005917.45690-4-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2022-10-19 | dmaengine: idxd: Make max batch size attributes in sysfs invisible for Intel IAA | Xiaochen Shen | 1 file, -0/+32
In the current code, dev.max_batch_size and wq.max_batch_size attributes in sysfs are exposed to the user to show or update the values. From the Intel IAA spec [1], Intel IAA does not support batch processing. So these sysfs attributes should not be supported on the IAA device. Fix this issue by making the attributes of max_batch_size invisible in sysfs through the is_visible() filter when the device is IAA. Add a description in the ABI documentation to mention that the attributes are not visible when the device does not support batch. [1]: https://cdrdv2.intel.com/v1/dl/getContent/721858 Fixes: e7184b159dd3 ("dmaengine: idxd: add support for configurable max wq batch size") Fixes: c52ca478233c ("dmaengine: idxd: add configuration component of driver") Signed-off-by: Xiaochen Shen <xiaochen.shen@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20220930201528.18621-3-xiaochen.shen@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-10-19 | dmaengine: idxd: Do not enable user type Work Queue without Shared Virtual Addressing | Fenghua Yu | 1 file, -0/+18
When the idxd_user_drv driver is bound to a Work Queue (WQ) device without IOMMU or with IOMMU Passthrough without Shared Virtual Addressing (SVA), the application gains direct access to physical memory via the device by programming physical address to a submitted descriptor. This allows direct userspace read and write access to arbitrary physical memory. This is inconsistent with the security goals of a good kernel API. Unlike vfio_pci driver, the IDXD char device driver does not provide any ways to pin user pages and translate the address from user VA to IOVA or PA without IOMMU SVA. Therefore the application has no way to instruct the device to perform DMA function. This makes the char device not usable for normal application usage. Since user type WQ without SVA cannot be used for normal application usage and presents the security issue, bind idxd_user_drv driver and enable user type WQ only when SVA is enabled (i.e. user PASID is enabled). Fixes: 448c3de8ac83 ("dmaengine: idxd: create user driver for wq 'device'") Cc: stable@vger.kernel.org Suggested-by: Arjan Van De Ven <arjan.van.de.ven@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Link: https://lore.kernel.org/r/20221014222541.3912195-1-fenghua.yu@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
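A minimal sketch of the bind-time check this implies (hedged; the helper device_user_pasid_enabled() is assumed from the user/kernel pasid split mentioned further down this log):

  static int idxd_user_drv_probe_sketch(struct idxd_wq *wq)
  {
          struct idxd_device *idxd = wq->idxd;

          /* Without SVA there is no safe way to translate user addresses,
           * so refuse to bind the user (char device) driver at all. */
          if (!device_user_pasid_enabled(idxd)) {
                  dev_dbg(&idxd->pdev->dev, "User type WQ cannot be enabled without SVA\n");
                  return -EOPNOTSUPP;
          }
          return 0;
  }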
2022-09-29 | dmaengine: idxd: add configuration for concurrent batch descriptor processing | Dave Jiang | 4 files, -3/+40
Add a sysfs knob to allow control of the number of batch descriptors that can be concurrently processed by an engine in the group as a fraction of the Maximum Work Descriptors in Progress value specified in the ENGCAP register. This control knob is part of the toggle for QoS control. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Co-developed-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20220917161222.2835172-6-fenghua.yu@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-09-29 | dmaengine: idxd: add configuration for concurrent work descriptor processing | Dave Jiang | 4 files, -15/+75
Add sysfs knob to allow control of the number of work descriptors that can be concurrently processed by an engine in the group as a fraction of the Maximum Work Descriptors in Progress value specified in ENGCAP register. This control knob is part of toggle for QoS control. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Co-developed-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20220917161222.2835172-5-fenghua.yu@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
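A hedged sketch of such a group sysfs knob; the attribute name, the group field, and the 0-3 encoding (full, 1/2, 1/4, 1/8 of the ENGCAP maximum) are assumptions for illustration:

  static ssize_t group_desc_progress_limit_store(struct device *dev,
                                                 struct device_attribute *attr,
                                                 const char *buf, size_t count)
  {
          struct idxd_group *group = confdev_to_group(dev);
          int val, rc;

          rc = kstrtoint(buf, 10, &val);
          if (rc < 0)
                  return rc;

          if (val & ~GENMASK(1, 0))       /* only the encodings 0..3 are valid */
                  return -EINVAL;

          group->desc_progress_limit = val;
          return count;
  }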
2022-09-29 | dmaengine: idxd: add WQ operation cap restriction support | Dave Jiang | 5 files, -3/+117
DSA 2.0 adds the capability of configuring DMA ops on a per workqueue basis. This means that certain ops can be disabled by the system administrator for certain wqs. By default, all ops are available. A bitmap is used to store the ops due to the total op size of 256 bits, and it is more convenient to use a range list to specify which bits are enabled. One usage this supports is VM migration between different iterations of devices. The newer ops are disabled in order to allow a guest to migrate to a host that only supports older ops. Another usage is to restrict the WQ to certain operations for performance QoS. An ops_config sysfs attribute is added per wq. It is only usable when the ops_config bit is set in the WQ_CAP register. This means that this attribute will return -EOPNOTSUPP on DSA 1.x devices. The expected input is a range list for the bits per operation the WQ supports. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Co-developed-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20220917161222.2835172-4-fenghua.yu@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
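A rough sketch of parsing the range-list input into a per-WQ ops bitmap (hedged; the capability bit, bitmap size macro, and field names are assumptions based on the description above):

  static int wq_ops_config_parse_sketch(struct idxd_wq *wq, const char *buf)
  {
          DECLARE_BITMAP(opmask, IDXD_MAX_OPCAP_BITS);
          int rc;

          /* Only usable when the ops_config bit is set in WQ_CAP; DSA 1.x
           * devices therefore get -EOPNOTSUPP. */
          if (!wq->idxd->hw.wq_cap.op_config)
                  return -EOPNOTSUPP;

          rc = bitmap_parselist(buf, opmask, IDXD_MAX_OPCAP_BITS);
          if (rc < 0)
                  return rc;

          bitmap_copy(wq->opcap_bmap, opmask, IDXD_MAX_OPCAP_BITS);
          return 0;
  }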
2022-09-29 | dmaengine: idxd: reformat opcap output to match bitmap_parse() input | Dave Jiang | 4 files, -7/+26
To make input and output consistent and prepping for the per WQ operation configuration support, change the output of opcap display to match the input that is expected by bitmap_parse() helper function. The output will be a bitmap with field width as the number of bits using the %*pb format specifier for printk() family. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Co-developed-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20220917161222.2835172-3-fenghua.yu@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
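A one-line sketch of the new output format, assuming the device keeps its operation capabilities in a bitmap (idxd->opcap_bmap and the size macro are assumed names):

  static ssize_t op_cap_show_sketch(struct idxd_device *idxd, char *buf)
  {
          /* %*pb prints the bitmap in the exact form bitmap_parse()
           * accepts; the field width is the number of bits. */
          return sysfs_emit(buf, "%*pb\n", IDXD_MAX_OPCAP_BITS, idxd->opcap_bmap);
  }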
2022-09-29 | dmaengine: idxd: convert ats_dis to a wq flag | Dave Jiang | 3 files, -5/+8
Make wq attributes access consistent. Convert ats_dis to wq flag WQ_FLAG_ATS_DISABLE. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Co-developed-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20220917161222.2835172-2-fenghua.yu@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-09-29 | dmaengine: idxd: Remove unused struct idxd_fault | Yuan Can | 1 file, -6/+0
Since fault processing code has been removed, struct idxd_fault is not used any more and can be removed as well. Signed-off-by: Yuan Can <yuancan@huawei.com> Link: https://lore.kernel.org/r/20220928014747.106808-1-yuancan@huawei.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-09-29 | dmaengine: idxd: track enabled workqueues in bitmap | Jerry Snitselaar | 5 files, -2/+14
Now that idxd_wq_disable_cleanup() sets the workqueue state to IDXD_WQ_DISABLED, use a bitmap to track which workqueues have been enabled. This will then be used to determine which workqueues should be re-enabled when attempting a software reset to recover from a device halt state. Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Vinod Koul <vkoul@kernel.org> Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20220928154856.623545-3-jsnitsel@redhat.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-09-29 | dmaengine: idxd: Set wq state to disabled in idxd_wq_disable_cleanup() | Jerry Snitselaar | 1 file, -1/+1
If we are calling idxd_wq_disable_cleanup(), the workqueue should be in a disabled state. So set the workqueue state to IDXD_WQ_DISABLED so that the state reflects that. Currently if there is a device failure, and a software reset is attempted the workqueues will not be re-enabled due to idxd_wq_enable() seeing that state as already being IDXD_WQ_ENABLED. Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Vinod Koul <vkoul@kernel.org> Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20220928154856.623545-2-jsnitsel@redhat.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
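The change itself is one line; a hedged sketch of where it lands in the cleanup helper (the other statements are placeholders for the existing cleanup, and the lock/field names are assumptions):

  static void idxd_wq_disable_cleanup_sketch(struct idxd_wq *wq)
  {
          lockdep_assert_held(&wq->wq_lock);
          memset(wq->wqcfg, 0, wq->idxd->wqcfg_size);     /* existing cleanup */
          wq->state = IDXD_WQ_DISABLED;                   /* the added line */
  }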
2022-09-04 | dmaengine: idxd: avoid deadlock in process_misc_interrupts() | Jerry Snitselaar | 1 file, -2/+0
idxd_device_clear_state() now grabs the idxd->dev_lock itself, so don't grab the lock prior to calling it. This was seen in testing after a dmar fault occurred on a system, resulting in lockup stack traces. Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Vinod Koul <vkoul@kernel.org> Cc: dmaengine@vger.kernel.org Fixes: cf4ac3fef338 ("dmaengine: idxd: fix lockdep warning on device driver removal") Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20220823163709.2102468-1-jsnitsel@redhat.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-07-05 | dmaengine: idxd: Only call idxd_enable_system_pasid() if succeeded in enabling SVA feature | Jerry Snitselaar | 1 file, -6/+7
On a Sapphire Rapids system, if booted without intel_iommu=on, the IDXD driver will crash during probe in iommu_sva_bind_device():
[ 21.423729] BUG: kernel NULL pointer dereference, address: 0000000000000038
[ 21.445108] #PF: supervisor read access in kernel mode
[ 21.450912] #PF: error_code(0x0000) - not-present page
[ 21.456706] PGD 0
[ 21.459047] Oops: 0000 [#1] PREEMPT SMP NOPTI
[ 21.464004] CPU: 0 PID: 1420 Comm: kworker/0:3 Not tainted 5.19.0-0.rc3.27.eln120.x86_64 #1
[ 21.464011] Hardware name: Intel Corporation EAGLESTREAM/EAGLESTREAM, BIOS EGSDCRB1.SYS.0067.D12.2110190954 10/19/2021
[ 21.464015] Workqueue: events work_for_cpu_fn
[ 21.464030] RIP: 0010:iommu_sva_bind_device+0x1d/0xe0
[ 21.464046] Code: c3 cc 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 41 57 41 56 49 89 d6 41 55 41 54 55 53 48 83 ec 08 48 8b 87 d8 02 00 00 <48> 8b 40 38 48 8b 50 10 48 83 7a 70 00 48 89 14 24 0f 84 91 00 00
[ 21.464050] RSP: 0018:ff7245d9096b7db8 EFLAGS: 00010296
[ 21.464054] RAX: 0000000000000000 RBX: ff1eadeec8a51000 RCX: 0000000000000000
[ 21.464058] RDX: ff7245d9096b7e24 RSI: 0000000000000000 RDI: ff1eadeec8a510d0
[ 21.464060] RBP: ff1eadeec8a51000 R08: ffffffffb1a12300 R09: ff1eadffbfce25b4
[ 21.464062] R10: ffffffffffffffff R11: 0000000000000038 R12: ffffffffc09f8000
[ 21.464065] R13: ff1eadeec8a510d0 R14: ff7245d9096b7e24 R15: ff1eaddf54429000
[ 21.464067] FS: 0000000000000000(0000) GS:ff1eadee7f600000(0000) knlGS:0000000000000000
[ 21.464070] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 21.464072] CR2: 0000000000000038 CR3: 00000008c0e10006 CR4: 0000000000771ef0
[ 21.464074] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 21.464076] DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400
[ 21.464078] PKRU: 55555554
[ 21.464079] Call Trace:
[ 21.464083] <TASK>
[ 21.464092] idxd_pci_probe+0x259/0x1070 [idxd]
[ 21.464121] local_pci_probe+0x3e/0x80
[ 21.464132] work_for_cpu_fn+0x13/0x20
[ 21.464136] process_one_work+0x1c4/0x380
[ 21.464143] worker_thread+0x1ab/0x380
[ 21.464147] ? _raw_spin_lock_irqsave+0x23/0x50
[ 21.464158] ? process_one_work+0x380/0x380
[ 21.464161] kthread+0xe6/0x110
[ 21.464168] ? kthread_complete_and_exit+0x20/0x20
[ 21.464172] ret_from_fork+0x1f/0x30
iommu_sva_bind_device() requires that SVA has been enabled successfully on the IDXD device before it's called. Otherwise, iommu_sva_bind_device() will access a NULL pointer. If Intel IOMMU is disabled, SVA cannot be enabled and thus idxd_enable_system_pasid() and iommu_sva_bind_device() should not be called. Fixes: 42a1b73852c4 ("dmaengine: idxd: Separate user and kernel pasid enabling") Cc: Vinod Koul <vkoul@kernel.org> Cc: linux-kernel@vger.kernel.org Cc: Dave Jiang <dave.jiang@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/dmaengine/20220623170232.6whonfjuh3m5vcoy@cantor/ Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com> Acked-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20220626051648.14249-1-jsnitsel@redhat.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-07-01 | dmaengine: idxd: force wq context cleanup on device disable path | Dave Jiang | 1 file, -4/+1
Testing has shown that when a wq mode is set up to be dedicated and then torn down and reconfigured to shared, the wq ends up configured as dedicated anyway. The root cause is that when idxd_device_wqs_clear_state() gets called during idxd_driver removal, idxd_wq_disable_cleanup() does not get called, unlike when the wq driver is removed first. The check of the wq state being "enabled" causes the cleanup to be bypassed. However, idxd_driver->remove() releases all wq drivers. So the wqs go to the "disabled" state and will never be "enabled". By that point, the driver has no idea if the wq was previously configured or clean. So always call idxd_wq_disable_cleanup() on all wqs to make sure everything gets cleaned up. Reported-by: Tony Zhu <tony.zhu@intel.com> Tested-by: Tony Zhu <tony.zhu@intel.com> Fixes: 0dcfe41e9a4c ("dmanegine: idxd: cleanup all device related bits after disabling device") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Co-developed-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20220628230056.2527816-1-fenghua.yu@intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-29 | Merge tag 'dmaengine-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine | Linus Torvalds | 7 files, -113/+184
Pull dmaengine updates from Vinod Koul:
 "Nothing special, this includes a couple of new device support and new driver support and bunch of driver updates.

  New support:
   - Tegra gpcdma driver support
   - Qualcomm SM8350, Sm8450 and SC7280 device support
   - Renesas RZN1 dma and platform support

  Updates:
   - stm32 device pause/resume support and updates
   - DMA memset ops Documentation and usage clarification
   - deprecate '#dma-channels' & '#dma-requests' bindings
   - driver updates for stm32, ptdma idsx etc"

* tag 'dmaengine-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (87 commits)
  dmaengine: idxd: make idxd_wq_enable() return 0 if wq is already enabled
  dmaengine: sun6i: Add support for the D1 variant
  dmaengine: sun6i: Add support for 34-bit physical addresses
  dmaengine: sun6i: Do not use virt_to_phys
  dt-bindings: dma: sun50i-a64: Add compatible for D1
  dmaengine: tegra: Remove unused switch case
  dmaengine: tegra: Fix uninitialized variable usage
  dmaengine: stm32-dma: add device_pause/device_resume support
  dmaengine: stm32-dma: rename pm ops before dma pause/resume introduction
  dmaengine: stm32-dma: pass DMA_SxSCR value to stm32_dma_handle_chan_done()
  dmaengine: stm32-dma: introduce stm32_dma_sg_inc to manage chan->next_sg
  dmaengine: stm32-dmamux: avoid reset of dmamux if used by coprocessor
  dmaengine: qcom: gpi: Add support for sc7280
  dt-bindings: dma: pl330: Add power-domains
  dmaengine: stm32-mdma: use dev_dbg on non-busy channel spurious it
  dmaengine: stm32-mdma: fix chan initialization in stm32_mdma_irq_handler()
  dmaengine: stm32-mdma: remove GISR1 register
  dmaengine: ti: deprecate '#dma-channels'
  dmaengine: mmp: deprecate '#dma-channels'
  dmaengine: pxa: deprecate '#dma-channels' and '#dma-requests'
  ...
2022-05-19 | dmaengine: idxd: make idxd_wq_enable() return 0 if wq is already enabled | Dave Jiang | 1 file, -1/+1
When idxd_wq_enable() is called and the wq is already enabled, the code should return 0 and indicate success instead of returning an error code and failing. This also puts idxd_wq_enable() in sync with idxd_wq_disable(), which returns 0 if the wq is already disabled. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/165090980906.1378449.1939401700832432886.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-16 | dmaengine: idxd: Remove unnecessary synchronize_irq() before free_irq() | Minghao Chi | 1 file, -1/+0
Calling synchronize_irq() right before free_irq() is quite useless. On one hand the IRQ can easily fire again before free_irq() is entered, on the other hand free_irq() itself calls synchronize_irq() internally (in a race condition free way), before any state associated with the IRQ is freed. Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn> Link: https://lore.kernel.org/r/20220516115412.1651772-1-chi.minghao@zte.com.cn Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-16 | dmaengine: idxd: add missing callback function to support DMA_INTERRUPT | Dave Jiang | 1 file, -0/+22
When setting DMA_INTERRUPT capability, a callback function dma->device_prep_dma_interrupt() is needed to support this capability. Without setting the callback, dma_async_device_register() will fail dma capability check. Fixes: 4e5a4eb20393 ("dmaengine: idxd: set DMA_INTERRUPT cap bit") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/165101232637.3951447.15765792791591763119.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
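A hedged sketch of the registration rule being fixed here: a capability bit may only be advertised if the matching prep callback is provided (idxd_dma_prep_interrupt is an assumed callback name):

  static int idxd_register_dma_device_sketch(struct dma_device *dma)
  {
          dma_cap_set(DMA_INTERRUPT, dma->cap_mask);
          /* dma_async_device_register() fails its capability check if
           * DMA_INTERRUPT is set without device_prep_dma_interrupt. */
          dma->device_prep_dma_interrupt = idxd_dma_prep_interrupt;
          return dma_async_device_register(dma);
  }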
2022-05-16 | dmaengine: idxd: skip irq free when wq type is not kernel | Dave Jiang | 1 file, -0/+3
Skip freeing the wq irq resources when the wq type is not kernel, since the driver skips the irq allocation during wq enable. Add a wq type check in idxd_wq_free_irq() to mirror idxd_wq_request_irq(). Fixes: 63c14ae6c161 ("dmaengine: idxd: refactor wq driver enable/disable operations") Reported-by: Tony Zu <tony.zhu@intel.com> Tested-by: Tony Zu <tony.zhu@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/165176310726.2112428.7474366910758522079.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-16 | dmaengine: idxd: make idxd_register/unregister_dma_channel() static | Dave Jiang | 2 files, -4/+2
Since idxd_register/unregister_dma_channel() are only called locally, make them static. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/165187583222.3287435.12882651040433040246.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-16 | dmaengine: idxd: remove redundant idxd_wq_disable_cleanup() call | Dave Jiang | 1 file, -1/+0
idxd_wq_device_reset_cleanup() already calls idxd_wq_disable_cleanup(). There is no need to call idxd_wq_disable_cleanup() again in idxd_device_wqs_clear_state(). Remove the redundant call from idxd_wq_device_reset_cleanup(). Fixes: 0dcfe41e9a4c ("dmanegine: idxd: cleanup all device related bits after disabling device") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/165231365717.986350.2441351765955825964.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-16 | dmaengine: idxd: free irq before wq type is reset | Dave Jiang | 1 file, -1/+1
Call idxd_wq_free_irq() in the drv_disable_wq() function before idxd_wq_reset() is called. Otherwise the wq type is reset and the irq does not get freed. Fixes: 63c14ae6c161 ("dmaengine: idxd: refactor wq driver enable/disable operations") Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/165231367316.986407.11001767338124941736.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-16 | dmaengine: idxd: fix lockdep warning on device driver removal | Dave Jiang | 1 file, -7/+7
Jacob reported that with lockdep debug turned on, idxd_device_driver removal causes kernel splat from lock assert warning for idxd_device_wqs_clear_state(). Make sure idxd_device_wqs_clear_state() holds the wq lock for each wq when cleaning the wq state. Move the call outside of the device spinlock. Reported-by: Jacob Pan <jacob.jun.pan@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/165231364426.986304.9294302800482492780.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-16 | dmaengine: idxd: Separate user and kernel pasid enabling | Dave Jiang | 5 files, -23/+35
The idxd driver always gated the pasid enabling under a single knob and this assumption is incorrect. The pasid used for kernel operation can be independently toggled and has no dependency on the user pasid (and vice versa). Split the two so they are independent "enabled" flags. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/165231431746.986466.5666862038354800551.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-16 | dmaengine: idxd: Fix the error handling path in idxd_cdev_register() | Christophe JAILLET | 1 file, -1/+7
If a call to alloc_chrdev_region() fails, the already allocated resources are leaking. Add the needed error handling path to fix the leak. Fixes: 42d279f9137a ("dmaengine: idxd: add char driver to expose submission portal to userland") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Acked-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/1b5033dcc87b5f2a953c413f0306e883e6114542.1650521591.git.christophe.jaillet@wanadoo.fr Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-04-22 | dmaengine: idxd: refactor wq driver enable/disable operations | Dave Jiang | 4 files, -61/+43
Move the core driver operations from wq driver to the drv_enable_wq() and drv_disable_wq() functions. The move should reduce the wq driver's knowledge of the core driver operations and prevent code confusion for future wq drivers. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/165047301643.3841827.11222723219862233060.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-04-20 | dmaengine: idxd: move wq irq enabling to after device enable | Dave Jiang | 1 file, -9/+9
Move the calling of request_irq() and other related irq setup code until after the WQ is successfully enabled. This reduces the amount of setup/teardown if the wq is not configured correctly and cannot be enabled. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/164642777730.179702.1880317757087484299.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-04-20 | dmaengine: idxd: skip clearing device context when device is read-only | Dave Jiang | 1 file, -0/+3
If the device shows up as read-only configuration, skip the clearing of the state as the context must be preserved for device re-enable after being disabled. Fixes: 0dcfe41e9a4c ("dmanegine: idxd: cleanup all device related bits after disabling device") Reported-by: Tony Zhu <tony.zhu@intel.com> Tested-by: Tony Zhu <tony.zhu@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/164971479479.2200566.13980022473526292759.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-04-20 | dmaengine: idxd: add RO check for wq max_transfer_size write | Dave Jiang | 1 file, -0/+3
Block wq_max_transfer_size_store() when the device is configured as read-only and not configurable. Fixes: d7aad5550eca ("dmaengine: idxd: add support for configurable max wq xfer size") Reported-by: Bernice Zhang <bernice.zhang@intel.com> Tested-by: Bernice Zhang <bernice.zhang@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/164971488154.2200913.10706665404118545941.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-04-20 | dmaengine: idxd: add RO check for wq max_batch_size write | Dave Jiang | 1 file, -0/+3
Block wq_max_batch_size_store() when the device is configured as read-only and not configurable. Fixes: e7184b159dd3 ("dmaengine: idxd: add support for configurable max wq batch size") Reported-by: Bernice Zhang <bernice.zhang@intel.com> Tested-by: Bernice Zhang <bernice.zhang@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/164971493551.2201159.1942042593642155209.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-04-20 | dmaengine: idxd: fix retry value to be constant for duration of function call | Dave Jiang | 1 file, -2/+2
When retries is compared to wq->enqcmds_retries on each loop iteration of idxd_enqcmds(), wq->enqcmds_retries can potentially be changed by the user. Assign the value of wq->enqcmds_retries to retries during initialization so that it holds the original value set when entering the function. Fixes: 7930d8553575 ("dmaengine: idxd: add knob for enqcmds retries") Suggested-by: Dave Hansen <dave.hansen@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/165031760154.3658664.1983547716619266558.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
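A hedged sketch combining this fix with the type fix below: the retry limit is read once into an unsigned local so a concurrent sysfs write cannot change the bound mid-call (enqcmds() is the ENQCMDS wrapper from asm/special_insns.h; the loop structure is simplified):

  static int idxd_enqcmds_sketch(struct idxd_wq *wq, void __iomem *portal,
                                 const void *desc)
  {
          unsigned int retries = wq->enqcmds_retries;     /* snapshot at entry */
          int rc;

          do {
                  rc = enqcmds(portal, desc);
                  if (rc == 0)
                          break;
                  cpu_relax();
          } while (retries--);

          return rc;
  }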
2022-04-20 | dmaengine: idxd: match type for retries var in idxd_enqcmds() | Dave Jiang | 1 file, -1/+2
wq->enqcmds_retries is defined as unsigned int. However, retries on the stack is defined as int. Change retries to unsigned int to compare the same type. Fixes: 7930d8553575 ("dmaengine: idxd: add knob for enqcmds retries") Suggested-by: Thiago Macieira <thiago.macieira@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/165031747059.3658198.6035308204505664375.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-04-20 | dmaengine: idxd: set max_xfer and max_batch for RO device | Dave Jiang | 1 file, -0/+3
Load the max_xfer_size and max_batch_size values from the values read from registers to the shadow variables. This will allow the read-only device to display the correct values for the sysfs attributes. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/164971507673.2201761.11244446608988838897.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-04-20 | dmaengine: idxd: set DMA_INTERRUPT cap bit | Dave Jiang | 1 file, -0/+1
Even though the idxd driver has always supported interrupts, it never actually set the DMA_INTERRUPT cap bit. Rectify this mistake so the interrupt capability is advertised. Reported-by: Ben Walker <benjamin.walker@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/164971497859.2201379.17925303210723708961.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-04-11 | dmaengine: idxd: remove trailing white space on input str for wq name | Dave Jiang | 1 file, -2/+8
Add string processing with strim() in order to remove trailing white spaces that may be input by user for the wq->name. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/164789525123.2799661.13795829125221129132.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
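A small sketch of the store path, assuming the existing WQ_NAME_SIZE bound and name field; strim() trims the copy in place before it is saved:

  static int wq_name_store_sketch(struct idxd_wq *wq, const char *buf)
  {
          char tmp[WQ_NAME_SIZE + 1];

          if (strlen(buf) == 0 || strlen(buf) > WQ_NAME_SIZE)
                  return -EINVAL;

          memcpy(tmp, buf, strlen(buf) + 1);
          /* strim() strips leading/trailing whitespace from the copy */
          strscpy(wq->name, strim(tmp), WQ_NAME_SIZE + 1);
          return 0;
  }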
2022-04-11 | dmaengine: idxd: don't load pasid config until needed | Dave Jiang | 2 files, -14/+53
The driver currently programs the system pasid to the WQ preemptively when system pasid is enabled. Given that a dwq will reprogram the pasid and possibly a different pasid, the programming is not necessary. The pasid_en bit can be set for swq as it does not need pasid programming but needs the pasid_en bit. Remove system pasid programming on device config write. Add pasid programming for kernel wq type on wq driver enable. The char dev driver already reprograms the dwq on ->open() call so there's no change. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/164935607115.1660372.6734518676950372366.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-04-08 | dmaengine: idxd: fix device cleanup on disable | Dave Jiang | 1 file, -2/+1
There are certain parts of the WQ that need to be cleaned up even after the WQ is disabled during device disable. Those are the parts that remain unchangeable for a WQ while the device is still enabled. Move the cleanup outside of the WQ state check. Remove idxd_wq_disable_cleanup() inside idxd_wq_device_reset_cleanup() since only the unchangeable parts need to be cleared. Fixes: 0f225705cf65 ("dmaengine: idxd: fix wq settings post wq disable") Reported-by: Tony Zhu <tony.zhu@intel.com> Tested-by: Tony Zhu <tony.zhu@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/164919561905.1455025.13542366389944678346.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-03-11 | dmaengine: idxd: Remove useless DMA-32 fallback configuration | Christophe JAILLET | 1 file, -2/+0
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32 bits case will also fail for the same reason. Simplify code and remove some dead code accordingly. [1]: https://lore.kernel.org/linux-kernel/YL3vSPK5DXTNvgdx@infradead.org/#t Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Acked-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/009c80294dba72858cd8a6ed2ed81041df1b1e82.1642231430.git.christophe.jaillet@wanadoo.fr Signed-off-by: Vinod Koul <vkoul@kernel.org>
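The resulting setup reduces to a single call; a sketch assuming the usual PCI probe context:

  static int idxd_set_dma_mask_sketch(struct pci_dev *pdev)
  {
          /* With a non-NULL dev->dma_mask a 64-bit mask cannot fail, so
           * the removed 32-bit fallback was dead code. */
          return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
  }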
2022-02-15 | dmaengine: idxd: restore traffic class defaults after wq reset | Dave Jiang | 1 file, -2/+7
When clearing the group configurations, the driver fails to restore the default setting for DSA 1.x based devices. Add defaults in idxd_groups_clear_state() for traffic class configuration. Fixes: ade8a86b512c ("dmaengine: idxd: Set defaults for GRPCFG traffic class") Reported-by: Binuraj Ravindran <binuraj.ravindran@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/164304123369.824298.6952463420266592087.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-01-05 | dmaengine: idxd: deprecate token sysfs attributes for read buffers | Dave Jiang | 1 file, -27/+118
The following sysfs attributes will be obsolete due to the name change of tokens to read buffers: max_tokens token_limit group/tokens_allowed group/tokens_reserved group/use_token_limit Create new entries and have old entry print warning of deprecation. New attributes to replace the token ones: max_read_buffers read_buffer_limit group/read_buffers_allowed group/read_buffers_reserved group/use_read_buffer_limit Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/163951339488.2988321.2424012059911316373.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>