author		Reinette Chatre <reinette.chatre@intel.com>	2022-12-07 14:52:22 -0800
committer	Vinod Koul <vkoul@kernel.org>	2022-12-28 16:24:50 +0530
commit		6744a030d81e456883bfbb627ac1f30465c1a989
tree		5a90827a4aba31b3ceb9c109df3b73ae1a176191
parent		1beeec45f9ac31eba52478379f70a5fa9c2ad005
download	linux-6744a030d81e456883bfbb627ac1f30465c1a989.tar.bz2
dmaengine: idxd: Do not call DMA TX callbacks during workqueue disable
On driver unload, any pending descriptors are flushed and the pending
DMA work is explicitly completed:
idxd_dmaengine_drv_remove() ->
drv_disable_wq() ->
idxd_wq_free_irq() ->
idxd_flush_pending_descs() ->
idxd_dma_complete_txd()
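For context, idxd_dma_complete_txd() is where the client's completion callback is finally invoked. The function below is a simplified illustration of that last step, not the verbatim upstream code (the "_sketch" suffix marks it as such; status decoding, resubmission and descriptor freeing are omitted). It assumes the usual idxd and dmaengine definitions (struct idxd_desc, dmaengine_desc_get_callback_invoke() and friends) are in scope:

/*
 * Simplified sketch of the tail of idxd_dma_complete_txd(): the
 * completion callback is reached by dereferencing the function
 * pointers stored on the transmit descriptor.
 */
static void idxd_dma_complete_txd_sketch(struct idxd_desc *desc,
					 enum idxd_complete_type comp_type)
{
	struct dma_async_tx_descriptor *tx = &desc->txd;
	struct dmaengine_result res = {
		.result = (comp_type == IDXD_COMPLETE_ABORT) ?
			  DMA_TRANS_ABORTED : DMA_TRANS_NOERROR,
	};

	if (tx->cookie) {
		dma_cookie_complete(tx);
		dma_descriptor_unmap(tx);
		/* Jumps through tx->callback_result or tx->callback if set. */
		dmaengine_desc_get_callback_invoke(tx, &res);
	}
}

A descriptor whose callback points into memory that has since been freed therefore faults the moment this flush path completes it.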
Once this flush has run during driver unload, any remaining descriptor
is likely stuck and can be dropped. Even so, such a descriptor may
still have a callback set that is no longer accessible. An example of
this problem is when dmatest fails and the dmatest module is unloaded.
The failure of dmatest leaves descriptors with
dma_async_tx_descriptor::callback pointing to code that no longer
exists. This causes a page fault like the one below when the IDXD
driver is unloaded and attempts to run the callback:
BUG: unable to handle page fault for address: ffffffffc0665190
#PF: supervisor instruction fetch in kernel mode
#PF: error_code(0x0010) - not-present page
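The stale pointer originates on the client side. A DMA client such as dmatest arms the callback on the descriptor returned by the prep call, and that callback lives in the client module's text. The sketch below shows only the generic pattern; the names my_done, my_done_callback and my_submit_memcpy are illustrative and not taken from dmatest:

/*
 * Generic DMA client submission pattern (illustrative, not copied
 * from dmatest). If this module is unloaded while the descriptor is
 * still pending, tx->callback is left pointing into freed module text.
 */
#include <linux/completion.h>
#include <linux/dmaengine.h>

static DECLARE_COMPLETION(my_done);

static void my_done_callback(void *arg)
{
	complete(arg);
}

static int my_submit_memcpy(struct dma_chan *chan, dma_addr_t dst,
			    dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len, DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	/* This is the pointer the idxd flush path later tries to call. */
	tx->callback = my_done_callback;
	tx->callback_param = &my_done;

	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie))
		return -EIO;

	dma_async_issue_pending(chan);
	return 0;
}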
Fix this by clearing the callback pointers on the transmit
descriptors, but only when the workqueue is being disabled.
Fixes: 403a2e236538 ("dmaengine: idxd: change MSIX allocation based on per wq activation")
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/37d06b772aa7f8863ca50f90930ea2fd80b38fc3.1670452419.git.reinette.chatre@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-rw-r--r--	drivers/dma/idxd/device.c	11
1 file changed, 11 insertions(+), 0 deletions(-)
diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
index cd792f3f9873..29dbb0f52e18 100644
--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -1172,8 +1172,19 @@ static void idxd_flush_pending_descs(struct idxd_irq_entry *ie)
 	spin_unlock(&ie->list_lock);
 
 	list_for_each_entry_safe(desc, itr, &flist, list) {
+		struct dma_async_tx_descriptor *tx;
+
 		list_del(&desc->list);
 		ctype = desc->completion->status ? IDXD_COMPLETE_NORMAL : IDXD_COMPLETE_ABORT;
+		/*
+		 * wq is being disabled. Any remaining descriptors are
+		 * likely to be stuck and can be dropped. callback could
+		 * point to code that is no longer accessible, for example
+		 * if dmatest module has been unloaded.
+		 */
+		tx = &desc->txd;
+		tx->callback = NULL;
+		tx->callback_result = NULL;
 		idxd_dma_complete_txd(desc, ctype, true);
 	}
 }
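Clearing the pointers is enough because the dmaengine callback-invoke logic only jumps through them when they are set, so a flushed descriptor now completes without calling anything. Roughly, and only as a paraphrase of the generic helper rather than its exact code:

/*
 * Paraphrase of the dmaengine callback-invoke logic (not verbatim):
 * with callback_result and callback both cleared by the change above,
 * nothing is dereferenced and the flushed descriptor completes quietly.
 */
static void invoke_descriptor_callback_sketch(struct dma_async_tx_descriptor *tx,
					      const struct dmaengine_result *res)
{
	if (tx->callback_result)
		tx->callback_result(tx->callback_param, res);
	else if (tx->callback)
		tx->callback(tx->callback_param);
}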