author | Mike Marciniszyn <mike.marciniszyn@intel.com> | 2019-06-14 12:32:50 -0400
committer | Doug Ledford <dledford@redhat.com> | 2019-06-17 21:15:40 -0400
commit | f972775b1cc0441ae22c9f8d06dd16b118463632 (patch)
tree | 4c5d0e33b6fded1d1cb5526962b861595208d06b /drivers/infiniband
parent | 4bb02e9572af1383038d83ad196d7166c515f2ee (diff)
download | linux-f972775b1cc0441ae22c9f8d06dd16b118463632.tar.bz2
IB/hfi1: Wakeup QPs orphaned on wait list after flush
Once an SDMA engine is taken down due to a link failure, any waiting QPs
that do not have outstanding descriptors in the ring will stay on the
dmawait list for as long as the port is down. Since no timer is running
to wake them, they can stay there indefinitely.

The fix is to wake up all iowaits linked to dmawait. The send engine
will then build and post packets that get flushed back.
Fixes: 7724105686e7 ("IB/hfi1: add driver files")
Reviewed-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
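
For context, the mechanism the message relies on is the iowait contract: each
waiter parked on the SDMA wait list carries a wakeup callback, and invoking it
hands the QP back to its send engine, which then builds packets that complete
with flush status. The sketch below is illustrative only and is not the hfi1
code; the demo_* structure and function names are made up, and only the
<linux/list.h> and <linux/spinlock.h> primitives are real kernel APIs.

	#include <linux/list.h>
	#include <linux/spinlock.h>

	/* Hypothetical waiter: linkage on a wait list plus a wakeup hook. */
	struct demo_waiter {
		struct list_head list;
		void (*wakeup)(struct demo_waiter *w, int reason);
	};

	struct demo_wait_list {
		spinlock_t lock;
		struct list_head head;
	};

	/* Wake and unlink every parked waiter; harmless if the list is empty. */
	static void demo_wake_all(struct demo_wait_list *wl, int reason)
	{
		struct demo_waiter *w, *nw;
		unsigned long flags;

		spin_lock_irqsave(&wl->lock, flags);
		list_for_each_entry_safe(w, nw, &wl->head, list) {
			list_del_init(&w->list);	/* drop from the wait list */
			if (w->wakeup)
				w->wakeup(w, reason);	/* owner requeues its send work */
		}
		spin_unlock_irqrestore(&wl->lock, flags);
	}

A single drain pass like this is enough here because, with the link down, any
packets the rewoken QPs post are immediately completed back as flushed rather
than re-queued on the wait list.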
Diffstat (limited to 'drivers/infiniband')
-rw-r--r-- | drivers/infiniband/hw/hfi1/sdma.c | 17
1 file changed, 17 insertions, 0 deletions
diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
index 70828de7436b..28b66bd70b74 100644
--- a/drivers/infiniband/hw/hfi1/sdma.c
+++ b/drivers/infiniband/hw/hfi1/sdma.c
@@ -405,6 +405,7 @@ static void sdma_flush(struct sdma_engine *sde)
 	struct sdma_txreq *txp, *txp_next;
 	LIST_HEAD(flushlist);
 	unsigned long flags;
+	uint seq;
 
 	/* flush from head to tail */
 	sdma_flush_descq(sde);
@@ -415,6 +416,22 @@ static void sdma_flush(struct sdma_engine *sde)
 	/* flush from flush list */
 	list_for_each_entry_safe(txp, txp_next, &flushlist, list)
 		complete_tx(sde, txp, SDMA_TXREQ_S_ABORTED);
+	/* wakeup QPs orphaned on the dmawait list */
+	do {
+		struct iowait *w, *nw;
+
+		seq = read_seqbegin(&sde->waitlock);
+		if (!list_empty(&sde->dmawait)) {
+			write_seqlock(&sde->waitlock);
+			list_for_each_entry_safe(w, nw, &sde->dmawait, list) {
+				if (w->wakeup) {
+					w->wakeup(w, SDMA_AVAIL_REASON);
+					list_del_init(&w->list);
+				}
+			}
+			write_sequnlock(&sde->waitlock);
+		}
+	} while (read_seqretry(&sde->waitlock, seq));
 }
 
 /*
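
The new loop uses the engine's waitlock seqlock opportunistically: it samples
the sequence count with read_seqbegin(), takes the write side only when the
list looks non-empty, and re-checks with read_seqretry() so that a concurrent
writer (or its own write pass) forces another look. A minimal sketch of that
check-then-lock idiom, assuming a seqlock_t-protected list; the demo_* names
are hypothetical, the seqlock and list calls are the real kernel primitives.

	#include <linux/seqlock.h>
	#include <linux/list.h>

	struct demo_queue {
		seqlock_t lock;
		struct list_head items;
	};

	/*
	 * Drain the queue: peek at the list under the lockless read side,
	 * take the write lock only when there is work, and loop until a
	 * read pass observes a stable, empty state.
	 */
	static void demo_drain(struct demo_queue *q)
	{
		unsigned int seq;

		do {
			struct list_head *e, *next;

			seq = read_seqbegin(&q->lock);	/* lockless snapshot */
			if (!list_empty(&q->items)) {
				write_seqlock(&q->lock);
				list_for_each_safe(e, next, &q->items)
					list_del_init(e);	/* real code would hand 'e' back */
				write_sequnlock(&q->lock);
			}
		} while (read_seqretry(&q->lock, seq));	/* retry if anything changed */
	}

The advantage over unconditionally taking the spinlock is that the common case
(nothing waiting) stays read-only and contention-free, while the retry loop
covers the race where a waiter is added between the snapshot and the check.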