path: root/net
author    David S. Miller <davem@davemloft.net>  2018-09-28 11:12:29 -0700
committer David S. Miller <davem@davemloft.net>  2018-09-28 11:12:29 -0700
commit    f13d1b48b95bdafce6f9ad2cccbb9a6494f5a049 (patch)
tree      e58ffe301d2056d8e4f1ea934febea418573c8e4 /net
parent    05c5e9ff22e33cb9706b20fd3cf33b802f7edbf6 (diff)
parent    0c3b9d1b37df16ae6046a5a01f769bf3d21b838c (diff)
download  linux-f13d1b48b95bdafce6f9ad2cccbb9a6494f5a049.tar.bz2
Merge branch 'netpoll-second-round-of-fixes'
Eric Dumazet says:

====================
netpoll: second round of fixes.

As diagnosed by Song Liu, ndo_poll_controller() can be very dangerous on loaded hosts, since the CPU calling ndo_poll_controller() might steal all NAPI contexts (for all RX/TX queues of the NIC). This capture, showing one ksoftirqd eating all cycles, can last for an unlimited amount of time, since one CPU is generally not able to drain all the queues under load.

It seems that networking drivers that use NAPI for their TX completions should not provide an ndo_poll_controller(): most NAPI drivers already have netpoll support handled in the core networking stack, since netpoll_poll_dev() uses poll_napi(dev) to iterate through the registered NAPI contexts for a device.

The first patch is a fix in poll_one_napi(). The following patches take care of ten drivers.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
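The driver-side part of the series is mechanical: each of the ten drivers simply drops its ndo_poll_controller hook so netpoll falls back to the generic NAPI path. Below is a minimal sketch of that pattern for a hypothetical multi-queue driver; the foo_* names, the foo_priv layout and the per-queue loop are illustrative assumptions, not code taken from the actual patches.

#include <linux/netdevice.h>

#ifdef CONFIG_NET_POLL_CONTROLLER
/* Deleted by this kind of patch: a hand-rolled poll_controller that
 * schedules every queue's NAPI from whichever CPU invoked netpoll.
 * Under load that single CPU ends up owning all RX/TX contexts of the
 * NIC -- the capture described in the cover letter above.
 */
static void foo_netpoll(struct net_device *dev)
{
	struct foo_priv *priv = netdev_priv(dev);
	int i;

	for (i = 0; i < priv->num_queues; i++)
		napi_schedule(&priv->queue[i].napi);
}
#endif

static const struct net_device_ops foo_netdev_ops = {
	.ndo_open	= foo_open,
	.ndo_stop	= foo_close,
	.ndo_start_xmit	= foo_start_xmit,
	/* The series also removes the ".ndo_poll_controller = foo_netpoll,"
	 * entry here.  With no hook, netpoll_poll_dev() services the NAPI
	 * instances this driver registered with netif_napi_add(), one at a
	 * time, instead of scheduling them all onto one CPU.
	 */
};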
Diffstat (limited to 'net')
-rw-r--r--  net/core/netpoll.c | 20
1 file changed, 1 insertion(+), 19 deletions(-)
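As background for the hunk that follows, here is a simplified, non-verbatim sketch of the generic path the cover letter refers to: netpoll_poll_dev() ends up in poll_napi(), which walks the device's registered NAPI contexts and services each one through poll_one_napi(), the function being patched below. Ownership and budget details are abridged; see net/core/netpoll.c for the authoritative code.

#include <linux/netdevice.h>

static void poll_one_napi(struct napi_struct *napi);	/* local to net/core/netpoll.c */

static void poll_napi(struct net_device *dev)
{
	struct napi_struct *napi;
	int cpu = smp_processor_id();

	list_for_each_entry(napi, &dev->napi_list, dev_list) {
		/* Claim a NAPI context only if nobody else is polling it,
		 * so netpoll never races net_rx_action() on another CPU.
		 */
		if (cmpxchg(&napi->poll_owner, -1, cpu) == -1) {
			poll_one_napi(napi);
			smp_store_release(&napi->poll_owner, -1);
		}
	}
}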
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index 3219a2932463..3ae899805f8b 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -135,27 +135,9 @@ static void queue_process(struct work_struct *work)
 	}
 }
 
-/*
- * Check whether delayed processing was scheduled for our NIC. If so,
- * we attempt to grab the poll lock and use ->poll() to pump the card.
- * If this fails, either we've recursed in ->poll() or it's already
- * running on another CPU.
- *
- * Note: we don't mask interrupts with this lock because we're using
- * trylock here and interrupts are already disabled in the softirq
- * case. Further, we test the poll_owner to avoid recursion on UP
- * systems where the lock doesn't exist.
- */
 static void poll_one_napi(struct napi_struct *napi)
 {
-	int work = 0;
-
-	/* net_rx_action's ->poll() invocations and our's are
-	 * synchronized by this test which is only made while
-	 * holding the napi->poll_lock.
-	 */
-	if (!test_bit(NAPI_STATE_SCHED, &napi->state))
-		return;
+	int work;
 
 	/* If we set this bit but see that it has already been set,
 	 * that indicates that napi has been disabled and we need