author Maciej Fijalkowski <maciej.fijalkowski@intel.com> 2022-04-06 17:58:04 +0200
committer Daniel Borkmann <daniel@iogearbox.net> 2022-04-07 23:09:08 +0200
commit 8de8b71b787f38983d414d2dba169a3bfefa668a (patch)
tree 3cbe894a066543b30f97213283d8402616d3b23c /net/xdp
parent ec4eb8a86ade4d22633e1da2a7d85a846b7d1798 (diff)
xsk: Fix l2fwd for copy mode + busy poll combo
While checking AF_XDP copy mode combined with busy poll, strange results were observed: the rxdrop and txonly scenarios worked fine, but l2fwd broke immediately.

After a deeper look, it turned out that for l2fwd the Tx side was exiting early because xsk_no_wakeup() returned true, so in the end xsk_generic_xmit() was never called. Note that AF_XDP Tx in copy mode is syscall steered, so the current behavior is broken.

The txonly scenario only worked because sk_mark_napi_id_once_xdp() was never called: the Rx side is not in the picture for that case, and the mentioned function is called from xsk_rcv_check(). As a result sk::sk_napi_id was never set, which in turn meant that xsk_no_wakeup() returned false (see the sk->sk_napi_id >= MIN_NAPI_ID check in there).

To fix this, prefer busy poll in xsk_sendmsg() only when zero copy is enabled on the given AF_XDP socket. By doing so, busy poll in copy mode no longer exits early on the Tx side and xsk_generic_xmit() is eventually called.

Fixes: a0731952d9cd ("xsk: Add busy-poll support for {recv,send}msg()")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220406155804.434493-1-maciej.fijalkowski@intel.com
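For context, below is a sketch of the helper the message refers to, paraphrased from net/xdp/xsk.c around this kernel version (treat it as illustrative rather than a verbatim copy). It shows the sk->sk_napi_id >= MIN_NAPI_ID check: busy poll is only "preferred" (and the wakeup skipped) once sk_napi_id holds a real NAPI id, which on Rx is set via sk_mark_napi_id_once_xdp().

/* Paraphrased sketch of the helper in net/xdp/xsk.c. Returns true when
 * busy polling should be preferred over issuing a wakeup, i.e. when the
 * socket opted in and a valid NAPI id has already been recorded. */
static bool xsk_no_wakeup(struct sock *sk)
{
#ifdef CONFIG_NET_RX_BUSY_POLL
	/* Prefer busy-polling, skip the wakeup. */
	return READ_ONCE(sk->sk_prefer_busy_poll) &&
	       READ_ONCE(sk->sk_ll_usec) &&
	       READ_ONCE(sk->sk_napi_id) >= MIN_NAPI_ID;
#else
	return false;
#endif
}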
Diffstat (limited to 'net/xdp')
-rw-r--r-- net/xdp/xsk.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 2c34caee0fd1..7d3a00cb24ec 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -639,7 +639,7 @@ static int __xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len
 	if (sk_can_busy_loop(sk))
 		sk_busy_loop(sk, 1); /* only support non-blocking sockets */
 
-	if (xsk_no_wakeup(sk))
+	if (xs->zc && xsk_no_wakeup(sk))
 		return 0;
 
 	pool = xs->pool;
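To see why copy-mode Tx is "syscall steered", here is a minimal userspace sketch of the pattern AF_XDP applications (such as the l2fwd sample) use to drive transmission. The helper name kick_tx is hypothetical, and xsk_fd is assumed to be an AF_XDP socket already bound in copy mode with busy poll configured via SO_PREFER_BUSY_POLL/SO_BUSY_POLL. After producing Tx descriptors, the application must issue this sendto(); that call enters xsk_sendmsg(), and with the fix above it reaches xsk_generic_xmit() instead of returning early.

#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>

/* Hypothetical helper: kick the kernel to process the Tx ring of a
 * copy-mode AF_XDP socket. A zero-length, non-blocking sendto() carries
 * no payload; its only purpose is to trigger Tx processing. Before the
 * fix, a busy-poll socket could return from this syscall early, so the
 * queued descriptors were never transmitted. */
static void kick_tx(int xsk_fd)
{
	if (sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0) >= 0)
		return;
	if (errno == EAGAIN || errno == EBUSY || errno == ENETDOWN)
		return; /* transient; retry on the next loop iteration */
	/* anything else is a real error for the caller to handle */
}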