author		Daniel Borkmann <daniel@iogearbox.net>	2021-03-10 01:06:34 +0100
committer	Daniel Borkmann <daniel@iogearbox.net>	2021-03-10 01:07:21 +0100
commit		32f91529e2bdbe0d92edb3ced41dfba4beffa84a (patch)
tree		2e6ca2aa0d6d1ac694002dd4aff11915473d4118 /net/xdp/xskmap.c
parent		11d39cfeecfc9d92a5faa2a55c228e796478e0cb (diff)
parent		ee75aef23afe6e88497151c127c13ed69f41aaa2 (diff)
download	linux-32f91529e2bdbe0d92edb3ced41dfba4beffa84a.tar.bz2
Merge branch 'bpf-xdp-redirect'
Björn Töpel says:
====================
This two-patch series contains two optimizations for the
bpf_redirect_map() helper and the xdp_do_redirect() function.
The bpf_redirect_map() optimization is about avoiding the map lookup
dispatch. Instead of selecting the correct lookup function with a
switch-statement, we make bpf_redirect_map() a map operation, where each
map type provides its own bpf_redirect_map() implementation. This way the
run-time dispatch is avoided.
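As a rough illustration of the ops-table dispatch (a sketch only; the
actual helper lives in net/core/filter.c and its body may differ in
detail), the XDP redirect helper no longer needs to know anything about
the map type:

/* Sketch: each map type registers a map_redirect operation in its
 * struct bpf_map_ops, so the XDP redirect helper simply calls through the
 * ops table instead of switching on map->map_type to pick a lookup function.
 */
BPF_CALL_3(bpf_xdp_redirect_map, struct bpf_map *, map, u32, ifindex,
	   u64, flags)
{
	return map->ops->map_redirect(map, ifindex, flags);
}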
The xdp_do_redirect() patch restructures the code so that the map
pointer indirection can be avoided.
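To sketch the xdp_do_redirect() side (simplified; the field names below
are assumptions for illustration, not necessarily the exact upstream
layout), the per-CPU redirect state set by map_redirect carries the map
id and map type rather than a struct bpf_map pointer, so the fast path
never dereferences the map:

/* Sketch: redirect state keyed per CPU; xdp_do_redirect() branches on
 * map_type instead of following a map pointer.
 */
struct bpf_redirect_info {
	u32 flags;
	u32 tgt_index;			/* key passed to bpf_redirect_map() */
	void *tgt_value;		/* looked-up entry: netdev, CPU entry, xsk socket, ... */
	u32 map_id;			/* used for tracepoints; replaces the map pointer */
	enum bpf_map_type map_type;	/* what xdp_do_redirect() dispatches on */
};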
Performance-wise I got a 4% improvement for XSKMAP
(sample:xdpsock/rx-drop), and an 8% improvement (sample:xdp_redirect_map)
on my machine.
v5->v6: Removed REDIR enum, and instead use map_id and map_type. (Daniel)
Applied Daniel's fixups on patch 1. (Daniel)
v4->v5: Renamed map operation to map_redirect. (Daniel)
v3->v4: Made bpf_redirect_map() a map operation. (Daniel)
v2->v3: Fix build when CONFIG_NET is not set. (lkp)
v1->v2: Removed warning when CONFIG_BPF_SYSCALL was not set. (lkp)
Cleaned up case-clause in xdp_do_generic_redirect_map(). (Toke)
Re-added comment. (Toke)
rfc->v1: Use map_id, and remove bpf_clear_redirect_map(). (Toke)
Get rid of the macro and use __always_inline. (Jesper)
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Diffstat (limited to 'net/xdp/xskmap.c')
-rw-r--r--	net/xdp/xskmap.c	17
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
index 113fd9017203..67b4ce504852 100644
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -87,7 +87,6 @@ static void xsk_map_free(struct bpf_map *map)
 {
 	struct xsk_map *m = container_of(map, struct xsk_map, map);
 
-	bpf_clear_redirect_map(map);
 	synchronize_net();
 	bpf_map_area_free(m);
 }
@@ -125,6 +124,16 @@ static int xsk_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
 	return insn - insn_buf;
 }
 
+static void *__xsk_map_lookup_elem(struct bpf_map *map, u32 key)
+{
+	struct xsk_map *m = container_of(map, struct xsk_map, map);
+
+	if (key >= map->max_entries)
+		return NULL;
+
+	return READ_ONCE(m->xsk_map[key]);
+}
+
 static void *xsk_map_lookup_elem(struct bpf_map *map, void *key)
 {
 	WARN_ON_ONCE(!rcu_read_lock_held());
@@ -215,6 +224,11 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
 	return 0;
 }
 
+static int xsk_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
+{
+	return __bpf_xdp_redirect_map(map, ifindex, flags, __xsk_map_lookup_elem);
+}
+
 void xsk_map_try_sock_delete(struct xsk_map *map, struct xdp_sock *xs,
 			     struct xdp_sock **map_entry)
 {
@@ -247,4 +261,5 @@ const struct bpf_map_ops xsk_map_ops = {
 	.map_check_btf = map_check_no_btf,
 	.map_btf_name = "xsk_map",
 	.map_btf_id = &xsk_map_btf_id,
+	.map_redirect = xsk_map_redirect,
 };
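For context, the __bpf_xdp_redirect_map() called by xsk_map_redirect()
above is, per the rfc->v1 note, a shared __always_inline helper that
takes the map-specific lookup as a callback, roughly along these lines
(a sketch; the authoritative definition is in include/linux/filter.h and
may differ in detail):

static __always_inline int
__bpf_xdp_redirect_map(struct bpf_map *map, u32 ifindex, u64 flags,
		       void *lookup_elem(struct bpf_map *map, u32 key))
{
	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);

	/* The lower bits of flags select the return code for a failed lookup. */
	if (unlikely(flags > XDP_TX))
		return XDP_ABORTED;

	ri->tgt_value = lookup_elem(map, ifindex);
	if (unlikely(!ri->tgt_value)) {
		ri->map_id = INT_MAX; /* valid map ids start at 1 */
		ri->map_type = BPF_MAP_TYPE_UNSPEC;
		return flags;
	}

	ri->tgt_index = ifindex;
	ri->map_id = map->id;
	ri->map_type = map->map_type;

	return XDP_REDIRECT;
}

Because the lookup callback is a compile-time constant at each call site,
the compiler can inline it per map type, which is where the
dispatch-avoidance win comes from.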