author:    Joe Stringer <joe@wand.net.nz>  2020-03-29 15:53:38 -0700
committer: Alexei Starovoitov <ast@kernel.org>  2020-03-30 13:45:04 -0700
commit:    cf7fbe660f2dbd738ab58aea8e9b0ca6ad232449 (patch)
tree:      57b9db0835dacc52bcd96bea8b072af239e13ae2 /include/net/sock.h
parent:    b49e42a2dffd8d0202ddba98aa5ec23849cf5c3d (diff)
download:  linux-cf7fbe660f2dbd738ab58aea8e9b0ca6ad232449.tar.bz2
bpf: Add socket assign support
Add support for TPROXY via a new bpf helper, bpf_sk_assign().
This helper requires the BPF program to discover the socket via a call
to bpf_sk*_lookup_*(), then pass this socket to the new helper. The
helper takes its own reference to the socket in addition to any existing
reference that may or may not currently be held for the duration of
BPF processing. For the destination socket to receive the traffic, the
traffic must be routed towards that socket via a local route. The
simplest example route is below, but in practice you may want to route
traffic more narrowly (e.g., by CIDR):
$ ip route add local default dev lo
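As an illustration (not part of this patch), a TC ingress program using the
helper could look roughly like the sketch below; the destination address,
port 1234, and section name are assumptions for the example:

/*
 * Minimal sketch of a TC ingress program using bpf_sk_assign(); not part
 * of this patch. The 127.0.0.1:1234 listener is an illustrative assumption.
 */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("tc")
int assign_to_local_sk(struct __sk_buff *skb)
{
	struct bpf_sock_tuple tuple = {};
	struct bpf_sock *sk;

	/* Look up a local TCP listener on 127.0.0.1:1234 (illustrative). */
	tuple.ipv4.daddr = bpf_htonl(0x7f000001);
	tuple.ipv4.dport = bpf_htons(1234);

	sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple.ipv4),
			       BPF_F_CURRENT_NETNS, 0);
	if (!sk)
		return TC_ACT_OK;

	/* bpf_sk_assign() takes its own reference, so drop ours afterwards. */
	bpf_sk_assign(skb, sk, 0);
	bpf_sk_release(sk);
	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";

The program is assumed to be attached at TC ingress (clsact), where both the
socket lookup helpers and bpf_sk_assign() are available.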
This patch avoids introducing an extra bit into skb->sk, as
that would require more invasive changes to all code interacting with
the socket to ensure that the bit is handled correctly, such as all
error-handling cases along the path from the helper in BPF through to
the orphan path in the input. Instead, we opt to use the destructor
variable to switch on the prefetch of the socket.
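Conceptually, the marking works roughly as in the sketch below (simplified;
the real helper must also cope with non-refcounted request/timewait sockets):

/* Simplified sketch of the destructor-based marking; not the exact
 * helper implementation. The prefetched socket is attached to the skb
 * and flagged by pointing skb->destructor at sock_pfree(), which later
 * releases the extra reference when the skb is freed.
 */
static void sk_assign_sketch(struct sk_buff *skb, struct sock *sk)
{
	sock_hold(sk);			/* the helper's own reference */
	skb_orphan(skb);		/* detach any previously set socket */
	skb->sk = sk;
	skb->destructor = sock_pfree;	/* marks the socket as prefetched */
}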
Signed-off-by: Joe Stringer <joe@wand.net.nz>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20200329225342.16317-2-joe@wand.net.nz
Diffstat (limited to 'include/net/sock.h')
-rw-r--r-- | include/net/sock.h | 11
1 file changed, 11 insertions, 0 deletions
diff --git a/include/net/sock.h b/include/net/sock.h
index b5cca7bae69b..dc398cee7873 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1659,6 +1659,7 @@ void sock_rfree(struct sk_buff *skb);
 void sock_efree(struct sk_buff *skb);
 #ifdef CONFIG_INET
 void sock_edemux(struct sk_buff *skb);
+void sock_pfree(struct sk_buff *skb);
 #else
 #define sock_edemux sock_efree
 #endif
@@ -2526,6 +2527,16 @@ void sock_net_set(struct sock *sk, struct net *net)
 	write_pnet(&sk->sk_net, net);
 }
 
+static inline bool
+skb_sk_is_prefetched(struct sk_buff *skb)
+{
+#ifdef CONFIG_INET
+	return skb->destructor == sock_pfree;
+#else
+	return false;
+#endif /* CONFIG_INET */
+}
+
 static inline struct sock *skb_steal_sock(struct sk_buff *skb)
 {
 	if (skb->sk) {
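With the hunk above, code elsewhere can detect the prefetched socket by
comparing the destructor, e.g. to skip the usual orphan on the input path.
A hypothetical caller (not part of this file's diff) could look like:

/* Hypothetical helper, not from this hunk: keep a socket that was
 * prefetched by bpf_sk_assign() attached to the skb instead of
 * orphaning it, so the later socket lookup can pick it up.
 */
static inline void skb_orphan_unless_prefetched(struct sk_buff *skb)
{
	if (!skb_sk_is_prefetched(skb))
		skb_orphan(skb);
}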