author    Daniel Borkmann <daniel@iogearbox.net>  2019-07-03 16:52:03 +0200
committer Daniel Borkmann <daniel@iogearbox.net>  2019-07-03 16:52:03 +0200
commit    e5a3e259ef239f443951d401db10db7d426c9497 (patch)
tree      6ef3c235c14a2ed5352d166637965af44cd8e103 /include/net
parent    d2f5bbbc350050895d9f33c2744a61f9e0af1caa (diff)
parent    d78e3f0614f866d4456f860dc5773a6176eb9937 (diff)
download  linux-e5a3e259ef239f443951d401db10db7d426c9497.tar.bz2
Merge branch 'bpf-tcp-rtt-hook'
Stanislav Fomichev says:

====================
The congestion control team would like to have a periodic callback to
track some TCP statistics. Let's add a sock_ops callback that can be
selectively enabled on a socket-by-socket basis and is executed for
every RTT. The BPF program's frequency can be further controlled by
calling bpf_ktime_get_ns and bailing out early.

I ran neper tcp_stream and tcp_rr tests with the sample program from
the last patch and didn't observe any noticeable performance
difference.

v2:
* add a comment about the second accept() in the selftest (Yonghong Song)
* refer to tcp_bpf.readme in the sample program (Yonghong Song)
====================

Suggested-by: Eric Dumazet <edumazet@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Priyaranjan Jha <priyarjha@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Lawrence Brakmo <brakmo@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
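To illustrate the opt-in flow described above (enabling the callback per socket and rate-limiting it with bpf_ktime_get_ns), here is a minimal sockops sketch. It is not the sample program from this series; the map name, program name (rtt_sockops), and the 1-second interval are illustrative, and it assumes a libbpf toolchain with BTF-style map definitions.

  // SPDX-License-Identifier: GPL-2.0
  /* Illustrative sketch only: opt established sockets into the per-RTT
   * callback and rate-limit the work with bpf_ktime_get_ns().
   */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* Single-slot array holding the timestamp of the last report. */
  struct {
  	__uint(type, BPF_MAP_TYPE_ARRAY);
  	__uint(max_entries, 1);
  	__type(key, __u32);
  	__type(value, __u64);
  } last_ns SEC(".maps");

  SEC("sockops")
  int rtt_sockops(struct bpf_sock_ops *skops)
  {
  	__u32 key = 0;
  	__u64 now, *last;

  	switch (skops->op) {
  	case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
  	case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:
  		/* Selectively enable the RTT callback for this socket. */
  		bpf_sock_ops_cb_flags_set(skops, BPF_SOCK_OPS_RTT_CB_FLAG);
  		break;
  	case BPF_SOCK_OPS_RTT_CB:
  		/* Bail out early to cap how often we do real work. */
  		now = bpf_ktime_get_ns();
  		last = bpf_map_lookup_elem(&last_ns, &key);
  		if (!last || now - *last < 1000000000ULL /* 1s */)
  			break;
  		*last = now;
  		bpf_printk("srtt_us=%u\n", skops->srtt_us);
  		break;
  	}

  	return 1;
  }

  char _license[] SEC("license") = "GPL";

With this scheme, every established TCP socket in the attached cgroup opts into BPF_SOCK_OPS_RTT_CB, while the map-based time check keeps the reporting cost bounded no matter how many RTT samples arrive.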
Diffstat (limited to 'include/net')
-rw-r--r--  include/net/tcp.h  |  8
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 9d36cc88d043..e16d8a3fd3b4 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -2221,6 +2221,14 @@ static inline bool tcp_bpf_ca_needs_ecn(struct sock *sk)
 	return (tcp_call_bpf(sk, BPF_SOCK_OPS_NEEDS_ECN, 0, NULL) == 1);
 }
 
+static inline void tcp_bpf_rtt(struct sock *sk)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+
+	if (BPF_SOCK_OPS_TEST_FLAG(tp, BPF_SOCK_OPS_RTT_CB_FLAG))
+		tcp_call_bpf(sk, BPF_SOCK_OPS_RTT_CB, 0, NULL);
+}
+
 #if IS_ENABLED(CONFIG_SMC)
 extern struct static_key_false tcp_have_smc;
 #endif
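
For completeness, a sockops program such as the sketch above is attached per cgroup as BPF_CGROUP_SOCK_OPS. The loader below is a hypothetical minimal example using standard libbpf calls; the object path, cgroup path, and program name are assumptions, and most error reporting is trimmed.

  // Hypothetical loader: attach a sockops object to a cgroup.
  #include <fcntl.h>
  #include <stdio.h>
  #include <bpf/bpf.h>
  #include <bpf/libbpf.h>

  int main(int argc, char **argv)
  {
  	struct bpf_object *obj;
  	struct bpf_program *prog;
  	int cg_fd;

  	if (argc != 3) {
  		fprintf(stderr, "usage: %s BPF_OBJ_FILE CGROUP_PATH\n", argv[0]);
  		return 1;
  	}

  	cg_fd = open(argv[2], O_RDONLY | O_DIRECTORY);
  	if (cg_fd < 0) {
  		perror("open cgroup");
  		return 1;
  	}

  	obj = bpf_object__open_file(argv[1], NULL);
  	if (libbpf_get_error(obj) || bpf_object__load(obj))
  		return 1;

  	prog = bpf_object__find_program_by_name(obj, "rtt_sockops");
  	if (!prog)
  		return 1;

  	/* Cgroup attachments outlive the loader process; detach with
  	 * bpf_prog_detach2() or by removing the cgroup.
  	 */
  	if (bpf_prog_attach(bpf_program__fd(prog), cg_fd,
  			    BPF_CGROUP_SOCK_OPS, 0)) {
  		perror("bpf_prog_attach");
  		return 1;
  	}

  	return 0;
  }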