author    | Eric Dumazet <edumazet@google.com>       | 2014-09-22 13:19:44 -0700
committer | David S. Miller <davem@davemloft.net>    | 2014-09-22 16:27:10 -0400
commit    | fcdd1cf4dd63aecf86c987d7f4ec7187be5c2fbc (patch)
tree      | 9f74f24f8fe931ffac65805a30bf7e53de7e89b1 /include
parent    | 35f7aa5309c048bb70e58571942795fa9411ce6a (diff)
download  | linux-fcdd1cf4dd63aecf86c987d7f4ec7187be5c2fbc.tar.bz2
tcp: avoid possible arithmetic overflows
icsk_rto is a 32-bit field, and icsk_backoff can reach 15 by default,
or more if some sysctls (e.g. tcp_retries2) are changed.
Better to use 64-bit arithmetic to perform the icsk_rto << icsk_backoff operations.
As Joe Perches suggested, add a helper for this.
Yuchung spotted the tcp_v4_err() case.
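To make the overflow concrete, here is a small standalone userspace sketch (not kernel code; HZ, the exaggerated backoff value, and the rto_backoff_32()/rto_backoff_64() helpers are illustrative assumptions only). It shows how a 32-bit shift can wrap to a value that slips under the max_when clamp, while widening to 64 bits before shifting keeps the clamp effective, which is what the new helper does:

#include <stdint.h>
#include <stdio.h>

#define HZ		1000UL			/* assume a 1000 Hz tick for the example */
#define TCP_RTO_MAX	(120UL * HZ)		/* 120 s expressed in jiffies */

/* Pre-patch style: shift performed in 32 bits, then clamped. */
static uint32_t rto_backoff_32(uint32_t rto, unsigned int backoff, uint32_t max_when)
{
	uint32_t when = rto << backoff;		/* can wrap around */

	return when < max_when ? when : max_when;
}

/* Post-patch style: widen to 64 bits before shifting, then clamp. */
static unsigned long rto_backoff_64(uint32_t rto, unsigned int backoff,
				    unsigned long max_when)
{
	uint64_t when = (uint64_t)rto << backoff;

	return when < max_when ? (unsigned long)when : max_when;
}

int main(void)
{
	uint32_t rto = TCP_RTO_MAX;	/* 120000 jiffies */
	unsigned int backoff = 28;	/* exaggerated to make the wrap obvious */

	/* 32-bit shift wraps to 0, so the clamp no longer helps. */
	printf("32-bit shift: %u jiffies\n", rto_backoff_32(rto, backoff, TCP_RTO_MAX));
	/* 64-bit shift keeps the true value and is clamped to 120000. */
	printf("64-bit shift: %lu jiffies\n", rto_backoff_64(rto, backoff, TCP_RTO_MAX));
	return 0;
}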
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include')
-rw-r--r-- | include/net/inet_connection_sock.h | 9
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
index 5fbe6568c3cf..848e85cb5c61 100644
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -242,6 +242,15 @@ static inline void inet_csk_reset_xmit_timer(struct sock *sk, const int what,
 #endif
 }
 
+static inline unsigned long
+inet_csk_rto_backoff(const struct inet_connection_sock *icsk,
+		     unsigned long max_when)
+{
+	u64 when = (u64)icsk->icsk_rto << icsk->icsk_backoff;
+
+	return (unsigned long)min_t(u64, when, max_when);
+}
+
 struct sock *inet_csk_accept(struct sock *sk, int flags, int *err);
 
 struct request_sock *inet_csk_search_req(const struct sock *sk,
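For context, a caller would typically feed the helper's result straight into the timer-arming path. The snippet below is a hypothetical sketch of such a call site; it is not part of this patch's hunks (which are limited to 'include'), and it only relies on inet_csk_reset_xmit_timer(), whose declaration appears in the hunk header above, plus the existing ICSK_TIME_RETRANS and TCP_RTO_MAX constants:

	struct inet_connection_sock *icsk = inet_csk(sk);

	/* Arm the retransmit timer with the backed-off RTO; the shift
	 * inside the helper is done in 64 bits and clamped to TCP_RTO_MAX. */
	inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS,
				  inet_csk_rto_backoff(icsk, TCP_RTO_MAX),
				  TCP_RTO_MAX);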