| author | Eric Dumazet <edumazet@google.com> | 2016-09-15 08:12:33 -0700 |
|---|---|---|
| committer | David S. Miller <davem@davemloft.net> | 2016-09-17 09:59:30 -0400 |
| commit | ffb4d6c8508657824bcef68a36b2a0f9d8c09d10 (patch) | |
| tree | 63d82b7ef90a1d7346f6bc14737bf59334af998e | |
| parent | cce94483e47e8e3d74cf4475dea33f9fd4b6ad9f (diff) | |
| download | linux-ffb4d6c8508657824bcef68a36b2a0f9d8c09d10.tar.bz2 | |
tcp: fix overflow in __tcp_retransmit_skb()
If a TCP socket gets a large write queue, an overflow can happen
in a test in __tcp_retransmit_skb() preventing all retransmits.
The flow then stalls and resets after timeouts.
Tested:

```shell
sysctl -w net.core.wmem_max=1000000000
netperf -H dest -- -s 1000000000
```
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| -rw-r--r-- | net/ipv4/tcp_output.c | 3 |
|---|---|---|

1 file changed, 2 insertions(+), 1 deletion(-)
```diff
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index bdaef7fd6e47..f53d0cca5fa4 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2605,7 +2605,8 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs)
	 * copying overhead: fragmentation, tunneling, mangling etc.
	 */
	if (atomic_read(&sk->sk_wmem_alloc) >
-	    min(sk->sk_wmem_queued + (sk->sk_wmem_queued >> 2), sk->sk_sndbuf))
+	    min_t(u32, sk->sk_wmem_queued + (sk->sk_wmem_queued >> 2),
+		  sk->sk_sndbuf))
		return -EAGAIN;

	if (skb_still_in_host_queue(sk, skb))
```