author    | Jakub Kicinski <kuba@kernel.org> | 2022-06-10 16:21:39 -0700
committer | Jakub Kicinski <kuba@kernel.org> | 2022-06-10 16:21:40 -0700
commit    | e10b02ee5b6c95872064cf0a8e65f31951a31967 (patch)
tree      | e061107c999e33aac6a61f87cd45a24cd4258422 /net/sched/em_ipt.c
parent    | 5c281b4e529cd5a73b32ac561d79f448d18dda6f (diff)
parent    | 0f2c2693988aeeb4c83a581fe58a28d526eecd39 (diff)
download  | linux-e10b02ee5b6c95872064cf0a8e65f31951a31967.tar.bz2
Merge branch 'net-reduce-tcp_memory_allocated-inflation'
Eric Dumazet says:
====================
net: reduce tcp_memory_allocated inflation
Hosts with a lot of sockets tend to hit so-called TCP memory pressure,
leading to very bad TCP performance and/or OOM.
The problem is that some TCP sockets can hold up to 2MB of 'forward
allocations' in their per-socket cache (sk->sk_forward_alloc),
and there is no mechanism to make them relinquish their share
under memory pressure.
Their share is reclaimed only during a few, potentially rare, events,
and then only one socket at a time.
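To make the problem concrete, here is a minimal userspace sketch of that
per-socket scheme (fake_sock, sk_charge and sk_uncharge are invented names
for illustration, not the kernel code): a socket charges whole pages against
the shared counter, and freed bytes go back into its own cache, where they
stay until one of those rare reclaim events.

/*
 * Hypothetical userspace model of a per-socket forward-allocation cache.
 * Not kernel code; it only illustrates how bytes get stranded per socket.
 */
#include <stdatomic.h>
#include <stdio.h>

#define PAGE_SIZE 4096L

static atomic_long memory_allocated;    /* stand-in for tcp_memory_allocated */

struct fake_sock {
    long sk_forward_alloc;              /* bytes charged globally, not yet used */
};

/* Reserve @bytes for @sk, charging whole pages to the shared counter. */
static void sk_charge(struct fake_sock *sk, long bytes)
{
    if (sk->sk_forward_alloc < bytes) {
        long pages = (bytes - sk->sk_forward_alloc + PAGE_SIZE - 1) / PAGE_SIZE;

        atomic_fetch_add(&memory_allocated, pages * PAGE_SIZE);
        sk->sk_forward_alloc += pages * PAGE_SIZE;
    }
    sk->sk_forward_alloc -= bytes;
}

/* Freed bytes return to the socket's own cache, not to the shared counter;
 * only a rare, explicit reclaim would give them back. */
static void sk_uncharge(struct fake_sock *sk, long bytes)
{
    sk->sk_forward_alloc += bytes;
}

int main(void)
{
    struct fake_sock sk = { 0 };

    sk_charge(&sk, 2L * 1024 * 1024);   /* queue ~2 MB of data     */
    sk_uncharge(&sk, 2L * 1024 * 1024); /* data consumed and freed */
    /* The 2 MB now sit in sk_forward_alloc, invisible to other sockets. */
    printf("global %ld bytes, stranded in socket %ld bytes\n",
           atomic_load(&memory_allocated), sk.sk_forward_alloc);
    return 0;
}

Multiplied across many thousands of sockets, this stranded memory is what
inflates tcp_memory_allocated and pushes hosts into memory pressure.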
In this series, I implemented a per-cpu cache instead of a per-socket one.
Each CPU has a +1/-1 MB (256 pages on x86) forward alloc cache, in order
not to dirty the tcp_memory_allocated shared cache line too often.
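The batching idea can be sketched in a few lines of userspace C (invented
names -- mem_charge, mem_uncharge, local_delta, PCPU_BATCH -- stand in for
the real per-cpu plumbing; this is not the code in this series): charges
and uncharges accumulate in a per-thread delta, and the shared counter is
touched only when that delta leaves the +/-1 MB window.

/*
 * Hypothetical userspace model of the per-CPU batching idea.  Small
 * charges accumulate in a thread-local delta; the shared counter (and
 * its cache line) is updated only once per ~1 MB of net change.
 */
#include <stdatomic.h>
#include <stdio.h>

#define PAGE_SIZE   4096L
#define PCPU_BATCH  (256L * PAGE_SIZE)  /* +/- 1 MB (256 pages) window */

static atomic_long memory_allocated;    /* shared tcp_memory_allocated stand-in */
static _Thread_local long local_delta;  /* per-"CPU" bytes not yet folded in */

/* Fold the local delta into the shared counter once it leaves the window. */
static void maybe_flush(void)
{
    if (local_delta > PCPU_BATCH || local_delta < -PCPU_BATCH) {
        atomic_fetch_add(&memory_allocated, local_delta);
        local_delta = 0;
    }
}

static void mem_charge(long bytes)   { local_delta += bytes; maybe_flush(); }
static void mem_uncharge(long bytes) { local_delta -= bytes; maybe_flush(); }

int main(void)
{
    /* 10000 page-sized charges, half of them undone: the thread-local
     * counter absorbs almost all of the traffic, and the shared atomic
     * is written only about once per 256 pages of net growth. */
    for (int i = 0; i < 10000; i++) {
        mem_charge(PAGE_SIZE);
        if (i & 1)
            mem_uncharge(PAGE_SIZE);
    }
    printf("global %ld bytes, still local %ld bytes\n",
           atomic_load(&memory_allocated), local_delta);
    return 0;
}

The trade-off is that the shared counter can lag the true total by up to
1 MB per CPU at any instant.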
We keep sk->sk_forward_alloc values as small as possible, to meet
the memcg page-granularity constraint.
Note that memcg already has a per-cpu cache, although MEMCG_CHARGE_BATCH
is defined as 32 pages (only 128 KB with 4 KB pages), which seems a bit small.
Note that while this cover letter mentions TCP, this work is generic
and supports TCP, UDP, DECnet, and SCTP.
====================
Link: https://lore.kernel.org/r/20220609063412.2205738-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'net/sched/em_ipt.c'): 0 files changed, 0 insertions, 0 deletions