author    Eric Dumazet <edumazet@google.com>  2016-05-02 10:56:27 -0700
committer David S. Miller <davem@davemloft.net>  2016-05-03 16:02:36 -0400
commit    1d2077ac0165c0d173a2255e37cf4dc5033d92c7 (patch)
tree      fc18bebb7e5ffe2109fd61c56230c5f47d5f313b /include/net
parent    e34b1638d02bef8c3278af30ee73077c5babc082 (diff)
download  linux-1d2077ac0165c0d173a2255e37cf4dc5033d92c7.tar.bz2
net: add __sock_wfree() helper
Hosts sending lots of ACK packets exhibit high sock_wfree() cost because of a cache line miss to test SOCK_USE_WRITE_QUEUE.

We could move this flag close to sk_wmem_alloc, but it is better to perform the atomic_sub_and_test() on a clean cache line, as it avoids one extra bus transaction.

skb_orphan_partial() can also have a fast track for packets that either are TCP acks, or already went through another skb_orphan_partial().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/net')
-rw-r--r--  include/net/sock.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/net/sock.h b/include/net/sock.h
index 1dbb1f9f7c1b..45f5b492c658 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1445,6 +1445,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority);
 struct sk_buff *sock_wmalloc(struct sock *sk, unsigned long size, int force,
			     gfp_t priority);
+void __sock_wfree(struct sk_buff *skb);
 void sock_wfree(struct sk_buff *skb);
 void skb_orphan_partial(struct sk_buff *skb);
 void sock_rfree(struct sk_buff *skb);