author    Alexander Lobakin <alobakin@pm.me>  2021-01-15 15:04:40 +0000
committer Jakub Kicinski <kuba@kernel.org>    2021-01-16 19:02:04 -0800
commit    66c556025d687dbdd0f748c5e1df89c977b6c02a (patch)
tree      7b1594ff69b679f7e09a990113dbfb55cb0ea3b5 /net
parent    bcd0cf19ef8258ac31b9a20248b05c15a1f4b4b0 (diff)
download  linux-66c556025d687dbdd0f748c5e1df89c977b6c02a.tar.bz2
skbuff: back tiny skbs with kmalloc() in __netdev_alloc_skb() too
Commit 3226b158e67c ("net: avoid 32 x truesize under-estimation for tiny
skbs") ensured that skbs with data size lower than 1025 bytes will be
kmalloc'ed to avoid excessive page cache fragmentation and memory
consumption.
However, the fix addressed only __napi_alloc_skb() (primarily for
virtio_net and napi_get_frags()), and the issue can still be triggered
through __netdev_alloc_skb(), which is still used by several drivers.
Drivers often allocate a tiny skb for the headers and place the rest of
the frame into frags (so-called copybreak).
Mirror the condition to __netdev_alloc_skb() to handle this case too.

Since v1 [0]:
 - fix "Fixes:" tag;
 - refine commit message (mention copybreak use case).

[0] https://lore.kernel.org/netdev/20210114235423.232737-1-alobakin@pm.me

Fixes: a1c7fff7e18f ("net: netdev_alloc_skb() use build_skb()")
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Link: https://lore.kernel.org/r/20210115150354.85967-1-alobakin@pm.me
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
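To illustrate the copybreak pattern the message refers to: a driver copies
only the packet headers into a small skb and attaches the rest of the
received page as a frag, so the header skb is exactly the "tiny" allocation
this patch moves onto the kmalloc() path. The sketch below is hypothetical
(the function name, the 256-byte copybreak value and the page handling are
made up for illustration); only the sk_buff helpers are standard kernel API.

#include <linux/etherdevice.h>
#include <linux/mm.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical copybreak RX path: headers go into a tiny skb, the rest
 * of the frame stays in the RX page and is attached as a frag. Before
 * this patch, the small netdev_alloc_skb() below was still backed by a
 * page fragment, under-estimating truesize.
 */
static void example_rx_copybreak(struct napi_struct *napi,
				 struct net_device *dev, struct page *page,
				 unsigned int offset, unsigned int frame_len)
{
	const unsigned int copybreak = 256;	/* illustrative threshold */
	unsigned int hdr_len = min(copybreak, frame_len);
	struct sk_buff *skb;

	/* tiny allocation -> kmalloc'ed skb->head after this patch */
	skb = netdev_alloc_skb(dev, hdr_len);
	if (!skb)
		return;

	skb_put_data(skb, page_address(page) + offset, hdr_len);

	if (frame_len > hdr_len)
		/* rest of the frame as a frag; page refcounting omitted */
		skb_add_rx_frag(skb, 0, page, offset + hdr_len,
				frame_len - hdr_len, PAGE_SIZE);

	skb->protocol = eth_type_trans(skb, dev);
	napi_gro_receive(napi, skb);
}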
Diffstat (limited to 'net')
-rw-r--r--	net/core/skbuff.c	6
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index c1a6f262636a..785daff48030 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -437,7 +437,11 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
 
 	len += NET_SKB_PAD;
 
-	if ((len > SKB_WITH_OVERHEAD(PAGE_SIZE)) ||
+	/* If requested length is either too small or too big,
+	 * we use kmalloc() for skb->head allocation.
+	 */
+	if (len <= SKB_WITH_OVERHEAD(1024) ||
+	    len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
 	    (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
 		skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
 		if (!skb)
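For comparison, the check being mirrored is the one commit 3226b158e67c
added to __napi_alloc_skb(). The excerpt below is reproduced from memory of
that commit as a reference point, so treat it as approximate rather than
authoritative; the intended difference on the __netdev_alloc_skb() side is
only the length adjustment (NET_SKB_PAD alone instead of
NET_SKB_PAD + NET_IP_ALIGN).

	len += NET_SKB_PAD + NET_IP_ALIGN;

	/* If requested length is either too small or too big,
	 * we use kmalloc() for skb->head allocation.
	 */
	if (len <= SKB_WITH_OVERHEAD(1024) ||
	    len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
	    (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
		skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
		if (!skb)
			goto skb_fail;
		goto skb_success;
	}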