author    Rafał Miłecki <rafal@milecki.pl>    2022-10-25 15:22:45 +0200
committer Paolo Abeni <pabeni@redhat.com>     2022-10-27 13:04:43 +0200
commit    3a1cc23a75abcd9cea585eb84846507363d58397 (patch)
tree      d004518bb79848ec071e82ed7cc7521e2117cb0a
parent    c926b4c3fa1fdce5e128bc954cad94ca16acce41 (diff)
download  linux-3a1cc23a75abcd9cea585eb84846507363d58397.tar.bz2
net: broadcom: bcm4908_enet: use build_skb()
RX code can be more efficient with build_skb(). Allocating the actual SKB
around the eth packet buffer - right before passing it up - results in
better cache usage.

Without RPS (echo 0 > rps_cpus) BCM4908 NAT masq performance "jumps"
between two speeds: ~900 Mbps and 940 Mbps (it's a 4 CPUs SoC). This
change bumps the lower speed from 905 Mb/s to 918 Mb/s (tested using
single stream iperf 2.0.5 traffic).

There are more optimizations to consider. One obvious one to try is GRO,
however as BCM4908 doesn't do hw csum it may actually lower performance.
Sometimes. Some early testing:

┌─────────────────────────────────┬─────────────────────┬────────────────────┐
│                                 │ netif_receive_skb() │ napi_gro_receive() │
├─────────────────────────────────┼─────────────────────┼────────────────────┤
│ netdev_alloc_skb()              │ 905 Mb/s            │ 892 Mb/s           │
│ napi_alloc_frag() + build_skb() │ 918 Mb/s            │ 917 Mb/s           │
└─────────────────────────────────┴─────────────────────┴────────────────────┘

Other ideas:
1. napi_build_skb()
2. skb_copy_from_linear_data() for small packets

Those need proper testing first though. That can be done later.

Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Link: https://lore.kernel.org/r/20221025132245.22871-1-zajec5@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
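
For illustration only, a minimal sketch of the napi_alloc_frag() + build_skb()
RX pattern described above. This is not the bcm4908_enet driver code: the
EXAMPLE_BUF_SIZE macro, the function name and the omitted DMA/ring-refill
handling are assumptions made for the sketch.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/etherdevice.h>

/*
 * Hypothetical buffer size: headroom + payload plus room for the
 * struct skb_shared_info that build_skb() places at the end.
 */
#define EXAMPLE_BUF_SIZE \
	(SKB_DATA_ALIGN(NET_SKB_PAD + 1536) + \
	 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))

/* Sketch of handling one received frame; DMA mapping/unmapping omitted. */
static int example_rx_one(struct napi_struct *napi, void *buf, int len)
{
	struct sk_buff *skb;
	void *new_buf;

	/* Refill the ring slot first; if that fails, drop and keep the old buffer. */
	new_buf = napi_alloc_frag(EXAMPLE_BUF_SIZE);
	if (!new_buf)
		return -ENOMEM;

	/* Wrap an skb around the buffer the hardware already filled. */
	skb = build_skb(buf, EXAMPLE_BUF_SIZE);
	if (!skb) {
		skb_free_frag(new_buf);
		return -ENOMEM;
	}

	skb_reserve(skb, NET_SKB_PAD);	/* headroom left in front of the frame */
	skb_put(skb, len);
	skb->protocol = eth_type_trans(skb, napi->dev);

	/* Per the table above, plain netif_receive_skb() measured faster here. */
	netif_receive_skb(skb);

	/* ... hand new_buf back to the hardware descriptor (not shown) ... */
	return 0;
}

In a real driver the headroom passed to skb_reserve() and the frag size passed
to build_skb() must match how the receive buffers were programmed into the
hardware; the values above are placeholders.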