path: root/kernel/locking/Makefile
author:    Eric Dumazet <edumazet@google.com>  2020-05-02 19:54:18 -0700
committer: David S. Miller <davem@davemloft.net>  2020-05-03 15:50:31 -0700
commit:    dde0a648fc00e2156a3358600c5fbfb3f53256ac (patch)
tree:      ad72abd43309b697b9e39866eb7ebc1a9760972c /kernel/locking/Makefile
parent:    ee1bd483cc062d5050f9537064651dd2e06baee7 (diff)
download:  linux-dde0a648fc00e2156a3358600c5fbfb3f53256ac.tar.bz2
net_sched: sch_fq: avoid touching f->next from fq_gc()
A significant amount of cpu cycles is spent in fq_gc().

When fq_gc() does its lookup in the rb-tree, it needs the following
fields from struct fq_flow:

  f->sk       (lookup key in the rb-tree)
  f->fq_node  (anchor in the rb-tree)
  f->next     (used to determine if the flow is detached)
  f->age      (used to determine if the flow is a candidate for gc)

This unfortunately spans two cache lines (assuming 64-byte cache lines).

We can avoid using f->next if we use the low-order bit of f->{age|tail}.

This low-order bit is 0 if f->tail points to an sk_buff.
We set the low-order bit to 1 if the union contains a jiffies value.

Combined with the following patch, this makes sure we only need to
bring one cache line per flow into cpu caches.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'kernel/locking/Makefile')
0 files changed, 0 insertions, 0 deletions