author    | David S. Miller <davem@davemloft.net> | 2022-12-09 09:18:08 +0000
committer | David S. Miller <davem@davemloft.net> | 2022-12-09 09:18:08 +0000
commit    | b602d00384bdfb8874cc4dcb298fc87cc369630b (patch)
tree      | c0260cb8a962dbc3ec94000c349417719eb1da2e /lib/kasprintf.c
parent    | 0bdff1152c2496acf29930ec9b3c3cd7790b3f68 (diff)
parent    | 9f3101dca3a7c69027c65770ac28803768efefa5 (diff)
download  | linux-b602d00384bdfb8874cc4dcb298fc87cc369630b.tar.bz2
Merge branch 'net-sched-retpoline'
Pedro Tammela says:
====================
net/sched: retpoline wrappers for tc
In tc, all qdiscs, classifiers and actions can be compiled as modules.
Today this results in indirect calls at every transition in the tc hierarchy.
Due to CONFIG_RETPOLINE, CPUs with mitigations=on may pay an extra cost on
indirect calls. For newer Intel CPUs with IBRS the extra cost is
nonexistent, but AMD Zen CPUs and older x86 CPUs still go through the
retpoline thunk.
Calls to known built-in symbols can be turned into direct calls, thus
avoiding the retpoline thunk. So far, tc has not been leveraging this
build information, missing out on a performance optimization for some
CPUs. In this series we wire up 'tcf_classify()' and 'tcf_action_exec()'
with direct calls to the known modules that are compiled as built-in, as an
opt-in optimization.
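As a rough illustration of the pattern (a minimal user-space C sketch, not the
kernel code added by this series; the names classify_fn, matchall_classify,
flower_classify and classify_wrapper are made up for the example): when the
function pointer matches a known built-in symbol, the wrapper calls that symbol
directly, which the compiler emits as a direct call that never goes through the
retpoline thunk; anything else falls back to the usual indirect call.

```c
#include <stdio.h>

/* Hypothetical classifier functions, standing in for built-in tc classifiers. */
static int matchall_classify(int pkt) { return pkt % 2; }
static int flower_classify(int pkt)   { return pkt % 3; }

typedef int (*classify_fn)(int pkt);

/*
 * Wrapper in the spirit of the series: if the pointer matches a known
 * built-in classifier, call it directly (no retpoline thunk); otherwise
 * fall back to the normal indirect call (e.g. a modular classifier).
 */
static int classify_wrapper(classify_fn fn, int pkt)
{
	if (fn == matchall_classify)
		return matchall_classify(pkt);	/* direct call */
	if (fn == flower_classify)
		return flower_classify(pkt);	/* direct call */
	return fn(pkt);				/* indirect call fallback */
}

int main(void)
{
	printf("%d\n", classify_wrapper(matchall_classify, 7));
	printf("%d\n", classify_wrapper(flower_classify, 7));
	return 0;
}
```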
We measured these changes on one AMD Zen 4 CPU (retpoline), one AMD Zen 3 CPU (retpoline),
one Intel 10th Gen CPU (IBRS), one Intel 3rd Gen CPU (retpoline) and one
Intel Xeon CPU (IBRS), using pktgen with 64b UDP packets. Our test setup is a
dummy device with clsact and matchall, in a kernel compiled with every
tc module built-in. We observed a 3-8% speedup on the retpoline CPUs
when going through 1 tc filter, and a 60-100% speedup when going through 100 filters.
For the IBRS CPUs we observed a 1-2% degradation in both scenarios; we believe
the extra branch checks introduce a small overhead. We therefore added
a static key that bypasses the wrapper on kernels compiled with CONFIG_RETPOLINE
but not using the retpoline mitigation.
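A similarly simplified sketch of that bypass, with a plain boolean standing in
for the kernel's static key (the flag name retpoline_in_use and the helper
classify_wrapper are illustrative, not the series' actual symbols): when the
retpoline mitigation is not active, the wrapper is skipped entirely and the
plain indirect call is taken, so IBRS CPUs do not pay for the extra pointer
comparisons.

```c
#include <stdbool.h>
#include <stdio.h>

static int matchall_classify(int pkt) { return pkt % 2; }

typedef int (*classify_fn)(int pkt);

/*
 * Stand-in for the static key: in the kernel this would be a static branch
 * flipped once at boot, depending on whether the retpoline mitigation is
 * actually in use on this machine.
 */
static bool retpoline_in_use;

static int classify_wrapper(classify_fn fn, int pkt)
{
	if (!retpoline_in_use)
		return fn(pkt);			/* bypass: plain indirect call */

	if (fn == matchall_classify)
		return matchall_classify(pkt);	/* direct-call fast path */

	return fn(pkt);				/* unknown target: indirect call */
}

int main(void)
{
	retpoline_in_use = false;		/* e.g. IBRS CPU, no retpoline */
	printf("%d\n", classify_wrapper(matchall_classify, 6));

	retpoline_in_use = true;		/* retpoline CPU */
	printf("%d\n", classify_wrapper(matchall_classify, 7));
	return 0;
}
```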
1 filter:
CPU        | before (pps) | after (pps) | diff
R9 7950X   |      5914980 |     6380227 | +7.8%
R9 5950X   |      4237838 |     4412241 | +4.1%
R9 5950X   |      4265287 |     4413757 | +3.4% [*]
i5-3337U   |      1580565 |     1682406 | +6.4%
i5-10210U  |      3006074 |     3006857 | +0.0%
i5-10210U  |      3160245 |     3179945 | +0.6% [*]
Xeon 6230R |      3196906 |     3197059 | +0.0%
Xeon 6230R |      3190392 |     3196153 | +0.01% [*]
100 filters:
CPU        | before (pps) | after (pps) | diff
R9 7950X   |       373598 |      820396 | +119.59%
R9 5950X   |       313469 |      633303 | +102.03%
R9 5950X   |       313797 |      633150 | +101.77% [*]
i5-3337U   |       127454 |      211210 | +65.71%
i5-10210U  |       389259 |      381765 | -1.9%
i5-10210U  |       408812 |      412730 | +0.9% [*]
Xeon 6230R |       415420 |      406612 | -2.1%
Xeon 6230R |       416705 |      405869 | -2.6% [*]
[*] In these tests we ran pktgen with clone set to 1000.
On the 7950X system we also tested how a filter's placement in the iteration order
affects performance: first by compiling a kernel with the filter under test placed
first in the static iteration, then repeating with it placed last (of the 15 classifiers existing today).
We saw a difference of 0.5-1% in pps between being first in the iteration and being last.
We therefore order the classifiers and actions by relevance, per our current thinking.
v5->v6:
- Address Eric Dumazet suggestions
v4->v5:
- Rebase
v3->v4:
- Address Eric Dumazet suggestions
v2->v3:
- Address suggestions by Jakub, Paolo and Eric
- Dropped RFC tag (I forgot to add it on v2)
v1->v2:
- Fix build errors found by the bots
- Address Kuniyuki Iwashima suggestions
====================
Signed-off-by: David S. Miller <davem@davemloft.net>