| author | Robin Murphy <robin.murphy@arm.com> | 2018-12-10 14:00:33 +0000 |
|---|---|---|
| committer | Christoph Hellwig <hch@lst.de> | 2018-12-11 14:32:13 +0100 |
| commit | ad78dee0b630527bdfed809d1f5ed95c601886ae (patch) | |
| tree | 5f71279e0474d43da345329c8a51def891a007af /arch | |
| parent | 0cb0e25e421436a83ee39857923e4213b983e463 (diff) | |
| download | linux-ad78dee0b630527bdfed809d1f5ed95c601886ae.tar.bz2 | |
dma-debug: Batch dma_debug_entry allocation
DMA debug entries aren't that useful individually: we always want some
larger quantity of them, and we don't really need to manage an exact
number, only to have 'enough'. In that regard, the current behaviour of
creating them one by one leads to a lot of unwarranted function call
overhead and memory wasted on alignment padding.
Now that we don't have to worry about freeing anything via
dma_debug_resize_entries(), we can optimise the allocation behaviour by
grabbing whole pages at once. That saves considerably on the
aforementioned overheads, and probably offers a little more cache/TLB
locality benefit when traversing the lists under normal operation. It
also leaves even less reason for an architecture-level override of the
preallocation size, so make the definition unconditional: if there is
still any desire to change the compile-time value for some platforms,
it would be better off as a Kconfig option anyway.
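
As an illustration only, here is a minimal sketch of what page-batched
allocation can look like. It assumes the free_entries list and the
num_free_entries/nr_total_entries counters that dma-debug already keeps;
the helper name and exact details are illustrative, not a copy of the
patch:

```c
/*
 * Illustrative sketch: batch-allocate dma_debug_entry structures a page
 * at a time instead of creating them one by one. Assumes the existing
 * free_entries list and num_free_entries/nr_total_entries counters.
 */
#define DMA_DEBUG_DYNAMIC_ENTRIES \
	(PAGE_SIZE / sizeof(struct dma_debug_entry))

static int dma_debug_create_entries(gfp_t gfp)
{
	struct dma_debug_entry *entry;
	int i;

	/* One page yields a whole batch of entries with a single call. */
	entry = (void *)get_zeroed_page(gfp);
	if (!entry)
		return -ENOMEM;

	for (i = 0; i < DMA_DEBUG_DYNAMIC_ENTRIES; i++)
		list_add_tail(&entry[i].list, &free_entries);

	num_free_entries += DMA_DEBUG_DYNAMIC_ENTRIES;
	nr_total_entries += DMA_DEBUG_DYNAMIC_ENTRIES;

	return 0;
}
```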
Since freeing a whole page of entries at once becomes enough of a
challenge that it's not really worth complicating dma_debug_init(), we
may as well also tweak the preallocation behaviour so that, as long as
we manage to allocate *some* pages, debugging stays enabled on a
best-effort basis rather than wasting what we did allocate.
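
A sketch of that best-effort preallocation, reusing the hypothetical
helper above; the standalone function, loop structure and messages are
assumptions for illustration, not the patch itself:

```c
/*
 * Illustrative sketch: preallocate in whole pages, and only disable
 * dma-debug entirely if no entries at all could be allocated.
 */
static void dma_debug_prealloc(u32 nr_prealloc_entries)
{
	int nr_pages = DIV_ROUND_UP(nr_prealloc_entries,
				    DMA_DEBUG_DYNAMIC_ENTRIES);
	int i;

	for (i = 0; i < nr_pages; ++i)
		if (dma_debug_create_entries(GFP_KERNEL))
			break;

	if (nr_total_entries == 0) {
		/* Nothing at all: give up and disable debugging. */
		pr_err("debugging out of memory error - disabled\n");
		global_disable = true;
	} else if (nr_total_entries != nr_prealloc_entries) {
		/* Partial success: keep going with what we have. */
		pr_warn("%u debug entries requested but only %u allocated\n",
			nr_prealloc_entries, nr_total_entries);
	}
}
```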
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Qian Cai <cai@lca.pw>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Diffstat (limited to 'arch')
0 files changed, 0 insertions, 0 deletions