author		Xianting Tian <tian.xianting@h3c.com>	2020-10-19 16:20:47 +0800
committer	Jens Axboe <axboe@kernel.dk>	2020-10-20 07:08:17 -0600
commit		576e85c5e92486f1aa8be3cb1a30cb59d4415981 (patch)
tree		3026de0e15160a6f1acbe5ebb1280de3c1633d5e /block/blk-mq.c
parent		0669d2b265d0f6f9e16f1abbf5c5d2e22b219a6b (diff)
download	linux-576e85c5e92486f1aa8be3cb1a30cb59d4415981.tar.bz2
blk-mq: remove the calling of local_memory_node()
We don't need to check whether the node is a memoryless NUMA node
before calling the allocator interface. SLUB (and SLAB and SLOB)
relies on the page allocator to pick a node, and the page allocator
handles memoryless nodes just fine: it has a zonelist constructed for
each possible node, and it automatically falls back to the node
closest to the requested one, as long as __GFP_THISNODE is not
enforced, of course.
The code comment in SLAB's kmem_cache_alloc_node() also notes this:
* Fallback to other node is possible if __GFP_THISNODE is not set.
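To make the fallback concrete, here is a minimal hypothetical snippet
(example_alloc() is illustrative, not from the kernel tree): the plain
GFP_KERNEL allocation succeeds with memory from the nearest node, while
the __GFP_THISNODE variant refuses to fall back and can fail on a
memoryless node.

```c
#include <linux/slab.h>
#include <linux/gfp.h>

/* Hypothetical illustration; 'node' may be a memoryless NUMA node. */
static void *example_alloc(int node)
{
	/*
	 * Default behavior: the page allocator walks node's zonelist
	 * and transparently falls back to the closest node that has
	 * memory.
	 */
	void *fallback_ok = kmalloc_node(512, GFP_KERNEL, node);

	/*
	 * __GFP_THISNODE forbids fallback: on a memoryless node this
	 * allocation can fail and return NULL.
	 */
	void *pinned = kmalloc_node(512, GFP_KERNEL | __GFP_THISNODE, node);

	kfree(pinned);
	return fallback_ok;
}
```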
The blk-mq code doesn't set __GFP_THISNODE, so we can remove the call
to local_memory_node().
Signed-off-by: Xianting Tian <tian.xianting@h3c.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
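For context, hctx->numa_node is consumed later by ordinary node-aware
allocations in blk-mq. A minimal sketch of that pattern (paraphrased,
not a verbatim copy of blk_mq_alloc_hctx(); the exact gfp mask and size
there differ in detail) shows that __GFP_THISNODE never appears in the
mask, so a memoryless node is harmless:

```c
#include <linux/blk-mq.h>
#include <linux/slab.h>

/*
 * Sketch of how blk-mq consumes a per-hctx NUMA node (paraphrased):
 * the node goes straight into kzalloc_node() without __GFP_THISNODE,
 * so the page allocator is free to redirect a memoryless node.
 */
static struct blk_mq_hw_ctx *example_alloc_hctx(int node)
{
	gfp_t gfp = GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY;

	return kzalloc_node(sizeof(struct blk_mq_hw_ctx), gfp, node);
}
```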
Diffstat (limited to 'block/blk-mq.c')
-rw-r--r--	block/blk-mq.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/block/blk-mq.c b/block/blk-mq.c
index deca157032c2..615da7de8855 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2744,7 +2744,7 @@ static void blk_mq_init_cpu_queues(struct request_queue *q,
 		for (j = 0; j < set->nr_maps; j++) {
 			hctx = blk_mq_map_queue_type(q, j, i);
 			if (nr_hw_queues > 1 && hctx->numa_node == NUMA_NO_NODE)
-				hctx->numa_node = local_memory_node(cpu_to_node(i));
+				hctx->numa_node = cpu_to_node(i);
 		}
 	}
 }
```
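For reference, local_memory_node() resolves a node id to the nearest
node that actually has memory by walking the same GFP_KERNEL zonelist
the page allocator consults on every allocation, which is why dropping
the call leaves the allocator's final choice unchanged. A rough model
of it (paraphrased from mm/page_alloc.c; treat the details as
approximate):

```c
#include <linux/gfp.h>
#include <linux/mmzone.h>

/* Approximate model of local_memory_node() (see mm/page_alloc.c). */
int local_memory_node(int node)
{
	struct zoneref *z;

	/*
	 * Walk node's zonelist exactly as a GFP_KERNEL allocation
	 * would and report the node of the first usable zone, i.e.
	 * the nearest node that actually has memory.
	 */
	z = first_zones_zonelist(node_zonelist(node, GFP_KERNEL),
				 gfp_zone(GFP_KERNEL), NULL);
	return zone_to_nid(z->zone);
}
```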