author | Chao Yu <yuchao0@huawei.com> | 2020-05-08 17:50:20 +0800 |
---|---|---|
committer | Jaegeuk Kim <jaegeuk@kernel.org> | 2020-05-11 20:36:46 -0700 |
commit | 042be373adf719ab64c4a44ae809d110826becbf (patch) | |
tree | 8e62751444ceec0ddc8c7b79f705dc8cd3e67c42 /fs/f2fs/node.h | |
parent | 84c9c2de0626567c0d964ee5fa1ae3310911ddf8 (diff) | |
download | linux-042be373adf719ab64c4a44ae809d110826becbf.tar.bz2 | |
f2fs: shrink spinlock coverage
In f2fs_try_to_free_nids(), the time spent inside the .nid_list_lock
spinlock critical region grows with the requested shrink count. To avoid
keeping other CPUs spinning on the lock for a long time, switch to
releasing the nid caches in small batches, each batch under its own
.nid_list_lock critical section.
Signed-off-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
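
To make the locking change concrete, here is a minimal, generic sketch of the before/after pattern the commit message describes. It is not the f2fs code: the structure, list, slab, and function names (struct cached_entry, shrink_all_at_once, shrink_in_batches) are invented for illustration, and SHRINK_BATCH_SIZE simply mirrors the new SHRINK_NID_BATCH_SIZE constant. The point is the spin_lock()/spin_unlock() cadence: before, one critical section covers every requested free; after, the lock is dropped after each small batch so contending CPUs get a chance to take it.

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define SHRINK_BATCH_SIZE	8	/* mirrors SHRINK_NID_BATCH_SIZE */

struct cached_entry {
	struct list_head list;
};

/* Before: one critical section covers every entry to be freed. */
static void shrink_all_at_once(spinlock_t *lock, struct list_head *head,
			       struct kmem_cache *slab, int nr_shrink)
{
	struct cached_entry *e, *next;

	spin_lock(lock);
	list_for_each_entry_safe(e, next, head, list) {
		if (!nr_shrink--)
			break;
		list_del(&e->list);
		kmem_cache_free(slab, e);
	}
	spin_unlock(lock);
}

/* After: the lock is dropped and re-taken after each small batch. */
static void shrink_in_batches(spinlock_t *lock, struct list_head *head,
			      struct kmem_cache *slab, int nr_shrink)
{
	while (nr_shrink > 0) {
		struct cached_entry *e, *next;
		int batch = SHRINK_BATCH_SIZE;
		bool progress = false;

		spin_lock(lock);
		list_for_each_entry_safe(e, next, head, list) {
			if (!nr_shrink || !batch)
				break;
			list_del(&e->list);
			kmem_cache_free(slab, e);
			nr_shrink--;
			batch--;
			progress = true;
		}
		spin_unlock(lock);

		if (!progress)
			break;	/* the list ran out before nr_shrink did */
	}
}

With a batch of 8, a request to shrink thousands of cached entries becomes many short critical sections instead of one long one; the cost is a few extra lock acquisitions, which is cheap compared to stalling every other CPU contending on the lock.
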
Diffstat (limited to 'fs/f2fs/node.h')
-rw-r--r-- | fs/f2fs/node.h | 3 |
1 file changed, 3 insertions, 0 deletions
diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
index 6a2011deea23..69e5859e993c 100644
--- a/fs/f2fs/node.h
+++ b/fs/f2fs/node.h
@@ -15,6 +15,9 @@
 #define FREE_NID_PAGES	8
 #define MAX_FREE_NIDS	(NAT_ENTRY_PER_BLOCK * FREE_NID_PAGES)
 
+/* size of free nid batch when shrinking */
+#define SHRINK_NID_BATCH_SIZE	8
+
 #define DEF_RA_NID_PAGES	0	/* # of nid pages to be readaheaded */
 
 /* maximum readahead size for node during getting data blocks */
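
The hunk above only introduces the constant; the loop that consumes it lives in fs/f2fs/node.c, which is outside this diffstat-limited view. The following is a hedged sketch only, not the actual patch: identifiers such as nm_i->nid_cnt[FREE_NID], free_nid_list, build_lock, __remove_free_nid() and free_nid_slab are taken from f2fs of this era, and the exact shape of the patched function is assumed. It shows how f2fs_try_to_free_nids() could bound each nid_list_lock critical section with the new constant.

/*
 * Assumed sketch of the consumer in fs/f2fs/node.c (not part of this
 * diff): each acquisition of nid_list_lock frees at most
 * SHRINK_NID_BATCH_SIZE cached free nids before the lock is released,
 * so a large nr_shrink no longer means one long critical section.
 */
int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
{
	struct f2fs_nm_info *nm_i = NM_I(sbi);
	int nr = nr_shrink;

	if (!mutex_trylock(&nm_i->build_lock))
		return 0;

	while (nr_shrink && nm_i->nid_cnt[FREE_NID] > MAX_FREE_NIDS) {
		struct free_nid *i, *next;
		unsigned int batch = SHRINK_NID_BATCH_SIZE;

		spin_lock(&nm_i->nid_list_lock);
		list_for_each_entry_safe(i, next, &nm_i->free_nid_list, list) {
			if (!nr_shrink || !batch ||
			    nm_i->nid_cnt[FREE_NID] <= MAX_FREE_NIDS)
				break;
			__remove_free_nid(sbi, i, FREE_NID);
			kmem_cache_free(free_nid_slab, i);
			nr_shrink--;
			batch--;
		}
		spin_unlock(&nm_i->nid_list_lock);
	}

	mutex_unlock(&nm_i->build_lock);

	return nr - nr_shrink;
}
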