author    Jaegeuk Kim <jaegeuk.kim@samsung.com>	2013-01-25 18:33:41 +0900
committer Jaegeuk Kim <jaegeuk.kim@samsung.com>	2013-02-12 07:15:00 +0900
commit    bd43df021ac37247f2db58ff376fb4032170f754 (patch)
tree      e22482138f5ea62d84a1cbe327e0f546670b1a06 /fs/f2fs
parent    577e349514452fa3fcd99fd06e587b02d3d1cf28 (diff)
f2fs: cover global locks for reserve_new_block
fill_zero(), called from fallocate(), calls get_new_data_page(), which in turn
calls reserve_new_block(). reserve_new_block() should be covered by *DATA_NEW*,
one of the global locks. Also, before taking the lock, we should check the
free sections by calling f2fs_balance_fs().

If we break this rule, f2fs can lose control of its free space management and
fall into an infinite loop, as in the following scenario:

[f2fs_sync_fs()]                          [fallocate()]
 - write_checkpoint()                      - fill_zero()
  - block_operations()                      - get_new_data_page()
    : grab NODE_NEW                          - get_dnode_of_data()
                                               : get locked dirty node page
  - sync_node_pages()
                                             : try to grab NODE_NEW for
                                               data allocation
    : trylock and skip the dirty node page
    : call sync_node_pages() repeatedly in
      order to flush all the dirty node pages!

In order to avoid this, we should grab another global lock such as DATA_NEW
before calling get_new_data_page() in fill_zero().

Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
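For readers without the era's tree at hand: DATA_NEW and NODE_NEW are entries
in f2fs's per-superblock array of global operation mutexes, taken through the
mutex_lock_op()/mutex_unlock_op() helpers that the patch below uses. A minimal
sketch of that machinery, paraphrased from the fs/f2fs/f2fs.h of this period
(the exact enum members, their order, and the helper bodies are recollections,
not verbatim code):

/* Sketch of the global operation locks (paraphrased from fs/f2fs/f2fs.h
 * of this era; member names and order are approximate). */
enum lock_type {
	DATA_WRITE,	/* data page writeback */
	DATA_NEW,	/* new data block allocation, e.g. reserve_new_block() */
	DATA_TRUNC,	/* data truncation */
	NODE_NEW,	/* new node block allocation */
	NODE_TRUNC,	/* node truncation */
	NODE_WRITE,	/* node page writeback */
	NR_LOCK_TYPE,
};

struct f2fs_sb_info {
	/* ... */
	struct mutex fs_lock[NR_LOCK_TYPE];	/* one global mutex per class */
	/* ... */
};

static inline void mutex_lock_op(struct f2fs_sb_info *sbi, enum lock_type t)
{
	mutex_lock_nested(&sbi->fs_lock[t], t);	/* lockdep subclass per type */
}

static inline void mutex_unlock_op(struct f2fs_sb_info *sbi, enum lock_type t)
{
	mutex_unlock(&sbi->fs_lock[t]);
}

In the scenario above, the checkpoint path holds one of these mutexes
(NODE_NEW) while fallocate() allocates data blocks without holding its
counterpart, which is what lets the two sides live-lock.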
Diffstat (limited to 'fs/f2fs')
-rw-r--r--  fs/f2fs/file.c | 5 +++++
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 3191b52aafb0..6cdab2c64fc6 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -387,12 +387,17 @@ const struct inode_operations f2fs_file_inode_operations = {
 static void fill_zero(struct inode *inode, pgoff_t index,
 					loff_t start, loff_t len)
 {
+	struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
 	struct page *page;
 
 	if (!len)
 		return;
 
+	f2fs_balance_fs(sbi);
+
+	mutex_lock_op(sbi, DATA_NEW);
 	page = get_new_data_page(inode, index, false);
+	mutex_unlock_op(sbi, DATA_NEW);
 
 	if (!IS_ERR(page)) {
 		wait_on_page_writeback(page);
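One detail worth noting in the hunk: f2fs_balance_fs() runs before DATA_NEW is
taken. The likely reason is that balancing can kick off garbage collection when
free sections run low, and GC writes pages through paths that take these same
global locks, so it must not be invoked with DATA_NEW already held. A rough,
paraphrased sketch of what f2fs_balance_fs() did at the time (the f2fs_gc()
call and its handling of gc_mutex are assumptions about the era's code, not
verbatim):

void f2fs_balance_fs(struct f2fs_sb_info *sbi)
{
	/*
	 * If free sections are scarce, reclaim space with GC (which may
	 * end in a checkpoint) before the caller commits to allocating
	 * new blocks under DATA_NEW.
	 */
	if (has_not_enough_free_secs(sbi)) {
		mutex_lock(&sbi->gc_mutex);
		f2fs_gc(sbi);	/* assumed to drop gc_mutex before returning */
	}
}

Reversing the order, i.e. calling f2fs_balance_fs() with DATA_NEW already held,
would recreate the same class of lock inversion the commit message warns about.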