author    Frederic Weisbecker <fweisbec@gmail.com>  2009-05-08 20:01:09 +0200
committer Frederic Weisbecker <fweisbec@gmail.com>  2009-09-14 07:18:21 +0200
commit    c63e3c0b2498adec921b06c670d12c8c74b85538
tree      27ca06415b5fdd51e369361ea8aa8f8e8e1ac741  /fs/reiserfs
parent    d6f5b0aa08078c3dabe377d5b1a6077e9c9352d3
kill-the-bkl/reiserfs: use mutex_lock in reiserfs_mutex_lock_safe
reiserfs_mutex_lock_safe() is a hack to avoid any dependency between an
internal reiserfs mutex and the write lock; it was proposed to follow the
old BKL logic.

The code does the following:

	while (!mutex_trylock(m)) {
		reiserfs_write_unlock(s);
		schedule();
		reiserfs_write_lock(s);
	}

It thereby imitates the implicit behaviour the lock had when it was the
BKL and didn't have such a dependency:

	mutex_lock(m) {
		if (fastpath)
			let's go
		else {
			wait_for_mutex() {
				schedule() {
					unlock_kernel()
					reacquire_lock_kernel()
				}
			}
		}
	}

The problem is that by using such an explicit schedule(), we don't benefit
from the adaptive mutex spinning on the owner.

The logic in use now is:

	reiserfs_write_unlock(s);
	mutex_lock(m);	// -> possible adaptive spinning
	reiserfs_write_lock(s);

[ Impact: restore the use of adaptive spinning mutexes in reiserfs ]

Cc: Jeff Mahoney <jeffm@suse.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Alexander Beregalov <a.beregalov@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Diffstat (limited to 'fs/reiserfs')
-rw-r--r--  fs/reiserfs/journal.c | 8
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
index ffb7f50abc2f..e9a972bd0323 100644
--- a/fs/reiserfs/journal.c
+++ b/fs/reiserfs/journal.c
@@ -566,11 +566,9 @@ static inline void insert_journal_hash(struct reiserfs_journal_cnode **table,
 static inline void reiserfs_mutex_lock_safe(struct mutex *m,
 					    struct super_block *s)
 {
-	while (!mutex_trylock(m)) {
-		reiserfs_write_unlock(s);
-		schedule();
-		reiserfs_write_lock(s);
-	}
+	reiserfs_write_unlock(s);
+	mutex_lock(m);
+	reiserfs_write_lock(s);
 }
 
 /* lock the current transaction */
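For illustration only, a minimal sketch of how a journal-path caller might use
the helper after this change. The caller name below is hypothetical;
SB_JOURNAL(), journal->j_mutex and the locking helpers are the existing
reiserfs ones. The point to keep in mind is that the write lock is now
released unconditionally while waiting for the mutex, so any state observed
under the write lock before the call must be re-validated afterwards.

	/* Hypothetical caller sketch, not part of this patch. */
	static void example_lock_journal_mutex(struct super_block *s)
	{
		struct reiserfs_journal *journal = SB_JOURNAL(s);

		/*
		 * Drops the write lock, blocks on the mutex (with possible
		 * adaptive spinning on the owner), then retakes the write
		 * lock.
		 */
		reiserfs_mutex_lock_safe(&journal->j_mutex, s);

		/*
		 * Both the write lock and j_mutex are held here, but any
		 * journal state sampled before the call may have changed
		 * while the write lock was dropped.
		 */

		mutex_unlock(&journal->j_mutex);
	}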