author		Peter Zijlstra <peterz@infradead.org>	2015-04-15 17:11:57 +0200
committer	Ingo Molnar <mingo@kernel.org>	2015-04-17 09:42:14 +0200
commit		d7bc3197b41e0a1af6677e83f8736e93a1575ce0 (patch)
tree		8b9dbb106e7c287182b3812c0469a1fea8e46c95 /kernel/locking
parent		6a16dda86ebbcfe690c753c3fb469b4f9ad5a5ef (diff)
download	linux-d7bc3197b41e0a1af6677e83f8736e93a1575ce0.tar.bz2
lockdep: Make print_lock() robust against concurrent release
During sysrq's show-held-locks command it is possible that
hlock_class() returns NULL for a given lock. The result is then (after
the warning):
|BUG: unable to handle kernel NULL pointer dereference at 0000001c
|IP: [<c1088145>] get_usage_chars+0x5/0x100
|Call Trace:
| [<c1088263>] print_lock_name+0x23/0x60
| [<c1576b57>] print_lock+0x5d/0x7e
| [<c1088314>] lockdep_print_held_locks+0x74/0xe0
| [<c1088652>] debug_show_all_locks+0x132/0x1b0
| [<c1315c48>] sysrq_handle_showlocks+0x8/0x10
This *might* happen because the thread on the other CPU drops the lock
after we have read ->lockdep_depth, so the ->held_locks entry we are
looking at no longer refers to a lock that is actually held.
The fix here is to simply ignore it and continue.
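For illustration only, here is a minimal user-space sketch of the same
snapshot-then-validate pattern the patch below uses: read the index once,
stop the compiler from re-reading it, bounds-check the snapshot, and fall
back to "<RELEASED>" if it is no longer valid. The names (struct klass,
struct held, MAX_KEYS, print_held) are hypothetical stand-ins, not lockdep
code, and a plain asm memory clobber stands in for the kernel's barrier():

	#include <stdio.h>

	#define MAX_KEYS 8192			/* stand-in for MAX_LOCKDEP_KEYS */

	struct klass { const char *name; };
	struct held  { unsigned int class_idx; };	/* a bitfield in lockdep */

	static struct klass classes[MAX_KEYS] = { { "demo_lock" } };

	static void print_held(struct held *h)
	{
		/* Snapshot once; a concurrent release may clear it at any time. */
		unsigned int idx = h->class_idx;

		/* Compiler barrier: don't let the compiler re-read h->class_idx. */
		__asm__ __volatile__("" : : : "memory");

		if (!idx || (idx - 1) >= MAX_KEYS) {
			printf("<RELEASED>\n");
			return;
		}
		printf("%s\n", classes[idx - 1].name);
	}

	int main(void)
	{
		struct held h = { .class_idx = 1 };	/* index 1 maps to classes[0] */
		print_held(&h);				/* prints "demo_lock" */

		h.class_idx = 0;			/* simulate a concurrent release */
		print_held(&h);				/* prints "<RELEASED>" */
		return 0;
	}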
Reported-by: Andreas Messerschmid <andreas@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/locking')
-rw-r--r--	kernel/locking/lockdep.c	16
1 file changed, 15 insertions, 1 deletion
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ba77ab5f64dd..a0831e1b99f4 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -551,7 +551,21 @@ static void print_lockdep_cache(struct lockdep_map *lock)
 
 static void print_lock(struct held_lock *hlock)
 {
-	print_lock_name(hlock_class(hlock));
+	/*
+	 * We can be called locklessly through debug_show_all_locks() so be
+	 * extra careful, the hlock might have been released and cleared.
+	 */
+	unsigned int class_idx = hlock->class_idx;
+
+	/* Don't re-read hlock->class_idx, can't use READ_ONCE() on bitfields: */
+	barrier();
+
+	if (!class_idx || (class_idx - 1) >= MAX_LOCKDEP_KEYS) {
+		printk("<RELEASED>\n");
+		return;
+	}
+
+	print_lock_name(lock_classes + class_idx - 1);
 	printk(", at: ");
 	print_ip_sym(hlock->acquire_ip);
 }