author		Ingo Molnar <mingo@elte.hu>	2006-07-14 00:24:27 -0700
committer	Linus Torvalds <torvalds@g5.osdl.org>	2006-07-14 21:53:55 -0700
commit		3a5f5e488ceee9e08df3dff3f01b12fafc9e7e68
tree		12ebd936831e797780b9cf716cc7aaf337b25141 /kernel/sched.c
parent		2e8f7a3128bb8fac8351a994f1fc325717899308
download	linux-3a5f5e488ceee9e08df3dff3f01b12fafc9e7e68.tar.bz2
[PATCH] lockdep: core, fix rq-lock handling on __ARCH_WANT_UNLOCKED_CTXSW
On platforms that have __ARCH_WANT_UNLOCKED_CTXSW set and want to implement
lock validator support there's a bug in rq->lock handling: in this case we
don't 'carry over' the runqueue lock into another task, but we still did a
spin_release() of it. Fix this by making the spin_release() in
context_switch() dependent on !__ARCH_WANT_UNLOCKED_CTXSW.
(Reported by Ralf Baechle on MIPS, which has __ARCH_WANT_UNLOCKED_CTXSW.
This fixes a lockdep-internal BUG message on such platforms.)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'kernel/sched.c')
-rw-r--r--	kernel/sched.c	| 8 ++++++++
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index d714611f1691..e9a0b61f12ab 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1788,7 +1788,15 @@ context_switch(struct rq *rq, struct task_struct *prev,
 		WARN_ON(rq->prev_mm);
 		rq->prev_mm = oldmm;
 	}
+	/*
+	 * Since the runqueue lock will be released by the next
+	 * task (which is an invalid locking op but in the case
+	 * of the scheduler it's an obvious special-case), so we
+	 * do an early lockdep release here:
+	 */
+#ifndef __ARCH_WANT_UNLOCKED_CTXSW
 	spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
+#endif
 
 	/* Here we just switch the register state and the stack. */
 	switch_to(prev, next, prev);
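
For background on why the early release must be compiled out: whether rq->lock is handed over to the next task depends on __ARCH_WANT_UNLOCKED_CTXSW. Below is a condensed sketch of the two conventions, modeled on the scheduler's prepare_lock_switch()/finish_lock_switch() helpers of that era; it is an illustration only, not verbatim kernel code (the real helpers also handle p->oncpu and the __ARCH_WANT_INTERRUPTS_ON_CTXSW variant).

/*
 * Condensed, illustrative sketch -- not verbatim kernel/sched.c.
 */
#ifndef __ARCH_WANT_UNLOCKED_CTXSW
/*
 * Default convention: rq->lock stays held across switch_to() and is
 * released by the *next* task in finish_lock_switch().  That is why
 * context_switch() does an early lockdep-only release on behalf of
 * the task that actually acquired the lock.
 */
static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
{
	/* nothing to do: the lock is carried into the next task */
}

static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
{
	spin_unlock_irq(&rq->lock);	/* the new task drops the lock */
}
#else
/*
 * __ARCH_WANT_UNLOCKED_CTXSW (e.g. MIPS): rq->lock is dropped before
 * switch_to(), so nothing is carried over to the next task.  The
 * spin_unlock() below already notifies lockdep of the release; an
 * extra early spin_release() in context_switch() would unbalance
 * lockdep's state and trigger its internal BUG message, hence the
 * #ifndef added by this patch.
 */
static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
{
	spin_unlock(&rq->lock);
}

static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
{
	local_irq_enable();
}
#endif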