path: root/kernel/sched_rt.c
author		Steven Rostedt <rostedt@goodmis.org>	2008-03-05 10:00:12 -0500
committer	Ingo Molnar <mingo@elte.hu>	2008-03-07 16:43:00 +0100
commit		6fa46fa526f2cab9ce21fa5e39501553a40d196d (patch)
tree		5b89e31c030c1b5a780da7c73c031bc7df656a18 /kernel/sched_rt.c
parent		810b38179e9e4d4f57b4b733767bb08f8291a965 (diff)
download	linux-6fa46fa526f2cab9ce21fa5e39501553a40d196d.tar.bz2
sched: balance RT task resched only on runqueue
Sripathi Kodi reported a crash in the -rt kernel:

    https://bugzilla.redhat.com/show_bug.cgi?id=435674

This is due to a place that can reschedule a task without holding the task's runqueue lock. It was caused by the RT balancing code that pulls RT tasks to the current runqueue and then reschedules the current task.

There is a slight chance that pulling the RT tasks will release the current runqueue's lock and retake it (in double_lock_balance). While the runqueue lock is released, the current task can migrate to another runqueue.

In the prio_changed_rt code, after the pull, if the current task has lower priority than one of the RT tasks pulled, resched_task is called on the current task. If the current task migrated in that small window, resched_task is called without holding the runqueue lock of the runqueue the task is now on.

This race condition also exists in the mainline kernel, and this patch adds a check to make sure the task has not migrated before calling resched_task.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Tested-by: Sripathi Kodi <sripathik@in.ibm.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
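The window in which the lock is dropped comes from the ordered double-locking helper in kernel/sched.c. The snippet below is a rough sketch of what double_lock_balance() looked like in kernels of this era (an illustrative reconstruction, not part of this commit): when the two runqueue locks would otherwise be taken out of address order, this_rq->lock is released and both locks are reacquired in order, and during that gap the current task can migrate off this runqueue.

/*
 * Approximate shape of double_lock_balance() from kernel/sched.c of
 * this era; details may differ from the exact tree this patch applies to.
 */
static int double_lock_balance(struct rq *this_rq, struct rq *busiest)
	__releases(this_rq->lock)
	__acquires(busiest->lock)
	__acquires(this_rq->lock)
{
	int ret = 0;

	if (unlikely(!spin_trylock(&busiest->lock))) {
		if (busiest < this_rq) {
			/*
			 * Take the locks in address order: drop
			 * this_rq->lock, then acquire both.  While
			 * this_rq->lock is not held, rq->curr can change
			 * under us; that is the window the patch below
			 * guards against with the "rq->curr == p" check.
			 */
			spin_unlock(&this_rq->lock);
			spin_lock(&busiest->lock);
			spin_lock(&this_rq->lock);
			ret = 1;
		} else
			spin_lock(&busiest->lock);
	}
	return ret;
}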
Diffstat (limited to 'kernel/sched_rt.c')
-rw-r--r--	kernel/sched_rt.c	6
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 76e828517541..0a6d2e516420 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -1107,9 +1107,11 @@ static void prio_changed_rt(struct rq *rq, struct task_struct *p,
 		pull_rt_task(rq);
 		/*
 		 * If there's a higher priority task waiting to run
-		 * then reschedule.
+		 * then reschedule. Note, the above pull_rt_task
+		 * can release the rq lock and p could migrate.
+		 * Only reschedule if p is still on the same runqueue.
 		 */
-		if (p->prio > rq->rt.highest_prio)
+		if (p->prio > rq->rt.highest_prio && rq->curr == p)
 			resched_task(p);
 #else
 		/* For UP simply resched on drop of prio */