author      Oleg Nesterov <oleg@redhat.com>    2010-03-15 10:10:23 +0100
committer   Ingo Molnar <mingo@elte.hu>        2010-04-02 20:12:03 +0200
commit      6a1bdc1b577ebcb65f6603c57f8347309bc4ab13 (patch)
tree        516130eedf782dd14505bd111e06bcfad9923b07 /kernel/sched.c
parent      30da688ef6b76e01969b00608202fff1eed2accc (diff)
download    linux-6a1bdc1b577ebcb65f6603c57f8347309bc4ab13.tar.bz2
sched: _cpu_down(): Don't play with current->cpus_allowed
_cpu_down() changes the current task's affinity and then recovers it at
the end. The problems are well known: we can't restore old_allowed if it
was bound to the now-dead cpu, and we can race with userspace, which can
change the cpu affinity during unplug.

_cpu_down() should not play with current->cpus_allowed at all. Instead,
take_cpu_down() can migrate the caller of _cpu_down() after
__cpu_disable() removes the dying cpu from cpu_online_mask.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100315091023.GA9148@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
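The diffstat below is limited to kernel/sched.c, where move_task_off_dead_cpu()
simply loses its static qualifier so that kernel/cpu.c can call it. As a rough
illustration of the approach described above (take_cpu_down() migrating the
caller of _cpu_down() once __cpu_disable() has removed the dying cpu from
cpu_online_mask), the kernel/cpu.c side might look like the sketch below; the
take_cpu_down_param layout and its ->caller field are assumptions made for
illustration, not necessarily the exact upstream code.

/*
 * Illustrative sketch only (kernel/cpu.c is not part of this diffstat).
 * The ->caller field is assumed to carry the task that invoked _cpu_down().
 */
struct take_cpu_down_param {
	struct task_struct	*caller;
	unsigned long		mod;
	void			*hcpu;
};

/* Runs on the dying CPU via stop_machine(). */
static int take_cpu_down(void *_param)
{
	struct take_cpu_down_param *param = _param;
	unsigned int cpu = smp_processor_id();
	int err;

	/* Ensure this CPU doesn't handle any more interrupts. */
	err = __cpu_disable();
	if (err < 0)
		return err;

	raw_notifier_call_chain(&cpu_chain, CPU_DYING | param->mod,
				param->hcpu);

	/*
	 * cpu is already cleared from cpu_online_mask, so the caller of
	 * _cpu_down() can be migrated here instead of temporarily
	 * rewriting current->cpus_allowed before the unplug.
	 */
	if (task_cpu(param->caller) == cpu)
		move_task_off_dead_cpu(cpu, param->caller);

	/* Force the idle task to run as soon as we yield. */
	sched_idle_next();
	return 0;
}

Since move_task_off_dead_cpu() is made non-static by the hunk below, a matching
declaration in a scheduler header would also be needed so that cpu.c can see it.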
Diffstat (limited to 'kernel/sched.c')
-rw-r--r--    kernel/sched.c    2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 165b532dd8c2..11119deffa48 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5442,7 +5442,7 @@ static int migration_thread(void *data)
/*
* Figure out where task on dead CPU should go, use force if necessary.
*/
-static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p)
+void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p)
{
struct rq *rq = cpu_rq(dead_cpu);
int needs_cpu, uninitialized_var(dest_cpu);