author | Peter Zijlstra <peterz@infradead.org> | 2019-11-08 14:15:56 +0100
committer | Ingo Molnar <mingo@kernel.org> | 2019-11-11 08:35:19 +0100
commit | f488e1057bb97b881ed317557dc5e62ff8747393 (patch)
tree | 427549c89f668f363ef6a175f769081c55b90063
parent | 7277a34c6be0b2972bdd1fea88c7cef409bed5b4 (diff)
download | linux-f488e1057bb97b881ed317557dc5e62ff8747393.tar.bz2
sched/core: Make pick_next_task_idle() more consistent
Only pick_next_task_fair() needs the @prev and @rf arguments; they are
required to implement the cpu-cgroup optimization. None of the other
pick_next_task() methods need them. Make pick_next_task_idle() consistent
with that: have the caller do put_prev_task() and pass NULL for both
arguments. A stand-alone sketch of the resulting calling convention
follows the diff below.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: juri.lelli@redhat.com
Cc: ktkhai@virtuozzo.com
Cc: mgorman@suse.de
Cc: qais.yousef@arm.com
Cc: qperret@google.com
Cc: rostedt@goodmis.org
Cc: valentin.schneider@arm.com
Cc: vincent.guittot@linaro.org
Link: https://lkml.kernel.org/r/20191108131909.545730862@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-rw-r--r-- | kernel/sched/core.c | 6
-rw-r--r-- | kernel/sched/idle.c | 3
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0f2eb3629070..59c4f2963417 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3922,8 +3922,10 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 			goto restart;
 
 		/* Assumes fair_sched_class->next == idle_sched_class */
-		if (unlikely(!p))
-			p = idle_sched_class.pick_next_task(rq, prev, rf);
+		if (unlikely(!p)) {
+			put_prev_task(rq, prev);
+			p = idle_sched_class.pick_next_task(rq, NULL, NULL);
+		}
 
 		return p;
 	}
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index f65ef1e2f204..179d1d4ac5a6 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -396,8 +396,7 @@ pick_next_task_idle(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
 {
 	struct task_struct *next = rq->idle;
 
-	if (prev)
-		put_prev_task(rq, prev);
+	WARN_ON_ONCE(prev || rf);
 
 	set_next_task_idle(rq, next);
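
For readers outside the kernel tree, here is a stand-alone sketch of the
calling convention the patch settles on. The struct rq, struct task_struct
and helper definitions are simplified stand-ins rather than the kernel's
own, and assert() stands in for WARN_ON_ONCE(); only the shape of the calls
mirrors the patch: the caller does put_prev_task() itself and passes NULL
for @prev and @rf, while pick_next_task_idle() merely checks that it
received neither.

/*
 * Stand-alone sketch of the convention after this patch; the structs and
 * helpers below are simplified mock-ups, not the kernel definitions.
 */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct rq_flags { int dummy; };
struct task_struct { const char *comm; };
struct rq { struct task_struct *idle; struct task_struct *curr; };

static void put_prev_task(struct rq *rq, struct task_struct *prev)
{
	(void)rq;
	printf("put_prev_task(%s)\n", prev->comm);
}

static void set_next_task_idle(struct rq *rq, struct task_struct *next)
{
	rq->curr = next;
}

/* Idle-class pick: no @prev/@rf bookkeeping, just a sanity check. */
static struct task_struct *
pick_next_task_idle(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
{
	struct task_struct *next = rq->idle;

	assert(!prev && !rf);	/* stands in for WARN_ON_ONCE(prev || rf) */
	set_next_task_idle(rq, next);
	return next;
}

/* Caller side: put the previous task first, then pass NULL, NULL. */
static struct task_struct *
pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
{
	struct task_struct *p = NULL;	/* pretend the fair class found nothing */

	(void)rf;
	if (!p) {
		put_prev_task(rq, prev);
		p = pick_next_task_idle(rq, NULL, NULL);
	}
	return p;
}

int main(void)
{
	struct task_struct idle = { "swapper" };
	struct task_struct prev = { "bash" };
	struct rq rq = { &idle, &prev };

	printf("next task: %s\n", pick_next_task(&rq, &prev, NULL)->comm);
	return 0;
}

Built with any plain C compiler, the sketch prints put_prev_task(bash)
followed by next task: swapper, showing the previous task being put back
by the caller before the idle task is picked with NULL arguments.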