author     Peter Zijlstra <peterz@infradead.org>    2019-05-29 20:36:42 +0000
committer  Peter Zijlstra <peterz@infradead.org>    2019-08-08 09:09:31 +0200
commit     5ba553eff0c3a7c099b1e29a740277a82c0c3314 (patch)
tree       a1e2d4736d569ca0a9fd5dca0a731505e30b2a33 /kernel/sched/sched.h
parent     03b7fad167efca3b7abbbb39733933f9df56e79c (diff)
sched/fair: Expose newidle_balance()
For pick_next_task_fair() it is the newidle balance that requires
dropping the rq->lock; provided we do put_prev_task() early, we can
also detect the condition for doing newidle early.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Aaron Lu <aaron.lwe@gmail.com>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: mingo@kernel.org
Cc: Phil Auld <pauld@redhat.com>
Cc: Julien Desfossez <jdesfossez@digitalocean.com>
Cc: Nishanth Aravamudan <naravamudan@digitalocean.com>
Link: https://lkml.kernel.org/r/9e3eb1859b946f03d7e500453a885725b68957ba.1559129225.git.vpillai@digitalocean.com
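
As an aside (not part of the patch), a minimal sketch of the ordering the changelog describes; this is not the kernel's actual pick_next_task_fair(), and sketch_pick_next_fair() and pick_task_fair() below are illustrative names. Once prev has been put early, the empty-CFS condition can be detected and newidle_balance(), the one step that may drop rq->lock, can run before a task is returned:

/*
 * Illustrative sketch only -- not the kernel's pick_next_task_fair().
 * pick_task_fair() stands in for the real cfs_rq walk.  The point is the
 * ordering: prev is put early, so the "nothing runnable" case can trigger
 * newidle_balance() -- the only step that may drop rq->lock -- before a
 * task is returned.
 */
static struct task_struct *
sketch_pick_next_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
{
	put_prev_task(rq, prev);		/* done early, before picking */

again:
	if (rq->cfs.nr_running)
		return pick_task_fair(rq);	/* hypothetical helper */

	/*
	 * Nothing runnable in CFS: try to pull tasks.  newidle_balance()
	 * may drop and re-acquire rq->lock (tracked via *rf); a positive
	 * return value means tasks were pulled, so retry the pick.
	 */
	if (newidle_balance(rq, rf) > 0)
		goto again;

	return NULL;
}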
Diffstat (limited to 'kernel/sched/sched.h')
 kernel/sched/sched.h | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f3c50445bf22..304d98e712bf 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1445,10 +1445,14 @@ static inline void unregister_sched_domain_sysctl(void)
 }
 #endif
 
+extern int newidle_balance(struct rq *this_rq, struct rq_flags *rf);
+
 #else
 
 static inline void sched_ttwu_pending(void) { }
 
+static inline int newidle_balance(struct rq *this_rq, struct rq_flags *rf) { return 0; }
+
 #endif /* CONFIG_SMP */
 
 #include "stats.h"
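
The hunk follows the usual sched.h convention: under CONFIG_SMP the extern declaration refers to the (now non-static) implementation in fair.c, while the !CONFIG_SMP branch supplies a static inline stub returning 0, so callers can use the function unconditionally. A sketch of such a caller (the function name is illustrative, not from the patch):

/* Sketch of a caller; builds with or without CONFIG_SMP thanks to the stub. */
static int try_pull_when_idle(struct rq *rq, struct rq_flags *rf)
{
	/* On !CONFIG_SMP the inline stub simply returns 0 (nothing pulled). */
	return newidle_balance(rq, rf);
}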