author | Ingo Molnar <mingo@elte.hu> | 2007-10-15 17:00:04 +0200 |
---|---|---|
committer | Ingo Molnar <mingo@elte.hu> | 2007-10-15 17:00:04 +0200 |
commit | 1091985b482fdd577a5c511059b9d7b4467bd15d (patch) | |
tree | 1ea76f48b0f4c68072eb5eaa5113af6aa7dbd357 | |
parent | 19ccd97a03a026c2341b35af3ed2078a83c4a22b (diff) | |
download | linux-1091985b482fdd577a5c511059b9d7b4467bd15d.tar.bz2 | |
sched: speed up update_load_add/_sub()
speed up update_load_add/_sub() by not delaying the division - this
reduces CPU pipeline dependencies.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
-rw-r--r-- | kernel/sched.c | 9 |
1 file changed, 5 insertions, 4 deletions
diff --git a/kernel/sched.c b/kernel/sched.c
index 3209e2cc2c2e..992a1fae72a7 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -697,16 +697,17 @@ calc_delta_fair(unsigned long delta_exec, struct load_weight *lw)
 	return calc_delta_mine(delta_exec, NICE_0_LOAD, lw);
 }
 
-static void update_load_add(struct load_weight *lw, unsigned long inc)
+static inline void update_load_add(struct load_weight *lw, unsigned long inc)
 {
 	lw->weight += inc;
-	lw->inv_weight = 0;
+	lw->inv_weight = WMULT_CONST / lw->weight;
 }
 
-static void update_load_sub(struct load_weight *lw, unsigned long dec)
+static inline void update_load_sub(struct load_weight *lw, unsigned long dec)
 {
 	lw->weight -= dec;
-	lw->inv_weight = 0;
+	if (likely(lw->weight))
+		lw->inv_weight = WMULT_CONST / lw->weight;
 }
 
 /*
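For context (not part of the commit itself): inv_weight is CFS's fixed-point reciprocal of the load weight, and calc_delta_mine() uses it to scale delta_exec by weight/lw->weight with a multiply-and-shift instead of a 64-bit division. Under the old lazy scheme the reciprocal was recomputed on demand, so the multiply on the hot path was data-dependent on a division issued right before it; with this patch the division happens once, in update_load_add()/update_load_sub(). The sketch below is a standalone illustration with simplified constants, no overflow handling, and a hypothetical function name; the kernel's actual WMULT_CONST/WMULT_SHIFT definitions and calc_delta_mine() are more involved.

```c
#include <stdint.h>

/* Illustrative fixed-point parameters; sched.c's own WMULT_CONST and
 * WMULT_SHIFT definitions (and its 64-bit overflow handling) differ. */
#define WMULT_SHIFT	32
#define WMULT_CONST	((1ULL << WMULT_SHIFT) - 1)

struct load_weight {
	unsigned long weight;
	unsigned long inv_weight;	/* ~ WMULT_CONST / weight */
};

/* Approximates delta_exec * weight / lw->weight via a reciprocal multiply. */
static inline uint64_t calc_delta_sketch(uint64_t delta_exec,
					 unsigned long weight,
					 struct load_weight *lw)
{
	/*
	 * Lazy scheme (before the patch): update_load_add/_sub() cleared
	 * inv_weight, so the division below ran here, immediately before
	 * the multiply that consumes its result -- a pipeline dependency
	 * on the hot path.
	 */
	if (!lw->inv_weight)
		lw->inv_weight = WMULT_CONST / lw->weight;

	/*
	 * Eager scheme (after the patch): inv_weight is already valid by
	 * the time we get here, so this is a plain multiply-and-shift.
	 */
	return (delta_exec * weight * lw->inv_weight) >> WMULT_SHIFT;
}
```

Note also the guard the patch adds in update_load_sub(): with the division now done eagerly, a weight that has dropped to zero must not be used as a divisor, hence the likely(lw->weight) check before recomputing inv_weight.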