author     Mike Galbraith <efault@gmx.de>    2010-03-11 17:15:38 +0100
committer  Ingo Molnar <mingo@elte.hu>       2010-03-11 18:32:50 +0100
commit     b42e0c41a422a212ddea0666d5a3a0e3c35206db (patch)
tree       443cf5918548cab86c3f9f3f34a1b700d809070b /kernel/sched_debug.c
parent     39c0cbe2150cbd848a25ba6cdb271d1ad46818ad (diff)
download   linux-b42e0c41a422a212ddea0666d5a3a0e3c35206db.tar.bz2
sched: Remove avg_wakeup
Testing the load which led to this heuristic (nfs4 kbuild) shows that it has
outlived its usefulness. With intervening load balancing changes, I cannot
see any difference with/without, so recover those fastpath cycles.

Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268301062.6785.29.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched_debug.c')
-rw-r--r--  kernel/sched_debug.c | 1 -
1 file changed, 0 insertions, 1 deletion
diff --git a/kernel/sched_debug.c b/kernel/sched_debug.c
index ad9df4422763..20b95a420fec 100644
--- a/kernel/sched_debug.c
+++ b/kernel/sched_debug.c
@@ -408,7 +408,6 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
 	PN(se.vruntime);
 	PN(se.sum_exec_runtime);
 	PN(se.avg_overlap);
-	PN(se.avg_wakeup);
 	nr_switches = p->nvcsw + p->nivcsw;