path: root/kernel/sched/fair.c
Age | Commit message | Author | Files | Lines
2016-01-06 | sched/fair: Fix new task's load avg removed from source CPU in wake_up_new_ta... | Yuyang Du | 1 | -10/+28
2016-01-06 | Merge branch 'sched/urgent' into sched/core, to pick up fixes before merging ... | Ingo Molnar | 1 | -2/+2
2016-01-06 | sched/fair: Fix multiplication overflow on 32-bit systems | Andrey Ryabinin | 1 | -1/+1
2015-12-04 | sched/fair: Disable the task group load_avg update for the root_task_group | Waiman Long | 1 | -0/+6
2015-12-04 | sched/fair: Avoid redundant idle_cpu() call in update_sg_lb_stats() | Waiman Long | 1 | -3/+7
2015-12-04 | sched/fair: Make it possible to account fair load avg consistently | Byungchul Park | 1 | -0/+46
2015-11-23 | sched/fair: Modify the comment about lock assumptions in migrate_task_rq_fair() | Byungchul Park | 1 | -2/+1
2015-11-23 | sched/core: Fix incorrect wait time and wait count statistics | Joonwoo Park | 1 | -20/+47
2015-11-23 | treewide: Remove old email address | Peter Zijlstra | 1 | -1/+1
2015-11-23 | sched/numa: Cap PTE scanning overhead to 3% of run time | Rik van Riel | 1 | -0/+12
2015-11-23 | sched/fair: Consider missed ticks in NOHZ_FULL in update_cpu_load_nohz() | Byungchul Park | 1 | -4/+6
2015-11-23 | sched/fair: Prepare __update_cpu_load() to handle active tickless | Byungchul Park | 1 | -8/+41
2015-11-23 | sched/fair: Clean up the explanation around decaying load update misses | Peter Zijlstra | 1 | -29/+24
2015-11-23 | sched/fair: Remove empty idle enter and exit functions | Dietmar Eggemann | 1 | -23/+1
2015-11-09 | sched/numa: Fix math underflow in task_tick_numa() | Rik van Riel | 1 | -1/+1
2015-10-20 | Merge branch 'sched/urgent' into sched/core, to pick up fixes and resolve con... | Ingo Molnar | 1 | -4/+5
2015-10-20 | sched/fair: Update task group's load_avg after task migration | Yuyang Du | 1 | -2/+3
2015-10-20 | sched/fair: Fix overly small weight for interactive group entities | Yuyang Du | 1 | -2/+2
2015-10-06 | sched/core: Remove a parameter in the migrate_task_rq() function | xiaofeng.yan | 1 | -1/+1
2015-10-06 | sched/numa: Fix task_tick_fair() from disabling numa_balancing | Srikar Dronamraju | 1 | -1/+1
2015-09-18 | sched/fair: Remove unnecessary parameter for group_classify() | Leo Yan | 1 | -5/+5
2015-09-18 | sched/fair: Polish comments for LOAD_AVG_MAX | Leo Yan | 1 | -2/+3
2015-09-18 | sched/numa: Limit the amount of virtual memory scanned in task_numa_work() | Rik van Riel | 1 | -6/+12
2015-09-13 | sched/fair: Optimize per entity utilization tracking | Peter Zijlstra | 1 | -7/+10
2015-09-13 | sched/fair: Defer calling scaling functions | Dietmar Eggemann | 1 | -2/+4
2015-09-13 | sched/fair: Optimize __update_load_avg() | Peter Zijlstra | 1 | -1/+1
2015-09-13 | sched/fair: Rename scale() to cap_scale() | Peter Zijlstra | 1 | -7/+7
2015-09-13 | sched/fair: Get rid of scaling utilization by capacity_orig | Dietmar Eggemann | 1 | -16/+22
2015-09-13 | sched/fair: Name utilization related data and functions consistently | Dietmar Eggemann | 1 | -18/+19
2015-09-13 | sched/fair: Make utilization tracking CPU scale-invariant | Dietmar Eggemann | 1 | -3/+4
2015-09-13 | sched/fair: Convert arch_scale_cpu_capacity() from weak function to #define | Morten Rasmussen | 1 | -21/+1
2015-09-13 | sched/fair: Make load tracking frequency scale-invariant | Dietmar Eggemann | 1 | -10/+17
2015-09-13 | sched/numa: Convert sched_numa_balancing to a static_branch | Srikar Dronamraju | 1 | -3/+3
2015-09-13 | sched/numa: Disable sched_numa_balancing on UMA systems | Srikar Dronamraju | 1 | -2/+2
2015-09-13 | sched/numa: Rename numabalancing_enabled to sched_numa_balancing | Srikar Dronamraju | 1 | -2/+2
2015-09-13 | sched/fair: Fix nohz.next_balance update | Vincent Guittot | 1 | -4/+30
2015-09-13 | sched/core: Remove unused argument from sched_class::task_move_group | Peter Zijlstra | 1 | -1/+1
2015-09-13 | sched/fair: Unify switched_{from,to}_fair() and task_move_group_fair() | Byungchul Park | 1 | -77/+52
2015-09-13 | sched/fair: Make the entity load aging on attaching tunable | Peter Zijlstra | 1 | -0/+4
2015-09-13 | sched/fair: Fix switched_to_fair()'s per entity load tracking | Byungchul Park | 1 | -0/+23
2015-09-13 | sched/fair: Have task_move_group_fair() also detach entity load from the old ... | Byungchul Park | 1 | -1/+5
2015-09-13 | sched/fair: Have task_move_group_fair() unconditionally add the entity load t... | Byungchul Park | 1 | -5/+4
2015-09-13 | sched/fair: Factor out the {at,de}taching of the per entity load {to,from} th... | Byungchul Park | 1 | -39/+38
2015-08-12 | sched: Make sched_class::set_cpus_allowed() unconditional | Peter Zijlstra | 1 | -0/+1
2015-08-12 | sched: Ensure a task has a non-normalized vruntime when returning back to CFS | Byungchul Park | 1 | -2/+17
2015-08-03 | sched/fair: Clean up load average references | Yuyang Du | 1 | -15/+29
2015-08-03 | sched/fair: Provide runnable_load_avg back to cfs_rq | Yuyang Du | 1 | -10/+45
2015-08-03 | sched/fair: Remove task and group entity load when they are dead | Yuyang Du | 1 | -1/+10
2015-08-03 | sched/fair: Init cfs_rq's sched_entity load average | Yuyang Du | 1 | -5/+6
2015-08-03 | sched/fair: Implement update_blocked_averages() for CONFIG_FAIR_GROUP_SCHED=n | Vincent Guittot | 1 | -0/+8