path: root/arch/x86/kernel/cpu/mcheck/mce.c
author    Christoph Lameter <cl@linux.com>    2010-12-14 10:28:46 -0600
committer Tejun Heo <tj@kernel.org>           2010-12-18 15:54:49 +0100
commit    7c83912062c801738d7d19acaf8f7fec25ea663c (patch)
tree      52b696a502b871da55fc877ddd8d0e4a271511ad /arch/x86/kernel/cpu/mcheck/mce.c
parent    20b876918c065818b3574a426d418f68b4f8ad19 (diff)
download  linux-7c83912062c801738d7d19acaf8f7fec25ea663c.tar.bz2
vmstat: Use per cpu atomics to avoid interrupt disable / enable
Currently the operations that increment the vm counters must disable interrupts in order not to corrupt the per-cpu bookkeeping of the counters. Use this_cpu_cmpxchg() instead to avoid that overhead.

Since we can no longer count on preemption being disabled, some minor issues remain. The fetching of the counter thresholds is racy: a threshold from another cpu may be applied if we happen to be rescheduled on another cpu. However, the following vmstat operation will then bring the counter back under the threshold limit.

The __xxx_zone_state operations are not changed, since their callers have already taken care of the synchronization needs (and therefore their cycle count is even lower than the optimized irq-disable path provided here).

The optimization using this_cpu_cmpxchg() is only used if the arch supports efficient this_cpu ops (CONFIG_CMPXCHG_LOCAL must be set!).

The use of this_cpu_cmpxchg() reduces the cycle count for the counter operations by 80% (inc_zone_page_state goes from 170 cycles to 32).

Signed-off-by: Christoph Lameter <cl@linux.com>
Diffstat (limited to 'arch/x86/kernel/cpu/mcheck/mce.c')
0 files changed, 0 insertions, 0 deletions