author     Peter Zijlstra <a.p.zijlstra@chello.nl>   2011-09-12 15:50:49 +0200
committer  Ingo Molnar <mingo@elte.hu>               2011-10-04 12:44:03 +0200
commit     f0f1d32f931b705c4ee5dd374074d34edf3eae14 (patch)
tree       414b04e63a8bcad89543723074baea7283fdbad7
parent     fa14ff4accfb24e59d2473f3d864d6648d80563b (diff)
download   linux-f0f1d32f931b705c4ee5dd374074d34edf3eae14.tar.bz2
llist: Remove cpu_relax() usage in cmpxchg loops
Initial benchmarks show they're a net loss:

  $ for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor ; do echo performance > $i; done
  $ echo 4096 32000 64 128 > /proc/sys/kernel/sem
  $ ./sembench -t 2048 -w 1900 -o 0

  Pre:

  run time 30 seconds 778936 worker burns per second
  run time 30 seconds 912190 worker burns per second
  run time 30 seconds 817506 worker burns per second
  run time 30 seconds 830870 worker burns per second
  run time 30 seconds 845056 worker burns per second

  Post:

  run time 30 seconds 905920 worker burns per second
  run time 30 seconds 849046 worker burns per second
  run time 30 seconds 886286 worker burns per second
  run time 30 seconds 822320 worker burns per second
  run time 30 seconds 900283 worker burns per second

So about 4% faster. (!)

cpu_relax() stalls the pipeline; therefore, when used in a tight loop
it has the following benefits:

 - it allows SMT siblings to have a go;
 - it reduces pressure on the CPU interconnect.

However, cmpxchg loops are unfair and thus have unbounded completion
time, so we should avoid getting into the kind of heavily contended
situation where the above benefits make any difference.

A typical cmpxchg loop should not go around more than a handful of
times at worst, so adding extra delays just slows things down.

Since the llist primitives are new, there aren't any bad users yet,
and we should avoid growing them. Heavily contended sites are
generally better off using ticket locks for serialization, since
those provide bounded completion times (FIFO-fair over the CPUs).

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1315836358.26517.43.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>
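As a rough illustration of the loop shape under discussion, here is a minimal
userspace sketch. This is not the kernel code: the struct node and stack_push
names are invented for the example, and the GCC/Clang __atomic builtins stand
in for the kernel's cmpxchg() and cpu_relax() primitives.

/*
 * Userspace model of an llist_add()-style lock-free push.
 * Build with: cc -std=c11 -O2 example.c
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct node {
	struct node *next;
	int value;
};

static struct node *head;	/* models llist_head::first */

/* lock-free push; returns true if the list was empty beforehand */
static bool stack_push(struct node *new)
{
	struct node *old, *cur;

	cur = __atomic_load_n(&head, __ATOMIC_RELAXED);
	for (;;) {
		old = cur;
		new->next = old;
		/*
		 * Compare-and-swap: on failure 'cur' is reloaded with the
		 * current head and we simply go around again.  This is the
		 * retry loop the patch strips cpu_relax() out of.
		 */
		if (__atomic_compare_exchange_n(&head, &cur, new, false,
						__ATOMIC_RELEASE, __ATOMIC_RELAXED))
			break;
		/* the pre-patch code would have done cpu_relax() here */
	}
	return old == NULL;
}

int main(void)
{
	struct node a = { .value = 1 }, b = { .value = 2 };

	printf("push a: list was %s\n", stack_push(&a) ? "empty" : "non-empty");
	printf("push b: list was %s\n", stack_push(&b) ? "empty" : "non-empty");
	return 0;
}

The kernel's cmpxchg() returns the previous value rather than a success flag,
so llist_add() compares the return value against old_entry instead, but the
retry structure is the same as in the sketch.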
-rw-r--r--   include/linux/llist.h   1
-rw-r--r--   lib/llist.c             2
2 files changed, 0 insertions, 3 deletions
diff --git a/include/linux/llist.h b/include/linux/llist.h
index e2e96d04ee48..837fb4ae66fb 100644
--- a/include/linux/llist.h
+++ b/include/linux/llist.h
@@ -161,7 +161,6 @@ static inline bool llist_add(struct llist_node *new, struct llist_head *head)
 		entry = cmpxchg(&head->first, old_entry, new);
 		if (entry == old_entry)
 			break;
-		cpu_relax();
 	}
 
 	return old_entry == NULL;
diff --git a/lib/llist.c b/lib/llist.c
index 878985c4d19d..700cff77a387 100644
--- a/lib/llist.c
+++ b/lib/llist.c
@@ -49,7 +49,6 @@ bool llist_add_batch(struct llist_node *new_first, struct llist_node *new_last,
 		entry = cmpxchg(&head->first, old_entry, new_first);
 		if (entry == old_entry)
 			break;
-		cpu_relax();
 	}
 
 	return old_entry == NULL;
@@ -83,7 +82,6 @@ struct llist_node *llist_del_first(struct llist_head *head)
 		entry = cmpxchg(&head->first, old_entry, next);
 		if (entry == old_entry)
 			break;
-		cpu_relax();
 	}
 
 	return entry;