| field | value | date |
|---|---|---|
| author | Eric Dumazet <eric.dumazet@gmail.com> | 2011-04-04 17:04:03 +0200 |
| committer | Patrick McHardy <kaber@trash.net> | 2011-04-04 17:04:03 +0200 |
| commit | 7f5c6d4f665bb57a19a34ce1fb16cc708c04f219 (patch) | |
| tree | e804faa506bbf9edcfd1fdadb2ab3749f58836cd /kernel | |
| parent | 8f7b01a178b8e6a7b663a1bbaa1710756d67b69b (diff) | |
| download | linux-7f5c6d4f665bb57a19a34ce1fb16cc708c04f219.tar.bz2 | |
netfilter: get rid of atomic ops in fast path
We currently use a percpu spinlock to 'protect' rule bytes/packets
counters, after various attempts to use RCU instead.
Lately we added a seqlock so that get_counters() can run without
blocking BH or 'writers'. But we really only need the seqcount in it.
The spinlock itself is only ever locked by the current/owner cpu, so we can
remove it completely.
This cleans up the API, using the correct 'writer' vs 'reader' semantics.
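The post-patch writer side can be sketched as follows, continuing the illustrative names from the sketch above (the identifiers and helper are not the ones introduced by the patch, and the real code additionally copes with nested invocations of the hook on the same cpu). The idea is that the owning cpu brackets its update with an odd/even bump of a plain per-cpu seqcount, with no spinlock and no atomic operations.

```c
/* One sequence counter per cpu, seqcount_init()'ed at startup
 * (omitted); only its owner cpu ever writes it. */
static DEFINE_PER_CPU(seqcount_t, rule_counter_seq);

/* New fast path, entered with BH disabled on the owning cpu. */
static void count_packet(struct rule_counters *c, unsigned int len)
{
	seqcount_t *seq = this_cpu_ptr(&rule_counter_seq);

	write_seqcount_begin(seq);	/* sequence becomes odd  */
	c->bcnt += len;
	c->pcnt++;
	write_seqcount_end(seq);	/* sequence becomes even */
}
```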
At replace time, the get_counters() call makes sure all cpus are done
using the old table.
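The reader side, in the spirit of get_counters(), then walks every cpu, snapshots its counters, and retries whenever that cpu's sequence shows a writer was active in between. Again only a sketch reusing the illustrative names above; the per-cpu counter allocation is assumed.

```c
/* Slow path: sum a per-cpu counter pair into a stable snapshot. */
static void snapshot_counters(struct rule_counters __percpu *counters,
			      u64 *bytes, u64 *packets)
{
	int cpu;

	*bytes = 0;
	*packets = 0;

	for_each_possible_cpu(cpu) {
		const struct rule_counters *c = per_cpu_ptr(counters, cpu);
		seqcount_t *seq = &per_cpu(rule_counter_seq, cpu);
		unsigned int start;
		u64 b, p;

		do {			/* retry if that cpu was mid-update */
			start = read_seqcount_begin(seq);
			b = c->bcnt;
			p = c->pcnt;
		} while (read_seqcount_retry(seq, start));

		*bytes += b;
		*packets += p;
	}
}
```

A full pass of this kind over all cpus is what the replace path relies on to conclude that no cpu is still updating the old table's counters.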
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Diffstat (limited to 'kernel')
0 files changed, 0 insertions, 0 deletions