commit    9ff9b0d392ea08090cd1780fb196f36dbb586529
tree      276a3a5c4525b84dee64eda30b423fc31bf94850 /net/openvswitch/flow_table.c
parent    840e5bb326bbcb16ce82dd2416d2769de4839aea
parent    105faa8742437c28815b2a3eb8314ebc5fd9288c
author    Linus Torvalds <torvalds@linux-foundation.org> 2020-10-15 18:42:13 -0700
committer Linus Torvalds <torvalds@linux-foundation.org> 2020-10-15 18:42:13 -0700
Merge tag 'net-next-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
- Add the redirect_neigh() BPF packet redirect helper, which limits
stack traversal in common container configs and improves TCP
back-pressure. Daniel reports a ~10Gbps => ~15Gbps single-stream TCP
performance gain (see the tc sketch after this list).
- Expand netlink policy support and improve policy export to user
space. (Ge)netlink core performs request validation according to
declared policies. Expand the expressiveness of those policies
(min/max length and bitmasks). Allow dumping policies for particular
commands. This is used for feature discovery by user space (instead
of kernel version parsing or trial and error).
- Support IGMPv3/MLDv2 multicast listener discovery protocols in
bridge.
- Allow more than 255 IPv4 multicast interfaces.
- Add support for Type of Service (ToS) reflection in SYN/SYN-ACK
packets of TCPv6.
- In Multi-path TCP (MPTCP), support concurrent transmission of data on
multiple subflows in a load balancing scenario. Enhance advertising
addresses via the RM_ADDR/ADD_ADDR options.
- Support SMC-Dv2 version of SMC, which enables multi-subnet
deployments.
- Allow more calls to the same peer in RxRPC.
- Support two new Controller Area Network (CAN) protocols - CAN-FD and
ISO 15765-2:2016 (ISO-TP); see the socket sketch after this list.
- Add an xfrm/IPsec compat layer, solving the 32-bit user space on
64-bit kernel problem.
- Add TC actions for implementing MPLS L2 VPNs.
- Improve nexthop code - e.g. handle corner cases better when nexthop
objects are removed from groups, skip unnecessary notifications, and
make it easier to offload nexthops into HW by converting to a blocking
notifier.
- Support adding and consuming TCP header options by BPF programs,
opening the doors for easy experimental and deployment-specific TCP
option use.
- Reorganize TCP congestion control (CC) initialization to simplify the
life of TCP CC algorithms implemented in BPF.
- Add support for shipping BPF programs with the kernel and loading
them early on boot via the User Mode Driver mechanism, hence reusing
all the user space infra we have.
- Support sleepable BPF programs, initially targeting LSM and tracing
(see the minimal LSM sketch after this list).
- Add the bpf_d_path() helper for returning the full path of a given
'struct path' (see the tracing sketch after this list).
- Make bpf_tail_call compatible with bpf-to-bpf calls.
- Allow BPF programs to call map_update_elem on sockmaps.
- Add BPF Type Format (BTF) support for type and enum discovery, as
well as support for using BTF within the kernel itself (current use
is for pretty printing structures).
- Support listing and getting information about bpf_links via the bpf
syscall.
- Enhance kernel interfaces around NIC firmware updates. Allow
specifying an overwrite mask to control whether settings etc. are
reset during an update; report the expected maximum time an operation
may take to users; support firmware activation without a machine
reboot, including limits on how much impact the reset may have (e.g.
dropping the link or not).
- Extend ethtool configuration interface to report IEEE-standard
counters, to limit the need for per-vendor logic in user space.
- Adopt or extend devlink use for debug, monitoring, fw update in many
drivers (dsa loop, ice, ionic, sja1105, qed, mlxsw, mv88e6xxx,
dpaa2-eth).
- In mlxsw expose critical and emergency SFP module temperature alarms.
Refactor port buffer handling to make the defaults more suitable and
support setting these values explicitly via the DCBNL interface.
- Add XDP support for Intel's igb driver.
- Support offloading TC flower classification and filtering rules to
mscc_ocelot switches.
- Add PTP support for Marvell Octeontx2 and PP2.2 hardware, as well as
fixed interval period pulse generator and one-step timestamping in
dpaa-eth.
- Add support for various auth offloads in WiFi APs, e.g. SAE (WPA3)
offload.
- Add Lynx PHY/PCS MDIO module, and convert various drivers which have
this HW to use it. Convert mvpp2 to split PCS.
- Support Marvell Prestera 98DX3255 24-port switch ASICs, as well as
7-port Mediatek MT7531 IP.
- Add initial support for QCA6390 and IPQ6018 in ath11k WiFi driver,
and wcn3680 support in wcn36xx.
- Improve performance by 20% for packets which don't require many
offloads on recent Mellanox NICs, by making multiple packets share a
descriptor entry.
- Move chelsio inline crypto drivers (for TLS and IPsec) from the
crypto subtree to drivers/net. Move MDIO drivers out of the phy
directory.
- Clean up a lot of W=1 warnings; reportedly, the actively developed
subsections of networking drivers now build W=1 warning free.
- Make sure drivers don't use in_interrupt() to dynamically adapt their
code. Convert tasklets to the new tasklet_setup() API (sadly this
conversion is not yet complete); see the tasklet sketch after this
list.
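A minimal sketch of the redirect_neigh() helper in a tc BPF program,
assuming the four-argument v5.10 signature (a NULL nexthop parameter
asks the kernel to consult the FIB); the egress ifindex and the device
names in the comments are made-up examples, not from this pull request:

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical tc program; build with
 *   clang -O2 -target bpf -c redir.c -o redir.o
 * and attach with
 *   tc qdisc add dev eth0 clsact
 *   tc filter add dev eth0 egress bpf da obj redir.o sec classifier
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define TARGET_IFINDEX 2	/* assumed egress device index */

SEC("classifier")
int redir_prog(struct __sk_buff *skb)
{
	/* NULL params / 0 plen: resolve the nexthop via a FIB lookup,
	 * then fill in the L2 addresses from the neighbour subsystem.
	 */
	return bpf_redirect_neigh(TARGET_IFINDEX, NULL, 0, 0);
}

char __license[] SEC("license") = "GPL";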
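For the new ISO 15765-2:2016 (ISO-TP) protocol, a user-space sketch of
the CAN_ISOTP socket type, which lets the kernel segment and reassemble
payloads larger than a single CAN frame. The CAN IDs and the vcan0
interface are illustrative only:

/* Assumes a vcan0 test interface exists:
 *   ip link add dev vcan0 type vcan && ip link set up vcan0
 */
#include <stdio.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

int main(void)
{
	struct sockaddr_can addr = { .can_family = AF_CAN };
	int s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);

	if (s < 0) {
		perror("socket");	/* CAN_ISOTP needs kernel >= 5.10 */
		return 1;
	}

	addr.can_ifindex = if_nametoindex("vcan0");
	addr.can_addr.tp.tx_id = 0x7e0;	/* example request CAN ID */
	addr.can_addr.tp.rx_id = 0x7e8;	/* example response CAN ID */

	if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		return 1;
	}

	/* The kernel segments anything longer than one CAN frame. */
	write(s, "hello isotp, longer than eight bytes", 36);
	close(s);
	return 0;
}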
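For sleepable BPF programs, a minimal LSM sketch. The ".s" suffix in
the section name is what requests a sleepable program; the hook choice
and the trivial allow-everything body are illustrative, and vmlinux.h
is assumed to be generated with bpftool (CONFIG_BPF_LSM=y and "bpf" in
the active LSM list are required):

// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* Sleepable programs may use helpers that can fault, such as
 * bpf_copy_from_user(), which non-sleepable programs cannot.
 */
SEC("lsm.s/file_open")
int BPF_PROG(allow_open, struct file *file)
{
	return 0;	/* 0 = allow; nonzero would deny the open */
}

char LICENSE[] SEC("license") = "GPL";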
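For bpf_d_path(), a tracing sketch modeled on the kernel selftest: it
attaches to filp_close(), which is on the helper's allowlist; the
program name and buffer size are illustrative:

// SPDX-License-Identifier: GPL-2.0
/* Assumes vmlinux.h and libbpf's tracing macros. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("fentry/filp_close")
int BPF_PROG(log_close, struct file *file)
{
	char path[256];

	/* bpf_d_path() returns the length of the resolved path on
	 * success and a negative error otherwise.
	 */
	if (bpf_d_path(&file->f_path, path, sizeof(path)) > 0)
		bpf_printk("close: %s", path);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";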
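And for the tasklet_setup() conversions, a sketch of the new API
against a hypothetical driver structure (foo_dev and its fields are
made up):

#include <linux/interrupt.h>

struct foo_dev {			/* hypothetical driver state */
	struct tasklet_struct tasklet;
	int pending;
};

/* New-style callback: receives the tasklet itself instead of an opaque
 * unsigned long, and recovers the enclosing structure with
 * from_tasklet(), a type-safe container_of() wrapper.
 */
static void foo_do_work(struct tasklet_struct *t)
{
	struct foo_dev *foo = from_tasklet(foo, t, tasklet);

	foo->pending = 0;
}

static void foo_init(struct foo_dev *foo)
{
	/* Replaces tasklet_init(&foo->tasklet, fn, (unsigned long)foo). */
	tasklet_setup(&foo->tasklet, foo_do_work);
}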
* tag 'net-next-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2583 commits)
Revert "bpfilter: Fix build error with CONFIG_BPFILTER_UMH"
net, sockmap: Don't call bpf_prog_put() on NULL pointer
bpf, selftest: Fix flaky tcp_hdr_options test when adding addr to lo
bpf, sockmap: Add locking annotations to iterator
netfilter: nftables: allow re-computing sctp CRC-32C in 'payload' statements
net: fix pos incrementment in ipv6_route_seq_next
net/smc: fix invalid return code in smcd_new_buf_create()
net/smc: fix valid DMBE buffer sizes
net/smc: fix use-after-free of delayed events
bpfilter: Fix build error with CONFIG_BPFILTER_UMH
cxgb4/ch_ipsec: Replace the module name to ch_ipsec from chcr
net: sched: Fix suspicious RCU usage while accessing tcf_tunnel_info
bpf: Fix register equivalence tracking.
rxrpc: Fix loss of final ack on shutdown
rxrpc: Fix bundle counting for exclusive connections
netfilter: restore NF_INET_NUMHOOKS
ibmveth: Identify ingress large send packets.
ibmveth: Switch order of ibmveth_helper calls.
cxgb4: handle 4-tuple PEDIT to NAT mode translation
selftests: Add VRF route leaking tests
...
Diffstat (limited to 'net/openvswitch/flow_table.c')
-rw-r--r--  net/openvswitch/flow_table.c | 70
1 file changed, 35 insertions(+), 35 deletions(-)
diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index e2235849a57e..87c286ad660e 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -111,12 +111,16 @@ static void flow_free(struct sw_flow *flow)
 	if (ovs_identifier_is_key(&flow->id))
 		kfree(flow->id.unmasked_key);
 	if (flow->sf_acts)
-		ovs_nla_free_flow_actions((struct sw_flow_actions __force *)flow->sf_acts);
+		ovs_nla_free_flow_actions((struct sw_flow_actions __force *)
+					  flow->sf_acts);
 	/* We open code this to make sure cpu 0 is always considered */
-	for (cpu = 0; cpu < nr_cpu_ids; cpu = cpumask_next(cpu, &flow->cpu_used_mask))
+	for (cpu = 0; cpu < nr_cpu_ids;
+	     cpu = cpumask_next(cpu, &flow->cpu_used_mask)) {
 		if (flow->stats[cpu])
 			kmem_cache_free(flow_stats_cache,
 					(struct sw_flow_stats __force *)flow->stats[cpu]);
+	}
+
 
 	kmem_cache_free(flow_cache, flow);
 }
@@ -164,7 +168,6 @@ static struct table_instance *table_instance_alloc(int new_size)
 
 	ti->n_buckets = new_size;
 	ti->node_ver = 0;
-	ti->keep_flows = false;
 	get_random_bytes(&ti->hash_seed, sizeof(u32));
 
 	return ti;
@@ -192,7 +195,7 @@ static void tbl_mask_array_reset_counters(struct mask_array *ma)
 	 * zero based counter we store the value at reset, and subtract it
 	 * later when processing.
 	 */
-	for (i = 0; i < ma->max; i++)  {
+	for (i = 0; i < ma->max; i++) {
 		ma->masks_usage_zero_cntr[i] = 0;
 
 		for_each_possible_cpu(cpu) {
@@ -273,7 +276,7 @@ static int tbl_mask_array_add_mask(struct flow_table *tbl,
 
 	if (ma_count >= ma->max) {
 		err = tbl_mask_array_realloc(tbl, ma->max +
-					      MASK_ARRAY_SIZE_MIN);
+					     MASK_ARRAY_SIZE_MIN);
 		if (err)
 			return err;
 
@@ -288,7 +291,7 @@ static int tbl_mask_array_add_mask(struct flow_table *tbl,
 	BUG_ON(ovsl_dereference(ma->masks[ma_count]));
 
 	rcu_assign_pointer(ma->masks[ma_count], new);
-	WRITE_ONCE(ma->count, ma_count +1);
+	WRITE_ONCE(ma->count, ma_count + 1);
 
 	return 0;
 }
@@ -309,10 +312,10 @@ static void tbl_mask_array_del_mask(struct flow_table *tbl,
 	return;
 
 found:
-	WRITE_ONCE(ma->count, ma_count -1);
+	WRITE_ONCE(ma->count, ma_count - 1);
 
-	rcu_assign_pointer(ma->masks[i], ma->masks[ma_count -1]);
-	RCU_INIT_POINTER(ma->masks[ma_count -1], NULL);
+	rcu_assign_pointer(ma->masks[i], ma->masks[ma_count - 1]);
+	RCU_INIT_POINTER(ma->masks[ma_count - 1], NULL);
 
 	kfree_rcu(mask, rcu);
 
@@ -448,26 +451,23 @@ free_mask_cache:
 
 static void flow_tbl_destroy_rcu_cb(struct rcu_head *rcu)
 {
-	struct table_instance *ti = container_of(rcu, struct table_instance, rcu);
+	struct table_instance *ti;
 
+	ti = container_of(rcu, struct table_instance, rcu);
 	__table_instance_destroy(ti);
 }
 
 static void table_instance_flow_free(struct flow_table *table,
-				  struct table_instance *ti,
-				  struct table_instance *ufid_ti,
-				  struct sw_flow *flow,
-				  bool count)
+				     struct table_instance *ti,
+				     struct table_instance *ufid_ti,
+				     struct sw_flow *flow)
 {
 	hlist_del_rcu(&flow->flow_table.node[ti->node_ver]);
-	if (count)
-		table->count--;
+	table->count--;
 
 	if (ovs_identifier_is_ufid(&flow->id)) {
 		hlist_del_rcu(&flow->ufid_table.node[ufid_ti->node_ver]);
-
-		if (count)
-			table->ufid_count--;
+		table->ufid_count--;
 	}
 
 	flow_mask_remove(table, flow->mask);
@@ -480,22 +480,25 @@ void table_instance_flow_flush(struct flow_table *table,
 {
 	int i;
 
-	if (ti->keep_flows)
-		return;
-
 	for (i = 0; i < ti->n_buckets; i++) {
-		struct sw_flow *flow;
 		struct hlist_head *head = &ti->buckets[i];
 		struct hlist_node *n;
+		struct sw_flow *flow;
 
 		hlist_for_each_entry_safe(flow, n, head,
 					  flow_table.node[ti->node_ver]) {
 
 			table_instance_flow_free(table, ti, ufid_ti,
-						 flow, false);
+						 flow);
 			ovs_flow_free(flow, true);
 		}
 	}
+
+	if (WARN_ON(table->count != 0 ||
+		    table->ufid_count != 0)) {
+		table->count = 0;
+		table->ufid_count = 0;
+	}
 }
 
 static void table_instance_destroy(struct table_instance *ti,
@@ -596,8 +599,6 @@ static void flow_table_copy_flows(struct table_instance *old,
 					 lockdep_ovsl_is_held())
 			table_instance_insert(new, flow);
 	}
-
-	old->keep_flows = true;
 }
 
 static struct table_instance *table_instance_rehash(struct table_instance *ti,
@@ -632,8 +633,6 @@ int ovs_flow_tbl_flush(struct flow_table *flow_table)
 	rcu_assign_pointer(flow_table->ti, new_ti);
 	rcu_assign_pointer(flow_table->ufid_ti, new_ufid_ti);
 	flow_table->last_rehash = jiffies;
-	flow_table->count = 0;
-	flow_table->ufid_count = 0;
 
 	table_instance_flow_flush(flow_table, old_ti, old_ufid_ti);
 	table_instance_destroy(old_ti, old_ufid_ti);
@@ -661,7 +660,7 @@ static int flow_key_start(const struct sw_flow_key *key)
 		return 0;
 	else
 		return rounddown(offsetof(struct sw_flow_key, phy),
-				  sizeof(long));
+				 sizeof(long));
 }
 
 static bool cmp_key(const struct sw_flow_key *key1,
@@ -673,7 +672,7 @@ static bool cmp_key(const struct sw_flow_key *key1,
 	long diffs = 0;
 	int i;
 
-	for (i = key_start; i < key_end;  i += sizeof(long))
+	for (i = key_start; i < key_end; i += sizeof(long))
 		diffs |= *cp1++ ^ *cp2++;
 
 	return diffs == 0;
@@ -713,7 +712,7 @@ static struct sw_flow *masked_flow_lookup(struct table_instance *ti,
 
 	(*n_mask_hit)++;
 	hlist_for_each_entry_rcu(flow, head, flow_table.node[ti->node_ver],
-				lockdep_ovsl_is_held()) {
+				 lockdep_ovsl_is_held()) {
 		if (flow->mask == mask && flow->flow_table.hash == hash &&
 		    flow_cmp_masked_key(flow, &masked_key, &mask->range))
 			return flow;
@@ -897,7 +896,8 @@ static bool ovs_flow_cmp_ufid(const struct sw_flow *flow,
 	return !memcmp(flow->id.ufid, sfid->ufid, sfid->ufid_len);
 }
 
-bool ovs_flow_cmp(const struct sw_flow *flow, const struct sw_flow_match *match)
+bool ovs_flow_cmp(const struct sw_flow *flow,
+		  const struct sw_flow_match *match)
 {
 	if (ovs_identifier_is_ufid(&flow->id))
 		return flow_cmp_masked_key(flow, match->key, &match->range);
@@ -916,7 +916,7 @@ struct sw_flow *ovs_flow_tbl_lookup_ufid(struct flow_table *tbl,
 	hash = ufid_hash(ufid);
 	head = find_bucket(ti, hash);
 	hlist_for_each_entry_rcu(flow, head, ufid_table.node[ti->node_ver],
-				lockdep_ovsl_is_held()) {
+				 lockdep_ovsl_is_held()) {
 		if (flow->ufid_table.hash == hash &&
 		    ovs_flow_cmp_ufid(flow, ufid))
 			return flow;
@@ -950,7 +950,7 @@ void ovs_flow_tbl_remove(struct flow_table *table, struct sw_flow *flow)
 	struct table_instance *ufid_ti = ovsl_dereference(table->ufid_ti);
 
 	BUG_ON(table->count == 0);
-	table_instance_flow_free(table, ti, ufid_ti, flow, true);
+	table_instance_flow_free(table, ti, ufid_ti, flow);
 }
 
 static struct sw_flow_mask *mask_alloc(void)
@@ -1107,7 +1107,7 @@ void ovs_flow_masks_rebalance(struct flow_table *table)
 	if (!masks_and_count)
 		return;
 
-	for (i = 0; i < ma->max; i++)  {
+	for (i = 0; i < ma->max; i++) {
 		struct sw_flow_mask *mask;
 		unsigned int start;
 		int cpu;