path: root/net
Age  Commit message  Author  Files  Lines
2019-06-12  net: sched: ingress: set 'unlocked' flag for Qdisc ops  (Vlad Buslov)  1 file, -0/+1
To remove rtnl lock dependency in tc filter update API when using ingress Qdisc, set QDISC_CLASS_OPS_DOIT_UNLOCKED flag in ingress Qdisc_class_ops. Ingress Qdisc ops don't require any modifications to be used without rtnl lock on tc filter update path. Ingress implementation never changes its q->block and only releases it when Qdisc is being destroyed. This means it is enough for RTM_{NEWTFILTER|DELTFILTER|GETTFILTER} message handlers to hold ingress Qdisc reference while using it without relying on rtnl lock protection. Unlocked Qdisc ops support is already implemented in filter update path by unlocked cls API patch set. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-11  net/tls: add kernel-driven resync mechanism for TX  (Jakub Kicinski)  1 file, -0/+27
TLS offload drivers keep track of TCP seq numbers to make sure the packets are fed into the HW in order. When packets get dropped on the way through the stack, the driver will get out of sync and have to use fallback encryption, but unless the TCP seq number is resynced it will never match the packets correctly (or even worse - use an incorrect record sequence number after the TCP seq wraps). Existing drivers (mlx5) feed the entire record on every out-of-order event, allowing FW/HW to always be in sync. This patch adds an alternative, more akin to the RX resync. When the driver sees a frame which is past its expected sequence number the stream must have gotten out of order (if the sequence number is smaller than expected it's likely a retransmission which doesn't require resync). The driver will ask the stack to perform TX sync before it submits the next full record, and fall back to software crypto until the stack has performed the sync. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-11  net/tls: generalize the resync callback  (Jakub Kicinski)  1 file, -2/+3
Currently only RX direction is ever resynced, however, TX may also get out of sequence if packets get dropped on the way to the driver. Rename the resync callback and add a direction parameter. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-11  net/tls: add kernel-driven TLS RX resync  (Jakub Kicinski)  2 files, -13/+94
A TLS offload device may lose sync with the TCP stream if packets arrive out of order. Drivers can currently request a resync at a specific TCP sequence number. When a record is found starting at that sequence number, the kernel will inform the device of the corresponding record number. This requires the device to constantly scan the stream for a known pattern (constant bytes of the header) after sync is lost. This patch adds an alternative approach which is entirely under the control of the kernel. The kernel tracks records it had to fully decrypt, even though the TLS socket is in TLS_HW mode. If multiple records did not have any decrypted parts, it's a pretty strong indication that the device is out of sync. We choose the minimum number of fully encrypted records to be 2, which should hopefully be more than will get retransmitted at a time. After the kernel decides the device is out of sync it schedules a resync request. If the TCP socket is empty the resync is performed immediately; if it is not empty, we leave it to the record parser to resync when the next record comes. Before resyncing in the message parser we peek at the TCP socket and don't attempt the sync if the socket already has some of the next record queued. On resync failure (encrypted data continues to flow in) we retry with exponential backoff, up to once every 128 records (with 16k records that's at most once every 2M of data). Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
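The decision logic described above (resync after a minimum of 2 consecutive fully software-decrypted records, retrying with exponential backoff capped at one request per 128 records) can be modelled roughly as follows. This is an illustrative userspace sketch with invented names, not the tls_device code itself:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RESYNC_START_THRESHOLD 2    /* min fully-encrypted records before resync */
#define RESYNC_MAX_INTERVAL    128  /* cap for the exponential backoff */

struct resync_state {
        uint32_t sw_decrypted;   /* consecutive records the kernel had to decrypt */
        uint32_t backoff;        /* current retry interval, in records */
};

/* Called once per parsed record; returns true when a resync request
 * should be scheduled for the device. */
static bool record_parsed(struct resync_state *s, bool fully_sw_decrypted)
{
        if (!fully_sw_decrypted) {
                /* The device decrypted (part of) the record: it is in sync. */
                s->sw_decrypted = 0;
                s->backoff = RESYNC_START_THRESHOLD;
                return false;
        }

        s->sw_decrypted++;
        if (s->sw_decrypted < s->backoff)
                return false;

        /* Ask for a resync and back off exponentially in case it fails. */
        s->sw_decrypted = 0;
        if (s->backoff < RESYNC_MAX_INTERVAL)
                s->backoff *= 2;
        return true;
}

int main(void)
{
        struct resync_state s = { 0, RESYNC_START_THRESHOLD };

        /* Simulate a stream where every record had to be decrypted in software. */
        for (int i = 0; i < 300; i++)
                if (record_parsed(&s, true))
                        printf("resync requested after record %d\n", i);
        return 0;
}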
2019-06-11  net/tls: rename handle_device_resync()  (Jakub Kicinski)  2 files, -2/+3
handle_device_resync() doesn't describe the function very well. The function checks if resync should be issued upon parsing of a new record. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-11  net/tls: pass record number as a byte array  (Jakub Kicinski)  2 files, -7/+13
TLS offload code casts the record number to a u64. The buffer should be aligned to 8 bytes, but it's actually a __be64, and the rest of the TLS code treats it as a big int. Make the offload callbacks take a byte array; drivers can make the choice to do the ugly cast if they want to. Prepare for copying the record number onto the stack by defining a constant for the max size of the byte array. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-11  net/tls: simplify seq calculation in handle_device_resync()  (Jakub Kicinski)  1 file, -4/+3
We subtract "TLS_HEADER_SIZE - 1" from req_seq, then if they match we add the same constant to seq. Just add it to seq, and we don't have to touch req_seq. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-11  packet: remove unused variable 'status' in __packet_lookup_frame_in_block  (Mao Wenan)  1 file, -2/+1
The variable 'status' in __packet_lookup_frame_in_block() is never used since introduction in commit f6fb8f100b80 ("af-packet: TPACKET_V3 flexible buffer implementation."), we can remove it. Signed-off-by: Mao Wenan <maowenan@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-11  net: openvswitch: remove unnecessary ASSERT_OVSL in ovs_vport_del()  (Taehee Yoo)  1 file, -2/+0
ASSERT_OVSL() in ovs_vport_del() is unnecessary because ovs_vport_del() is only called by ovs_dp_detach_port() and ovs_dp_detach_port() calls ASSERT_OVSL() too. Signed-off-by: Taehee Yoo <ap420073@gmail.com> Reviewed-by: Greg Rose <gvrose8192@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-11  net: netlink: make netlink_walk_start() void return type  (Taehee Yoo)  1 file, -12/+3
netlink_walk_start() needed to return an error code because of rhashtable_walk_init(), but that was converted to rhashtable_walk_enter(), which is a void function, so netlink_walk_start() no longer needs a return value. Signed-off-by: Taehee Yoo <ap420073@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  nexthops: add support for replace  (David Ahern)  1 file, -5/+214
Add support for atomically updating a nexthop config. When updating a nexthop, walk the lists of associated fib entries and verify the new config is valid. Replace is done by swapping nh_info for single nexthops - the new config is applied to the old nexthop struct, and the old config is moved to the new nexthop struct. For nexthop groups the same applies but for nh_group. In addition for groups the nh_parent reference needs to be updated. The old config is released by calling __remove_nexthop on the 'new' nexthop which now has the old config. This is done to avoid messing around with the list_heads that track which fib entries are using the nexthop. After the swap of config data, bump the sequence counters for FIB entries to invalidate any dst entries and send notifications to userspace. The notifications include the new nexthop spec as well as any fib entries using the updated nexthop struct. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  ipv6: Allow routes to use nexthop objects  (David Ahern)  1 file, -8/+81
Add support for RTA_NH_ID attribute to allow a user to specify a nexthop id to use with a route. fc_nh_id is added to fib6_config to hold the value passed in the RTA_NH_ID attribute. If a nexthop id is given, the gateway, device, encap and multipath attributes can not be set. Update ip6_route_del to check metric and protocol before nexthop specs. If fc_nh_id is set, then it must match the id in the route entry. Since IPv6 allows delete of a cached entry (an exception), add ip6_del_cached_rt_nh to cycle through all of the fib6_nh in a fib entry if it is using a nexthop. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  ipv4: Optimization for fib_info lookup with nexthops  (David Ahern)  1 file, -6/+65
Be optimistic about re-using a fib_info when nexthop id is given and the route does not use metrics. Avoids a memory allocation which in most cases is expected to be freed anyways. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  ipv4: Allow routes to use nexthop objects  (David Ahern)  2 files, -0/+34
Add support for RTA_NH_ID attribute to allow a user to specify a nexthop id to use with a route. fc_nh_id is added to fib_config to hold the value passed in the RTA_NH_ID attribute. If a nexthop id is given, the gateway, device, encap and multipath attributes can not be set. Update fib_nh_match to check ids on a route delete. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  ipv6: Handle all fib6_nh in a nexthop in mtu updates  (David Ahern)  1 file, -1/+28
Use nexthop_for_each_fib6_nh to call fib6_nh_mtu_change for each fib6_nh in a nexthop for rt6_mtu_change_route. For __ip6_rt_update_pmtu, we need to find the nexthop that correlates to the device and gateway in the rt6_info. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  ipv6: Handle all fib6_nh in a nexthop in rt6_do_redirect  (David Ahern)  1 file, -1/+19
Use nexthop_for_each_fib6_nh and fib6_nh_find_match to find the fib6_nh in a nexthop that correlates to the device and gateway in the rt6_info. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  ipv6: Handle all fib6_nh in a nexthop in __ip6_route_redirect  (David Ahern)  1 file, -4/+35
Add a hook in __ip6_route_redirect to handle a nexthop struct in a fib6_info. Use nexthop_for_each_fib6_nh and fib6_nh_redirect_match to call ip6_redirect_nh_match for each fib6_nh looking for a match. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  ipv6: Handle all fib6_nh in a nexthop in exception handling  (David Ahern)  1 file, -3/+108
Add a hook in rt6_flush_exceptions, rt6_remove_exception_rt, rt6_update_exception_stamp_rt, and rt6_age_exceptions to handle nexthop struct in a fib6_info. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  ipv6: Handle all fib6_nh in a nexthop in fib6_info_uses_dev  (David Ahern)  1 file, -0/+18
Add a hook in fib6_info_uses_dev to handle nexthop struct in a fib6_info. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  ipv6: Handle all fib6_nh in a nexthop in rt6_nlmsg_size  (David Ahern)  1 file, -12/+37
Add a hook in rt6_nlmsg_size to handle nexthop struct in a fib6_info. rt6_nh_nlmsg_size is used to sum the space needed for all nexthops in the fib entry. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  ipv6: Handle all fib6_nh in a nexthop in __find_rr_leaf  (David Ahern)  1 file, -2/+47
Add a hook in __find_rr_leaf to handle nexthop struct in a fib6_info. nexthop_for_each_fib6_nh is used to walk each fib6_nh in a nexthop and call find_match. On a match, use the fib6_nh saved in the callback arg to setup fib6_result. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  ipv6: Handle all fib6_nh in a nexthop in rt6_device_match  (David Ahern)  1 file, -2/+52
Add a hook in rt6_device_match to handle nexthop struct in a fib6_info. The new rt6_nh_dev_match uses nexthop_for_each_fib6_nh to walk each fib6_nh in a nexthop and call __rt6_device_match. On match, rt6_nh_dev_match returns the fib6_nh and rt6_device_match uses it to setup fib6_result. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  ipv6: Handle all fib6_nh in a nexthop in fib6_drop_pcpu_from  (David Ahern)  1 file, -4/+27
Use nexthop_for_each_fib6_nh to walk all fib6_nh in a nexthop when dropping 'from' reference in pcpu routes. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-10  nexthops: Add ipv6 helper to walk all fib6_nh in a nexthop struct  (David Ahern)  1 file, -0/+31
IPv6 has traditionally had a single fib6_nh per fib6_info. With nexthops we can have multiple fib6_nh associated with a fib6_info. Add a nexthop helper to invoke a callback for each fib6_nh in a 'struct nexthop'. If the callback returns non-0, the loop is stopped and the return value passed to the caller. Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
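The helper follows the usual kernel iterator-with-callback pattern: the callback runs once per fib6_nh, and a non-zero return value stops the walk and is propagated to the caller. A self-contained sketch of that pattern with toy types (not the actual nexthop API):

#include <stdio.h>

struct fib6_nh { int ifindex; };   /* toy stand-in for illustration */

struct nexthop {
        int num_nh;
        struct fib6_nh nh[4];
};

/* Walk every fib6_nh in a nexthop; stop and return the callback's value
 * as soon as it returns non-zero (mirrors the behaviour described above). */
static int for_each_fib6_nh(struct nexthop *nh,
                            int (*cb)(struct fib6_nh *, void *), void *arg)
{
        for (int i = 0; i < nh->num_nh; i++) {
                int err = cb(&nh->nh[i], arg);
                if (err)
                        return err;
        }
        return 0;
}

/* Example callback: look for the fib6_nh using a given device index. */
static int match_dev(struct fib6_nh *fnh, void *arg)
{
        return fnh->ifindex == *(int *)arg;   /* non-zero stops the walk */
}

int main(void)
{
        struct nexthop nh = { .num_nh = 3, .nh = { {2}, {5}, {7} } };
        int want = 5;

        printf("match: %d\n", for_each_fib6_nh(&nh, match_dev, &want));
        return 0;
}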
2019-06-10  tcp: Make tcp_fastopen_alloc_ctx static  (YueHaibing)  1 file, -3/+3
Fix sparse warning: net/ipv4/tcp_fastopen.c:75:29: warning: symbol 'tcp_fastopen_alloc_ctx' was not declared. Should it be static? Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Acked-by: Jason Baron <jbaron@akamai.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-09  ipv6: tcp: send consistent autoflowlabel in TIME_WAIT state  (Eric Dumazet)  2 files, -3/+11
In case autoflowlabel is in action, skb_get_hash_flowi6() derives the flowlabel from a non-zero skb->hash. If skb->hash is zero, a flow dissection is performed. Since all TCP skbs sent from ESTABLISHED state inherit their skb->hash from sk->sk_txhash, we had better keep a copy of sk->sk_txhash in the TIME_WAIT socket. After this patch, ACK or RST packets sent on behalf of a TIME_WAIT socket have the flowlabel that was previously used by the flow. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-09  af_key: make use of BUG_ON macro  (Hariprasad Kelam)  1 file, -4/+2
fix below warnings reported by coccicheck:

  net/key/af_key.c:932:2-5: WARNING: Use BUG_ON instead of if condition followed by BUG.
  net/key/af_key.c:948:2-5: WARNING: Use BUG_ON instead of if condition followed by BUG.

Signed-off-by: Hariprasad Kelam <hariprasad.kelam@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
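For reference, the transformation coccicheck is asking for has this shape (schematic fragment; the condition and context variable are placeholders, not the actual af_key code):

/* before: open-coded test followed by BUG() */
if (some_invariant_violated(ctx))
        BUG();

/* after: the equivalent, more compact form */
BUG_ON(some_invariant_violated(ctx));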
2019-06-09  ipv6: tcp: fix potential NULL deref in tcp_v6_send_reset()  (Eric Dumazet)  1 file, -1/+1
syzbot found a crash in tcp_v6_send_reset() caused by my latest change. The problem is that if an skb has been queued to the socket prequeue, skb_dst(skb)->dev can no longer point to the device. Fortunately in this case the socket pointer is not NULL. A similar issue has been fixed in commit 0f85feae6b71 ("tcp: fix more NULL deref after prequeue changes"), I should have known better. Fixes: 323a53c41292 ("ipv6: tcp: enable flowlabel reflection in some RST packets") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-09  net: hwbm: Make the hwbm_pool lock a mutex  (Sebastian Andrzej Siewior)  1 file, -8/+7
Based on review, `lock' is only acquired in hwbm_pool_add() which is invoked via ->probe(), ->resume() and ->ndo_change_mtu(). Based on this the lock can become a mutex and there is no need to disable interrupts during the procedure. Now that the lock is a mutex, hwbm_pool_add() no longer invokes hwbm_pool_refill() in an atomic context so we can pass GFP_KERNEL to hwbm_pool_refill() and remove the `gfp' argument from hwbm_pool_add(). Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-09  net: Don't disable interrupts in __netdev_alloc_skb()  (Sebastian Andrzej Siewior)  1 file, -8/+11
__netdev_alloc_skb() can be used from any context and is used by NAPI and non-NAPI drivers. Non-NAPI drivers use it in interrupt context and NAPI drivers use it during initial allocation (->ndo_open() or ->ndo_change_mtu()). Some NAPI drivers share the same function for the initial allocation and the allocation in their NAPI callback. The interrupts are disabled in order to ensure locked access from every context to `netdev_alloc_cache'. Let __netdev_alloc_skb() check if interrupts are disabled. If they are, use `netdev_alloc_cache'. Otherwise disable BH and use `napi_alloc_cache.page'. The IRQ check is cheaper compared to disabling & enabling interrupts and memory allocation with disabled interrupts does not work on -RT. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
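The shape of the resulting allocation path, as described above, is roughly the following. This is an illustrative kernel-style fragment only; alloc_frag_from() is a placeholder for the real page-frag allocation call:

/* Illustrative fragment (not standalone, not the exact patch):
 * alloc_frag_from() and the cache names stand in for the real helpers. */
if (in_irq() || irqs_disabled()) {
        /* Hard-IRQ context or IRQs already off: keep using the
         * IRQ-safe netdev_alloc_cache directly. */
        data = alloc_frag_from(&netdev_alloc_cache, len, gfp_mask);
} else {
        /* Any other context: BH protection is enough for the per-CPU
         * napi_alloc_cache.page, and the check is cheaper than
         * disabling and re-enabling interrupts. */
        local_bh_disable();
        data = alloc_frag_from(&napi_alloc_cache.page, len, gfp_mask);
        local_bh_enable();
}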
2019-06-09  net: Don't disable interrupts in napi_alloc_frag()  (Sebastian Andrzej Siewior)  1 file, -26/+23
netdev_alloc_frag() can be used from any context and is used by NAPI and non-NAPI drivers. Non-NAPI drivers use it in interrupt context and NAPI drivers use it during initial allocation (->ndo_open() or ->ndo_change_mtu()). Some NAPI drivers share the same function for the initial allocation and the allocation in their NAPI callback. The interrupts are disabled in order to ensure locked access from every context to `netdev_alloc_cache'. Let netdev_alloc_frag() check if interrupts are disabled. If they are, use `netdev_alloc_cache' otherwise disable BH and invoke __napi_alloc_frag() for the allocation. The IRQ check is cheaper compared to disabling & enabling interrupts and memory allocation with disabled interrupts does not work on -RT. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-08  net: dsa: sja1105: Add a state machine for RX timestamping  (Vladimir Oltean)  1 file, -1/+120
Meta frame reception relies on the hardware keeping its promise that it will send no other traffic towards the CPU port between a link-local frame and a meta frame. Otherwise there is no other way to associate the meta frame with the link-local frame it's holding a timestamp of. The receive function is made stateful, and buffers a timestampable frame until its meta frame arrives, then merges the two, drops the meta and releases the link-local frame up the stack. Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
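A stripped-down model of that buffering might look like the sketch below; the frame structure and fields are invented for illustration and do not mirror the tagger's real data structures:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

struct frame {
        bool     is_link_local;  /* timestampable, waits for its meta frame */
        bool     is_meta;        /* carries the timestamp of the previous frame */
        uint64_t tstamp;
};

struct rx_state {
        struct frame *pending;   /* link-local frame held until the meta arrives */
};

/* Returns the frame to deliver up the stack, or NULL if it was buffered. */
static struct frame *rx_frame(struct rx_state *st, struct frame *f)
{
        if (f->is_link_local) {
                st->pending = f;           /* hold until the meta frame shows up */
                return NULL;
        }
        if (f->is_meta && st->pending) {
                struct frame *out = st->pending;

                out->tstamp = f->tstamp;   /* merge timestamp into the held frame */
                st->pending = NULL;
                free(f);                   /* the meta frame itself is dropped */
                return out;
        }
        return f;                          /* regular traffic passes straight through */
}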
2019-06-08  net: dsa: sja1105: Receive and decode meta frames  (Vladimir Oltean)  1 file, -3/+41
This adds support in the tagger for understanding the source port and switch id of meta frames. Their timestamp is also extracted but not used yet - this needs to be done in a state machine that modifies the previously received timestampable frame - will be added in a follow-up patch. Also take the opportunity to: - Remove a comment in sja1105_filter made obsolete by e8d67fa5696e ("net: dsa: sja1105: Don't store frame type in skb->cb") - Reorder the checks in sja1105_filter to optimize for the most likely scenario first: regular traffic. Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-08  net: dsa: sja1105: Make sja1105_is_link_local not match meta frames  (Vladimir Oltean)  1 file, -0/+2
Although meta frames are configured to be sent at SJA1105_META_DMAC (01-80-C2-00-00-0E), which is a multicast MAC address that would also be trapped by the switch to the CPU were it to receive it on a front-panel port, meta frames are conceptually not link-local frames; they only carry their RX timestamps. The choice of sending meta frames at a multicast DMAC is a pragmatic one, to avoid installing an extra entry to the DSA master port's multicast MAC filter. Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-08  net: dsa: sja1105: Build a minimal understanding of meta frames  (Vladimir Oltean)  1 file, -0/+15
Meta frames are sent on the CPU port by the switch if RX timestamping is enabled. They contain a partial timestamp of the previous frame. They are Ethernet frames with the Ethernet header constructed out of: - SJA1105_META_DMAC - SJA1105_META_SMAC - ETH_P_SJA1105_META The Ethernet payload will be decoded in a follow-up patch. Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-08  net: dsa: sja1105: Limit use of incl_srcpt to bridge+vlan mode  (Vladimir Oltean)  1 file, -7/+11
The incl_srcpt setting makes the switch mangle the destination MACs of multicast frames trapped to the CPU - a primitive tagging mechanism that works even when we cannot use the 802.1Q software features. The downside is that the two multicast MAC addresses that the switch traps for L2 PTP (01-80-C2-00-00-0E and 01-1B-19-00-00-00) quickly turn into a lot more, as the switch encodes the source port and switch id into bytes 3 and 4 of the MAC. The resulting range of MAC addresses would need to be installed manually into the DSA master port's multicast MAC filter, and even then, most devices might not have a large enough MAC filtering table. As a result, only limit use of incl_srcpt to when it's strictly necessary: when under a VLAN filtering bridge. This fixes PTP in non-bridged mode (standalone ports). Otherwise, PTP frames, as well as metadata follow-up frames holding RX timestamps won't be received because they will be blocked by the master port's MAC filter. Linuxptp doesn't help, because it only requests the addition of the unmodified PTP MACs to the multicast filter. This issue is not seen in bridged mode because the master port is put in promiscuous mode when the slave ports are enslaved to a bridge. Therefore, there is no downside to having the incl_srcpt mechanism active there. Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
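As described above, with incl_srcpt the switch overwrites two bytes of the trapped frame's multicast DMAC with the source port and switch id. A small illustrative decoder, using the byte positions stated in the message (not the driver's actual code):

#include <stdint.h>
#include <stdio.h>

/* Recover source port and switch id from a mangled PTP DMAC such as
 * 01-80-C2-xx-yy-0E, where xx/yy were overwritten by the switch. */
static void decode_srcpt(const uint8_t dmac[6], int *source_port, int *switch_id)
{
        *source_port = dmac[3];   /* byte 3: ingress port number */
        *switch_id   = dmac[4];   /* byte 4: switch id */
}

int main(void)
{
        const uint8_t dmac[6] = { 0x01, 0x80, 0xc2, 0x02, 0x00, 0x0e };
        int port, sw;

        decode_srcpt(dmac, &port, &sw);
        printf("source port %d, switch id %d\n", port, sw);
        return 0;
}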
2019-06-08  net: dsa: tag_8021q: Create helper function for removing VLAN header  (Vladimir Oltean)  2 files, -30/+46
This removes the existing implementation from tag_sja1105, which was partially incorrect (it was not changing the MAC header offset, thereby leaving it to point 4 bytes earlier than it should have). This overwrites the VLAN tag by moving the Ethernet source and destination MACs 4 bytes to the right. Then skb->data (assumed to be pointing immediately after the EtherType) is temporarily pushed to the beginning of the new Ethernet header, the new Ethernet header offset and length are recorded, then skb->data is moved back to where it was. Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
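The mechanics described here (overwriting the VLAN tag by shifting the two MAC addresses four bytes to the right) can be demonstrated on a plain buffer. A minimal userspace sketch, not the tag_8021q helper itself:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define VLAN_HLEN 4   /* 802.1Q tag: TPID + TCI */
#define ETH_ALEN  6

/* Strip the VLAN tag from a raw Ethernet frame by moving the destination
 * and source MACs VLAN_HLEN bytes to the right, over the tag.
 * Returns a pointer to the new start of the frame (frame + VLAN_HLEN). */
static uint8_t *strip_vlan(uint8_t *frame, size_t len)
{
        (void)len;   /* a real implementation would validate the length first */
        memmove(frame + VLAN_HLEN, frame, 2 * ETH_ALEN);
        return frame + VLAN_HLEN;
}

int main(void)
{
        /* dst=aa.., src=bb.., 802.1Q tag (0x8100, VID 5), EtherType IPv4 */
        uint8_t frame[] = {
                0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
                0xbb, 0xbb, 0xbb, 0xbb, 0xbb, 0xbb,
                0x81, 0x00, 0x00, 0x05,
                0x08, 0x00,
        };
        uint8_t *p = strip_vlan(frame, sizeof(frame));

        /* The EtherType now follows the MACs directly: prints "08 00" */
        printf("%02x %02x\n", p[2 * ETH_ALEN], p[2 * ETH_ALEN + 1]);
        return 0;
}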
2019-06-08  net: dsa: Add teardown callback for drivers  (Vladimir Oltean)  1 file, -0/+3
This is helpful for e.g. draining per-driver (not per-port) tagger queues. Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-08  net: dsa: Keep a pointer to the skb clone for TX timestamping  (Vladimir Oltean)  1 file, -0/+3
For drivers that use deferred_xmit for PTP frames (such as sja1105), there is no need to perform matching between PTP frames and their egress timestamps, since the sending process can be serialized. In that case, it makes sense to have the pointer to the skb clone that DSA made directly in the skb->cb. It will be used for pushing the egress timestamp back in the application socket's error queue. Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-07  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)  428 files, -2419/+533
Some ISDN files that got removed in net-next had some changes done in mainline, take the removals. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-07  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (Linus Torvalds)  15 files, -59/+92
Pull networking fixes from David Miller:

 1) Free AF_PACKET po->rollover properly, from Willem de Bruijn.
 2) Read SFP eeprom in max 16 byte increments to avoid problems with some SFP modules, from Russell King.
 3) Fix UDP socket lookup wrt. VRF, from Tim Beale.
 4) Handle route invalidation properly in s390 qeth driver, from Julian Wiedmann.
 5) Memory leak on unload in RDS, from Zhu Yanjun.
 6) sctp_process_init leak, from Neil Horman.
 7) Fix fib_rules rule insertion semantic change that broke Android, from Hangbin Liu.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (33 commits)
  pktgen: do not sleep with the thread lock held.
  net: mvpp2: Use strscpy to handle stat strings
  net: rds: fix memory leak in rds_ib_flush_mr_pool
  ipv6: fix EFAULT on sendto with icmpv6 and hdrincl
  ipv6: use READ_ONCE() for inet->hdrincl as in ipv4
  Revert "fib_rules: return 0 directly if an exactly same rule exists when NLM_F_EXCL not supplied"
  net: aquantia: fix wol configuration not applied sometimes
  ethtool: fix potential userspace buffer overflow
  Fix memory leak in sctp_process_init
  net: rds: fix memory leak when unload rds_rdma
  ipv6: fix the check before getting the cookie in rt6_get_cookie
  ipv4: not do cache for local delivery if bc_forwarding is enabled
  s390/qeth: handle error when updating TX queue count
  s390/qeth: fix VLAN attribute in bridge_hostnotify udev event
  s390/qeth: check dst entry before use
  s390/qeth: handle limited IPv4 broadcast in L3 TX path
  net: fix indirect calls helpers for ptype list hooks.
  net: ipvlan: Fix ipvlan device tso disabled while NETIF_F_IP_CSUM is set
  udp: only choose unbound UDP socket for multicast when not in a VRF
  net/tls: replace the sleeping lock around RX resync with a bit lock
  ...
2019-06-06  net/tls: export TLS per skb encryption  (Dirk van der Merwe)  1 file, -0/+6
While offloading TLS connections, drivers need to handle the case where out of order packets need to be transmitted. Other drivers obtain the entire TLS record for the specific skb to provide as context to hardware for encryption. However, other designs may also want to keep the hardware state intact and perform the out of order encryption entirely on the host. To achieve this, export the already existing software encryption fallback path so drivers could access this. Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-06  Merge tag 'nfs-for-5.2-2' of git://git.linux-nfs.org/projects/anna/linux-nfs  (Linus Torvalds)  2 files, -17/+16
Pull NFS client fixes from Anna Schumaker:
 "These are mostly stable bugfixes found during testing, many during the recent NFS bake-a-thon.

  Stable bugfixes:
   - SUNRPC: Fix regression in umount of a secure mount
   - SUNRPC: Fix a use after free when a server rejects the RPCSEC_GSS credential
   - NFSv4.1: Again fix a race where CB_NOTIFY_LOCK fails to wake a waiter
   - NFSv4.1: Fix bug only first CB_NOTIFY_LOCK is handled

  Other bugfixes:
   - xprtrdma: Use struct_size() in kzalloc()"

* tag 'nfs-for-5.2-2' of git://git.linux-nfs.org/projects/anna/linux-nfs:
  NFSv4.1: Fix bug only first CB_NOTIFY_LOCK is handled
  NFSv4.1: Again fix a race where CB_NOTIFY_LOCK fails to wake a waiter
  SUNRPC: Fix a use after free when a server rejects the RPCSEC_GSS credential
  SUNRPC fix regression in umount of a secure mount
  xprtrdma: Use struct_size() in kzalloc()
2019-06-06  pktgen: do not sleep with the thread lock held.  (Paolo Abeni)  1 file, -0/+11
Currently, the process issuing a "start" command on the pktgen procfs interface acquires the pktgen thread lock and never releases it until all pktgen threads have completed. This can block indefinitely any other pktgen command and any (even unrelated) netdevice removal, as the pktgen netdev notifier acquires the same lock. The issue is demonstrated by the following script, reported by Matteo:

  ip -b - <<'EOF'
  link add type dummy
  link add type veth
  link set dummy0 up
  EOF
  modprobe pktgen
  echo reset >/proc/net/pktgen/pgctrl
  {
  echo rem_device_all
  echo add_device dummy0
  } >/proc/net/pktgen/kpktgend_0
  echo count 0 >/proc/net/pktgen/dummy0
  echo start >/proc/net/pktgen/pgctrl &
  sleep 1
  rmmod veth

Fix the above by releasing the thread lock around the sleep call. Additionally, we must prevent racing with forceful rmmod, as the thread lock no longer protects against it. Instead, acquire a self-reference before waiting for any thread. As a side effect, running rmmod pktgen while some thread is running now fails with a "module in use" error; before this patch such a command hanged indefinitely. Note: the issue predates the commit referenced in the Fixes tag, but this fix can't be applied before that commit. v1 -> v2: no need to check for thread existence after flipping the lock, pktgen threads are freed only at net exit time. Fixes: 6146e6a43b35 ("[PKTGEN]: Removes thread_{un,}lock() macros.") Reported-and-tested-by: Matteo Croce <mcroce@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
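The locking pattern the fix describes looks roughly like this; an illustrative fragment only, with an invented wait condition and assuming the lock in question is the pktgen_thread_lock mutex:

/* Illustrative fragment, not the exact pktgen diff. */
if (!try_module_get(THIS_MODULE))        /* self-reference: blocks rmmod */
        return -ENODEV;

mutex_unlock(&pktgen_thread_lock);       /* never sleep with the lock held */
wait_event_interruptible(queue, thread_is_done(t));  /* placeholder condition */
mutex_lock(&pktgen_thread_lock);

module_put(THIS_MODULE);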
2019-06-06  ipv6: fix spelling mistake: "wtih" -> "with"  (Colin Ian King)  1 file, -1/+1
There is a spelling mistake in a NL_SET_ERR_MSG message. Fix it. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-06  net: rds: fix memory leak in rds_ib_flush_mr_pool  (Zhu Yanjun)  1 file, -4/+6
When the following tests last for several hours, the problem will occur.

  Server:  rds-stress -r 1.1.1.16 -D 1M
  Client:  rds-stress -r 1.1.1.14 -s 1.1.1.16 -D 1M -T 30

The following will occur.

  "
  Starting up....
  tsks  tx/s  rx/s  tx+rx K/s  mbi K/s  mbo K/s  tx us/c  rtt us  cpu %
     1     0     0       0.00     0.00     0.00     0.00    0.00  -1.00
     1     0     0       0.00     0.00     0.00     0.00    0.00  -1.00
     1     0     0       0.00     0.00     0.00     0.00    0.00  -1.00
     1     0     0       0.00     0.00     0.00     0.00    0.00  -1.00
  "

From vmcore, we can find that clean_list is NULL.

From the source code, rds_mr_flushd calls rds_ib_mr_pool_flush_worker. Then rds_ib_mr_pool_flush_worker calls

  rds_ib_flush_mr_pool(pool, 0, NULL);

Then in function

  int rds_ib_flush_mr_pool(struct rds_ib_mr_pool *pool, int free_all, struct rds_ib_mr **ibmr_ret)

ibmr_ret is NULL. In the source code,

  ...
  list_to_llist_nodes(pool, &unmap_list, &clean_nodes, &clean_tail);
  if (ibmr_ret)
          *ibmr_ret = llist_entry(clean_nodes, struct rds_ib_mr, llnode);

  /* more than one entry in llist nodes */
  if (clean_nodes->next)
          llist_add_batch(clean_nodes->next, clean_tail, &pool->clean_list);
  ...

When ibmr_ret is NULL, llist_entry is not executed. clean_nodes->next instead of clean_nodes is added in clean_list. So clean_nodes is discarded. It can not be used again. The workqueue is executed periodically. So more and more clean_nodes are discarded. Finally the clean_list is NULL. Then this problem will occur. Fixes: 1bc144b62524 ("net, rds, Replace xlist in net/rds/xlist.h with llist") Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-06  ipv6: fix EFAULT on sendto with icmpv6 and hdrincl  (Olivier Matz)  1 file, -5/+8
The following code returns EFAULT (Bad address):

  s = socket(AF_INET6, SOCK_RAW, IPPROTO_ICMPV6);
  setsockopt(s, SOL_IPV6, IPV6_HDRINCL, 1);
  sendto(ipv6_icmp6_packet, addr);   /* returns -1, errno = EFAULT */

The IPv4 equivalent code works. A workaround is to use IPPROTO_RAW instead of IPPROTO_ICMPV6. The failure happens because 2 bytes are eaten from the msghdr by rawv6_probe_proto_opt() starting from commit 19e3c66b52ca ("ipv6 equivalent of "ipv4: Avoid reading user iov twice after raw_probe_proto_opt""), but at that time it was not a problem because IPV6_HDRINCL was not yet introduced. Only eat these 2 bytes if hdrincl == 0. Fixes: 715f504b1189 ("ipv6: add IPV6_HDRINCL option for raw sockets") Signed-off-by: Olivier Matz <olivier.matz@6wind.com> Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
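A fleshed-out version of the reproducer above, as a sketch: addresses and payload are illustrative, checksum handling is ignored (the point is only to reach the failing sendto() path), and it needs CAP_NET_RAW:

#include <arpa/inet.h>
#include <netinet/icmp6.h>
#include <netinet/in.h>
#include <netinet/ip6.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IPV6_HDRINCL
#define IPV6_HDRINCL 36   /* value from the uapi headers, for older libcs */
#endif

int main(void)
{
        unsigned char pkt[sizeof(struct ip6_hdr) + sizeof(struct icmp6_hdr)];
        struct ip6_hdr *ip6 = (struct ip6_hdr *)pkt;
        struct icmp6_hdr *icmp6 = (struct icmp6_hdr *)(pkt + sizeof(*ip6));
        struct sockaddr_in6 dst = { .sin6_family = AF_INET6 };
        int on = 1;

        int s = socket(AF_INET6, SOCK_RAW, IPPROTO_ICMPV6);
        if (s < 0) { perror("socket"); return 1; }
        if (setsockopt(s, SOL_IPV6, IPV6_HDRINCL, &on, sizeof(on)) < 0) {
                perror("setsockopt(IPV6_HDRINCL)");
                return 1;
        }

        /* Hand-built IPv6 header + ICMPv6 echo request to ::1 (illustrative). */
        memset(pkt, 0, sizeof(pkt));
        ip6->ip6_vfc  = 6 << 4;                 /* version 6 */
        ip6->ip6_plen = htons(sizeof(*icmp6));
        ip6->ip6_nxt  = IPPROTO_ICMPV6;
        ip6->ip6_hlim = 64;
        inet_pton(AF_INET6, "::1", &ip6->ip6_src);
        inet_pton(AF_INET6, "::1", &ip6->ip6_dst);
        icmp6->icmp6_type = ICMP6_ECHO_REQUEST;

        inet_pton(AF_INET6, "::1", &dst.sin6_addr);

        /* On kernels without this fix, the sendto() below fails with EFAULT. */
        if (sendto(s, pkt, sizeof(pkt), 0,
                   (struct sockaddr *)&dst, sizeof(dst)) < 0)
                perror("sendto");

        close(s);
        return 0;
}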
2019-06-06  ipv6: use READ_ONCE() for inet->hdrincl as in ipv4  (Olivier Matz)  1 file, -2/+10
As it was done in commit 8f659a03a0ba ("net: ipv4: fix for a race condition in raw_sendmsg") and commit 20b50d79974e ("net: ipv4: emulate READ_ONCE() on ->hdrincl bit-field in raw_sendmsg()") for ipv4, copy the value of inet->hdrincl in a local variable, to avoid introducing a race condition in the next commit. Signed-off-by: Olivier Matz <olivier.matz@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-05  ipv6: tcp: send consistent flowlabel in TIME_WAIT state  (Eric Dumazet)  1 file, -0/+2
After commit 1d13a96c74fc ("ipv6: tcp: fix flowlabel value in ACK messages"), we stored in tw_flowlabel the flowlabel, in the case ACK packets needed to be sent on behalf of a TIME_WAIT socket. We can use the same field so that RST packets sent from TIME_WAIT state also use a consistent flowlabel. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Florent Fourcot <florent.fourcot@wifirst.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-05  ipv6: tcp: enable flowlabel reflection in some RST packets  (Eric Dumazet)  3 files, -4/+14
When RST packets are sent because no socket could be found, it makes sense to use flowlabel_reflect sysctl to decide if a reflection of the flowlabel is requested. This extends commit 22b6722bfa59 ("ipv6: Add sysctl for per namespace flow label reflection"), for some TCP RST packets. In order to provide full control of this new feature, flowlabel_reflect becomes a bitmask. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>