Age  Commit message  Author  Files  Lines
2020-10-12netfilter: nf_tables: add inet ingress supportPablo Neira Ayuso5-8/+126
This patch adds a new ingress hook for the inet family. The inet ingress hook emulates the IP receive path, so unclean packets are dropped before walking over the ruleset in this basechain. This patch also introduces the nft_base_chain_netdev() helper function to check whether this hook is bound to one or more devices (through the hook list infrastructure). This check allows the control plane to handle an inet ingress chain in the same way as a netdev ingress chain. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-10-12netfilter: add inet ingress supportPablo Neira Ayuso2-21/+83
This patch adds the NF_INET_INGRESS pseudo hook for the NFPROTO_INET family. It maps this new hook to the existing NFPROTO_NETDEV family and NF_NETDEV_INGRESS hook. The hook does not guarantee that packets are inet only; users must filter out non-IP traffic explicitly. This infrastructure makes it easier to support this new hook in nf_tables. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
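A minimal sketch of a module registering at the new hook point (module name, device name and priority are hypothetical, not part of this series); as noted above, the hook is bound per device and the callback must filter out non-IP traffic itself:

  #include <linux/module.h>
  #include <linux/netdevice.h>
  #include <linux/netfilter.h>
  #include <linux/if_ether.h>

  static unsigned int demo_inet_ingress(void *priv, struct sk_buff *skb,
                                        const struct nf_hook_state *state)
  {
          /* NF_INET_INGRESS does not guarantee IP-only traffic. */
          if (skb->protocol != htons(ETH_P_IP) &&
              skb->protocol != htons(ETH_P_IPV6))
                  return NF_ACCEPT;

          /* ... inspect the packet ... */
          return NF_ACCEPT;
  }

  static struct nf_hook_ops demo_ops = {
          .hook     = demo_inet_ingress,
          .pf       = NFPROTO_INET,
          .hooknum  = NF_INET_INGRESS,
          .priority = 0,
  };

  static int __init demo_init(void)
  {
          int ret;

          /* Ingress hooks are per device; bind to a known interface. */
          demo_ops.dev = dev_get_by_name(&init_net, "eth0");
          if (!demo_ops.dev)
                  return -ENODEV;
          ret = nf_register_net_hook(&init_net, &demo_ops);
          if (ret)
                  dev_put(demo_ops.dev);
          return ret;
  }

  static void __exit demo_exit(void)
  {
          nf_unregister_net_hook(&init_net, &demo_ops);
          dev_put(demo_ops.dev);
  }

  module_init(demo_init);
  module_exit(demo_exit);
  MODULE_LICENSE("GPL");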
2020-10-12netfilter: add nf_ingress_hook() helper functionPablo Neira Ayuso1-2/+7
Add helper function to check if this is an ingress hook. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-10-12netfilter: add nf_static_key_{inc,dec}Pablo Neira Ayuso1-6/+17
Add helper functions to increment and decrement the hook static keys. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-10-12ipvs: inspect reply packets from DR/TUN real serverslongguang.yue2-15/+22
Just like for MASQ, inspect the reply packets coming from DR/TUN real servers and alter the connection's state and timeout according to the protocol. It is IPVS's duty to update the traffic statistics for packets that hit a connection, no matter which forwarding mode is in use. Signed-off-by: longguang.yue <bigclouds@163.com> Signed-off-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2020-10-11bpf: Migrate from patchwork.ozlabs.org to patchwork.kernel.org.Alexei Starovoitov1-2/+2
Move the bpf/bpf-next patch processing queue to patchwork.kernel.org. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20201011200149.66537-1-alexei.starovoitov@gmail.com
2020-10-11bpf: Always return target ifindex in bpf_fib_lookupToke Høiland-Jørgensen1-1/+2
The bpf_fib_lookup() helper performs a neighbour lookup for the destination IP and returns BPF_FIB_LKUP_RET_NO_NEIGH if this fails, with the expectation that the BPF program will pass the packet up the stack in this case. However, with the addition of bpf_redirect_neigh(), that helper can now be used instead to perform the neighbour lookup, at the cost of a bit of duplicated work. For that we still need the target ifindex, and since bpf_fib_lookup() already has it at the time it performs the neighbour lookup, there is really no reason why it can't just return it in any case. So let's always return the ifindex if the FIB lookup itself succeeds. Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Cc: David Ahern <dsahern@gmail.com> Link: https://lore.kernel.org/bpf/20201009184234.134214-1-toke@redhat.com
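For illustration, a hedged sketch of a tc BPF program built on this behaviour (section name, program name and the IPv4-only parsing are assumptions, not the kernel's code); the two-argument bpf_redirect_neigh() form shown is the one introduced alongside this series:

  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include <linux/ip.h>
  #include <linux/pkt_cls.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_endian.h>

  #ifndef AF_INET
  #define AF_INET 2
  #endif

  SEC("tc")
  int fwd_prog(struct __sk_buff *skb)
  {
          void *data = (void *)(long)skb->data;
          void *data_end = (void *)(long)skb->data_end;
          struct ethhdr *eth = data;
          struct iphdr *iph = data + sizeof(*eth);
          struct bpf_fib_lookup fib = {};
          long rc;

          if ((void *)(iph + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
                  return TC_ACT_OK;

          fib.family   = AF_INET;
          fib.ipv4_src = iph->saddr;
          fib.ipv4_dst = iph->daddr;
          fib.ifindex  = skb->ingress_ifindex;

          rc = bpf_fib_lookup(skb, &fib, sizeof(fib), 0);
          if (rc == BPF_FIB_LKUP_RET_NO_NEIGH)
                  /* fib.ifindex is now filled in even without a neighbour
                   * entry, so the kernel can resolve the neighbour for us. */
                  return bpf_redirect_neigh(fib.ifindex, 0);

          return TC_ACT_OK;
  }

  char _license[] SEC("license") = "GPL";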
2020-10-11Merge branch 'samples: bpf: Refactor XDP programs with libbpf'Alexei Starovoitov6-161/+230
"Daniel T. Lee" says: ==================== To avoid confusion caused by the increasing fragmentation of the BPF loader programs, this series converts users of the previous bpf_load loader to the libbpf loader. Thanks to libbpf's bpf_link interface, managing a tracepoint BPF program is much easier: bpf_program__attach_tracepoint() handles both enabling the tracepoint event and attaching the BPF program to it through a single bpf_link, so there is no need to manage event_fd and prog_fd separately. And thanks to the addition of the generic bpf_program__attach() to libbpf, it is now possible to attach BPF programs with __attach() instead of explicitly calling __attach_<type>(). This patchset refactors xdp_monitor using this libbpf API; bpf_load is removed and the sample is migrated to libbpf. Also, attach_tracepoint() is replaced with the generic __attach() method in xdp_redirect_cpu. Moreover, the maps in the kern programs have been converted to BTF-defined maps.
---
Changes in v2:
- added cleanup logic for bpf_link and bpf_object in xdp_monitor
- program section match with bpf_program__is_<type> instead of strncmp
- revert BTF key/val type to default of BPF_MAP_TYPE_PERF_EVENT_ARRAY
- split increment into separate statement
- refactor pointer array initialization
- error code cleanup
==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-10-11samples: bpf: Refactor XDP kern program maps with BTF-defined mapDaniel T. Lee3-39/+36
Most of the samples were converted to use the new BTF-defined maps as they moved to libbpf, but a few samples were missed. Instead of using the previous BPF map definition, this commit refactors the xdp_monitor and xdp_sample_pkts_kern map definitions to the new BTF-defined map format. Also, this commit removes the max_entries attribute from the PERF_EVENT_ARRAY maps; libbpf's bpf_object__create_map() will automatically set max_entries to the maximum configured number of CPUs on the host. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201010181734.1109-4-danieltimlee@gmail.com
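As an illustration of the conversion (map name hypothetical), the BTF-defined form of such a PERF_EVENT_ARRAY looks roughly like this; omitting max_entries lets libbpf size the map to the number of configured CPUs:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* Legacy bpf_load-style definition being replaced:
   *
   *   struct bpf_map_def SEC("maps") my_perf_map = {
   *           .type        = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
   *           .key_size    = sizeof(int),
   *           .value_size  = sizeof(__u32),
   *           .max_entries = 64,
   *   };
   */

  /* BTF-defined equivalent; no max_entries, libbpf fills in the CPU count. */
  struct {
          __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
          __uint(key_size, sizeof(int));
          __uint(value_size, sizeof(__u32));
  } my_perf_map SEC(".maps");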
2020-10-11samples: bpf: Replace attach_tracepoint() to attach() in xdp_redirect_cpuDaniel T. Lee2-82/+73
From commit d7a18ea7e8b6 ("libbpf: Add generic bpf_program__attach()") onward, it is possible for some BPF programs to be attached with __attach() instead of explicitly calling __attach_<type>(). This commit replaces __attach_tracepoint() with libbpf's generic __attach() method. In addition, it reworks the logic of setting the map FD to simplify the code. Also, the missing removal of bpf_load.o in the Makefile has been fixed. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201010181734.1109-3-danieltimlee@gmail.com
2020-10-11samples: bpf: Refactor xdp_monitor with libbpfDaniel T. Lee2-40/+121
To avoid confusion caused by the increasing fragmentation of the BPF loader programs, this commit switches xdp_monitor from the bpf_load loader to libbpf. Thanks to libbpf's bpf_link interface, managing a tracepoint BPF program is much easier: bpf_program__attach_tracepoint() handles both enabling the tracepoint event and attaching the BPF program to it through a single bpf_link, so there is no need to manage event_fd and prog_fd separately. This commit refactors xdp_monitor using this libbpf API; bpf_load is removed and the sample is migrated to libbpf. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201010181734.1109-2-danieltimlee@gmail.com
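A rough user-space sketch of the libbpf flow the sample moves to (object file name, link count and error handling simplified; this is not the sample's exact code):

  #include <bpf/libbpf.h>

  int main(void)
  {
          struct bpf_link *links[16] = {};
          struct bpf_program *prog;
          struct bpf_object *obj;
          int err, n = 0;

          obj = bpf_object__open_file("xdp_monitor_kern.o", NULL);
          if (libbpf_get_error(obj))
                  return 1;

          err = bpf_object__load(obj);
          if (err)
                  goto cleanup;

          /* Generic attach: libbpf derives tracepoint/raw_tp/... from SEC(). */
          bpf_object__for_each_program(prog, obj) {
                  if (n == 16)
                          break;
                  links[n] = bpf_program__attach(prog);
                  if (libbpf_get_error(links[n])) {
                          links[n] = NULL;
                          err = -1;
                          goto cleanup;
                  }
                  n++;
          }

          /* ... poll the maps and print statistics ... */

  cleanup:
          while (n--)
                  bpf_link__destroy(links[n]);  /* detaches and closes the link */
          bpf_object__close(obj);
          return err ? 1 : 0;
  }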
2020-10-11Merge branch 'Offload-tc-vlan-mangle-to-mscc_ocelot-switch'Jakub Kicinski4-6/+119
Vladimir Oltean says: ==================== Offload tc-vlan mangle to mscc_ocelot switch This series offloads one more action to the VCAP IS1 ingress TCAM, which is to change the classified VLAN for packets, according to the VCAP IS1 keys (VLAN, source MAC, source IP, EtherType, etc). ==================== Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11selftests: net: mscc: ocelot: add test for VLAN modify actionVladimir Oltean1-2/+45
Create a test that changes a VLAN ID from 200 to 300. We also need to modify the preferences of the filters installed for the other rules so that they are unique, because we now install the "tc-vlan modify" filter in VCAP IS1 only temporarily, and we need to perform the deletion by filter preference number. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11net: dsa: tag_ocelot: use VLAN information from tagging header when availableVladimir Oltean1-0/+34
When the Extraction Frame Header contains a valid classified VLAN, use that instead of the VLAN header present in the packet. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11net: mscc: ocelot: offload VLAN mangle action to VCAP IS1Vladimir Oltean2-4/+40
The VCAP_IS1_ACT_VID_REPLACE_ENA action, from the VCAP IS1 ingress TCAM, changes the classified VLAN. We are only exposing this ability for switch ports that are under VLAN aware bridges. This is because in standalone ports mode and under a bridge with vlan_filtering=0, the ocelot driver configures the switch to operate as VLAN-unaware, so the classified VLAN is not derived from the 802.1Q header from the packet, but instead is always equal to the port-based VLAN ID of the ingress port.

We _can_ still change the classified VLAN for packets when operating in this mode, but the end result will most likely be a drop, since both the ingress and the egress port need to be members of the modified VLAN. And even if we install the new classified VLAN into the VLAN table of the switch, the result would still not be as expected: we wouldn't see, on the output port, the modified VLAN tag, but the original one, even though the classified VLAN was indeed modified. This is because of how the hardware works: on egress, what is pushed to the frame is a "port tag", which gives us the following options:

- Tag all frames with port tag (derived from the classified VLAN)
- Tag all frames with port tag, except if the classified VLAN is 0 or equal to the native VLAN of the egress port
- No port tag

Needless to say, in VLAN-unaware mode we are disabling the port tag. Otherwise, the existing VLAN tag would be ignored, and a second VLAN tag (the port tag), holding the classified VLAN, would be pushed (instead of replacing the existing 802.1Q tag). This is definitely not what the user wanted when installing a "vlan modify" action. So it is simply not worth bothering with VLAN modify rules under other configurations except when the ports are fully VLAN-aware.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11Merge branch 'enetc-Migrate-to-PHYLINK-and-PCS_LYNX'Jakub Kicinski8-203/+243
Claudiu Manoil says: ==================== enetc: Migrate to PHYLINK and PCS_LYNX Transitioning the enetc driver from phylib to phylink. Offloading the serdes configuration to the PCS_LYNX module is a mandatory part of this transition. Aiming for a cleaner, more maintainable design, and better code reuse. The first 2 patches are clean up prerequisites. Tested on a p1028rdb board. v2: validate() explicitly rejects now all interface modes not supported by the driver instead of relying on the device tree to provide only supported interfaces, and dropped redundant activation of pcs_poll (addressing Ioana's findings) ==================== Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11enetc: Migrate to PHYLINK and PCS_LYNXClaudiu Manoil7-166/+191
This is a methodical transition of the driver from phylib to phylink, following the guidelines from sfp-phylink.rst. The MAC register configurations based on interface mode were moved from the probing path to the mac_config() hook. MAC enable and disable commands (enabling Rx and Tx paths at MAC level) were also extracted and assigned to their corresponding phylink hooks. As part of the migration to phylink, the serdes configuration from the driver was offloaded to the PCS_LYNX module, introduced in commit 0da4c3d393e4 ("net: phy: add Lynx PCS module"), the PCS_LYNX module being a mandatory component required to make the enetc driver work with phylink. Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Reviewed-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11arm64: dts: fsl-ls1028a-rdb: Specify in-band mode for ENETC port 0Claudiu Manoil1-0/+1
As part of the transition of the enetc ethernet driver from phylib to phylink, the in-band operation mode of the SGMII interface from enetc port 0 needs to be specified explicitly for phylink. Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11enetc: Clean up serdes configurationClaudiu Manoil1-53/+48
Decouple internal mdio bus creation from serdes configuration, as a prerequisite to offloading serdes configuration to a different module. Group together mdio bus creation routines, cleanup. Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Reviewed-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11enetc: Clean up MAC and link configurationClaudiu Manoil1-48/+67
Decouple MAC-level configuration based on PHY interface type from general port configuration. Group together the MAC and link configuration code. Decouple external MDIO bus creation from interface type parsing. No longer return an (unhandled) error code when phy_node is not found; use phy_node to indicate whether the port has a PHY or not. No longer fall through when serdes configuration fails for the link modes that require internal link configuration. Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Reviewed-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-11Merge branch 'Follow-up BPF helper improvements'Alexei Starovoitov17-225/+489
Daniel Borkmann says: ==================== This series addresses most of the feedback [0] that was to be followed up from the last series, that is, UAPI helper comment improvements and getting rid of the ifindex obj file hacks in the selftest by using a BPF map instead. The __sk_buff data/data_end pointer work I'm planning to do in a later round, as well as the mem*() BPF improvements we have in Cilium for libbpf. Next, the series adds two features: i) a helper called redirect_peer() to improve latency on netns switch, and ii) support for map in map with dynamic inner array map sizes. Selftests for each are added as well. For details, please check the individual patches, thanks!

[0] https://lore.kernel.org/bpf/cover.1601477936.git.daniel@iogearbox.net/

v5 -> v6:
- Going with Andrii's suggestion to make the misconfigured verifier test more robust, and only probe on -EOPNOTSUPP (Andrii)
v4 -> v5:
- Replace cnt == -EOPNOTSUPP check with cnt < 0; I've used < 0 here as I think it's useful to keep the existing cnt == 0 || cnt >= ARRAY_SIZE(insn_buf) for error detection (Andrii)
v3 -> v4:
- Rename new array map flag to BPF_F_INNER_MAP (Alexei)
v2 -> v3:
- Remove tab that slipped into uapi helper desc (Jakub)
- Rework map in map for array to error from map_gen_lookup (Andrii)
v1 -> v2:
- Fixed selftest comment wrt inner1/inner2 value (Yonghong)
==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-10-11bpf, selftests: Add redirect_peer selftestDaniel Borkmann2-9/+61
Extend the test_tc_redirect test and add a small test that exercises the new redirect_peer() helper for the IPv4 and IPv6 case. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20201010234006.7075-7-daniel@iogearbox.net
2020-10-11bpf, selftests: Make redirect_neigh test more extensibleDaniel Borkmann3-186/+219
Rename into test_tc_redirect.sh and move setup and test code into separate functions so they can be reused for newly added tests in here. Also remove the crude hack that overrides the ifindex inside the object file via xxd and sed and just use a simple map instead. A map is used given that iproute2 does not fully support BTF and therefore neither does it support global data at this point. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20201010234006.7075-6-daniel@iogearbox.net
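The general shape of passing such configuration through a map instead of patching the object file, as a hedged sketch with hypothetical names (the selftest itself is loaded via iproute2, which uses its own map declaration format rather than the BTF-defined one shown here):

  #include <linux/bpf.h>
  #include <linux/pkt_cls.h>
  #include <bpf/bpf_helpers.h>

  /* One-slot array holding the target ifindex; user space (or the test
   * script) writes it with bpf_map_update_elem() after loading. */
  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, __u32);
          __type(value, __u32);
  } ifindex_map SEC(".maps");

  SEC("tc")
  int redirect_cfg(struct __sk_buff *skb)
  {
          __u32 key = 0, *ifindex;

          ifindex = bpf_map_lookup_elem(&ifindex_map, &key);
          if (!ifindex || !*ifindex)
                  return TC_ACT_OK;
          return bpf_redirect(*ifindex, 0);
  }

  char _license[] SEC("license") = "GPL";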
2020-10-11bpf, selftests: Add test for different array inner map sizeDaniel Borkmann2-10/+72
Extend the "diff_size" subtest to also include a non-inlined array map variant where dynamic inner #elems are possible. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20201010234006.7075-5-daniel@iogearbox.net
2020-10-11bpf: Allow for map-in-map with dynamic inner array map entriesDaniel Borkmann7-13/+26
Recent work in f4d05259213f ("bpf: Add map_meta_equal map ops") and 134fede4eecf ("bpf: Relax max_entries check for most of the inner map types") added support for dynamic inner max elements for most map-in-map types. Exceptions were maps like array or prog array where the map_gen_lookup() callback uses the maps' max_entries field as a constant when emitting instructions.

We recently implemented Maglev consistent hashing into Cilium's load balancer which uses map-in-map with an outer map being hash and inner being array holding the Maglev backend table for each service. This has been designed this way in order to reduce overall memory consumption given the outer hash map allows to avoid preallocating a large, flat memory area for all services. Also, the number of service mappings is not always known a-priori.

The use case for dynamic inner array map entries is to further reduce memory overhead, for example, some services might just have a small number of back ends while others could have a large number. Right now the Maglev backend table for small and large number of backends would need to have the same inner array map entries which adds a lot of unneeded overhead.

Dynamic inner array map entries can be realized by avoiding the inlined code generation for their lookup. The lookup will still be efficient since it will be calling into array_map_lookup_elem() directly and thus avoiding retpoline. The patch adds a BPF_F_INNER_MAP flag to map creation which therefore skips inline code generation and relaxes array_map_meta_equal() check to ignore both maps' max_entries. This also still allows to have faster lookups for map-in-map when BPF_F_INNER_MAP is not specified and hence dynamic max_entries not needed.

Example code generation where inner map is dynamic sized array:

  # bpftool p d x i 125
  int handle__sys_enter(void * ctx):
  ; int handle__sys_enter(void *ctx)
     0: (b4) w1 = 0
  ; int key = 0;
     1: (63) *(u32 *)(r10 -4) = r1
     2: (bf) r2 = r10
  ;
     3: (07) r2 += -4
  ; inner_map = bpf_map_lookup_elem(&outer_arr_dyn, &key);
     4: (18) r1 = map[id:468]
     6: (07) r1 += 272
     7: (61) r0 = *(u32 *)(r2 +0)
     8: (35) if r0 >= 0x3 goto pc+5
     9: (67) r0 <<= 3
    10: (0f) r0 += r1
    11: (79) r0 = *(u64 *)(r0 +0)
    12: (15) if r0 == 0x0 goto pc+1
    13: (05) goto pc+1
    14: (b7) r0 = 0
    15: (b4) w6 = -1
  ; if (!inner_map)
    16: (15) if r0 == 0x0 goto pc+6
    17: (bf) r2 = r10
  ;
    18: (07) r2 += -4
  ; val = bpf_map_lookup_elem(inner_map, &key);
    19: (bf) r1 = r0                               | No inlining but instead
    20: (85) call array_map_lookup_elem#149280     | call to array_map_lookup_elem()
  ; return val ? *val : -1;                        | for inner array lookup.
    21: (15) if r0 == 0x0 goto pc+1
  ; return val ? *val : -1;
    22: (61) r6 = *(u32 *)(r0 +0)
  ; }
    23: (bc) w0 = w6
    24: (95) exit

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20201010234006.7075-4-daniel@iogearbox.net
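A hedged sketch of how such a map-in-map could be declared with libbpf conventions (map and program names hypothetical); the inner template carries BPF_F_INNER_MAP so inner arrays created at runtime may use a different max_entries:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct inner_array {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(map_flags, BPF_F_INNER_MAP); /* no inlined lookup, size check relaxed */
          __uint(max_entries, 1);             /* template only */
          __type(key, __u32);
          __type(value, __u64);
  };

  struct {
          __uint(type, BPF_MAP_TYPE_HASH_OF_MAPS);
          __uint(max_entries, 256);
          __type(key, __u32);
          __array(values, struct inner_array);
  } outer_map SEC(".maps");

  SEC("tracepoint/syscalls/sys_enter_getpid")
  int lookup_backend(void *ctx)
  {
          __u32 svc = 0, slot = 0;
          __u64 *val;
          void *inner;

          inner = bpf_map_lookup_elem(&outer_map, &svc);
          if (!inner)
                  return 0;
          /* Non-inlined path: calls array_map_lookup_elem() directly. */
          val = bpf_map_lookup_elem(inner, &slot);
          if (val)
                  bpf_printk("backend %llu", *val);
          return 0;
  }

  char _license[] SEC("license") = "GPL";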
2020-10-11bpf: Add redirect_peer helperDaniel Borkmann6-10/+106
Add an efficient ingress to ingress netns switch that can be used out of tc BPF programs in order to redirect traffic from host ns ingress into a container veth device ingress without having to go via CPU backlog queue [0]. For local containers this can also be utilized and path via CPU backlog queue only needs to be taken once, not twice. On a high level this borrows from ipvlan which does similar switch in __netif_receive_skb_core() and then iterates via another_round. This helps to reduce latency for mentioned use cases.

Pod to remote pod with redirect(), TCP_RR [1]:

  # percpu_netperf 10.217.1.33
        RT_LATENCY:        122.450  (per CPU: 122.666 122.401 122.333 122.401 )
        MEAN_LATENCY:      121.210  (per CPU: 121.100 121.260 121.320 121.160 )
        STDDEV_LATENCY:    120.040  (per CPU: 119.420 119.910 125.460 115.370 )
        MIN_LATENCY:        46.500  (per CPU:  47.000  47.000  47.000  45.000 )
        P50_LATENCY:       118.500  (per CPU: 118.000 119.000 118.000 119.000 )
        P90_LATENCY:       127.500  (per CPU: 127.000 128.000 127.000 128.000 )
        P99_LATENCY:       130.750  (per CPU: 131.000 131.000 129.000 132.000 )
        TRANSACTION_RATE: 32666.400  (per CPU: 8152.200 8169.842 8174.439 8169.897 )

Pod to remote pod with redirect_peer(), TCP_RR:

  # percpu_netperf 10.217.1.33
        RT_LATENCY:         44.449  (per CPU:  43.767  43.127  45.279  45.622 )
        MEAN_LATENCY:       45.065  (per CPU:  44.030  45.530  45.190  45.510 )
        STDDEV_LATENCY:     84.823  (per CPU:  66.770  97.290  84.380  90.850 )
        MIN_LATENCY:        33.500  (per CPU:  33.000  33.000  34.000  34.000 )
        P50_LATENCY:        43.250  (per CPU:  43.000  43.000  43.000  44.000 )
        P90_LATENCY:        46.750  (per CPU:  46.000  47.000  47.000  47.000 )
        P99_LATENCY:        52.750  (per CPU:  51.000  54.000  53.000  53.000 )
        TRANSACTION_RATE: 90039.500  (per CPU: 22848.186 23187.089 22085.077 21919.130 )

[0] https://linuxplumbersconf.org/event/7/contributions/674/attachments/568/1002/plumbers_2020_cilium_load_balancer.pdf
[1] https://github.com/borkmann/netperf_scripts/blob/master/percpu_netperf

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20201010234006.7075-3-daniel@iogearbox.net
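A minimal tc BPF sketch of the new helper (the ifindex is hardcoded here purely for illustration); the redirect lands directly on the ingress side of the veth peer in the target netns, skipping the CPU backlog queue:

  #include <linux/bpf.h>
  #include <linux/pkt_cls.h>
  #include <bpf/bpf_helpers.h>

  #define HOST_VETH_IFINDEX 5  /* hypothetical host-side ifindex of the veth pair */

  SEC("tc")
  int host_ingress(struct __sk_buff *skb)
  {
          /* Switch netns and deliver to the peer device's ingress. */
          return bpf_redirect_peer(HOST_VETH_IFINDEX, 0);
  }

  char _license[] SEC("license") = "GPL";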
2020-10-11bpf: Improve bpf_redirect_neigh helper descriptionDaniel Borkmann2-6/+14
Follow-up to address David's feedback that we should better describe internals of the bpf_redirect_neigh() helper. Suggested-by: David Ahern <dsahern@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: David Ahern <dsahern@gmail.com> Link: https://lore.kernel.org/bpf/20201010234006.7075-2-daniel@iogearbox.net
2020-10-10drivers/net/wan/hdlc_fr: Move the skb_headroom check out of fr_hard_headerXie He1-13/+17
Move the skb_headroom check out of fr_hard_header and into pvc_xmit. This has two benefits:

1. Originally we only did this check for skbs sent by users on Ethernet-emulating PVC devices. After the change we do this check for skbs sent on normal PVC devices, too. (Also add a comment to make it clear that this is only a protection against upper layers that don't take dev->needed_headroom into account. Such upper layers should be rare, and I believe they should be fixed.)

2. After the change we can simplify the parameter list of fr_hard_header. We no longer need to use a pointer to a pointer (skb_p) because we no longer need to replace the skb inside fr_hard_header.

Cc: Krzysztof Halasa <khc@pm.waw.pl> Signed-off-by: Xie He <xie.he.0141@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
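The general shape of such a guard in an ndo_start_xmit handler, as a hedged sketch (function name hypothetical, not the driver's exact code):

  #include <linux/netdevice.h>
  #include <linux/skbuff.h>

  static netdev_tx_t demo_pvc_xmit(struct sk_buff *skb, struct net_device *dev)
  {
          /* Upper layers should honour dev->needed_headroom; this only
           * protects against the rare ones that do not. */
          if (skb_headroom(skb) < dev->needed_headroom &&
              skb_cow_head(skb, dev->needed_headroom)) {
                  dev->stats.tx_dropped++;
                  kfree_skb(skb);
                  return NETDEV_TX_OK;
          }

          /* ... push the Frame Relay header and hand off to the lower device ... */
          return NETDEV_TX_OK;
  }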
2020-10-10net: dsa: rtl8366rb: Roof MTU for switchLinus Walleij3-5/+39
The MTU setting for this DSA switch is global, so we need to keep track of the MTU set for each port; then, as soon as any port MTU changes, take the largest MTU across all ports and program that into the global switch MTU setting. To achieve this we need a per-chip-variant state container for the RTL8366RB to use for the RTL8366RB-specific stuff. Other SMI switches do seem to have per-port MTU setting capabilities. Fixes: 5f4a8ef384db ("net: dsa: rtl8366rb: Support setting MTU") Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-10net: phy: Move of_mdio from drivers/of to drivers/net/mdioCalvin Johnson6-9/+11
A better place for of_mdio.c is drivers/net/mdio, so move it from drivers/of to drivers/net/mdio. Signed-off-by: Calvin Johnson <calvin.johnson@oss.nxp.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-10dpaa_eth: enable NETIF_MSG_HW by defaultMaxim Kochetkov1-1/+1
When packets are received on the error queue, this netif_err() call, guarded by net_ratelimit(): netif_err(priv, hw, net_dev, "Err FD status = 0x%08x\n"); never actually prints anything. Instead we only see:

  [ 3658.845592] net_ratelimit: 244 callbacks suppressed
  [ 3663.969535] net_ratelimit: 230 callbacks suppressed
  [ 3669.085478] net_ratelimit: 228 callbacks suppressed

Enabling NETIF_MSG_HW fixes this issue, and we can see some information about the frame descriptors of the packets. Signed-off-by: Maxim Kochetkov <fido_max@inbox.ru> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Madalin Bucur <madalin.bucur@oss.nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
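For context, a hedged sketch of how the NETIF_MSG_* bits gate these prints (the private structure and function are hypothetical):

  #include <linux/net.h>
  #include <linux/netdevice.h>

  struct demo_priv {
          struct net_device *net_dev;
          u32 msg_enable;  /* consulted by the netif_msg_hw() check inside netif_err() */
  };

  static void demo_report_fd_error(struct demo_priv *priv, u32 fd_status)
  {
          /* Expands roughly to: if (priv->msg_enable & NETIF_MSG_HW) netdev_err(...);
           * so without NETIF_MSG_HW in the default mask, only the
           * "callbacks suppressed" summaries from net_ratelimit() show up. */
          if (net_ratelimit())
                  netif_err(priv, hw, priv->net_dev,
                            "Err FD status = 0x%08x\n", fd_status);
  }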
2020-10-10r8169: factor out handling rtl8169_statsHeiner Kallweit1-21/+25
Factor out handling the private packet/byte counters to new functions rtl_get_priv_stats() and rtl_inc_priv_stats(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-10net: usbnet: remove driver versionHeiner Kallweit1-4/+0
Obviously this driver version doesn't make sense. Go with the default and let ethtool display the kernel version. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-10net: thunderx: Use struct_size() helper in kmalloc()Gustavo A. R. Silva1-2/+2
Make use of the new struct_size() helper instead of the offsetof() idiom. Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
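Illustrating the conversion pattern with hypothetical structure names (not the thunderx code itself):

  #include <linux/overflow.h>
  #include <linux/slab.h>

  struct demo_qset {
          int cnt;
          struct demo_queue {
                  u32 head;
                  u32 tail;
          } queues[];                     /* flexible array member */
  };

  static struct demo_qset *demo_alloc_qset(int n)
  {
          struct demo_qset *qs;

          /* Before: kmalloc(offsetof(struct demo_qset, queues[n]), GFP_KERNEL);
           * struct_size() computes the same size and saturates on overflow. */
          qs = kmalloc(struct_size(qs, queues, n), GFP_KERNEL);
          if (qs)
                  qs->cnt = n;
          return qs;
  }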
2020-10-10Merge tag 'wireless-drivers-next-2020-10-09' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-nextJakub Kicinski66-314/+1434
Kalle Valo says: ==================== wireless-drivers-next patches for v5.10

Fourth and last set of patches for v5.10. Most of these are iwlwifi patches, but there are a few small fixes for other drivers as well.

Major changes:

iwlwifi
* PNVM support (platform-specific phy config data)
* bump the FW API support to 59
==================== Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-10Merge tag 'mac80211-next-for-net-next-2020-10-08' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-nextJakub Kicinski13-495/+240
Johannes Berg says: ==================== A handful of changes:
* fixes for the recent S1G work
* a docbook build time improvement
* API to pass beacon rate to lower-level driver
==================== Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-09Merge branch 'netlink-export-policy-on-validation-failures'Jakub Kicinski6-53/+159
Johannes Berg says: ==================== netlink: export policy on validation failures

Export the policy used for attribute validation when it fails, so e.g. for an out-of-range attribute userspace immediately gets the valid ranges back.

v2 incorporates the suggestion from Jakub to have a function to estimate the size (netlink_policy_dump_attr_size_estimate()) and check that it does the right thing on the *normal* policy dumps, not (just) when calling it from the error scenario.
v3 only addresses a few minor style issues.
v4 fixes up a forgotten 'git add' ... sorry.
v5 is a resend, I messed up v4's cover letter subject (saying v3) and apparently the second patch didn't go out at all.

Tested using nl80211/iw in a few scenarios, seems to work fine and return the policy back, e.g. kernel reports:

  integer out of range
  policy: 04 00 0b 00 0c 00 04 00 01 00 00 00 00 00 00 00
          (^ padding, ^ minimum allowed value)
  policy: 04 00 0b 00 0c 00 05 00 ff ff ff ff 00 00 00 00
          (^ padding, ^ maximum allowed value)
  policy: 08 00 01 00 04 00 00 00
          (^ type 4 == U32)

for an out-of-range case.
==================== Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-09netlink: export policy in extended ACKJohannes Berg6-27/+110
Add a new attribute NLMSGERR_ATTR_POLICY to the extended ACK to advertise the policy, e.g. if an attribute was out of range, you'll know the range that's permissible. Add a new NL_SET_ERR_MSG_ATTR_POL() macro to set this, since realistically it's only useful to do this when the bad attribute (offset) is also returned. Use it in lib/nlattr.c, which practically does all the policy validation.

v2: - add and use netlink_policy_dump_attr_size_estimate()
v3: - remove redundant break
v4: - really remove redundant break ... sorry

Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
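For example, with a ranged policy like the hedged sketch below (family and attribute names hypothetical), an out-of-range value now produces an extended ACK that carries NLMSGERR_ATTR_POLICY describing the permitted 1..3600 range alongside the offending attribute's offset:

  #include <net/netlink.h>

  enum {
          DEMO_ATTR_UNSPEC,
          DEMO_ATTR_PERIOD,       /* u32, seconds */
          DEMO_ATTR_LABEL,        /* NUL-terminated string */
          __DEMO_ATTR_MAX,
  };
  #define DEMO_ATTR_MAX (__DEMO_ATTR_MAX - 1)

  static const struct nla_policy demo_policy[DEMO_ATTR_MAX + 1] = {
          [DEMO_ATTR_PERIOD] = NLA_POLICY_RANGE(NLA_U32, 1, 3600),
          [DEMO_ATTR_LABEL]  = { .type = NLA_NUL_STRING, .len = 31 },
  };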
2020-10-09netlink: policy: refactor per-attr policy writingJohannes Berg1-28/+51
Refactor the per-attribute policy writing into a new helper function, to be used later for dumping out the policy of a rejected attribute. v2: - fix some indentation v3: - change variable order in netlink_policy_dump_write() Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-09Merge branch 'net-smc-updates-2020-10-07'Jakub Kicinski1-44/+48
Karsten Graul says: ==================== net/smc: updates 2020-10-07 Patches 1 and 2 address warnings from static code checkers, and patch 3 handles the case when all proposed ISM V2 devices fail to init and no V1 devices are tried afterwards. ==================== Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-09net/smc: restore smcd_version when all ISM V2 devices failed to initKarsten Graul1-1/+5
Field ini->smcd_version is set to SMC_V2 before calling smc_listen_ism_init(). This clears the V1 bit that may be set. When all matching ISM V2 devices fail to initialize, the smcd_version field needs to be restored to allow any possible V1 devices to initialize. To be consistent, always go to the not_found label when no device was found. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-09net/smc: cleanup buffer usage in smc_listen_work()Karsten Graul1-6/+3
coccinelle informs about net/smc/af_smc.c:1770:10-11: WARNING: opportunity for kzfree/kvfree_sensitive It's not that kzfree() would help here; the memset() is done to prepare the buffer for another socket receive. Fix the warning message by reordering the calls, and while at it eliminate the unneeded variable cclc2 and use sizeof(*buf) as above in the same function. No functional changes. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-09net/smc: consolidate unlocking in same functionKarsten Graul1-37/+40
Static code checkers warn of inconsistent returns because the lgr mutex is locked in one function and unlocked in a function called by the locking function: net/smc/af_smc.c:823 smc_connect_rdma() warn: inconsistent returns 'smc_client_lgr_pending'. net/smc/af_smc.c:897 smc_connect_ism() warn: inconsistent returns 'smc_server_lgr_pending'. Make the code consistent by doing the unlock in the same function that fetches the lock. No functional changes. Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-09Merge tag 'linux-can-next-for-5.10-20201007' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-nextJakub Kicinski17-39/+1676
Marc Kleine-Budde says: ==================== linux-can-next-for-5.10-20201007

The first 3 patches are by me and fix several warnings found when compiling the kernel with W=1. Lukas Bulwahn's patch adjusts the MAINTAINERS file to accommodate the renaming of the mcp251xfd driver.

Vincent Mailhol contributes 3 patches for the CAN networking layer. First, error queue support is added to the CAN RAW protocol. The second patch converts the get_can_dlc() and get_canfd_dlc() in-kernel-only macros from using __u8 to u8. The third patch adds a helper function to calculate the length of one bit in multiples of time quanta.

Oliver Hartkopp's patch adds support for the ISO 15765-2:2016 transport protocol to the CAN stack. Three patches by Lad Prabhakar add documentation for various new rcar controllers to the device tree bindings of the rcar_can and rcar_canfd drivers. Michael Walle's patch adds various processors to the flexcan driver binding documentation.

The next two patches are by me and target the flexcan driver as well. They remove the ack_grp and ack_bit arguments from the fsl,stop-mode DT property and the driver, as they are not used anymore. As these are the last two arguments, this change will not break existing device trees.

The last three patches are by Srinivas Neeli and target the xilinx_can driver. The first one increases the lower limit for the bit rate prescaler to 2, the other two fix sparse and coverity findings.
==================== Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-09Merge branch '100GbE-Intel-Wired-LAN-Driver-Updates-2020-10-07'Jakub Kicinski11-58/+162
Tony Nguyen says: ==================== 100GbE Intel Wired LAN Driver Updates 2020-10-07 This series contains updates to ice driver only. Andy Shevchenko changes usage to %*phD format to print small buffer as hex string. Bruce removes repeated words reported by checkpatch. Ani changes ice_info_get_dsn() to return void as it always returns success. Jake adds devlink reporting of fw.app.bundle_id. Moves devlink_port structure to ice_vsi to resolve issues with cleanup. Adds additional debug info for firmware updates. Bixuan Cui resolves -Wpointer-to-int-cast warnings. Dan adds additional packet type masks and checks to prevent overwriting existing Flow Director rules. ==================== Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-09ice: fix adding IP4 IP6 Flow Director rulesDan Nowlin1-1/+61
A subsequent addition of an IP4 or IP6 rule after other rules would overwrite any existing TCAM entries of related L4 protocols (e.g. tcp4 or udp6). This was due to the mask including too many TCAM entries. Add new packet type masks with bits properly excluded so rules are not overwritten. Signed-off-by: Dan Nowlin <dan.nowlin@intel.com> Signed-off-by: Henry Tieman <henry.w.tieman@intel.com> Tested-by: Brijesh Behera <brijeshx.behera@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-09ice: Fix pointer cast warningsBixuan Cui1-2/+2
Pointers should be cast to unsigned long to avoid -Wpointer-to-int-cast warnings:

  drivers/net/ethernet/intel/ice/ice_flow.h:197:33: warning: cast from pointer to integer of different size
  drivers/net/ethernet/intel/ice/ice_flow.h:198:32: warning: cast to pointer from integer of different size

Signed-off-by: Bixuan Cui <cuibixuan@huawei.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-09ice: add additional debug logging for firmware updateJacob Keller3-6/+33
While debugging a recent failure to update the flash of an ice device, I found it helpful to add additional logging which helped determine the root cause of the problem being a timeout issue. Add some extra dev_dbg() logging messages which can be enabled using the dynamic debug facility, including one for ice_aq_wait_for_event that will use jiffies to capture a rough estimate of how long we waited for the completion of a firmware command. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Brijesh Behera <brijeshx.behera@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
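The kind of timing instrumentation described, sketched generically (function and messages are hypothetical, not the ice driver's exact code):

  #include <linux/completion.h>
  #include <linux/device.h>
  #include <linux/errno.h>
  #include <linux/jiffies.h>

  static int demo_wait_fw_event(struct device *dev, struct completion *done,
                                unsigned long timeout_j)
  {
          unsigned long start = jiffies;

          if (!wait_for_completion_timeout(done, timeout_j)) {
                  dev_dbg(dev, "timed out after %u ms waiting for firmware event\n",
                          jiffies_to_msecs(jiffies - start));
                  return -ETIMEDOUT;
          }

          dev_dbg(dev, "firmware event arrived after %u ms\n",
                  jiffies_to_msecs(jiffies - start));
          return 0;
  }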
2020-10-09ice: refactor devlink_port to be per-VSIJacob Keller5-33/+45
Currently, the devlink_port structure is stored within the ice_pf. This made sense because we create a single devlink_port for each PF. This setup does not mesh with the abstractions in the driver very well, and led to a flow where we accidentally call devlink_port_unregister twice during error cleanup. In particular, if devlink_port_register or devlink_port_unregister are called twice, this leads to a kernel panic. This appears to occur during some possible flows while cleaning up from a failure during driver probe. If register_netdev fails, then we will call devlink_port_unregister in ice_cfg_netdev as it cleans up. Later, we again call devlink_port_unregister since we assume that we must cleanup the port that is associated with the PF structure. This occurs because we cleanup the devlink_port for the main PF even though it was not allocated. We allocated the port within a per-VSI function for managing the main netdev, but did not release the port when cleaning up that VSI, the allocation and destruction are not aligned. Instead of attempting to manage the devlink_port as part of the PF structure, manage it as part of the PF VSI. Doing this has advantages, as we can match the de-allocation of the devlink_port with the unregister_netdev associated with the main PF VSI. Moving the port to the VSI is preferable as it paves the way for handling devlink ports allocated for other purposes such as SR-IOV VFs. Since we're changing up how we allocate the devlink_port, also change the indexing. Originally, we indexed the port using the PF id number. This came from an old goal of sharing a devlink for each physical function. Managing devlink instances across multiple function drivers is not workable. Instead, lets set the port number to the logical port number returned by firmware and set the index using the VSI index (sometimes referred to as VSI handle). Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-09ice: add the DDP Track ID to devlink infoJacob Keller2-0/+13
Add "fw.app.bundle_id" to display the DDP Track ID of the active DDP package. This id is similar to "fw.bundle_id" and is a unique identifier for the DDP package that is loaded in the device. Each new DDP has a unique Track ID generated for it, and the ID can be used to identify and track the DDP package. Add documentation for the new devlink info version. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>