2020-11-09  wireguard: switch to dev_get_tstats64 (Heiner Kallweit, 1 file, -1/+1)
Replace ip_tunnel_get_stats64() with the new identical core function dev_get_tstats64(). Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
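For anyone unfamiliar with the pattern, the change in these conversion patches is a one-line swap of the ndo_get_stats64 hook. A minimal sketch, with the ops struct abbreviated and hook names illustrative rather than the literal wireguard code:

  static const struct net_device_ops wg_netdev_ops = {
          .ndo_open           = wg_open,             /* unchanged */
          .ndo_stop           = wg_stop,             /* unchanged */
          .ndo_start_xmit     = wg_xmit,             /* unchanged */
          .ndo_get_stats64    = dev_get_tstats64,    /* was ip_tunnel_get_stats64 */
  };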
2020-11-09  gtp: switch to dev_get_tstats64 (Heiner Kallweit, 1 file, -1/+1)
Replace ip_tunnel_get_stats64() with the new identical core function dev_get_tstats64(). Acked-by: Harald Welte <laforge@gnumonks.org> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09  net: switch to dev_get_tstats64 (Heiner Kallweit, 3 files, -4/+4)
Replace ip_tunnel_get_stats64() with the new identical core function dev_get_tstats64(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09  ip6_tunnel: use ip_tunnel_get_stats64 as ndo_get_stats64 callback (Heiner Kallweit, 1 file, -31/+1)
Switch ip6_tunnel to the standard statistics pattern:
- use dev->stats for the less frequently accessed counters
- use dev->tstats for the frequently accessed counters
An additional benefit is that we now have 64bit statistics also on 32bit systems. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
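For context, the "frequently accessed" counters in this pattern live in per-CPU dev->tstats entries and are updated under a u64_stats sequence, which is what gives consistent 64bit values on 32bit systems. A rough sketch of the usual receive-path update, assuming skb and dev are in scope (not the exact ip6_tunnel code):

  struct pcpu_sw_netstats *tstats = this_cpu_ptr(dev->tstats);

  u64_stats_update_begin(&tstats->syncp);
  tstats->rx_packets++;
  tstats->rx_bytes += skb->len;
  u64_stats_update_end(&tstats->syncp);

  /* rarely hit error paths keep using the plain dev->stats fields */
  dev->stats.rx_dropped++;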
2020-11-09  tun: switch to net core provided statistics counters (Heiner Kallweit, 1 file, -87/+34)
Switch tun to the standard statistics pattern:
- use netdev->stats for the less frequently accessed counters
- use netdev->tstats for the frequently accessed per-cpu counters
v3:
- add atomic_long_t member rx_frame_errors for making counter updates atomic
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
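The v3 note is about keeping the frame-error count exact without per-CPU storage. A small sketch of the atomic_long_t idiom the commit describes, with the private struct paraphrased and tun/stats assumed to be in scope:

  struct tun_struct {
          /* ... other members ... */
          atomic_long_t rx_frame_errors;    /* bumped from the receive path */
  };

  /* drop path */
  atomic_long_inc(&tun->rx_frame_errors);

  /* ndo_get_stats64 */
  stats->rx_frame_errors = atomic_long_read(&tun->rx_frame_errors);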
2020-11-09  net: dsa: use net core stats64 handling (Heiner Kallweit, 3 files, -30/+8)
Use netdev->tstats instead of a member of dsa_slave_priv for storing a pointer to the per-cpu counters. This allows us to use core functionality for statistics handling. Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09  net: core: add dev_get_tstats64 as a ndo_get_stats64 implementation (Heiner Kallweit, 2 files, -0/+16)
It's a frequent pattern to use netdev->stats for the less frequently accessed counters and per-cpu counters for the frequently accessed counters (rx/tx bytes/packets). Add a default ndo_get_stats64() implementation for this use case. Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
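A hedged sketch of how a driver opts into this default: allocate the per-CPU tstats on init, free them on uninit, and point ndo_get_stats64 at the core helper, which combines dev->stats with the per-CPU tstats. The foo_* names are placeholders:

  static int foo_dev_init(struct net_device *dev)
  {
          dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
          return dev->tstats ? 0 : -ENOMEM;
  }

  static void foo_dev_uninit(struct net_device *dev)
  {
          free_percpu(dev->tstats);
  }

  static const struct net_device_ops foo_netdev_ops = {
          .ndo_init          = foo_dev_init,
          .ndo_uninit        = foo_dev_uninit,
          .ndo_get_stats64   = dev_get_tstats64,   /* dev->stats + per-cpu tstats */
  };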
2020-11-09  net: dsa: mv88e6xxx: Export VTU as devlink region (Tobias Waldekranz, 4 files, -3/+109)
Export the raw VTU data and related registers in a devlink region so that it can be inspected from userspace and compared to the current bridge configuration. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20201109082927.8684-1-tobias@waldekranz.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09  net: phy: microchip_t1: Don't set .config_aneg (Jisheng Zhang, 1 file, -1/+0)
microchip_t1 sets .config_aneg to genphy_config_aneg, which is unnecessary: the phy core calls genphy_config_aneg() when .config_aneg is NULL. Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20201109091605.3951c969@xhacker.debian Signed-off-by: Jakub Kicinski <kuba@kernel.org>
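In other words, a phy_driver entry can simply leave the hook out, as in this hedged sketch (ID, name and features here are placeholders, not the microchip_t1 values):

  static struct phy_driver example_t1_driver[] = {
          {
                  PHY_ID_MATCH_MODEL(0x00112233),   /* placeholder ID */
                  .name       = "Example 100BASE-T1 PHY",
                  .features   = PHY_BASIC_T1_FEATURES,
                  /* no .config_aneg: the core falls back to genphy_config_aneg() */
          },
  };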
2020-11-09  net/mlx4: Assign boolean values to a bool variable (Kaixu Xia, 1 file, -1/+1)
Fix the following coccinelle warnings: ./drivers/net/ethernet/mellanox/mlx4/en_rx.c:687:1-17: WARNING: Assignment of 0/1 to bool variable Reported-by: Tosk Robot <tencent_os_robot@tencent.com> Signed-off-by: Kaixu Xia <kaixuxia@tencent.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/1604732038-6057-1-git-send-email-kaixuxia@tencent.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
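The fix itself is the usual substitution of a bool literal; a minimal illustration with a hypothetical variable name:

  bool use_next_page;

  /* before: use_next_page = 0; */
  use_next_page = false;    /* assign true/false, not 0/1, to bool variables */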
2020-11-09  net: udp: remove redundant initialization in udp_dump_one (Menglong Dong, 1 file, -1/+1)
The initialization for 'err' with '-EINVAL' is redundant and can be removed, as it is updated soon and not used. Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn> Link: https://lore.kernel.org/r/1604644960-48378-2-git-send-email-dong.menglong@zte.com.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09  net: udp: remove redundant initialization in udp_send_skb (Menglong Dong, 1 file, -1/+1)
The initialization for 'err' with 0 is redundant and can be removed, as it is updated by ip_send_skb and not used before that. Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn> Link: https://lore.kernel.org/r/1604644960-48378-4-git-send-email-dong.menglong@zte.com.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09  bridge: mrp: Use hlist_head instead of list_head for mrp (Horatiu Vultur, 5 files, -17/+17)
Replace list_head with hlist_head for MRP list under the bridge. There is no need for a circular list when a linear list will work. This will also decrease the size of 'struct net_bridge'. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Link: https://lore.kernel.org/r/20201106215049.1448185-1-horatiu.vultur@microchip.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
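For reference, an hlist keeps only a single head pointer instead of a full prev/next node in the containing structure, which is where the 'struct net_bridge' size saving comes from. A hedged sketch of the pattern, with struct and field names illustrative and br/mrp/cur assumed to be in scope:

  struct net_bridge {
          /* ... */
          struct hlist_head mrp_list;      /* was: struct list_head */
  };

  struct br_mrp {
          struct hlist_node node;          /* was: struct list_head */
          /* ... */
  };

  /* insertion and traversal look much the same as before */
  hlist_add_head_rcu(&mrp->node, &br->mrp_list);
  hlist_for_each_entry(cur, &br->mrp_list, node)
          process(cur);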
2020-11-09  Merge branch 'net-packet-make-packet_fanout-arr-size-configurable-up-to-64k' (Jakub Kicinski, 4 files, -17/+109)
Tanner Love says: ==================== net/packet: make packet_fanout.arr size configurable up to 64K First patch makes the change; second patch adds unit tests. ==================== Link: https://lore.kernel.org/r/20201106180741.2839668-1-tannerlove.kernel@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09  selftests/net: test max_num_members, fanout_args in psock_fanout (Tanner Love, 1 file, -3/+69)
Add an additional control test that verifies:
- specifying two different max_num_members values fails
- specifying max_num_members > PACKET_FANOUT_MAX fails
In datapath tests, set max_num_members to PACKET_FANOUT_MAX. Signed-off-by: Tanner Love <tannerlove@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-09  net/packet: make packet_fanout.arr size configurable up to 64K (Tanner Love, 3 files, -14/+40)
One use case of PACKET_FANOUT is lockless reception with one socket per CPU. 256 is a practical limit on increasingly many machines. Increase PACKET_FANOUT_MAX to 64K. Expand setsockopt PACKET_FANOUT to take an extra argument max_num_members. Also explicitly define a fanout_args struct, instead of implicitly casting to an integer. This documents the API and simplifies the control flow. If max_num_members is not specified or is set to 0, then 256 is used, same as before. Signed-off-by: Tanner Love <tannerlove@google.com> Signed-off-by: Willem de Bruijn <willemb@google.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
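A hedged userspace sketch of the extended option: passing struct fanout_args (from a uapi linux/if_packet.h new enough to carry it) instead of the old packed integer. The fd is assumed to be an AF_PACKET socket created elsewhere:

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <linux/if_packet.h>

  /* Join fanout group 42 in hash mode with a larger group size limit. */
  static int join_fanout(int fd)
  {
          struct fanout_args args = {
                  .id              = 42,
                  .type_flags      = PACKET_FANOUT_HASH,
                  .max_num_members = 4096,    /* 0 keeps the old default of 256 */
          };

          if (setsockopt(fd, SOL_PACKET, PACKET_FANOUT, &args, sizeof(args)) < 0) {
                  fprintf(stderr, "PACKET_FANOUT: %s\n", strerror(errno));
                  return -1;
          }
          return 0;
  }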
2020-11-09  net: udp: introduce UDP_MIB_MEMERRORS for udp_mem (Menglong Dong, 5 files, -0/+10)
When udp_memory_allocated is at the limit, __udp_enqueue_schedule_skb will return -ENOBUFS and the skb will be dropped in __udp_queue_rcv_skb without any counter being updated. That makes it hard to find out what happened when this occurs, so introduce UDP_MIB_MEMERRORS to account for these drops. The change is friendly to existing users such as netstat:

$ netstat -u -s
Udp:
    0 packets received
    639 packets to unknown port received.
    158689 packet receive errors
    180022 packets sent
    RcvbufErrors: 20930
    MemErrors: 137759
UdpLite:
IpExt:
    InOctets: 257426235
    OutOctets: 257460598
    InNoECTPkts: 181177

v2:
- Fix some alignment problems
Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn> Link: https://lore.kernel.org/r/1604627354-43207-1-git-send-email-dong.menglong@zte.com.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
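A hedged sketch of where the new counter gets bumped; the surrounding error handling is paraphrased from the commit text, only the MIB increment is the point:

  rc = __udp_enqueue_schedule_skb(sk, skb);
  if (rc < 0) {
          int is_udplite = IS_UDPLITE(sk);

          if (rc == -ENOBUFS)    /* udp_mem limit hit */
                  __UDP_INC_STATS(sock_net(sk), UDP_MIB_MEMERRORS, is_udplite);

          /* existing RcvbufErrors/InErrors accounting continues here */
          kfree_skb(skb);
          return -1;
  }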
2020-11-07  stmmac: intel: change all EHL/TGL to auto detect phy addr (Voon Weifeng, 1 file, -5/+1)
Set all EHL/TGL phy_addr to -1 so that the driver will automatically detect it at run-time by probing all 32 possible addresses. Signed-off-by: Voon Weifeng <weifeng.voon@intel.com> Signed-off-by: Wong Vee Khee <vee.khee.wong@intel.com> Link: https://lore.kernel.org/r/20201106094341.4241-1-vee.khee.wong@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: usb: fix spelling typo in cdc_ncm.c (Wang Qing, 1 file, -1/+1)
Actually, withing should be within. Signed-off-by: Wang Qing <wangqing@vivo.com> Link: https://lore.kernel.org/r/1604649025-22559-1-git-send-email-wangqing@vivo.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: core: fix spelling typo in flow_dissector.c (Wang Qing, 1 file, -1/+1)
withing should be within. Signed-off-by: Wang Qing <wangqing@vivo.com> Link: https://lore.kernel.org/r/1604650310-30432-1-git-send-email-wangqing@vivo.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  Merge branch 'net-ipa-constrain-gsi-interrupts' (Jakub Kicinski, 3 files, -89/+204)
Alex Elder says: ==================== net: ipa: constrain GSI interrupts The goal of this series is to more tightly control when GSI interrupts are enabled. This is a long-ish series, so I'll describe it in parts. The first patch is actually unrelated... I forgot to include it in my previous series (which exposed the GSI layer to the IPA version). It is a trivial comments-only update patch. The second patch defers registering the GSI interrupt handler until *after* all of the resources that handler touches have been initialized. In practice, we don't see this interrupt that early, but this precludes an obvious problem. The next two patches are simple changes. The first just trivially renames a field. The second switches from using constant mask values to using an enumerated type of bit positions to represent each GSI interrupt type. The rest implement the "real work." First, all interrupts are disabled at initialization time. Next, we keep track of a bitmask of enabled GSI interrupt types, updating it each time we enable or disable one of them. From there we have a set of patches that one-by-one enable each interrupt type only during the period it is required. This includes allowing a channel to generate IEOB interrupts only when it has been enabled. And finally, the last patch simplifies some code now that all GSI interrupt types are handled uniformly. ==================== Link: https://lore.kernel.org/r/20201105181407.8006-1-elder@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: ipa: pass a value to gsi_irq_type_update() (Alex Elder, 1 file, -18/+13)
Now that all of the GSI interrupts are handled uniformly, change gsi_irq_type_update() so it takes a value. Have the function assign that value to the cached mask of enabled GSI IRQ types before writing it to hardware. Note that gsi_irq_teardown() will only be called after gsi_irq_disable(), so it's not necessary for the former to disable all IRQ types. Get rid of that. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: ipa: only enable GSI general IRQs when needed (Alex Elder, 2 files, -5/+10)
Most GSI general errors are unrecoverable without a full reset. Despite that, we want to receive these errors so we can at least report what happened before whatever undefined behavior ensues. Explicitly disable all such interrupts in gsi_irq_setup(), then enable those we want in gsi_irq_enable(). List the interrupt types we are interested in (everything but breakpoint) explicitly rather than using GSI_CNTXT_GSI_IRQ_ALL, and remove that symbol's definition. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: ipa: explicitly disallow inter-EE interrupts (Alex Elder, 1 file, -2/+2)
It is possible for other execution environments (EEs, like the modem) to request changes to local (AP) channel or event ring state. We do not support this feature. In gsi_irq_setup(), explicitly zero the mask that defines which channels are permitted to generate inter-EE channel state change interrupts. Do the same for the event ring mask. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: ipa: only enable GSI IEOB IRQs when needed (Alex Elder, 1 file, -5/+11)
A GSI channel must be started in order to use it to perform a data transfer (or command) transaction. And the only time we'll see an IEOB interrupt is if we send a transaction to a started channel. Therefore we do not need to have the IEOB interrupt type enabled until at least one channel has been started. And once the last started channel has been stopped, we can disable the IEOB interrupt type again. We already enable the IEOB interrupt for a particular channel only when it is started. Extend that by having the IEOB interrupt *type* be enabled only when at least one channel is in STARTED state. Disallow all channels from triggering the IEOB interrupt in gsi_irq_setup(). We only enable a channel's interrupt when needed, so there is no longer any need to zero the channel mask in gsi_irq_disable(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: ipa: only enable generic command completion IRQ when needed (Alex Elder, 2 files, -9/+27)
The completion of a generic EE GSI command is signaled by a global interrupt of type GP_INT1. The only other used type for a global interrupt is a hardware error report. First, disallow all global interrupt types in gsi_irq_setup(). We want to know about hardware errors, so re-enable the interrupt type in gsi_irq_enable(), to allow hardware errors to be reported. Disable that interrupt type again in gsi_irq_disable(). We only issue generic EE commands one at a time, and there's no reason to keep the completion interrupt enabled when no generic EE command is pending. We furthermore have no need to enable the GP_INT2 or GP_INT3 interrupt types (which aren't used). The change in gsi_irq_enable() makes GSI_CNTXT_GLOB_IRQ_ALL unused, so get rid of it. Have gsi_generic_command() enable the GP_INT1 interrupt type (in addition to the ERROR_INT type) only while a generic command is pending. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: ipa: only enable GSI event control IRQs when needed (Alex Elder, 1 file, -7/+20)
A GSI event ring causes an event control interrupt to fire whenever its state changes (between NOT_ALLOCATED and ALLOCATED). No event ring should ever change state except when we request it to. Currently, we permit *all* event rings to generate event control interrupts--even those that are never used. And we enable event control interrupts essentially at all times, from setup to teardown. Instead, only enable the event control interrupt type for the duration of an event ring command, and when doing so, only allow the event ring being operated upon to cause the interrupt to fire. Disallow all event rings from issuing the event control interrupt in gsi_irq_setup(). Because an event ring's interrupt is only enabled when needed, there is no longer any need to zero the event ring mask in gsi_irq_disable(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: ipa: only enable GSI channel control IRQs when needed (Alex Elder, 1 file, -7/+32)
A GSI channel causes a channel control interrupt to fire whenever its state changes (between NOT_ALLOCATED, ALLOCATED, STARTED, etc.). We do not support inter-EE channel commands (initiated by other EEs), so no channel should ever change state except when we request it to. Currently, we permit *all* channels to generate channel control interrupts--even those that are never used. And we enable channel control interrupts essentially at all times, from setup to teardown. Instead, disable all channel control interrupts initially in gsi_irq_setup(), and only enable the channel control interrupt type for the duration of a channel command. When doing so, only allow the channel being operated upon to cause the interrupt to fire. Because a channel's interrupt is now enabled only when needed (one channel at a time), there is no longer any need to zero the channel mask in gsi_irq_disable(). Add new gsi_irq_type_enable() and gsi_irq_type_disable() as helper functions to control whether a given GSI interrupt type is enabled. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: ipa: cache last-saved GSI IRQ enabled type (Alex Elder, 2 files, -12/+24)
Keep track of the set of GSI interrupt types that are currently enabled by recording the mask value to write (or last written) to the TYPE_IRQ_MSK register. Create a new helper function gsi_irq_type_update() to handle actually writing the register. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
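A hedged sketch of how the cached mask and the gsi_irq_type_enable()/gsi_irq_type_disable() helpers mentioned a few entries up fit together; the register offset, struct field and enum names are assumptions based on the commit text, not the literal drivers/net/ipa code:

  /* cache the enabled-interrupt-type mask and push it to hardware */
  static void gsi_irq_type_update(struct gsi *gsi, u32 val)
  {
          gsi->type_enabled_bitmap = val;
          iowrite32(val, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET);
  }

  static void gsi_irq_type_enable(struct gsi *gsi, enum gsi_irq_type_id type_id)
  {
          gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | BIT(type_id));
  }

  static void gsi_irq_type_disable(struct gsi *gsi, enum gsi_irq_type_id type_id)
  {
          gsi_irq_type_update(gsi, gsi->type_enabled_bitmap & ~BIT(type_id));
  }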
2020-11-07  net: ipa: disable all GSI interrupt types initially (Alex Elder, 1 file, -10/+32)
Introduce gsi_irq_setup() and gsi_irq_teardown() to disable all GSI interrupts when first setting up GSI hardware, and to clean things up when we're done. Re-enable all GSI interrupt types in gsi_irq_enable(), but do so only after each of the type-specific interrupt masks has been configured. Similarly, disable all interrupt types in gsi_irq_disable()--first--before zeroing out the type-specific masks. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: ipa: define GSI interrupt types with an enum (Alex Elder, 2 files, -18/+22)
Define the GSI interrupt types with an enumerated type whose values are the bit positions representing each interrupt type. Include a short comment describing how each interrupt type is used. Build up the enabled interrupt mask explicitly in gsi_irq_enable(), and get rid of the definition of GSI_CNTXT_TYPE_IRQ_MSK_ALL. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
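A hedged sketch of the shape of such an enum and of a mask built explicitly from it; member names and values are assumptions keyed to the interrupt types discussed throughout this series:

  /* bit positions, not masks, for each GSI interrupt type */
  enum gsi_irq_type_id {
          GSI_CH_CTRL             = 0x0,  /* channel allocate/start/stop/... */
          GSI_EV_CTRL             = 0x1,  /* event ring allocate/reset/... */
          GSI_GLOB_EE             = 0x2,  /* global errors, generic command done */
          GSI_IEOB                = 0x3,  /* interrupt at end of transfer block */
          GSI_INTER_EE_CH_CTRL    = 0x4,  /* other EE touched a channel (unused) */
          GSI_INTER_EE_EV_CTRL    = 0x5,  /* other EE touched an event ring (unused) */
          GSI_GENERAL             = 0x6,  /* general hardware error reports */
  };

  /* the enabled mask is then assembled explicitly where it is needed */
  u32 val = BIT(GSI_CH_CTRL) | BIT(GSI_EV_CTRL) | BIT(GSI_GLOB_EE) |
            BIT(GSI_IEOB) | BIT(GSI_GENERAL);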
2020-11-07  net: ipa: rename gsi->event_enable_bitmap (Alex Elder, 2 files, -8/+8)
Rename the "event_enable_bitmap" field of the GSI structure to be "ieob_enabled_bitmap". An upcoming patch will cache the last value stored for another interrupt mask and this is a more direct naming convention to follow. Add a few comments to explain the bitmap fields in the GSI structure. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: ipa: request GSI IRQ later (Alex Elder, 1 file, -26/+41)
Introduce gsi_irq_init() and gsi_irq_exit(), to encapsulate looking up the GSI IRQ and registering its handler. Call gsi_irq_init() a little later in gsi_init(), and initialize the completion earlier. The IRQ handler accesses both the GSI virtual memory pointer and the completion, and this way these things will have been initialized before the gsi_irq() can ever be called. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: ipa: refer to IPA versions, not GSI (Alex Elder, 1 file, -5/+5)
The GSI code is now exposed to IPA version numbers, and we handle version-specific behavior based on the IPA version. Modify some comments that talk about GSI versions so they reference IPA versions instead. Correct version number errors in a couple of these comments. The (comment) mapping between IPA and GSI versions in the definition of the ipa_version enumerated type remains. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: x25_asy: Delete the x25_asy driver (Xie He, 9 files, -903/+0)
This driver transports LAPB (X.25 link layer) frames over TTY links. I can safely say that this driver has no actual user because it was not working at all until: commit 8fdcabeac398 ("drivers/net/wan/x25_asy: Fix to make it work") The code in its current state still has problems:
1. The uses of "struct x25_asy" in x25_asy_unesc (when receiving) and in x25_asy_write_wakeup (when sending) are not protected by locks against x25_asy_change_mtu's changing of the transmitting/receiving buffers. Also, all "netif_running" checks in this driver are not protected by locks against the ndo_stop function.
2. The driver stops all TTY read/write when the netif is down. I think this is not right because this may cause the last outgoing frame before the netif goes down to be incompletely transmitted, and the first incoming frame after the netif goes up to be incompletely received.
And there may also be other problems. I was planning to fix these problems but after recent discussions about deleting other old networking code, I think we may just delete this driver, too. Signed-off-by: Xie He <xie.he.0141@gmail.com> Acked-by: Martin Schiller <ms@dev.tdt.de> Link: https://lore.kernel.org/r/20201105073434.429307-1-xie.he.0141@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: macb: fix NULL dereference due to no pcs_config method (Parshuram Thombare, 1 file, -2/+15)
This patch fixes a NULL pointer dereference caused by a NULL pcs_config method in pcs_ops. Fixes: e4e143e26ce8 ("net: macb: add support for high speed interface") Reported-by: Nicolas Ferre <Nicolas.Ferre@microchip.com> Link: https://lore.kernel.org/netdev/2db854c7-9ffb-328a-f346-f68982723d29@microchip.com/ Signed-off-by: Parshuram Thombare <pthombar@cadence.com> Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com> Link: https://lore.kernel.org/r/1604599113-2488-1-git-send-email-pthombar@cadence.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  ptp: idt82p33: optimize _idt82p33_adjfine (Min Li, 1 file, -9/+1)
Use div_s64 so that the neg_adj is not needed. Signed-off-by: Min Li <min.li.xe@renesas.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Link: https://lore.kernel.org/r/1604634729-24960-3-git-send-email-min.li.xe@renesas.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
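A hedged before/after sketch of that simplification; scaled_ppm follows the PHC adjfine convention, while FCW_SCALE and FCW_DIVISOR are made-up names standing in for the driver's own constants:

  /* before: strip the sign, divide unsigned, then re-apply the sign */
  bool neg_adj = false;
  s64 fcw;

  if (scaled_ppm < 0) {
          neg_adj = true;
          scaled_ppm = -scaled_ppm;
  }
  fcw = div_u64((u64)scaled_ppm * FCW_SCALE, FCW_DIVISOR);
  if (neg_adj)
          fcw = -fcw;

  /* after: signed division keeps the sign, no neg_adj bookkeeping */
  fcw = div_s64((s64)scaled_ppm * FCW_SCALE, FCW_DIVISOR);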
2020-11-07  ptp: idt82p33: use i2c_master_send for bus write (Min Li, 2 files, -11/+37)
Refactor idt82p33_xfer and use i2c_master_send for the write operation, because some I2C controllers only work with single-burst write transactions. Signed-off-by: Min Li <min.li.xe@renesas.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Link: https://lore.kernel.org/r/1604634729-24960-2-git-send-email-min.li.xe@renesas.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
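A hedged sketch of the single-burst idea: the register address and payload go out through one i2c_master_send() call instead of separate address and data transactions. Buffer size and naming here are assumptions, not the exact idt82p33 code:

  static int idt82p33_write_burst(struct i2c_client *client, u8 reg,
                                  const u8 *data, size_t len)
  {
          u8 buf[1 + 8];    /* register address + payload; size assumed */
          int ret;

          if (len > sizeof(buf) - 1)
                  return -EINVAL;

          buf[0] = reg;
          memcpy(&buf[1], data, len);

          /* one burst: address byte and data in a single I2C write */
          ret = i2c_master_send(client, buf, len + 1);
          if (ret < 0)
                  return ret;

          return ret == len + 1 ? 0 : -EIO;
  }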
2020-11-07  ptp: idt82p33: add adjphase support (Min Li, 2 files, -66/+153)
Add idt82p33_adjphase() to support PHC write phase mode. Signed-off-by: Min Li <min.li.xe@renesas.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Link: https://lore.kernel.org/r/1604634729-24960-1-git-send-email-min.li.xe@renesas.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: macvlan: remove redundant initialization in macvlan_dev_netpoll_setup (Menglong Dong, 1 file, -1/+1)
The initialization of err with 0 seems useless, as it is soon updated with -ENOMEM. So, we can remove it.
Changes since v1:
- Keep -ENOMEM still.
Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn> Link: https://lore.kernel.org/r/1604541244-3241-1-git-send-email-dong.menglong@zte.com.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  cxgb4: Fix the -Wmisleading-indentation warning (Kaixu Xia, 1 file, -1/+1)
Fix the gcc warning: drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c:2673:9: warning: this 'for' clause does not guard... [-Wmisleading-indentation] 2673 | for (i = 0; i < n; ++i) \ Reported-by: Tosk Robot <tencent_os_robot@tencent.com> Signed-off-by: Kaixu Xia <kaixuxia@tencent.com> Link: https://lore.kernel.org/r/1604467444-23043-1-git-send-email-kaixuxia@tencent.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
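-Wmisleading-indentation fires when a statement is indented as though it were guarded by a loop or if but is not. A generic illustration of the class of problem and fix (simplified, not the cxgb4 macro itself):

  /* warning: the second call looks like part of the loop body but is not */
  for (i = 0; i < n; ++i)
          seq_printf(seq, "%u ", vals[i]);
          seq_putc(seq, '\n');

  /* fixed: indentation (or explicit braces) made to match the control flow */
  for (i = 0; i < n; ++i)
          seq_printf(seq, "%u ", vals[i]);
  seq_putc(seq, '\n');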
2020-11-07  Merge branch 'net-axienet-dynamically-enable-mdio-interface' (Jakub Kicinski, 3 files, -28/+51)
Radhey Shyam Pandey says: ==================== net: axienet: Dynamically enable MDIO interface This patchset dynamically enables the MDIO interface. The background for this change comes from the Cadence GEM controller (macb), in which MDC is active only during MDIO read or write operations, while the PHY registers are being read or written; there it is implemented as an IP feature. For axiethernet, dynamic MDC enable/disable is not supported in hw, so we implement it in sw. This change doesn't affect any existing functionality. ==================== Link: https://lore.kernel.org/r/1604402770-78045-1-git-send-email-radhey.shyam.pandey@xilinx.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: xilinx: axiethernet: Enable dynamic MDIO MDC (Clayton Rayment, 2 files, -23/+25)
MDIO spec does not require an MDC at all times, only when MDIO transactions are occurring. This patch allows the xilinx_axienet driver to disable the MDC when not in use, and re-enable it when needed. It also simplifies the driver by removing MDC disable and enable in device reset sequence. Signed-off-by: Clayton Rayment <clayton.rayment@xilinx.com> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: xilinx: axiethernet: Introduce helper functions for MDC enable/disable (Radhey Shyam Pandey, 2 files, -5/+26)
Introduce helper functions to enable/disable MDIO interface clock. This change serves a preparatory patch for the coming feature to dynamically control the management bus clock. Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  Merge branch 'net-convert-tasklets-to-use-new-tasklet_setup-api' (Jakub Kicinski, 12 files, -50/+46)
Allen Pais says: ==================== net: convert tasklets to use new tasklet_setup API Commit 12cc923f1ccc ("tasklet: Introduce new initialization API")' introduced a new tasklet initialization API. This series converts all the net/* drivers to use the new tasklet_setup() API The following series is based on net-next (9faebeb2d) v3: introduce qdisc_from_priv, suggested by Eric Dumazet. v2: get rid of QDISC_ALIGN() v1: fix kerneldoc ==================== Link: https://lore.kernel.org/r/20201103091823.586717-1-allen.lkml@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
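For context, a hedged before/after sketch of the mechanical conversion this series applies; the foo_* struct and callback names are invented for illustration:

  struct foo_priv {
          struct tasklet_struct task;
          /* ... */
  };

  /* after the conversion: the callback takes the tasklet pointer and
   * recovers its container with from_tasklet(), a container_of() wrapper
   */
  static void foo_task(struct tasklet_struct *t)
  {
          struct foo_priv *priv = from_tasklet(priv, t, task);

          /* deferred work on priv */
  }

  static void foo_setup(struct foo_priv *priv)
  {
          /* before: tasklet_init(&priv->task, foo_task, (unsigned long)priv); */
          tasklet_setup(&priv->task, foo_task);
  }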
2020-11-07  net: xfrm: convert tasklets to use new tasklet_setup() API (Allen Pais, 1 file, -4/+3)
In preparation for unconditionally passing the struct tasklet_struct pointer to all tasklet callbacks, switch to using the new tasklet_setup() and from_tasklet() to pass the tasklet pointer explicitly. Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: smc: convert tasklets to use new tasklet_setup() API (Allen Pais, 2 files, -11/+9)
In preparation for unconditionally passing the struct tasklet_struct pointer to all tasklet callbacks, switch to using the new tasklet_setup() and from_tasklet() to pass the tasklet pointer explicitly. Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Acked-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: sched: convert tasklets to use new tasklet_setup() API (Allen Pais, 2 files, -4/+9)
In preparation for unconditionally passing the struct tasklet_struct pointer to all tasklet callbacks, switch to using the new tasklet_setup() and from_tasklet() to pass the tasklet pointer explicitly. Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: mac802154: convert tasklets to use new tasklet_setup() API (Allen Pais, 1 file, -5/+3)
In preparation for unconditionally passing the struct tasklet_struct pointer to all tasklet callbacks, switch to using the new tasklet_setup() and from_tasklet() to pass the tasklet pointer explicitly. Acked-by: Stefan Schmidt <stefan@datenfreihafen.org> Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-11-07  net: mac80211: convert tasklets to use new tasklet_setup() API (Allen Pais, 4 files, -15/+13)
In preparation for unconditionally passing the struct tasklet_struct pointer to all tasklet callbacks, switch to using the new tasklet_setup() and from_tasklet() to pass the tasklet pointer explicitly. Reviewed-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>