|
The continue statement at the end of a for-loop has no effect;
invert the if expression and remove the continue.
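For illustration only (function names are made up, not the driver's), the pattern being cleaned up looks like this:
/* before: the continue skips nothing, since nothing follows the if/else */
for (i = 0; i < n; i++) {
	if (queue_empty(i))
		continue;
	else
		flush_queue(i);
}
/* after: invert the condition and drop the continue */
for (i = 0; i < n; i++) {
	if (!queue_empty(i))
		flush_queue(i);
}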
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
100GbE Intel Wired LAN Driver Updates 2021-06-17
This series contains updates to ice driver only.
Jake corrects a couple of entries in the PTYPE table to properly
reflect the datasheet and removes unneeded NULL checks for some
PTP calls.
Paul reduces the scope of variables and removes the use of a local
variable.
Shaokun Zhang removes a duplicate function declaration.
Lorenzo Bianconi fixes a compilation warning if PTP_1588_CLOCK is
disabled.
Colin Ian King changes a for loop to remove an unneeded 'continue'.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Devices with RMON support have a 16-bit RDRP counter. It provides: "Receive
dropped packets counter. Increments for frames received which are streamed
to system but are later dropped due to lack of system resources."
To handle more than 2^16 dropped packets, a carry bit in the CAR1 register is
set on overflow, so we enable an interrupt for it, extending the counter
to 2^64 to handle situations where lots of packets are missed (e.g.
during heavy network storms).
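As a rough sketch of the scheme (names are illustrative, not the driver's symbols): each carry interrupt means the 16-bit hardware counter wrapped once, so the handler credits 2^16 drops to a 64-bit software total.
struct example_stats {
	u64 rx_dropped_total;	/* 64-bit software accumulator */
};

/* called when the RDRP carry bit in CAR1 raises the interrupt */
static void example_rdrp_carry_handler(struct example_stats *stats)
{
	stats->rx_dropped_total += 1ULL << 16;
}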
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
These are for carry status and interrupt mask bits of statistics registers.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The memset on CAMx is wrong, as it actually unmasks all carry IRQs,
which we are clearly not interested in.
The memset on CARx registers is just pointless, as they are W1C.
So let's just stop the memset before CAR1.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The CAR1 and CAR2 registers are W1C style registers, so the memset does not
actually clear them.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
No reason to wrap counter values at 2^32. Especially the byte counters
can wrap pretty fast on Gbit networks.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
No reason to produce the legacy net_device_stats struct, only to have it
converted to rtnl_link_stats64. And as a bonus, this allows for improving
counter size to 64 bit.
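For illustration, a minimal ndo_get_stats64 callback filling rtnl_link_stats64 straight from 64-bit driver counters (everything except the core API names is hypothetical):
static void example_get_stats64(struct net_device *ndev,
				struct rtnl_link_stats64 *stats)
{
	struct example_priv *priv = netdev_priv(ndev);

	/* 64-bit driver counters go straight into the 64-bit stats,
	 * with no intermediate 32-bit net_device_stats step */
	stats->rx_packets = priv->rx_packets;
	stats->rx_bytes   = priv->rx_bytes;
	stats->tx_packets = priv->tx_packets;
	stats->tx_bytes   = priv->tx_bytes;
}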
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The continue statement in the for-loop is redundant. Re-work the hw_lock
check to remove it.
Addresses-Coverity: ("Continue has no effect")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Fix the following compilation warning if PTP_1588_CLOCK is not enabled
drivers/net/ethernet/intel/ice/ice_ptp.h:149:1:
error: return type defaults to ‘int’ [-Werror=return-type]
ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)
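The offending stub is a static inline declared without a return type; a minimal sketch of the corrected form (the concrete return type used by the driver may differ from the int shown here):
#if !IS_ENABLED(CONFIG_PTP_1588_CLOCK)
static inline int
ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)
{
	return -1;
}
#endif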
Fixes: ea9b847cda647 ("ice: enable transmit timestamps for E810 devices")
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
The ptp_read_system_prets and ptp_read_system_postts functions already
check for the NULL value of the ptp_system_timestamp structure pointer.
There is no need to check this manually in the ice driver code. Remove
the checks.
Reported-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Function 'ice_is_vsi_valid' is declared twice, remove the
repeated declaration.
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Remove the local variable since it's only used once; use the value
directly instead.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
There are some places where the scope of a variable can
be reduced so do that.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
The entry for PTYPE 2 in the ice_ptype_lkup table incorrectly states
that this is an L2 packet with no payload. According to the datasheet,
this PTYPE is actually unused and reserved.
Fix the lookup entry to indicate that this entry is unused and reserved.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
The entry for PTYPE 90 indicates that the payload is layer 3. This does
not match the specification in the datasheet which indicates the packet
is a MAC, IPv6, UDP packet, with a payload in layer 4.
Fix the lookup table to match the data sheet.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
- Introduce matchall filter support
- Add SPAN API to configure port mirroring.
- Add tc mirror action.
At this moment, only mirror (egress) action is supported.
Example:
tc filter ... action mirred egress mirror dev DEV
Co-developed-by: Volodymyr Mytnyk <vmytnyk@marvell.com>
Signed-off-by: Volodymyr Mytnyk <vmytnyk@marvell.com>
Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Signed-off-by: Vadym Kochan <vkochan@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add ACL infrastructure for Prestera Switch ASICs family devices to
offload cls_flower rules to be processed in the HW.
The ACL implementation is based on the tc filter API. The flower classifier
is supported for configuring ACL rules/matches/actions.
Supported actions:
- drop
- trap
- pass
Supported dissector keys:
- indev
- src_mac
- dst_mac
- src_ip
- dst_ip
- ip_proto
- src_port
- dst_port
- vlan_id
- vlan_ethtype
- icmp type/code
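A hypothetical usage example (the port name is made up), combining one of the supported keys with one of the supported actions:
$ tc qdisc add dev sw1p1 clsact
$ tc filter add dev sw1p1 ingress protocol ip flower skip_sw \
      ip_proto tcp dst_port 80 action drop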
Co-developed-by: Volodymyr Mytnyk <vmytnyk@marvell.com>
Signed-off-by: Volodymyr Mytnyk <vmytnyk@marvell.com>
Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Signed-off-by: Vadym Kochan <vkochan@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The continue statement at the end of a for-loop has no effect,
remove it.
Addresses-Coverity: ("Continue has no effect")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Fill in the code stub to check that the flow actions are valid for
merging. The actions of flow X should not conflict with the
matches of flow X+1. For now this check is quite strict and
set_actions are very limited; this will need updating when
NAT support is added.
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Fill in check_meta stub to check that ct_metadata action fields in
the nft flow matches the ct_match data of the post_ct flow.
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Replace the merge check stub code with the actual implementation. This
checks that the match parts of two tc flows do not conflict.
Only overlapping keys need to be checked, and only the narrowest
masked parts, so each key is masked with the
AND'd result of both masks before comparing.
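A generic sketch of that masked comparison (not the nfp code itself):
/* only bits covered by both masks can conflict */
static bool example_keys_conflict(u32 key_a, u32 mask_a,
				  u32 key_b, u32 mask_b)
{
	u32 mask = mask_a & mask_b;

	return (key_a & mask) != (key_b & mask);
}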
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add in the code to merge the tc_merge objects with the flows
received from nft. At the moment flows are just merged blindly
as the validity check functions are stubbed out, this will
be populated in follow-up patches.
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add table and struct to save the result of the three-way merge
between pre_ct, post_ct, and nft flows. Merging code is to be
added in follow-up patches.
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The nft flow will be destroyed after the offload cb returns. This means
we need to save a full copy of it since it can be referenced through
paths other than just the offload cb, for example when a new
pre_ct or post_ct entry is added, and it needs to be merged with
an existing nft entry.
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Implement code to add and remove nft flows to the relevant list.
Registering and deregistering the callback function for the nft
table is quite complicated. The safest is to delete the callback
on the removal of the last pre_ct flow. This is because if this
is also the last pre_ct flow in software, it means that this
specific nft table will be freed, so there will not be a later
opportunity to do this. Another place where it looks possible
to delete the callback is when the last nft_flow is deleted,
but this happens under the flow_table lock, which is also taken
when deregistering the callback, leading to a deadlock situation.
This means the final solution here is to delete the callback
when removing the last pre_ct flow, and then clean up any
remaining nft_flow entries which may still be present, since
there will never be a callback now to do this, leaving them
orphaned if not cleaned up here as well.
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add register/unregister of the nft callback. For now just add
stub code to accept the flows, but don't do anything with them.
We decided to accept the flows since netfilter will keep on trying
to offload a flow if it was rejected, which is quite noisy.
Follow-up patches will start implementing the functions to add
nft flows to the relevant tables.
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add functions to handle delete flow callbacks for ct flows. Also
accept the flows for offloading by returning 0 instead of -EOPNOTSUPP.
Flows will still not actually be offloaded to hw, but at this point
it's difficult to not accept the flows and also exercise the cleanup
paths properly. Traffic will still be handled safely through the
fallback path.
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Remove the explicit casts in the checksum complement functions
and pass the actual protocol specific headers instead.
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The idiomatic way to handle the changelink flags/mask pair seems to be
to allow partial updates of the driver's link flags. In contrast, the rmnet
driver masks the incoming flags and then uses that as the new flags.
Change the rmnet driver to follow the common scheme, before the
introduction of IFLA_RMNET_FLAGS handling in iproute2 et al.
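A generic illustration of that flags/mask convention (not the rmnet code): only the bits set in the mask change, the rest keep their current value.
static u32 example_update_flags(u32 cur_flags, u32 wanted_flags, u32 mask)
{
	return (cur_flags & ~mask) | (wanted_flags & mask);
}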
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Reviewed-by: Alex Elder <elder@linaro.org>
Reviewed-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Fix to return a negative error code from the error handling
case instead of 0, as done elsewhere in this function.
Fixes: 2bb4b98b60d7 ("net: stmmac: Add Ingenic SoCs MAC support.")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use eth_zero_addr() to assign the zero address instead of an
inefficient copy from an array.
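eth_zero_addr() is the helper from <linux/etherdevice.h>; the change is essentially (variable names illustrative):
/* before: copy ETH_ALEN zero bytes out of a static all-zero array */
memcpy(mac_addr, zero_mac, ETH_ALEN);

/* after: zero the address in place */
eth_zero_addr(mac_addr);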
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Convert list_for_each() to list_for_each_entry() where
applicable. This simplifies the code.
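A generic before/after of the conversion (struct and member names are illustrative):
/* before: walk the list heads, then look up the container */
struct list_head *pos;
struct foo *item;

list_for_each(pos, &foo_list) {
	item = list_entry(pos, struct foo, node);
	use(item);
}

/* after: list_for_each_entry() does the container lookup itself */
list_for_each_entry(item, &foo_list, node)
	use(item);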
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wang Hai <wanghai38@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently the rx page will be reused to receive a future packet when
the stack releases the previous skb quickly. If the old page
cannot be reused, a new page will be allocated and mapped,
which consumes a lot of cpu when the IOMMU is in strict mode,
especially when the application and irq/NAPI happen to run on
the same cpu.
So allocate a new frag to memcpy the data to avoid the costly
IOMMU unmapping/mapping operation, and add "frag_alloc_err"
and "frag_alloc" stats in the "ethtool -S ethX" cmd.
The throughput improves by more than 50% when running a single thread of
iperf using TCP when the IOMMU is in strict mode and iperf shares the
same cpu with irq/NAPI (rx_copybreak = 2048 and mtu = 1500).
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Current rx page offset only reset to zero when all the below
conditions are satisfied:
1. rx page is only owned by driver.
2. rx page is reusable.
3. the page offset that is about to be given to the stack has
reached the end of the page.
If the page offset is over the hns3_buf_size(), it means the
buffer below the offset of the page is usable when the above
conditions 1 and 2 are satisfied, so the page offset can be reset to
zero instead of increasing the offset. We may be able to always
reuse the first 4K buffer of a 64K page, which means we can
limit the hot buffer size as much as possible.
The above optimization is a side effect of refactoring the
rx page reuse handling in order to support rx copybreak.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Using the queue based tx buffer, it is also possible to allocate an
sgl buffer and use skb_to_sgvec() to convert the skb to a sgvec,
in order to support dma_map_sg(), which decreases the overhead of
IOMMU mapping and unmapping.
Firstly, it reduces the number of buffers. For example, a tcp skb
may have a 66-byte header and 3 fragments of 4328, 32768, and 28064
bytes. With this patch, dma_map_sg() will combine them into two
buffers: the 66-byte header and one 65160-byte fragment, by using the IOMMU.
Secondly, it reduces the number of dma mapping and unmapping. All the
original 4 buffers are mapped only once rather than 4 times.
The throughput improves above 10% when running single thread of iperf
using TCP when IOMMU is in strict mode.
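A rough sketch of the mapping path described above (sizes and error handling simplified, names illustrative):
struct scatterlist sgl[MAX_SKB_FRAGS + 1];
int nents;

sg_init_table(sgl, ARRAY_SIZE(sgl));
nents = skb_to_sgvec(skb, sgl, 0, skb->len);
if (nents < 0)
	return nents;

/* one dma_map_sg() call maps (and may merge) all the skb pieces */
nents = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
if (!nents)
	return -ENOMEM;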
Suggested-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add support to query tx spare buffer size from configuration
file, and use this info to do spare buffer initialization when
the module parameter 'tx_spare_buf_size' is not specified.
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When the packet or frag size is small, it causes both security and
performance issues. As DMA can't map a sub-page, some extra
kernel data is visible to devices. On the other hand, the overhead
of DMA map and unmap is huge when the IOMMU is on.
So add a queue based tx shared bounce buffer to memcpy the small
packet when the length of the transmitted skb is below tx_copybreak.
Add a tx_spare_buf_size module param to set the size of the tx spare
buffer, and add set/get_tunable to set or query tx_copybreak.
The throughput improves from 30 Gbps to 90+ Gbps when running 16
netperf threads with a 32KB UDP message size when the IOMMU is in
strict mode (tx_copybreak = 2000 and mtu = 1500).
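Assuming the standard ethtool tunable interface, the new knob is exercised roughly like:
$ ethtool --set-tunable eth0 tx-copybreak 2000
$ ethtool --get-tunable eth0 tx-copybreak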
Suggested-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Factor out hns3_fill_desc() so that it can be reused by the
tx bounce buffer support.
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
desc_cb is used to store mapping and freeing info for the
corresponding desc, which is used in the cleaning process.
There will be more desc_cb types coming up when supporting the
tx bounce buffer, so change the desc_cb type to a bit-wise value in order
to reduce the desc_cb type checking operations in the data path.
Also move the desc_cb type definition to hns3_enet.h because it
is only used in hns3_enet.c, and declare a local variable desc_cb
in hns3_clear_desc() to reduce lines of code.
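An illustration of the enum-to-bitmask change (values and names here are hypothetical): with bit-wise types, several variants can be tested with a single mask.
#define EX_DESC_TYPE_SKB	BIT(0)
#define EX_DESC_TYPE_FRAG	BIT(1)
#define EX_DESC_TYPE_BOUNCE	BIT(2)

/* one test instead of comparing against each type value in turn */
if (desc_cb->type & (EX_DESC_TYPE_SKB | EX_DESC_TYPE_BOUNCE))
	example_unmap_and_free(desc_cb);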
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
As already done for mvneta and mvpp2, enable skb recycling for the ti
ethernet drivers.
ti driver on net-next:
----------------------
[perf top]
47.15% [kernel] [k] _raw_spin_unlock_irqrestore
11.77% [kernel] [k] __cpdma_chan_free
3.16% [kernel] [k] ___bpf_prog_run
2.52% [kernel] [k] cpsw_rx_vlan_encap
2.34% [kernel] [k] __netif_receive_skb_core
2.27% [kernel] [k] free_unref_page
2.26% [kernel] [k] kmem_cache_free
2.24% [kernel] [k] kmem_cache_alloc
1.69% [kernel] [k] __softirqentry_text_start
1.61% [kernel] [k] cpsw_rx_handler
1.19% [kernel] [k] page_pool_release_page
1.19% [kernel] [k] clear_bits_ll
1.15% [kernel] [k] page_frag_free
1.06% [kernel] [k] __dma_page_dev_to_cpu
0.99% [kernel] [k] memset
0.94% [kernel] [k] __alloc_pages_bulk
0.92% [kernel] [k] kfree_skb
0.85% [kernel] [k] packet_rcv
0.78% [kernel] [k] page_address
0.75% [kernel] [k] v7_dma_inv_range
0.71% [kernel] [k] __lock_text_start
[iperf3 tcp]
[ 5] 0.00-10.00 sec 873 MBytes 732 Mbits/sec 0 sender
[ 5] 0.00-10.01 sec 866 MBytes 726 Mbits/sec receiver
ti + skb recycling:
-------------------
[perf top]
40.58% [kernel] [k] _raw_spin_unlock_irqrestore
16.18% [kernel] [k] __softirqentry_text_start
10.33% [kernel] [k] __cpdma_chan_free
2.62% [kernel] [k] ___bpf_prog_run
2.05% [kernel] [k] cpsw_rx_vlan_encap
2.00% [kernel] [k] kmem_cache_alloc
1.86% [kernel] [k] __netif_receive_skb_core
1.80% [kernel] [k] kmem_cache_free
1.63% [kernel] [k] cpsw_rx_handler
1.12% [kernel] [k] cpsw_rx_mq_poll
1.11% [kernel] [k] page_pool_put_page
1.04% [kernel] [k] _raw_spin_unlock
0.97% [kernel] [k] clear_bits_ll
0.90% [kernel] [k] packet_rcv
0.88% [kernel] [k] __dma_page_dev_to_cpu
0.85% [kernel] [k] kfree_skb
0.80% [kernel] [k] memset
0.71% [kernel] [k] __lock_text_start
0.66% [kernel] [k] v7_dma_inv_range
0.64% [kernel] [k] gen_pool_free_owner
[iperf3 tcp]
[ 5] 0.00-10.00 sec 884 MBytes 742 Mbits/sec 0 sender
[ 5] 0.00-10.01 sec 878 MBytes 735 Mbits/sec receiver
Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
There is a spelling mistake in a dev_err message. Fix it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2021-06-14
1) Trivial Lag refactoring in preparation for the upcoming Single FDB lag feature
- First 3 patches
2) Scalable IRQ distribution for Sub-functions
A subfunction (SF) is a lightweight function that has a parent PCI
function (PF) on which it is deployed.
Currently, mlx5 subfunctions share the IRQs (MSI-X) with their
parent PCI function.
Before this series the PF allocates enough IRQs to cover
all the cores in the system, and newly created SFs re-use all the IRQs
that the PF has allocated for itself.
Hence, the more SFs are created, the more EQs there are per IRQ. Therefore,
whenever we handle an interrupt, we need to poll all SF EQs and PF EQs,
instead of only the PF EQs as when there are no SFs on the system. This leads
to a hard impact on the performance of SFs and PF.
For example, on a machine with:
Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz with 56 cores.
PCI Express 3 with BW of 126 Gb/s.
ConnectX-5 Ex; EDR IB (100Gb/s) and 100GbE; dual-port QSFP28; PCIe4.0 x16.
test case: iperf TX BW single CPU, affinity of app and IRQ are the same.
PF only: no SFs on the system, 56 IRQs.
SF (before): 250 SFs sharing the same 56 IRQs.
SF (now): 250 SFs + 255 available IRQs for the NIC (please see the IRQ spread scheme below).
             application  SF-IRQ    channel   BW(Gb/sec)    interrupts/sec
             iperf TX     affinity
PF only      cpu={0}      cpu={0}   cpu={0}   79            8200
SF (before)  cpu={0}      cpu={0}   cpu={0}   51.3 (-35%)   9500
SF (now)     cpu={0}      cpu={0}   cpu={0}   78 (-2%)      8200
command:
$ taskset -c 0 iperf -c 11.1.1.1 -P 3 -i 6 -t 30 | grep SUM
The difference between the SF examples is that before this series we
allocated num_cpus (56) IRQs, and all of them were shared among the PF
and the SFs. After this series, we allocate 255 IRQs, and we spread
the SFs among those IRQs. This has significantly decreased the load
on each IRQ and the number of EQs per IRQ is down by 95% (251->11).
In this patchset the solution proposed is to have a dedicated IRQ pool
for SFs to use. the pool will allocate a large number of IRQs
for SFs to grab from in order to minimize irq sharing between the
different SFs.
IRQs will not be requested from the OS until they are 1st requested by
an SF consumer, and will be eventually released when the last SF consumer
releases them.
For the detailed IRQ spread and allocation scheme please see last patch:
("net/mlx5: Round-Robin EQs over IRQs")
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Added police action for ingress TC flower
hardware offload. With this, rate limiting can be
done per flow. Since rate limiting is tied to
RQs in hardware, the number of TC flower filters
with the police action is limited to the number
of receive queues of the interface. Both bps
and pps modes are supported.
Examples to rate limit a flow:
$ ethtool -K eth0 hw-tc-offload on
$ tc qdisc add dev eth0 ingress
$ tc filter add dev eth0 parent ffff: protocol ip \
flower ip_proto udp dst_port 80 action \
police rate 100Mbit burst 32Kbit
$ tc filter add dev eth0 parent ffff: \
protocol ip flower dst_mac 5e:b2:34:ee:29:49 \
action police pkts_rate 5000 pkts_burst 2048
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch converts netdev_err messages in the
tc code to NL_SET_ERR_MSG_MOD. NL_SET_ERR_MSG_MOD
does not support format specifiers yet, hence only the
netdev_err messages that are plain strings are converted.
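The conversion is essentially the following (message text illustrative):
/* before: the error only reaches the kernel log */
netdev_err(netdev, "Egress offload is not supported");

/* after: the error is reported back through the netlink extended ack */
NL_SET_ERR_MSG_MOD(extack, "Egress offload is not supported");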
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add TC_MATCHALL ingress ratelimiting offload support with POLICE
action for entire traffic coming into the interface.
Eg: To ratelimit ingress traffic to 100Mbps
$ ethtool -K eth0 hw-tc-offload on
$ tc qdisc add dev eth0 clsact
$ tc filter add dev eth0 ingress matchall skip_sw \
action police rate 100Mbit burst 32Kbit
To support this, a leaf level bandwidth profile is allocated and all
RQs' contexts used by this interface are updated to point to it.
And the leaf level bandwidth profile is configured with user specified
rate and burst sizes.
Co-developed-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Added support for dumping current resource status of bandwidth
profiles and contexts of allocated profiles via debugfs.
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
CN10K silicon supports hierarchical ingress packet ratelimiting.
There are 3 levels of profilers supported: leaf, mid and top.
Ratelimiting is done after the packet forwarding decision is taken
and a NIXLF's RQ is identified to DMA the packet. The RQ's context
points to a leaf bandwidth profile which can be configured
to achieve the desired ratelimit.
This patch adds logic for management of these bandwidth profiles,
i.e. profile alloc, free, context update, etc.
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
On RX an SKB is allocated and the received buffer is copied into it.
But on some architectures, memcpy() needs the source and destination
buffers to have the same alignment to be efficient.
This is not our case, because the SKB data pointer is misaligned by two bytes
to compensate for the Ethernet header.
Align the RX buffer the same way as the SKB one, so the copy is faster.
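In kernel terms the SKB misalignment comes from the usual NET_IP_ALIGN reservation; conceptually (the driver's actual buffer handling will differ):
/* NET_IP_ALIGN (2 on most architectures) shifts skb->data so the IP
 * header lands on a 4-byte boundary after the 14-byte Ethernet header */
skb_reserve(skb, NET_IP_ALIGN);

/* if the RX buffer is offset the same way, memcpy() sees source and
 * destination with identical alignment */
skb_put_data(skb, rx_buf + NET_IP_ALIGN, len);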
An iperf3 RX test gives a decent improvement on a RISC-V machine:
before:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 733 MBytes 615 Mbits/sec 88 sender
[ 5] 0.00-10.01 sec 730 MBytes 612 Mbits/sec receiver
after:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.10 GBytes 942 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 1.09 GBytes 940 Mbits/sec receiver
And the memcpy() overhead during the RX drops dramatically.
before:
Overhead Shared O Symbol
43.35% [kernel] [k] memcpy
33.77% [kernel] [k] __asm_copy_to_user
3.64% [kernel] [k] sifive_l2_flush64_range
after:
Overhead Shared O Symbol
45.40% [kernel] [k] __asm_copy_to_user
28.09% [kernel] [k] memcpy
4.27% [kernel] [k] sifive_l2_flush64_range
Signed-off-by: Matteo Croce <mcroce@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Whenever users provide affinity for an EQ creation request, map the
EQ to a matching IRQ.
A matching IRQ is an IRQ with the same affinity and type (completion/control)
as the EQ being created.
This mapping is done with an aggressive dedicated IRQ allocation scheme,
which is described below.
First, we check whether there is a matching IRQ whose min threshold
is not exhausted.
- min_eqs_threshold = 3 for control EQ.
- min_eqs_threshold = 1 for completion EQ.
In case no matching IRQ was found, try to request a new IRQ.
In case we can't request a new IRQ, reuse the least-used matching IRQ.
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|