path: root/drivers/net/ethernet/mellanox
Age  Commit message  [Author, files changed, lines removed/added]
2022-12-08  net/mlx5e: meter, add mtu post meter tables  [Oz Shlomo, 3 files, -7/+185]
The TC police action may configure the maximum packet size to be handled by the policer, in addition to the byte/packet rate. The MTU check is realized in hardware using the range destination, which specifies a hit flow table for packets whose length is within range and a miss flow table otherwise. Instantiate MTU green/red flow tables with a single match-all rule, and add the green/red actions to the hit/miss table accordingly. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
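For illustration, such a policer could be requested with a flower rule along the lines of the following (the interface name, rate, burst and conform-exceed policy are placeholders, not taken from the patch):
    tc filter add dev eth0 ingress protocol ip flower skip_sw \
        action police mtu 1500 rate 100mbit burst 16k conform-exceed drop/pipe
Packets longer than the configured mtu then take the red (exceed) branch even when they conform to the configured rate.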
2022-12-08  net/mlx5e: meter, refactor to allow multiple post meter tables  [Oz Shlomo, 1 file, -58/+96]
The TC police action may configure the maximum packet size to be handled by the policer, in addition to the byte/packet rate. Currently the post meter table steers the packet according to the meter ASO output. Refactor the code to allow both metering and range post actions as a pre-step for adding police MTU offload support. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5: DR, Add support for range match action  [Yevgeny Kliteynik, 7 files, -3/+363]
Add support for matching on range. The supported type of range is L2 frame size. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08net/mlx5: DR, Add function that tells if STE miss addr has been initializedYevgeny Kliteynik7-0/+23
Up until now miss address in all the STEs was used to connect miss lists and to link the last STE in the list to end anchor. Match range STE will require special handling because its miss address is part of the 'action'. That is, range action has hit and miss addresses. Since the range action is always the last action, need to make sure that its miss address isn't overwritten by the end anchor. Adding new function mlx5dr_ste_is_miss_addr_set() to answer the question whether the STE's miss address has already been set as part of STE initialization. Use a callback that always returns false right now. Once match range is added, a different callback will be used for that STE type. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
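A minimal sketch of that callback shape, using illustrative names rather than the driver's exact ones:
    /* STE versions without match-range support never pre-set the miss
     * address during STE initialization, so the check trivially says no.
     */
    static bool dr_ste_is_miss_addr_set_stub(u8 *hw_ste_p)
    {
        return false;
    }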
2022-12-08  net/mlx5: DR, Some refactoring of miss address handling  [Yevgeny Kliteynik, 1 file, -10/+14]
In preparation for MATCH RANGE STE support, create a function to set the miss address of an STE. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5: DR, Manage definers with refcounts  [Yevgeny Kliteynik, 5 files, -2/+163]
In many cases different actions will ask for the same definer format. Instead of allocating new definer general object and running out of definers, have an xarray of allocated definers and keep track of their usage with refcounts: allocate a new definer only when there isn't one with the same format already created, and destroy definer only when its refcount runs down to zero. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
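A sketch of the general pattern (xarray cache plus refcounting), using illustrative structure and function names rather than the driver's:
    /* Assumes <linux/xarray.h>, <linux/refcount.h>, <linux/slab.h>, <linux/string.h> */
    struct dr_definer_obj {
        u32 obj_id;                 /* FW general object id, used as xarray index */
        refcount_t refcount;
        u8 dw_selectors[9];
        u8 byte_selectors[8];
    };

    static struct dr_definer_obj *definer_get(struct xarray *xa,
                                              const struct dr_definer_obj *req)
    {
        struct dr_definer_obj *cur;
        unsigned long id;

        /* Reuse an existing definer with the same format, if one is cached */
        xa_for_each(xa, id, cur) {
            if (!memcmp(cur->dw_selectors, req->dw_selectors,
                        sizeof(cur->dw_selectors)) &&
                !memcmp(cur->byte_selectors, req->byte_selectors,
                        sizeof(cur->byte_selectors))) {
                refcount_inc(&cur->refcount);
                return cur;
            }
        }
        /* No match: the caller creates the FW object, sets refcount to 1
         * and inserts it with xa_insert(xa, obj_id, definer, GFP_KERNEL).
         */
        return NULL;
    }

    static void definer_put(struct xarray *xa, struct dr_definer_obj *definer)
    {
        if (!refcount_dec_and_test(&definer->refcount))
            return;
        xa_erase(xa, definer->obj_id);
        /* destroy the MATCH_DEFINER FW object here */
        kfree(definer);
    }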
2022-12-08  net/mlx5: DR, Handle FT action in a separate function  [Yevgeny Kliteynik, 1 file, -46/+81]
As preparation for range action support, move the handling of the final ICM address for the flow table action into a separate function. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5: DR, Rework is_fw_table function  [Yevgeny Kliteynik, 2 files, -11/+18]
This patch handles the following two changes w.r.t. the is_fw_table function:
1. When SW steering is asked to create/destroy a FW table, we allow creation/destruction of only termination tables. Rename mlx5_dr_is_fw_table both to comply with the static function naming and to reflect that we're actually checking for a FW termination table.
2. When the action 'go to flow table' is created, the destination flow table can be any FW table, not only a termination table. Add a function to check if the dest table is a FW table. This function will also be used later when the range match action is created, so put it in the header file.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5: DR, Add functions to create/destroy MATCH_DEFINER general object  [Yevgeny Kliteynik, 2 files, -0/+86]
SW steering is able to match only on the exact values of the packet fields, as requested by the user: the user provides a mask for the fields that are of interest, and the exact values to be matched on when the traffic is handled. A Match Definer is a general FW object that defines which fields in the packet will be referenced by the mask and tag of each STE. The match definer ID is part of the STE fields, and it defines how the HW needs to interpret the STE's mask/tag values. Until now SW steering used the definers that were managed by FW and implemented the STE layout as described by the HW spec. Now that we're adding a new type of STE, SW steering needs to define for the HW how it should interpret this new STE's layout. This is done with a programmable match definer. The programmable definer allows selecting which fields will be included in the definer, and their layout: it has up to 9 DW selectors and 8 Byte selectors. Each selector indicates a DW/Byte worth of fields out of the table that is defined by the HW spec, by referencing the offset of the required DW/Byte. This patch adds dr_cmd functions to create and destroy the MATCH_DEFINER general object. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5: fs, add match on ranges API  [Yevgeny Kliteynik, 3 files, -2/+26]
Range is a new flow destination type which allows matching on a range of values instead of matching on a specific value. Range flow destination has the following fields:
- hit_ft: flow table to forward the traffic in case of hit
- miss_ft: flow table to forward the traffic in case of miss
- field: which packet characteristic to match on
- min: minimal value for the selected field
- max: maximal value for the selected field
Note:
- In order to match, the value in the packet should meet the following criteria: min <= value < max
- Currently, the only supported field type is L2 packet length
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
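A hedged sketch of how a caller might fill the new destination; field and enum names follow the description above and may differ slightly from the exact mlx5 headers:
    struct mlx5_flow_destination dest = {};

    dest.type = MLX5_FLOW_DESTINATION_TYPE_RANGE;
    dest.range.field = MLX5_FLOW_DEST_RANGE_FIELD_PKT_LEN; /* L2 packet length */
    dest.range.hit_ft = green_ft;  /* taken when min <= len < max */
    dest.range.miss_ft = red_ft;   /* taken otherwise */
    dest.range.min = 0;
    dest.range.max = mtu;
The rule itself would then be added as usual, e.g. through mlx5_add_flow_rules() with this destination.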
2022-12-08  net/mlx4: small optimization in mlx4_en_xmit()  [Eric Dumazet, 1 file, -5/+5]
Test against MLX4_MAX_DESC_TXBBS only matters if the TX bounce buffer is going to be used. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Wei Wang <weiwan@google.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx4: MLX4_TX_BOUNCE_BUFFER_SIZE depends on MAX_SKB_FRAGS  [Eric Dumazet, 1 file, -4/+12]
Google production kernel has increased MAX_SKB_FRAGS to 45 for BIG-TCP rollout. Unfortunately mlx4 TX bounce buffer is not big enough whenever an skb has up to 45 page fragments. This can happen often with TCP TX zero copy, as one frag usually holds 4096 bytes of payload (order-0 page).
Tested: Kernel built with MAX_SKB_FRAGS=45
  ip link set dev eth0 gso_max_size 185000
  netperf -t TCP_SENDFILE
I made sure that "ethtool -G eth0 tx 64" was properly working, ring->full_size being set to 15.
Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Wei Wang <weiwan@google.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
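A sketch of the sizing idea only (the actual mlx4 macros and rounding differ): the bounce buffer has to hold a worst-case descriptor built from a control segment, an LSO header segment and one data segment per fragment plus the linear part, rounded up to whole 64-byte TXBBs.
    /* Illustrative, not the exact driver macro */
    #define MLX4_WORST_CASE_DESC_SIZE					\
    	ALIGN(sizeof(struct mlx4_wqe_ctrl_seg) +			\
    	      sizeof(struct mlx4_wqe_lso_seg) +				\
    	      (MAX_SKB_FRAGS + 1) * sizeof(struct mlx4_wqe_data_seg),	\
    	      TXBB_SIZE)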
2022-12-08  net/mlx4: rename two constants  [Eric Dumazet, 2 files, -6/+8]
MAX_DESC_SIZE is really the size of the bounce buffer used when reaching the right side of the TX ring buffer, so it becomes MLX4_TX_BOUNCE_BUFFER_SIZE. MAX_DESC_TXBBS gets an MLX4_ prefix (MLX4_MAX_DESC_TXBBS). Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, allow meter jump control action  [Oz Shlomo, 3 files, -23/+54]
Separate the matchall police action validation from flower validation. Isolate the action validation logic in the police action parser. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-12-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, init post meter rules with branching attributes  [Oz Shlomo, 3 files, -34/+67]
Instantiate the post meter actions with the platform initialized branching action attributes. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-11-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, rename post_meter actions  [Oz Shlomo, 5 files, -33/+33]
Currently post meter supports only the pipe/drop conform-exceed policy. This assumption is reflected in several variable names. Rename the following variables as a pre-step for using the generalized branching action platform. Rename fwd_green_rule/drop_red_rule to green_rule/red_rule respectively. Repurpose red_counter/green_counter to act_counter/drop_counter to allow police conform-exceed configurations that do not drop. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-10-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, initialize branching action with target attr  [Oz Shlomo, 2 files, -5/+83]
Identify the jump target action when iterating the action list. Initialize the jump target attr with the jumping attribute during the parsing phase. Initialize the jumping attr post action with the target during the offload phase. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-9-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, initialize branch flow attributes  [Oz Shlomo, 2 files, -16/+142]
Initialize flow attribute for drop, accept, pipe and jump branching actions. Instantiate a flow attribute instance according to the specified branch control action. Store the branching attributes on the branching action flow attribute during the parsing phase. Then, during the offload phase, allocate the relevant mod header objects to the branching actions. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-8-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, set control params for branching actions  [Oz Shlomo, 2 files, -0/+23]
Extend the act tc api to set the branch control params aligning with the police conform/exceed use case. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-7-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, validate action list per attribute  [Oz Shlomo, 1 file, -30/+32]
Currently the entire flow action list is validated for offload limitations. For example, a flow with both forward and drop actions is declared invalid due to hardware restrictions. However, a multi-table hardware model changes the limitations from a flow scope to a single flow attribute scope. Apply offload limitations to flow attributes instead of the entire flow. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-6-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, add terminating actions  [Oz Shlomo, 7 files, -1/+15]
Extend act api to identify actions that terminate action list. Pre-step for terminating branching actions. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-5-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, reuse flow attribute post parser processing  [Oz Shlomo, 1 file, -45/+51]
After the tc action parsing phase the flow attribute is initialized with relevant eswitch offload objects such as tunnel, vlan, header modify and counter attributes. The post processing is done both for fdb and post-action attributes. Reuse the flow attribute post parsing logic for both fdb and post-action offloads. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-4-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5: fs, assert null dest pointer when dest_num is 0  [Oz Shlomo, 1 file, -0/+3]
Currently create_flow_handle() assumes a null dest pointer when there are no destinations. This might not be the case as the caller may pass an allocated dest array while setting the dest_num parameter to 0. Assert null dest array for flow rules that have no destinations (e.g. drop rule). Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-3-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
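As an illustration of the check being added (its exact placement and form in the patch may differ):
    /* Drop rules carry no destinations, so a non-NULL dest array together
     * with dest_num == 0 indicates a caller bug.
     */
    if (WARN_ON_ONCE(dest_num == 0 && dest))
    	return ERR_PTR(-EINVAL);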
2022-12-08  net/mlx5e: E-Switch, handle flow attribute with no destinations  [Oz Shlomo, 1 file, -0/+5]
Rules with drop action are not required to have a destination. Currently the destination list is allocated with the maximum number of destinations and passed to the fs_core layer along with the actual number of destinations. Remove redundant passing of dest pointer when count of dest is 0. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-2-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: Open mlx5 driver to accept IPsec packet offload  [Leon Romanovsky, 1 file, -10/+31]
Enable configuration of IPsec packet offload through the XFRM state add interface, and move the limitations specific to IPsec packet mode into their own switch-case section. Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
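For reference, assuming an iproute2 new enough to understand the packet offload keyword, such a state would be configured roughly as follows (addresses, SPI, key and reqid are placeholders):
    ip xfrm state add src 192.168.1.1 dst 192.168.1.2 proto esp spi 0x1 reqid 1 \
        mode transport aead 'rfc4106(gcm(aes))' 0x6162636465666768696a6b6c6d6e6f7071727374 128 \
        offload packet dev eth0 dir out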
2022-12-08  net/mlx5e: Handle ESN update events  [Leon Romanovsky, 3 files, -3/+48]
Extend event logic to update ESN state (esn_msb, esn_overlap) for an IPsec Offload context. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-08  net/mlx5e: Handle hardware IPsec limits events  [Leon Romanovsky, 4 files, -5/+118]
Enable object changed event to signal IPsec about hitting soft and hard limits. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-08  net/mlx5e: Update IPsec soft and hard limits  [Leon Romanovsky, 4 files, -0/+127]
Implement mlx5 IPsec callback to update current lifetime counters. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-08  net/mlx5e: Store all XFRM SAs in Xarray  [Leon Romanovsky, 3 files, -74/+28]
Instead of performing custom hash calculations, rely on the FW, which returns a unique identifier for every created SA. That identifier is Xarray ready, which provides better semantics with efficient access. In addition, store both TX and RX SAs to allow correlation between events generated by HW when limits are armed and the XFRM states. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
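A minimal sketch of the pattern, with illustrative names: the FW-assigned SA handle doubles as the xarray index, so event handlers can map a hardware object id straight back to the XFRM state.
    /* Assumes <linux/xarray.h> and <net/xfrm.h> */
    static int ipsec_sa_store(struct xarray *sa_xa, u32 ipsec_obj_id,
    			      struct xfrm_state *x)
    {
    	return xa_err(xa_store(sa_xa, ipsec_obj_id, x, GFP_KERNEL));
    }

    static struct xfrm_state *ipsec_sa_lookup(struct xarray *sa_xa, u32 ipsec_obj_id)
    {
    	return xa_load(sa_xa, ipsec_obj_id);
    }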
2022-12-08  net/mlx5e: Provide intermediate pointer to access IPsec struct  [Leon Romanovsky, 1 file, -6/+6]
Improve readability by providing direct pointer to struct mlx5e_ipsec. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-08  net/mlx5e: Skip IPsec encryption for TX path without matching policy  [Leon Romanovsky, 3 files, -7/+43]
The software implementation of IPsec skips encryption of packets in the TX path if no matching policy is found. Align the HW implementation with this behavior by requiring a matching reqid for the offloaded policy and SA. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-08  net/mlx5e: Add statistics for Rx/Tx IPsec offloaded flows  [Raed Salem, 5 files, -24/+233]
Add the following statistics:
Rx successful IPsec flows:
  ipsec_rx_pkts: Number of packets that passed the Rx IPsec flow
  ipsec_rx_bytes: Number of bytes that passed the Rx IPsec flow
Rx dropped IPsec policy packets:
  ipsec_rx_drop_pkts: Number of packets dropped in the Rx datapath due to IPsec drop policy
  ipsec_rx_drop_bytes: Number of bytes dropped in the Rx datapath due to IPsec drop policy
Tx successfully encrypted and encapsulated IPsec packets:
  ipsec_tx_pkts: Number of packets encrypted and encapsulated successfully
  ipsec_tx_bytes: Number of bytes encrypted and encapsulated successfully
Tx dropped IPsec policy packets:
  ipsec_tx_drop_pkts: Number of packets dropped in the Tx datapath due to IPsec drop policy
  ipsec_tx_drop_bytes: Number of bytes dropped in the Tx datapath due to IPsec drop policy
The above can be seen using: ethtool -S <ifc> | grep ipsec
Signed-off-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-08  net/mlx5e: Improve IPsec flow steering autogroup  [Leon Romanovsky, 1 file, -4/+4]
The flow steering API separates newly created rules based on their match criteria. Right now, all IPsec tables are created with one group and suffer from non-optimal FS performance. Count the number of different match criteria for the relevant tables, and set the proper value at table creation. Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
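A hedged sketch of what setting the proper value looks like with the auto-grouped table API; the table size and group count here are illustrative, not the values chosen by the patch:
    struct mlx5_flow_table_attr ft_attr = {};
    struct mlx5_flow_table *ft;

    ft_attr.max_fte = 128;
    /* one group per distinct match criteria used in this table */
    ft_attr.autogroup.max_num_groups = 2;
    ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr);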
2022-12-08  net/mlx5e: Configure IPsec packet offload flow steering  [Leon Romanovsky, 3 files, -10/+91]
In packet offload mode, the HW is responsible for handling ESP headers, SPI numbers and trailers (ICV), together with different logic for the RX and TX paths. In order to support packet offload mode, special logic is added to the flow steering rules. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-08  net/mlx5e: Use same coding pattern for Rx and Tx flows  [Leon Romanovsky, 1 file, -3/+2]
Remove intermediate variable in favor of having similar coding style for Rx and Tx add rule functions. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-08  net/mlx5e: Add XFRM policy offload logic  [Leon Romanovsky, 3 files, -1/+295]
Implement the mlx5 flow steering logic and mlx5 IPsec code to support XFRM policy offload. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-08  net/mlx5e: Create IPsec policy offload tables  [Leon Romanovsky, 3 files, -6/+56]
Add empty table to be used for IPsec policy offload. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-07  net/mlx5: E-Switch, Implement devlink port function cmds to control migratable  [Shay Drory, 4 files, -0/+116]
Implement devlink port function commands to enable / disable migratable. This is used to control the migratable capability of the device. Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Acked-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
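For example, with a recent enough devlink utility the capability would be toggled per port function roughly like this (the PCI address and port index are placeholders):
    devlink port function set pci/0000:06:00.0/2 migratable enable
    devlink port show pci/0000:06:00.0/2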
2022-12-07  net/mlx5: E-Switch, Implement devlink port function cmds to control RoCE  [Yishai Hadas, 6 files, -1/+176]
Implement devlink port function commands to enable / disable RoCE. This is used to control the RoCE device capabilities. This patch implements the infrastructure which will be used by downstream patches that add additional capabilities. Signed-off-by: Yishai Hadas <yishaih@nvidia.com> Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Parav Pandit <parav@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Acked-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
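Similarly for RoCE (placeholder PCI address and port index again):
    devlink port function set pci/0000:06:00.0/1 roce disable
    devlink port show pci/0000:06:00.0/1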
2022-12-07  net/mlx5: Add generic getters for other functions caps  [Shay Drory, 4 files, -5/+9]
A downstream patch needs to get another function's GENERAL2 caps, while mlx5_vport_get_other_func_cap() gets only one type of caps (general). Rename it to represent this and introduce a generic implementation of mlx5_vport_get_other_func_cap(). Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Acked-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
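A hedged sketch of such a generic getter: the capability type is selected through the opmod of QUERY_HCA_CAP, while other_function/function_id address the queried function. Names are close to, but not necessarily identical to, the driver's:
    static int vport_get_other_func_cap(struct mlx5_core_dev *dev, u16 function_id,
    					void *out, u16 opmod)
    {
    	u32 in[MLX5_ST_SZ_DW(query_hca_cap_in)] = {};

    	MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP);
    	MLX5_SET(query_hca_cap_in, in, op_mod, opmod);
    	MLX5_SET(query_hca_cap_in, in, function_id, function_id);
    	MLX5_SET(query_hca_cap_in, in, other_function, 1);

    	return mlx5_cmd_exec_inout(dev, query_hca_cap, in, out);
    }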
2022-12-06  net/mlx5e: Generalize creation of default IPsec miss group and rule  [Leon Romanovsky, 1 file, -24/+23]
Create general function that sets miss group and rule to forward all not-matched traffic to the next table. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Group IPsec miss handles into separate struct  [Leon Romanovsky, 1 file, -7/+11]
Move miss handles into dedicated struct, so we can reuse it in next patch when creating IPsec policy flow table. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Make clear what IPsec rx_err does  [Leon Romanovsky, 1 file, -22/+16]
Reuse the existing struct that holds all information about the modify header pointer and rule. This helps to reduce the ambiguity of the name _err_, which doesn't describe the real purpose of that flow table, rule and function: to copy the status result from HW to the stack. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Flatten the IPsec RX add rule path  [Leon Romanovsky, 2 files, -37/+53]
Rewrite the IPsec RX add rule path to be less convoluted and to not rely on pre-initialized variables. The code now has a clean linear flow with clean separation between error and success paths. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Refactor FTE setup code to be more clear  [Leon Romanovsky, 1 file, -54/+85]
The policy offload logic needs to set flow steering rules that match on saddr and daddr too, so factor out this code into separate functions, together with aligning the code to the netdev coding pattern of relying on the family type. As part of this change, separate more logic out of setup_fte_common so that the function names describe what is done in each function better than the general *common* name. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Move IPsec flow table creation to separate function  [Leon Romanovsky, 1 file, -22/+23]
Even now, to support IPsec crypto, the RX and TX paths use the same logic to create flow tables. In the following patches, more tables will be added to support IPsec packet offload. So reuse the existing code and rewrite it to support IPsec packet offload from the beginning. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Create hardware IPsec packet offload objects  [Leon Romanovsky, 3 files, -2/+39]
Create initial hardware IPsec packet offload object and connect it to advanced steering operation (ASO) context and queue, so the data path can communicate with the stack. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Create Advanced Steering Operation object for IPsec  [Leon Romanovsky, 4 files, -0/+80]
Set up the ASO (Advanced Steering Operation) object that is needed for IPsec to interact with the SW stack about various fast-changing events: replay window, lifetime limits, etc. Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Remove accesses to priv for low level IPsec FS code  [Leon Romanovsky, 3 files, -59/+56]
The mlx5 priv structure is the driver's main structure and holds high level data. That information is not needed for the IPsec flow steering logic, and the pointer to mlx5e_priv was not supposed to be passed in the first place. This change "cleans" the logic to rely on structures internal to IPsec without touching the global mlx5e_priv. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Use mlx5 print routines for low level IPsec code  [Leon Romanovsky, 1 file, -12/+14]
Low level mlx5 code needs to use mlx5_core print routines and not netdev ones, as the failures are relevant to the HW itself and not to its netdev. This change allows us to remove access to mlx5 priv structure, which holds high level driver data that isn't needed for mlx5 IPsec code. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>