path: root/drivers/infiniband/core/cm.c
Age  Commit message  (Author, files changed, lines -/+)
2021-04-19  RDMA/core: Unify RoCE check and re-factor code  (Håkon Bugge, 1 file changed, -6/+2)
In cm_req_handler(), unify the check for RoCE and re-factor to avoid one test. Link: https://lore.kernel.org/r/1617705423-15570-1-git-send-email-haakon.bugge@oracle.com Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Fixes: 8f9748602491 ("IB/cm: Reduce dependency on gid attribute ndev check") Fixes: 194f64a3cad3 ("RDMA/core: Fix corrupted SL on passive side") Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-04-12  RDMA/core: Correct format of block comments  (Wenpeng Liang, 1 file changed, -1/+2)
Block comments should not use a trailing */ on a separate line and every line of a block comment should start with an '*'. Link: https://lore.kernel.org/r/1617783353-48249-7-git-send-email-liweihang@huawei.com Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-04-12  RDMA/core: Correct format of braces  (Wenpeng Liang, 1 file changed, -2/+1)
Do following cleanups about braces: - Add the necessary braces to maintain context alignment. - Fix the open '{' that is not on the same line as "switch". - Remove braces that are not necessary for single statement blocks. - Fix "else" that doesn't follow close brace '}'. Link: https://lore.kernel.org/r/1617783353-48249-6-git-send-email-liweihang@huawei.com Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-04-12  RDMA/core: Remove redundant spaces  (Wenpeng Liang, 1 file changed, -16/+15)
Space is not required after '(', before ')', before ',' and between '*' and symbol name of a definition. Link: https://lore.kernel.org/r/1617783353-48249-5-git-send-email-liweihang@huawei.com Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-04-12  RDMA/core: Add necessary spaces  (Wenpeng Liang, 1 file changed, -1/+1)
Space is required before '(' of switch statements and around '='. Link: https://lore.kernel.org/r/1617783353-48249-4-git-send-email-liweihang@huawei.com Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-04-01  RDMA/core: Fix corrupted SL on passive side  (Håkon Bugge, 1 file changed, -1/+2)
On RoCE systems, a CM REQ contains a Primary Hop Limit > 1 and the Primary Subnet Local is zero. In cm_req_handler(), the cm_process_routed_req() function is called. Since the Primary Subnet Local value is zero in the request, and since this is RoCE (the Primary Local LID is permissive), the following statement is executed:

   IBA_SET(CM_REQ_PRIMARY_SL, req_msg, wc->sl);

This corrupts the SL in req_msg if it was different from zero. In other words, a request to set up a connection using an SL != zero is not honored, and a connection using SL zero is created instead. Fix this by not calling cm_process_routed_req() on RoCE systems; cm_process_routed_req() is only for IB anyhow.
Fixes: 3971c9f6dbf2 ("IB/cm: Add interim support for routed paths")
Link: https://lore.kernel.org/r/1616420132-31005-1-git-send-email-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
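A minimal sketch of the shape of this fix, assuming the surrounding cm_req_handler() context (work, req_msg and gid_attr are locals of that function; rdma_protocol_roce() and sa_conv_gid_to_pathrec_type() are existing core helpers). It is not the exact upstream diff:

   /* Sketch only: skip the IB-specific path rewrite on RoCE so the SL
    * carried in the REQ is preserved. */
   if (rdma_protocol_roce(work->port->cm_dev->ib_device,
                          work->port->port_num)) {
           /* RoCE: pick the path record type from the GID attribute and
            * leave CM_REQ_PRIMARY_SL in req_msg untouched. */
           work->path[0].rec_type =
                   sa_conv_gid_to_pathrec_type(gid_attr->gid_type);
   } else {
           /* IB only: routed-path handling may rewrite the primary SL. */
           cm_process_routed_req(req_msg, work->mad_recv_wc->wc);
           work->path[0].rec_type = SA_PATH_REC_TYPE_IB;
   }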
2021-03-26  RDMA: Support more than 255 rdma ports  (Mark Bloch, 1 file changed, -6/+6)
Current code uses many different types when dealing with a port of a RDMA device: u8, unsigned int and u32. Switch to u32 to clean up the logic. This allows us to make (at least) the core view consistent and use the same type. Unfortunately not all places can be converted: many uverbs functions expect the port to be u8, so keep those places as-is in order not to break UAPIs. HW/Spec defined values must also not be changed.

With the switch to u32 we can now support devices with more than 255 ports. U32_MAX is reserved to make the control logic a bit easier to deal with. As a device with U32_MAX ports probably isn't going to happen any time soon, this seems like a non-issue.

When a device with more than 255 ports is created, uverbs will report the RDMA device as having 255 ports, as this is the max currently supported. The verbs interface is not changed yet because the IBTA spec limits the port size to u8 in too many places, and all applications that rely on verbs won't be able to cope with this change. At this stage, we are extending only the interfaces that use the vendor channel.

Once the limitation is lifted, mlx5 in switchdev mode will be able to have thousands of SFs created by the device. As the only instance of an RDMA device that reports more than 255 ports will be a representor device, and it exposes itself as a RAW Ethernet only device, CM/MAD/IPoIB and other ULPs aren't affected by this change and their sysfs interfaces that are exposed to userspace can remain unchanged.

While here, clean up some alignment issues and remove unneeded sanity checks (mainly in rdmavt).
Link: https://lore.kernel.org/r/20210301070420.439400-1-leon@kernel.org
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2021-03-01  RDMA/cm: Fix IRQ restore in ib_send_cm_sidr_rep  (Saeed Mahameed, 1 file changed, -2/+3)

   ib_send_cm_sidr_rep()
       spin_lock_irqsave()
       cm_send_sidr_rep_locked()
           ...
           spin_lock_irq()
           ....
           spin_unlock_irq() <--- this will enable interrupts
       spin_unlock_irqrestore()

spin_unlock_irqrestore() expects interrupts to be disabled but the internal spin_unlock_irq() will always enable hard interrupts. Fix this by replacing the internal spin_{lock,unlock}_irq() with irqsave/restore variants.

It fixes the following kernel trace:

   raw_local_irq_restore() called with IRQs enabled
   WARNING: CPU: 2 PID: 20001 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x1d/0x20
   Call Trace:
    _raw_spin_unlock_irqrestore+0x4e/0x50
    ib_send_cm_sidr_rep+0x3a/0x50 [ib_cm]
    cma_send_sidr_rep+0xa1/0x160 [rdma_cm]
    rdma_accept+0x25e/0x350 [rdma_cm]
    ucma_accept+0x132/0x1cc [rdma_ucm]
    ucma_write+0xbf/0x140 [rdma_ucm]
    vfs_write+0xc1/0x340
    ksys_write+0xb3/0xe0
    do_syscall_64+0x2d/0x40
    entry_SYSCALL_64_after_hwframe+0x44/0xae

Fixes: 87c4c774cbef ("RDMA/cm: Protect access to remote_sidr_table")
Link: https://lore.kernel.org/r/20210301081844.445823-1-leon@kernel.org
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
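A self-contained sketch of the locking pattern the fix switches to (the struct and function names here are hypothetical, not the cm.c code): the inner helper must use the irqsave/irqrestore variants so the caller's saved flags stay valid.

   #include <linux/spinlock.h>

   struct two_locks {
           spinlock_t outer_lock;
           spinlock_t inner_lock;
   };

   static void inner_locked(struct two_locks *t)
   {
           unsigned long flags;

           /* Before the fix this was spin_lock_irq()/spin_unlock_irq();
            * spin_unlock_irq() unconditionally re-enables interrupts and
            * breaks the caller's spin_unlock_irqrestore(). */
           spin_lock_irqsave(&t->inner_lock, flags);
           /* ... */
           spin_unlock_irqrestore(&t->inner_lock, flags);
   }

   static void outer(struct two_locks *t)
   {
           unsigned long flags;

           spin_lock_irqsave(&t->outer_lock, flags);
           inner_locked(t);
           /* Interrupts are still disabled here, so restoring 'flags' is valid. */
           spin_unlock_irqrestore(&t->outer_lock, flags);
   }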
2021-02-02  IB/cm: Avoid a loop when device has 255 ports  (Parav Pandit, 1 file changed, -4/+4)
When an RDMA device has 255 ports, the loop iterator i overflows, which makes the port loop in cm_add_one() iterate infinitely. Use the core-provided port iterator to avoid the infinite loop.
Fixes: a977049dacde ("[PATCH] IB: Add the kernel CM implementation")
Link: https://lore.kernel.org/r/20210127150010.1876121-9-leon@kernel.org
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
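An illustrative sketch of the overflow and of the core-provided iterator used by the fix; setup_port() is a hypothetical stand-in for the per-port work done in cm_add_one():

   /* Buggy shape: with phys_port_cnt == 255 a u8 counter wraps 255 -> 0,
    * so "i <= 255" never becomes false and the loop never terminates. */
   u8 i;

   for (i = 1; i <= ib_device->phys_port_cnt; i++)
           setup_port(ib_device, i);

   /* Fixed shape: let the core iterate the valid port numbers. */
   u32 p;

   rdma_for_each_port(ib_device, p)
           setup_port(ib_device, p);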
2020-12-16  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma  (Linus Torvalds, 1 file changed, -4/+5)
Pull rdma updates from Jason Gunthorpe:
 "A smaller set of patches, nothing stands out as being particularly major this cycle. The biggest item would be the new HIP09 HW support from HNS, otherwise it was pretty quiet for new work here:

  - Driver bug fixes and updates: bnxt_re, cxgb4, rxe, hns, i40iw, cxgb4, mlx4 and mlx5
  - Bug fixes and polishing for the new rtrs ULP
  - Cleanup of uverbs checking for allowed driver operations
  - Use sysfs_emit all over the place
  - Lots of bug fixes and clarity improvements for hns
  - hip09 support for hns
  - NDR and 50/100Gb signaling rates
  - Remove dma_virt_ops and go back to using the IB DMA wrappers
  - mlx5 optimizations for contiguous DMA regions"

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: (147 commits)
  RDMA/cma: Don't overwrite sgid_attr after device is released
  RDMA/mlx5: Fix MR cache memory leak
  RDMA/rxe: Use acquire/release for memory ordering
  RDMA/hns: Simplify AEQE process for different types of queue
  RDMA/hns: Fix inaccurate prints
  RDMA/hns: Fix incorrect symbol types
  RDMA/hns: Clear redundant variable initialization
  RDMA/hns: Fix coding style issues
  RDMA/hns: Remove unnecessary access right set during INIT2INIT
  RDMA/hns: WARN_ON if get a reserved sl from users
  RDMA/hns: Avoid filling sl in high 3 bits of vlan_id
  RDMA/hns: Do shift on traffic class when using RoCEv2
  RDMA/hns: Normalization the judgment of some features
  RDMA/hns: Limit the length of data copied between kernel and userspace
  RDMA/mlx4: Remove bogus dev_base_lock usage
  RDMA/uverbs: Fix incorrect variable type
  RDMA/core: Do not indicate device ready when device enablement fails
  RDMA/core: Clean up cq pool mechanism
  RDMA/core: Update kernel documentation for ib_create_named_qp()
  MAINTAINERS: SOFT-ROCE: Change Zhu Yanjun's email address
  ...
2020-12-09  RDMA/cm: Fix an attempt to use non-valid pointer when cleaning timewait  (Leon Romanovsky, 1 file changed, -0/+2)
If cm_create_timewait_info() fails, the timewait_info pointer will contain an error value and will be used in cm_remove_remote() later.

   general protection fault, probably for non-canonical address 0xdffffc0000000024: 0000 [#1] SMP KASAN PTI
   KASAN: null-ptr-deref in range [0x0000000000000120-0x0000000000000127]
   CPU: 2 PID: 12446 Comm: syz-executor.3 Not tainted 5.10.0-rc5-5d4c0742a60e #27
   Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
   RIP: 0010:cm_remove_remote.isra.0+0x24/0x170 drivers/infiniband/core/cm.c:978
   Code: 84 00 00 00 00 00 41 54 55 53 48 89 fb 48 8d ab 2d 01 00 00 e8 7d bf 4b fe 48 89 ea 48 b8 00 00 00 00 00 fc ff df 48 c1 ea 03 <0f> b6 04 02 48 89 ea 83 e2 07 38 d0 7f 08 84 c0 0f 85 fc 00 00 00
   RSP: 0018:ffff888013127918 EFLAGS: 00010006
   RAX: dffffc0000000000 RBX: fffffffffffffff4 RCX: ffffc9000a18b000
   RDX: 0000000000000024 RSI: ffffffff82edc573 RDI: fffffffffffffff4
   RBP: 0000000000000121 R08: 0000000000000001 R09: ffffed1002624f1d
   R10: 0000000000000003 R11: ffffed1002624f1c R12: ffff888107760c70
   R13: ffff888107760c40 R14: fffffffffffffff4 R15: ffff888107760c9c
   FS:  00007fe1ffcc1700(0000) GS:ffff88811a600000(0000) knlGS:0000000000000000
   CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
   CR2: 0000001b2ff21000 CR3: 000000010f504001 CR4: 0000000000370ee0
   DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
   DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
   Call Trace:
    cm_destroy_id+0x189/0x15b0 drivers/infiniband/core/cm.c:1155
    cma_connect_ib drivers/infiniband/core/cma.c:4029 [inline]
    rdma_connect_locked+0x1100/0x17c0 drivers/infiniband/core/cma.c:4107
    rdma_connect+0x2a/0x40 drivers/infiniband/core/cma.c:4140
    ucma_connect+0x277/0x340 drivers/infiniband/core/ucma.c:1069
    ucma_write+0x236/0x2f0 drivers/infiniband/core/ucma.c:1724
    vfs_write+0x220/0x830 fs/read_write.c:603
    ksys_write+0x1df/0x240 fs/read_write.c:658
    do_syscall_64+0x33/0x40 arch/x86/entry/common.c:46
    entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: a977049dacde ("[PATCH] IB: Add the kernel CM implementation")
Link: https://lore.kernel.org/r/20201204064205.145795-1-leon@kernel.org
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Reported-by: Amit Matityahu <mitm@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
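A hedged sketch of the failure mode and the fix, assuming the REQ handler stores the result directly into cm_id_priv->timewait_info:

   cm_id_priv->timewait_info =
           cm_create_timewait_info(cm_id_priv->id.local_id);
   if (IS_ERR(cm_id_priv->timewait_info)) {
           ret = PTR_ERR(cm_id_priv->timewait_info);
           /* The fix: clear the pointer so the later cm_destroy_id() /
            * cm_remove_remote() path does not treat the ERR_PTR() value
            * as a valid timewait_info. */
           cm_id_priv->timewait_info = NULL;
           goto destroy;
   }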
2020-12-07  IB: Fix kernel-doc markups  (Mauro Carvalho Chehab, 1 file changed, -2/+3)
Some functions have different names between their prototypes and the kernel-doc markup. Others need to be fixed, as kernel-doc markups should use this format: identifier - description Link: https://lore.kernel.org/r/78b98c41a5a0f4c0106433d305b143028a4168b0.1606823973.git.mchehab+huawei@kernel.org Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-17  Merge branch 'for-rc' into rdma.git  (Jason Gunthorpe, 1 file changed, -6/+6)
From https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git The rc RDMA branch is needed due to dependencies on the next patches. Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-11-12  RDMA/cm: Make the local_id_table xarray non-irq  (Jason Gunthorpe, 1 file changed, -6/+6)
The xarray is never mutated from an IRQ handler, only from work queues under a spinlock_irq. Thus there is no reason for it be an IRQ type xarray. This was copied over from the original IDR code, but the recent rework put the xarray inside another spinlock_irq which will unbalance the unlocking. Fixes: c206f8bad15d ("RDMA/cm: Make it clearer how concurrency works in cm_req_handler()") Link: https://lore.kernel.org/r/0-v1-808b6da3bd3f+1857-cm_xarray_no_irq_jgg@nvidia.com Reported-by: Matthew Wilcox <willy@infradead.org> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-10-30  RDMA: Convert various random sprintf sysfs _show uses to sysfs_emit  (Joe Perches, 1 file changed, -2/+2)
Manual changes for sysfs_emit as cocci scripts can't easily convert them. Link: https://lore.kernel.org/r/ecde7791467cddb570c6f6d2c908ffbab9145cac.1602122880.git.joe@perches.com Signed-off-by: Joe Perches <joe@perches.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Acked-by: Jack Wang <jinpu.wang@cloud.ionos.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
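The typical shape of such a conversion (illustrative attribute and value, not the exact cm.c hunk); sysfs_emit() knows the PAGE_SIZE bound of the sysfs buffer, which an open-coded sprintf() does not:

   static ssize_t foo_show(struct device *dev, struct device_attribute *attr,
                           char *buf)
   {
           long value = 42;        /* placeholder for the real counter */

           /* before: return sprintf(buf, "%ld\n", value); */
           return sysfs_emit(buf, "%ld\n", value);
   }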
2020-08-31  Merge tag 'v5.9-rc3' into rdma.git for-next  (Jason Gunthorpe, 1 file changed, -6/+6)
Required due to dependencies in following patches. Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-08-24  RDMA/cm: Add tracepoints to track MAD send operations  (Chuck Lever, 1 file changed, -2/+20)
Surface the operation of MAD exchanges during connection establishment. Some samples:

   [root@klimt ~]# trace-cmd report -F ib_cma
   cpus=4
    kworker/0:4-123    [000]  60.677388: icm_send_rep:  local_id=1965336542 remote_id=1096195961 state=REQ_RCVD lap_state=LAP_UNINIT
    kworker/u8:11-391  [002]  60.678808: icm_send_req:  local_id=1982113758 remote_id=0 state=IDLE lap_state=LAP_UNINIT
    kworker/0:4-123    [000]  60.679652: icm_send_rtu:  local_id=1982113758 remote_id=1079418745 state=REP_RCVD lap_state=LAP_UNINIT
    nfsd-1954          [001]  60.691350: icm_send_rep:  local_id=1998890974 remote_id=1129750393 state=MRA_REQ_SENT lap_state=LAP_UNINIT
    nfsd-1954          [003]  62.017931: icm_send_drep: local_id=1998890974 remote_id=1129750393 state=TIMEWAIT lap_state=LAP_UNINIT

Link: https://lore.kernel.org/r/159767240197.2968.12048458026453596018.stgit@klimt.1015granger.net
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-08-24  RDMA/cm: Replace pr_debug() call sites with tracepoints  (Chuck Lever, 1 file changed, -54/+26)
In the interest of converging on a common instrumentation infrastructure, modernize the pr_debug() call sites added by commit 119bf81793ea ("IB/cm: Add debug prints to ib_cm"). The new tracepoints appear in a new "ib_cma" subsystem.

The conversion is somewhat mechanical. Someone more familiar with the semantics of the recorded information might suggest additional data capture.

Some benefits include:
 - Tracepoints enable "always on" reporting of these errors
 - The error records are structured and compact
 - Tracepoints provide hooks for eBPF scripts

Sample output:

   nfsd-1954  [003]  62.017901: icm_dreq_skipped:  local_id=1998890974 remote_id=1129750393 state=DREQ_RCVD lap_state=LAP_UNINIT

Link: https://lore.kernel.org/r/159767239665.2968.10613294222688696646.stgit@klimt.1015granger.net
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-08-23  treewide: Use fallthrough pseudo-keyword  (Gustavo A. R. Silva, 1 file changed, -6/+6)
Replace the existing /* fall through */ comments and its variants with the new pseudo-keyword macro fallthrough[1]. Also, remove unnecessary fall-through markings when it is the case. [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
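A representative before/after for one of the converted switch cases (the state names are real CM states, but the exact cases touched by the treewide change are not reproduced here):

   switch (cm_id_priv->id.state) {
   case IB_CM_REQ_SENT:
           /* was: a "fall through" comment, invisible to the compiler */
           fallthrough;    /* now checked by -Wimplicit-fallthrough */
   case IB_CM_MRA_REQ_RCVD:
           /* common handling for both states */
           break;
   default:
           break;
   }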
2020-08-18  RDMA/cm: Remove unused cm_class  (Jason Gunthorpe, 1 file changed, -24/+0)
Previous commits removed all references to the /sys/class/infiniband_cm/ directory represented by the cm_class symbol. Remove the directory and cm_class. Fixes: a1a8e4a85cf7 ("rdma: Delete the ib_ucm module") Link: https://lore.kernel.org/r/0-v1-90096a98c476+205-remove_cm_leftovers_jgg@nvidia.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-07-16  RDMA/cm: Protect access to remote_sidr_table  (Maor Gottlieb, 1 file changed, -0/+2)
cm.lock must be held while accessing remote_sidr_table. This fixes the below NULL pointer dereference.

   BUG: kernel NULL pointer dereference, address: 0000000000000000
   #PF: supervisor write access in kernel mode
   #PF: error_code(0x0002) - not-present page
   PGD 0 P4D 0
   Oops: 0002 [#1] SMP PTI
   CPU: 2 PID: 7288 Comm: udaddy Not tainted 5.7.0_for_upstream_perf_2020_06_09_15_14_20_38 #1
   Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
   RIP: 0010:rb_erase+0x10d/0x360
   Code: 00 00 00 48 89 c1 48 89 d0 48 8b 50 08 48 39 ca 74 48 f6 02 01 75 af 48 8b 7a 10 48 89 c1 48 83 c9 01 48 89 78 08 48 89 42 10 <48> 89 0f 48 8b 08 48 89 0a 48 83 e1 fc 48 89 10 0f 84 b1 00 00 00
   RSP: 0018:ffffc90000f77c30 EFLAGS: 00010086
   RAX: ffff8883df27d458 RBX: ffff8883df27da58 RCX: ffff8883df27d459
   RDX: ffff8883d183fa58 RSI: ffffffffa01e8d00 RDI: 0000000000000000
   RBP: ffff8883d62ac800 R08: 0000000000000000 R09: 00000000000000ce
   R10: 000000000000000a R11: 0000000000000000 R12: ffff8883df27da00
   R13: ffffc90000f77c98 R14: 0000000000000130 R15: 0000000000000000
   FS:  00007f009f877740(0000) GS:ffff8883f1a00000(0000) knlGS:0000000000000000
   CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
   CR2: 0000000000000000 CR3: 00000003d467e003 CR4: 0000000000160ee0
   Call Trace:
    cm_send_sidr_rep_locked+0x15a/0x1a0 [ib_cm]
    ib_send_cm_sidr_rep+0x2b/0x50 [ib_cm]
    cma_send_sidr_rep+0x8b/0xe0 [rdma_cm]
    __rdma_accept+0x21d/0x2b0 [rdma_cm]
    ? ucma_get_ctx+0x2b/0xe0 [rdma_ucm]
    ? _copy_from_user+0x30/0x60
    ucma_accept+0x13e/0x1e0 [rdma_ucm]
    ucma_write+0xb4/0x130 [rdma_ucm]
    vfs_write+0xad/0x1a0
    ksys_write+0x9d/0xb0
    do_syscall_64+0x48/0x130
    entry_SYSCALL_64_after_hwframe+0x44/0xa9
   RIP: 0033:0x7f009ef60924
   Code: 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 80 00 00 00 00 8b 05 2a ef 2c 00 48 63 ff 85 c0 75 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 f3 c3 66 90 55 53 48 89 d5 48 89 f3 48 83
   RSP: 002b:00007fff843edf38 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
   RAX: ffffffffffffffda RBX: 000055743042e1d0 RCX: 00007f009ef60924
   RDX: 0000000000000130 RSI: 00007fff843edf40 RDI: 0000000000000003
   RBP: 00007fff843ee0e0 R08: 0000000000000000 R09: 0000557430433090
   R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000000
   R13: 00007fff843edf40 R14: 000000000000038c R15: 00000000ffffff00
   CR2: 0000000000000000

Fixes: 6a8824a74bc9 ("RDMA/cm: Allow ib_send_cm_sidr_rep() to be done under lock")
Link: https://lore.kernel.org/r/20200716105519.1424266-1-leon@kernel.org
Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
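A sketch of the serialization this fix adds (field names as used in cm.c, context abbreviated): the SIDR node may only be erased from cm.remote_sidr_table while cm.lock is held.

   spin_lock_irqsave(&cm.lock, flags);
   if (!RB_EMPTY_NODE(&cm_id_priv->sidr_id_node)) {
           /* erase and clear the node while holding the table lock */
           rb_erase(&cm_id_priv->sidr_id_node, &cm.remote_sidr_table);
           RB_CLEAR_NODE(&cm_id_priv->sidr_id_node);
   }
   spin_unlock_irqrestore(&cm.lock, flags);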
2020-06-18  RDMA/core: Annotate CMA unlock helper routine  (Leon Romanovsky, 1 file changed, -0/+1)
Fix the following sparse error by annotating cm_queue_work_unlock() to indicate that it releases the cm_id_priv->lock lock:

   drivers/infiniband/core/cm.c:936:24: warning: context imbalance in 'cm_queue_work_unlock' - unexpected unlock

Fixes: e83f195aa45c ("RDMA/cm: Pull duplicated code into cm_queue_work_unlock()")
Link: https://lore.kernel.org/r/20200611130045.1994026-1-leon@kernel.org
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
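The annotation itself is a one-liner; a hedged sketch of what it looks like, assuming this function signature:

   static void cm_queue_work_unlock(struct cm_id_private *cm_id_priv,
                                    struct cm_work *work)
           __releases(&cm_id_priv->lock)
   {
           /* ... queue or hand off the work ... */
           spin_unlock_irq(&cm_id_priv->lock);
   }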
2020-06-03  RDMA/cm: Spurious WARNING triggered in cm_destroy_id()  (Ka-Cheong Poon, 1 file changed, -1/+3)
If the cm_id state is IB_CM_REP_SENT when cm_destroy_id() is called, it calls cm_send_rej_locked(). In cm_send_rej_locked(), it calls cm_enter_timewait() and the state is changed to IB_CM_TIMEWAIT. Now back to cm_destroy_id(), it breaks from the switch statement, and the next call is WARN_ON(cm_id->state != IB_CM_IDLE). This triggers a spurious warning. Instead, the code should goto retest after returning from cm_send_rej_locked() to move the state to IDLE. Fixes: 67b3c8dceac6 ("RDMA/cm: Make sure the cm_id is in the IB_CM_IDLE state in destroy") Link: https://lore.kernel.org/r/1591191218-9446-1-git-send-email-ka-cheong.poon@oracle.com Signed-off-by: Ka-Cheong Poon <ka-cheong.poon@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-27  RDMA/cm: Send and receive ECE parameter over the wire  (Leon Romanovsky, 1 file changed, -5/+34)
ECE parameters are exchanged through REQ->REP/SIDR_REP messages; this patch adds the data to provide to the other side of the CMID communication channel.
Link: https://lore.kernel.org/r/20200526103304.196371-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-21  Merge tag 'v5.7-rc6' into rdma.git for-next  (Jason Gunthorpe, 1 file changed, -12/+14)
Linux 5.7-rc6 Conflict in drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c resolved by deleting dr_cq_event, matching how netdev resolved it. Required for dependencies in the following patches. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-12  RDMA/cm: Increment the refcount inside cm_find_listen()  (Jason Gunthorpe, 1 file changed, -4/+3)
All callers need the 'get', so do it in a central place before returning the pointer. Link: https://lore.kernel.org/r/20200506074701.9775-11-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-12  RDMA/cm: Remove needless cm_id variable  (Jason Gunthorpe, 1 file changed, -6/+2)
Just put the expression in the only reader.
Link: https://lore.kernel.org/r/20200506074701.9775-10-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-12  RDMA/cm: Remove the cm_free_id() wrapper function  (Jason Gunthorpe, 1 file changed, -7/+2)
Just call xa_erase directly during cm_destroy_id().
Link: https://lore.kernel.org/r/20200506074701.9775-9-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-12  RDMA/cm: Make find_remote_id() return a cm_id_private  (Jason Gunthorpe, 1 file changed, -15/+12)
The only caller doesn't care about the timewait, so acquire and return the cm_id_private from the function. Link: https://lore.kernel.org/r/20200506074701.9775-8-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-12  RDMA/cm: Add a note explaining how the timewait is eventually freed  (Jason Gunthorpe, 1 file changed, -0/+5)
The way the cm_timewait_info is converted into a work and then freed is very subtle and surprising, add a note clarifying the lifetime here. Link: https://lore.kernel.org/r/20200506074701.9775-7-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-12  RDMA/cm: Pass the cm_id_private into cm_cleanup_timewait  (Jason Gunthorpe, 1 file changed, -9/+9)
Also rename it to cm_remove_remote(). This function now removes the tracking of the remote ID/QPN in the red-black trees from a cm_id_private. Replace an open-coded version with a call. The open-coded version was deleting only the remote_id; however, at this call site the qpn cannot have been in the RB tree either, so cm_remove_remote() will do the same.
Link: https://lore.kernel.org/r/20200506074701.9775-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-12  RDMA/cm: Pull duplicated code into cm_queue_work_unlock()  (Jason Gunthorpe, 1 file changed, -102/+44)
While unlocking a spinlock held by the caller is a disturbing pattern, this extensively duplicated code is even worse. Pull all the duplicates into a function and explain the purpose of the algorithm. The call on the creation side in cm_req_handler(), which is slightly different, had been micro-optimized on the basis that work_count == -1 during creation; remove that and just use the normal function.
Link: https://lore.kernel.org/r/20200506074701.9775-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-12  RDMA/cm: Remove unused store to ret in cm_rej_handler  (Danit Goldberg, 1 file changed, -1/+0)
The 'goto out' label doesn't read ret, so don't set it. Link: https://lore.kernel.org/r/20200506074701.9775-4-leon@kernel.org Signed-off-by: Danit Goldberg <danitg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-12  RDMA/cm: Remove return code from add_cm_id_to_port_list  (Jason Gunthorpe, 1 file changed, -14/+4)
This cannot happen, all callers pass in one of the two pointers. Use a WARN_ON guard instead. Link: https://lore.kernel.org/r/20200506074701.9775-3-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-05-06  RDMA: Allow ib_client's to fail when add() is called  (Jason Gunthorpe, 1 file changed, -10/+14)
When a client is added it isn't allowed to fail, but all the clients have various failure paths within their add routines. This creates the very fringe condition where the client was added, failed during add and didn't set the client_data. The core code will then still call other client_data centric ops like remove(), rename(), get_nl_info(), and get_net_dev_by_params() with NULL client_data, which is confusing and unexpected.

If the add() callback fails, then do not call any more client ops for the device, not even remove(). Remove all the now redundant checks for NULL client_data in ops callbacks.

Update all the add() callbacks to return error codes appropriately. EOPNOTSUPP is used for cases where the ULP does not support the ib_device, e.g. because it only works with IB.
Link: https://lore.kernel.org/r/20200421172440.387069-1-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Acked-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
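After this change an ib_client's add() callback returns an int; a sketch of what the cm.c client registration looks like under that assumption (error handling inside cm_add_one() elided):

   static int cm_add_one(struct ib_device *ib_device);
   static void cm_remove_one(struct ib_device *ib_device, void *client_data);

   static struct ib_client cm_client = {
           .name   = "cm",
           .add    = cm_add_one,   /* 0 on success, or a negative errno such as -EOPNOTSUPP */
           .remove = cm_remove_one,
   };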
2020-04-14  RDMA/cm: Fix an error check in cm_alloc_id_priv()  (Dan Carpenter, 1 file changed, -1/+1)
The xa_alloc_cyclic_irq() function returns either 0 or 1 on success and negatives on error. This code treats 1 as an error and returns ERR_PTR(1) which will cause an Oops in the caller. Fixes: ae78ff3a0f0c ("RDMA/cm: Convert local_id_table to XArray") Link: https://lore.kernel.org/r/20200407093714.GA80285@mwanda Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
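The gotcha, sketched against the cm local_id allocation (argument names approximate): xa_alloc_cyclic_irq() returns 0 on success, 1 when the allocation succeeded but the cyclic counter wrapped, and a negative errno on failure, so only negative values are errors.

   u32 id;
   int err;

   err = xa_alloc_cyclic_irq(&cm.local_id_table, &id, NULL, xa_limit_32b,
                             &cm.local_id_next, GFP_KERNEL);
   /* buggy:  if (err)      -- also rejects the legitimate return value 1 */
   /* fixed:  if (err < 0)  -- only real failures take the error path     */
   if (err < 0)
           goto error;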
2020-04-14  RDMA/cm: Fix missing RDMA_CM_EVENT_REJECTED event after receiving REJ message  (Leon Romanovsky, 1 file changed, -11/+13)
The cm_reset_to_idle() call before formatting the event changed the CM_ID state from IB_CM_REQ_RCVD to IB_CM_IDLE. This caused a wrong value in the CM_REJ_MESSAGE_REJECTED field. The result was that rdma_reject() calls on the passive side didn't generate an RDMA_CM_EVENT_REJECTED event on the active side.
Fixes: 81ddb41f876d ("RDMA/cm: Allow ib_send_cm_rej() to be done under lock")
Link: https://lore.kernel.org/r/20200406173242.1465911-1-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/cm: Make sure the cm_id is in the IB_CM_IDLE state in destroy  (Jason Gunthorpe, 1 file changed, -20/+21)
The first switch statement in cm_destroy_id() tries to move the ID to either IB_CM_IDLE or IB_CM_TIMEWAIT. Both states will block concurrent MAD handlers from progressing.

Previous patches removed the unreliable lock/unlock sequences in this flow; this patch removes the extra locking steps and adds the missing parts to guarantee that destroy reaches IB_CM_IDLE. There is no point in leaving the ID in the IB_CM_TIMEWAIT state when the memory is about to be kfreed.

Rework things to hold the lock across all the state transitions and directly assert when done that it ended up in IB_CM_IDLE as expected. This was accompanied by a careful audit of all the state transitions here, which generally did end up in IDLE on their success and non-racy paths.
Link: https://lore.kernel.org/r/20200310092545.251365-16-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/cm: Allow ib_send_cm_sidr_rep() to be done under lock  (Jason Gunthorpe, 1 file changed, -30/+28)
The first thing ib_send_cm_sidr_rep() does is obtain the lock, so use the usual unlocked wrapper, locked actor pattern here. Get rid of the cm_reject_sidr_req() wrapper so each call site can call the locked or unlocked version as required. This avoids a sketchy lock/unlock sequence (which could allow state to change) during cm_destroy_id(). Link: https://lore.kernel.org/r/20200310092545.251365-15-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
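A condensed sketch of the "unlocked wrapper, locked actor" split used throughout this series; the real functions carry more state checking than shown here:

   static int cm_send_sidr_rep_locked(struct cm_id_private *cm_id_priv,
                                      struct ib_cm_sidr_rep_param *param)
   {
           lockdep_assert_held(&cm_id_priv->lock);
           /* state checks, MAD formatting and the state transition all
            * happen while cm_id_priv->lock is held */
           return 0;
   }

   int ib_send_cm_sidr_rep(struct ib_cm_id *cm_id,
                           struct ib_cm_sidr_rep_param *param)
   {
           struct cm_id_private *cm_id_priv =
                   container_of(cm_id, struct cm_id_private, id);
           unsigned long flags;
           int ret;

           spin_lock_irqsave(&cm_id_priv->lock, flags);
           ret = cm_send_sidr_rep_locked(cm_id_priv, param);
           spin_unlock_irqrestore(&cm_id_priv->lock, flags);
           return ret;
   }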
2020-03-17  RDMA/cm: Allow ib_send_cm_rej() to be done under lock  (Jason Gunthorpe, 1 file changed, -40/+52)
The first thing ib_send_cm_rej() does is obtain the lock, so use the usual unlocked wrapper, locked actor pattern here. This avoids a sketchy lock/unlock sequence (which could allow state to change) during cm_destroy_id(). While here simplify some of the logic in the implementation. Link: https://lore.kernel.org/r/20200310092545.251365-14-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/cm: Allow ib_send_cm_drep() to be done under lock  (Jason Gunthorpe, 1 file changed, -22/+33)
The first thing ib_send_cm_drep() does is obtain the lock, so use the usual unlocked wrapper, locked actor pattern here. This avoids a sketchy lock/unlock sequence (which could allow state to change) during cm_destroy_id(). Link: https://lore.kernel.org/r/20200310092545.251365-13-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/cm: Allow ib_send_cm_dreq() to be done under lock  (Jason Gunthorpe, 1 file changed, -20/+34)
The first thing ib_send_cm_dreq() does is obtain the lock, so use the usual unlocked wrapper, locked actor pattern here. This avoids a sketchy lock/unlock sequence (which could allow state to change) during cm_destroy_id(). Link: https://lore.kernel.org/r/20200310092545.251365-12-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/cm: Add some lockdep assertions for cm_id_priv->lock  (Jason Gunthorpe, 1 file changed, -0/+6)
These functions all touch state, so must be called under the lock. Inspection shows this is currently true. Link: https://lore.kernel.org/r/20200310092545.251365-11-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
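The assertions are one-liners of this form; the particular functions annotated are not listed in the commit message, so the function shown here is illustrative:

   static void cm_enter_timewait(struct cm_id_private *cm_id_priv)
   {
           lockdep_assert_held(&cm_id_priv->lock);
           /* ... state transition to IB_CM_TIMEWAIT ... */
   }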
2020-03-17  RDMA/cm: Add missing locking around id.state in cm_dup_req_handler  (Jason Gunthorpe, 1 file changed, -1/+5)
All accesses to id.state must be done under the spinlock. Fixes: a977049dacde ("[PATCH] IB: Add the kernel CM implementation") Link: https://lore.kernel.org/r/20200310092545.251365-10-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/cm: Make it clearer how concurrency works in cm_req_handler()  (Jason Gunthorpe, 1 file changed, -42/+57)
ib_create_cm_id() immediately places the id in the xarray, and publishes it into the remote_id and remote_qpn rbtrees. This makes it visible to other threads before it is fully set up.

It appears the thinking here was that the states IB_CM_IDLE and IB_CM_REQ_RCVD do not allow any MAD handler or lookup in the remote_id and remote_qpn rbtrees to advance. However, cm_rej_handler() does take an action on IB_CM_REQ_RCVD, which is not really expected by the design.

Make the whole thing clearer:
 - Keep the new cm_id out of the xarray until it is completely set up. This directly prevents MAD handlers and all rbtree lookups from seeing the pointer.
 - Move all the trivial setup right to the top so it is obviously done before any concurrency begins
 - Move the mutation of the cm_id_priv out of cm_match_id() and into the caller so the state transition is obvious
 - Place the manipulation of the work_list at the end, under lock, after the cm_id is placed in the xarray. The work_count cannot change on an ID outside the xarray.
 - Add some comments
Link: https://lore.kernel.org/r/20200310092545.251365-9-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/cm: Make it clear that there is no concurrency in cm_sidr_req_handler()  (Jason Gunthorpe, 1 file changed, -27/+37)
ib_create_cm_id() immediately places the id in the xarray, so it is visible to network traffic. The state is initially set to IB_CM_IDLE and all the MAD handlers will test this state under lock and refuse to advance from IDLE, so adding to the xarray is harmless. Further, the set to IB_CM_SIDR_REQ_RCVD also excludes all MAD handlers.

However, the local_id isn't even used for SIDR mode, and there will be no input MADs related to the newly created ID. So, make the whole flow simpler so it can be understood:
 - Do not put the SIDR cm_id in the xarray. This directly shows that there is no concurrency
 - Delete the confusing work_count and pending_list manipulations. This mechanism is only used by MAD handlers and timewait, neither of which apply to SIDR.
 - Add a few comments and rename 'cur_cm_id_priv' to 'listen_cm_id_priv'
 - Move other loose sets up to immediately after cm_id creation so that the cm_id is fully configured right away. This fixes an oversight where the service_id will not be returned back on a IB_SIDR_UNSUPPORTED reject.
Link: https://lore.kernel.org/r/20200310092545.251365-8-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/cm: Read id.state under lock when doing pr_debug()  (Jason Gunthorpe, 1 file changed, -4/+4)
The lock should not be dropped before doing the pr_debug() print as it is accessing data protected by the lock, such as id.state. Fixes: 119bf81793ea ("IB/cm: Add debug prints to ib_cm") Link: https://lore.kernel.org/r/20200310092545.251365-7-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/cm: Simplify establishing a listen cm_id  (Jason Gunthorpe, 1 file changed, -83/+116)
Any manipulation of cm_id->state must be done under the cm_id_priv->lock; the two routines that added listens did not follow this rule, because they never participate in any concurrent access around the state.

However, since this exception makes the code hard to understand, simplify the flow so that it can be fully locked:
 - Move manipulation of listen_sharecount into cm_insert_listen() so it is trivially under the cm.lock without having to expose the cm.lock to the caller.
 - Push the cm.lock down into cm_insert_listen() and have the function increment the reference count before returning an existing pointer.
 - Split ib_cm_listen() into a cm_init_listen() and do not call ib_cm_listen() from ib_cm_insert_listen()
 - Make both ib_cm_listen() and ib_cm_insert_listen() directly call cm_insert_listen() under their cm_id_priv->lock which does both a collision detect and, if needed, the insert (atomically)
 - Enclose all state manipulation within the cm_id_priv->lock; notice this set can be done safely after cm_insert_listen() as no reader is allowed to read the state without holding the lock.
 - Do not set the listen cm_id in the xarray, as it is never correct to look it up. This makes the concurrency simpler to understand.
Many needless error unwinds are removed in the process.
Link: https://lore.kernel.org/r/20200310092545.251365-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/cm: Make the destroy_id flow more robust  (Jason Gunthorpe, 1 file changed, -5/+8)
Too much of the destruction is very carefully sensitive to the state and various other things. Move more code to the unconditional path and add several WARN_ONs to check consistency. Link: https://lore.kernel.org/r/20200310092545.251365-5-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/cm: Remove a race freeing timewait_info  (Jason Gunthorpe, 1 file changed, -10/+15)
When creating a cm_id during REQ the id immediately becomes visible to the other MAD handlers, and shortly after the state is moved to IB_CM_REQ_RCVD. This allows cm_rej_handler() to run concurrently and free the work:

         CPU 0                                 CPU1
    cm_req_handler()
      ib_create_cm_id()
      cm_match_req()
        id_priv->state = IB_CM_REQ_RCVD
                                          cm_rej_handler()
                                            cm_acquire_id()
                                            spin_lock(&id_priv->lock)
                                            switch (id_priv->state)
                                            case IB_CM_REQ_RCVD:
                                              cm_reset_to_idle()
                                              kfree(id_priv->timewait_info);
                                              goto destroy
    destroy:
      kfree(id_priv->timewait_info);
      id_priv->timewait_info = NULL

Causing a double free or worse.

Do not free the timewait_info without also holding the id_priv->lock. Simplify this entire flow by making the free unconditional during cm_destroy_id() and removing the confusing special case error unwind during creation of the timewait_info.

This also fixes a leak of the timewait if cm_destroy_id() is called in IB_CM_ESTABLISHED with an XRC TGT QP. The state machine will be left in ESTABLISHED while it needed to transition through IB_CM_TIMEWAIT to release the timewait pointer.

Also fix a leak of the timewait_info if the caller mis-uses the API and does ib_send_cm_reqs().
Fixes: a977049dacde ("[PATCH] IB: Add the kernel CM implementation")
Link: https://lore.kernel.org/r/20200310092545.251365-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>