- 22 January 2016, 3 commits
-
-
Submitted by Haggai Abramovsky
Enforce working with CQE version 1 when the user supports CQE version 1 and asked to work this way. If the user still works with CQE version 0, then use the default CQE version to tell the firmware that the user still works in the older mode. After this patch, the kernel still reports CQE version 0. Signed-off-by: Haggai Abramovsky <hagaya@mellanox.com> Reviewed-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Haggai Abramovsky
The wrong buffer size was passed to ib_is_udata_cleared. Signed-off-by: Haggai Abramovsky <hagaya@mellanox.com> Reviewed-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
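A minimal sketch of the idiom in question (the helper name and error code below are illustrative assumptions, not the driver's actual code): the length passed to ib_is_udata_cleared must describe only the trailing, kernel-unknown part of the user buffer, not the whole input.

/*
 * Sketch only: "known_len" is the size of the portion of the user
 * command this kernel understands; reject the request if the trailing,
 * unknown part of the user buffer is not zeroed.
 */
static int check_ucmd_tail(struct ib_udata *udata, size_t known_len)
{
	if (udata->inlen > known_len &&
	    !ib_is_udata_cleared(udata, known_len,
				 udata->inlen - known_len))
		return -EOPNOTSUPP;
	return 0;
}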
-
Submitted by Kaike Wan
The RDMA netlink local service registers a handler to handle RESOLVE responses and another handler to handle SET_TIMEOUT requests. The first thing these handlers do is call netlink_capable() to check the access rights of the received skb and make sure that the sender has root access. Under normal conditions, such responses and requests are directly forwarded to the handlers without going through the netlink_dump pathway (see ibnl_rcv_msg() in drivers/infiniband/core/netlink.c). However, a user application could send a RESOLVE request (not response) to the local service, which will fall into the netlink_dump pathway, where a new skb is created without initializing the control block. This new skb will eventually be forwarded to the local service RESOLVE response handler. Unfortunately, netlink_capable() will cause a general protection fault if the skb's control block is not initialized. This patch addresses the problem by checking the skb first. Signed-off-by: Kaike Wan <kaike.wan@intel.com> Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
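A sketch of what "checking the skb first" amounts to; the handler name below is an assumption, and only the NETLINK_CB guard before netlink_capable() is the point:

/*
 * Bail out before calling netlink_capable() if the skb's netlink
 * control block was never initialized, e.g. because the message
 * arrived via the netlink_dump path.
 */
static int ib_nl_handle_resolve_resp(struct sk_buff *skb,
				     struct netlink_callback *cb)
{
	if (!(NETLINK_CB(skb).sk) ||
	    !netlink_capable(skb, CAP_NET_ADMIN))
		return -EPERM;

	/* ... normal RESOLVE response processing ... */
	return skb->len;
}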
-
- 20 January 2016, 37 commits
-
-
Submitted by Sagi Grimberg
No users remain after the conversion to the new CQ API. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Mike Marciniszyn
Based on profiling, UD performance drops with multiple processes on a single client due to excess context switches when the progress workqueue is scheduled. This is solved by modifying the heuristic to select direct progress instead of scheduled progress via the workqueue when UD-like situations are detected. Reviewed-by: Vinit Agnihotri <vinit.abhay.agnihotri@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Matan Barak
Advertise RoCE v2 support in the port_immutable attributes according to the hardware's capabilities. This enables the verbs stack to use RoCE v2 mode. Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Moni Shoua
The mlx4 driver uses a special QP to implement the GSI QP. This kind of QP allows building the InfiniBand headers in software. When the mlx4 hardware builds the packet, it calculates the ICRC and puts it at the end of the payload. However, this ICRC calculation depends on the QP configuration, which is determined when the QP is modified (roce_mode during INIT->RTR). When receiving a packet, the ICRC verification doesn't depend on this configuration. Therefore, two GSI QPs for send (one for each RoCE version) and one GSI QP for receive are required. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Moni Shoua
RoCE v2 packets are sent over IP/UDP. The mlx4 driver uses a type of RAW QP to send packets for QP1 and therefore needs to build the network headers below BTH in software. This patch adds the option to build QP1 packets with IP and UDP headers if RoCE v2 is requested. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Moni Shoua
If the hardware supports RoCE v2, we configure the hardware UDP port according to the RoCE v2 annex when the mlx4_ib device is added. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Moni Shoua
In order to support modify_qp for RoCE v2, we need to set the gid_type in the QP context. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Moni Shoua
This will be used in the hardware device driver when building QP or AH contexts. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Moni Shoua
In RoCE v2 we need to choose a source UDP port; we do so by using entropy over the source and destination QPNs. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
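As an illustration (the function name and constants below are assumptions, not the driver's code), the entropy idea can be sketched as hashing the two QPNs into the dynamic UDP port range, so different QP pairs spread across ports for ECMP-style load balancing:

#include <linux/jhash.h>

/* Map a QP pair into the dynamic/private port range 49152..65535. */
static u16 roce_v2_get_udp_sport(u32 src_qpn, u32 dst_qpn)
{
	u32 hash = jhash_2words(src_qpn, dst_qpn, 0);

	return 0xC000 | (hash & 0x3FFF);
}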
-
Submitted by Moni Shoua
In order to support RoCE v2, the hardware needs to be configured to classify certain UDP packets as RoCE v2 packets and pass them through its RoCE pipeline. This patch enables configuring this UDP port. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Moni Shoua
To tell the hardware about a GID with type RoCE v2, software needs a new modifier to the SET_PORT command: MLX4_SET_PORT_ROCE_ADDR. This can replace the old method, MLX4_SET_PORT_GID_TABLE, for RoCE v1 GIDs. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Moni Shoua
If the hardware supports RoCE v2 (mixed with RoCE v1) mode, we enable it. This is necessary in order to support RoCE v2. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Moni Shoua
The IB core driver adds a GID type property to struct ib_gid_attr. The mlx4 driver should take that into consideration when modifying or querying the hardware GID table. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Moni Shoua
Query RoCE support from the firmware using the appropriate firmware commands. Downstream patches will read these capabilities and act accordingly. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Christoph Hellwig
We now always have a per-PD local_dma_lkey available. Make use of that fact in svc_rdma and stop registering our own MR. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Reviewed-by: Steve Wise <swise@opengridcomputing.com> Acked-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
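A hedged sketch of what "use the per-PD local_dma_lkey" looks like when building an SGE (the helper name is illustrative, not svc_rdma's actual code): the SGE references the PD-wide lkey instead of a privately registered DMA MR.

/* Fill an SGE using the protection domain's built-in DMA lkey. */
static void init_dma_sge(struct ib_sge *sge, u64 dma_addr, u32 len,
			 struct ib_pd *pd)
{
	sge->addr   = dma_addr;
	sge->length = len;
	sge->lkey   = pd->local_dma_lkey;	/* no per-xprt MR needed */
}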
-
Submitted by Chuck Lever
To support the server side of an NFSv4.1 backchannel on RDMA connections, add a transport class that enables backward-direction messages on an existing forward channel connection. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Acked-by: Bruce Fields <bfields@fieldses.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Chuck Lever
Extra resources for handling backchannel requests have to be pre-allocated when a transport instance is created. Set up additional fields in svcxprt_rdma to track these resources. The max_requests fields are elements of the RPC-over-RDMA protocol, so they should be u32. To ensure that unsigned arithmetic is used everywhere, some other fields in the svcxprt_rdma struct are updated as well. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Acked-by: Bruce Fields <bfields@fieldses.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Chuck Lever
Prerequisite for using map_xdr in the backchannel code. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Acked-by: Bruce Fields <bfields@fieldses.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Chuck Lever
Clean up. These functions can otherwise fail, so check for page allocation failures too. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Acked-by: Bruce Fields <bfields@fieldses.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Chuck Lever
svc_rdma_post_recv() allocates pages for receive buffers on demand. It uses GFP_KERNEL, so the allocator tries hard and may sleep. But I'm about to add a call to svc_rdma_post_recv() from a function that may not sleep. Since all svc_rdma_post_recv() call sites can tolerate its failure, allow it to fail if the page allocator returns nothing. Longer term, receive buffers, being a finite per-connection resource, should be pre-allocated and re-used. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Acked-by: Bruce Fields <bfields@fieldses.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
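A rough shape of such a change, sketched under the assumption that an allocation-mode parameter is threaded through (this is not the actual svc_rdma signature): callers choose the GFP mode, and the function fails cleanly instead of sleeping when pages cannot be allocated.

/* Sketch: let the caller pick the allocation mode and tolerate failure. */
int svc_rdma_post_recv(struct svcxprt_rdma *xprt, gfp_t flags)
{
	struct page *page = alloc_page(flags);

	if (!page)
		return -ENOMEM;		/* all call sites tolerate this */

	/* ... map the page and post the receive work request ... */
	return 0;
}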
-
Submitted by Chuck Lever
Clean up. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Acked-by: Bruce Fields <bfields@fieldses.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Chuck Lever
To ensure this allocation cannot fail and will not sleep, pre-allocate the req_map structures per connection. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Acked-by: Bruce Fields <bfields@fieldses.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Chuck Lever
When the maximum payload size of NFS READ and WRITE was increased by commit cc9a903d ("svcrdma: Change maximum server payload back to RPCSVC_MAXPAYLOAD"), the size of struct svc_rdma_op_ctxt increased to over 6KB (on x86_64). That makes allocating one of these from a kmem_cache more likely to fail in situations where system memory is exhausted. Since I'm about to add a caller where this allocation must always work _and_ it cannot sleep, pre-allocate ctxts for each connection. Another motivation for this change is that NFSv4.x servers are required by specification not to drop NFS requests. Pre-allocating memory resources reduces the likelihood of a drop. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Acked-by: Bruce Fields <bfields@fieldses.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
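A pattern sketch of per-connection pre-allocation (the list and lock field names here are assumptions, not the real svcxprt_rdma layout): the hot path takes a ctxt from a spinlock-protected free list, so it never sleeps and never calls the allocator.

/* Hand out a pre-allocated ctxt; NULL only if the pool is exhausted. */
static struct svc_rdma_op_ctxt *svc_rdma_get_context(struct svcxprt_rdma *xprt)
{
	struct svc_rdma_op_ctxt *ctxt = NULL;

	spin_lock(&xprt->sc_ctxt_lock);
	if (!list_empty(&xprt->sc_ctxts)) {
		ctxt = list_first_entry(&xprt->sc_ctxts,
					struct svc_rdma_op_ctxt, free);
		list_del(&ctxt->free);
	}
	spin_unlock(&xprt->sc_ctxt_lock);

	return ctxt;
}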
-
Submitted by Chuck Lever
Be sure the completed ctxt is put in every path. The xprt enqueue can take a while, so put the completed ctxt back in circulation _before_ enqueuing the xprt. Remove/disable debugging. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Acked-by: Bruce Fields <bfields@fieldses.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Chuck Lever
kzalloc is used here, so setting the atomic fields to zero is unnecessary. sc_ord is set again in handle_connect_req. The other fields are re-initialized in svc_rdma_accept(). Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Acked-by: Bruce Fields <bfields@fieldses.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Matan Barak
Previously, IPV6_DEFAULT_HOPLIMIT was used as the hop limit value for RoCE. Fix that by using ip4_dst_hoplimit and ip6_dst_hoplimit to obtain the hop limit from the resolved route. Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
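For illustration (the helper and variable names are assumptions), deriving the hop limit from the route rather than hard-coding the IPv6 default looks roughly like this:

/* Take the hop limit from the resolved dst entry when available. */
static u8 addr_hoplimit(struct dst_entry *dst, int family)
{
	if (!dst)
		return IPV6_DEFAULT_HOPLIMIT;	/* fallback only */

	return family == AF_INET ? ip4_dst_hoplimit(dst)
				 : ip6_dst_hoplimit(dst);
}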
-
Submitted by Matan Barak
rdma_addr_find_dmac_by_grh resolves dmac, vlan_id and if_index, and a downstream patch will also add hop_limit as an output parameter; thus we rename it to rdma_addr_find_l2_eth_by_grh. Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Bart Van Assche
ib_send_cm_drep() calls cm_enter_timewait() while holding a spinlock that can be locked from inside an interrupt handler. Hence do not enable interrupts inside cm_enter_timewait() if called with interrupts disabled. This patch fixes e.g. the following deadlock:

=================================
[ INFO: inconsistent lock state ]
4.4.0-rc7+ #1 Tainted: G E
---------------------------------
inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
swapper/8/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
(&(&cm_id_priv->lock)->rlock){?.+...}, at: [<ffffffffa036eec4>] cm_establish+0x74/0x1b0 [ib_cm]
{HARDIRQ-ON-W} state was registered at:
[<ffffffff810a3c11>] mark_held_locks+0x71/0x90
[<ffffffff810a3e87>] trace_hardirqs_on_caller+0xa7/0x1c0
[<ffffffff810a3fad>] trace_hardirqs_on+0xd/0x10
[<ffffffff8151c40b>] _raw_spin_unlock_irq+0x2b/0x40
[<ffffffffa036ea8e>] cm_enter_timewait+0xae/0x100 [ib_cm]
[<ffffffffa036ff76>] ib_send_cm_drep+0xb6/0x190 [ib_cm]
[<ffffffffa052ed08>] srp_cm_handler+0x128/0x1a0 [ib_srp]
[<ffffffffa0370340>] cm_process_work+0x20/0xf0 [ib_cm]
[<ffffffffa0371335>] cm_dreq_handler+0x135/0x2c0 [ib_cm]
[<ffffffffa03733c5>] cm_work_handler+0x75/0xd0 [ib_cm]
[<ffffffff8107184d>] process_one_work+0x1bd/0x460
[<ffffffff81073148>] worker_thread+0x118/0x420
[<ffffffff81078454>] kthread+0xe4/0x100
[<ffffffff8151cbbf>] ret_from_fork+0x3f/0x70
irq event stamp: 1672286
hardirqs last enabled at (1672283): [<ffffffff81408ec0>] poll_idle+0x10/0x80
hardirqs last disabled at (1672284): [<ffffffff8151d304>] common_interrupt+0x84/0x89
softirqs last enabled at (1672286): [<ffffffff8105b4dc>] _local_bh_enable+0x1c/0x50
softirqs last disabled at (1672285): [<ffffffff8105b697>] irq_enter+0x47/0x70
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&(&cm_id_priv->lock)->rlock);
<Interrupt>
lock(&(&cm_id_priv->lock)->rlock);
*** DEADLOCK ***
no locks held by swapper/8/0.
stack backtrace:
CPU: 8 PID: 0 Comm: swapper/8 Tainted: G E 4.4.0-rc7+ #1
Hardware name: Dell Inc. PowerEdge R430/03XKDV, BIOS 1.0.2 11/17/2014
ffff88045af5e950 ffff88046e503a88 ffffffff81251c1b 0000000000000007
0000000000000006 0000000000000003 ffff88045af5ddc0 ffff88046e503ad8
ffffffff810a32f4 0000000000000000 0000000000000000 0000000000000001
Call Trace:
<IRQ> [<ffffffff81251c1b>] dump_stack+0x4f/0x74
[<ffffffff810a32f4>] print_usage_bug+0x184/0x190
[<ffffffff810a36e2>] mark_lock_irq+0xf2/0x290
[<ffffffff810a3995>] mark_lock+0x115/0x1b0
[<ffffffff810a3b8c>] mark_irqflags+0x15c/0x170
[<ffffffff810a4fef>] __lock_acquire+0x1ef/0x560
[<ffffffff810a53c2>] lock_acquire+0x62/0x80
[<ffffffff8151bd33>] _raw_spin_lock_irqsave+0x43/0x60
[<ffffffffa036eec4>] cm_establish+0x74/0x1b0 [ib_cm]
[<ffffffffa036f031>] ib_cm_notify+0x31/0x100 [ib_cm]
[<ffffffffa0637f24>] srpt_qp_event+0x54/0xd0 [ib_srpt]
[<ffffffffa0196052>] mlx4_ib_qp_event+0x72/0xc0 [mlx4_ib]
[<ffffffffa00775b9>] mlx4_qp_event+0x69/0xd0 [mlx4_core]
[<ffffffffa006000e>] mlx4_eq_int+0x51e/0xd50 [mlx4_core]
[<ffffffffa006084f>] mlx4_msi_x_interrupt+0xf/0x20 [mlx4_core]
[<ffffffff810b67b0>] handle_irq_event_percpu+0x40/0x110
[<ffffffff810b68bf>] handle_irq_event+0x3f/0x70
[<ffffffff810ba7f9>] handle_edge_irq+0x79/0x120
[<ffffffff81007f3d>] handle_irq+0x5d/0x130
[<ffffffff810071fd>] do_IRQ+0x6d/0x130
[<ffffffff8151d309>] common_interrupt+0x89/0x89
<EOI> [<ffffffff8140895f>] cpuidle_enter_state+0xcf/0x200
[<ffffffff81408aa2>] cpuidle_enter+0x12/0x20
[<ffffffff810990d6>] call_cpuidle+0x36/0x60
[<ffffffff81099163>] cpuidle_idle_call+0x63/0x110
[<ffffffff8109930a>] cpu_idle_loop+0xfa/0x130
[<ffffffff8109934e>] cpu_startup_entry+0xe/0x10
[<ffffffff8103c443>] start_secondary+0x83/0x90

Fixes: commit be4b4993 ("IB/cm: Do not queue work to a device that's going away") Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Acked-by: Erez Shitrit <erezsh@mellanox.com> Cc: Erez Shitrit <erezsh@mellanox.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Bart Van Assche
Avoid that the following kernel crash is triggered when processing an RDMA completion:

BUG: unable to handle kernel paging request at 0000000100000198
IP: [<ffffffff810a4ea2>] __lock_acquire+0xa2/0x560
Call Trace:
[<ffffffff810a53c2>] lock_acquire+0x62/0x80
[<ffffffff8151bd33>] _raw_spin_lock_irqsave+0x43/0x60
[<ffffffffa04fd437>] srpt_rdma_read_done+0x57/0x120 [ib_srpt]
[<ffffffffa0144dd3>] __ib_process_cq+0x43/0xc0 [ib_core]
[<ffffffffa0145115>] ib_cq_poll_work+0x25/0x70 [ib_core]
[<ffffffff8107184d>] process_one_work+0x1bd/0x460
[<ffffffff81073148>] worker_thread+0x118/0x420
[<ffffffff81078454>] kthread+0xe4/0x100
[<ffffffff8151cbbf>] ret_from_fork+0x3f/0x70

Fixes: commit 59fae4de ("IB/srpt: chain RDMA READ/WRITE requests") Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Bart Van Assche
The IRQ_POLL_F_SCHED bit is set as long as polling is ongoing. This means that irq_poll_sched() must proceed if this bit has not yet been set. Fixes: commit ea51190c ("irq_poll: fold irq_poll_sched_prep into irq_poll_sched"). Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
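A sketch of the intended semantics after the fix (the per-CPU list name is an assumption): only the caller that atomically flips IRQ_POLL_F_SCHED from clear to set may queue the poll instance; if the bit was already set, polling is ongoing and nothing more needs to happen.

void irq_poll_sched(struct irq_poll *iop)
{
	unsigned long flags;

	if (test_bit(IRQ_POLL_F_DISABLE, &iop->state))
		return;
	if (test_and_set_bit(IRQ_POLL_F_SCHED, &iop->state))
		return;		/* already scheduled or being polled */

	local_irq_save(flags);
	list_add_tail(&iop->list, this_cpu_ptr(&blk_cpu_iopoll));
	__raise_softirq_irqoff(IRQ_POLL_SOFTIRQ);
	local_irq_restore(flags);
}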
-
Submitted by Matan Barak
Sparse complains about a dereference before a check. Fix this by moving the check before the dereference. Fixes: 20029832 ('IB/core: Validate route when we init ah') Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Matan Barak
When the write_gid function needs to do a sleepable operation, it unlocks table->rwlock and then relocks it, and Sparse complains about the context imbalance. This is safe because write_gid is always called with table->rwlock held; write_gid protects against simultaneous writes to the GID entry by setting the GID_TABLE_ENTRY_INVALID flag while the lock is dropped. Fixes: 9c584f04 ('IB/core: Change per-entry lock in RoCE GID table to one lock') Signed-off-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
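The locking pattern, sketched with assumed flag, field, and callback names (not the exact core code): the entry is flagged invalid before the rwlock is dropped, so the sleepable vendor call can run without racing other writers, and the lock is re-taken afterwards.

/* Mark the entry invalid so concurrent writers skip it. */
table->data_vec[ix].props |= GID_TABLE_ENTRY_INVALID;
write_unlock_irq(&table->rwlock);

/* Sleepable vendor operation runs without the lock held. */
ret = ib_dev->add_gid(ib_dev, port, ix, gid, attr, &context);

write_lock_irq(&table->rwlock);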
-
Submitted by Hal Rosenstock
The port number is not part of the ClassPortInfo attribute but is still needed as a parameter when invoking process_mad. To properly handle this attribute, port_num is added as a parameter to get_counter_table, and get_perf_mad was changed not to store port_num in the attribute itself when querying the ClassPortInfo attribute. This handles an issue pointed out by Matan Barak <matanb@dev.mellanox.co.il>. Fixes: 145d9c54 ('IB/core: Display extended counter set if available') Signed-off-by: Hal Rosenstock <hal@mellanox.com> Acked-by: Matan Barak <matanb@mellanox.com> Acked-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Christoph Lameter <cl@linux.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Bart Van Assche
Detected this by building the IB core with W=1. See also the patch "IB core: Fix ib_sg_to_pages()" (commit 8f5ba10e). Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Cc: Sagi Grimberg <sagig@mellanox.com> Cc: Christoph Hellwig <hch@lst.de> Reviewed-by: Leon Romanovsky <leon.romanovsky@mellanox.com> Acked-by: Sagi Grimberg <sagig@mellanox.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Leon Romanovsky
Fix a static checker warning: drivers/infiniband/hw/mlx5/main.c:149 mlx5_query_port_roce() warn: passing casted pointer '&props->qkey_viol_cntr' to 'mlx5_query_nic_vport_qkey_viol_cntr()' 32 vs 16. Fixes: 3f89a643 ("IB/mlx5: Extend query_device/port to support RoCE") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Christoph Hellwig
Remove the local workqueue used to process MAD completions and use the CQ API instead. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hal Rosenstock <hal@mellanox.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
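A rough sketch of the new-style CQ usage this refers to (variable names are assumptions, not the MAD layer's actual code): the CQ is allocated with a polling context, and the core invokes a done callback per completion, so the driver no longer needs its own workqueue to poll the CQ.

/* Per-completion callback invoked by the IB core's CQ machinery. */
static void ib_mad_send_done(struct ib_cq *cq, struct ib_wc *wc)
{
	/* ... recover the send buffer from wc->wr_cqe and complete it ... */
}

/* At setup time, request workqueue-context polling from the core: */
cq = ib_alloc_cq(device, qp_info, cq_size, 0, IB_POLL_WORKQUEUE);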
-
Submitted by Christoph Hellwig
Stop abusing wr_id and just pass the parameter explicitly. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hal Rosenstock <hal@mellanox.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-