- 01 March 2016, 1 commit
-
-
Submitted by Erez Shitrit
In order to support LSO and CSUM in the TX flow, the driver does the following:
* Adds an LSO bit to enum mlx5_ib_qp_flags, indicating a QP that supports LSO offloads.
* Enables the special offload when the QP is created, and handles the special work request opcode (IB_WR_LSO) when it arrives.
* Calculates the size of the WQE according to the new WQE format that supports these offloads.
* Handles the new WQE format when it arrives: sets the relevant fields and copies the needed data.
Signed-off-by: Erez Shitrit <erezsh@mellanox.com> Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
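For context, a minimal sketch of how a ULP might post an LSO send on a UD QP using the new opcode; it assumes a QP created with the verbs-level LSO create flag (IB_QP_CREATE_IPOIB_UD_LSO) and the standard ib_ud_wr fields, and is an illustration rather than the driver's own code:

    #include <rdma/ib_verbs.h>

    /* Sketch: post one LSO work request on a UD QP created with
     * IB_QP_CREATE_IPOIB_UD_LSO.  The HW prepends 'header' to every
     * MSS-sized segment it carves out of the payload in sg_list. */
    static int post_lso_send(struct ib_qp *qp, struct ib_ah *ah,
                             u32 remote_qpn, u32 remote_qkey,
                             void *header, int hlen, int mss,
                             struct ib_sge *sg_list, int num_sge)
    {
        struct ib_ud_wr wr = {
            .wr = {
                .opcode     = IB_WR_LSO,
                .sg_list    = sg_list,
                .num_sge    = num_sge,
                .send_flags = IB_SEND_SIGNALED,
            },
            .ah          = ah,
            .remote_qpn  = remote_qpn,
            .remote_qkey = remote_qkey,
            .header      = header,   /* copied into the WQE by the driver */
            .hlen        = hlen,
            .mss         = mss,
        };
        struct ib_send_wr *bad_wr;

        return ib_post_send(qp, &wr.wr, &bad_wr);
    }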
-
- 13 February 2016, 1 commit
-
-
Submitted by Leon Romanovsky
Fix the RC QP send queue overhead computation to take into account two additional segments in the WQE which are needed for registration operations. The ATOMIC and UMR segments can't coexist, so take the maximum of the two. Commit 9e65dc37 ("IB/mlx5: Fix RC transport send queue overhead computation") was intended to update the RC transport, as its commit message states, but added the code to the UC transport instead. Fixes: 9e65dc37 ("IB/mlx5: Fix RC transport send queue overhead computation") Signed-off-by: Kamal Heib <kamalh@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
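As a hedged illustration of the computation described above — the segment struct names come from the mlx5 headers, but treat this as a sketch rather than the driver's exact arithmetic:

    #include <linux/kernel.h>      /* max() */
    #include <linux/mlx5/qp.h>     /* mlx5_wqe_*_seg layouts */

    /* Sketch: worst-case per-WQE overhead on an RC send queue.  An atomic
     * segment and a UMR segment never appear in the same WQE, so only the
     * larger of the two combinations contributes. */
    static size_t rc_sq_overhead(void)
    {
        return sizeof(struct mlx5_wqe_ctrl_seg) +
               max(sizeof(struct mlx5_wqe_atomic_seg) +
                   sizeof(struct mlx5_wqe_raddr_seg),
                   sizeof(struct mlx5_wqe_umr_ctrl_seg) +
                   sizeof(struct mlx5_mkey_seg));
    }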
-
- 03 February 2016, 2 commits
-
-
Submitted by Maor Gottlieb
MLX5_GET64 was used on end_padding_mode, which is a 2-bit field, so the calculated offset was incorrect. Use MLX5_GET instead of MLX5_GET64 to fix that. Fixes: 0fb2ed66 ('IB/mlx5: Add create and destroy functionality for Raw Packet QP') Signed-off-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
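For illustration, a hedged sketch of why the field width matters; MLX5_GET/MLX5_GET64 are the real accessor macros from linux/mlx5/device.h, while the wrapper function and the exact field path are assumptions:

    #include <linux/mlx5/device.h>     /* MLX5_GET / MLX5_GET64 */
    #include <linux/mlx5/mlx5_ifc.h>   /* auto-generated field layouts */

    /* Sketch: end_padding_mode is a 2-bit field, so it must be read with
     * MLX5_GET, which handles fields up to 32 bits.  MLX5_GET64 assumes a
     * 64-bit aligned 64-bit field and computes the wrong offset here. */
    static u8 read_end_padding_mode(void *wqc)
    {
        return MLX5_GET(wq, wqc, end_padding_mode);
    }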
-
Submitted by Majd Dibbiny
When a Raw Ethernet QP is created, a NULL PD pointer could be dereferenced. Fix that by only using the PD after validating it. smatch also reported this error: drivers/infiniband/hw/mlx5/qp.c:1629 mlx5_ib_create_qp() error: we previously assumed 'pd' could be null (see line 1616) Fixes: 0fb2ed66 ('IB/mlx5: Add create and destroy functionality for Raw Packet QP') Signed-off-by: Majd Dibbiny <majd@mellanox.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 22 January 2016, 7 commits
-
-
Submitted by majd@mellanox.com
Add Raw Packet QP modify functionality, enabling user-space consumers to use it. Since a Raw Packet QP is built of SQ and RQ sub-objects, Raw Packet QP state changes are implemented by changing the state of the sub-objects. Signed-off-by: Majd Dibbiny <majd@mellanox.com> Reviewed-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by majd@mellanox.com
When modifying a QP, the desired operation was determined in mlx5_core using a transition table that takes the current state and the final state and returns the desired operation. Since this logic will also be used for the Raw Packet QP, move the operation table to mlx5_ib. Signed-off-by: Majd Dibbiny <majd@mellanox.com> Reviewed-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
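A hedged sketch of the shape of such a table once it lives in mlx5_ib (state and command names follow the mlx5 headers; the real table covers more transitions, including moves to the error state):

    #include <linux/mlx5/qp.h>         /* MLX5_QP_STATE_* */
    #include <linux/mlx5/device.h>     /* MLX5_CMD_OP_*   */

    /* Sketch: [current state][next state] -> firmware command, 0 = invalid. */
    static const u16 optab[MLX5_QP_NUM_STATE][MLX5_QP_NUM_STATE] = {
        [MLX5_QP_STATE_RST] = {
            [MLX5_QP_STATE_INIT] = MLX5_CMD_OP_RST2INIT_QP,
        },
        [MLX5_QP_STATE_INIT] = {
            [MLX5_QP_STATE_INIT] = MLX5_CMD_OP_INIT2INIT_QP,
            [MLX5_QP_STATE_RTR]  = MLX5_CMD_OP_INIT2RTR_QP,
        },
        [MLX5_QP_STATE_RTR] = {
            [MLX5_QP_STATE_RTS]  = MLX5_CMD_OP_RTR2RTS_QP,
        },
        [MLX5_QP_STATE_RTS] = {
            [MLX5_QP_STATE_RTS]  = MLX5_CMD_OP_RTS2RTS_QP,
        },
    };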
-
Submitted by majd@mellanox.com
When the user changes the Address Vector (AV) in a modify QP operation, an SL is provided. This SL should be translated to an Ethernet priority by taking its 3 LSBs, and the QP's TIS should be modified according to this Ethernet priority. Signed-off-by: Majd Dibbiny <majd@mellanox.com> Reviewed-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
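A minimal sketch of the translation and the TIS update; the MLX5_SET field names follow the mlx5_ifc conventions, and the exact bit placement of the priority inside the TIS context is an assumption here:

    #include <linux/mlx5/device.h>
    #include <linux/mlx5/mlx5_ifc.h>

    /* Sketch: the 3 LSBs of the SL give the Ethernet priority that is
     * programmed into the Raw Packet QP's TIS via MODIFY_TIS. */
    static void set_tis_prio_from_sl(void *modify_tis_in, u8 sl)
    {
        void *tisc = MLX5_ADDR_OF(modify_tis_in, modify_tis_in, ctx);

        MLX5_SET(modify_tis_in, modify_tis_in, bitmask.prio, 1);
        MLX5_SET(tisc, tisc, prio, sl & 0x7);  /* placement may differ */
    }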
-
Submitted by majd@mellanox.com
Since a Raw Packet QP is composed of an RQ and an SQ, the IB QP's state is derived from the sub-objects. Therefore we need to query each of the sub-objects and decide on the IB QP's state. Signed-off-by: Majd Dibbiny <majd@mellanox.com> Reviewed-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by majd@mellanox.com
This patch adds support for the Raw Packet QP to the mlx5 device. A Raw Packet QP, unlike other QP types, has no matching mlx5_core_qp object; rather, it is built of RQ/SQ/TIR/TIS/TD mlx5_core objects. Since the SQ and RQ work-queue (WQ) buffers are not contiguous like in other QPs, separate buffers are allocated in user space and the address of each of them is passed to the kernel separately. Signed-off-by: Majd Dibbiny <majd@mellanox.com> Reviewed-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by majd@mellanox.com
Extract QP-type-specific IB fields into an mlx5_ib_qp_trans structure. The mlx5_core QP object resides in mlx5_ib_qp_base, which all QP types inherit from. When we need to find the mlx5_ib_qp from the mlx5_core QP (event handling and the like), we use a pointer that resides in mlx5_ib_qp_base. In addition, delete redundant fields that weren't used anywhere in the code:
- doorbell_qpn
- sq_max_wqes_per_wr
- sq_spare_wqes
Signed-off-by: Majd Dibbiny <majd@mellanox.com> Reviewed-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
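A hedged sketch of the resulting layout (simplified; field names approximate the driver's, and mlx5_ib_qp itself is only forward-declared here):

    #include <linux/mlx5/qp.h>     /* struct mlx5_core_qp */

    struct mlx5_ib_qp;             /* full definition lives in mlx5_ib.h */

    /* Sketch: every QP variant embeds mlx5_ib_qp_base; the back-pointer
     * lets event handlers that only hold the mlx5_core_qp recover the
     * containing mlx5_ib_qp. */
    struct mlx5_ib_qp_base {
        struct mlx5_ib_qp   *container_mibqp;   /* back-pointer */
        struct mlx5_core_qp  mqp;
    };

    struct mlx5_ib_qp_trans {
        struct mlx5_ib_qp_base base;
        /* fields used only by "ordinary" (non Raw Packet) QP types ... */
    };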
-
Submitted by Haggai Abramovsky
Enforce working with CQE version 1 when the user supports CQE version 1 and asked to work this way. If the user still works with CQE version 0, then use the default CQE version to tell the firmware that the user still works in the older mode. After this patch, the kernel still reports CQE version 0. Signed-off-by: Haggai Abramovsky <hagaya@mellanox.com> Reviewed-by: Matan Barak <matanb@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 24 December 2015, 2 commits
-
-
Submitted by Leon Romanovsky
Add support for cross-channel functionality to the mlx5 driver. This includes the ability to ignore overrun for a CQ intended for cross-channel use, exporting the device capability, and configuring the QP as a sync master/slave queue. A cross-channel enabled QP supports a combination of three possible properties:
* WQE processing on the receive queue of this QP
* WQE processing on the send queue of this QP
* WQEs are supported on the send queue
Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Achiad Shochat
Set the address handle and QP address path fields according to the link layer type (IB/Eth). Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 29 October 2015, 2 commits
-
-
Submitted by Sagi Grimberg
No ULP uses it anymore, so go ahead and remove it. Keep only the local-invalidate part of the handlers. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Acked-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Sagi Grimberg
Support the new memory registration API by allocating a private page-list array in mlx5_ib_mr and populating it when mlx5_ib_map_mr_sg is invoked. Also, support IB_WR_REG_MR by building exactly the same WQE as for IB_WR_FAST_REG_MR, just taking the needed information from different places:
- page_size, iova, length, access flags (ib_mr)
- page array (mlx5_ib_mr)
- key (ib_reg_wr)
The IB_WR_FAST_REG_MR handlers will be removed later, once all the ULPs have been converted. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Acked-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Doug Ledford <dledford@redhat.com>
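For context, a hedged sketch of how a ULP drives the new registration API end to end; the ib_map_mr_sg() signature matches the kernels of that period, and error handling is trimmed:

    #include <rdma/ib_verbs.h>
    #include <linux/scatterlist.h>

    /* Sketch: map an SG list onto the MR (mlx5_ib_map_mr_sg fills the
     * driver's private page list), then post IB_WR_REG_MR; the provider
     * builds the registration WQE from ib_mr and ib_reg_wr fields. */
    static int reg_mr_example(struct ib_qp *qp, struct ib_mr *mr,
                              struct scatterlist *sg, int sg_nents)
    {
        struct ib_reg_wr wr = {};
        struct ib_send_wr *bad_wr;
        int n;

        n = ib_map_mr_sg(mr, sg, sg_nents, PAGE_SIZE);
        if (n < sg_nents)
            return n < 0 ? n : -EINVAL;

        ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));

        wr.wr.opcode     = IB_WR_REG_MR;
        wr.wr.send_flags = IB_SEND_SIGNALED;
        wr.mr            = mr;
        wr.key           = mr->rkey;
        wr.access        = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ;

        return ib_post_send(qp, &wr.wr, &bad_wr);
    }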
-
- 08 October 2015, 2 commits
-
-
Submitted by Christoph Hellwig
The field is only initialized in mlx, but never used. If we want to add proper XRC support, it should be done with a new struct ib_xrc_wr. This shrinks the various WR structures by another 4 bytes. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Tested-by: Haggai Eran <haggaie@mellanox.com>
-
Submitted by Christoph Hellwig
This patch splits up struct ib_send_wr so that all non-trivial verbs use their own structure which embeds struct ib_send_wr. This dramatically shrinks the size of a WR for most common operations:
sizeof(struct ib_send_wr) (old): 96
sizeof(struct ib_send_wr): 48
sizeof(struct ib_rdma_wr): 64
sizeof(struct ib_atomic_wr): 96
sizeof(struct ib_ud_wr): 88
sizeof(struct ib_fast_reg_wr): 88
sizeof(struct ib_bind_mw_wr): 96
sizeof(struct ib_sig_handover_wr): 80
And with Sagi's pending MR rework the fast registration WR will also be down to a reasonable size:
sizeof(struct ib_fastreg_wr): 64
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com> [srp, srpt] Reviewed-by: Chuck Lever <chuck.lever@oracle.com> [sunrpc] Tested-by: Haggai Eran <haggaie@mellanox.com> Tested-by: Sagi Grimberg <sagig@mellanox.com> Tested-by: Steve Wise <swise@opengridcomputing.com>
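A hedged sketch of the embedding pattern: ULPs fill the specific WR type, and providers convert back from the generic ib_send_wr with container_of() (mainline provides equivalent helpers such as rdma_wr()):

    #include <rdma/ib_verbs.h>

    /* Provider side: recover the wrapper from the embedded ib_send_wr. */
    static inline struct ib_rdma_wr *to_rdma_wr(struct ib_send_wr *wr)
    {
        return container_of(wr, struct ib_rdma_wr, wr);
    }

    /* ULP side: an RDMA WRITE now fills the dedicated ib_rdma_wr wrapper. */
    static int post_write(struct ib_qp *qp, struct ib_sge *sge,
                          u64 remote_addr, u32 rkey)
    {
        struct ib_rdma_wr wr = {
            .wr = {
                .opcode     = IB_WR_RDMA_WRITE,
                .sg_list    = sge,
                .num_sge    = 1,
                .send_flags = IB_SEND_SIGNALED,
            },
            .remote_addr = remote_addr,
            .rkey        = rkey,
        };
        struct ib_send_wr *bad_wr;

        return ib_post_send(qp, &wr.wr, &bad_wr);
    }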
-
- 25 September 2015, 1 commit
-
-
Submitted by Sagi Grimberg
Since the mlx5 driver could not rely on registration using the reserved lkey (global_dma_lkey), it used to allocate a private physical-address lkey for each allocated PD. Commit 96249d70 ("IB/core: Guarantee that a local_dma_lkey is available") now does this in the core layer, so we can go ahead and use that. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
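A small hedged example of what consuming the core-provided lkey looks like (the helper function is hypothetical):

    #include <rdma/ib_verbs.h>

    /* Sketch: with a guaranteed pd->local_dma_lkey, a consumer can reference
     * DMA-mapped memory directly instead of relying on a per-PD
     * physical-address MR like the one mlx5 used to allocate. */
    static void fill_sge(struct ib_sge *sge, struct ib_pd *pd,
                         u64 dma_addr, u32 len)
    {
        sge->addr   = dma_addr;
        sge->length = len;
        sge->lkey   = pd->local_dma_lkey;
    }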
-
- 04 September 2015, 1 commit
-
-
Submitted by Sagi Grimberg
Since the patch series "Demux IB CM requests in the rdma_cm module", the P_Key index is taken from the work completion rather than from the message itself. The HCA provides us with the message P_Key; in order to provide the P_Key index, we need to look it up. Given that this is relevant only for GSI messages (session establishment), which are less performance critical, micro-optimize against the GSI (is_qp1) branch. Fixes: 4c21b5bc ("IB/cma: Add net_dev and private data checks to RDMA CM") Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
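A hedged sketch of the lookup described above; it uses the standard ib_find_cached_pkey() helper, while the wrapper and its fallback are illustrative rather than the driver's exact code:

    #include <rdma/ib_verbs.h>
    #include <rdma/ib_cache.h>

    /* Sketch: translate the message P_Key reported by the HCA into a P_Key
     * table index for the work completion.  Only QP1 (GSI) traffic needs
     * this, so callers keep the lookup off the fast path. */
    static void set_wc_pkey_index(struct ib_device *ibdev, u8 port,
                                  u16 msg_pkey, struct ib_wc *wc)
    {
        if (ib_find_cached_pkey(ibdev, port, msg_pkey, &wc->pkey_index))
            wc->pkey_index = 0;    /* not found: fall back to index 0 */
    }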
-
- 05 June 2015, 1 commit
-
-
Submitted by Haggai Abramovsky
Ethernet functionality is only available when working in ISSI > 0 mode. Previously, the IB driver wasn't ready to work in that mode, and hence Kconfig disallowed building both the IB driver and the Ethernet functionality in the core driver. Now that all the preparatory steps are in place, we can remove this limitation. The last steps in the IB driver needed for this setup to work are: create a dummy SRQ for the driver's use (until now an XRC_SRQ could serve as both SRQ and XRC_SRQ; after moving to ISSI > 0, XRC SRQs are separated from basic SRQs), and adapt the create QP function to be compatible with ISSI > 0. Signed-off-by: Haggai Abramovsky <hagaya@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 31 May 2015, 2 commits
-
-
Submitted by Saeed Mahameed
- Query all supported types of dev caps on driver load.
- Store the cap data outbox per cap type in driver private data.
- Introduce new macros to access/dump the stored caps (using the auto-generated data types).
- Obsolete the SW representation of dev caps (no need for a SW copy of each cap).
- Modify the IB driver to use the new macros for checking caps.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
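A hedged example of the macro-based capability checks the IB driver moves to; MLX5_CAP_GEN and the field names come from the mlx5 headers introduced by this series, while the wrapper functions are illustrative:

    #include <linux/mlx5/driver.h>
    #include <linux/mlx5/device.h>

    /* Sketch: capabilities are read from the stored firmware outbox through
     * auto-generated field offsets instead of a mirrored SW structure. */
    static bool dev_supports_xrc(struct mlx5_core_dev *mdev)
    {
        return MLX5_CAP_GEN(mdev, xrc);
    }

    static int dev_max_qps(struct mlx5_core_dev *mdev)
    {
        return 1 << MLX5_CAP_GEN(mdev, log_max_qp);
    }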
-
Submitted by Amir Vadai
As David Daney pointed out for the mlx4_core driver [1], mlx5_core is also misusing the DMA API. This patch removes the code that vmap()s memory allocated by dma_alloc_coherent(). After this patch, users of this driver might fail to allocate resources on memory-fragmented systems. This will be fixed later on. [1] - https://patchwork.ozlabs.org/patch/458531/ CC: David Daney <david.daney@cavium.com> Signed-off-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 May 2015, 1 commit
-
-
Submitted by Joe Perches
These KERN_<LEVEL> uses are unnecessary with pr_<level> and cause bad logging output, so remove them. Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
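For illustration, the kind of call-site change involved (the message and the wrapper function are hypothetical):

    #include <linux/printk.h>
    #include <linux/types.h>

    static void report_bad_opcode(u32 opcode)
    {
        /* Before: passing KERN_WARNING into pr_warn() embeds a second
         * log-level prefix in the text and garbles the output:
         *
         *     pr_warn(KERN_WARNING "mlx5_ib: invalid opcode 0x%x\n", opcode);
         */

        /* After: pr_warn() supplies the level on its own. */
        pr_warn("mlx5_ib: invalid opcode 0x%x\n", opcode);
    }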
-
- 03 April 2015, 4 commits
-
-
Submitted by Saeed Mahameed
Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Eli Cohen <eli@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Saeed Mahameed
Do it in one place instead of everywhere the function is invoked. Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Eli Cohen <eli@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eli Cohen
Put a blank line before the return and the next statement. Signed-off-by: Eli Cohen <eli@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Haggai Abramovsky
Pass 0 in the sqd_event parameter. Signed-off-by: Haggai Abramovsky <hagaya@mellanox.com> Signed-off-by: Eli Cohen <eli@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 December 2014, 3 commits
-
-
Submitted by Haggai Eran
* Refactor MR registration and cleanup, and fix reg_pages accounting.
* Create a work queue to handle page-fault events in a kthread context.
* Register a fault handler to get events from the core for each QP. The registered fault handler is empty in this patch; only a later patch implements it.
Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Shachar Raindel <raindel@mellanox.com> Signed-off-by: Haggai Eran <haggaie@mellanox.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
-
Submitted by Haggai Eran
Add a helper function, mlx5_ib_read_user_wqe, to read information from user-space-owned work queues. The function will be used in a later patch by the page-fault handling code in mlx5_ib. Signed-off-by: Haggai Eran <haggaie@mellanox.com> [ Add stub for ib_umem_copy_from() for CONFIG_INFINIBAND_USER_MEM=n - Roland ] Signed-off-by: Roland Dreier <roland@purestorage.com>
-
Submitted by Haggai Eran
The current UMR interface doesn't allow partial updates to a memory region's page tables. This patch changes the interface to allow that. It also changes the way the UMR operation validates the memory region's state: when IB_SEND_UMR_FAIL_IF_FREE is set, the UMR operation will fail if the MKEY is in the free state; when it is not set, the operation will check that the MKEY isn't in the free state. Signed-off-by: Haggai Eran <haggaie@mellanox.com> Signed-off-by: Shachar Raindel <raindel@mellanox.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
-
- 09 December 2014, 1 commit
-
-
Submitted by Eli Cohen
1. Add the required __acquire/__release statements to balance spinlock usage.
2. Change the index parameter of begin_wqe() to unsigned to match the supplied argument type.
Signed-off-by: Eli Cohen <eli@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
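A hedged illustration of the sparse-balancing pattern behind item 1; the functions here are hypothetical stand-ins for the conditional locking in the mlx5 post-send path:

    #include <linux/spinlock.h>
    #include <linux/compiler.h>

    /* Sketch: when a function conditionally takes or keeps a lock for its
     * caller, __acquire()/__release() tell sparse the imbalance is
     * intentional, silencing "context imbalance" warnings. */
    static void cond_lock(spinlock_t *lock, bool need_lock) __acquires(lock)
    {
        if (need_lock)
            spin_lock(lock);
        else
            __acquire(lock);    /* annotation only, no real locking */
    }

    static void cond_unlock(spinlock_t *lock, bool need_lock) __releases(lock)
    {
        if (need_lock)
            spin_unlock(lock);
        else
            __release(lock);    /* balance the sparse lock context */
    }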
-
- 22 November 2014, 1 commit
-
-
Submitted by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Acked-by: Eli Cohen <eli@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 October 2014, 4 commits
-
-
Submitted by Sagi Grimberg
Expose more signature setting parameters. We modify the signature API to allow usage of some new execution parameters relevant to the data-integrity feature. This patch modifies the ib_sig_domain structure by:
- Deprecating the DIF type in the signature API (the operation is determined by the parameters alone, with no DIF-type awareness)
- Adding an APPTAG check bitmask (for the input domain)
- Adding a REFTAG remap (increment) flag for each domain
- Adding APPTAG/REFTAG escape options for each domain
The mlx5 driver is modified to follow the new parameters in the HW signature setup. At the moment the callers (iser/isert) hard-code the new parameters (by DIF type); in the future, callers will retrieve them from the SCSI command structure. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
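A hedged sketch of configuring the reworked ib_sig_domain for a Type-1-style protection setup; the field names follow the post-rework ib_verbs.h, and the values are illustrative only:

    #include <rdma/ib_verbs.h>

    /* Sketch: instead of naming a DIF type, the caller now spells out the
     * per-domain behaviour: block-guard type, app-tag check mask, ref-tag
     * remapping and the escape options. */
    static void setup_wire_domain(struct ib_sig_domain *wire, u32 lba)
    {
        wire->sig_type                  = IB_SIG_TYPE_T10_DIF;
        wire->sig.dif.bg_type           = IB_T10DIF_CRC;
        wire->sig.dif.pi_interval       = 512;     /* bytes per PI field  */
        wire->sig.dif.ref_tag           = lba;     /* seed reference tag  */
        wire->sig.dif.ref_remap         = true;    /* increment per block */
        wire->sig.dif.app_escape        = true;
        wire->sig.dif.ref_escape        = true;
        wire->sig.dif.apptag_check_mask = 0xffff;
    }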
-
Submitted by Sagi Grimberg
Rather than using the basic BSF layout, which utilizes pre-configured signature settings (sufficient for the current DIF implementation), we use the extended BSF layout to expose advanced signature settings. These settings will also be exposed to the user later. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
-
Submitted by Sagi Grimberg
In case the input and output space parameters match, we can use a copy mask from the input to the output space. Use enums for those. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
-
Submitted by Eli Cohen
Some of the fields were set twice. Reorganize the code to avoid that. Signed-off-by: Eli Cohen <eli@mellanox.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
-
- 04 October 2014, 1 commit
-
-
Submitted by Eli Cohen
Rearrange struct mlx5_caps so it has a "gen" field representing the current capabilities configured for the device. Max capabilities can also be queried from the device. Also update the capabilities struct to contain more fields, as per the latest revision of the firmware specification. Signed-off-by: Eli Cohen <eli@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 August 2014, 1 commit
-
-
Submitted by Fabian Frederick
Acked-by: Eli Cohen <eli@mellanox.com> Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: Doug Ledford <dledford@redhat.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
-
- 31 July 2014, 2 commits
-
-
Submitted by Jack Morgenstein
There were many places where parameters which should be u8/u16 were of integer type. Additionally, in two places, a check for a non-NULL pointer was added before dereferencing the pointer (this is actually a bug fix). Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Eli Cohen <eli@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jack Morgenstein
In preparation for a new mlx5 device which is VPI (i.e., ports can be either IB or ETH), move the PCI device functionality from mlx5_ib to mlx5_core. This involves the following changes:
1. Move the mlx5_core_dev struct out of mlx5_ib_dev. mlx5_core_dev is now an independent structure maintained by mlx5_core, and mlx5_ib_dev holds a pointer to it. This requires changing many places where the core_dev struct was accessed via mlx5_ib_dev (now this needs to be a pointer dereference).
2. All PCI initialization is now done in mlx5_core. Thus, it is now mlx5_core which calls pci_register_device (and not mlx5_ib, as previously).
3. mlx5_ib now registers itself with mlx5_core as an "interface" driver. This is very similar to the mechanism employed for the mlx4 (ConnectX) driver. Once the HCA is initialized (by mlx5_core), it invokes the interface drivers to do their initializations.
4. There is a new event handler which the core registers: mlx5_core_event(). This event handler invokes the event handlers registered by the interfaces.
Based on a patch by Eli Cohen <eli@mellanox.com> Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Eli Cohen <eli@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
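A hedged sketch of the interface-registration mechanism described in items 3 and 4; the mlx5_interface layout follows linux/mlx5/driver.h, and the callback bodies are elided:

    #include <linux/module.h>
    #include <linux/mlx5/driver.h>

    static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
    {
        /* allocate and register the ib_device; the returned pointer becomes
         * the 'context' passed back to .remove and .event */
        return NULL;    /* elided */
    }

    static void mlx5_ib_remove(struct mlx5_core_dev *mdev, void *context)
    {
        /* unregister and free the ib_device held in 'context' */
    }

    static void mlx5_ib_event(struct mlx5_core_dev *mdev, void *context,
                              enum mlx5_dev_event event, unsigned long param)
    {
        /* forward core events (port state, fatal errors, ...) to IB consumers */
    }

    static struct mlx5_interface mlx5_ib_interface = {
        .add    = mlx5_ib_add,
        .remove = mlx5_ib_remove,
        .event  = mlx5_ib_event,
    };

    /* mlx5_core invokes .add() for each HCA once the core has initialized it. */
    static int __init mlx5_ib_init(void)
    {
        return mlx5_register_interface(&mlx5_ib_interface);
    }
    module_init(mlx5_ib_init);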
-