1. 25 September 2015, 1 commit
    • IB/mlx5: Remove support for IB_DEVICE_LOCAL_DMA_LKEY · c6790aa9
      Committed by Sagi Grimberg
      Commit 96249d70 ("IB/core: Guarantee that a local_dma_lkey
      is available") allows ULPs that make use of the local DMA lkey to keep
      working as before by allocating a DMA MR with local permissions, and
      converts these consumers to use the MR associated with the PD
      rather than device->local_dma_lkey.
      
      ConnectIB has some known issues with memory registration
      using the local_dma_lkey (SEND, RDMA and RECV seem to work OK).
      
      Thus, don't expose support for it (remove the device->local_dma_lkey
      setting), and take advantage of the above commit so that no regression
      is introduced to working systems.
      
      The local_dma_lkey support will be restored in CX4 depending on FW
      capability query.
      Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      c6790aa9
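
      A minimal sketch of the consumer-side pattern this commit relies on,
      assuming the post-96249d70 definitions in include/rdma/ib_verbs.h
      (the fill_sge() helper is hypothetical): ULPs take the lkey from the
      PD, which is backed by an allocated DMA MR whenever the device does
      not expose a native local_dma_lkey.

          #include <rdma/ib_verbs.h>

          static void fill_sge(struct ib_sge *sge, struct ib_pd *pd,
                               u64 dma_addr, u32 len)
          {
                  sge->addr   = dma_addr;
                  sge->length = len;
                  /* Old pattern, broken once the device stops exposing the key:
                   *   sge->lkey = pd->device->local_dma_lkey;
                   * New pattern, valid whether the key is native or MR-backed: */
                  sge->lkey   = pd->local_dma_lkey;
          }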
  2. 29 August 2015, 1 commit
  3. 18 August 2015, 2 commits
  4. 07 August 2015, 1 commit
  5. 27 July 2015, 2 commits
    • net/mlx5e: TX latency optimization to save DMA reads · 88a85f99
      Committed by Achiad Shochat
      A regular TX WQE execution involves two or more DMA reads -
      one to fetch the WQE, and another one per WQE gather entry.
      
      These DMA reads obviously increase the TX latency.
      There are two mlx5 mechanisms to bypass these DMA reads:
      1) Inline WQE
      2) Blue Flame (BF)
      
      An inline WQE contains the whole packet, and thus saves the DMA
      read(s) for the regular WQE's gather entry(ies). Inline WQE support
      was already added in the previous commit.
      
      A BF WQE is written directly to the device I/O mapped memory, thus
      saving the DMA read that fetches the WQE.
      
      The BF WQE I/O write must be done at cache line granularity, so it
      uses the CPU write-combining mechanism.
      A BF WQE I/O write also acts as a TX doorbell, notifying the
      device of new TX WQEs.
      A BF WQE is written to the same I/O mapped address as the regular TX
      doorbell, so this address is mapped twice - once by ioremap()
      and once by io_mapping_map_wc().
      
      While both mechanisms reduce the TX latency, they both consume more CPU
      cycles than a regular WQE:
      - A BF WQE must still be written to host memory, in addition to being
        written directly to the device I/O mapped memory.
      - An inline WQE involves copying the SKB data into it.
      
      To handle this tradeoff, we introduce here a heuristic algorithm that
      avoids using these two mechanisms when the TX queue is being
      back-pressured by the device, and limits their usage rate otherwise.
      
      An inline WQE will always be "Blue Flamed" (written directly to the
      device I/O mapped memory) while a BF WQE may not be inlined (may contain
      gather entries).
      
      Preliminary testing using netperf UDP_RR shows that the latency goes down
      from 17.5us to 16.9us, while the message rate (tested with pktgen) stays
      the same.
      Signed-off-by: Achiad Shochat <achiad@mellanox.com>
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      88a85f99
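
      A minimal sketch of the send-method heuristic described above (this is
      not the actual mlx5e code; pick_send_method(), the tx_sq_state fields
      and SQ_BACKPRESSURE_THOLD are all hypothetical names): skip inline/BF
      when the SQ is back-pressured, rate-limit them otherwise, and always
      Blue Flame an inlined WQE.

          /* Hypothetical "ring nearly full" mark for back-pressure. */
          #define SQ_BACKPRESSURE_THOLD 16

          enum tx_send_method { TX_REGULAR, TX_BLUEFLAME, TX_INLINE_BF };

          struct tx_sq_state {
                  unsigned int free_slots;   /* WQE slots still free in the SQ */
                  unsigned int bf_budget;    /* BF/inline sends left this interval */
                  unsigned int inline_thold; /* max payload worth copying inline */
          };

          static enum tx_send_method pick_send_method(struct tx_sq_state *sq,
                                                      unsigned int pkt_len)
          {
                  /* Device back-pressure: the ring is nearly full, so extra
                   * CPU spent on inlining/BF buys nothing; post a regular WQE. */
                  if (sq->free_slots < SQ_BACKPRESSURE_THOLD || sq->bf_budget == 0)
                          return TX_REGULAR;

                  sq->bf_budget--;

                  /* Small packets are inlined and therefore always Blue Flamed;
                   * larger ones may still be Blue Flamed with gather entries. */
                  if (pkt_len <= sq->inline_thold)
                          return TX_INLINE_BF;
                  return TX_BLUEFLAME;
          }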
    • net/mlx5e: Allocate DMA coherent memory on reader NUMA node · 311c7c71
      Committed by Saeed Mahameed
      Using affinity hints and XPS, each mlx5e channel is assigned a CPU
      core.
      
      Channel DMA coherent memory that is written by the NIC and read
      by SW (e.g. the CQ buffer) is allocated on the NUMA node of the CPU
      core assigned to the channel.

      Channel DMA coherent memory that is written by SW and read by the
      NIC (e.g. the SQ/RQ buffers) is allocated on the NUMA node of the NIC.
      
      The doorbell record (written by SW and read by the NIC) is an
      exception, since SW accesses it more frequently.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      311c7c71
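
      A minimal sketch of the allocation trick this implies, assuming the
      kernel's dev_to_node()/set_dev_node()/dma_alloc_coherent() APIs (the
      alloc_coherent_on_node() wrapper is hypothetical): temporarily
      retarget the device's NUMA node so the coherent buffer lands near its
      most frequent reader.

          #include <linux/device.h>
          #include <linux/dma-mapping.h>

          static void *alloc_coherent_on_node(struct device *dev, size_t size,
                                              dma_addr_t *dma_handle, int node)
          {
                  int orig_node = dev_to_node(dev);
                  void *buf;

                  set_dev_node(dev, node);  /* reader's node, e.g. the channel's CPU */
                  buf = dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
                  set_dev_node(dev, orig_node);  /* restore the NIC's own node */

                  return buf;
          }

      For NIC-read buffers (SQ/RQ) the node argument would simply stay the
      NIC's own node, i.e. dev_to_node(dev).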
  6. 12 June 2015, 1 commit
  7. 05 June 2015, 6 commits
  8. 02 June 2015, 1 commit
  9. 31 May 2015, 8 commits
  10. 03 April 2015, 4 commits
  11. 16 December 2014, 2 commits
  12. 22 November 2014, 1 commit
  13. 04 October 2014, 4 commits
  14. 31 July 2014, 3 commits
  15. 24 July 2014, 1 commit
  16. 28 May 2014, 1 commit
  17. 08 March 2014, 1 commit
    • IB/mlx5: Collect signature error completion · d5436ba0
      Committed by Sagi Grimberg
      This commit handles the signature error CQE generated by the HW
      (if one occurred).  The underlying mlx5 driver handles
      signature error completions and marks the relevant memory region
      as dirty.
      
      Once the consumer gets the completion for the transaction, it must
      check for signature errors on the signature memory region using the
      new lightweight verb ib_check_mr_status().
      
      If the user doesn't check for signature errors (i.e. doesn't call
      ib_check_mr_status() with the IB_MR_CHECK_SIG_STATUS check), the
      memory region cannot be used for another signature operation
      (a REG_SIG_MR work request will fail).
      Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      d5436ba0
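
      A minimal sketch of the consumer-side check described above, using
      the ib_check_mr_status() verb and struct ib_mr_status from
      include/rdma/ib_verbs.h (the check_sig_status() wrapper itself is
      hypothetical):

          #include <linux/printk.h>
          #include <rdma/ib_verbs.h>

          static int check_sig_status(struct ib_mr *sig_mr)
          {
                  struct ib_mr_status mr_status;
                  int ret;

                  /* Must run after the transaction completes; otherwise the
                   * MR stays dirty and the next REG_SIG_MR request fails. */
                  ret = ib_check_mr_status(sig_mr, IB_MR_CHECK_SIG_STATUS,
                                           &mr_status);
                  if (ret)
                          return ret;

                  if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) {
                          pr_err("signature error type %d at offset %llu\n",
                                 mr_status.sig_err.err_type,
                                 mr_status.sig_err.sig_err_offset);
                          return -EIO;
                  }
                  return 0;
          }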