1. 02 May 2017, 1 commit
  2. 26 Apr 2017, 3 commits
  3. 15 Feb 2017, 2 commits
  4. 25 Jan 2017, 1 commit
  5. 03 Jan 2017, 5 commits
  6. 15 Dec 2016, 1 commit
    • IB/mlx5: avoid bogus -Wmaybe-uninitialized warning · 14ab8896
      By Arnd Bergmann
      We get a false-positive warning in linux-next for the mlx5 driver:
      
      infiniband/hw/mlx5/mr.c: In function ‘mlx5_ib_reg_user_mr’:
      infiniband/hw/mlx5/mr.c:1172:5: error: ‘order’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
      infiniband/hw/mlx5/mr.c:1161:6: note: ‘order’ was declared here
      infiniband/hw/mlx5/mr.c:1173:6: error: ‘ncont’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
      infiniband/hw/mlx5/mr.c:1160:6: note: ‘ncont’ was declared here
      infiniband/hw/mlx5/mr.c:1173:6: error: ‘page_shift’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
      infiniband/hw/mlx5/mr.c:1158:6: note: ‘page_shift’ was declared here
      infiniband/hw/mlx5/mr.c:1143:13: error: ‘npages’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
      infiniband/hw/mlx5/mr.c:1159:6: note: ‘npages’ was declared here
      
      I had a trivial workaround for gcc-5 or higher, but that didn't work
      on gcc-4.9 unfortunately.
      
      The only way I found to avoid the warnings on gcc-4.9, short of
      initializing each of the arguments first, was to change the calling
      convention to separate the error code from the umem pointer. This
      avoids casting the error codes from one pointer type to an
      incompatible one, and lets gcc see that the data is actually valid
      whenever we return successfully.
      Acked-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      14ab8896
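
      The calling-convention change described above can be pictured with a
      small, self-contained sketch; the type and function names below are
      made up for illustration and are not the actual mlx5 code. The helper
      returns a plain int error code and hands the object and its derived
      properties back through output parameters, so the compiler can see
      that every out-parameter is written on each successful path instead
      of having to reason about ERR_PTR()-style encoded pointers.

      #include <errno.h>
      #include <stdio.h>
      #include <stdlib.h>

      struct umem { int npages; };    /* stand-in for struct ib_umem */

      /* Return an int error code; write all out-parameters on success. */
      static int get_umem(size_t len, struct umem **umem, int *npages)
      {
          struct umem *u;

          if (len == 0)
              return -EINVAL;          /* plain error code, no pointer cast */

          u = malloc(sizeof(*u));
          if (!u)
              return -ENOMEM;

          u->npages = (int)((len + 4095) / 4096);
          *umem = u;
          *npages = u->npages;
          return 0;
      }

      static int reg_user_mr(size_t len)
      {
          struct umem *umem;
          int npages;
          int err = get_umem(len, &umem, &npages);

          if (err)
              return err;

          /* gcc can prove umem and npages are initialized here. */
          printf("registered %d pages\n", npages);
          free(umem);
          return 0;
      }

      int main(void)
      {
          return reg_user_mr(8192);
      }
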
  7. 14 Dec 2016, 1 commit
  8. 17 Nov 2016, 3 commits
  9. 08 Oct 2016, 2 commits
  10. 14 Aug 2016, 1 commit
  11. 23 Jun 2016, 1 commit
    • IB/mlx5: Reset flow support for IB kernel ULPs · 89ea94a7
      By Maor Gottlieb
      The driver exposes interfaces that directly relate to HW state.
      Upon a fatal error, consumers of these interfaces (ULPs) that rely
      on completion of all their posted work requests could hang, thereby
      introducing dependencies on shutdown order. To prevent this from
      happening, we manage the relevant resources (CQs, QPs) that are used
      by the device. Upon a fatal error, we now generate simulated
      completions for outstanding WQEs that were not completed at the
      time the HW was reset.
      
      This includes invoking the completion event handler for all involved
      CQs so that the ULPs will poll those CQs. When polled, we return
      simulated CQEs with an IB_WC_WR_FLUSH_ERR return code, enabling ULPs
      to clean up their resources rather than wait forever for completions
      after receiving remove_one.
      
      The above change requires an extra check in the data path to make
      sure that when the device is in an error state, the simulated CQEs
      are returned and no further WQEs are posted.
      Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
      Signed-off-by: Leon Romanovsky <leon@kernel.org>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      89ea94a7
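
      The flush-on-error behavior described above can be modeled with a
      toy, userspace-only sketch; the structures and names below are
      illustrative assumptions, not the driver's data structures. Once the
      device is flagged as being in an error state, posting is refused and
      polling drains the outstanding work requests as simulated
      completions carrying a flush-error status.

      #include <stdbool.h>
      #include <stdio.h>

      enum wc_status { WC_SUCCESS, WC_WR_FLUSH_ERR };

      struct wc { unsigned long wr_id; enum wc_status status; };

      struct queue {
          unsigned long pending[8];   /* wr_ids of posted, uncompleted WQEs */
          int head, tail;
          bool device_in_error;
      };

      /* Posting is refused once the device is in an error state, matching
       * the "no further WQEs will be posted" check in the data path. */
      static int post_wqe(struct queue *q, unsigned long wr_id)
      {
          if (q->device_in_error)
              return -1;
          q->pending[q->tail++ % 8] = wr_id;
          return 0;
      }

      /* When the device is in error, polling returns the outstanding WQEs
       * as simulated completions with a flush-error status, so consumers
       * can release resources instead of waiting forever. */
      static int poll_cq(struct queue *q, struct wc *wc)
      {
          if (q->head == q->tail)
              return 0;                           /* nothing to report */
          wc->wr_id = q->pending[q->head++ % 8];
          wc->status = q->device_in_error ? WC_WR_FLUSH_ERR : WC_SUCCESS;
          return 1;
      }

      int main(void)
      {
          struct queue q = { .head = 0, .tail = 0, .device_in_error = false };
          struct wc wc;

          post_wqe(&q, 1);
          post_wqe(&q, 2);
          q.device_in_error = true;               /* fatal error hits */

          while (poll_cq(&q, &wc))
              printf("wr_id %lu completed with status %d\n",
                     wc.wr_id, (int)wc.status);
          return 0;
      }
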
  12. 14 May 2016, 2 commits
  13. 05 Mar 2016, 3 commits
  14. 02 Mar 2016, 4 commits
  15. 09 Dec 2015, 1 commit
    • IB/mlx5: Postpone remove_keys under knowledge of coming preemption · ab5cdc31
      By Leon Romanovsky
      The remove_keys() logic is performed as a garbage collection task.
      Such a task is intended to run when no other active processes are
      running.

      need_resched() returns true if there are user tasks to be activated
      in the near future.

      In that case, we do not execute remove_keys() and instead postpone
      the garbage collection work to the next cycle, in order to free CPU
      resources for other tasks.
      
      A possible sequence to trigger this scenario:
      1. Allocate enough MRs to fill the cache above its limit.
      2. Wait a short time to let the system settle.
      3. Start CPU-intensive operations on a multi-node cluster.
      4. Observe performance degradation during the MR cache shrink operation.
      Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Eli Cohen <eli@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      ab5cdc31
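
      A kernel-style sketch of the deferral logic described above:
      need_resched(), queue_delayed_work(), and to_delayed_work() are real
      kernel APIs, while the cache structure, the function names, and the
      5-minute retry delay are assumptions made purely for illustration
      and do not reflect the actual mlx5 implementation.

      #include <linux/kernel.h>
      #include <linux/jiffies.h>
      #include <linux/sched.h>
      #include <linux/workqueue.h>

      struct mr_cache {
          struct workqueue_struct *wq;
          struct delayed_work gc_work;
      };

      static void remove_stale_keys(struct mr_cache *cache)
      {
          /* ... free cached MRs above the configured limit ... */
      }

      static void cache_gc_work(struct work_struct *work)
      {
          struct delayed_work *dwork = to_delayed_work(work);
          struct mr_cache *cache = container_of(dwork, struct mr_cache, gc_work);

          /*
           * If user tasks are about to be scheduled, skip garbage
           * collection now and retry in a later cycle so the CPU goes
           * to those tasks instead.
           */
          if (need_resched()) {
              queue_delayed_work(cache->wq, dwork, 300 * HZ);
              return;
          }

          remove_stale_keys(cache);
      }
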
  16. 29 Oct 2015, 2 commits
  17. 08 Oct 2015, 1 commit
    • IB: split struct ib_send_wr · e622f2f4
      By Christoph Hellwig
      This patch splits up struct ib_send_wr so that all non-trivial verbs
      use their own structure which embeds struct ib_send_wr. This
      dramatically shrinks the size of a WR for most common operations:
      
      sizeof(struct ib_send_wr) (old):	96
      
      sizeof(struct ib_send_wr):		48
      sizeof(struct ib_rdma_wr):		64
      sizeof(struct ib_atomic_wr):		96
      sizeof(struct ib_ud_wr):		88
      sizeof(struct ib_fast_reg_wr):		88
      sizeof(struct ib_bind_mw_wr):		96
      sizeof(struct ib_sig_handover_wr):	80
      
      And with Sagi's pending MR rework the fast registration WR will also be
      down to a reasonable size:
      
      sizeof(struct ib_fastreg_wr):		64
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com> [srp, srpt]
      Reviewed-by: Chuck Lever <chuck.lever@oracle.com> [sunrpc]
      Tested-by: Haggai Eran <haggaie@mellanox.com>
      Tested-by: Sagi Grimberg <sagig@mellanox.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
      e622f2f4
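
      The embedding pattern this commit introduces can be sketched in a few
      lines of self-contained C. The trimmed-down structures and the
      container_of()-style macro below are illustrative, not the real verbs
      definitions: each operation-specific WR embeds the base struct as its
      first member, generic code passes the base pointer around, and the
      driver recovers the outer structure only when it needs the extra
      fields, so verbs that do not need them pay only for the smaller base.

      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      #define container_of(ptr, type, member) \
          ((type *)((char *)(ptr) - offsetof(type, member)))

      struct send_wr {                /* trimmed-down base WR */
          uint64_t wr_id;
          int opcode;
      };

      struct rdma_wr {                /* RDMA-specific WR embeds the base */
          struct send_wr wr;
          uint64_t remote_addr;
          uint32_t rkey;
      };

      static struct rdma_wr *to_rdma_wr(struct send_wr *wr)
      {
          return container_of(wr, struct rdma_wr, wr);
      }

      static void post_send(struct send_wr *wr)
      {
          if (wr->opcode == 1 /* RDMA_WRITE */) {
              struct rdma_wr *rwr = to_rdma_wr(wr);

              printf("rdma write to 0x%llx rkey 0x%x\n",
                     (unsigned long long)rwr->remote_addr,
                     (unsigned int)rwr->rkey);
          }
      }

      int main(void)
      {
          struct rdma_wr rwr = {
              .wr = { .wr_id = 42, .opcode = 1 },
              .remote_addr = 0x1000,
              .rkey = 0xdead,
          };

          /* Only the base size is paid by verbs without extra fields. */
          printf("base %zu bytes, rdma %zu bytes\n",
                 sizeof(struct send_wr), sizeof(struct rdma_wr));
          post_send(&rwr.wr);
          return 0;
      }
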
  18. 04 Sep 2015, 1 commit
  19. 31 Aug 2015, 4 commits
  20. 29 Aug 2015, 1 commit