1. 19 Feb 2017, 4 commits
  2. 12 Dec 2016, 4 commits
  3. 16 Nov 2016, 2 commits
  4. 08 Oct 2016, 1 commit
  5. 02 Oct 2016, 4 commits
  6. 17 Sep 2016, 1 commit
  7. 03 Aug 2016, 8 commits
  8. 24 Jun 2016, 1 commit
  9. 27 May 2016, 1 commit
  10. 26 May 2016, 1 commit
  11. 14 May 2016, 2 commits
  12. 29 Apr 2016, 2 commits
  13. 18 Mar 2016, 6 commits
    • IB/hfi1: Enable adaptive pio by default · d0e859c3
      Mike Marciniszyn committed
      Set the piothreshold to the agreed-upon default of 256B.
      Reviewed-by: Jubin John <jubin.john@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      d0e859c3
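      A minimal sketch of what such a default looks like as a kernel module
      parameter. The variable name, description text, and includes below are
      illustrative assumptions, not copied from the hfi1 source:

      #include <linux/module.h>
      #include <linux/moduleparam.h>

      /* Assumed pio/sdma cutover size exposed as a tunable, defaulting to
       * 256 bytes: sends at or below the threshold are eligible for
       * adaptive pio, larger sends go through sdma. */
      static uint piothreshold = 256;
      module_param(piothreshold, uint, S_IRUGO);
      MODULE_PARM_DESC(piothreshold, "size used to determine sdma vs. pio");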
    • IB/hfi1: Fix adaptive pio packet corruption · 47177f1b
      Mike Marciniszyn committed
      The adaptive pio heuristic missed a case that causes a corrupted
      packet on the wire.
      
      The case occurs when SDMA egress has been chosen for a pio-able packet
      and then encounters a ring space wait: the packet is queued.  The sge
      cursor has already been incremented as part of the packet build-out for SDMA.
      
      After the send engine restart, the heuristic might now choose pio based
      on the sdma count being zero and start the mmio copy using the
      already-incremented sge cursor.
      
      Fix this by forcing SDMA egress when the SDMA descriptor has already
      been built.
      
      Additionally, the code to wait for a QP's pio count to reach zero when
      switching to SDMA was missing.  Add it.
      
      There is also an issue with UD QPs, in that different SLs can pick
      different egress send contexts.  For now, just ensure that UD/GSI QPs
      always go through SDMA.
      Reviewed-by: Vennila Megavannan <vennila.megavannan@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      47177f1b
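      A self-contained C sketch of the selection rule described in the commit
      above. The types, field names, and threshold handling are assumptions
      for illustration, not the hfi1 code:

      #include <stdbool.h>
      #include <stdint.h>

      enum qp_type { QP_TYPE_RC, QP_TYPE_UC, QP_TYPE_UD, QP_TYPE_GSI };

      struct pkt_state {
          enum qp_type qp_type;
          uint32_t payload_len;     /* bytes to send */
          uint32_t piothreshold;    /* pio/sdma cutover size */
          bool sdma_desc_built;     /* sdma descriptor already built for this packet */
      };

      /* Decide the egress path.  Once an sdma descriptor has been built the
       * sge cursor has already advanced, so the packet must stay on sdma;
       * replaying it over pio would copy from the wrong offset and corrupt
       * the data on the wire.  UD/GSI QPs always take sdma because
       * different SLs can map to different send contexts. */
      static bool use_pio(const struct pkt_state *ps)
      {
          if (ps->sdma_desc_built)
              return false;
          if (ps->qp_type == QP_TYPE_UD || ps->qp_type == QP_TYPE_GSI)
              return false;
          return ps->payload_len <= ps->piothreshold;
      }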
    • IB/hfi1: Fix panic in adaptive pio · cef504c5
      Mike Marciniszyn committed
      The following panic occurs while running ib_send_bw -a with
      adaptive pio turned on:
      
      [ 8551.143596] BUG: unable to handle kernel NULL pointer dereference at (null)
      [ 8551.152986] IP: [<ffffffffa0902a94>] pio_wait.isra.21+0x34/0x190 [hfi1]
      [ 8551.160926] PGD 80db21067 PUD 80bb45067 PMD 0
      [ 8551.166431] Oops: 0000 [#1] SMP
      [ 8551.276725] task: ffff880816bf15c0 ti: ffff880812ac0000 task.ti: ffff880812ac0000
      [ 8551.285705] RIP: 0010:[<ffffffffa0902a94>] pio_wait.isra.21+0x34/0x190 [hfi1]
      [ 8551.296462] RSP: 0018:ffff880812ac3b58  EFLAGS: 00010282
      [ 8551.303029] RAX: 000000000000002d RBX: 0000000000000000 RCX: 0000000000000800
      [ 8551.311633] RDX: ffff880812ac3c08 RSI: 0000000000000000 RDI: ffff8800b6665e40
      [ 8551.320228] RBP: ffff880812ac3ba0 R08: 0000000000001000 R09: ffffffffa09039a0
      [ 8551.328820] R10: ffff880817a0c000 R11: 0000000000000000 R12: ffff8800b6665e40
      [ 8551.337406] R13: ffff880817a0c000 R14: ffff8800b6665800 R15: ffff8800b6665e40
      [ 8551.355640] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [ 8551.362674] CR2: 0000000000000000 CR3: 000000080abe8000 CR4: 00000000001406e0
      [ 8551.371262] Stack:
      [ 8551.374119]  ffff880812ac3bf0 ffff88080cf54010 ffff880800000800 ffff880812ac3c08
      [ 8551.383036]  ffff8800b6665800 ffff8800b6665e40 0000000000000202 ffffffffa08e7b80
      [ 8551.391941]  00000001007de431 ffff880812ac3bc8 ffffffffa0904645 ffff8800b6665800
      [ 8551.400859] Call Trace:
      [ 8551.404214]  [<ffffffffa08e7b80>] ? hfi1_del_timers_sync+0x30/0x30 [hfi1]
      [ 8551.412417]  [<ffffffffa0904645>] hfi1_verbs_send+0x215/0x330 [hfi1]
      [ 8551.420154]  [<ffffffffa08ec126>] hfi1_do_send+0x166/0x350 [hfi1]
      [ 8551.427618]  [<ffffffffa055a533>] rvt_post_send+0x533/0x6a0 [rdmavt]
      [ 8551.435367]  [<ffffffffa050760f>] ib_uverbs_post_send+0x30f/0x530 [ib_uverbs]
      [ 8551.443999]  [<ffffffffa0501367>] ib_uverbs_write+0x117/0x380 [ib_uverbs]
      [ 8551.452269]  [<ffffffff815810ab>] ? sock_recvmsg+0x3b/0x50
      [ 8551.459071]  [<ffffffff81581152>] ? sock_read_iter+0x92/0xe0
      [ 8551.466068]  [<ffffffff81212857>] __vfs_write+0x37/0x100
      [ 8551.472692]  [<ffffffff81213532>] ? rw_verify_area+0x52/0xd0
      [ 8551.479682]  [<ffffffff81213782>] vfs_write+0xa2/0x1a0
      [ 8551.486089]  [<ffffffff81003176>] ? do_audit_syscall_entry+0x66/0x70
      [ 8551.493891]  [<ffffffff812146c5>] SyS_write+0x55/0xc0
      [ 8551.500220]  [<ffffffff816ae0ee>] entry_SYSCALL_64_fastpath+0x12/0x71
      [ 8551.531284] RIP  [<ffffffffa0902a94>] pio_wait.isra.21+0x34/0x190 [hfi1]
      [ 8551.539508]  RSP <ffff880812ac3b58>
      [ 8551.544110] CR2: 0000000000000000
      
      The priv s_sendcontext pointer was not set up properly.  Fix with this
      patch by using the s_sendcontext and eliminating its send engine use.
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      cef504c5
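      A hedged sketch of the underlying pattern rather than the hfi1 fix
      itself: the QP's private send-context pointer is resolved once during
      QP setup, so a pio wait path dereferences an initialized pointer
      instead of NULL. All names below are illustrative:

      #include <stddef.h>

      struct send_context;                      /* opaque hardware send context */

      struct qp_priv {
          struct send_context *s_sendcontext;   /* cached egress context */
      };

      /* Resolve and cache the send context while the QP is being set up. */
      static int qp_priv_init_sendcontext(struct qp_priv *priv,
                                          struct send_context *(*pick)(void))
      {
          priv->s_sendcontext = pick();
          return priv->s_sendcontext ? 0 : -1;  /* fail setup rather than oops later */
      }

      /* The send/wait path then uses the cached pointer directly. */
      static struct send_context *qp_sendcontext(struct qp_priv *priv)
      {
          return priv->s_sendcontext;
      }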
    • IB/hfi1: Fix ordering of trace for accuracy · 5326dfbf
      Mike Marciniszyn committed
      The positioning of the sdma ibhdr trace was
      causing an extra trace message when the tx send
      returned -EBUSY.
      
      Move the trace to just before the return
      and handle negative return values to avoid
      any trace.
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      5326dfbf
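      A compact C sketch of the change described above, using placeholder
      names: the trace call sits just before the return and is skipped for
      negative return values, so an -EBUSY that triggers a resend cannot log
      the header twice:

      #include <errno.h>

      struct tx;                               /* opaque send descriptor */

      int  submit_tx(struct tx *tx);           /* returns >= 0 on success, -EBUSY etc. on failure */
      void trace_output_ibhdr(struct tx *tx);  /* placeholder tracepoint hook */

      static int send_tx(struct tx *tx)
      {
          int ret = submit_tx(tx);

          /* Trace only a successful submit, just before returning. */
          if (ret >= 0)
              trace_output_ibhdr(tx);
          return ret;
      }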
    • IB/hfi1: Add unique trace point for pio and sdma send · 1db78eee
      Mike Marciniszyn committed
      This allows for separately enabling pio and sdma
      tracepoints to cut the volume of trace information.
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      1db78eee
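      A sketch of how separately enable-able events are typically declared
      with the kernel tracepoint macros. The trace system, event, and field
      names are illustrative, not the hfi1 definitions, and the usual
      trace-header boilerplate (header guard, TRACE_INCLUDE_FILE, the
      define_trace include) is omitted:

      #undef TRACE_SYSTEM
      #define TRACE_SYSTEM hfi1_tx          /* assumed trace system name */

      #include <linux/types.h>
      #include <linux/tracepoint.h>

      /* One template shared by both send paths... */
      DECLARE_EVENT_CLASS(output_ibhdr_template,
          TP_PROTO(u32 qpn, u32 len),
          TP_ARGS(qpn, len),
          TP_STRUCT__entry(
              __field(u32, qpn)
              __field(u32, len)
          ),
          TP_fast_assign(
              __entry->qpn = qpn;
              __entry->len = len;
          ),
          TP_printk("qpn 0x%x len %u", __entry->qpn, __entry->len)
      );

      /* ...and two distinct events, so pio and sdma sends each get their
       * own enable file under events/ and can be toggled independently. */
      DEFINE_EVENT(output_ibhdr_template, pio_output_ibhdr,
          TP_PROTO(u32 qpn, u32 len),
          TP_ARGS(qpn, len));

      DEFINE_EVENT(output_ibhdr_template, sdma_output_ibhdr,
          TP_PROTO(u32 qpn, u32 len),
          TP_ARGS(qpn, len));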
    • IB/hfi1: Add adaptive cacheless verbs copy · 528ee9fb
      Dean Luick committed
      The kernel memcpy is faster than a cacheless copy.  However,
      if too much of the L3 cache is overwritten by one-time copies
      then overall bandwidth suffers.  Implement an adaptive scheme
      where full-page copies are tracked and, if the number of unique
      entries is larger than a threshold, verbs will use a cacheless
      copy.  Tracked entries are gradually cleaned, allowing memcpy to
      resume once the larger copies have stopped.
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Jubin John <jubin.john@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      528ee9fb
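      A self-contained sketch of the adaptive policy described in the commit
      above. The threshold, aging step, and names are assumptions, and the
      cache-bypassing copy itself is left abstract:

      #include <stdbool.h>
      #include <stddef.h>

      #define UNIQUE_PAGE_THRESHOLD 64    /* assumed cutover, not the hfi1 value */

      struct copy_policy {
          size_t unique_pages;            /* distinct full-page sources seen recently */
      };

      /* Called when a full-page copy from a not-yet-seen source is tracked. */
      static void track_full_page_copy(struct copy_policy *p)
      {
          p->unique_pages++;
      }

      /* Called periodically: tracked entries are gradually cleaned so that
       * ordinary memcpy resumes once the burst of large copies stops. */
      static void age_tracked_entries(struct copy_policy *p)
      {
          if (p->unique_pages)
              p->unique_pages--;
      }

      /* Bypass the cache only while many distinct pages are being copied,
       * protecting the L3 cache from one-time traffic. */
      static bool use_cacheless_copy(const struct copy_policy *p)
      {
          return p->unique_pages > UNIQUE_PAGE_THRESHOLD;
      }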
  14. 12 Mar 2016, 1 commit
    • staging: rdma: hfi1: Use setup_timer · e1af35bc
      Bhaktipriya Shridhar committed
      The function setup_timer combines the initialization of a timer with the
      initialization of the timer's function and data fields.
      
      The multiline code for timer initialization is now replaced with a
      call to setup_timer.
      
      This was done with the following Coccinelle semantic patch:
      
      @@ expression e1, e2, e3; type T; @@
      - init_timer(&e1);
      ...
      (
      - e1.function = e2;
      ...
      - e1.data = (T)e3;
      + setup_timer(&e1, e2, (T)e3);
      |
      - e1.data = (T)e3;
      ...
      - e1.function = e2;
      + setup_timer(&e1, e2, (T)e3);
      |
      - e1.function = e2;
      + setup_timer(&e1, e2, 0);
      )
      Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      e1af35bc
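      A before/after example of the conversion this semantic patch performs,
      written against the pre-4.15 timer API; the my_dev and poll_timeout
      names are illustrative, not taken from the hfi1 code:

      #include <linux/timer.h>

      struct my_dev {
          struct timer_list poll_timer;
      };

      static void poll_timeout(unsigned long data)
      {
          struct my_dev *dev = (struct my_dev *)data;
          /* ... handle the expired timer ... */
          (void)dev;
      }

      static void my_dev_timer_init(struct my_dev *dev)
      {
          /* Before: open-coded, multiline initialization
           *   init_timer(&dev->poll_timer);
           *   dev->poll_timer.function = poll_timeout;
           *   dev->poll_timer.data = (unsigned long)dev;
           */

          /* After: the equivalent single call produced by the Coccinelle rule. */
          setup_timer(&dev->poll_timer, poll_timeout, (unsigned long)dev);
      }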
  15. 11 Mar 2016, 2 commits