1. 12 Apr 2014, 4 commits
  2. 21 Mar 2014, 2 commits
  3. 15 Mar 2014, 1 commit
    • cxgb4/iw_cxgb4: Doorbell Drop Avoidance Bug Fixes · 05eb2389
      Committed by Steve Wise
      The current logic suffers from a slow response time when disabling user
      DB usage, and it also fails to avoid DB FIFO drops under heavy load.
      This commit fixes these deficiencies and makes the avoidance logic more
      effective, by notifying the ULDs of potential DB problems more
      efficiently and by implementing a smoother flow control algorithm in
      iw_cxgb4, the ULD that puts the most load on the DB FIFO.
      
      Design:
      
      cxgb4:
      
      Direct ULD callback from the DB FULL/DROP interrupt handler.  This allows
      the ULD to stop doing user DB writes as quickly as possible.
      
      While user DB usage is disabled, the LLD will accumulate DB write events
      for its queues.  Once DB usage is reenabled, a single DB write is done
      for each queue with its accumulated write count.  This reduces the load
      put on the DB FIFO when reenabling.
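      
      A minimal userspace sketch of this accumulate-and-replay idea; the names
      used here (db_disabled, pending_db_incs, ring_doorbell) are illustrative,
      not the driver's actual symbols:
      
          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>

          #define NQUEUES 4

          static bool db_disabled;                  /* set by the DB FULL upcall */
          static uint32_t pending_db_incs[NQUEUES]; /* accumulated while disabled */

          static void ring_doorbell(int qid, uint32_t inc)
          {
              printf("DB write: queue %d, inc %u\n", qid, inc); /* stands in for the MMIO write */
          }

          /* Called on every DB write request from a queue. */
          static void db_write(int qid, uint32_t inc)
          {
              if (db_disabled)
                  pending_db_incs[qid] += inc;  /* just remember the count */
              else
                  ring_doorbell(qid, inc);
          }

          /* Called once the DB EMPTY notification reenables DB usage. */
          static void db_reenable(void)
          {
              db_disabled = false;
              for (int qid = 0; qid < NQUEUES; qid++) {
                  if (pending_db_incs[qid]) {
                      /* one DB write per queue with its accumulated count */
                      ring_doorbell(qid, pending_db_incs[qid]);
                      pending_db_incs[qid] = 0;
                  }
              }
          }

          int main(void)
          {
              db_disabled = true;   /* DB FULL arrived */
              db_write(0, 3);
              db_write(0, 2);
              db_write(1, 1);
              db_reenable();        /* prints inc 5 for queue 0, inc 1 for queue 1 */
              return 0;
          }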
      
      iw_cxgb4:
      
      Instead of marking each QP to indicate that DB writes are disabled, we
      create a device-global status page that each user process maps.  This
      allows iw_cxgb4 to set a single bit to disable all DB writes for all
      user QPs, instead of traversing the idr of all the active QPs.  If
      libcxgb4 doesn't support this, then we fall back to the old approach of
      marking each QP.  Thus the new driver still works with an older
      libcxgb4.
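      
      A sketch of the shared status-page idea; the struct and field names
      (db_status_page, db_disabled) are assumptions for illustration, not the
      exact ABI:
      
          #include <stdint.h>

          /* One page mapped by every user process. */
          struct db_status_page {
              uint32_t db_disabled;  /* 0 = ring the DB directly, 1 = go through iw_cxgb4 */
              /* ... remainder of the page ... */
          };

          /* Kernel side: a single store disables user DB writes for all user
           * QPs at once, instead of traversing the idr of active QPs. */
          void stop_all_user_db(struct db_status_page *page)
          {
              page->db_disabled = 1;
          }

          /* User library side: check the flag before each direct DB write and
           * fall back to a kernel call while DB usage is stopped. */
          void user_ring_db(const struct db_status_page *page, uint32_t inc,
                            void (*mmio_db_write)(uint32_t),
                            void (*kernel_db_call)(uint32_t))
          {
              if (!page->db_disabled)
                  mmio_db_write(inc);   /* normal case: direct user DB write */
              else
                  kernel_db_call(inc);  /* DB stopped: let iw_cxgb4 flow-control it */
          }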
      
      When the LLD upcalls iw_cxgb4 indicating DB FULL, we disable all DB
      writes via the status page and transition the DB state to STOPPED.  As
      user processes see that DB writes are disabled, they call into iw_cxgb4
      to submit their DB write events.  Since the DB state is STOPPED, the QP
      trying to write gets enqueued on a new DB "flow control" list.  As
      subsequent DB writes are submitted for this flow-controlled QP, its
      write count on the flow control list is accumulated.  So all the user
      QPs that are actively ringing the DB get put on this list, and the
      number of writes each one requests is accumulated.
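      
      A sketch of this STOPPED-state path, using illustrative names (struct qp,
      fc_next, pending_db_incs): a QP is placed on the flow-control list once,
      and later write requests only bump its accumulated count:
      
          #include <stdbool.h>
          #include <stdint.h>

          struct qp {
              uint32_t pending_db_incs;  /* DB writes accumulated while flow controlled */
              bool on_fc_list;
              struct qp *fc_next;
          };

          static struct qp *fc_list_head;  /* device-global flow-control list */

          /* Called when a user QP submits a DB write event while the DB state
           * is STOPPED (or FLOW_CONTROL). */
          void db_write_while_stopped(struct qp *qp, uint32_t inc)
          {
              qp->pending_db_incs += inc;  /* remember how much to ring later */
              if (!qp->on_fc_list) {       /* enqueue the QP only once */
                  qp->on_fc_list = true;
                  qp->fc_next = fc_list_head;
                  fc_list_head = qp;
              }
          }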
      
      When the LLD upcalls iw_cxgb4 indicating DB EMPTY (this upcall runs in a
      workq context), we change the DB state to FLOW_CONTROL and begin
      resuming all the QPs that are on the flow control list.  This logic runs
      until the flow control list is empty or we exit FLOW_CONTROL mode (due
      to a DB DROP upcall, for example).  QPs are removed from this list, and
      their accumulated DB write counts are written to the DB FIFO.  Sets of
      QPs, called chunks in the code, are removed at one time.  The chunk size
      is 64, so 64 QPs are resumed at a time, and before the next chunk is
      resumed, the logic waits (blocks) for the DB FIFO to drain.  This
      prevents resuming too quickly and overflowing the FIFO.  Once the flow
      control list is empty, the DB state transitions back to NORMAL and user
      QPs are again allowed to write directly to the user DB register.
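      
      A sketch of that resume loop as run from workqueue context; it reuses the
      struct qp layout from the previous sketch, and helpers such as
      db_fifo_drained() and in_flow_control_mode() are placeholders for the
      real checks:
      
          #include <stdbool.h>
          #include <stdint.h>

          struct qp {                    /* same layout as in the previous sketch */
              uint32_t pending_db_incs;
              bool on_fc_list;
              struct qp *fc_next;
          };

          #define FC_CHUNK 64            /* QPs resumed per step, as described above */

          extern struct qp *fc_list_head;
          extern bool in_flow_control_mode(void);  /* false after e.g. a DB DROP upcall */
          extern bool db_fifo_drained(void);       /* e.g. poll a FIFO depth register */
          extern void wait_a_bit(void);
          extern void ring_doorbell_for_qp(struct qp *qp, uint32_t inc);

          void resume_flow_controlled_qps(void)    /* runs in workqueue context */
          {
              while (fc_list_head && in_flow_control_mode()) {
                  /* Resume one chunk of QPs with their accumulated counts. */
                  for (int i = 0; i < FC_CHUNK && fc_list_head; i++) {
                      struct qp *qp = fc_list_head;
                      fc_list_head = qp->fc_next;
                      qp->on_fc_list = false;
                      ring_doorbell_for_qp(qp, qp->pending_db_incs);
                      qp->pending_db_incs = 0;
                  }
                  /* Block until the DB FIFO drains before resuming the next
                   * chunk, so we do not overflow the FIFO. */
                  while (!db_fifo_drained())
                      wait_a_bit();
              }
              /* Once the list is empty, the caller moves the DB state back to
               * NORMAL and user QPs write the DB register directly again. */
          }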
      
      The algorithm is designed such that if the DB write load is high enough,
      all the DB writes get submitted by the kernel using this flow-controlled
      approach to avoid DB drops.  As the load lightens, we resume normal DB
      writes issued directly by user applications.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 14 Aug 2013, 2 commits
    • RDMA/cxgb4: Fix QP flush logic · 1cf24dce
      Committed by Steve Wise
      This patch makes the following fixes in the QP flush logic:
      
      - correctly flushes unsignaled WRs followed by a signaled WR
      - supports flushing a CQ bound to multiple QPs
      - resets cidx_flush if an active queue starts getting HW CQEs again
      - marks the WQ in error when we leave RTS. This was only being done
        for user queues, but we need it for kernel queues too so that
        post_send/post_recv will start returning the appropriate error
        synchronously
      - eats unsignaled read resp CQEs. HW always inserts CQEs, so we must
        silently discard them if the read work request was unsignaled.
      - handles QP flushes with pending SW CQEs. The flush and out-of-order
        completion logic has a bug: if out-of-order completions are flushed
        but not yet polled by the consumer and the QP is then flushed, we
        end up inserting duplicate completions.
      - c4iw_flush_sq() should only flush WRs that have not already been
        flushed.  Since we already track where in the SQ we've flushed via
        sq.cidx_flush, just start at that point and flush any remaining WRs
        (a sketch of this follows the list).  This bug only caused a problem
        in the presence of unsignaled work requests.
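      
      A sketch of that last point over a simplified software SQ ring;
      sq.cidx_flush is the index named above, while the surrounding struct and
      insert_flush_cqe() are illustrative stand-ins:
      
          #include <stdint.h>

          #define SQ_SIZE 128

          struct swsqe { uint64_t wr_id; int signaled; };

          struct swsq {
              struct swsqe queue[SQ_SIZE];
              uint16_t cidx_flush;  /* next WR to flush; persists across flush calls */
              uint16_t pidx;        /* one past the last posted WR */
          };

          extern void insert_flush_cqe(uint64_t wr_id);  /* stand-in for SW CQE insertion */

          void flush_sq(struct swsq *sq)
          {
              /* Start at cidx_flush rather than the consumer index, so WRs that
               * were already flushed (e.g. unsignaled ones completed earlier)
               * are not completed a second time. */
              while (sq->cidx_flush != sq->pidx) {
                  insert_flush_cqe(sq->queue[sq->cidx_flush].wr_id);
                  sq->cidx_flush = (uint16_t)((sq->cidx_flush + 1) % SQ_SIZE);
              }
          }
      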
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Vipul Pandya <vipul@chelsio.com>
      
      [ Fixed sparse warning due to htonl/ntohl confusion.  - Roland ]
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  5. 31 Jul 2013, 1 commit
  6. 17 Apr 2013, 1 commit
    • RDMA/cxgb4: Fix SQ allocation when on-chip SQ is disabled · 5b0c2759
      Committed by Thadeu Lima de Souza Cascardo
      Commit c079c287 ("RDMA/cxgb4: Fix error handling in create_qp()")
      broke SQ allocation.  Instead of falling back to host allocation when
      on-chip allocation fails, it tries to allocate both.  When it does, and
      we then try to free the host address from the on-chip genpool, we hit a
      BUG and the system crashes as shown below.
      
      We create a new function that restores the previous behavior and
      properly propagates the error, as intended (a sketch of the intended
      allocation order follows the trace below).
      
          kernel BUG at /usr/src/packages/BUILD/kernel-ppc64-3.0.68/linux-3.0/lib/genalloc.c:340!
          Oops: Exception in kernel mode, sig: 5 [#1]
          SMP NR_CPUS=1024 NUMA pSeries
          Modules linked in: rdma_ucm rdma_cm ib_addr ib_cm iw_cm ib_sa ib_mad ib_uverbs iw_cxgb4 ib_core ip6t_LOG xt_tcpudp xt_pkttype ipt_LOG xt_limit ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_raw xt_NOTRACK ipt_REJECT xt_state iptable_raw iptable_filter ip6table_mangle nf_conntrack_netbios_ns nf_conntrack_broadcast nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 ip_tables ip6table_filter ip6_tables x_tables fuse loop dm_mod ipv6 ipv6_lib sr_mod cdrom ibmveth(X) cxgb4 sg ext3 jbd mbcache sd_mod crc_t10dif scsi_dh_emc scsi_dh_hp_sw scsi_dh_alua scsi_dh_rdac scsi_dh ibmvscsic(X) scsi_transport_srp scsi_tgt scsi_mod
          Supported: Yes
          NIP: c00000000037d41c LR: d000000003913824 CTR: c00000000037d3b0
          REGS: c0000001f350ae50 TRAP: 0700   Tainted: G            X  (3.0.68-0.9-ppc64)
          MSR: 8000000000029032 <EE,ME,CE,IR,DR>  CR: 24042482  XER: 00000001
          TASK = c0000001f6f2a840[3616] 'rping' THREAD: c0000001f3508000 CPU: 0
          GPR00: c0000001f6e875c8 c0000001f350b0d0 c000000000fc9690 c0000001f6e875c0
          GPR04: 00000000000c0000 0000000000010000 0000000000000000 c0000000009d482a
          GPR08: 000000006a170000 0000000000100000 c0000001f350b140 c0000001f6e875c8
          GPR12: d000000003915dd0 c000000003f40000 000000003e3ecfa8 c0000001f350bea0
          GPR16: c0000001f350bcd0 00000000003c0000 0000000000040100 c0000001f6e74a80
          GPR20: d00000000399a898 c0000001f6e74ac8 c0000001fad91600 c0000001f6e74ab0
          GPR24: c0000001f7d23f80 0000000000000000 0000000000000002 000000006a170000
          GPR28: 000000000000000c c0000001f584c8d0 d000000003925180 c0000001f6e875c8
          NIP [c00000000037d41c] .gen_pool_free+0x6c/0xf8
          LR [d000000003913824] .c4iw_ocqp_pool_free+0x8c/0xd8 [iw_cxgb4]
          Call Trace:
          [c0000001f350b0d0] [c0000001f350b180] 0xc0000001f350b180 (unreliable)
          [c0000001f350b170] [d000000003913824] .c4iw_ocqp_pool_free+0x8c/0xd8 [iw_cxgb4]
          [c0000001f350b210] [d00000000390fd70] .dealloc_sq+0x90/0xb0 [iw_cxgb4]
          [c0000001f350b280] [d00000000390fe08] .destroy_qp+0x78/0xf8 [iw_cxgb4]
          [c0000001f350b310] [d000000003912738] .c4iw_destroy_qp+0x208/0x2d0 [iw_cxgb4]
          [c0000001f350b460] [d000000003861874] .ib_destroy_qp+0x5c/0x130 [ib_core]
          [c0000001f350b510] [d0000000039911bc] .ib_uverbs_cleanup_ucontext+0x174/0x4f8 [ib_uverbs]
          [c0000001f350b5f0] [d000000003991568] .ib_uverbs_close+0x28/0x70 [ib_uverbs]
          [c0000001f350b670] [c0000000001e7b2c] .__fput+0xdc/0x278
          [c0000001f350b720] [c0000000001a9590] .remove_vma+0x68/0xd8
          [c0000001f350b7b0] [c0000000001a9720] .exit_mmap+0x120/0x160
          [c0000001f350b8d0] [c0000000000af330] .mmput+0x80/0x160
          [c0000001f350b960] [c0000000000b5d0c] .exit_mm+0x1ac/0x1e8
          [c0000001f350ba10] [c0000000000b8154] .do_exit+0x1b4/0x4b8
          [c0000001f350bad0] [c0000000000b84b0] .do_group_exit+0x58/0xf8
          [c0000001f350bb60] [c0000000000ce9f4] .get_signal_to_deliver+0x2f4/0x5d0
          [c0000001f350bc60] [c000000000017ee4] .do_signal_pending+0x6c/0x3e0
          [c0000001f350bdb0] [c0000000000182cc] .do_signal+0x74/0x78
          [c0000001f350be30] [c000000000009e74] do_work+0x24/0x28
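      
      A minimal sketch of the intended allocation order, assuming hypothetical
      helpers ocqp_pool_alloc() and host_dma_alloc(): try the on-chip pool
      first, fall back to host memory only on failure, and record which pool
      the buffer came from so the free path releases it correctly:
      
          #include <errno.h>
          #include <stddef.h>

          struct sq_mem {
              void *addr;
              int   on_chip;  /* 1 = allocated from the on-chip (OCQP) genpool */
          };

          extern void *ocqp_pool_alloc(size_t len);  /* may return NULL */
          extern void *host_dma_alloc(size_t len);   /* may return NULL */

          int alloc_sq(struct sq_mem *sq, size_t len, int want_onchip)
          {
              if (want_onchip) {
                  sq->addr = ocqp_pool_alloc(len);
                  if (sq->addr) {
                      sq->on_chip = 1;
                      return 0;  /* on-chip succeeded; do not also allocate host memory */
                  }
              }
              sq->addr = host_dma_alloc(len);  /* fall back to host allocation */
              if (!sq->addr)
                  return -ENOMEM;              /* propagate the failure to create_qp() */
              sq->on_chip = 0;
              return 0;
          }
      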
      Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
      Cc: Emil Goode <emilgoode@gmail.com>
      Cc: <stable@vger.kernel.org>
      Acked-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  7. 23 Mar 2013, 1 commit
  8. 14 Mar 2013, 5 commits
  9. 15 Feb 2013, 1 commit
  10. 01 Oct 2012, 1 commit
  11. 06 Sep 2012, 1 commit
  12. 19 May 2012, 3 commits
  13. 01 Nov 2011, 3 commits
  14. 15 Oct 2011, 1 commit
  15. 07 Oct 2011, 1 commit
  16. 18 Jun 2011, 1 commit
  17. 10 May 2011, 2 commits
  18. 15 Mar 2011, 1 commit
  19. 29 Jan 2011, 1 commit
  20. 11 Jan 2011, 2 commits
  21. 29 Sep 2010, 5 commits