1. 11 Nov, 2014 2 commits
  2. 15 Oct, 2014 1 commit
  3. 10 Oct, 2014 1 commit
  4. 29 Sep, 2014 2 commits
  5. 22 Aug, 2014 1 commit
  6. 08 Aug, 2014 2 commits
  7. 05 Aug, 2014 1 commit
  8. 16 Jul, 2014 1 commit
    • cxgb4/iw_cxgb4: use firmware ord/ird resource limits · 4c2c5763
      Committed by Hariprasad Shenai
      Advertise a larger max read queue depth for QPs, and gather the resource
      limits from FW and use them to avoid exhausting all the resources.
      
      Design:
      
      cxgb4:
      
      Obtain the max_ordird_qp and max_ird_adapter device params from FW
      at init time and pass them up to the ULDs when they attach.  If these
      parameters are not available, due to older firmware, then hard-code
      the values based on the known values for older firmware.
      iw_cxgb4:
      
      Fix c4iw_query_device() to report the correct values based on
      adapter parameters.  ibv_query_device() will always return:
      
      max_qp_rd_atom = max_qp_init_rd_atom = min(module_max, max_ordird_qp)
      max_res_rd_atom = max_ird_adapter
      
      Bump up the per qp max module option to 32, allowing it to be increased
      by the user up to the device max of max_ordird_qp.  32 seems to be
      sufficient to maximize throughput for streaming read benchmarks.
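
      A minimal sketch of how these attributes could be derived from the FW
      limits and the module option (struct and function names below are
      assumptions for illustration, not the driver's actual definitions):

        /* Hedged sketch: derive the reported read-queue depths.  The module
         * option stands in for the per-QP limit mentioned above; dev_params
         * mirrors the FW-provided limits.
         */
        struct dev_params {
                unsigned int max_ordird_qp;   /* per-QP ORD/IRD limit from FW */
                unsigned int max_ird_adapter; /* adapter-wide IRD limit from FW */
        };

        static unsigned int module_max_read_depth = 32;  /* module option default */

        static void fill_read_limits(const struct dev_params *p,
                                     unsigned int *max_qp_rd_atom,
                                     unsigned int *max_qp_init_rd_atom,
                                     unsigned int *max_res_rd_atom)
        {
                unsigned int per_qp = module_max_read_depth < p->max_ordird_qp ?
                                      module_max_read_depth : p->max_ordird_qp;

                *max_qp_rd_atom = per_qp;
                *max_qp_init_rd_atom = per_qp;
                *max_res_rd_atom = p->max_ird_adapter;
        }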
      
      Fail connection setup if the negotiated IRD exhausts the available
      adapter ird resources.  So the driver will track the amount of ird
      resource in use and not send an RI_WR/INIT to FW that would reduce the
      available ird resources below zero.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 02 Jul, 2014 2 commits
  10. 23 Jun, 2014 2 commits
  11. 11 Jun, 2014 1 commit
    • iw_cxgb4: Allocate and use IQs specifically for indirect interrupts · cf38be6d
      Committed by Hariprasad Shenai
      Currently indirect interrupts for RDMA CQs funnel through the LLD's RDMA
      RXQs, which also handle direct interrupts for offload CPLs during RDMA
      connection setup/teardown.  The intended T4 usage model, however, is to
      have indirect interrupts flow through dedicated IQs, i.e. not to mix
      indirect interrupts with CPL messages in an IQ.  This patch adds the
      concept of RDMA concentrator IQs, or CIQs, set up and maintained by the
      LLD and exported to iw_cxgb4 for use when creating CQs. RDMA CPLs will
      flow through the LLD's RDMA RXQs, and CQ interrupts flow through the
      CIQs.
      
      Design:
      
      cxgb4 creates and exports an array of CIQs for the RDMA ULD.  These IQs
      are sized according to the max number of CQs available at adapter init.
      In addition, these IQs don't need FL buffers since they only service
      indirect interrupts.  One CIQ is set up per RX channel, similar to the
      RDMA RXQs.
      
      iw_cxgb4 will utilize these CIQs based on the vector value passed into
      create_cq().  The num_comp_vectors advertised by iw_cxgb4 will be the
      number of CIQs configured, and thus the vector value will be the index
      into the array of CIQs.
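
      A minimal sketch of the vector-to-CIQ mapping described above (struct
      and field names are assumptions for illustration):

        /* Hedged sketch: the comp vector passed to create_cq() indexes the
         * array of CIQs exported by the LLD; num_comp_vectors equals the
         * number of CIQs configured.
         */
        struct lld_rdma_info {
                unsigned int nciq;       /* number of concentrator IQs */
                unsigned int *ciq_ids;   /* one IQ id per CIQ / RX channel */
        };

        static int pick_ciq(const struct lld_rdma_info *lld, int comp_vector,
                            unsigned int *iq_id)
        {
                if (comp_vector < 0 || (unsigned int)comp_vector >= lld->nciq)
                        return -1;       /* vector must be < num_comp_vectors */
                *iq_id = lld->ciq_ids[comp_vector];
                return 0;
        }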
      
      Based on original work by Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 15 Mar, 2014 1 commit
    • cxgb4/iw_cxgb4: Doorbell Drop Avoidance Bug Fixes · 05eb2389
      Committed by Steve Wise
      The current logic suffers from a slow response time to disable user DB
      usage, and also fails to avoid DB FIFO drops under heavy load. This commit
      fixes these deficiencies and makes the avoidance logic more effective.
      This is done by notifying the ULDs of potential DB problems more
      efficiently, and by implementing a smoother flow control algorithm in
      iw_cxgb4, which is the ULD that puts the most load on the DB FIFO.
      
      Design:
      
      cxgb4:
      
      Direct ULD callback from the DB FULL/DROP interrupt handler.  This allows
      the ULD to stop doing user DB writes as quickly as possible.
      
      While user DB usage is disabled, the LLD will accumulate DB write events
      for its queues.  Then once DB usage is reenabled, a single DB write is
      done for each queue with its accumulated write count.  This reduces the
      load put on the DB fifo when reenabling.
      
      iw_cxgb4:
      
      Instead of marking each QP to indicate DB writes are disabled, we create
      a device-global status page that each user process maps.  This allows
      iw_cxgb4 to set just this single bit to disable all DB writes for all
      user QPs instead of traversing the idr of all the active QPs.  If
      libcxgb4 doesn't support this, then we fall back to the old approach of
      marking each QP, so the new driver still works with an older libcxgb4.
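
      A minimal sketch of the single-bit disable described above (the page
      layout and helper names are assumptions for illustration):

        /* Hedged sketch: a device-global status page mapped by every user
         * process.  Setting the single db_off flag disables user DB writes
         * for all QPs at once.
         */
        struct db_status_page {
                volatile unsigned int db_off;   /* 1 = user DB writes disabled */
                /* ... other shared fields ... */
        };

        static void stop_user_db_writes(struct db_status_page *p)
        {
                p->db_off = 1;   /* a single store covers all user QPs */
        }

        static void resume_user_db_writes(struct db_status_page *p)
        {
                p->db_off = 0;
        }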
      
      When the LLD upcalls iw_cxgb4 indicating DB FULL, we disable all DB writes
      via the status page and transition the DB state to STOPPED.  As user
      processes see that DB writes are disabled, they call into iw_cxgb4
      to submit their DB write events.  Since the DB state is STOPPED,
      the QP trying to write gets enqueued on a new DB "flow control" list.
      As subsequent DB writes are submitted for this flow-controlled QP, the
      number of writes is accumulated for each QP on the flow control list.
      So all the user QPs that are actively ringing the DB get put on this
      list, and the number of writes they request is accumulated.
      
      When the LLD upcalls iw_cxgb4 indicating DB EMPTY, which is in a workq
      context, we change the DB state to FLOW_CONTROL, and begin resuming all
      the QPs that are on the flow control list.  This logic runs until
      the flow control list is empty or we exit FLOW_CONTROL mode (due to
      a DB DROP upcall, for example).  QPs are removed from this list, and
      their accumulated DB write counts are written to the DB FIFO.  Sets of
      QPs, called chunks in the code, are removed at one time.  The chunk size
      is 64, so 64 QPs are resumed at a time, and before the next chunk is
      resumed, the logic waits (blocks) for the DB FIFO to drain.  This
      prevents resuming too quickly and overflowing the FIFO.  Once the flow
      control list is empty, the DB state transitions back to NORMAL and user
      QPs are again allowed
      to write directly to the user DB register.
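
      A minimal sketch of the chunked resume described above (the list type
      and the ring_db/wait_fifo_drain helpers are placeholders, not the
      driver's actual code):

        #define RESUME_CHUNK 64

        /* Hedged sketch: resume flow-controlled QPs in chunks of 64, ringing
         * each one's accumulated DB write count and waiting for the DB FIFO
         * to drain between chunks.
         */
        struct fc_qp {
                struct fc_qp *next;
                unsigned int pending_db_writes;   /* accumulated while stopped */
        };

        static void resume_flow_controlled(struct fc_qp **list,
                                           void (*ring_db)(struct fc_qp *, unsigned int),
                                           void (*wait_fifo_drain)(void))
        {
                while (*list) {
                        int n;

                        for (n = 0; n < RESUME_CHUNK && *list; n++) {
                                struct fc_qp *qp = *list;

                                *list = qp->next;
                                ring_db(qp, qp->pending_db_writes);
                                qp->pending_db_writes = 0;
                        }
                        if (*list)
                                wait_fifo_drain();   /* avoid overflowing the FIFO */
                }
        }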
      
      The algorithm is designed such that if the DB write load is high enough,
      then all the DB writes get submitted by the kernel using this flow
      controlled approach to avoid DB drops.  As the load lightens though, we
      resume to normal DB writes directly by user applications.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 14 Mar, 2014 2 commits
  14. 19 Feb, 2014 3 commits
  15. 24 Jan, 2014 1 commit
    • net/cxgb4: Avoid disabling PCI device twice · 144be3d9
      Committed by Gavin Shan
      If an EEH error happens on the adapter and we have to remove it from
      the system for some reason (e.g. more than 5 EEH errors detected from
      the device in the last hour), the adapter will be disabled twice,
      separately by eeh_err_detected() and remove_one(), which will incur the
      following unexpected backtrace.  The patch tries to avoid that.
      
      WARNING: at drivers/pci/pci.c:1431
      CPU: 12 PID: 121 Comm: eehd Not tainted 3.13.0-rc7+ #1
      task: c0000001823a3780 ti: c00000018240c000 task.ti: c00000018240c000
      NIP: c0000000003c1e40 LR: c0000000003c1e3c CTR: 0000000001764c5c
      REGS: c00000018240f470 TRAP: 0700   Not tainted  (3.13.0-rc7+)
      MSR: 8000000000029032 <SF,EE,ME,IR,DR,RI>  CR: 28000024  XER: 00000004
      CFAR: c000000000706528 SOFTE: 1
      GPR00: c0000000003c1e3c c00000018240f6f0 c0000000010fe1f8 0000000000000035
      GPR04: 0000000000000000 0000000000000000 00000000003ae509 0000000000000000
      GPR08: 000000000000346f 0000000000000000 0000000000000000 0000000000003fef
      GPR12: 0000000028000022 c00000000ec93000 c0000000000c11b0 c000000184ac3e40
      GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
      GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
      GPR24: 0000000000000000 c0000000009398d8 c00000000101f9c0 c0000001860ae000
      GPR28: c000000182ba0000 00000000000001f0 c0000001860ae6f8 c0000001860ae000
      NIP [c0000000003c1e40] .pci_disable_device+0xd0/0xf0
      LR [c0000000003c1e3c] .pci_disable_device+0xcc/0xf0
      Call Trace:
      [c0000000003c1e3c] .pci_disable_device+0xcc/0xf0 (unreliable)
      [d0000000073881c4] .remove_one+0x174/0x320 [cxgb4]
      [c0000000003c57e0] .pci_device_remove+0x60/0x100
      [c00000000046396c] .__device_release_driver+0x9c/0x120
      [c000000000463a20] .device_release_driver+0x30/0x60
      [c0000000003bcdb4] .pci_stop_bus_device+0x94/0xd0
      [c0000000003bcf48] .pci_stop_and_remove_bus_device+0x18/0x30
      [c00000000003f548] .pcibios_remove_pci_devices+0xa8/0x140
      [c000000000035c00] .eeh_handle_normal_event+0xa0/0x3c0
      [c000000000035f50] .eeh_handle_event+0x30/0x2b0
      [c0000000000362c4] .eeh_event_handler+0xf4/0x1b0
      [c0000000000c12b8] .kthread+0x108/0x130
      [c00000000000a168] .ret_from_kernel_thread+0x5c/0x74
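
      As a hedged illustration only (not the literal patch), one way to avoid
      the second disable is to guard the remove path on the PCI core's enable
      count; whether the actual fix uses exactly this check is an assumption:

        #include <linux/pci.h>

        /* Sketch: skip pci_disable_device() if the EEH path has already
         * disabled the device.  pci_is_enabled() is the stock PCI helper.
         */
        static void remove_one_disable(struct pci_dev *pdev)
        {
                if (pci_is_enabled(pdev))
                        pci_disable_device(pdev);
        }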
      Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  16. 10 Jan, 2014 1 commit
  17. 23 Dec, 2013 2 commits
  18. 04 Dec, 2013 2 commits
  19. 22 Oct, 2013 1 commit
  20. 13 Aug, 2013 1 commit
  21. 01 Jun, 2013 1 commit
    • cxgb4: Force uninitialized state if FW_ON_ADAPTER is < FW_VERSION and we're the MASTER_PF · e69972f5
      Committed by Jay Hernandez
      Forcing uninitialized state allows us to upgrade and reinitialize the adapter.
      
      FW_VERSION_T4 = 1.4.0.0
      FW_VERSION_T5 = 0.0.0.0
      At this point the driver supports the above firmware versions and greater.
      If it doesn't find the required firmware version, it forces the adapter to
      be reinitialized as shown below (see the sketch after the list).
      
      1) If FW_ON_ADAPTER < FW_VERSION and we're the MASTER_PF, force uninitialized
         state and a FW upgrade if available.
      
             - If FW_ON_ADAPTER < /lib/firmware/cxgb4/t*fw.bin we will update the
               adapter's FW.
             - If FW_ON_ADAPTER >= /lib/firmware/cxgb4/t*fw.bin don't upgrade FW.
             - If upgrade_fw() fails, force reinitialization of the adapter anyway;
               it might still work.
      
         Either way, forcing the uninitialized state allows cxgb4 to reinitialize the FW.
      
      2) If FW_ON_ADAPTER >= FW_VERSION, the driver follows the normal path.
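
      A minimal sketch of the decision flow above (the version packing and
      helper names are assumptions for illustration):

        struct fw_ver {
                unsigned int major, minor, micro, build;
        };

        /* Hedged sketch: compare two dotted firmware versions field by field. */
        static int fw_ver_cmp(const struct fw_ver *a, const struct fw_ver *b)
        {
                if (a->major != b->major) return a->major < b->major ? -1 : 1;
                if (a->minor != b->minor) return a->minor < b->minor ? -1 : 1;
                if (a->micro != b->micro) return a->micro < b->micro ? -1 : 1;
                if (a->build != b->build) return a->build < b->build ? -1 : 1;
                return 0;
        }

        /* Returns nonzero if the adapter must be forced back to the
         * uninitialized state (FW_ON_ADAPTER < FW_VERSION and we own MASTER_PF).
         */
        static int must_force_uninit(const struct fw_ver *on_adapter,
                                     const struct fw_ver *required,
                                     int is_master_pf)
        {
                return is_master_pf && fw_ver_cmp(on_adapter, required) < 0;
        }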
      Signed-off-by: Jay Hernandez <jay@chelsio.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 14 Mar, 2013 4 commits
  23. 20 Dec, 2012 3 commits
    • RDMA/cxgb4: Fix bug for active and passive LE hash collision path · 793dad94
      Committed by Vipul Pandya
      Retries active opens for INUSE errors.
      
      Logs any active ofld_connect_wr error replies.
      
      Sends ofld_connect_wr on the same ctrlq.  It needs to go on the same
      control txq as regular CPL active/passive messages.
      
      Retries on active open replies with EADDRINUSE.
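
      A minimal sketch of bounded retries on EADDRINUSE (the send_active_open
      callback and the retry limit are placeholders for illustration):

        #include <errno.h>

        /* Hedged sketch: retry an active open a bounded number of times when
         * the reply reports the 4-tuple as in use.
         */
        #define MAX_INUSE_RETRIES 4

        static int open_with_retry(int (*send_active_open)(void *ep), void *ep)
        {
                int attempt, ret = -EADDRINUSE;

                for (attempt = 0; attempt <= MAX_INUSE_RETRIES; attempt++) {
                        ret = send_active_open(ep);
                        if (ret != -EADDRINUSE)
                                break;   /* success or a non-retryable error */
                }
                return ret;
        }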
      
      Uses active open fw wr only if active filter region is set.
      
      Adds stat for ofld_connect_wr failures.
      
      This patch also adds a debugfs file to show endpoints.
      Signed-off-by: Vipul Pandya <vipul@chelsio.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
    • cxgb4: Add LE hash collision bug fix path in LLD driver · dca4faeb
      Committed by Vipul Pandya
      It supports establishing a passive open connection through a firmware
      filter work request.  Passive open connections will now go through this
      path: instead of a listening server, we create a server filter which
      will redirect the incoming SYN packet to the offload queue.
      
      It divides the filter region into a regular filter portion and a server
      filter portion.  It introduces a new server filter region which will be
      used exclusively for creating server filters.  This region will not
      overlap with the regular filter region.
      
      It provides a new API, cxgb4_alloc_sftid, in the LLD for getting a stid
      on the LE hash collision path.  This new stid will be used to open a
      server filter in the filter region.
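
      A hedged usage sketch of the new helper on the collision path (the exact
      cxgb4_alloc_sftid() parameters and the send_server_filter_wr helper are
      assumptions made for illustration):

        /* Hedged sketch: reserve a tid from the server filter region, then
         * program a server filter with it so incoming SYNs are redirected to
         * the offload queue.
         */
        static int setup_server_filter(struct tid_info *t, void *ep)
        {
                int stid = cxgb4_alloc_sftid(t, AF_INET, ep);

                if (stid < 0)
                        return stid;     /* server filter region exhausted */

                return send_server_filter_wr(ep, stid);
        }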
      Signed-off-by: Vipul Pandya <vipul@chelsio.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
    • cxgb4: Add T4 filter support · f2b7e78d
      Committed by Vipul Pandya
      The T4 architecture is capable of filtering ingress packets at line rate
      using rules in the TCAM.  If a packet hits a rule in the TCAM, it can be
      either dropped or passed to the receive queues based on the rule's settings.
      
      This patch adds a framework for managing filters and using T4's filter
      capabilities.  It constructs a Firmware Filter Work Request which writes
      the filter at a specified index to get the work done.  It hosts a shadow
      copy of each ingress filter entry to check field size limitations and to
      save memory in the case where the filter table is large.
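
      A minimal sketch of the shadow-copy idea (field names and widths are
      assumptions for illustration, not the hardware's actual layout):

        /* Hedged sketch: keep a host-side shadow of each filter entry so
         * field widths can be checked before building the Firmware Filter
         * Work Request.
         */
        struct filter_entry_shadow {
                unsigned int valid   : 1;    /* slot programmed in hardware */
                unsigned int drop    : 1;    /* drop vs. steer to an RX queue */
                unsigned int rxq_idx : 10;   /* target receive queue if steering */
                unsigned int vlan    : 12;   /* example match field with a width limit */
        };

        static int set_filter_match(struct filter_entry_shadow *f,
                                    unsigned int vlan, unsigned int rxq_idx)
        {
                if (vlan >= (1u << 12) || rxq_idx >= (1u << 10))
                        return -1;   /* wider than the hardware field allows */
                f->vlan = vlan;
                f->rxq_idx = rxq_idx;
                return 0;
        }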
      Signed-off-by: Vipul Pandya <vipul@chelsio.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  24. 22 Oct, 2012 1 commit
  25. 09 Oct, 2012 1 commit