1. 29 Mar, 2019 (1 commit)
  2. 04 Mar, 2019 (1 commit)
  3. 23 Feb, 2019 (2 commits)
  4. 16 Feb, 2019 (1 commit)
  5. 12 Feb, 2019 (2 commits)
  6. 09 Feb, 2019 (1 commit)
  7. 08 Feb, 2019 (4 commits)
    • RDMA/bnxt_re: Update kernel user ABI to pass chip context · 95b86d1c
      Devesh Sharma authored
      The user space verbs provider library needs the chip context.  Change the
      ABI to add chip version details to the structure.  Furthermore, change the
      kernel driver ucontext allocation code to initialize the ABI structure
      with appropriate values.
      
      As suggested by the community, the new fields are appended at the bottom
      of the ABI structure, and the older fields are retained as they were in
      the older versions.
      
      The ABI version is kept at 1, and a new field is added in the ucontext
      response structure to hold the component mask.  The user space library
      should check the pre-defined flags to figure out whether a certain
      feature is supported or not.
      Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • RDMA/bnxt_re: Add extended psn structure for 57500 adapters · 37f91cff
      Devesh Sharma authored
      The new 57500 series of adapters has a bigger PSN search structure.  The
      size of the new structure is 16B.  Change the control path memory
      allocation and fast path code to accommodate the new PSN structure while
      maintaining backward compatibility.
      
      There are a few additional changes, listed below:
       - For the 57500 chip, max-sge is limited to 6 for now.
       - For the 57500 chip, max-receive-sge should be set to 6 for now.
       - Add a driver/hardware interface structure for the new chip.
      Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
      Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • RDMA/bnxt_re: Enable GSI QP support for 57500 series · 374c5285
      Devesh Sharma authored
      In the new 57500 series of adapters, the GSI QP is a UD-type QP, unlike
      the previous generation where it was a Raw Eth QP.  Change the control
      and data path to support the same.  The significant diffs are listed
      below:
      
       - AH creation resolves the network type unconditionally.
       - Add checks at relevant places to distinguish from the Raw Eth
         processing flow.
       - bnxt_re_process_res_ud_wc reports the completion with the GRH flag
         when the QP is GSI.
       - Change length, cfa_meta and smac to match the new driver/hardware
         interface.
       - Add the new driver/hardware interface.
      Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
      Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • RDMA/bnxt_re: Add 64-bit doorbells for 57500 series · b353ce55
      Devesh Sharma authored
      The new chip series has a 64-bit doorbell for notification queues.  Thus,
      both control and data path event queues need new routines to write the
      64-bit doorbell.  Add the same.  There is a new doorbell interface
      between the chip and the driver; change the chip-specific data structure
      definitions accordingly.
      
      Additional significant changes are listed below:
      - bnxt_re_net_ring_free/alloc takes a new argument.
      - bnxt_qplib_enable_nq and enable_rcfw use the new doorbell offset
        for the new chip.
      - DB mapping for NQ and CREQ now maps 8 bytes.
      - The DBR_DBR_* macros are renamed to DBC_DBC_*.
      - Store nq_db_offset in a 32-bit data type.
      - Drop __iowrite64_copy; use writeq instead.
      - Change the DB header initialization to a simpler scheme.
      Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
      Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
  8. 11 Jan, 2019 (1 commit)
  9. 20 Dec, 2018 (4 commits)
  10. 19 Dec, 2018 (1 commit)
  11. 16 Oct, 2018 (1 commit)
  12. 04 Oct, 2018 (1 commit)
  13. 06 Sep, 2018 (1 commit)
  14. 31 Jul, 2018 (3 commits)
  15. 24 Jul, 2018 (1 commit)
  16. 11 Jul, 2018 (1 commit)
    • RDMA: Fix storage of PortInfo CapabilityMask in the kernel · 2f944c0f
      Jason Gunthorpe authored
      The internal flag IP_BASED_GIDS was added to a field that was being used
      to hold the PortInfo CapabilityMask without considering the effects this
      would have.  Since most drivers just use the value from the HW MAD, it
      means IP_BASED_GIDS would also become set on any HW that sets the IBA
      flag IsOtherLocalChangesNoticeSupported, which is not intended.
      
      Fix this by keeping port_cap_flags only for the IBA CapabilityMask value
      and storing unrelated flags externally.  Move the bit definitions for
      this to ib_mad.h to make it clear what is happening.
      
      To keep the uAPI unchanged define a new set of flags in the uapi header
      that are only used by ib_uverbs_query_port_resp.port_cap_flags which match
      the current flags supported in rdma-core, and the values exposed by the
      current kernel.
      
      Fixes: b4a26a27 ("IB: Report using RoCE IP based gids in port caps")
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
      Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com>
      Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
  17. 19 Jun, 2018 (4 commits)
  18. 04 Apr, 2018 (2 commits)
  19. 15 Mar, 2018 (1 commit)
    • infiniband: bnxt_re: use BIT_ULL() for 64-bit bit masks · bd8602ca
      Arnd Bergmann authored
      On 32-bit targets, we otherwise get a warning about an impossible constant
      integer expression:
      
      In file included from include/linux/kernel.h:11,
                       from include/linux/interrupt.h:6,
                       from drivers/infiniband/hw/bnxt_re/ib_verbs.c:39:
      drivers/infiniband/hw/bnxt_re/ib_verbs.c: In function 'bnxt_re_query_device':
      include/linux/bitops.h:7:24: error: left shift count >= width of type [-Werror=shift-count-overflow]
       #define BIT(nr)   (1UL << (nr))
                              ^~
      drivers/infiniband/hw/bnxt_re/bnxt_re.h:61:34: note: in expansion of macro 'BIT'
       #define BNXT_RE_MAX_MR_SIZE_HIGH BIT(39)
                                        ^~~
      drivers/infiniband/hw/bnxt_re/bnxt_re.h:62:30: note: in expansion of macro 'BNXT_RE_MAX_MR_SIZE_HIGH'
       #define BNXT_RE_MAX_MR_SIZE  BNXT_RE_MAX_MR_SIZE_HIGH
                                    ^~~~~~~~~~~~~~~~~~~~~~~~
      drivers/infiniband/hw/bnxt_re/ib_verbs.c:149:25: note: in expansion of macro 'BNXT_RE_MAX_MR_SIZE'
        ib_attr->max_mr_size = BNXT_RE_MAX_MR_SIZE;
                               ^~~~~~~~~~~~~~~~~~~
      
      Fixes: 872f3578 ("RDMA/bnxt_re: Add support for MRs with Huge pages")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
  20. 14 Mar, 2018 (1 commit)
  21. 07 Mar, 2018 (1 commit)
    • RDMA/bnxt_re: Avoid Hard lockup during error CQE processing · 942c9b6c
      Selvin Xavier authored
      We hit the following hard lockup due to a race condition in error CQE
      processing:
      
      [26146.879798] bnxt_en 0000:04:00.0: QPLIB: FP: CQ Processed Req
      [26146.886346] bnxt_en 0000:04:00.0: QPLIB: wr_id[1251] = 0x0 with status 0xa
      [26156.350935] NMI watchdog: Watchdog detected hard LOCKUP on cpu 4
      [26156.357470] Modules linked in: nfsd auth_rpcgss nfs_acl lockd grace
      [26156.447957] CPU: 4 PID: 3413 Comm: kworker/4:1H Kdump: loaded
      [26156.457994] Hardware name: Dell Inc. PowerEdge R430/0CN7X8,
      [26156.466390] Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
      [26156.472639] Call Trace:
      [26156.475379]  <NMI>  [<ffffffff98d0d722>] dump_stack+0x19/0x1b
      [26156.481833]  [<ffffffff9873f775>] watchdog_overflow_callback+0x135/0x140
      [26156.489341]  [<ffffffff9877f237>] __perf_event_overflow+0x57/0x100
      [26156.496256]  [<ffffffff98787c24>] perf_event_overflow+0x14/0x20
      [26156.502887]  [<ffffffff9860a580>] intel_pmu_handle_irq+0x220/0x510
      [26156.509813]  [<ffffffff98d16031>] perf_event_nmi_handler+0x31/0x50
      [26156.516738]  [<ffffffff98d1790c>] nmi_handle.isra.0+0x8c/0x150
      [26156.523273]  [<ffffffff98d17be8>] do_nmi+0x218/0x460
      [26156.528834]  [<ffffffff98d16d79>] end_repeat_nmi+0x1e/0x7e
      [26156.534980]  [<ffffffff987089c0>] ? native_queued_spin_lock_slowpath+0x1d0/0x200
      [26156.543268]  [<ffffffff987089c0>] ? native_queued_spin_lock_slowpath+0x1d0/0x200
      [26156.551556]  [<ffffffff987089c0>] ? native_queued_spin_lock_slowpath+0x1d0/0x200
      [26156.559842]  <EOE>  [<ffffffff98d083e4>] queued_spin_lock_slowpath+0xb/0xf
      [26156.567555]  [<ffffffff98d15690>] _raw_spin_lock+0x20/0x30
      [26156.573696]  [<ffffffffc08381a1>] bnxt_qplib_lock_buddy_cq+0x31/0x40 [bnxt_re]
      [26156.581789]  [<ffffffffc083bbaa>] bnxt_qplib_poll_cq+0x43a/0xf10 [bnxt_re]
      [26156.589493]  [<ffffffffc083239b>] bnxt_re_poll_cq+0x9b/0x760 [bnxt_re]
      
      The issue happens if RQ poll_cq, SQ poll_cq, or an async error event
      tries to put the error QP in the flush list.  Since the SQ and RQ of
      each error QP are added to two different flush lists, we need to protect
      them using the locks of the corresponding CQs.  A difference in the
      order of acquiring the locks in SQ poll_cq and RQ poll_cq can cause a
      hard lockup.
      
      Revisit the locking strategy and remove the usage of qplib_cq.hwq.lock.
      Instead of this lock, introduce qplib_cq.flush_lock to handle
      addition/deletion of QPs in the flush list.  Also, always invoke the
      flush_lock in order (SQ CQ lock first and then RQ CQ lock) to avoid any
      potential deadlock.
      
      Other than the poll_cq context, the movement of a QP to/from the flush
      list can be done in the modify_qp context or from an async error event
      from HW.  Synchronize these operations using the bnxt_re verbs layer CQ
      locks.  To achieve this, add a callback from the HW abstraction layer
      (qplib) to the bnxt_re ib_verbs layer in case of an async error event.
      Also, remove the buddy CQ functions as they are no longer required.
      Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
      Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
      Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
      Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
  22. 01 Mar, 2018 (2 commits)
    • infiniband: bnxt_re: use BIT_ULL() for 64-bit bit masks · a8ed7487
      Arnd Bergmann authored
      On 32-bit targets, we otherwise get a warning about an impossible constant
      integer expression:
      
      In file included from include/linux/kernel.h:11,
                       from include/linux/interrupt.h:6,
                       from drivers/infiniband/hw/bnxt_re/ib_verbs.c:39:
      drivers/infiniband/hw/bnxt_re/ib_verbs.c: In function 'bnxt_re_query_device':
      include/linux/bitops.h:7:24: error: left shift count >= width of type [-Werror=shift-count-overflow]
       #define BIT(nr)   (1UL << (nr))
                              ^~
      drivers/infiniband/hw/bnxt_re/bnxt_re.h:61:34: note: in expansion of macro 'BIT'
       #define BNXT_RE_MAX_MR_SIZE_HIGH BIT(39)
                                        ^~~
      drivers/infiniband/hw/bnxt_re/bnxt_re.h:62:30: note: in expansion of macro 'BNXT_RE_MAX_MR_SIZE_HIGH'
       #define BNXT_RE_MAX_MR_SIZE  BNXT_RE_MAX_MR_SIZE_HIGH
                                    ^~~~~~~~~~~~~~~~~~~~~~~~
      drivers/infiniband/hw/bnxt_re/ib_verbs.c:149:25: note: in expansion of macro 'BNXT_RE_MAX_MR_SIZE'
        ib_attr->max_mr_size = BNXT_RE_MAX_MR_SIZE;
                               ^~~~~~~~~~~~~~~~~~~
      
      Fixes: 872f3578 ("RDMA/bnxt_re: Add support for MRs with Huge pages")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • RDMA/bnxt_re: Unconditionally fence non-wire memory operations · a45bc17b
      Devesh Sharma authored
      HW requires an unconditional fence for all non-wire memory operations
      through the SQ.  This guarantees the completion of these memory
      operations.
      Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
      Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
  23. 21 Feb, 2018 (3 commits)