1. 19 May 2012 (1 commit)
  2. 09 May 2012 (1 commit)
  3. 13 March 2012 (1 commit)
    • IB/mlx4: Fix possible missed completion event · 3616f9ce
      Authored by Eli Cohen
      If an erroneous CQE is polled in the first iteration (i.e. npolled ==
      0), we don't update the consumer index and hence the hardware could
      get a wrong notion of how many CQEs software polled.  Fix this by
      unconditionally updating the doorbell record.  We could change the
      check to be something like
      
      	if (npolled || err != -EAGAIN)
      		...
      
      but it does not seem worth the effort since a posted write to memory
      should not cost too much.
      Signed-off-by: Eli Cohen <eli@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      3616f9ce
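
      A simplified sketch of the fixed poll path (mlx4 names, error
      handling trimmed; treat it as illustrative rather than the exact
      patch):

        int mlx4_ib_poll_cq(struct ib_cq *ibcq, int num_entries,
                            struct ib_wc *wc)
        {
                struct mlx4_ib_cq *cq = to_mcq(ibcq);
                struct mlx4_ib_qp *cur_qp = NULL;
                unsigned long flags;
                int npolled;
                int err = 0;

                spin_lock_irqsave(&cq->lock, flags);

                for (npolled = 0; npolled < num_entries; ++npolled) {
                        err = mlx4_ib_poll_one(cq, &cur_qp, wc + npolled);
                        if (err)
                                break;
                }

                /* Previously guarded by "if (npolled)"; ringing the
                 * consumer-index doorbell unconditionally means an
                 * erroneous first CQE can no longer leave it stale. */
                mlx4_cq_set_ci(&cq->mcq);

                spin_unlock_irqrestore(&cq->lock, flags);

                return err == 0 || err == -EAGAIN ? npolled : err;
        }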
  4. 09 March 2012 (1 commit)
    • IB: Change CQE "csum_ok" field to a bit flag · d927d505
      Authored by Or Gerlitz
      Use a bit in wc_flags rather than a whole integer to hold the
      "checksum OK" flag.  By itself, this change doesn't reduce the size
      of struct ib_wc on 64-bit machines -- it stays at 56 bytes because
      of padding.  However, it will allow adding more fields in the
      future without enlarging the struct.  Also, it will let us have a
      unified approach with future libibverbs checksum offload reporting,
      because a bit flag doesn't break the library ABI.
      
      This patch was suggested during conversation with Liran Liss
      <liranl@mellanox.com>.
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Reviewed-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      d927d505
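
      A consumer-side sketch of the new convention, assuming the
      IB_WC_IP_CSUM_OK flag name this patch introduces:

        struct ib_wc wc;

        while (ib_poll_cq(cq, 1, &wc) > 0) {
                if (wc.status != IB_WC_SUCCESS)
                        continue;
                if (wc.wc_flags & IB_WC_IP_CSUM_OK) {
                        /* HW validated the IP and TCP/UDP checksums;
                         * software verification can be skipped. */
                }
        }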
  5. 04 January 2012 (1 commit)
    • IB/mlx4: Fix SL to 802.1Q priority-bits mapping for IBoE · 9106c410
      Authored by Or Gerlitz
      For IBoE, SLs 0-7 are mapped to Ethernet 802.1Q user priority bits
      (pbits) which are part of the VLAN tag, SLs 8-15 are reserved.
      
      Under Ethernet, the ConnectX firmware treats (decodes/encodes) the
      four-bit SL field in various constructs such as QPC / UD WQE / CQE
      as PPP0 and not as 0PPP.  This matches the fact that within the
      VLAN tag the pbits are located in bits 15-13 and not 12-14.
      
      The current code wasn't consistent in this area -- the encoding
      was correct for the IBoE QPC.path.schedule_queue field, but was
      wrong for IBoE CQEs and when the MLX header was built.
      
      These inconsistencies resulted in a wrong SL <--> wire 802.1Q
      pbits mapping, which is fixed by using the SL <--> PPP0 convention
      everywhere.
      Signed-off-by: Oren Duer <oren@mellanox.co.il>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      9106c410
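
      A sketch of the PPP0 convention (simplified; field names
      approximate).  The three SL priority bits land in VLAN-tag bits
      15-13 on send and are read back with a 13-bit shift on receive:

        /* Building the VLAN tag for an IBoE MLX header. */
        u16 vlan_tag = vlan_id | ((sl & 7) << 13);

        /* Decoding the SL from an IBoE CQE: the pbits sit in the top
         * three bits of sl_vid (PPP0), hence >> 13 here, versus >> 12
         * for the 4-bit SL of a native IB CQE. */
        wc->sl = be16_to_cpu(cqe->sl_vid) >> 13;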
  6. 11 January 2011 (1 commit)
  7. 22 April 2010 (1 commit)
  8. 30 March 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Authored by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script is used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order
        conforms to its surroundings.  It's put in the include block
        which contains core kernel includes, in the same order that the
        rest are ordered -- alphabetical, Christmas tree, rev-Xmas-tree,
        or at the end if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added
        to the file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the
         inclusion; some needed manual addition, while for others adding
         it to an implementation .h or embedding .c file was more
         appropriate.  This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around
         .h files could easily lead to inclusion dependency hell.  Most
         gfp.h inclusion directives were ignored, as stuff from gfp.h
         was usually widely available and often used in preprocessor
         macros.  Each slab.h inclusion directive was examined and added
         manually as necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and
         failures were fixed.  CONFIG_GCOV_KERNEL was turned off for all
         tests (as my distributed build env didn't work with gcov
         compiles) and a few more options had to be turned off depending
         on archs to make things build (like ipr on powerpc/64, which
         failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on
      step 6, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one
      of the arch headers, which should be easily discoverable on most
      builds of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
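
      The typical result of the conversion, sketched: a file that uses
      slab or gfp facilities now names the headers itself instead of
      inheriting them through percpu.h (illustrative function only):

        #include <linux/gfp.h>   /* GFP_KERNEL */
        #include <linux/slab.h>  /* kmalloc(), kfree() */

        static int example_alloc(void)
        {
                void *buf = kmalloc(64, GFP_KERNEL);

                if (!buf)
                        return -ENOMEM;
                kfree(buf);
                return 0;
        }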
  9. 06 January 2009 (1 commit)
  10. 31 December 2008 (1 commit)
  11. 25 December 2008 (1 commit)
  12. 22 December 2008 (1 commit)
  13. 02 December 2008 (1 commit)
  14. 07 August 2008 (1 commit)
  15. 26 July 2008 (1 commit)
  16. 23 July 2008 (1 commit)
  17. 15 July 2008 (1 commit)
    • RDMA/core: Add memory management extensions support · 00f7ec36
      Authored by Steve Wise
      This patch adds support for the IB "base memory management
      extension" (BMME) and the equivalent iWARP operations (which the
      iWARP verbs mandate all devices must implement).  The new
      operations are:
      
       - Allocate an ib_mr for use in fast register work requests.
      
       - Allocate/free physical buffer lists for use in fast register
         work requests.  This allows device drivers to allocate this
         memory as needed for use in posting send requests (e.g. via
         dma_alloc_coherent()).
      
       - New send queue work requests:
         * send with remote invalidate
         * fast register memory region
         * local invalidate memory region
         * RDMA read with invalidate local memory region (iWARP only)
      
      Consumer interface details:
      
       - A new device capability flag IB_DEVICE_MEM_MGT_EXTENSIONS is added
         to indicate device support for these features.
      
       - New send work request opcodes IB_WR_FAST_REG_MR, IB_WR_LOCAL_INV,
         IB_WR_RDMA_READ_WITH_INV are added.
      
       - A new consumer API function, ib_alloc_mr(), is added to allocate
         fast register memory regions.
      
       - New consumer API functions, ib_alloc_fast_reg_page_list() and
         ib_free_fast_reg_page_list() are added to allocate and free
         device-specific memory for fast registration page lists.
      
       - A new consumer API function, ib_update_fast_reg_key(), is added
         to allow the key portion of the R_Key and L_Key of a fast
         registration MR to be updated.  Consumers call this if desired
         before posting an IB_WR_FAST_REG_MR work request.
      
      Consumers can use this as follows (see the code sketch after this
      list):
      
       - MR is allocated with ib_alloc_mr().
      
       - Page list memory is allocated with ib_alloc_fast_reg_page_list().
      
       - MR R_Key/L_Key "key" field is updated with ib_update_fast_reg_key().
      
       - MR made VALID and bound to a specific page list via
         ib_post_send(IB_WR_FAST_REG_MR).
      
       - MR made INVALID via ib_post_send(IB_WR_LOCAL_INV),
         ib_post_send(IB_WR_RDMA_READ_WITH_INV) or an incoming send with
         invalidate operation.
      
       - MR is deallocated with ib_dereg_mr().
       
       - Page lists are deallocated via ib_free_fast_reg_page_list().
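
      A code sketch of that flow, using the function names listed in
      this changelog; exact signatures and WR fields varied across
      kernel versions, so treat them as approximate, and error handling
      is elided:

        struct ib_mr *mr;
        struct ib_fast_reg_page_list *pl;
        struct ib_send_wr fr_wr = {}, inv_wr = {}, *bad_wr;

        mr = ib_alloc_mr(pd, max_pages);         /* fast register MR */
        pl = ib_alloc_fast_reg_page_list(pd->device, max_pages);
        /* ... fill pl->page_list[] with the DMA addresses to map ... */

        ib_update_fast_reg_key(mr, next_key);    /* refresh 8-bit key */

        fr_wr.opcode = IB_WR_FAST_REG_MR;        /* make the MR VALID */
        /* ... page list, length, iova and access flags set here ... */
        ib_post_send(qp, &fr_wr, &bad_wr);

        inv_wr.opcode = IB_WR_LOCAL_INV;         /* later: invalidate */
        inv_wr.ex.invalidate_rkey = mr->rkey;
        ib_post_send(qp, &inv_wr, &bad_wr);

        ib_dereg_mr(mr);
        ib_free_fast_reg_page_list(pl);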
      
      Applications can allocate a fast register MR once, and then can
      repeatedly bind the MR to different physical block lists (PBLs) via
      posting work requests to a send queue (SQ).  For each outstanding
      MR-to-PBL binding in the SQ pipe, a fast_reg_page_list needs to be
      allocated (the fast_reg_page_list is owned by the low-level driver
      from the time the consumer posts a work request until the request
      completes).  Thus pipelining can be achieved while still allowing
      device-specific page_list processing.
      
      The 32-bit fast register memory key/STag is composed of a 24-bit
      index and an 8-bit key.  The application can change the key each
      time it fast registers, thus allowing more control over the peer's
      use of the key/STag (i.e. it can effectively be changed each time
      the rkey is rebound to a page list).
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      00f7ec36
  18. 01 May 2008 (1 commit)
    • IB/mlx4: Fix off-by-one errors in calls to mlx4_ib_free_cq_buf() · 3ae15e16
      Authored by Roland Dreier
      When I merged bbf8eed1 ("IB/mlx4: Add support for resizing CQs") I
      changed things around so that mlx4_ib_alloc_cq_buf() and
      mlx4_ib_free_cq_buf() were used everywhere they could be.  However,
      I screwed up the number of entries passed into mlx4_ib_alloc_cq_buf()
      in a couple of places -- the function bumps the number of entries
      internally, so the caller shouldn't add 1 as well.
      
      Passing a too-big value for the number of entries to mlx4_ib_free_cq_buf()
      can cause the cleanup to go off the end of an array and corrupt
      allocator state in interesting ways.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      3ae15e16
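
      The shape of the bug, sketched ("entries" is the requested CQE
      count; simplified from the real call sites):

        /* Wrong: mlx4_ib_alloc_cq_buf() already bumps the count
         * internally, so adding 1 here over-sizes the buffer ... */
        err = mlx4_ib_alloc_cq_buf(dev, &cq->buf, entries + 1);
        /* ... and the matching free then walks past the real array: */
        mlx4_ib_free_cq_buf(dev, &cq->buf, entries + 1);

        /* Fixed: pass the count unadjusted. */
        err = mlx4_ib_alloc_cq_buf(dev, &cq->buf, entries);
        mlx4_ib_free_cq_buf(dev, &cq->buf, entries);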
  19. 30 April 2008 (1 commit)
  20. 29 April 2008 (1 commit)
    • IB: expand ib_umem_get() prototype · cb9fbc5c
      Authored by Arthur Kepner
      Add a new parameter, dmasync, to the ib_umem_get() prototype.  Use dmasync = 1
      when mapping user-allocated CQs with ib_umem_get().
      Signed-off-by: Arthur Kepner <akepner@sgi.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
      Cc: Jes Sorensen <jes@sgi.com>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Roland Dreier <rdreier@cisco.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Grant Grundler <grundler@parisc-linux.org>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cb9fbc5c
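
      The prototype after this patch, and a call mapping a
      user-allocated CQ buffer with dmasync = 1 as described above:

        struct ib_umem *ib_umem_get(struct ib_ucontext *context,
                                    unsigned long addr, size_t size,
                                    int access, int dmasync);

        umem = ib_umem_get(context, buf_addr, buf_size,
                           IB_ACCESS_LOCAL_WRITE, 1 /* dmasync */);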
  21. 24 April 2008 (1 commit)
  22. 17 April 2008 (4 commits)
  23. 09 February 2008 (1 commit)
    • IB/mlx4: Use multiple WQ blocks to post smaller send WQEs · ea54b10c
      Authored by Jack Morgenstein
      The ConnectX HCA supports shrinking WQEs, so that a single work
      request can be made of multiple units of wqe_shift.  This way, WRs
      can differ in size and do not have to be a power of 2 in size,
      saving memory and speeding up send WR posting.  Unfortunately, if
      we do this then the wqe_index field in CQEs can't be used to look
      up the WR ID anymore, so our implementation does this only if
      selective signaling is off.
      
      Further, on 32-bit platforms, we can't use vmap() to make the QP
      buffer virtually contiguous.  Thus we have to use constant-sized
      WRs to make sure a WR is always fully within a single page-sized
      chunk.
      
      Finally, we use WRs with the NOP opcode to avoid wrapping around the
      queue buffer in the middle of posting a WR, and we set the
      NoErrorCompletion bit to avoid getting completions with error for NOP
      WRs.  However, NEC is only supported starting with firmware 2.2.232,
      so we use constant-sized WRs for older firmware.  And, since MLX QPs
      only support SEND, we use constant-sized WRs in this case.
      
      When stamping during NOP posting, do the stamping after setting
      the NOP WQE's valid bit.
      Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      ea54b10c
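
      A conceptual sketch of the wraparound handling (helper and field
      names are approximations of the driver's; the real code differs in
      detail):

        /* A WR occupies some multiple of (1 << wqe_shift) units.  If it
         * would not fit before the end of the queue buffer, first post
         * a NOP WQE with NoErrorCompletion set that pads to the end, so
         * no WR ever straddles the wrap point. */
        units = DIV_ROUND_UP(wr_size, 1U << qp->sq.wqe_shift);

        if (head + units > sq_size_units) {
                post_nop_wqe(qp, head, sq_size_units - head); /* NEC set */
                head = 0;       /* continue from the queue start */
        }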
  24. 07 February 2008 (1 commit)
  25. 26 January 2008 (1 commit)
    • IB/mlx4: Micro-optimize mlx4_ib_poll_one() · b3226184
      Authored by Roland Dreier
      Rather than byte-swapping cqe->g_mlpath_rqpn each time we extract
      a field from it, byte-swap it once into a temporary variable.
      This results in smaller, better code -- e.g., on 32-bit x86:
      
      add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-5 (-5)
      function                                     old     new   delta
      mlx4_ib_poll_cq                             1188    1183      -5
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      b3226184
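
      The change, sketched (excerpt-style; surrounding code elided):

        /* Before: the same big-endian word swapped three times. */
        wc->src_qp    = be32_to_cpu(cqe->g_mlpath_rqpn) & 0xffffff;
        wc->path_bits = (be32_to_cpu(cqe->g_mlpath_rqpn) >> 24) & 0x7f;
        wc->wc_flags |= be32_to_cpu(cqe->g_mlpath_rqpn) & 0x80000000 ?
                        IB_WC_GRH : 0;

        /* After: swap once into a local, then extract the fields. */
        u32 g_mlpath_rqpn = be32_to_cpu(cqe->g_mlpath_rqpn);

        wc->src_qp    = g_mlpath_rqpn & 0xffffff;
        wc->path_bits = (g_mlpath_rqpn >> 24) & 0x7f;
        wc->wc_flags |= g_mlpath_rqpn & 0x80000000 ? IB_WC_GRH : 0;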
  26. 09 January 2008 (1 commit)
  27. 04 August 2007 (1 commit)
  28. 18 June 2007 (2 commits)
    • IB/mlx4: Handle buffer wraparound in __mlx4_ib_cq_clean() · 082dee32
      Authored by Jack Morgenstein
      When compacting CQ entries, we need to set the correct value of
      the ownership bit in case it differs between the index we copy
      the CQE from and the index we copy it to.
      
      Found by Ronni Zimmerman of Mellanox.
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      082dee32
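
      The fix, sketched (simplified from __mlx4_ib_cq_clean(): the
      destination slot keeps its own ownership bit, which can differ
      from the source's when the copy crosses the ring wrap):

        dest = get_cqe(cq, (prod_index + nfreed) & cq->ibcq.cqe);
        owner_bit = dest->owner_sr_opcode & MLX4_CQE_OWNER_MASK;
        memcpy(dest, cqe, sizeof *cqe);
        dest->owner_sr_opcode = owner_bit |
                (dest->owner_sr_opcode & ~MLX4_CQE_OWNER_MASK);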
    • IB/mlx4: Handle new FW requirement for send request prefetching · 0e6e7416
      Authored by Roland Dreier
      New ConnectX firmware introduces FW command interface revision 2,
      which requires that for each QP, a chunk of send queue entries (the
      "headroom") is kept marked as invalid, so that the HCA doesn't get
      confused if it prefetches entries that haven't been posted yet.  Add
      code to the driver to do this, and also update the user ABI so that
      userspace can request that the prefetcher be turned off for userspace
      QPs (we just leave the prefetcher on for all kernel QPs).
      
      Unfortunately, marking send queue entries this way confuses older
      firmware, so we change the driver to allow only FW command
      interface revision 2.  This means that users will have to update
      their firmware to work with the new driver, but the firmware is
      changing quickly and the old firmware has lots of other bugs
      anyway, so this shouldn't be too big a deal.
      
      Based on a patch from Jack Morgenstein <jackm@dev.mellanox.co.il>.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      0e6e7416
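
      A sketch of the headroom idea (field names approximate the mlx4
      driver of the time; simplified):

        /* Stamp every send WQE as invalid at QP creation so that
         * entries the HCA prefetches before they are posted are
         * ignored; posting a real WR later overwrites the stamp. */
        for (i = 0; i < qp->sq.wqe_cnt; ++i) {
                ctrl = get_send_wqe(qp, i);
                ctrl->owner_opcode = cpu_to_be32(1U << 31);
        }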
  29. 13 June 2007 (1 commit)
  30. 09 May 2007 (1 commit)
    • IB/mlx4: Add a driver for Mellanox ConnectX InfiniBand adapters · 225c7b1f
      Authored by Roland Dreier
      Add an InfiniBand driver for Mellanox ConnectX adapters.  Because
      these adapters can also be used as Ethernet NICs and Fibre Channel
      HBAs, the driver is split into two modules:
      
        mlx4_core: Handles low-level things like device initialization
          and processing firmware commands.  Also controls resource
          allocation so that the InfiniBand, Ethernet and FC functions
          can share a device without stepping on each other.
      
        mlx4_ib: Handles InfiniBand-specific things; plugs into the
          InfiniBand midlayer.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      225c7b1f