1. 15 Jul 2008, 1 commit
  2. 12 Jul 2008, 1 commit
  3. 05 Jul 2008, 1 commit
  4. 21 Jun 2008, 4 commits
  5. 19 Jun 2008, 1 commit
  6. 07 Jun 2008, 1 commit
    • IB/umem: Avoid sign problems when demoting npages to integer · 8079ffa0
      Committed by Roland Dreier
      On a 64-bit architecture, if ib_umem_get() is called with a size value
      that is so big that npages is negative when cast to int, then the
      length of the page list passed to get_user_pages(), namely
      
      	min_t(int, npages, PAGE_SIZE / sizeof (struct page *))
      
      will be negative, and get_user_pages() will immediately return 0 (at
      least since 900cf086, "Be more robust about bad arguments in
      get_user_pages()").  This leads to an infinite loop in ib_umem_get(),
      since the code boils down to:
      
      	while (npages) {
      		ret = get_user_pages(...);
      		npages -= ret;
      	}
      
      Fix this by taking the minimum as unsigned longs, so that the value of
      npages is never truncated.
      
      The impact of this bug isn't too severe, since the value of npages is
      checked against RLIMIT_MEMLOCK, so a process would need to have an
      astronomical limit or have CAP_IPC_LOCK to be able to trigger this,
      and such a process could already cause lots of mischief.  But it does
      let buggy userspace code cause a kernel lock-up; for example I hit
      this with code that passes a negative value into a memory registration
      function where it is promoted to a huge u64 value.
      
      Cc: <stable@kernel.org>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
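
      A minimal sketch of the change; the other get_user_pages() arguments
      are elided here, as in the excerpt above:

      	/* before: both operands truncated to int, so a huge npages
      	 * becomes negative and get_user_pages() returns 0 forever */
      	ret = get_user_pages(..., min_t(int, npages,
      					PAGE_SIZE / sizeof (struct page *)), ...);

      	/* after: compare as unsigned long so npages is never truncated */
      	ret = get_user_pages(..., min_t(unsigned long, npages,
      					PAGE_SIZE / sizeof (struct page *)), ...);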
  7. 24 May 2008, 1 commit
  8. 21 May 2008, 1 commit
    • IB: fix race in device_create · 6c06aec2
      Committed by Greg Kroah-Hartman
      There is a race between the time a device is created with
      device_create() and the time its drvdata is set with dev_set_drvdata():
      in that window a sysfs file can be opened while drvdata is still NULL,
      causing all sorts of bad things to happen.
      
      This patch fixes the problem by using the new function,
      device_create_drvdata().
      
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Reviewed-by: Roland Dreier <rolandd@cisco.com>
      Cc: Sean Hefty <sean.hefty@intel.com>
      Cc: Hal Rosenstock <hal.rosenstock@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
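
      A minimal sketch of the racy pattern and the fix, assuming the
      2.6.26-era signatures; the class, device name, and drvdata variable
      are illustrative:

      	/* racy: a sysfs attribute can be opened between these two calls
      	 * and observe dev_get_drvdata() == NULL */
      	dev = device_create(ib_class, parent, devt, "umad%d", port_num);
      	dev_set_drvdata(dev, port);

      	/* fixed: drvdata is in place before the device (and its sysfs
      	 * files) becomes visible */
      	dev = device_create_drvdata(ib_class, parent, devt, port,
      				    "umad%d", port_num);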
  9. 29 Apr 2008, 1 commit
    • IB: expand ib_umem_get() prototype · cb9fbc5c
      Committed by Arthur Kepner
      Add a new parameter, dmasync, to the ib_umem_get() prototype.  Use dmasync = 1
      when mapping user-allocated CQs with ib_umem_get().
      Signed-off-by: Arthur Kepner <akepner@sgi.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
      Cc: Jes Sorensen <jes@sgi.com>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Roland Dreier <rdreier@cisco.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Grant Grundler <grundler@parisc-linux.org>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
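
      A sketch of the expanded prototype and of a caller mapping a
      user-allocated CQ buffer; the buffer variables are illustrative:

      	struct ib_umem *ib_umem_get(struct ib_ucontext *context,
      				    unsigned long addr, size_t size,
      				    int access, int dmasync);

      	/* user-allocated CQ buffer: pass dmasync = 1 */
      	cq->umem = ib_umem_get(context, buf_addr, buf_size,
      			       IB_ACCESS_LOCAL_WRITE, 1);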
  10. 20 Apr 2008, 1 commit
  11. 19 Apr 2008, 1 commit
  12. 17 Apr 2008, 10 commits
  13. 31 Mar 2008, 1 commit
  14. 11 Mar 2008, 1 commit
  15. 01 Mar 2008, 3 commits
  16. 16 Feb 2008, 1 commit
  17. 15 Feb 2008, 1 commit
    • RDMA/cma: Do not issue MRA if user rejects connection request · ead595ae
      Committed by Sean Hefty
      There's an undesirable interaction between issuing MRA requests to
      increase connection timeouts and the listen backlog.
      
      When the rdma_cm receives a connection request, it queues an MRA with
      the ib_cm.  (The ib_cm will send an MRA if it receives a duplicate
      REQ.)  The rdma_cm will then create a new rdma_cm_id and give that to
      the user, which in this case is the rdma_user_cm.
      
      If the listen backlog maintained in the rdma_user_cm is full, it
      destroys the rdma_cm_id, which in turns destroys the ib_cm_id.  The
      ib_cm_id generates a REJ because the state of the ib_cm_id has changed
      to MRA sent, versus REQ received.  When the backlog is full, we just
      want to drop the REQ so that it is retried later.
      
      Fix this by deferring queuing the MRA until after the user of the
      rdma_cm has examined the connection request.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
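
      A rough sketch of the new ordering in the rdma_cm connect-request
      handler; ib_send_cm_mra() is the ib_cm call used for this, while the
      surrounding handler structure and the MRA timeout constant are
      illustrative:

      	ret = conn_id->id.event_handler(&conn_id->id, &event);
      	if (ret)
      		/* backlog full or user rejected: drop the REQ; since no
      		 * MRA was queued, destroying the id no longer sends a REJ */
      		goto release;

      	/* the user has examined and kept the request: now queue the MRA
      	 * to stretch the connection timeout */
      	ib_send_cm_mra(cm_id, CMA_CM_MRA_SETTING, NULL, 0);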
  18. 13 Feb 2008, 2 commits
  19. 05 Feb 2008, 2 commits
    • IB/fmr_pool: Allocate page list for pool FMRs only when caching enabled · 1d96354e
      Committed by Or Gerlitz
      Allocate memory for the page_list field of struct ib_pool_fmr only
      when caching is enabled for the FMR pool, since the field is not used
      otherwise.  This can save significant amounts of memory for large
      pools with caching turned off.
      Signed-off-by: Or Gerlitz <ogerlitz@voltaire.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
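
      A rough sketch of the idea, with field and parameter names as assumed
      from the FMR pool code:

      	int bytes_per_fmr = sizeof *fmr;

      	/* the trailing page_list is only needed when mappings are cached */
      	if (pool->cache_bucket)
      		bytes_per_fmr += params->max_pages_per_fmr * sizeof (u64);

      	fmr = kmalloc(bytes_per_fmr, GFP_KERNEL);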
    • IB/cm: Add interim support for routed paths · 3971c9f6
      Committed by Sean Hefty
      Paths with hop_limit > 1 indicate that the connection will be routed
      between IB subnets.  Update the subnet local field in the CM REQ based
      on the hop_limit value.  In addition, if the path is routed, then set
      the LIDs in the REQ to the permissive LIDs.  This is used to indicate
      to the passive side that it should use the LIDs in the received local
      route header (LRH) associated with the REQ when programming the QP.
      
      This is a temporary work-around to the IB CM to support IB router
      development until the IB router specification is completed.  It is not
      anticipated that this work-around will cause any interoperability
      issues with existing stacks or future stacks that will properly
      support IB routers when defined.
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
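
      A rough sketch of the work-around in the REQ formatting path;
      IB_LID_PERMISSIVE is the standard permissive LID, and the message
      field names are as assumed here:

      	if (pri_path->hop_limit <= 1) {
      		req_msg->primary_local_lid  = pri_path->slid;
      		req_msg->primary_remote_lid = pri_path->dlid;
      	} else {
      		/* routed path: tell the passive side to take the LIDs
      		 * from the LRH of the REQ it receives */
      		req_msg->primary_local_lid  = IB_LID_PERMISSIVE;
      		req_msg->primary_remote_lid = IB_LID_PERMISSIVE;
      	}
      	/* the subnet_local bit in the REQ is likewise set from
      	 * (hop_limit <= 1) */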
  20. 29 Jan 2008, 3 commits
  21. 26 Jan 2008, 2 commits
    • IB/fmr_pool: ib_fmr_pool_flush() should flush all dirty FMRs · a3cd7d90
      Committed by Olaf Kirch
      When a FMR is released via ib_fmr_pool_unmap(), the FMR usually ends
      up on the free_list rather than the dirty_list (because we allow a
      certain number of remappings before actually requiring a flush).
      
      However, ib_fmr_batch_release() only looks at dirty_list when flushing
      out old mappings.  This means that when ib_fmr_pool_flush() is used to
      force a flush of the FMR pool, some dirty FMRs that have not reached
      their maximum remap count will not actually be flushed.
      
      Fix this by flushing all FMRs that have been used at least once in
      ib_fmr_batch_release().
      Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
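
      A rough sketch of the extra pass in ib_fmr_batch_release(); the
      struct ib_pool_fmr field names are as assumed here, and cache
      bookkeeping is omitted:

      	/* also pick up FMRs parked on the free_list: they have been used
      	 * (remap_count > 0) but had not yet hit their max remap count */
      	list_for_each_entry_safe(fmr, next, &pool->free_list, list) {
      		if (fmr->remap_count == 0)
      			continue;
      		fmr->remap_count = 0;
      		list_add_tail(&fmr->fmr->list, &fmr_list);
      		list_move(&fmr->list, &unmap_list);
      	}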
    • IB/fmr_pool: Flush serial numbers can get out of sync · a656eb75
      Committed by Olaf Kirch
      Normally, the serial numbers for flush requests and flushes executed
      for an FMR pool should be in sync.
      
      However, if the FMR pool flushes dirty FMRs because the
      dirty_watermark was reached, we wake up the cleanup thread and let it
      do its stuff.  As a side effect, the cleanup thread increments
      pool->flush_ser, which leaves it one higher than pool->req_ser.  The
      next time the user calls ib_flush_fmr_pool(), the cleanup thread will
      be woken up, but ib_flush_fmr_pool() won't wait for the flush to
      complete because flush_ser is already past req_ser.  This means the
      FMRs that the user expects to be flushed may not have all been flushed
      when the function returns.
      
      Fix this by making an increment of req_ser the only way to tell the
      cleanup thread to do work, and by moving the comparison of dirty_len
      and dirty_watermark into ib_fmr_pool_unmap().
      Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
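
      A rough sketch of the two halves of the fix; the pool field names and
      thread wake-up details are as assumed here:

      	/* ib_fmr_pool_unmap(): when the dirty watermark is crossed,
      	 * request a flush by bumping req_ser before waking the thread */
      	if (++pool->dirty_len >= pool->dirty_watermark) {
      		atomic_inc(&pool->req_ser);
      		wake_up_process(pool->thread);
      	}

      	/* cleanup thread: only flush while a flush is outstanding, so
      	 * flush_ser can never run ahead of req_ser */
      	while (atomic_read(&pool->flush_ser) - atomic_read(&pool->req_ser) < 0) {
      		ib_fmr_batch_release(pool);
      		atomic_inc(&pool->flush_ser);
      		wake_up_interruptible(&pool->force_wait);
      	}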