1. 29 Mar 2014, 3 commits
  2. 28 Mar 2014, 1 commit
  3. 18 Mar 2014, 2 commits
  4. 15 Jul 2013, 1 commit
  5. 13 Jun 2013, 1 commit
  6. 22 Feb 2013, 1 commit
    • IB/core: Add "type 2" memory windows support · 7083e42e
      Committed by Shani Michaeli
      This patch enhances the IB core support for Memory Windows (MWs).
      
      MWs allow an application to have better/flexible control over remote
      access to memory.
      
      Two types of MWs are supported, with the second type having two flavors:
      
          Type 1  - associated with PD only
          Type 2A - associated with QPN only
          Type 2B - associated with PD and QPN
      
      Applications can allocate a MW once, and then repeatedly bind the MW
      to different ranges in MRs that are associated to the same PD. Type 1
      windows are bound through a verb, while type 2 windows are bound by
      posting a work request.
      
      The 32-bit memory key is composed of a 24-bit index and an 8-bit
      key. The key is changed with each bind, thus allowing more control
      over the peer's use of the memory key.
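      As an illustration, since the key occupies the low byte, a helper along the
      lines of the ib_inc_rkey function added by this patch only needs to advance
      that byte while preserving the 24-bit index (a sketch, not necessarily the
      exact code in the tree):

          /* Sketch: bump only the 8-bit key portion of an rkey,
           * leaving the 24-bit index untouched. */
          static inline u32 ib_inc_rkey(u32 rkey)
          {
                  const u32 mask = 0x000000ff;

                  return ((rkey + 1) & mask) | (rkey & ~mask);
          }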
      
      The changes introduced are the following:
      
      * add memory window type enum and a corresponding parameter to ib_alloc_mw.
      * type 2 memory window bind work request support.
      * consolidate the common part of the bind verb struct ibv_mw_bind and
        the bind work request into a single struct.
      * add the ib_inc_rkey helper function to advance the tag part of an rkey.
      
      Consumer interface details:
      
      * new device capability flags IB_DEVICE_MEM_WINDOW_TYPE_2A and
        IB_DEVICE_MEM_WINDOW_TYPE_2B are added to indicate device support
        for these features.
      
        A device can set either IB_DEVICE_MEM_WINDOW_TYPE_2A or
        IB_DEVICE_MEM_WINDOW_TYPE_2B, depending on whether it supports type 2A
        or type 2B memory windows, and can set neither to indicate that it does
        not support type 2 windows at all.
      
      * modify existing provider and consumer code to use the new parameter of
        ib_alloc_mw and the ib_mw_bind_info structure.
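      A rough consumer-side sketch of the new interface (the variables dev_attr,
      pd, mr, qp, addr and len are assumptions for illustration, and the bind
      work request layout is quoted from memory; check the tree for the exact
      fields):

          struct ib_mw *mw;
          struct ib_send_wr wr, *bad_wr;

          /* Only proceed if the device advertises type 2B windows. */
          if (!(dev_attr.device_cap_flags & IB_DEVICE_MEM_WINDOW_TYPE_2B))
                  return -ENOSYS;

          /* ib_alloc_mw now takes the window type as a second argument. */
          mw = ib_alloc_mw(pd, IB_MW_TYPE_2);

          /* Type 2 windows are bound by posting a work request. */
          memset(&wr, 0, sizeof(wr));
          wr.opcode                          = IB_WR_BIND_MW;
          wr.wr.bind_mw.mw                   = mw;
          wr.wr.bind_mw.rkey                 = ib_inc_rkey(mw->rkey);
          wr.wr.bind_mw.bind_info.mr         = mr;    /* MR in the same PD */
          wr.wr.bind_mw.bind_info.addr       = addr;
          wr.wr.bind_mw.bind_info.length     = len;
          wr.wr.bind_mw.bind_info.mw_access_flags = IB_ACCESS_REMOTE_WRITE;
          ib_post_send(qp, &wr, &bad_wr);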
      Signed-off-by: Haggai Eran <haggaie@mellanox.com>
      Signed-off-by: Shani Michaeli <shanim@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  7. 05 Feb 2013, 1 commit
  8. 01 Feb 2013, 2 commits
  9. 18 Dec 2012, 1 commit
  10. 29 Sep 2012, 1 commit
  11. 07 Sep 2012, 1 commit
    • SUNRPC: Fix a UDP transport regression · f39c1bfb
      Committed by Trond Myklebust
      Commit 43cedbf0 (SUNRPC: Ensure that
      we grab the XPRT_LOCK before calling xprt_alloc_slot) is causing
      hangs in the case of NFS over UDP mounts.
      
      Since neither the UDP nor the RDMA transport mechanism uses dynamic slot
      allocation, we can skip grabbing the socket lock for those transports.
      Add a new rpc_xprt_op to allow switching between the TCP and UDP/RDMA
      cases.
      
      Note that the NFSv4.1 back channel assigns the slot directly
      through rpc_run_bc_task, so we can ignore that case.
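      The idea, as a sketch (the ops-table layout and handler names below are
      approximations of the new per-transport slot-allocation hook, not a
      literal quote of the patch):

          /* TCP keeps the lock-then-allocate behaviour for dynamic slots. */
          static struct rpc_xprt_ops xs_tcp_ops = {
                  .alloc_slot = xprt_lock_and_alloc_slot,
                  /* ... other ops ... */
          };

          /* UDP (and RDMA) use a fixed slot table, so no XPRT_LOCK is needed. */
          static struct rpc_xprt_ops xs_udp_ops = {
                  .alloc_slot = xprt_alloc_slot,
                  /* ... other ops ... */
          };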
      Reported-by: Dick Streefland <dick.streefland@altium.nl>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org [>= 3.1]
  12. 22 Aug 2012, 1 commit
  13. 31 Jul 2012, 1 commit
    • nfs: skip commit in releasepage if we're freeing memory for fs-related reasons · 5cf02d09
      Committed by Jeff Layton
      We've had some reports of a deadlock where rpciod ends up with a stack
      trace like this:
      
          PID: 2507   TASK: ffff88103691ab40  CPU: 14  COMMAND: "rpciod/14"
           #0 [ffff8810343bf2f0] schedule at ffffffff814dabd9
           #1 [ffff8810343bf3b8] nfs_wait_bit_killable at ffffffffa038fc04 [nfs]
           #2 [ffff8810343bf3c8] __wait_on_bit at ffffffff814dbc2f
           #3 [ffff8810343bf418] out_of_line_wait_on_bit at ffffffff814dbcd8
           #4 [ffff8810343bf488] nfs_commit_inode at ffffffffa039e0c1 [nfs]
           #5 [ffff8810343bf4f8] nfs_release_page at ffffffffa038bef6 [nfs]
           #6 [ffff8810343bf528] try_to_release_page at ffffffff8110c670
           #7 [ffff8810343bf538] shrink_page_list.clone.0 at ffffffff81126271
           #8 [ffff8810343bf668] shrink_inactive_list at ffffffff81126638
           #9 [ffff8810343bf818] shrink_zone at ffffffff8112788f
          #10 [ffff8810343bf8c8] do_try_to_free_pages at ffffffff81127b1e
          #11 [ffff8810343bf958] try_to_free_pages at ffffffff8112812f
          #12 [ffff8810343bfa08] __alloc_pages_nodemask at ffffffff8111fdad
          #13 [ffff8810343bfb28] kmem_getpages at ffffffff81159942
          #14 [ffff8810343bfb58] fallback_alloc at ffffffff8115a55a
          #15 [ffff8810343bfbd8] ____cache_alloc_node at ffffffff8115a2d9
          #16 [ffff8810343bfc38] kmem_cache_alloc at ffffffff8115b09b
          #17 [ffff8810343bfc78] sk_prot_alloc at ffffffff81411808
          #18 [ffff8810343bfcb8] sk_alloc at ffffffff8141197c
          #19 [ffff8810343bfce8] inet_create at ffffffff81483ba6
          #20 [ffff8810343bfd38] __sock_create at ffffffff8140b4a7
          #21 [ffff8810343bfd98] xs_create_sock at ffffffffa01f649b [sunrpc]
          #22 [ffff8810343bfdd8] xs_tcp_setup_socket at ffffffffa01f6965 [sunrpc]
          #23 [ffff8810343bfe38] worker_thread at ffffffff810887d0
          #24 [ffff8810343bfee8] kthread at ffffffff8108dd96
          #25 [ffff8810343bff48] kernel_thread at ffffffff8100c1ca
      
      rpciod is trying to allocate memory for a new socket to talk to the
      server. The VM ends up calling ->releasepage to get more memory, and it
      tries to do a blocking commit. That commit can't succeed, however, without
      a connected socket, so we deadlock.
      
      Fix this by setting PF_FSTRANS on the workqueue task prior to doing the
      socket allocation, and having nfs_release_page check for that flag when
      deciding whether to do a commit call. Also, set PF_FSTRANS
      unconditionally in rpc_async_schedule since that function can also do
      allocations sometimes.
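      Roughly, the fix looks like this (a sketch of the two sides; the real
      patch touches the socket-setup path, rpc_async_schedule and
      nfs_release_page, and the exact conditions may differ):

          /* rpciod side: mark the task before the fs-related allocation. */
          current->flags |= PF_FSTRANS;
          /* ... allocate and connect the socket here ... */
          current->flags &= ~PF_FSTRANS;

          /* nfs_release_page side: skip the blocking commit if we are
           * already freeing memory on behalf of such an allocation. */
          if (mapping && (gfp & __GFP_WAIT) && !(current->flags & PF_FSTRANS))
                  nfs_commit_inode(mapping->host, 0);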
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: stable@vger.kernel.org
  14. 21 Mar 2012, 2 commits
  15. 20 Mar 2012, 1 commit
  16. 18 Feb 2012, 1 commit
  17. 07 Dec 2011, 1 commit
  18. 01 Nov 2011, 1 commit
  19. 27 Jul 2011, 1 commit
  20. 26 Jul 2011, 1 commit
  21. 18 Jul 2011, 2 commits
  22. 07 Jun 2011, 1 commit
  23. 26 May 2011, 1 commit
    • RDMA/cma: Pass QP type into rdma_create_id() · b26f9b99
      Committed by Sean Hefty
      The RDMA CM currently infers the QP type from the port space selected
      by the user.  In the future (e.g. with RDMA_PS_IB or XRC), there may not
      be a 1-1 correspondence between port space and QP type.  For netlink
      export of RDMA CM state, we want to export the QP type to userspace,
      so it is cleaner to explicitly associate a QP type to an ID.
      
      Modify rdma_create_id() to allow the user to specify the QP type, and
      use it to make our selections of datagram versus connected mode.
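      With this change, opening an ID might look roughly like the sketch below
      (my_event_handler and my_ctx are placeholders; the point is that the QP
      type is now passed explicitly rather than inferred from the port space):

          struct rdma_cm_id *id;

          id = rdma_create_id(my_event_handler, my_ctx, RDMA_PS_TCP, IB_QPT_RC);
          if (IS_ERR(id))
                  return PTR_ERR(id);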
      Signed-off-by: Sean Hefty <sean.hefty@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  24. 10 May 2011, 1 commit
  25. 18 Mar 2011, 1 commit
  26. 16 Mar 2011, 1 commit
  27. 12 Mar 2011, 2 commits
    • RPCRDMA: Fix FRMR registration/invalidate handling. · 5c635e09
      Committed by Tom Tucker
      When the rpc_memreg_strategy is 5, FRMRs are used to map RPC data.
      This mode uses an FRMR to map the RPC data, then invalidates
      (i.e. unregisters) the data in xprt_rdma_free. These FRMRs are used
      across connections on the same mount, i.e. if the connection goes
      away on an idle timeout and reconnects later, the FRMRs are not
      destroyed and recreated.
      
      This creates a problem for transport errors because the WRs that
      invalidate an FRMR may be flushed (i.e. fail), leaving the
      FRMR valid. When the FRMR is later used to map an RPC it will fail,
      tearing down the transport and starting over. Over time, more and
      more of the FRMR pool ends up in the wrong state, resulting in
      seemingly random disconnects.
      
      This fix keeps track of the FRMR state explicitly by setting its
      state based on the successful completion of a reg/inv WR. If the FRMR
      is ever used and found to be in the wrong state, an invalidate WR
      is prepended, re-syncing the FRMR state and avoiding the connection loss.
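      In pseudo-C, the idea is roughly the following (the enum, struct and
      helper names here are illustrative, not the identifiers used in the
      patch):

          /* Track whether the FRMR is currently registered. */
          enum frmr_state { FRMR_IS_INVALID, FRMR_IS_VALID };

          struct frmr {
                  enum frmr_state state;
                  /* ... mapping resources ... */
          };

          static void frmr_prepare_for_reuse(struct frmr *f)
          {
                  if (f->state == FRMR_IS_VALID) {
                          /* A flushed invalidate left it registered: prepend
                           * an invalidate WR so the next registration succeeds. */
                          post_invalidate_wr(f);  /* hypothetical helper */
                          f->state = FRMR_IS_INVALID;
                  }
                  /* State otherwise changes only on successful reg/inv completions. */
          }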
      Signed-off-by: Tom Tucker <tom@ogc.us>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • RPCRDMA: Fix to XDR page base interpretation in marshalling logic. · bd7ea31b
      Committed by Tom Tucker
      The RPCRDMA marshalling logic assumed that xdr->page_base was an
      offset into the first page of xdr->page_list. It is in fact an
      offset into the xdr->page_list itself, that is, it selects the
      first page in the page_list and the offset into that page.
      
      The symptom depended in part on the rpc_memreg_strategy: if it was
      FRMR, or some other one-shot mapping mode, the connection would get
      torn down on a base-and-bounds error. When the badly marshalled RPC
      was retransmitted it would reconnect, hit the error, and tear down the
      connection again in a loop forever. This resulted in a hung mount. For
      the other modes, it would result in silent data corruption. This bug is
      most easily reproduced by writing more data than the filesystem
      has space for.
      
      This fix corrects the page_base assumption and otherwise simplifies
      the iov mapping logic.
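      In other words, the marshalling code should locate the starting page and
      the in-page offset along these lines (a sketch of the corrected
      interpretation, not the literal patch):

          /* page_base indexes into the whole page list: it selects both the
           * first page to use and the offset within that page. */
          struct page *page = xdr->pages[xdr->page_base >> PAGE_SHIFT];
          unsigned int page_offset = xdr->page_base & ~PAGE_MASK;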
      Signed-off-by: Tom Tucker <tom@ogc.us>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
  28. 11 Mar 2011, 1 commit
  29. 21 Oct 2010, 1 commit
    • sunrpc/xprtrdma: clean up workqueue usage · a25e758c
      Committed by Tejun Heo
      * Create and use svc_rdma_wq instead of using the system workqueue and
        flush_scheduled_work().  This workqueue is necessary to serve as the
        flushing domain for rdma->sc_work, which is used to destroy itself
        and thus can't be flushed explicitly.
      
      * Replace cancel_delayed_work() + flush_scheduled_work() with
        cancel_delayed_work_sync().
      
      * Implement synchronous connect in xprt_rdma_connect() using
        flush_delayed_work() on the rdma_connect work instead of using
        flush_scheduled_work().
      
      This is to prepare for the deprecation and removal of
      flush_scheduled_work().
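      The shutdown-path pattern, roughly (a sketch; connect_worker is a
      placeholder name for the transport's delayed connect work item):

          /* Before: two-step cancel plus a flush of the shared system
           * workqueue. */
          cancel_delayed_work(&transport->connect_worker);
          flush_scheduled_work();

          /* After: one synchronous cancel, with no dependence on the system
           * workqueue. */
          cancel_delayed_work_sync(&transport->connect_worker);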
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
  30. 19 Oct 2010, 2 commits
  31. 02 Oct 2010, 2 commits