1. 16 Aug 2017, 1 commit
  2. 14 Jul 2017, 4 commits
  3. 11 Feb 2017, 1 commit
  4. 20 Sep 2016, 2 commits
  5. 12 Jul 2016, 12 commits
  6. 18 May 2016, 5 commits
    • xprtrdma: Remove ro_unmap() from all registration modes · 0b043b9f
      Authored by Chuck Lever
      Clean up: The ro_unmap method is no longer used.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • xprtrdma: Add ro_unmap_safe memreg method · ead3f26e
      Authored by Chuck Lever
      There needs to be a safe method of releasing registered memory
      resources when an RPC terminates. Safe can mean a number of things:
      
      + Doesn't have to sleep
      
      + Doesn't rely on having a QP in RTS
      
      ro_unmap_safe will be that safe method. It can be used in cases
      where synchronous memory invalidation can deadlock, or needs to have
      an active QP.
      
      The important case is fencing an RPC's memory regions after it is
      signaled (^C) and before it exits. If this is not done, there is a
      window where the server can write an RPC reply into memory that the
      client has released and re-used for some other purpose.
      
      Note that this is a full solution for FRWR, but FMR and physical
      still have some gaps where a particularly bad server can wreak
      some havoc on the client. These gaps are not made worse by this
      patch and are expected to be exceptionally rare and timing-based.
      They are noted in documenting comments.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
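
      To picture the split this commit describes, here is a minimal user-space sketch (not the kernel code) of the two unmap paths: a synchronous one that may sleep and needs a connected QP, and a "safe" one that only parks the memory window for deferred invalidation. Every identifier below (struct mw, struct memreg_ops, recycle_list, and so on) is illustrative, not the actual xprtrdma API.

      #include <stdbool.h>
      #include <stdio.h>

      struct mw {                          /* stand-in for a registered memory window */
              int rkey;
              struct mw *next;
      };

      static struct mw *recycle_list;      /* MWs parked for deferred invalidation */

      struct memreg_ops {
              /* May sleep and requires a QP in RTS: normal reply-handling path. */
              void (*ro_unmap_sync)(struct mw *mw);
              /* Never sleeps, never touches the QP: used when an RPC is torn down. */
              void (*ro_unmap_safe)(struct mw *mw);
      };

      static void unmap_sync(struct mw *mw)
      {
              /* Would post a LOCAL_INV WR and wait for its completion. */
              printf("sync invalidate of rkey %d\n", mw->rkey);
      }

      static void unmap_safe(struct mw *mw)
      {
              /* Just park the MW; a worker invalidates and recycles it later,
               * so the dying RPC's buffers are fenced without sleeping here. */
              mw->next = recycle_list;
              recycle_list = mw;
              printf("deferred invalidate of rkey %d\n", mw->rkey);
      }

      static const struct memreg_ops ops = {
              .ro_unmap_sync = unmap_sync,
              .ro_unmap_safe = unmap_safe,
      };

      int main(void)
      {
              struct mw mw = { .rkey = 42 };
              bool rpc_was_killed = true;        /* e.g. the caller hit ^C */

              if (rpc_was_killed)
                      ops.ro_unmap_safe(&mw);    /* cannot sleep, QP state unknown */
              else
                      ops.ro_unmap_sync(&mw);
              return 0;
      }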
    • xprtrdma: Refactor __fmr_dma_unmap() · 763bc230
      Authored by Chuck Lever
      Separate the DMA unmap operation from freeing the MW. In a
      subsequent patch they will not always be done at the same time,
      and they are not related operations (except by order; freeing
      the MW must be the last step during invalidation).
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
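
      As a hedged illustration of the ordering this refactor preserves, the user-space sketch below separates the two steps into distinct helpers; the names are invented for the example and do not match the kernel source.

      #include <stdio.h>

      struct mw { int id; };

      static void dma_unmap_segments(struct mw *mw, int nsegs)
      {
              /* Step 1: undo the DMA mappings that this MW covered. */
              printf("unmapping %d segments of MW %d\n", nsegs, mw->id);
      }

      static void release_mw(struct mw *mw)
      {
              /* Step 2, always last: make the MW available for reuse. */
              printf("MW %d returned to the free list\n", mw->id);
      }

      int main(void)
      {
              struct mw mw = { .id = 7 };

              dma_unmap_segments(&mw, 4);
              release_mw(&mw);             /* freeing the MW must come last */
              return 0;
      }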
    • xprtrdma: Prevent inline overflow · 302d3deb
      Authored by Chuck Lever
      When deciding whether to send a Call inline, rpcrdma_marshal_req
      doesn't take into account header bytes consumed by chunk lists.
      This results in Call messages on the wire that are sometimes larger
      than the inline threshold.
      
      Likewise, when a Write list or Reply chunk is in play, the server's
      reply has to emit an RDMA Send that includes a larger-than-minimal
      RPC-over-RDMA header.
      
      The actual size of a Call message cannot be estimated until after
      the chunk lists have been registered. Thus the size of each
      RPC-over-RDMA header can be estimated only after chunks are
      registered; but the decision to register chunks is based on the size
      of that header. Chicken, meet egg.
      
      The best a client can do is estimate header size based on the
      largest header that might occur, and then ensure that inline content
      is always smaller than that.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
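
      The sizing rule can be pictured with a small worst-case calculation. The sketch below is user-space C, and the byte counts (a 28-byte fixed header, 24 bytes per read segment, a 1024-byte inline threshold, a cap of 8 segments) are illustrative assumptions rather than values taken from the kernel source; the point is only that the client budgets for the largest header it could emit and keeps inline content under what remains.

      #include <stdbool.h>
      #include <stdio.h>

      #define INLINE_THRESHOLD  1024u      /* assumed inline send buffer size */
      #define MAX_SEGS          8u         /* assumed cap on chunk segments */

      /* Illustrative worst-case RPC-over-RDMA call header, in bytes. */
      static unsigned int max_call_header_size(void)
      {
              unsigned int fixed = 28;     /* transport header + chunk list discriminators */
              unsigned int read_seg = 24;  /* one read-chunk segment */

              return fixed + MAX_SEGS * read_seg;
      }

      /* A Call may go inline only if header plus RPC message fit the threshold. */
      static bool call_fits_inline(unsigned int rpc_msg_len)
      {
              return max_call_header_size() + rpc_msg_len <= INLINE_THRESHOLD;
      }

      int main(void)
      {
              printf("worst-case header: %u bytes\n", max_call_header_size());
              printf("512-byte call inline? %s\n", call_fits_inline(512) ? "yes" : "no");
              printf("900-byte call inline? %s\n", call_fits_inline(900) ? "yes" : "no");
              return 0;
      }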
    • xprtrdma: Limit number of RDMA segments in RPC-over-RDMA headers · 94931746
      Authored by Chuck Lever
      Send buffer space is shared between the RPC-over-RDMA header and
      an RPC message. A large RPC-over-RDMA header means less space is
      available for the associated RPC message, which then has to be
      moved via an RDMA Read or Write.
      
      As more segments are added to the chunk lists, the header increases
      in size.  Typical modern hardware needs only a few segments to
      convey the maximum payload size, but some devices and registration
      modes may need a lot of segments to convey data payload. Sometimes
      so many are needed that the remaining space in the Send buffer is
      not enough for the RPC message. Sending such a message usually
      fails.
      
      To ensure a transport can always make forward progress, cap the
      number of RDMA segments that are allowed in chunk lists. This
      prevents less-capable devices and memory registrations from
      consuming a large portion of the Send buffer by reducing the
      maximum data payload that can be conveyed with such devices.
      
      For now I choose an arbitrary maximum of 8 RDMA segments. This
      allows a maximum size RPC-over-RDMA header to fit nicely in the
      current 1024 byte inline threshold with over 700 bytes remaining
      for an inline RPC message.
      
      The current maximum data payload of NFS READ or WRITE requests is
      one megabyte. To convey that payload on a client with 4KB pages,
      each chunk segment would need to handle 32 or more data pages. This
      is well within the capabilities of FMR. For physical registration,
      the maximum payload size on platforms with 4KB pages is reduced to
      32KB.
      
      For FRWR, a device's maximum page list depth would need to be at
      least 34 to support the maximum 1MB payload. A device with a smaller
      maximum page list depth means the maximum data payload is reduced
      when using that device.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
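
      The arithmetic behind these numbers is easy to check. In the sketch below, the 8-segment cap, the 1 MB maximum payload, and 4 KB pages come straight from the commit text; the derived values (pages per segment, the 32 KB limit for physical registration) follow from them.

      #include <stdio.h>

      #define MAX_SEGS     8u              /* cap on RDMA segments per chunk list */
      #define PAGE_SZ      4096u           /* client page size assumed in the text */
      #define MAX_PAYLOAD  (1024u * 1024u) /* largest NFS READ/WRITE payload */

      int main(void)
      {
              unsigned int pages = MAX_PAYLOAD / PAGE_SZ;      /* 256 pages */
              unsigned int pages_per_seg = pages / MAX_SEGS;   /* 32 pages */

              printf("data pages in a 1MB payload : %u\n", pages);
              printf("pages each segment carries  : %u\n", pages_per_seg);
              /* FRWR needs a few extra page-list entries for an unaligned
               * payload, hence the "at least 34" page list depth noted above. */
              printf("physical registration limit : %u bytes\n", MAX_SEGS * PAGE_SZ);
              return 0;
      }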
  7. 15 Mar 2016, 1 commit
  8. 19 Dec 2015, 1 commit
  9. 25 Sep 2015, 1 commit
  10. 06 Aug 2015, 1 commit
  11. 13 Jun 2015, 6 commits
  12. 31 Mar 2015, 5 commits