1. 01 October 2018, 8 commits
  2. 23 August 2018, 1 commit
  3. 10 August 2018, 2 commits
    • NFSD: Handle full-length symlinks · 11b4d66e
      Committed by Chuck Lever
      I've given up on the idea of zero-copy handling of SYMLINK on the
      server side. This is because the Linux VFS symlink API requires the
      symlink pathname to be in a NUL-terminated kmalloc'd buffer. The
      NUL-termination is going to be problematic (watching out for
      landing on a page boundary and dealing with a 4096-byte pathname).
      
      I don't believe that SYMLINK creation is on a performance path or is
      requested frequently enough that it will cause noticeable CPU cache
      pollution due to data copies.
      
      There will be two places where a transport callout will be necessary
      to fill in the rqstp: one will be in the svc_fill_symlink_pathname()
      helper that is used by NFSv2 and NFSv3, and the other will be in
      nfsd4_decode_create().
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
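
      As a hedged illustration of the copy-based approach (the helper
      name and signature below are hypothetical, not the kernel's actual
      svc_fill_symlink_pathname API), the pathname can be gathered into
      a NUL-terminated kmalloc'd buffer like so:

      #include <linux/kernel.h>
      #include <linux/mm.h>
      #include <linux/slab.h>
      #include <linux/string.h>

      /* Hypothetical sketch: gather a pathname that may span the head
       * buffer and one or more pages into a NUL-terminated kmalloc'd
       * buffer, as the Linux VFS symlink API requires.
       */
      static char *example_fill_symlink_pathname(const void *head,
                                                 size_t head_len,
                                                 struct page **pages,
                                                 size_t page_len)
      {
              char *buf, *dst;

              buf = kmalloc(head_len + page_len + 1, GFP_KERNEL);
              if (!buf)
                      return NULL;

              dst = buf;
              memcpy(dst, head, head_len);
              dst += head_len;

              /* Copy the remainder from the transport's page list */
              while (page_len) {
                      size_t n = min_t(size_t, page_len, PAGE_SIZE);

                      memcpy(dst, page_address(*pages++), n);
                      dst += n;
                      page_len -= n;
              }

              *dst = '\0';    /* the VFS requires NUL termination */
              return buf;
      }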
    • NFSD: Refactor the generic write vector fill helper · 3fd9557a
      Committed by Chuck Lever
      fill_in_write_vector() is nearly the same logic as
      svc_fill_write_vector(), but there are a few differences so that
      the former can handle multiple WRITE payloads in a single COMPOUND.
      
      svc_fill_write_vector() can be adjusted so that it can be used in
      the NFSv4 WRITE code path too. Instead of assuming the pages are
      coming from rq_arg.pages, have the caller pass in the page list.
      
      The immediate benefit is a reduction of code duplication. It also
      prevents the NFSv4 WRITE decoder from passing an empty vector
      element when the transport has provided the payload in the xdr_buf's
      page array.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
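
      A minimal sketch of the adjusted helper shape (names are
      illustrative and the real svc_fill_write_vector signature may
      differ): the caller supplies the page list rather than the helper
      assuming rq_arg.pages:

      #include <linux/kernel.h>
      #include <linux/mm.h>
      #include <linux/uio.h>

      /* Illustrative sketch: fill a write vector from a caller-supplied
       * page list, so the NFSv2/v3 and NFSv4 WRITE paths can share one
       * helper.
       */
      static unsigned int example_fill_write_vector(struct kvec *vec,
                                                    unsigned int max_elems,
                                                    struct page **pages,
                                                    size_t total)
      {
              unsigned int i = 0;

              while (total && i < max_elems) {
                      vec[i].iov_base = page_address(pages[i]);
                      vec[i].iov_len = min_t(size_t, total, PAGE_SIZE);
                      total -= vec[i].iov_len;
                      i++;
              }
              return i;       /* number of vector elements filled */
      }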
  4. 01 August 2018, 2 commits
    • NFSv4 client live hangs after live data migration recovery · 0f90be13
      Committed by Bill Baker
      After a live data migration event at the NFS server, the client may send
      I/O requests to the wrong server, causing a live hang due to repeated
      recovery events.  On the wire, this will appear as an I/O request failing
      with NFS4ERR_BADSESSION, followed by successful CREATE_SESSION, repeatedly.
      NFS4ERR_BADSESSION is returned because the session ID being used was
      issued by the other server and is not valid at the old server.
      
      The failure is caused by async worker threads having cached the transport
      (xprt) in the rpc_task structure.  After the migration recovery completes,
      the task is redispatched and the task resends the request to the wrong
      server based on the old value still present in tk_xprt.
      
      The solution is to recompute the tk_xprt field of the rpc_task structure
      so that the request goes to the correct server.
      Signed-off-by: Bill Baker <bill.baker@oracle.com>
      Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
      Tested-by: Helen Chao <helen.chao@oracle.com>
      Fixes: fb43d172 ("SUNRPC: Use the multipath iterator to assign a ...")
      Cc: stable@vger.kernel.org # v4.9+
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
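
      The shape of the fix, sketched with real SUNRPC symbols but a
      hypothetical helper name (the actual patch wires this into task
      setup rather than a standalone function):

      #include <linux/sunrpc/clnt.h>
      #include <linux/sunrpc/xprt.h>
      #include <linux/sunrpc/xprtmultipath.h>

      /* Hypothetical sketch: drop the cached transport and take a fresh
       * one from the client's multipath iterator, so a redispatched task
       * resends to the post-migration server.
       */
      static void example_task_refresh_xprt(struct rpc_task *task,
                                            struct rpc_clnt *clnt)
      {
              xprt_put(task->tk_xprt);        /* tolerates NULL */
              task->tk_xprt = xprt_iter_get_next(&clnt->cl_xpi);
      }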
    • sunrpc: Change rpc_print_iostats to rpc_clnt_show_stats and handle rpc_clnt clones · 016583d7
      Committed by Dave Wysochanski
      The existing rpc_print_iostats has a few shortcomings.  First, the naming
      is not consistent with other functions in the kernel that display stats.
      Second, it is really displaying stats for an rpc_clnt structure as it
      displays both xprt stats and per-op stats.  Third, it does not handle
      rpc_clnt clones, which is important for the one caller in the
      kernel tree, the NFS client's nfs_show_stats() function.
      
      Fix all of the above by renaming the rpc_print_iostats to
      rpc_clnt_show_stats and looping through any rpc_clnt clones via
      cl_parent.
      
      Fixing this interface also addresses a problem with NFSv4.
      Before this patch, /proc/self/mountstats always showed incorrect
      counts for NFSv4 lease and session related opcodes such as SEQUENCE,
      RENEW, SETCLIENTID, CREATE_SESSION, etc.  These counts were always 0
      even though many ops would go over the wire.  The reason is that
      there are multiple rpc_clnt structures allocated for any given NFSv4
      mount, and inside nfs_show_stats() we called into rpc_print_iostats()
      which only handled one of them, nfs_server->client.  Fix these counts
      by calling sunrpc's new rpc_clnt_show_stats() function, which handles
      cloned rpc_clnt structs and prints the stats together.
      
      Note that one side-effect of the above is that multiple mounts from
      the same NFS server will show identical counts in the above ops,
      because the one rpc_clnt (representing the NFSv4 client state) is
      shared across mounts.
      Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
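
      A hedged sketch of the clone walk (the real rpc_clnt_show_stats()
      aggregates per-op counts before printing; this illustration simply
      visits each clone via cl_parent):

      #include <linux/seq_file.h>
      #include <linux/sunrpc/clnt.h>
      #include <linux/sunrpc/metrics.h>

      /* Illustrative sketch: visit an rpc_clnt and every ancestor clone.
       * The root clnt's cl_parent points back to itself, ending the walk.
       */
      static void example_show_stats(struct seq_file *seq,
                                     struct rpc_clnt *clnt)
      {
              for (;;) {
                      rpc_print_iostats(seq, clnt);   /* pre-patch helper */
                      if (clnt == clnt->cl_parent)
                              break;
                      clnt = clnt->cl_parent;
              }
      }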
  5. 31 July 2018, 1 commit
  6. 12 May 2018, 13 commits
    • svcrdma: Remove unused svc_rdma_op_ctxt · 51cc257a
      Committed by Chuck Lever
      Clean up: Eliminate a structure that is no longer used.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • svcrdma: Persistently allocate and DMA-map Send buffers · 99722fe4
      Committed by Chuck Lever
      While sending each RPC Reply, svc_rdma_sendto allocates and DMA-
      maps a separate buffer where the RPC/RDMA transport header is
      constructed. The buffer is unmapped and released in the Send
      completion handler. This is significant per-RPC overhead,
      especially for small RPCs.
      
      Instead, allocate and DMA-map a buffer, and cache it in each
      svc_rdma_send_ctxt. This buffer and its mapping can be re-used
      for each RPC, saving the cost of memory allocation and DMA
      mapping.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
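
      A hedged sketch of the persistent header buffer setup (function
      and field names are illustrative; sizes and error codes are
      placeholders):

      #include <linux/dma-mapping.h>
      #include <linux/slab.h>
      #include <rdma/ib_verbs.h>

      /* Illustrative sketch: allocate the RPC/RDMA header buffer once
       * and keep it DMA-mapped for the life of the send ctxt, rather
       * than mapping a fresh buffer for every Reply.
       */
      static int example_map_send_hdrbuf(struct ib_device *dev,
                                         size_t inline_size, void **buf,
                                         dma_addr_t *addr)
      {
              *buf = kzalloc(inline_size, GFP_KERNEL);
              if (!*buf)
                      return -ENOMEM;

              *addr = ib_dma_map_single(dev, *buf, inline_size,
                                        DMA_TO_DEVICE);
              if (ib_dma_mapping_error(dev, *addr)) {
                      kfree(*buf);
                      return -EIO;
              }
              /* Unmapped and freed only at transport teardown */
              return 0;
      }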
    • svcrdma: Remove post_send_wr · 986b7889
      Committed by Chuck Lever
      Clean up: Now that the send_wr is part of the svc_rdma_send_ctxt,
      svc_rdma_post_send_wr is nearly empty.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • svcrdma: Don't overrun the SGE array in svc_rdma_send_ctxt · 25fd86ec
      Committed by Chuck Lever
      Receive buffers are always the same size, but each Send WR has a
      variable number of SGEs, based on the contents of the xdr_buf being
      sent.
      
      While assembling a Send WR, keep track of the number of SGEs so that
      we don't exceed the device's maximum, or walk off the end of the
      Send SGE array.
      
      For now the Send path just fails if it exceeds the maximum.
      
      The current logic in svc_rdma_accept bases the maximum number of
      Send SGEs on the largest NFS request that can be sent or received.
      In the transport layer, the limit is actually based on the
      capabilities of the underlying device, not on properties of the
      Upper Layer Protocol.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
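
      A minimal sketch of the bounds check (names are illustrative; the
      limit would come from the device's reported attributes rather than
      from Upper Layer Protocol sizing):

      #include <rdma/ib_verbs.h>

      /* Illustrative sketch: track the running SGE count and fail the
       * Send rather than walking off the end of the SGE array.
       */
      static int example_add_send_sge(struct ib_sge *sges,
                                      unsigned int *cur_sge,
                                      unsigned int dev_max_sge,
                                      u64 addr, u32 length, u32 lkey)
      {
              if (*cur_sge >= dev_max_sge)
                      return -EIO;    /* too many SGEs: fail this send */

              sges[*cur_sge].addr = addr;
              sges[*cur_sge].length = length;
              sges[*cur_sge].lkey = lkey;
              (*cur_sge)++;
              return 0;
      }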
    • svcrdma: Introduce svc_rdma_send_ctxt · 4201c746
      Committed by Chuck Lever
      svc_rdma_op_ctxt's are pre-allocated and maintained on a per-xprt
      free list. This eliminates the overhead of calling kmalloc / kfree,
      both of which grab a globally shared lock that disables interrupts.
      Introduce a replacement to svc_rdma_op_ctxt's that is built
      especially for the svcrdma Send path.
      
      Subsequent patches will take advantage of this new structure by
      allocating real resources which are then cached in these objects.
      The allocations are freed when the transport is torn down.
      
      I've renamed the structure so that static type checking can be used
      to ensure that uses of op_ctxt and send_ctxt are not confused. As an
      additional clean up, structure fields are renamed to conform with
      kernel coding conventions.
      
      Additional clean ups:
      - Handle svc_rdma_send_ctxt_get allocation failure at each call
        site, rather than pre-allocating and hoping we guessed correctly
      - All send_ctxt_put call-sites request page freeing, so remove
        the @free_pages argument
      - All send_ctxt_put call-sites unmap SGEs, so fold that into
        svc_rdma_send_ctxt_put
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
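
      A hedged sketch of the free-list pattern behind
      svc_rdma_send_ctxt_get/_put (structure and locking are
      illustrative, not the exact svcrdma code):

      #include <linux/list.h>
      #include <linux/spinlock.h>

      /* Illustrative ctxt carrying pre-allocated Send resources */
      struct example_send_ctxt {
              struct list_head sc_list;
              /* ... cached header buffer, SGE array, and so on ... */
      };

      /* Pop a ctxt from the per-xprt free list; each call site must
       * handle a NULL return itself.
       */
      static struct example_send_ctxt *
      example_send_ctxt_get(spinlock_t *lock, struct list_head *free)
      {
              struct example_send_ctxt *ctxt = NULL;

              spin_lock(lock);
              if (!list_empty(free)) {
                      ctxt = list_first_entry(free, struct example_send_ctxt,
                                              sc_list);
                      list_del(&ctxt->sc_list);
              }
              spin_unlock(lock);
              return ctxt;
      }

      /* Return a ctxt to the free list; the real put path also unmaps
       * SGEs and releases pages, since every call site wants that.
       */
      static void example_send_ctxt_put(spinlock_t *lock,
                                        struct list_head *free,
                                        struct example_send_ctxt *ctxt)
      {
              spin_lock(lock);
              list_add(&ctxt->sc_list, free);
              spin_unlock(lock);
      }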
    • svcrdma: Clean up Send SGE accounting · 23262790
      Committed by Chuck Lever
      Clean up: Since there's already a svc_rdma_op_ctxt being passed
      around with the running count of mapped SGEs, drop unneeded
      parameters to svc_rdma_post_send_wr().
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • svcrdma: Refactor svc_rdma_dma_map_buf · f016f305
      Committed by Chuck Lever
      Clean up: svc_rdma_dma_map_buf does mostly the same thing as
      svc_rdma_dma_map_page, so let's fold these together.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • svcrdma: Allocate recv_ctxt's on CPU handling Receives · eb5d7a62
      Committed by Chuck Lever
      There is a significant latency penalty when processing an ingress
      Receive if the Receive buffer resides in memory that is not on the
      same NUMA node as the CPU handling completions for a CQ.
      
      The system administrator and the device driver determine which CPU
      handles completions. This CPU does not change during the life of the
      CQ. Further, the Upper Layer has no visibility into which CPU it is.
      
      Allocating Receive buffers in the Receive completion handler
      guarantees that Receive buffers are allocated on the preferred NUMA
      node for that CQ.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
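
      A hedged sketch of why allocating in the completion handler lands
      on the right node (assuming the CQ is polled from process context,
      as svcrdma's workqueue-polled CQs are; names and the buffer size
      are illustrative):

      #include <linux/slab.h>
      #include <rdma/ib_verbs.h>

      /* Illustrative sketch: a slab allocation made here is satisfied
       * from the NUMA node of the CPU running the completion handler,
       * so replacement Receive buffers end up local to the CQ.
       */
      static void example_wc_receive(struct ib_cq *cq, struct ib_wc *wc)
      {
              void *next_buf;

              /* ... hand the completed Receive up to the RPC layer ... */

              next_buf = kmalloc(4096, GFP_KERNEL);   /* node-local */
              if (next_buf) {
                      /* DMA-map next_buf and post it as a new Receive WR */
              }
      }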
    • svcrdma: Persistently allocate and DMA-map Receive buffers · 3316f063
      Committed by Chuck Lever
      The current Receive path uses an array of pages which are allocated
      and DMA mapped when each Receive WR is posted, and then handed off
      to the upper layer in rqstp::rq_arg. The page flip releases unused
      pages in the rq_pages pagelist. This mechanism introduces a
      significant amount of overhead.
      
      So instead, kmalloc the Receive buffer, and leave it DMA-mapped
      while the transport remains connected. This confers a number of
      benefits:
      
      * Each Receive WR requires only one receive SGE, no matter how large
        the inline threshold is. This helps the server-side NFS/RDMA
        transport operate on less capable RDMA devices.
      
      * The Receive buffer is left allocated and mapped all the time. This
        relieves svc_rdma_post_recv from the overhead of allocating and
        DMA-mapping a fresh buffer.
      
      * svc_rdma_wc_receive no longer has to DMA unmap the Receive buffer.
        It has to DMA sync only the number of bytes that were received.
      
      * svc_rdma_build_arg_xdr no longer has to free a page in rq_pages
        for each page in the Receive buffer, making it a constant-time
        function.
      
      * The Receive buffer is now plugged directly into the rq_arg's
        head[0].iov_base, and can be larger than a page without spilling
        over into rq_arg's page list. This enables simplification of
        the RDMA Read path in subsequent patches.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
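
      A minimal sketch of the completion-side consequence (names are
      illustrative): with the buffer left mapped, only the received byte
      range needs a DMA sync:

      #include <linux/dma-mapping.h>
      #include <rdma/ib_verbs.h>

      /* Illustrative sketch: the Receive buffer stays DMA-mapped for
       * the life of the connection; on completion, sync for the CPU
       * only the bytes the device actually wrote.
       */
      static void example_sync_received(struct ib_device *dev,
                                        dma_addr_t buf_addr,
                                        unsigned int byte_len)
      {
              ib_dma_sync_single_for_cpu(dev, buf_addr, byte_len,
                                         DMA_FROM_DEVICE);
      }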
    • svcrdma: Simplify svc_rdma_recv_ctxt_put · 1e5f4160
      Committed by Chuck Lever
      Currently svc_rdma_recv_ctxt_put's callers have to know whether they
      want to free the ctxt's pages or not. This means developers have
      to know when and why to set that free_pages argument.
      
      Instead, the ctxt should carry that information with it so that
      svc_rdma_recv_ctxt_put does the right thing no matter who is
      calling.
      
      We want to keep track of the number of pages in the Receive buffer
      separately from the number of pages pulled over by RDMA Read. This
      is so that the correct number of pages can be freed properly and
      that number is well-documented.
      
      So now, rc_hdr_count is the number of pages consumed by head[0]
      (i.e., the page index where the Read chunk should start); and
      rc_page_count is always the number of pages that need to be released
      when the ctxt is put.
      
      The @free_pages argument is no longer needed.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
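
      A hedged sketch of the put path once the ctxt carries its own page
      count (the structure here is illustrative):

      #include <linux/mm.h>

      /* Illustrative ctxt: rc_page_count records how many pages must be
       * released at put time, so callers need no @free_pages argument.
       */
      struct example_recv_ctxt {
              struct page     *rc_pages[16];  /* illustrative bound */
              unsigned int    rc_page_count;
      };

      static void example_recv_ctxt_put(struct example_recv_ctxt *ctxt)
      {
              unsigned int i;

              for (i = 0; i < ctxt->rc_page_count; i++)
                      put_page(ctxt->rc_pages[i]);
              ctxt->rc_page_count = 0;
              /* ... then return the ctxt to its free list ... */
      }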
    • svcrdma: Remove sc_rq_depth · 2c577bfe
      Committed by Chuck Lever
      Clean up: No need to retain rq_depth in struct svcrdma_xprt, it is
      used only in svc_rdma_accept().
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • svcrdma: Introduce svc_rdma_recv_ctxt · ecf85b23
      Committed by Chuck Lever
      svc_rdma_op_ctxt's are pre-allocated and maintained on a per-xprt
      free list. This eliminates the overhead of calling kmalloc / kfree,
      both of which grab a globally shared lock that disables interrupts.
      To reduce contention further, separate the use of these objects in
      the Receive and Send paths in svcrdma.
      
      Subsequent patches will take advantage of this separation by
      allocating real resources which are then cached in these objects.
      The allocations are freed when the transport is torn down.
      
      I've renamed the structure so that static type checking can be used
      to ensure that uses of op_ctxt and recv_ctxt are not confused. As an
      additional clean up, structure fields are renamed to conform with
      kernel coding conventions.
      
      As a final clean up, helpers related to recv_ctxt are moved closer
      to the functions that use them.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
  7. 07 May 2018, 4 commits
  8. 17 April 2018, 1 commit
  9. 11 April 2018, 6 commits
  10. 04 April 2018, 2 commits
    • NFSD: Clean up legacy NFS SYMLINK argument XDR decoders · 38a70315
      Committed by Chuck Lever
      Move common code in NFSD's legacy SYMLINK decoders into a helper.
      The immediate benefits include:
      
       - one fewer data copy on transports that support DDP
       - consistent error checking across all versions
       - reduction of code duplication
       - support for both legal forms of SYMLINK requests on RDMA
         transports for all versions of NFS (in particular, NFSv2, for
         completeness)
      
      In the long term, this helper is an appropriate spot to perform a
      per-transport call-out to fill the pathname argument using, say,
      RDMA Reads.
      
      Filling the pathname in the proc function also means that eventually
      the incoming filehandle can be interpreted so that filesystem-
      specific memory can be allocated as a sink for the pathname
      argument, rather than using anonymous pages.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • NFSD: Clean up legacy NFS WRITE argument XDR decoders · 8154ef27
      Committed by Chuck Lever
      Move common code in NFSD's legacy NFS WRITE decoders into a helper.
      The immediate benefit is reduction of code duplication and some nice
      micro-optimizations (see below).
      
      In the long term, this helper can perform a per-transport call-out
      to fill the rq_vec (say, using RDMA Reads).
      
      The legacy WRITE decoders and procs are changed to work like NFSv4,
      which constructs the rq_vec just before it is about to call
      vfs_writev.
      
      Why? Calling a transport call-out from the proc instead of the XDR
      decoder means that the incoming FH can be resolved to a particular
      filesystem and file. This would allow pages from the backing file to
      be presented to the transport to be filled, rather than presenting
      anonymous pages and copying or flipping them into the file's page
      cache later.
      
      I also prefer using the pages in rq_arg.pages, instead of pulling
      the data pages directly out of the rqstp::rq_pages array. This is
      currently the way the NFSv3 write decoder works, but the other two
      do not seem to take this approach. Fixing this removes the only
      reference to rq_pages found in NFSD, eliminating an NFSD assumption
      about how transports use the pages in rq_pages.
      
      Lastly, avoid setting up the first element of rq_vec as a zero-
      length buffer. This happens with an RDMA transport when a normal
      Read chunk is present because the data payload is in rq_arg's
      page list (none of it is in the head buffer).
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
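
      A minimal sketch of that last point (the helper is illustrative):
      emit a head element only when the head actually carries payload,
      so a Read chunk whose data sits entirely in the page list never
      produces a zero-length vec[0]:

      #include <linux/uio.h>

      /* Illustrative sketch: start the write vector with the head kvec
       * only if it holds payload bytes; otherwise the first element
       * comes straight from the page list.
       */
      static unsigned int example_fill_vec_head(struct kvec *vec,
                                                const struct kvec *head,
                                                size_t payload_in_head)
      {
              unsigned int i = 0;

              if (payload_in_head) {
                      vec[i].iov_base = head->iov_base;
                      vec[i].iov_len = payload_in_head;
                      i++;
              }
              return i;       /* page-list elements are appended here */
      }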