1. 26 Apr, 2019 14 commits
  2. 12 Apr, 2019 1 commit
  3. 21 Feb, 2019 1 commit
  4. 14 Feb, 2019 2 commits
  5. 13 Feb, 2019 4 commits
    • xprtrdma: Reduce the doorbell rate (Receive) · e340c2d6
      Committed by Chuck Lever
      Post RECV WRs in batches to reduce the hardware doorbell rate per
      transport. This helps the RPC-over-RDMA client scale better in
      number of transports.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
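      The batching idea can be sketched in userspace with mock types (the real code uses struct ib_recv_wr chains and ib_post_recv(); all names below are illustrative only): linking WRs through their next pointers lets one post call, and therefore one doorbell, hand the HCA a whole batch.

```c
#include <assert.h>
#include <stddef.h>

/* Mock of a receive work-request chain; the kernel uses struct
 * ib_recv_wr and ib_post_recv() from the RDMA core. */
struct mock_recv_wr {
	struct mock_recv_wr *next;
};

static int doorbells;	/* one ring per post call, however long the chain */

static int post_recv_chain(struct mock_recv_wr *first)
{
	int posted = 0;

	doorbells++;			/* models the MMIO doorbell write */
	for (; first; first = first->next)
		posted++;
	return posted;
}

/* Link n WRs into one chain so a single post (one doorbell) covers all. */
static int post_batched(struct mock_recv_wr *wrs, int n)
{
	int i;

	if (n == 0)
		return 0;
	for (i = 0; i < n; i++)
		wrs[i].next = (i + 1 < n) ? &wrs[i + 1] : NULL;
	return post_recv_chain(&wrs[0]);
}
```

      Posting eight WRs one at a time would ring the doorbell eight times; chained, the same work costs one ring.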
    • xprtrdma: Check inline size before providing a Write chunk · d4550bbe
      Committed by Chuck Lever
      In very rare cases, an NFS READ operation might predict that the
      non-payload part of the RPC Call is large. For instance, an
      NFSv4 COMPOUND with a large GETATTR result, in combination with a
      large Kerberos credential, could push the non-payload part to be
      several kilobytes.
      
      If the non-payload part is larger than the connection's inline
      threshold, the client is required to provision a Reply chunk. The
      current Linux client does not check for this case. There are two
      obvious ways to handle it:
      
      a. Provision a Write chunk for the payload and a Reply chunk for
         the non-payload part
      
      b. Provision a Reply chunk for the whole RPC Reply
      
      Some testing at a recent NFS bake-a-thon showed that servers can
      mostly handle a. but there are some corner cases that do not work
      yet. b. already works (it has to, to handle krb5i/p), but could be
      somewhat less efficient. However, I expect this scenario to be very
      rare -- no-one has reported a problem yet.
      
      So I'm going to implement b. Sometime later I will provide some
      patches to help make b. a little more efficient by more carefully
      choosing the Reply chunk's segment sizes to ensure the payload is
      optimally aligned.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
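      The size check described above can be sketched as follows (hypothetical names, not the actual xprtrdma symbols): compare the non-payload portion of the expected Reply against the connection's inline threshold, and fall back to a Reply chunk for the whole reply when it does not fit.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch of option b.: provision a Reply chunk for the
 * whole RPC Reply when the non-payload part (headers, GETATTR result,
 * credential, etc.) exceeds the connection's inline threshold. */
static bool needs_reply_chunk(size_t reply_size, size_t payload_size,
			      size_t inline_threshold)
{
	size_t non_payload = reply_size - payload_size;

	return non_payload > inline_threshold;
}
```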
    • xprtrdma: Fix sparse warnings · ec482cc1
      Committed by Chuck Lever
      linux/net/sunrpc/xprtrdma/rpc_rdma.c:375:63: warning: incorrect type in argument 5 (different base types)
      linux/net/sunrpc/xprtrdma/rpc_rdma.c:375:63:    expected unsigned int [usertype] xid
      linux/net/sunrpc/xprtrdma/rpc_rdma.c:375:63:    got restricted __be32 [usertype] rq_xid
      linux/net/sunrpc/xprtrdma/rpc_rdma.c:432:62: warning: incorrect type in argument 5 (different base types)
      linux/net/sunrpc/xprtrdma/rpc_rdma.c:432:62:    expected unsigned int [usertype] xid
      linux/net/sunrpc/xprtrdma/rpc_rdma.c:432:62:    got restricted __be32 [usertype] rq_xid
      linux/net/sunrpc/xprtrdma/rpc_rdma.c:489:62: warning: incorrect type in argument 5 (different base types)
      linux/net/sunrpc/xprtrdma/rpc_rdma.c:489:62:    expected unsigned int [usertype] xid
      linux/net/sunrpc/xprtrdma/rpc_rdma.c:489:62:    got restricted __be32 [usertype] rq_xid
      
      Fixes: 0a93fbcb ("xprtrdma: Plant XID in on-the-wire RDMA ... ")
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
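      A userspace analog of what sparse is complaining about (the kernel side would use be32_to_cpu(); ntohl() plays the same role here): the XID travels on the wire in big-endian, so it must be byte-swapped before being passed where a host-order unsigned int is expected.

```c
#include <arpa/inet.h>
#include <assert.h>
#include <stdint.h>

/* rq_xid is a wire-format (big-endian) value, __be32 in the kernel.
 * Converting it explicitly satisfies sparse's endianness checking and
 * yields the host-order integer the callee expects. */
static unsigned int xid_to_host(uint32_t rq_xid_be)
{
	return ntohl(rq_xid_be);	/* kernel: be32_to_cpu(rq_xid) */
}
```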
    • xprtrdma: Make sure Send CQ is allocated on an existing compvec · a4cb5bdb
      Committed by Nicolas Morey-Chaisemartin
      Make sure the device has at least 2 completion vectors
      before allocating to compvec#1
      
      Fixes: a4699f56 (xprtrdma: Put Send CQ in IB_POLL_WORKQUEUE mode)
      Signed-off-by: Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
      Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
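      The guard amounts to a one-line decision (illustrative name, not the patch itself): use completion vector 1 only when the device reports at least two completion vectors, otherwise fall back to vector 0.

```c
#include <assert.h>

/* A completion vector index must be < the device's num_comp_vectors,
 * so compvec 1 is only valid on devices with two or more vectors. */
static int choose_send_compvec(int num_comp_vectors)
{
	return num_comp_vectors >= 2 ? 1 : 0;
}
```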
  6. 07 Feb, 2019 5 commits
    • svcrdma: Remove syslog warnings in work completion handlers · 8820bcaa
      Committed by Chuck Lever
      These can result in a lot of log noise, and are able to be triggered
      by client misbehavior. Since there are trace points in these
      handlers now, there's no need to spam the log.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • svcrdma: Squelch compiler warning when SUNRPC_DEBUG is disabled · c7920f06
      Committed by Chuck Lever
        CC [M]  net/sunrpc/xprtrdma/svc_rdma_transport.o
      linux/net/sunrpc/xprtrdma/svc_rdma_transport.c: In function ‘svc_rdma_accept’:
      linux/net/sunrpc/xprtrdma/svc_rdma_transport.c:452:19: warning: variable ‘sap’ set but not used [-Wunused-but-set-variable]
        struct sockaddr *sap;
                         ^
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
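      One common way to silence an unused-but-set-variable warning is to declare the variable only in the build that reads it, as in this userspace sketch (SUNRPC_DEBUG here is a stand-in macro and the function is hypothetical; `__maybe_unused` is the other idiomatic kernel fix):

```c
#include <assert.h>
#include <stdio.h>

#define SUNRPC_DEBUG 0	/* stand-in for the kernel debug config option */

static int accept_demo(void)
{
#if SUNRPC_DEBUG
	const char *sap = "192.0.2.1";	/* declared only when it is read */
#endif
	int rc = 0;

#if SUNRPC_DEBUG
	printf("accepted connection from %s\n", sap);
#endif
	return rc;
}
```

      With the debug build disabled, the variable no longer exists, so -Wunused-but-set-variable has nothing to flag.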
    • svcrdma: Use struct_size() in kmalloc() · 14cfbd94
      Committed by Gustavo A. R. Silva
      One of the more common cases of allocation size calculations is finding
      the size of a structure that has a zero-sized array at the end, along
      with memory for some number of elements for that array. For example:
      
      struct foo {
          int stuff;
          struct boo entry[];
      };
      
      instance = kmalloc(sizeof(struct foo) + count * sizeof(struct boo), GFP_KERNEL);
      
      Instead of leaving these open-coded and prone to type mistakes, we can
      now use the new struct_size() helper:
      
      instance = kmalloc(struct_size(instance, entry, count), GFP_KERNEL);
      
      This code was detected with the help of Coccinelle.
      Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
      Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
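      Besides readability, struct_size() protects the multiplication against overflow. A userspace sketch of the same computation (saturating to SIZE_MAX so a subsequent allocation fails rather than under-allocates) might look like:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct boo { int x; };
struct foo {
	int stuff;
	struct boo entry[];	/* flexible array member */
};

/* Saturating equivalent of struct_size(instance, entry, count) for
 * this particular struct: on overflow, return SIZE_MAX so kmalloc()
 * (or malloc()) fails instead of silently under-allocating. */
static size_t struct_size_foo(size_t count)
{
	if (count > (SIZE_MAX - sizeof(struct foo)) / sizeof(struct boo))
		return SIZE_MAX;
	return sizeof(struct foo) + count * sizeof(struct boo);
}
```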
    • svcrpc: fix unlikely races preventing queueing of sockets · 95503d29
      Committed by J. Bruce Fields
      In the rpc server, when something happens that might be a reason to
      wake up a thread to do something, what we do is
      
      	- modify xpt_flags, sk_sock->flags, xpt_reserved, or
      	  xpt_nr_rqsts to indicate the new situation
      	- call svc_xprt_enqueue() to decide whether to wake up a thread.
      
      svc_xprt_enqueue may require multiple conditions to be true before
      queueing up a thread to handle the xprt.  In the SMP case, one of the
      other CPUs may have set another required condition, and in that case,
      although both CPUs run svc_xprt_enqueue(), it's possible that neither
      call sees the writes done by the other CPU in time, and neither one
      recognizes that all the required conditions have been set.  A socket
      could therefore be ignored indefinitely.
      
      Add memory barriers to ensure that any svc_xprt_enqueue() call will
      always see the conditions changed by other CPUs before deciding to
      ignore a socket.
      
      I've never seen this race reported.  In the unlikely event it happens,
      another event will usually come along and the problem will fix itself.
      So I don't think this is worth backporting to stable.
      
      Chuck tried this patch and said "I don't see any performance
      regressions, but my server has only a single last-level CPU cache."
      Tested-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
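      The barrier pairing described above can be modeled with C11 atomics standing in for the kernel's smp_mb() (a minimal sketch, not the actual patch): each side publishes its own condition, issues a full fence, then reads the other side's condition, which guarantees that at least one of the two readers observes both conditions set.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool cond_a, cond_b;

/* CPU 0's path: publish condition A, full fence, then check B. */
static bool set_a_then_check_b(void)
{
	atomic_store_explicit(&cond_a, true, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* smp_mb() analog */
	return atomic_load_explicit(&cond_b, memory_order_relaxed);
}

/* CPU 1's path: publish condition B, full fence, then check A. */
static bool set_b_then_check_a(void)
{
	atomic_store_explicit(&cond_b, true, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);
	return atomic_load_explicit(&cond_a, memory_order_relaxed);
}
```

      Without the fences, both loads could be satisfied before the other side's store becomes visible, and neither caller would enqueue the socket.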
    • svcrdma: Remove max_sge check at connect time · e248aa7b
      Committed by Chuck Lever
      Two and a half years ago, the client was changed to use gathered
      Send for larger inline messages, in commit 655fec69 ("xprtrdma:
      Use gathered Send for large inline messages"). Several fixes were
      required because there are a few in-kernel device drivers whose
      max_sge is 3, and these were broken by the change.
      
      Apparently my memory is going, because some time later, I submitted
      commit 25fd86ec ("svcrdma: Don't overrun the SGE array in
      svc_rdma_send_ctxt"), and after that, commit f3c1fd0e ("svcrdma:
      Reduce max_send_sges"). These too incorrectly assumed in-kernel
      device drivers would have more than a few Send SGEs available.
      
      The fix for the server side is not the same. This is because the
      fundamental problem on the server is that, whether or not the client
      has provisioned a chunk for the RPC reply, the server must squeeze
      even the most complex RPC replies into a single RDMA Send. Failing
      in the send path because of Send SGE exhaustion should never be an
      option.
      
      Therefore, instead of failing when the send path runs out of SGEs,
      switch to using a bounce buffer mechanism to handle RPC replies that
      are too complex for the device to send directly. That allows us to
      remove the max_sge check to enable drivers with small max_sge to
      work again.
      Reported-by: Don Dutile <ddutile@redhat.com>
      Fixes: 25fd86ec ("svcrdma: Don't overrun the SGE array in ...")
      Cc: stable@vger.kernel.org
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
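      The bounce-buffer fallback can be sketched in userspace (hypothetical names, not the svcrdma symbols): when a reply needs more segments than the device has Send SGEs, copy the segments into one contiguous buffer so the Send requires only a single SGE.

```c
#include <assert.h>
#include <string.h>

struct seg {
	const char *base;
	size_t len;
};

/* Copy every segment into one contiguous bounce buffer; the result
 * can then be posted with a single SGE regardless of the device's
 * max_sge. Returns 0 if the buffer is too small. */
static size_t flatten_to_bounce(const struct seg *segs, int nsegs,
				char *bounce, size_t bounce_len)
{
	size_t off = 0;
	int i;

	for (i = 0; i < nsegs; i++) {
		if (off + segs[i].len > bounce_len)
			return 0;
		memcpy(bounce + off, segs[i].base, segs[i].len);
		off += segs[i].len;
	}
	return off;
}
```

      The extra copy costs some CPU, but only on the rare replies that exceed the device's SGE limit; all other replies still go out gathered.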
  7. 09 Jan, 2019 2 commits
  8. 03 Jan, 2019 11 commits