1. 09 Mar 2021, 1 commit
  2. 03 Dec 2020, 1 commit
  3. 21 Sep 2020, 5 commits
  4. 05 Apr 2020, 1 commit
  5. 16 Mar 2020, 1 commit
  6. 15 Jan 2020, 1 commit
  7. 23 Nov 2019, 1 commit
    • SUNRPC: Capture completion of all RPC tasks · a264abad
      Committed by Chuck Lever
      RPC tasks on the backchannel never invoke xprt_complete_rqst(), so
      there is no way to report their tk_status at completion. Likewise,
      any RPC task that exits via rpc_exit_task() before it is replied to
      disappears without a trace.
      
      Introduce a trace point that is symmetrical with rpc_task_begin that
      captures the termination status of each RPC task.
      
      Sample trace output for callback requests initiated on the server:
         kworker/u8:12-448   [003]   127.025240: rpc_task_end:         task:50@3 flags=ASYNC|DYNAMIC|SOFT|SOFTCONN|SENT runstate=RUNNING|ACTIVE status=0 action=rpc_exit_task
         kworker/u8:12-448   [002]   127.567310: rpc_task_end:         task:51@3 flags=ASYNC|DYNAMIC|SOFT|SOFTCONN|SENT runstate=RUNNING|ACTIVE status=0 action=rpc_exit_task
         kworker/u8:12-448   [001]   130.506817: rpc_task_end:         task:52@3 flags=ASYNC|DYNAMIC|SOFT|SOFTCONN|SENT runstate=RUNNING|ACTIVE status=0 action=rpc_exit_task
      
      Oddly, though, I never see trace_rpc_task_complete, in either the
      forward channel or the backchannel. Should it be removed?
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
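      The new event is a standard kernel trace point. A rough sketch of what
      a trace point symmetrical with rpc_task_begin could look like (the
      field names, the call site, and the omitted trace-header boilerplate
      are assumptions for illustration, not the exact upstream definition):

      #include <linux/tracepoint.h>
      #include <linux/sunrpc/sched.h>
      #include <linux/sunrpc/clnt.h>

      TRACE_EVENT(rpc_task_end,
              TP_PROTO(const struct rpc_task *task, const void *action),
              TP_ARGS(task, action),

              TP_STRUCT__entry(
                      __field(unsigned int, task_id)
                      __field(unsigned int, client_id)
                      __field(int, status)
                      __field(unsigned long, runstate)
                      __field(unsigned long, flags)
                      __field(const void *, action)
              ),

              TP_fast_assign(
                      /* tk_pid is only present with debugging/tracing enabled */
                      __entry->task_id   = task->tk_pid;
                      __entry->client_id = task->tk_client ?
                                           task->tk_client->cl_clid : -1;
                      __entry->status    = task->tk_status;
                      __entry->runstate  = task->tk_runstate;
                      __entry->flags     = task->tk_flags;
                      __entry->action    = action;
              ),

              TP_printk("task:%u@%u flags=%lx runstate=%lx status=%d action=%ps",
                      __entry->task_id, __entry->client_id,
                      __entry->flags, __entry->runstate,
                      __entry->status, __entry->action)
      );

      The call site would sit on the task teardown path, e.g.
      trace_rpc_task_end(task, task->tk_action), which matches the
      action=rpc_exit_task seen in the sample output above; the real event
      additionally decodes the flag and runstate bits into the symbolic
      names shown there.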
  8. 06 Nov 2019, 1 commit
  9. 18 Sep 2019, 1 commit
  10. 20 Aug 2019, 1 commit
  11. 13 Jul 2019, 1 commit
  12. 09 Jul 2019, 1 commit
  13. 07 Jul 2019, 3 commits
  14. 22 Jun 2019, 1 commit
  15. 21 May 2019, 1 commit
  16. 26 Apr 2019, 7 commits
  17. 10 Mar 2019, 1 commit
  18. 03 Mar 2019, 1 commit
  19. 21 Feb 2019, 1 commit
  20. 20 Dec 2018, 2 commits
    • NFS/NFSD/SUNRPC: replace generic creds with 'struct cred'. · a52458b4
      Committed by NeilBrown
      SUNRPC has two sorts of credentials, both of which appear as
      "struct rpc_cred".
      There are "generic credentials", which are supplied by clients
      such as NFS and passed in 'struct rpc_message' to indicate
      which user should authorize the request, and there are
      low-level credentials such as AUTH_NULL, AUTH_UNIX, and AUTH_GSS,
      which describe the credential to be sent over the wire.
      
      This patch replaces all the generic credentials with 'struct cred'
      pointers - the credential structure used throughout Linux.
      
      For machine credentials, there is a special 'struct cred *' pointer
      which is statically allocated and recognized where needed as
      having a special meaning.  A look-up of a low-level cred will
      map this to a machine credential.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Acked-by: J. Bruce Fields <bfields@redhat.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
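      A minimal sketch of the machine-credential sentinel described above,
      assuming a kernel of that era (the identifier names are illustrative,
      not necessarily the upstream symbols):

      #include <linux/cred.h>

      /* One statically allocated cred whose address means
       * "use the machine credential"; it is never freed. */
      static struct cred machine_cred_sentinel = {
              .usage = ATOMIC_INIT(1),
      };

      /* What a caller stores in rpc_message.rpc_cred to request
       * the machine credential. */
      static inline const struct cred *machine_cred_sketch(void)
      {
              return &machine_cred_sentinel;
      }

      /* The auth layer can recognize the sentinel by pointer identity and
       * map it to a real low-level (e.g. AUTH_GSS) machine credential when
       * looking up the cred to put on the wire. */
      static inline bool is_machine_cred_sketch(const struct cred *cred)
      {
              return cred == &machine_cred_sentinel;
      }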
    • SUNRPC: add side channel to use non-generic cred for rpc call. · 1de7eea9
      Committed by NeilBrown
      The credential passed in rpc_message.rpc_cred is always a
      generic credential except in one instance.
      When gss_destroying_context() calls rpc_call_null(), it passes
      a specific credential that it needs to destroy.
      In this case the RPC acts *on* the credential rather than
      being authorized by it.
      
      This special case deserves explicit support, and providing it will
      mean that rpc_message.rpc_cred is *always* generic, allowing
      some optimizations.
      
      So add "tk_op_cred" to rpc_task and "rpc_op_cred" to the setup data.
      Use this to pass the cred down from rpc_call_null(), and have
      rpcauth_bindcred() notice it and bind it in place.
      
      Credit to kernel test robot <fengguang.wu@intel.com> for finding
      a bug in an earlier version of this patch.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
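      A sketch of the bind-time decision this describes, using the field
      names from the commit text (tk_op_cred, rpc_op_cred); the helper is
      illustrative rather than the upstream rpcauth_bindcred() body:

      #include <linux/sunrpc/auth.h>
      #include <linux/sunrpc/sched.h>

      /* At task setup, an rpc_op_cred supplied in the setup data would be
       * pinned on the task, roughly:
       *     task->tk_op_cred = get_rpccred(setup->rpc_op_cred);
       */

      static struct rpc_cred *bind_op_or_generic_cred(struct rpc_task *task,
                                                      struct rpc_cred *generic_binding)
      {
              /* rpc_call_null() path for gss_destroying_context(): the RPC
               * acts *on* this exact low-level cred, so bind it directly. */
              if (task->tk_op_cred)
                      return get_rpccred(task->tk_op_cred);

              /* Normal path: use whatever the generic rpc_message cred
               * lookup produced. */
              return generic_binding;
      }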
  21. 01 Oct 2018, 3 commits
  22. 11 Apr 2018, 1 commit
  23. 09 Feb 2018, 1 commit
    • fix parallelism for rpc tasks · f515f86b
      Committed by Olga Kornievskaia
      Hi folks,
      
      On a multi-core machine, is it expected that we can have parallel RPCs
      handled by each of the per-core workqueues?

      In testing a read workload, I observed via the "top" command that a
      single "kworker" thread was servicing the requests (no parallelism).
      It is more prominent when doing these operations over a krb5p mount.

      Bruce suggested trying this change, and in my testing the read
      workload is then spread among all the kworker threads.
      Signed-off-by: Olga Kornievskaia <kolga@netapp.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
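      A minimal sketch of this kind of change, assuming it amounts to
      allocating the rpciod workqueue as WQ_UNBOUND so queued work is not
      pinned to the submitting CPU (the names and exact flags here are
      assumptions, not a quote of the actual patch):

      #include <linux/workqueue.h>
      #include <linux/errno.h>

      static struct workqueue_struct *rpciod_wq_sketch;

      static int rpciod_start_sketch(void)
      {
              /* WQ_UNBOUND lets work items run on any CPU instead of the
               * per-CPU worker pools; WQ_MEM_RECLAIM keeps a rescuer thread
               * so RPC work can make progress under memory pressure. */
              rpciod_wq_sketch = alloc_workqueue("rpciod",
                                                 WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
              return rpciod_wq_sketch ? 0 : -ENOMEM;
      }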
  24. 08 Feb 2018, 1 commit
  25. 07 Feb 2018, 1 commit