1. 23 Mar 2022, 2 commits
  2. 14 Mar 2022, 3 commits
    • SUNRPC: improve 'swap' handling: scheduling and PF_MEMALLOC · 8db55a03
      Committed by NeilBrown
      rpc tasks can be marked as RPC_TASK_SWAPPER.  This causes GFP_MEMALLOC
      to be used for some allocations.  This is needed in some cases, but not
      in all of the cases where it is currently provided, and it is missing
      from some cases where it is needed.
      
      Currently *all* tasks associated with a rpc_client on which swap is
      enabled get the flag and hence some GFP_MEMALLOC support.
      
      GFP_MEMALLOC is provided for ->buf_alloc() but only swap-writes need it.
      However xdr_alloc_bvec does not get GFP_MEMALLOC - though it often does
      need it.
      
      xdr_alloc_bvec is called while the XPRT_LOCK is held.  If this blocks,
      then it blocks all other queued tasks.  So this allocation needs
      GFP_MEMALLOC for *all* requests, not just writes, when the xprt is used
      for any swap writes.
      
      Similarly, if the transport is not connected, that will block all
      requests including swap writes, so memory allocations should get
      GFP_MEMALLOC if swap writes are possible.
      
      So with this patch:
       1/ we ONLY set RPC_TASK_SWAPPER for swap writes.
       2/ __rpc_execute() sets PF_MEMALLOC while handling any task
          with RPC_TASK_SWAPPER set, or when handling any task that
          holds the XPRT_LOCKED lock on an xprt used for swap.
          This removes the need for the RPC_IS_SWAPPER() test
          in ->buf_alloc handlers.
       3/ xprt_prepare_transmit() sets PF_MEMALLOC after locking
          any task to a swapper xprt.  __rpc_execute() will clear it.
       4/ PF_MEMALLOC is set for all the connect workers.
      
      Reviewed-by: Chuck Lever <chuck.lever@oracle.com> (for xprtrdma parts)
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
    • SUNRPC: remove scheduling boost for "SWAPPER" tasks. · a80a8461
      Committed by NeilBrown
      Currently, tasks marked as "swapper" tasks get put to the front of
      non-priority rpc_queues, and are sorted earlier than non-swapper tasks on
      the transport's ->xmit_queue.
      
      This is pointless as currently *all* tasks for a mount that has swap
      enabled on *any* file are marked as "swapper" tasks.  So the net result
      is that the non-priority rpc_queues are reverse-ordered (LIFO).
      
      This scheduling boost is not necessary to avoid deadlocks, and hurts
      fairness, so remove it.  If there were a need to expedite some requests,
      the tk_priority mechanism is a more appropriate tool.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
    • SUNRPC/call_alloc: async tasks mustn't block waiting for memory · c487216b
      Committed by NeilBrown
      When memory is short, new worker threads cannot be created and we depend
      on the minimum one rpciod thread to be able to handle everything.
      So it must not block waiting for memory.
      
      mempools are a particular problem, as memory can only be released back
      to a mempool by an async rpc task running.  If all available
      workqueue threads are waiting on the mempool, no thread is available
      to return anything.
      
      rpc_malloc() can block, and this might cause deadlocks.
      So check RPC_IS_ASYNC(), rather than RPC_IS_SWAPPER() to determine if
      blocking is acceptable.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
  3. 26 Feb 2022, 1 commit
  4. 21 Oct 2021, 1 commit
    • SUNRPC: Trace calls to .rpc_call_done · b40887e1
      Committed by Chuck Lever
      Introduce a single tracepoint that can replace simple dprintk call
      sites in upper layer "rpc_call_done" callbacks. Example:
      
         kworker/u24:2-1254  [001]   771.026677: rpc_stats_latency:    task:00000001@00000002 xid=0x16a6f3c0 rpcbindv2 GETPORT backlog=446 rtt=101 execute=555
         kworker/u24:2-1254  [001]   771.026677: rpc_task_call_done:   task:00000001@00000002 flags=ASYNC|DYNAMIC|SOFT|SOFTCONN|SENT runstate=RUNNING|ACTIVE status=0 action=rpcb_getport_done
         kworker/u24:2-1254  [001]   771.026678: rpcb_setport:         task:00000001@00000002 status=0 port=20048
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
  5. 10 Oct 2021, 1 commit
  6. 04 Oct 2021, 2 commits
  7. 28 Jun 2021, 2 commits
    • SUNRPC: Should wake up the privileged task firstly. · 5483b904
      Committed by Zhang Xiaoxu
      When picking a task from the wait queue to wake up, a non-privileged
      task may be chosen rather than a privileged one. This can lead to a
      deadlock similar to the one fixed by commit dfe1fe75 ("NFSv4: Fix
      deadlock between nfs4_evict_inode() and nfs4_opendata_get_inode()"):
      
      A privileged delegreturn task is queued to the privileged list
      because all the slots are assigned. If there are not enough slots to
      wake up the batch of non-privileged tasks (sessions with fewer than
      8 slots), the privileged delegreturn task may miss its wake-up,
      because the task that was picked cannot get a slot while the session
      is draining.
      
      So we should treat the privileged task as an emergency task, and
      execute it as soon as we can.
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Fixes: 5fcdfacc ("NFSv4: Return delegations synchronously in evict_inode")
      Cc: stable@vger.kernel.org
      Signed-off-by: Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
    • SUNRPC: Fix the batch tasks count wraparound. · fcb170a9
      Committed by Zhang Xiaoxu
      The 'queue->nr' counter will wrap around from 0 to 255 when only the
      current priority queue has tasks. This can lead to a deadlock
      similar to the one fixed by commit dfe1fe75 ("NFSv4: Fix deadlock
      between nfs4_evict_inode() and nfs4_opendata_get_inode()"):
      
      A privileged delegreturn task is queued to the privileged list
      because all the slots are assigned. When a non-privileged task
      completes and releases its slot, another non-privileged task may be
      picked, and its slot allocation may fail while the session is
      draining.
      
      If 'queue->nr' has wrapped around to 255 and there are not enough
      slots to service it, the privileged delegreturn task will miss its
      wake-up.
      
      So we should avoid the wraparound of 'queue->nr'.
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Fixes: 5fcdfacc ("NFSv4: Return delegations synchronously in evict_inode")
      Fixes: 1da177e4 ("Linux-2.6.12-rc2")
      Cc: stable@vger.kernel.org
      Signed-off-by: Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
  8. 09 Mar 2021, 1 commit
  9. 03 Dec 2020, 1 commit
  10. 21 Sep 2020, 5 commits
  11. 05 Apr 2020, 1 commit
  12. 16 Mar 2020, 1 commit
  13. 15 Jan 2020, 1 commit
  14. 23 Nov 2019, 1 commit
    • SUNRPC: Capture completion of all RPC tasks · a264abad
      Committed by Chuck Lever
      RPC tasks on the backchannel never invoke xprt_complete_rqst(), so
      there is no way to report their tk_status at completion. Similarly,
      any RPC task that exits via rpc_exit_task() before it is replied to
      will disappear without a trace.
      
      Introduce a trace point that is symmetrical with rpc_task_begin that
      captures the termination status of each RPC task.
      
      Sample trace output for callback requests initiated on the server:
         kworker/u8:12-448   [003]   127.025240: rpc_task_end:         task:50@3 flags=ASYNC|DYNAMIC|SOFT|SOFTCONN|SENT runstate=RUNNING|ACTIVE status=0 action=rpc_exit_task
         kworker/u8:12-448   [002]   127.567310: rpc_task_end:         task:51@3 flags=ASYNC|DYNAMIC|SOFT|SOFTCONN|SENT runstate=RUNNING|ACTIVE status=0 action=rpc_exit_task
         kworker/u8:12-448   [001]   130.506817: rpc_task_end:         task:52@3 flags=ASYNC|DYNAMIC|SOFT|SOFTCONN|SENT runstate=RUNNING|ACTIVE status=0 action=rpc_exit_task
      
      Odd, though, that I never see trace_rpc_task_complete, either in the
      forward or backchannel. Should it be removed?
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
  15. 06 Nov 2019, 1 commit
  16. 18 Sep 2019, 1 commit
  17. 20 Aug 2019, 1 commit
  18. 13 Jul 2019, 1 commit
  19. 09 Jul 2019, 1 commit
  20. 07 Jul 2019, 3 commits
  21. 22 Jun 2019, 1 commit
  22. 21 May 2019, 1 commit
  23. 26 Apr 2019, 7 commits