Commit fa4c44b5 authored by Chuck Lever, committed by Zheng Zengkai

SUNRPC: set rq_page_end differently

mainline inclusion
from mainline-5.13-rc1
commit ab836264
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZVL2
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ab8362645fba90fa44ec1991ad05544e307dd02f

-------------------------------------------------

Patch series "SUNRPC consumer for the bulk page allocator"

This patch set and the measurements below are based on yesterday's
bulk allocator series:

  git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v5r9

The patches change SUNRPC to invoke the array-based bulk allocator
instead of alloc_page().

The micro-benchmark results are promising.  I ran a mixture of 256KB
reads and writes over NFSv3.  The server's kernel is built with KASAN
enabled, so the comparison is exaggerated but I believe it is still
valid.

I instrumented svc_recv() to measure the latency of each call to
svc_alloc_arg() and report it via a trace point.  The following results
are averages across the trace events.

  Single page: 25.007 us per call over 532,571 calls
  Bulk list:    6.258 us per call over 517,034 calls
  Bulk array:   4.590 us per call over 517,442 calls

This patch (of 2):

Refactor:

I'm about to use the loop variable @i for something else.

As far as the "i++" is concerned, that is a post-increment. The
value of @i is not used subsequently, so the increment operator
is unnecessary and can be removed.

Also note that nfsd_read_actor() was renamed nfsd_splice_actor()
by commit cf8208d0 ("sendfile: convert nfsd to
splice_direct_to_actor()").

Link: https://lkml.kernel.org/r/20210325114228.27719-7-mgorman@techsingularity.net
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Alexander Lobakin <alobakin@pm.me>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: David Miller <davem@davemloft.net>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit ab836264)
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
Reviewed-by: Tong Tiangen <tongtiangen@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Parent 6c1bad0f
net/sunrpc/svc_xprt.c
@@ -642,7 +642,7 @@ static void svc_check_conn_limits(struct svc_serv *serv)
 static int svc_alloc_arg(struct svc_rqst *rqstp)
 {
 	struct svc_serv *serv = rqstp->rq_server;
-	struct xdr_buf *arg;
+	struct xdr_buf *arg = &rqstp->rq_arg;
 	int pages;
 	int i;
@@ -667,11 +667,10 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
 		}
 		rqstp->rq_pages[i] = p;
 	}
-	rqstp->rq_page_end = &rqstp->rq_pages[i];
-	rqstp->rq_pages[i++] = NULL; /* this might be seen in nfs_read_actor */
+	rqstp->rq_page_end = &rqstp->rq_pages[pages];
+	rqstp->rq_pages[pages] = NULL; /* this might be seen in nfsd_splice_actor() */

 	/* Make arg->head point to first page and arg->pages point to rest */
-	arg = &rqstp->rq_arg;
 	arg->head[0].iov_base = page_address(rqstp->rq_pages[0]);
 	arg->head[0].iov_len = PAGE_SIZE;
 	arg->pages = rqstp->rq_pages + 1;