commit 65866f82 authored by Chuck Lever, committed by Anna Schumaker

xprtrdma: Reduce the number of hardway buffer allocations

While marshaling an RPC/RDMA request, the inline_{rsize,wsize}
settings determine whether an inline request is used, or whether
read or write chunk lists are built. The current default value of
these settings is 1024. Any RPC request smaller than 1024 bytes is
sent to the NFS server completely inline.
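In rough pseudocode, the choice described above looks like the sketch
below; the helper name and parameters are illustrative, not the actual
xprtrdma marshaling code:

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative sketch only: the real marshaling path considers
     * more than total message length. */
    static bool rpc_fits_inline(size_t rpc_len, size_t inline_threshold)
    {
            /* With the default threshold of 1024, any RPC of up to
             * 1024 bytes is sent completely inline; anything larger
             * is described with read or write chunk lists. */
            return rpc_len <= inline_threshold;
    }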

rpcrdma_buffer_create() allocates and pre-registers a set of RPC
buffers for each transport instance, also based on the inline rsize
and wsize settings.

RPC/RDMA requests and replies are built in these buffers. However,
if an RPC/RDMA request is expected to be larger than 1024, a buffer
has to be allocated and registered for that RPC, and deregistered
and released when the RPC is complete. This is known as a
"hardway allocation."

Since the introduction of NFSv4, the size of RPC requests has become
larger, and hardway allocations are thus more frequent. Hardway
allocations are significant overhead, and they waste the existing
RPC buffers pre-allocated by rpcrdma_buffer_create().

We'd like fewer hardway allocations.

Increasing the size of the pre-registered buffers is the most direct
way to do this. However, a blanket increase of the inline thresholds
has interoperability consequences.

On my 64-bit system, rpcrdma_buffer_create() requests roughly 7000
bytes for each RPC request buffer, using kmalloc(). Due to internal
fragmentation, this wastes nearly 1200 bytes: kmalloc() satisfies a
7000-byte request from its 8192-byte size class, and the extra space
goes unused.
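That slack is directly observable with ksize(), which reports the
usable size of a kmalloc'd object. A minimal debugging sketch, assuming
kernel context with <linux/slab.h> available (not part of this patch):

    void *p = kmalloc(7000, GFP_KERNEL);

    if (p) {
            /* ksize() reports what the allocator actually set aside;
             * on a configuration like the one described above it
             * prints 8192, i.e. ~1200 bytes of slack. */
            pr_info("requested 7000, usable %zu\n", ksize(p));
            kfree(p);
    }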

So let's round up the size of the pre-allocated buffers, and make
use of the unused space in the kmalloc'd memory.
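Concretely, the patch sizes each buffer as 1 << fls(len), rounding up
to the next power of two so the registered length matches what
kmalloc() hands back. A minimal user-space sketch of that arithmetic,
with a stand-in for the kernel's fls() (names are illustrative):

    #include <stdio.h>

    /* User-space stand-in for the kernel's fls(): returns the 1-based
     * position of the most significant set bit, or 0 for x == 0. */
    static int fls_demo(unsigned long x)
    {
            int pos = 0;

            while (x) {
                    x >>= 1;
                    pos++;
            }
            return pos;
    }

    int main(void)
    {
            /* Roughly the request-buffer size described above. */
            unsigned long len = 7000;

            /* 1 << fls(x) rounds up to the next power of two (an
             * exact power of two is doubled), so 7000 becomes 8192,
             * the size kmalloc() would return anyway. */
            printf("len %lu -> %lu\n", len, 1UL << fls_demo(len));
            return 0;
    }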

This change reduces the amount of hardway allocated memory for an
NFSv4 general connectathon run from 1322092 bytes to 9472 bytes, a
reduction of more than 99%.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Tested-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
parent 8301a2c0
@@ -50,6 +50,7 @@
 #include <linux/interrupt.h>
 #include <linux/pci.h> /* for Tavor hack below */
 #include <linux/slab.h>
+#include <asm/bitops.h>
 
 #include "xprt_rdma.h"
@@ -1005,7 +1006,7 @@ rpcrdma_buffer_create(struct rpcrdma_buffer *buf, struct rpcrdma_ep *ep,
 	struct rpcrdma_ia *ia, struct rpcrdma_create_data_internal *cdata)
 {
 	char *p;
-	size_t len;
+	size_t len, rlen, wlen;
 	int i, rc;
 	struct rpcrdma_mw *r;
@@ -1120,16 +1121,16 @@ rpcrdma_buffer_create(struct rpcrdma_buffer *buf, struct rpcrdma_ep *ep,
 	 * Allocate/init the request/reply buffers. Doing this
 	 * using kmalloc for now -- one for each buf.
 	 */
+	wlen = 1 << fls(cdata->inline_wsize + sizeof(struct rpcrdma_req));
+	rlen = 1 << fls(cdata->inline_rsize + sizeof(struct rpcrdma_rep));
+	dprintk("RPC: %s: wlen = %zu, rlen = %zu\n",
+		__func__, wlen, rlen);
 	for (i = 0; i < buf->rb_max_requests; i++) {
 		struct rpcrdma_req *req;
 		struct rpcrdma_rep *rep;
 
-		len = cdata->inline_wsize + sizeof(struct rpcrdma_req);
-		/* RPC layer requests *double* size + 1K RPC_SLACK_SPACE! */
-		/* Typical ~2400b, so rounding up saves work later */
-		if (len < 4096)
-			len = 4096;
-		req = kmalloc(len, GFP_KERNEL);
+		req = kmalloc(wlen, GFP_KERNEL);
 		if (req == NULL) {
 			dprintk("RPC: %s: request buffer %d alloc"
 				" failed\n", __func__, i);
@@ -1141,16 +1142,16 @@ rpcrdma_buffer_create(struct rpcrdma_buffer *buf, struct rpcrdma_ep *ep,
 		buf->rb_send_bufs[i]->rl_buffer = buf;
 
 		rc = rpcrdma_register_internal(ia, req->rl_base,
-			len - offsetof(struct rpcrdma_req, rl_base),
+			wlen - offsetof(struct rpcrdma_req, rl_base),
 			&buf->rb_send_bufs[i]->rl_handle,
 			&buf->rb_send_bufs[i]->rl_iov);
 		if (rc)
 			goto out;
 
-		buf->rb_send_bufs[i]->rl_size = len-sizeof(struct rpcrdma_req);
+		buf->rb_send_bufs[i]->rl_size = wlen -
+					sizeof(struct rpcrdma_req);
 
-		len = cdata->inline_rsize + sizeof(struct rpcrdma_rep);
-		rep = kmalloc(len, GFP_KERNEL);
+		rep = kmalloc(rlen, GFP_KERNEL);
 		if (rep == NULL) {
 			dprintk("RPC: %s: reply buffer %d alloc failed\n",
 				__func__, i);
@@ -1162,7 +1163,7 @@ rpcrdma_buffer_create(struct rpcrdma_buffer *buf, struct rpcrdma_ep *ep,
 		buf->rb_recv_bufs[i]->rr_buffer = buf;
 
 		rc = rpcrdma_register_internal(ia, rep->rr_base,
-			len - offsetof(struct rpcrdma_rep, rr_base),
+			rlen - offsetof(struct rpcrdma_rep, rr_base),
 			&buf->rb_recv_bufs[i]->rr_handle,
 			&buf->rb_recv_bufs[i]->rr_iov);
 		if (rc)