- 14 July 2016, 1 commit
-
-
Submitted by Trond Myklebust
Allow the user to limit the number of requests serviced through a single connection, to help prevent faster clients from starving slower clients. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
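A minimal sketch of the idea, for orientation only: the parameter and helper names below (max_inflight, nr_inflight, svc_xprt_may_receive) are hypothetical, not the merged interface.

```c
#include <linux/atomic.h>
#include <linux/sunrpc/svc.h>
#include <linux/sunrpc/svc_xprt.h>

/* Hypothetical sketch: cap concurrent requests per transport so one
 * fast client cannot monopolize the service threads. */
static bool svc_xprt_may_receive(struct svc_xprt *xprt, int max_inflight,
				 atomic_t *nr_inflight)
{
	/* 0 means "no limit", preserving the old behaviour */
	if (!max_inflight)
		return true;

	/* Defer servicing this transport while it already has too many
	 * requests in flight, so other clients get a turn. */
	return atomic_read(nr_inflight) < max_inflight;
}
```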
-
- 05 April 2016, 1 commit
-
-
Submitted by Kirill A. Shutemov
Mostly direct substitution, with occasional adjustment or removal of outdated comments. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Acked-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 11 August 2015, 7 commits
-
-
Submitted by Jeff Layton
In later patches, we'll want to be able to allocate and free svc_rqst structures without monkeying with the serv->sv_nrthreads refcount. Factor those pieces out of their respective functions. Signed-off-by: Shirley Ma <shirley.ma@oracle.com> Acked-by: Jeff Layton <jlayton@primarydata.com> Tested-by: Shirley Ma <shirley.ma@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
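The shape of the split, as described (a sketch; the exact signatures in svc.c may differ):

```c
#include <linux/sunrpc/svc.h>

/* Allocation and freeing of a svc_rqst, factored out of
 * svc_prepare_thread()/svc_exit_thread() so they can be called
 * without touching serv->sv_nrthreads: */
struct svc_rqst *svc_rqst_alloc(struct svc_serv *serv,
				struct svc_pool *pool, int node);
void svc_rqst_free(struct svc_rqst *rqstp);

/* svc_prepare_thread() then reduces to: call svc_rqst_alloc(), bump
 * serv->sv_nrthreads, and link the new rqst into the pool's lists. */
```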
-
Submitted by Jeff Layton
In later patches, we're going to need to allow code external to svc.c to figure out what pool_mode is in use. Move these definitions into svc.h to prepare for that. Also, make the svc_pool_map object available and export it so that other modules can peek in there to get insight into what pool mode is in use. Likewise, export the svc_pool_map_get/put functions to make it safe to do so. Signed-off-by: Shirley Ma <shirley.ma@oracle.com> Acked-by: Jeff Layton <jlayton@primarydata.com> Tested-by: Shirley Ma <shirley.ma@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by Jeff Layton
Add an operation that will handle setup of the service. In the case of a classic thread-based service, that means starting up threads. In the case of a workqueue-based service, the setup will do something different. Signed-off-by: Shirley Ma <shirley.ma@oracle.com> Acked-by: Jeff Layton <jlayton@primarydata.com> Tested-by: Shirley Ma <shirley.ma@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by Jeff Layton
For now, all services use svc_xprt_do_enqueue, but once we add workqueue-based service support, we'll need to do something different. Signed-off-by: Shirley Ma <shirley.ma@oracle.com> Acked-by: Jeff Layton <jlayton@primarydata.com> Tested-by: Shirley Ma <shirley.ma@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by Jeff Layton
...not technically an operation, but it's more convenient and cleaner to pass the module pointer in this struct. Signed-off-by: Shirley Ma <shirley.ma@oracle.com> Acked-by: Jeff Layton <jlayton@primarydata.com> Tested-by: Shirley Ma <shirley.ma@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by Jeff Layton
Since we now have a container for holding svc_serv operations, move the sv_function into it as well. Signed-off-by: Shirley Ma <shirley.ma@oracle.com> Acked-by: Jeff Layton <jlayton@primarydata.com> Tested-by: Shirley Ma <shirley.ma@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by Jeff Layton
In later patches we'll need to abstract out more operations on a per-service level, besides sv_shutdown and sv_function. Declare a new svc_serv_ops struct to hold these operations, and move sv_shutdown into this struct. Signed-off-by: Shirley Ma <shirley.ma@oracle.com> Acked-by: Jeff Layton <jlayton@primarydata.com> Tested-by: Shirley Ma <shirley.ma@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
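Taken together, the series of commits above builds up a container along these lines (a sketch assembled from the commit messages; the exact layout in svc.h may differ):

```c
#include <linux/module.h>

struct svc_serv;
struct svc_pool;
struct svc_xprt;
struct net;

/* Sketch of the resulting per-service operations container: */
struct svc_serv_ops {
	/* callback to use when the last thread exits */
	void	(*svo_shutdown)(struct svc_serv *serv, struct net *net);
	/* function for service threads to run */
	int	(*svo_function)(void *data);
	/* queue up a transport for servicing */
	void	(*svo_enqueue_xprt)(struct svc_xprt *xprt);
	/* set up the execution context (start threads, etc.) */
	int	(*svo_setup)(struct svc_serv *serv, struct svc_pool *pool,
			     int nrservs);
	/* optional module to count when adding threads */
	struct module *svo_module;
};
```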
-
- 23 January 2015, 1 commit
-
-
Submitted by Jeff Layton
The BKL is completely out of the picture in the lockd and sunrpc code these days. Update the antiquated comments that refer to it. Signed-off-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 10 December 2014, 10 commits
-
-
Submitted by Jeff Layton
Testing has shown that pool->sp_lock can be a bottleneck on a busy server. Every time data is received on a socket, the server must take that lock in order to dequeue a thread from the sp_threads list.

Address this problem by eliminating the sp_threads list (which contains threads that are currently idle) and replacing it with a RQ_BUSY flag in svc_rqst. This allows us to walk the sp_all_threads list under the rcu_read_lock and find a suitable thread for the xprt by doing a test_and_set_bit.

Note that we still have a potential atomicity problem with this approach. We don't want svc_xprt_do_enqueue to set the rqst->rq_xprt pointer unless a test_and_set_bit of RQ_BUSY returned zero (which indicates that the thread was idle). But by the time we check that, the bit could be flipped by a waking thread. To address this, we acquire a new per-rqst spinlock (rq_lock) and take that before doing the test_and_set_bit. If that returns false, then we can set rq_xprt and drop the spinlock. Then, when the thread wakes up, it must set the bit under the same spinlock and can trust that if it was already set then the rq_xprt is also properly set.

With this scheme, the case where we have an idle thread no longer needs to take the highly contended pool->sp_lock at all, and that removes the bottleneck.

That still leaves one issue: what of the case where we walk the whole sp_all_threads list and don't find an idle thread? Because the search is lockless, it's possible for the queueing to race with a thread that is going to sleep. To address that, we queue the xprt and then search again. If we find an idle thread at that point, we can't attach the xprt to it directly, since that might race with a different thread waking up and finding it. All we can do is wake the idle thread back up and let it attempt to find the now-queued xprt.

Signed-off-by: Jeff Layton <jlayton@primarydata.com> Tested-by: Chris Worley <chris.worley@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
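A sketch of the lockless search just described (simplified; reference counting on the xprt and the requeue-on-miss path are omitted):

```c
#include <linux/rculist.h>
#include <linux/sched.h>
#include <linux/sunrpc/svc.h>
#include <linux/sunrpc/svc_xprt.h>

static bool svc_find_idle_thread(struct svc_pool *pool, struct svc_xprt *xprt)
{
	struct svc_rqst *rqstp;

	rcu_read_lock();
	list_for_each_entry_rcu(rqstp, &pool->sp_all_threads, rq_all) {
		/* rq_lock closes the race with a thread waking on its
		 * own: only the side that flips RQ_BUSY from 0 to 1
		 * may set rq_xprt. */
		spin_lock_bh(&rqstp->rq_lock);
		if (test_and_set_bit(RQ_BUSY, &rqstp->rq_flags)) {
			spin_unlock_bh(&rqstp->rq_lock);
			continue;	/* thread is busy, keep looking */
		}
		rqstp->rq_xprt = xprt;
		spin_unlock_bh(&rqstp->rq_lock);

		wake_up_process(rqstp->rq_task);
		rcu_read_unlock();
		return true;
	}
	rcu_read_unlock();
	return false;	/* caller queues the xprt and searches again */
}
```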
-
Submitted by Jeff Layton
In a later patch, we'll be removing some spinlocking around the socket and thread queueing code in order to fix some contention problems. At that point, the stats counters will no longer be protected by the sp_lock. Change the counters to atomic_long_t fields, except for the "sockets_queued" counter, which will still be manipulated under a spinlock. Signed-off-by: Jeff Layton <jlayton@primarydata.com> Tested-by: Chris Worley <chris.worley@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
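A sketch of the resulting counter layout (field names per the sunrpc pool stats of that era; treat as illustrative):

```c
#include <linux/atomic.h>

struct svc_pool_stats {
	atomic_long_t	packets;
	unsigned long	sockets_queued;		/* still under sp_lock */
	atomic_long_t	threads_woken;
	atomic_long_t	threads_timedout;
};

/* update sites then need no lock, e.g.:
 *	atomic_long_inc(&pool->sp_stats.packets);
 */
```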
-
Submitted by Jeff Layton
...also make the manipulation of the sp_all_threads list use RCU-friendly functions. Signed-off-by: Jeff Layton <jlayton@primarydata.com> Tested-by: Chris Worley <chris.worley@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
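The RCU-friendly pattern this refers to, sketched below; rq_rcu_head as the name of the rcu_head embedded in svc_rqst is my assumption:

```c
#include <linux/rculist.h>
#include <linux/sunrpc/svc.h>

static void example_pool_add(struct svc_pool *pool, struct svc_rqst *rqstp)
{
	/* writers still serialize on sp_lock... */
	spin_lock_bh(&pool->sp_lock);
	list_add_rcu(&rqstp->rq_all, &pool->sp_all_threads);
	spin_unlock_bh(&pool->sp_lock);
}

static void example_pool_remove(struct svc_pool *pool, struct svc_rqst *rqstp)
{
	spin_lock_bh(&pool->sp_lock);
	list_del_rcu(&rqstp->rq_all);
	spin_unlock_bh(&pool->sp_lock);
	/* ...and the rqst is freed only after a grace period, so
	 * lockless walkers of sp_all_threads never touch freed memory */
	kfree_rcu(rqstp, rq_rcu_head);
}
```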
-
Submitted by Jeff Layton
In a later patch, we'll want to be able to handle this flag without holding the sp_lock. Change this field to an unsigned long flags field, and declare a new flag in it that can be managed with atomic bitops. Signed-off-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
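Sketched below, using SP_TASK_PENDING as the flag name that, as far as I recall, landed in svc.h:

```c
#include <linux/sunrpc/svc.h>

/* One bit in pool->sp_flags (an unsigned long), managed with atomic
 * bitops instead of under sp_lock: */
static void example_mark_pending(struct svc_pool *pool)
{
	set_bit(SP_TASK_PENDING, &pool->sp_flags);
}

static bool example_take_pending(struct svc_pool *pool)
{
	/* atomically consume the flag */
	return test_and_clear_bit(SP_TASK_PENDING, &pool->sp_flags);
}
```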
-
Submitted by Jeff Layton
There are a couple of holes in struct svc_rqst on x86_64. Move rq_cachetype to a different location to eliminate both of them. Signed-off-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by Jeff Layton
Signed-off-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by Jeff Layton
Signed-off-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by Jeff Layton
Signed-off-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by Jeff Layton
Signed-off-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by Jeff Layton
In a later patch, we're going to need some atomic bit flags. Since that field will need to be an unsigned long, we mitigate the space consumption by migrating some other bitflags to the new field. Start with the rq_secure flag. Signed-off-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 18 August 2014, 1 commit
-
-
Submitted by Trond Myklebust
We're always _only_ waking up tasks from within the sp_threads list, so we know that they are enqueued and alive. The rq_wait waitqueue is just a distraction with extra atomic semantics. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 23 June 2014, 1 commit
-
-
Submitted by Kinglong Mee
rq_usedeferral and rq_splice_ok are only ever used as 0 and 1, so define them as bool. Signed-off-by: Kinglong Mee <kinglongmee@gmail.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 31 May 2014, 2 commits
-
-
Submitted by J. Bruce Fields
RPC_MAX_AUTH_SIZE is scattered around several places. Better to set it once in the auth code, where this kind of estimate should be made. And while we're at it, we can leave it zero when we're not using krb5i or krb5p. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by J. Bruce Fields
After this we can handle, for example, getattr of very large ACLs. Read, readdir, and readlink are still special cases with their own limits. Also, we can't handle a new operation starting close to the end of a page. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 23 May 2014, 1 commit
-
-
Submitted by NeilBrown
If an incoming NFS request is coming from the local host, then nfsd will need to perform some special handling. So detect that possibility and make the source visible in rq_local. Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
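Consumers then simply test the new field; how the transport code derives it from the connection is elided here (a sketch, with a hypothetical helper name):

```c
#include <linux/sunrpc/svc.h>

/* Hypothetical helper: rq_local is set at receive time by the
 * transport when the peer is the local host. */
static bool example_request_is_local(const struct svc_rqst *rqstp)
{
	return rqstp->rq_local;
}
```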
-
- 04 January 2014, 1 commit
-
-
Submitted by Kinglong Mee
NFSv4 clients can contact port 2049 directly instead of needing the portmapper. Therefore a failure to register with the portmapper when starting an NFSv4-only server isn't really a problem. But Gareth Williams reports that an attempt to start an NFSv4-only server without starting portmap fails:

```
# rpc.nfsd -N 2 -N 3
rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)
rpc.nfsd: unable to set any sockets for nfsd
```

Add a flag to svc_version to tell the rpc layer it can safely ignore an rpcbind failure in the NFSv4-only case. Reported-by: Gareth Williams <gareth@garethwilliams.me.uk> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Kinglong Mee <kinglongmee@gmail.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
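The flag, as I recall it landing in struct svc_version, was a one-bit field; sketched here from memory, so treat the field name as an assumption:

```c
#include <linux/types.h>

/* Excerpt-style sketch of struct svc_version with the new bit: */
struct example_svc_version {
	u32	vs_vers;		/* RPC version number */
	/* ... existing fields ... */
	/* rpcbind registration is optional for this version: a
	 * failure to register is then not fatal, so an NFSv4-only
	 * nfsd can start without portmap/rpcbind running. */
	u32	vs_rpcb_optnl:1;
};
```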
-
- 11 December 2013, 1 commit
-
-
Submitted by Weng Meiling
Signed-off-by: Weng Meiling <wengmeiling.weng@huawei.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 31 August 2013, 1 commit
-
-
Submitted by J. Bruce Fields
I forgot to remove this in afc59400 ("nfsd4: cleanup: replace rq_resused count by rq_next_page pointer"). Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 24 January 2013, 1 commit
-
-
Submitted by Andriy Skulysh
There is a race between enqueueing a thread to a pool and waking up a thread: lockd doesn't wake up on reception of a lock-granted callback if svc_wake_up() is called before lockd's thread is added to a pool. Signed-off-by: Andriy Skulysh <Andriy_Skulysh@xyratex.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 18 December 2012, 1 commit
-
-
Submitted by J. Bruce Fields
It may be a matter of personal taste, but I find this makes the code clearer. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 28 July 2012, 1 commit
-
-
Submitted by Stanislav Kinsbursky
This is a cleanup patch that makes the code simpler: it replaces the widely used rqstp->rq_xprt->xpt_net with the newly introduced SVC_NET(rqstp). Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
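The helper itself is a one-line macro along these lines (the parenthesization is my own; the merged macro may differ cosmetically):

```c
#include <linux/sunrpc/svc_xprt.h>

#define SVC_NET(rqst)	((rqst)->rq_xprt->xpt_net)

/* so that call sites shrink from
 *	net = rqstp->rq_xprt->xpt_net;
 * to
 *	net = SVC_NET(rqstp);
 */
```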
-
- 01 June 2012, 2 commits
-
-
Submitted by J. Bruce Fields
Move rq_flavor into struct svc_cred, and use it in the setclientid and exchange_id comparisons as well. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Submitted by Stanislav Kinsbursky
This new routine is responsible for service registration in a specified network context. The idea is to separate service creation from per-net operations. Note also that since registering the service with svc_bind() can fail, the service will be destroyed, and during destruction it will try to unregister itself from rpcbind; in that case the unregistration has to be skipped. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
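The caller pattern described above, sketched (svc_bind()'s signature follows the description; the error-path comment paraphrases the commit message):

```c
#include <linux/sunrpc/svc.h>

static int example_start(struct svc_serv *serv, struct net *net)
{
	/* per-net rpcbind registration, split out of service creation */
	int error = svc_bind(serv, net);

	if (error < 0)
		/* registration never happened, so destruction must
		 * skip the rpcbind unregistration step */
		svc_destroy(serv);
	return error;
}
```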
-
- 15 February 2012, 1 commit
-
-
Submitted by Stanislav Kinsbursky
This patch introduces per-net Lockd initialization and destruction routines. The logic is the same as in the global Lockd up and down routines. The solution is probably not the best one, but at least it is clear: the per-net "up" routine is called only when lockd is already running, and if per-net resources are not yet allocated, the service is registered with the local portmapper and the lockd sockets are created. The per-net "down" routine is called on every lockd_down() call while the global users counter is nonzero. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 01 February 2012, 2 commits
-
-
Submitted by Stanislav Kinsbursky
On service shutdown we can be sure that no users of it are left except the current one. Thus it looks safe to use the current network namespace context in this case. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Stanislav Kinsbursky
The Lockd and NFSd services will handle requests from and to many network namespaces, and thus have to be registered and unregistered per network namespace. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 25 October 2011, 1 commit
-
-
Submitted by Stanislav Kinsbursky
We have to call svc_rpcb_cleanup() explicitly from nfsd_last_thread(), since this function is registered as the service shutdown callback and thus nobody else will do it for us. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 14 September 2011, 1 commit
-
-
Submitted by Mi Jinlong
For an IPv6 local address, lockd cannot call back to the client because the scope id is missing when the address is bound in inet6_bind:

```c
if (addr_type & IPV6_ADDR_LINKLOCAL) {
	if (addr_len >= sizeof(struct sockaddr_in6) &&
	    addr->sin6_scope_id) {
		/* Override any existing binding, if another one
		 * is supplied by user.
		 */
		sk->sk_bound_dev_if = addr->sin6_scope_id;
	}

	/* Binding to link-local address requires an interface */
	if (!sk->sk_bound_dev_if) {
		err = -EINVAL;
		goto out_unlock;
	}
```

Replacing svc_addr_u with sockaddr_storage lets rqstp->rq_daddr carry more information besides the address. Reviewed-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Mi Jinlong <mijinlong@cn.fujitsu.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 20 August 2011, 1 commit
-
-
Submitted by Eric Dumazet
Use NUMA-aware allocations to reduce latencies and increase throughput. sunrpc kthreads can use kthread_create_on_node() if pool_mode is "percpu" or "pernode", and svc_prepare_thread()/svc_init_buffer() can also take NUMA node affinity into account for memory allocations. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> CC: "J. Bruce Fields" <bfields@fieldses.org> CC: Neil Brown <neilb@suse.de> CC: David Miller <davem@davemloft.net> Reviewed-by: Greg Banks <gnb@fastmail.fm> [bfields@redhat.com: fix up caller nfs41_callback_up] Signed-off-by: J. Bruce Fields <bfields@redhat.com>
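A sketch of the kthread piece: example_threadfn stands in for the service's actual thread function (e.g. nfsd's), and the node argument is the NUMA node the pool serves.

```c
#include <linux/kthread.h>

static int example_threadfn(void *data);	/* the service's thread fn */

/* Create the service kthread on the NUMA node its pool serves, so the
 * task_struct and stack end up node-local. */
static struct task_struct *example_spawn(void *rqstp, int node)
{
	return kthread_create_on_node(example_threadfn, rqstp, node, "nfsd");
}
```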
-
- 18 July 2011, 1 commit
-
-
Submitted by J. Bruce Fields
It's sort of ridiculous that we've never had a working reply cache for NFSv4. On the other hand, we may still not: our current reply cache is likely not very good, especially in the TCP case (which is the only case that matters for v4). What we really need here is some serious testing. Anyway, here's a start. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-