- 27 January 2007, 1 commit

Submitted by NeilBrown
NFSd assumes that the largest number of pages that will be needed for a request plus its response is 2+N, where N pages is the size of the largest permitted read/write request. The '2' are one page for the non-data part of the request and one for the non-data part of the reply. However, when a read request is not page-aligned and we choose to use ->sendfile to send it directly from the page cache, we may need N+1 pages to hold the whole reply. This can overflow the array and cause an Oops.

This patch increases the size of the array for holding pages by one and makes sure that entry is NULL when it is not in use.

Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
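A back-of-the-envelope sketch of the sizing, with a hypothetical constant standing in for the real payload limit:

```c
/* Illustrative arithmetic only; PAYLOAD_PAGES is a hypothetical stand-in
 * for the real limit.  A read of N pages that starts mid-page straddles
 * N+1 distinct page-cache pages, so the old 2+N array is one entry short. */
#define PAYLOAD_PAGES	32			/* N: largest read, in pages */
#define OLD_MAXPAGES	(PAYLOAD_PAGES + 2)	/* N + req hdr + reply hdr */
#define NEW_MAXPAGES	(PAYLOAD_PAGES + 2 + 1)	/* + 1 for an unaligned read */
```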
-
- 21 October 2006, 1 commit

Submitted by Al Viro
svc_procfunc instances return __be32, not int.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Trond Myklebust <trond.myklebust@fys.uio.no>
Acked-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
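After this change the handler typedef plausibly reads as below (a sketch; surrounding declarations elided):

```c
struct svc_rqst;

/* Procedure handlers return a network-endian (__be32) status rather than
 * a plain int, so sparse can type-check byte order through the server. */
typedef __be32 (*svc_procfunc)(struct svc_rqst *rqstp,
			       void *argp, void *resp);
```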
-
- 06 October 2006, 1 commit

Submitted by NeilBrown
There is some confusion about the meaning of 'bufsz' for a sunrpc server. In some cases it is the largest message that can be sent or received. In other cases it is the largest 'payload' that can be included in an NFS message. In either case, it is not possible for both the request and the reply to be this large; one of the request or reply may only be one page long, which fits nicely with NFS.

So we remove 'bufsz' and replace it with two numbers: 'max_payload' and 'max_mesg'. max_payload is the size that the server requests; it is used by the server to check the maximum size allowed on a particular connection (depending on the protocol, a lower limit might be used). max_mesg is the largest single message that can be sent or received. It is calculated as max_payload rounded up to a multiple of PAGE_SIZE, plus PAGE_SIZE for overhead. Only one of the request and reply may be this size; the other must be at most one page.

Cc: Greg Banks <gnb@sgi.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
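Taking the description literally, the message limit would be derived roughly as follows (a standalone sketch; the helper name is invented for illustration):

```c
/* Sketch of the sizing rule above: round the requested payload up to
 * whole pages, then add one page for the non-data parts of the message.
 * svc_calc_max_mesg() is a hypothetical name, not a kernel function. */
static unsigned int svc_calc_max_mesg(unsigned int max_payload,
				      unsigned int page_size)
{
	unsigned int pages = (max_payload + page_size - 1) / page_size;

	return pages * page_size + page_size;	/* max_mesg */
}
```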
-
- 04 October 2006, 4 commits

Submitted by Olaf Kirch
The NFSACL patches introduced support for multiple RPC services listening on the same transport. However, only the first of these services was registered with portmapper. This was perfectly fine for nfsacl, as you traditionally do not want these to show up in a portmapper listing. The patch below changes the default behavior to always register all services listening on a given transport, but retains the old behavior for nfsacl services.

Signed-off-by: Olaf Kirch <okir@suse.de>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
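One plausible shape for the nfsacl opt-out is a per-version flag consulted while walking the program chain at registration time (a sketch; the vs_hidden flag name is an assumption):

```c
/* Sketch: register every program listening on the transport, unless a
 * version is marked hidden (as nfsacl would be).  vs_hidden is assumed. */
struct svc_program *progp;
unsigned int i;

for (progp = serv->sv_program; progp; progp = progp->pg_next)
	for (i = 0; i < progp->pg_nvers; i++) {
		if (progp->pg_vers[i] == NULL)
			continue;
		if (progp->pg_vers[i]->vs_hidden)
			continue;	/* keep out of the portmapper listing */
		/* ... register progp->pg_prog, version i, with portmap ... */
	}
```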
-
Submitted by Greg Banks
The limit over UDP remains at 32K. Also, make some of the apparently arbitrary sizing constants clearer.

The biggest change here involves replacing NFSSVC_MAXBLKSIZE by a function of the rqstp. This allows it to be different for different protocols (udp/tcp) and also allows it to depend on the server's declared sv_bufsz. Note that we don't actually increase sv_bufsz for nfs yet; that comes next.

Signed-off-by: Greg Banks <gnb@melbourne.sgi.com>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
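A sketch of what "a function of the rqstp" could look like; the limits and field names are assumptions based on the text above:

```c
/* Sketch: the per-request payload limit depends on the transport and on
 * the server's declared buffer size.  Constants are assumed values. */
static u32 svc_max_payload_sketch(const struct svc_rqst *rqstp)
{
	u32 max = RPCSVC_MAXPAYLOAD_TCP;	/* larger limit over TCP */

	if (rqstp->rq_prot == IPPROTO_UDP)
		max = RPCSVC_MAXPAYLOAD_UDP;	/* UDP stays at 32K */
	if (rqstp->rq_server->sv_bufsz < max)
		max = rqstp->rq_server->sv_bufsz;
	return max;
}
```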
-
Submitted by NeilBrown
.. by allocating the array of 'kvec' in 'struct svc_rqst'.

As we plan to increase RPCSVC_MAXPAGES from 8 up to 256, we can no longer allocate an array of this size on the stack, so we allocate it in 'struct svc_rqst'. However svc_rqst contains (indirectly) an array of the same type and size (actually several, but they are in a union). So rather than waste space, we move those arrays out of the separately allocated union and into svc_rqst to share with the kvec moved out of svc_tcp_recvfrom (the various arrays are used at different times, so there is no conflict).

Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
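The resulting layout is roughly this (a sketch; exact field names are assumptions):

```c
/* Sketch: the kvec that used to live on svc_tcp_recvfrom()'s stack now
 * sits in the per-thread request, next to the page array moved out of
 * the separately allocated union so the space is shared, not doubled. */
struct svc_rqst {
	/* ... */
	struct kvec	rq_vec[RPCSVC_MAXPAGES];	/* formerly on the stack */
	struct page	*rq_pages[RPCSVC_MAXPAGES];	/* moved out of the union */
	/* ... */
};
```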
-
Submitted by NeilBrown
We are planning to increase RPCSVC_MAXPAGES from about 8 to about 256. This means we need to be a bit careful about arrays of size RPCSVC_MAXPAGES.

struct svc_rqst contains two such arrays. However, there are never more than RPCSVC_MAXPAGES pages in the two arrays together, so only one array is needed. The two arrays are for the pages holding the request and the pages holding the reply. Instead of two arrays, we can simply keep an index into where the first reply page is.

This patch also removes a number of small inline functions that probably serve to obscure what is going on rather than clarify it, and open-codes the needed functionality. Also remove the 'rq_restailpage' variable, as it is *always* 0; i.e. if the response 'xdr' structure has a non-empty tail, it is always in the same pages as the head.

Leftover notes from the original changelog:
- check counters are initialised and incremented properly
- check for consistent usage of ++ etc.
- maybe extract some inlines for a common approach
- general review

Signed-off-by: Neil Brown <neilb@suse.de>
Cc: Magnus Maatta <novell@kiruna.se>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
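Grounded in the description, the two fixed-size arrays collapse into one plus a marker for where the reply starts (a sketch):

```c
/* Sketch: a single page array serves both request and reply; rq_respages
 * points at the first reply page within it, replacing the second array. */
struct svc_rqst {
	/* ... */
	struct page	*rq_pages[RPCSVC_MAXPAGES];
	struct page	**rq_respages;		/* points into rq_pages[] */
	/* ... */
};
```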
-
- 02 October 2006, 7 commits

Submitted by Greg Banks
Actually implement multiple pools. On NUMA machines, allocate a svc_pool per NUMA node; on SMP, a svc_pool per CPU; otherwise, a single global pool. Enqueue sockets on the svc_pool corresponding to the CPU on which the socket bh is run (i.e. the NIC interrupt CPU). Threads have their cpu mask set to limit them to the CPUs in the svc_pool that owns them.

This is the patch that allows an Altix to scale NFS traffic linearly beyond 4 CPUs and 4 NICs. Incorporates changes and feedback from Neil Brown, Trond Myklebust, and Christoph Hellwig.

Signed-off-by: Greg Banks <gnb@melbourne.sgi.com>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
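The enqueueing rule described above might look roughly like this (a sketch; the helper is hypothetical):

```c
/* Sketch: map the CPU running the socket bottom half to its pool.
 * svc_pool_for_cpu_sketch() is illustrative, not the kernel function. */
static struct svc_pool *svc_pool_for_cpu_sketch(struct svc_serv *serv)
{
	unsigned int pidx = 0;			/* default: one global pool */

#ifdef CONFIG_NUMA
	pidx = cpu_to_node(smp_processor_id());	/* one pool per NUMA node */
#elif defined(CONFIG_SMP)
	pidx = smp_processor_id();		/* one pool per CPU */
#endif
	return &serv->sv_pools[pidx % serv->sv_nrpools];
}
```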
-
Submitted by Greg Banks
Currently knfsd keeps its own list of all nfsd threads in nfssvc.c; add a new way of managing the list of all threads in a svc_serv. Add svc_create_pooled() to allow creation of a svc_serv whose threads are managed by the sunrpc code. Add svc_set_num_threads() to manage the number of threads in a service, either per-pool or globally across the service.

Signed-off-by: Greg Banks <gnb@melbourne.sgi.com>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
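A hedged usage sketch of the two new entry points; the exact signatures and the placeholder names (shutdown_fn, nfsd_thread_fn) are assumptions inferred from the description:

```c
/* Sketch only: create a pooled service, then ask sunrpc to maintain
 * eight threads across all pools (a NULL pool meaning "whole service"). */
serv = svc_create_pooled(&nfsd_program, bufsize, shutdown_fn,
			 nfsd_thread_fn, SIGINT, THIS_MODULE);
if (serv)
	err = svc_set_num_threads(serv, NULL /* all pools */, 8);
```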
-
Submitted by Greg Banks
Add svc_get() for those occasions when we need to temporarily bump up svc_serv->sv_nrthreads as a pseudo refcount.

Signed-off-by: Greg Banks <gnb@melbourne.sgi.com>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
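As described, the helper is essentially a one-line pseudo-refcount bump (a sketch):

```c
/* Sketch: bump sv_nrthreads so the svc_serv cannot be torn down while a
 * temporary user holds it; the count is dropped again via svc_destroy(). */
static inline void svc_get(struct svc_serv *serv)
{
	serv->sv_nrthreads++;
}
```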
-
Submitted by Greg Banks
Split out the list of idle threads and pending sockets from svc_serv into a new svc_pool structure, and allocate a fixed number (in this patch, 1) of pools per svc_serv. The new structure contains a lock which takes over several of the duties of svc_serv->sv_lock, which is now relegated to protecting only sv_tempsocks, sv_permsocks, and sv_tmpcnt in svc_serv.

The point is to move the hottest fields out of svc_serv and into svc_pool, allowing a following patch to arrange for a svc_pool per NUMA node or per CPU. This is a major step towards making the NFS server NUMA-friendly.

Signed-off-by: Greg Banks <gnb@melbourne.sgi.com>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
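From the fields named above, the new structure plausibly looks like this (a sketch):

```c
/* Sketch: the hot per-pool state split out of svc_serv.  sp_lock takes
 * over guarding the idle-thread and pending-socket lists from sv_lock. */
struct svc_pool {
	unsigned int		sp_id;		/* pool index */
	spinlock_t		sp_lock;	/* protects the lists below */
	struct list_head	sp_threads;	/* idle server threads */
	struct list_head	sp_sockets;	/* sockets with pending work */
};
```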
-
Submitted by Greg Banks
Following are 11 patches from Greg Banks which combine to make knfsd more NUMA-aware. They reduce hitting on 'global' data structures, and create some data structures that can be node-local.

knfsd threads are bound to a particular node, and the thread to handle a new request is chosen from the threads that are attached to the node that received the interrupt. The distribution of threads across nodes can be controlled by a new file in the 'nfsd' filesystem, though the default approach of an even spread is probably fine for most sites.

Some (old) numbers that show the efficacy of these patches. N == number of NICs == number of CPUs == number of clients; number of NUMA nodes == N/2.

    N   Throughput, MiB/s   CPU usage, % (max = N*100)
        Before  After       Before  After
    --- ------  -----       ------  -----
    4   312     435         350     228
    6   500     656         501     418
    8   562     804         690     589

This patch: Move the aging of RPC/TCP connection sockets from the main svc_recv() loop to a timer which uses a mark-and-sweep algorithm every 6 minutes. This reduces the amount of work that needs to be done in the main RPC loop and the length of time we need to hold the (effectively global) svc_serv->sv_lock.

[akpm@osdl.org: cleanup]
Signed-off-by: Greg Banks <gnb@melbourne.sgi.com>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
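A sketch of the timer-driven sweep using the 2006-era timer API; the callback name and the wiring are assumptions, while the 6-minute period comes from the text:

```c
/* Sketch: age temporary sockets from a timer rather than in svc_recv(). */
#define SVC_CONN_AGE_PERIOD	(6 * 60)	/* seconds between sweeps */

init_timer(&serv->sv_temptimer);
serv->sv_temptimer.function = svc_age_temp_sockets;	/* mark and sweep */
serv->sv_temptimer.data     = (unsigned long)serv;
mod_timer(&serv->sv_temptimer, jiffies + SVC_CONN_AGE_PERIOD * HZ);
```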
-
Submitted by NeilBrown
It isn't needed, as it is available in rqstp->rq_server, and dropping it allows some local vars to be dropped.

[akpm@osdl.org: build fix]
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Submitted by NeilBrown
nfsd has some cleanup that it wants to do when the last thread exits, and there will shortly be some more. So collect this all into one place and define a callback for an rpc service to call when the service is about to be destroyed.

[akpm@osdl.org: cleanups, build fix]
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
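Grounded in the description, the hook would hang off the service itself (a sketch):

```c
/* Sketch: a per-service shutdown callback, invoked once as the last
 * thread exits and the svc_serv is about to be destroyed. */
struct svc_serv {
	/* ... */
	void (*sv_shutdown)(struct svc_serv *serv);	/* last-thread cleanup */
	/* ... */
};
```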
-
- 29 September 2006, 2 commits

Submitted by Alexey Dobriyan
pure s/u32/__be32/

[AV: large part based on Alexey's patches]

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Alexey Dobriyan
* add svc_getnl(): take a network-endian value from the buffer, convert it to host-endian, and return it.
* add svc_putnl(): take a host-endian value, convert it to network-endian, and put it into the buffer.
* annotate svc_getu32()/svc_putu32() as dealing with network-endian values.
* convert callers to svc_getnl() and svc_putnl().

[AV: in large part it's a carved-up Alexey's patch]

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
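As described in the list above, the helpers are thin byte-order wrappers over the kvec cursor; a sketch close to the stated semantics:

```c
/* Sketch: pull a 32-bit value off the buffer, converting to host order. */
static inline u32 svc_getnl(struct kvec *iov)
{
	__be32 val, *vp = iov->iov_base;

	val = *vp++;
	iov->iov_base = vp;
	iov->iov_len -= sizeof(__be32);
	return ntohl(val);			/* network -> host */
}

/* Sketch: append a 32-bit value in network order. */
static inline void svc_putnl(struct kvec *iov, u32 val)
{
	__be32 *vp = iov->iov_base + iov->iov_len;

	*vp = htonl(val);			/* host -> network */
	iov->iov_len += sizeof(__be32);
}
```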
-
- 01 July 2006, 1 commit

Submitted by J. Bruce Fields
Add a rq_sendfile_ok flag to svc_rqst which will be cleared in the privacy case, so that the wrapping code will get copies of the read data instead of real page-cache pages. This makes life simpler when we encrypt the response.

Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
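A usage sketch: the flag is as described, but the read helpers are hypothetical names invented for illustration:

```c
/* Sketch: with RPCSEC_GSS privacy the reply is encrypted in place, so we
 * must not hand the XDR layer live page-cache pages.  Helper names are
 * invented. */
static int nfsd_read_sketch(struct svc_rqst *rqstp)
{
	if (rqstp->rq_sendfile_ok)
		return nfsd_read_via_sendfile(rqstp);	/* zero-copy pages */
	return nfsd_read_via_copy(rqstp);	/* private copies, safe to wrap */
}
```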
-
- 11 April 2006, 1 commit

Submitted by J. Bruce Fields
Every caller of svc_take_page ignores its return value and assumes it succeeded. So just WARN() instead of returning an ignored error. This would have saved some time debugging a recent nfsd4 problem.

If there are still failure cases here, then the result is probably that we overwrite an earlier part of the reply while xdr-encoding. While the corrupted reply is a nasty bug, it would be worse to panic here and create the possibility of a remote DoS; hence WARN() instead of BUG().

Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Neil Brown <neilb@suse.de>
Cc: Ingo Oeser <ioe-lkml@rameria.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 19 January 2006, 1 commit

Submitted by J. Bruce Fields
The server code currently keeps track of the destination address on every request so that it can reply using the same address. However, we forget to do that in the case of a deferred request. Remedy this oversight. From folks at PolyServe.

Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
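A sketch of the remedy: capture the destination address in the deferral record and restore it on revisit. The field name and helper names are assumptions:

```c
/* Sketch: remember where the reply must go across a deferral. */
struct svc_deferred_req {
	/* ... */
	u32	daddr;			/* reply destination; name assumed */
	/* ... */
};

static void svc_defer_sketch(struct svc_deferred_req *dr,
			     struct svc_rqst *rqstp)
{
	dr->daddr = rqstp->rq_daddr;	/* save when deferring */
}

static void svc_revisit_sketch(struct svc_deferred_req *dr,
			       struct svc_rqst *rqstp)
{
	rqstp->rq_daddr = dr->daddr;	/* restore when revisiting */
}
```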
-
- 07 November 2005, 1 commit

Submitted by NeilBrown
There are a couple of tests which could possibly be confused by extremely large numbers appearing in 'xdr' packets. I think the closest to an exploit you could get would be writing random data from a free page into a file, i.e. leaking data out of kernel space. I'm fairly sure they cannot be used for remote compromise.

Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
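The defensive pattern in question is a bounds check before trusting an on-the-wire length; a sketch with an invented decode fragment:

```c
/* Sketch of the hardening: never index buffers by a length taken straight
 * off the wire.  An absurd 'len' could otherwise walk past the receive
 * buffer and pull data from a free page into the file being written. */
u32 len = ntohl(*p++);			/* client-supplied length */
if (len > NFSSVC_MAXBLKSIZE)
	return 0;			/* xdr decode failure: reject */
```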
-
- 23 June 2005, 2 commits

Submitted by Andreas Gruenbacher
This adds functions for encoding and decoding POSIX ACLs for the NFSACL protocol extension, and the GETACL and SETACL RPCs. The implementation is compatible with NFSACL in Solaris.

Signed-off-by: Andreas Gruenbacher <agruen@suse.de>
Acked-by: Olaf Kirch <okir@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Andreas Gruenbacher
The NFS and NFSACL programs run on the same RPC transport. This patch adds support for this by converting svc_program into a chained list of programs (server-side).

Signed-off-by: Andreas Gruenbacher <agruen@suse.de>
Signed-off-by: Olaf Kirch <okir@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
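Server-side, the chaining plausibly looks like this (a sketch; initializers abbreviated, program numbers are the standard RPC assignments):

```c
/* Sketch: two programs share one transport via a singly linked list.
 * Fields other than the program number and the new link are elided. */
static struct svc_program nfsd_acl_program = {
	.pg_prog = 100227,		/* NFS_ACL_PROGRAM */
	/* ... */
	.pg_next = NULL,
};

static struct svc_program nfsd_program = {
	.pg_prog = 100003,		/* NFS_PROGRAM */
	/* ... */
	.pg_next = &nfsd_acl_program,	/* nfsacl rides the same transport */
};
```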
-
- 17 April 2005, 1 commit

Submitted by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it.

Let it rip!
-