- 18 December 2012, 2 commits
-
-
Committed by J. Bruce Fields
To ensure ordering of read data with any following operations, turn off zero copy if the read is not the final operation in the compound. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
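For illustration, a minimal userspace sketch (hypothetical names, not the actual nfsd code) of the rule this commit describes: a READ reply may only splice pages in zero-copy fashion when the READ is the last operation in the compound.

```c
#include <stdbool.h>
#include <stdio.h>

struct compound_state {
	int current_op;	/* index of the operation being encoded */
	int num_ops;	/* total operations in the compound */
};

static bool read_may_splice(const struct compound_state *cs)
{
	/* only the final operation may splice page data into the reply */
	return cs->current_op == cs->num_ops - 1;
}

int main(void)
{
	struct compound_state read_last  = { .current_op = 1, .num_ops = 2 };
	struct compound_state read_mid   = { .current_op = 1, .num_ops = 3 };

	printf("READ last in compound: splice=%d\n", read_may_splice(&read_last));
	printf("READ followed by ops:  splice=%d\n", read_may_splice(&read_mid));
	return 0;
}
```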
-
Committed by J. Bruce Fields
Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 11 December 2012, 21 commits
-
-
Committed by Bryan Schumaker
If len == 0 we end up with size = (0 - 1), which could cause bad things to happen in copy_from_user(). Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
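A small userspace demonstration of the bug class (a stand-in, not the nfsd fault-injection code itself): with an unsigned size, `len - 1` wraps to SIZE_MAX when len is 0, so the length must be validated before any copy.

```c
#include <stdio.h>
#include <stddef.h>

int main(void)
{
	size_t len = 0;
	size_t size = len - 1;		/* wraps around to SIZE_MAX */

	printf("unchecked size = %zu\n", size);

	if (len == 0 || len > 4096)	/* guard before anything like copy_from_user() */
		fprintf(stderr, "rejecting bogus length\n");
	return 0;
}
```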
-
Committed by Bryan Schumaker
I honestly have no idea where I got 129 from, but it's a much bigger value than the actual buffer size (INET6_ADDRSTRLEN). Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
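As a userspace illustration of the point, an address buffer sized with the INET6_ADDRSTRLEN constant rather than a magic number such as 129:

```c
#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
	struct in6_addr addr;
	char buf[INET6_ADDRSTRLEN];	/* large enough for any IPv6 text form */

	inet_pton(AF_INET6, "2001:db8::1", &addr);
	if (inet_ntop(AF_INET6, &addr, buf, sizeof(buf)))
		printf("parsed back: %s\n", buf);
	return 0;
}
```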
-
Committed by Stanislav Kinsbursky
Since the NFSd service is per-net now, we have to pass the proper network context to nfsd_shutdown() from NFSd kthreads. The simplest way I found is to get the proper net from one of the transports with permanent sockets. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
The function nfsd_shutdown is called from two places: nfsd_last_thread (when the last kernel thread is exiting) and nfsd_svc (in case of an error while starting kthreads). When called from nfsd_svc(), we can be sure that per-net resources are allocated, so we don't need to check the per-net nfsd_net_up boolean flag. This allows us to remove the nfsd_shutdown function entirely and move the check of the per-net nfsd_net_up flag to nfsd_last_thread. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Since we have generic NFSd resources, we need some way to allocate and destroy those resources on the first per-net NFSd start and on the last per-net NFSd stop, respectively. This patch replaces the global boolean nfsd_up flag (which is unused now) with a users counter and uses it to determine whether we need to allocate the generic resources or destroy them. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
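A minimal sketch of the counter pattern described here (hypothetical names, not the kernel code): generic resources are set up for the first user and torn down only when the last user goes away.

```c
#include <stdio.h>

static int nfsd_users;		/* replaces a global "up" boolean */

static int startup_generic(void)
{
	if (nfsd_users++)
		return 0;	/* already initialized by an earlier user */
	printf("allocating generic resources\n");
	return 0;
}

static void shutdown_generic(void)
{
	if (--nfsd_users)
		return;		/* other users still need the resources */
	printf("freeing generic resources\n");
}

int main(void)
{
	startup_generic();	/* first net: allocates */
	startup_generic();	/* second net: just bumps the counter */
	shutdown_generic();	/* still one user left */
	shutdown_generic();	/* last user: frees */
	return 0;
}
```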
-
Committed by Stanislav Kinsbursky
This patch moves the nfsd_startup_generic() and nfsd_shutdown_generic() calls into nfsd_startup_net() and nfsd_shutdown_net() respectively, which allows us to call nfsd_startup_net() instead of nfsd_startup() and makes the code look clearer. It also modifies nfsd_svc() and nfsd_shutdown() to check nn->nfsd_net_up instead of the global nfsd_up. The latter is now used only for generic resource shutdown and is currently useless; it will be replaced by an NFSd users counter later in this series. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
NFSd has both per-net resources and globally used resources. Let's move generic resource init and shutdown into separate functions, since they are going to be allocated on the first NFSd service start and destroyed after the last NFSd service shutdown. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This patch makes the main step in NFSd containerisation. There are different approaches to making NFSd able to handle incoming RPC requests from different network namespaces. The two main options are: 1) share NFSd kthreads between all network namespaces, or 2) create a separate pool of threads for each namespace. While the first approach looks more flexible, the second one is simpler and non-racy. This patch implements the second option. To make it possible to allocate separate pools of threads, we have to make it possible to allocate separate NFSd service structures per net. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
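A rough structural model of option 2, with hypothetical types (the real per-net data lives in struct nfsd_net in the kernel): each network namespace owns its own service structure, and therefore its own thread pool.

```c
/* opaque stand-in for the RPC service / thread pool */
struct svc_serv;

/* per-namespace NFSd state: one service pointer per net, not one global */
struct nfsd_net_sketch {
	struct svc_serv *nfsd_serv;	/* this namespace's nfsd threads */
	int nfsd_net_up;		/* is nfsd running in this namespace? */
};
```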
-
Committed by Stanislav Kinsbursky
This is simple: an NFSd service can be started at different times in different network environments, so its "boot time" has to be assigned per net. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
This patch introduces a per-net "nfsd_net_up" boolean flag, which has the same purpose as the general "nfsd_up" flag: skip init or shutdown of per-net resources if they are already initialized or already shut down, respectively. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
NFSd resources are partially per-net and partially used globally. This patch splits resource init and shutdown and moves the per-net code into separate functions. Generic and per-net init and shutdown are called sequentially for the time being. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Precursor patch. The hard-coded "init_net" will be replaced by the proper one in the future. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Precursor patch. The hard-coded "init_net" will be replaced by the proper one in the future. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Precursor patch. The hard-coded "init_net" will be replaced by the proper one in the future. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Precursor patch. The hard-coded "init_net" will be replaced by the proper one in the future. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Precursor patch. The hard-coded "init_net" will be replaced by the proper one in the future. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
Precursor patch. The hard-coded "init_net" will be replaced by the proper one in the future. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
There could be a situation where NFSd was started in one network namespace but stopped in another one. This will trigger a kernel panic, because the RPCBIND client is stored in per-net NFSd data and will be NULL on NFSd shutdown. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
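An illustrative userspace sketch of the failure mode (hypothetical names): the rpcbind client pointer is only populated in the namespace where NFSd was started, so dereferencing the per-net pointer of a different namespace without a check finds NULL.

```c
#include <stdio.h>
#include <stddef.h>

struct nfsd_net_sketch {
	void *rpcb_client;	/* set only in the net where nfsd started */
};

static void nfsd_unregister(struct nfsd_net_sketch *nn)
{
	if (!nn->rpcb_client) {
		fprintf(stderr, "no rpcbind client in this namespace, skipping\n");
		return;
	}
	printf("unregistering via this namespace's rpcbind client\n");
}

int main(void)
{
	struct nfsd_net_sketch started = { .rpcb_client = (void *)1 };
	struct nfsd_net_sketch other   = { .rpcb_client = NULL };

	nfsd_unregister(&other);	/* would have dereferenced NULL without the check */
	nfsd_unregister(&started);
	return 0;
}
```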
-
Committed by Neil Brown
With NFSv4, if we create a file and then open it, we explicitly avoid checking the permissions on the file during the open, because the fact that we created it ensures we should be allowed to open it (the create and the open should appear to be a single operation). However, if the reply to an EXCLUSIVE create gets lost and the client resends the create, the current code will perform the permission check, because it doesn't realise that it did the open already. This patch should fix this. Note that I haven't actually seen this cause a problem. I was just looking at the code trying to figure out a different EXCLUSIVE-open-related issue, and this looked wrong. (Fix confirmed with pynfs 4.0 test OPEN4--bfields) Cc: stable@kernel.org Signed-off-by: NeilBrown <neilb@suse.de> [bfields: use OWNER_OVERRIDE and update for 4.1] Signed-off-by: J. Bruce Fields <bfields@redhat.com>
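A hedged sketch of the rule being restored (simplified and with hypothetical names; the real fix uses the NFSD_MAY_OWNER_OVERRIDE accommodation): when the OPEN itself created the file, the owner is not subject to the usual access check, so a retransmitted EXCLUSIVE create does not spuriously fail.

```c
#include <stdbool.h>

struct open_check {
	bool created;		/* did this OPEN (or its retransmission) create the file? */
	bool caller_is_owner;	/* does the caller own the file it just created? */
};

static bool need_access_check(const struct open_check *oc)
{
	/* creator/owner may open what it just created without a permission check */
	if (oc->created && oc->caller_is_owner)
		return false;
	return true;
}
```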
-
Committed by Stanislav Kinsbursky
This is a cleanup patch. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Stanislav Kinsbursky
The pointer to the client tracking operations - client_tracking_ops - has to be containerized, because different environments can support different trackers (for example, the legacy tracker is currently not supported in a container). Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 04 December 2012, 6 commits
-
-
Committed by J. Bruce Fields
Fix nfsd4_lockt and release_lockowner to look up the referenced client, so that they can renew it or correctly return "expired", as appropriate. Also share some code while we're here. Reported-by: Frank Filz <ffilzlnx@us.ibm.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Over TCP, RPCs are preceded by a single 4-byte field telling you how long the RPC is (in bytes). The spec also allows you to send an RPC in multiple such records (the high bit of the length field is used to tell you whether this is the final record). We've survived for years without supporting this because in practice the clients we care about don't use it. But the userland RPC libraries do, and every now and then an experimental client will run into this. (Most recently I noticed it while trying to write a pynfs check.) And we're really on the wrong side of the spec here; let's fix this. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
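For reference, a standalone illustration of the TCP record marking described here (RFC 5531-style): each fragment starts with a 4-byte big-endian word whose top bit marks the final fragment and whose low 31 bits give the fragment length. The function names are illustrative, not the sunrpc API.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <arpa/inet.h>

static void parse_marker(uint32_t wire_be, bool *last, uint32_t *len)
{
	uint32_t marker = ntohl(wire_be);

	*last = marker & 0x80000000u;	/* high bit: final record of this RPC */
	*len  = marker & 0x7fffffffu;	/* remaining 31 bits: fragment length */
}

int main(void)
{
	bool last;
	uint32_t len;

	parse_marker(htonl(0x80000064u), &last, &len);	/* final fragment, 100 bytes */
	printf("last=%d len=%u\n", last, len);

	parse_marker(htonl(0x00000400u), &last, &len);	/* 1024 bytes, more to follow */
	printf("last=%d len=%u\n", last, len);
	return 0;
}
```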
-
Committed by J. Bruce Fields
Keep a separate field, sk_datalen, that tracks only the data contained in a fragment, not including the fragment header. For now, this is always just max(0, sk_tcplen - 4), but after we allow multiple fragments, sk_datalen will accumulate the total RPC data size while sk_tcplen only tracks progress receiving the current fragment. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
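A tiny sketch of the single-fragment relation stated above (hypothetical helper name): the data length is the bytes received so far minus the 4-byte fragment header, never negative.

```c
static unsigned int frag_data_len(unsigned int sk_tcplen)
{
	/* max(0, sk_tcplen - 4) without unsigned underflow */
	return sk_tcplen > 4 ? sk_tcplen - 4 : 0;
}
```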
-
Committed by J. Bruce Fields
The full reclen doesn't include the fragment header, but sk_tcplen does. Fix this to make it an apples-to-apples comparison. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Soon we want to support multiple fragments, in which case it may be legal for a single fragment to be smaller than 8 bytes, so we'll want to delay this check until we've reached the last fragment. Also fix an outdated comment. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Byte-swapping in place is always a little dubious. Let's instead define this field to always be big-endian and do the swapping on demand where we need it. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
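A sketch of the "store big-endian, convert on demand" idea (illustrative names): keep the word exactly as it arrived on the wire and apply the conversion at each use site rather than rewriting the stored value.

```c
#include <stdint.h>
#include <arpa/inet.h>

struct frag_state {
	uint32_t reclen_be;	/* fragment header exactly as received */
};

static uint32_t frag_len(const struct frag_state *fs)
{
	/* convert on demand; the stored field stays big-endian */
	return ntohl(fs->reclen_be) & 0x7fffffffu;
}
```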
-
- 03 December 2012, 10 commits
-
-
Committed by Bryan Schumaker
Write the client's IP address to any state file and all appropriate state for that client will be forgotten. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Bryan Schumaker
Controlling the read and write functions allows me to add in "forget client w.x.y.z", since we won't be limited to reading and writing only u64 values. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
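A userspace sketch of what a custom write handler can do once it is not limited to u64 values: accept either a count or a client address. Names and behaviour are illustrative, not the nfsd fault-injection interface.

```c
#include <stdio.h>
#include <stdlib.h>
#include <arpa/inet.h>

static int handle_write(const char *buf)
{
	struct in6_addr a6;
	struct in_addr a4;
	char *end;
	unsigned long long n = strtoull(buf, &end, 10);

	if (end != buf && (*end == '\0' || *end == '\n')) {
		printf("forget %llu items\n", n);	/* plain number: old behaviour */
		return 0;
	}
	if (inet_pton(AF_INET, buf, &a4) == 1 ||
	    inet_pton(AF_INET6, buf, &a6) == 1) {
		printf("forget state for client %s\n", buf);
		return 0;
	}
	return -1;	/* neither a number nor an address */
}

int main(void)
{
	handle_write("10");
	handle_write("192.168.1.20");
	return 0;
}
```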
-
Committed by Bryan Schumaker
I also log basic information that I can figure out about the type of state (such as the number of locks for each client IP address). This can be useful for checking that state was actually dropped, and later for checking whether the client was able to recover. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Bryan Schumaker
The eventual goal is to forget state based on IP address, so it makes sense to call this function in a for-each-client loop until the correct amount of state is forgotten. I also use this patch as an opportunity to rename the forget function from "func()" to "forget()". Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Bryan Schumaker
Once I have a client, I can easily use its delegation list rather than searching the file hash table for delegations to remove. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Bryan Schumaker
Using "forget_n_state()" forces me to implement the code needed to forget a specific client's openowners. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Bryan Schumaker
I use the new "forget_n_state()" function to iterate through each client first when searching for locks. This may slow down forgetting locks a little bit, but it implements most of the code needed to forget a specified client's locks. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by Bryan Schumaker
I added a generic for-each loop that takes a pass over the client_lru list for the current net namespace and calls some function. The next few patches will update other operations to use this function as well. A value of 0 still means "forget everything that is found". Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
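A simplified model of the loop described above (hypothetical names and a plain array instead of the client_lru list): walk the clients, call a per-client callback, and stop once the requested amount of state has been forgotten, with 0 meaning "no limit".

```c
#include <stdio.h>

struct client { int id; };

/* per-client operation: returns how many items it forgot for this client */
typedef unsigned int (*forget_t)(struct client *clp);

static unsigned int forget_openowners(struct client *clp)
{
	printf("forgetting openowners of client %d\n", clp->id);
	return 1;	/* pretend one item was forgotten */
}

static unsigned int for_each_client(forget_t forget, unsigned int max)
{
	struct client clients[] = { {1}, {2}, {3}, {4} };
	unsigned int count = 0;

	for (unsigned int i = 0; i < sizeof(clients) / sizeof(clients[0]); i++) {
		count += forget(&clients[i]);
		if (max != 0 && count >= max)
			break;	/* forgot as much as was asked for */
	}
	return count;
}

int main(void)
{
	printf("forgot %u items (limit 2)\n", for_each_client(forget_openowners, 2));
	printf("forgot %u items (no limit)\n", for_each_client(forget_openowners, 0));
	return 0;
}
```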
-
Committed by Bryan Schumaker
Each function touches state in some way, so taking the lock earlier can help simplify the code. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Committed by J. Bruce Fields
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 29 November 2012, 1 commit
-
-
Committed by Bryan Schumaker
There were only a small number of functions in this file, and since they all affect stored state, I think it makes sense to put them in state.h instead. I also dropped most static inline declarations, since there are no callers when fault injection is not enabled. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-