- 28 September 2016, 8 commits
-
By Trond Myklebust

Allow the callers of nfs_remove_bad_delegation() to specify the stateid that needs to be marked as bad.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Tested-by: Oleg Drokin <green@linuxhacker.ru>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Trond Myklebust

In NFSv4.1 and newer, if the server decides to revoke some or all of the protocol state, the client is required to iterate through all the stateids that it holds and call TEST_STATEID to determine which stateids still correspond to valid state, and then call FREE_STATEID on the others.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Tested-by: Oleg Drokin <green@linuxhacker.ru>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
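A hedged sketch of that recovery walk. The held_state struct, the list, and test_stateid()/free_stateid() are illustrative stand-ins for the client's real bookkeeping and RPC helpers; only the list macros and error constants are the kernel's own:

    struct held_state {
        struct list_head    list;
        nfs4_stateid        stateid;
    };

    static void recover_revoked_state(struct nfs_server *server,
                                      struct list_head *held)
    {
        struct held_state *hs;

        list_for_each_entry(hs, held, list) {
            int status = test_stateid(server, &hs->stateid);  /* TEST_STATEID */

            /* The server no longer recognizes this stateid: tell it to
             * drop whatever revoked-state record it still keeps for us. */
            if (status == -NFS4ERR_BAD_STATEID ||
                status == -NFS4ERR_DELEG_REVOKED ||
                status == -NFS4ERR_EXPIRED)
                free_stateid(server, &hs->stateid);           /* FREE_STATEID */
        }
    }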
-
By Trond Myklebust

If the server crashes while we're testing stateids for validity, then we want to initiate session recovery. Usually, though, we will be calling from a state manager thread, so we don't really want to wait.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Tested-by: Oleg Drokin <green@linuxhacker.ru>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Trond Myklebust

If the delegation has been marked as revoked, we don't have to test it, because we should already have called FREE_STATEID on it.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Tested-by: Oleg Drokin <green@linuxhacker.ru>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Trond Myklebust

We must not allow the use of delegations that have been revoked or are being returned.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Fixes: 869f9dfa ("NFSv4: Fix races between nfs_remove_bad_delegation()...")
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Cc: stable@vger.kernel.org # v3.19+
Tested-by: Oleg Drokin <green@linuxhacker.ru>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
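The check reduces to a pair of flag tests before trusting the delegation. A hedged sketch: NFS_DELEGATION_REVOKED and NFS_DELEGATION_RETURNING are the real flag bits from fs/nfs/delegation.h, while the helper itself is illustrative:

    static bool delegation_is_usable(const struct nfs_delegation *delegation)
    {
        if (delegation == NULL)
            return false;
        /* Revoked or mid-return delegations must not be relied on. */
        if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags) ||
            test_bit(NFS_DELEGATION_RETURNING, &delegation->flags))
            return false;
        return true;
    }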
-
By Trond Myklebust

If the delegation is revoked, then it can't be used for caching.

Fixes: 869f9dfa ("NFSv4: Fix races between nfs_remove_bad_delegation()...")
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Cc: stable@vger.kernel.org # v3.19+
Tested-by: Oleg Drokin <green@linuxhacker.ru>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Trond Myklebust

Due to inode number reuse in filesystems, we can end up corrupting the inode on our client if we apply the file attributes without ensuring that the filehandle matches. Typical symptoms include spurious "mode changed" reports in the syslog. We still do want to ensure that we don't invalidate the dentry if the inode number matches but we don't have a filehandle.

Fixes: fa923369 ("NFS: Don't require a filehandle to refresh...")
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Cc: stable@vger.kernel.org # v4.0+
Tested-by: Oleg Drokin <green@linuxhacker.ru>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
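A hedged sketch of the guard: nfs_compare_fh() and NFS_FH() are the kernel's existing helpers, while the surrounding function is illustrative:

    static bool fattr_matches_inode(struct inode *inode, const struct nfs_fh *fh)
    {
        /* No filehandle supplied: the caller must fall back to weaker
         * checks (e.g. matching inode numbers) rather than invalidating
         * the dentry outright. */
        if (fh == NULL)
            return false;
        /* Only apply the attributes when the filehandle matches the inode's. */
        return nfs_compare_fh(NFS_FH(inode), fh) == 0;
    }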
-
By Trond Myklebust

As described in RFC5661, section 18.46, some of the status flags exist in order to tell the client when it needs to acknowledge the existence of revoked state on the server and/or to recover state. Those flags will then remain set until the recovery procedure is done. In order to avoid looping, the client therefore needs to ignore those particular flags while recovering.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Tested-by: Oleg Drokin <green@linuxhacker.ru>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
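A hedged sketch of that masking: the SEQ4_STATUS_* names are the real constants from include/linux/nfs4.h, while the mask and helper are illustrative:

    #define SEQ4_REVOKED_STATE_FLAGS (SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED | \
                                      SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED | \
                                      SEQ4_STATUS_ADMIN_STATE_REVOKED | \
                                      SEQ4_STATUS_RECALLABLE_STATE_REVOKED)

    /* Ignore the "acknowledge revoked state" flags while the state manager
     * is already recovering: they stay set until recovery completes, so
     * acting on them again would just retrigger recovery in a loop. */
    static u32 effective_status_flags(u32 flags, bool recovery_running)
    {
        return recovery_running ? flags & ~SEQ4_REVOKED_STATE_FLAGS : flags;
    }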
-
- 23 September 2016, 11 commits
-
By Daniel Wagner

There is only one waiter for the completion, therefore there is no need to use complete_all(). Let's make that clear by using complete() instead of complete_all(). The generic caching code from sunrpc calls revisit() only once.

The usage pattern of the completion is:

    waiter context                      waker context

    do_cache_lookup_wait()
      nfs_cache_defer_req_alloc()
        init_completion()
      do_cache_lookup()
      nfs_cache_wait_for_upcall()
        wait_for_completion_timeout()
                                        nfs_dns_cache_revisit()
                                          complete()
      nfs_cache_defer_req_put()

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
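A minimal kernel-style sketch of the single-waiter pattern; the struct and function names are illustrative stand-ins for the code named above, and only the completion API is the kernel's own:

    #include <linux/completion.h>

    struct cache_defer_req {
        struct completion completion;   /* exactly one task ever waits here */
    };

    /* waiter context (cf. nfs_cache_wait_for_upcall()) */
    static int wait_for_upcall(struct cache_defer_req *dreq)
    {
        if (!wait_for_completion_timeout(&dreq->completion, 5 * HZ))
            return -ETIMEDOUT;
        return 0;
    }

    /* waker context (cf. nfs_dns_cache_revisit()): with a single waiter,
     * complete() is sufficient; complete_all() only matters when several
     * tasks could be blocked on the same completion. */
    static void upcall_done(struct cache_defer_req *dreq)
    {
        complete(&dreq->completion);
    }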
-
By Daniel Wagner

There is only one waiter for the completion, therefore there is no need to use complete_all(). Let's make that clear by using complete() instead of complete_all().

nfs_file_direct_write() or nfs_file_direct_read() allocate a request object via nfs_direct_req_alloc(), which initializes the completion. The request object is then freed later in the exit path. Between the initialization and the release, either nfs_direct_write_schedule_iovec() or nfs_direct_read_schedule_iovec() is called, which will asynchronously process the request. The calling function waits via nfs_direct_wait() until the async work has been done. Thus there is only one waiter on the completion.

nfs_direct_pgio_init() and nfs_direct_read_completion() are passed via function pointers to the nfs pageio code. The former does reference counting (get_dreq() and put_dreq()), which ensures that nfs_direct_read_completion() and nfs_direct_read_schedule_iovec() call the completion path only once.

The usage pattern of the completion is:

    waiter context                      waker context

    nfs_file_direct_write()
      dreq = nfs_direct_req_alloc()
        init_completion()
      nfs_direct_write_schedule_iovec()
      nfs_direct_wait()
        wait_for_completion_killable()
                                        nfs_direct_write_schedule_work()
                                          nfs_direct_complete()
                                            complete()

    nfs_file_direct_read()
      dreq = nfs_direct_req_alloc()
        init_completion()
      nfs_direct_read_schedule_iovec()
      nfs_direct_wait()
        wait_for_completion_killable()
                                        nfs_direct_read_schedule_iovec()
                                          nfs_direct_complete()
                                            complete()
                                        nfs_direct_read_completion()
                                          nfs_direct_complete()
                                            complete()

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Trond Myklebust

Before we try to stash it in the dcache, we need to at least check that the filename passed to us by the server is non-empty and doesn't contain any illegal '\0' or '/' characters.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
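A minimal sketch of that validation (the helper name is illustrative; memchr() is the kernel's own, from linux/string.h):

    static bool returned_name_is_valid(const char *name, unsigned int len)
    {
        if (len == 0)
            return false;                       /* empty names are illegal */
        if (memchr(name, '\0', len) != NULL)    /* embedded NUL */
            return false;
        if (memchr(name, '/', len) != NULL)     /* path separator */
            return false;
        return true;
    }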
-
By Jeff Layton

Add a waitqueue head to the client structure. Have clients set a wait on that queue prior to requesting a lock from the server. If the lock is blocked, then we can use that to wait for wakeups.

Note that we do need to do this "manually" since we need to set the wait on the waitqueue prior to requesting the lock, but requesting a lock can involve activities that can block.

However, only do that for NFSv4.1 locks, either by compiling out all of the waitqueue handling when CONFIG_NFS_V4_1 is disabled, or by skipping all of it at runtime if we're dealing with v4.0, or with v4.1 servers that don't send lock callbacks.

Note too that even when we expect to get a lock callback, RFC5661 section 20.11.4 is pretty clear that we still need to poll for them, so we do still sleep on a timeout. We do, however, always poll at the longest interval in that case.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
[Anna: nfs4_retry_setlk() "status" should default to -ERESTARTSYS]
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
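A hedged sketch of the "arm the wait before sending LOCK" pattern this describes. Only the waitqueue calls are the real kernel API; send_lock_rpc() and the surrounding function are illustrative stand-ins:

    extern int send_lock_rpc(void);   /* illustrative stand-in for the LOCK RPC */

    static int retry_setlk(wait_queue_head_t *q, long poll_timeout)
    {
        DEFINE_WAIT_FUNC(wait, woken_wake_function);
        int status;

        /* Register before issuing the RPC so a CB_NOTIFY_LOCK wakeup that
         * races with the reply can't be missed; wait_woken() covers the
         * window between the two. */
        add_wait_queue(q, &wait);
        for (;;) {
            status = send_lock_rpc();
            if (status != -EAGAIN)
                break;
            if (signal_pending(current)) {
                status = -ERESTARTSYS;
                break;
            }
            /* Even when a callback is expected, RFC5661 section 20.11.4
             * requires polling, so always sleep with a timeout. */
            wait_woken(&wait, TASK_INTERRUPTIBLE, poll_timeout);
        }
        remove_wait_queue(q, &wait);
        return status;
    }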
-
By Jeff Layton

This also consolidates the waiting logic into a single function, instead of having it spread across two as it is now.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Jeff Layton

We need to have this info set up before adding the waiter to the waitqueue, so move this out of _nfs4_proc_setlk() and into the caller. That's more efficient anyway, since we don't need to do this more than once if we end up waiting on the lock.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Jeff Layton

For now, the callback doesn't do anything. Support for that will be added in later patches.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Jeff Layton

We want to handle the two cases differently, such that we poll more aggressively when we don't expect a callback.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Jeff Layton

We actually want to use TASK_INTERRUPTIBLE sleeps when we're in the process of polling for an NFSv4 lock. If there is a signal pending when the task wakes up, then we'll be returning an error anyway. So, we might as well wake up immediately for non-fatal signals as well. That allows us to return to userland more quickly in that case, but won't change the error that userland sees.

Also, there is no need to use the *_unsafe sleep variants here, as no vfs-layer locks should be held at this point.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Jeff Layton

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Jeff Layton

Since it gets passed through to xdr_inline_decode(), we might as well have read_buf() expect what it expects -- a size_t.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
- 20 September 2016, 15 commits
-
By Chao Yu

It is cleaner to use CONFIG_MIGRATION to guard NFS's private .migratepage in nfs_file_aops, as we do in other parts of the NFS code.

Signed-off-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Jeff Layton

Currently, the layout driver selection code always chooses the first one from the list. That's not ideal, however, as the server can send the list of layout types in any order that it likes. It's up to the client to select the best one for its needs.

This patch adds an ordered list of preferred driver types and has the selection code sort the list of available layout drivers according to it. Any unrecognized layout type is sorted to the end of the list. For now, the order of preference is hardcoded, but it should be possible to make this configurable in the future.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: J. Bruce Fields <bfields@fieldses.org>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
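A hedged sketch of preference-ordered selection. The LAYOUT_* values are the real pnfs_layouttype constants; the particular preference order shown and the helper itself are illustrative, not the patch's exact table:

    static const u32 layout_preference[] = {
        LAYOUT_FLEX_FILES,
        LAYOUT_NFSV4_1_FILES,
        LAYOUT_BLOCK_VOLUME,
        LAYOUT_OSD2_OBJECTS,
    };

    static u32 select_layout_type(const u32 *offered, unsigned int count)
    {
        unsigned int i, j;

        /* The first preferred type that the server also offers wins. */
        for (i = 0; i < ARRAY_SIZE(layout_preference); i++)
            for (j = 0; j < count; j++)
                if (offered[j] == layout_preference[i])
                    return layout_preference[i];

        /* Nothing recognized: fall back to the server's first choice. */
        return count ? offered[0] : 0;
    }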
-
By Andy Adamson

Try all multipath addresses for a data server. The first address that successfully connects and creates a session is the DS mount address. All subsequent addresses are tested for session trunking and added as aliases.

Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Andy Adamson

Use an async exchange id call to test for session trunking. To conform with RFC 5661 section 18.35.4, the Non-Update on Existing Clientid case, save the exchange id verifier in cl_confirm and use it for the session trunking exchange id test.

Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Andy Adamson

Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Andy Adamson

Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Andy Adamson

For session trunking, to compare nfs41_exchange_id_res with an existing nfs_client.

Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Andy Adamson

For session trunking, to compare nfs41_exchange_id_res with an existing nfs_client.

Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Andy Adamson

Testing an rpc_xprt for session trunking should not delay application progress over already-established transports. Set up exchange_id so that it can be an async call used to test an rpc_xprt for session trunking.

Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Trond Myklebust

Add support for the kernel parameter nfs.callback_nr_threads to set the number of threads that will be assigned to the callback channel. Add support for the kernel parameter nfs.max_session_cb_slots to set the maximum size of the callback channel slot table.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
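A hedged sketch of how such knobs are typically wired up: the parameter names match the ones quoted above, while the variable names and descriptions here are illustrative:

    static unsigned int nfs_callback_nr_threads;
    module_param_named(callback_nr_threads, nfs_callback_nr_threads, uint, 0644);
    MODULE_PARM_DESC(callback_nr_threads,
                     "Number of threads assigned to the NFSv4 callback channel");

    static unsigned int nfs_max_session_cb_slots;
    module_param_named(max_session_cb_slots, nfs_max_session_cb_slots, uint, 0644);
    MODULE_PARM_DESC(max_session_cb_slots,
                     "Maximum size of the NFSv4.1 callback channel slot table");

With NFS built in, these would be set on the kernel command line as nfs.callback_nr_threads=N and nfs.max_session_cb_slots=N.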
-
By Trond Myklebust

This will allow us to bump the number of callback threads at will.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Trond Myklebust

Ensure that the nfs_callback_info[] array correctly tracks the struct svc_serv.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Trond Myklebust

Clean up.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Trond Myklebust

In order to manage the threads using svc_set_num_threads(), we need to fill in a few extra fields.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
By Jeff Layton

The current NFSv4.1/pNFS client assumes that the MDS supports only one layout type. While that's true for most existing servers, it can change in the near future. For now, this patch just plumbs in the ability to track a list of layouts in the fsinfo structure. The existing behavior of the client is preserved by having it just select the first entry in the list.

Signed-off-by: Tigran Mkrtchyan <tigran.mkrtchyan@desy.de>
Signed-off-by: Jeff Layton <jlayton@poochiereds.net>
Reviewed-by: J. Bruce Fields <bfields@fieldses.org>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
- 12 September 2016, 1 commit
-
By Trond Myklebust

Ensure that we conform to the algorithm described in RFC5661, section 18.36.4, for when to bump the sequence id. In essence, we bump it in all cases except when the RPC call timed out, or when the server returned NFS4ERR_DELAY or NFS4ERR_STALE_CLIENTID.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Cc: stable@vger.kernel.org
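A hedged sketch of that rule as stated above; the NFS4ERR_* constants are the real ones, the helper is illustrative:

    static bool should_bump_seqid(int rpc_status)
    {
        switch (rpc_status) {
        case -ETIMEDOUT:               /* the call may never have been seen */
        case -NFS4ERR_DELAY:           /* server asked us to retry unchanged */
        case -NFS4ERR_STALE_CLIENTID:  /* clientid must be re-established */
            return false;
        default:
            return true;               /* all other outcomes bump the seqid */
        }
    }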
-
- 05 September 2016, 1 commit
-
By Trond Myklebust

If there are outstanding LAYOUTGET rpc calls, then we want to ensure that we keep the layout stateid around, so that we don't inadvertently pick up an old/misordered sequence id. The race is as follows:

    Client                          Server
    ======                          ======
    LAYOUTGET(seqid)
    LAYOUTGET(seqid)
                                    return LAYOUTGET(seqid+1)
                                    return LAYOUTGET(seqid+2)
    process LAYOUTGET(seqid+2)
      forget layout
    process LAYOUTGET(seqid+1)

If it forgets the layout stateid before processing seqid+1, then the client will not check the layout->plh_barrier, and so will set the stateid with seqid+1.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
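The barrier check relies on serial-number arithmetic so that seqid wraparound is handled; the comparison below mirrors the pnfs_seqid_is_newer() helper in fs/nfs/pnfs.c. Keeping the layout header (and its barrier) alive while LAYOUTGETs are outstanding is what lets this check reject the stale seqid in the race above:

    static bool seqid_is_newer(u32 s1, u32 s2)
    {
        /* true iff s1 is "ahead of" s2, modulo 2^32 */
        return (s32)(s1 - s2) > 0;
    }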
-
- 04 September 2016, 4 commits
-
By Trond Myklebust

If the server fails to set lrp->res.lrs_present in the LAYOUTRETURN reply, then that means it believes the client holds no more layout state for that file, and that the layout stateid is now invalid.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
By Trond Myklebust

If the layout was marked as invalid, we want to ensure that the layout header fields are initialised correctly.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
By Trond Myklebust

According to RFC5661, the client is responsible for serialising LAYOUTGET and LAYOUTRETURN to avoid ambiguity. Consider the case where we send both in parallel:

    Client                          Server
    ======                          ======
    LAYOUTGET(seqid=X)
    LAYOUTRETURN(seqid=X)
                                    LAYOUTGET return seqid=X+1
                                    LAYOUTRETURN return seqid=X+2
    Process LAYOUTRETURN
      Forget layout stateid
    Process LAYOUTGET
      Set seqid=X+1

The client processes the layoutget/layoutreturn in the wrong order, and since the result of the layoutreturn was to clear the only existing layout segment, the client forgets the layout stateid. When the LAYOUTGET comes in, it is treated as having a completely new stateid, and so the client sets the wrong sequence id...

The fix is to check whether there are outstanding LAYOUTGET requests before we send the LAYOUTRETURN (note that LAYOUTGET will already wait if it sees an outstanding LAYOUTRETURN).

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Cc: stable@vger.kernel.org # v4.5+
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
By Trond Myklebust

When doing O_DSYNC writes, the actual write errors are reported through generic_write_sync(), so we must test the result.

Reported-by: J. R. Okajima <hooanon05g@gmail.com>
Fixes: 18290650 ("NFS: Move buffered I/O locking into nfs_file_write()")
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
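A hedged sketch of the shape of that fix: generic_write_sync() is the real VFS helper (it is a no-op unless the write requires syncing, e.g. O_DSYNC), while the surrounding function is illustrative:

    static ssize_t finish_dsync_write(struct kiocb *iocb, ssize_t written)
    {
        if (written > 0) {
            ssize_t err = generic_write_sync(iocb, written);
            if (err < 0)
                return err;     /* propagate the write-back error */
        }
        return written;         /* otherwise report the byte count */
    }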
-