- 12 March 2011, 3 commits
-
-
Committed by Andy Adamson
Data servers cannot send nfs4_proc_get_lease_time, but they still need to set up state renewal. Add the NFS_CS_CHECK_LEASE_TIME bit to indicate whether the lease time can be checked. Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
There are no more external users of nfs4_state_mark_reclaim_nograce() or nfs4_state_mark_reclaim_reboot(), so mark them as static. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
nfs4_schedule_state_recovery() should only be used when we need to force the state manager to check the lease. If we just want to start the state manager in order to handle a state recovery situation, we should be using nfs4_schedule_state_manager(). This patch fixes the abuses of nfs4_schedule_state_recovery() by replacing its use with a set of helper functions that do the right thing. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
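A minimal sketch of what such helpers might look like, assuming one helper that forces a lease check and one that only marks a stateid for recovery; the function names and exact flag handling are illustrative, not copied from the patch:

    /* Illustrative sketch only; helper names and details are assumptions. */
    static void nfs4_schedule_lease_recovery(struct nfs_client *clp)
    {
            /* Force the state manager to re-check the lease. */
            set_bit(NFS4CLNT_CHECK_LEASE, &clp->cl_state);
            nfs4_schedule_state_manager(clp);
    }

    static void nfs4_schedule_stateid_recovery(const struct nfs_server *server,
                                               struct nfs4_state *state)
    {
            /* Mark the stateid for recovery and start the state manager;
             * no lease check is forced in this case. */
            nfs4_state_mark_reclaim_nograce(server->nfs_client, state);
            nfs4_schedule_state_manager(server->nfs_client);
    }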
-
- 26 January 2011, 1 commit
-
-
Committed by Andy Adamson
The information required to find the nfs_client corresponding to the incoming back channel request is contained in the NFS layer. Perform minimal checking in the RPC layer pg_authenticate method, and push more detailed checking into the NFS layer where the nfs_client can be found. Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 07 January 2011, 4 commits
-
-
Committed by Chuck Lever
NFSv4 migration needs to reassociate state owners from the source to the destination nfs_server data structures. To make that easier, move the cl_state_owners field to the nfs_server struct. cl_openowner_id and cl_lockowner_id accompany this move, as they are used in conjunction with cl_state_owners. The cl_lock field in the parent nfs_client continues to protect all three of these fields. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Fred Isaman
A layout can request return-on-close. How this interacts with the forgetful model of never sending LAYOUTRETURNs is a bit ambiguous. We forget any layouts marked roc, and wait for them to be completely forgotten before continuing with the close. In addition, to compensate for races with any in-flight LAYOUTGETs, and the fact that we do not get any layout stateid back from the server, we set the barrier to the worst-case scenario of current_seqid + number of outstanding LAYOUTGETs. Signed-off-by: Fred Isaman <iisaman@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
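In other words, the barrier is simply the server's current layout stateid seqid plus the number of LAYOUTGETs still in flight, since each outstanding call may bump the seqid by one. A minimal, self-contained illustration (the function name is made up for this example):

    #include <stdint.h>

    /* Worst-case barrier: any layout stateid with a seqid at or below this
     * value can safely be ignored once the roc layouts have been forgotten. */
    static uint32_t pnfs_roc_barrier(uint32_t current_seqid,
                                     uint32_t outstanding_layoutgets)
    {
            return current_seqid + outstanding_layoutgets;
    }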
-
Committed by Andy Adamson
Currently session draining only drains the fore channel. The back channel processing must also be drained. Use the back channel highest_slot_used to indicate that a callback is being processed by the callback thread. Move the session complete to be per channel. When the session is draining, wait for any current back channel processing to complete and stop all new back channel processing by returning NFS4ERR_DELAY to the back channel client. Drain the back channel, then the fore channel. Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
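Schematically, the draining order described above looks like the sketch below; the helper name and the per-channel drain primitive are assumptions used only to show the sequence, not code from the patch:

    /* Illustrative ordering only. While draining, new back channel requests
     * are answered with NFS4ERR_DELAY so the callback thread can finish. */
    static void nfs4_drain_session(struct nfs4_session *ses)
    {
            /* 1. Wait for the callback thread to finish the request it is
             *    processing, tracked via the back channel highest_slot_used. */
            nfs4_drain_channel(&ses->bc_slot_table);

            /* 2. Only then quiesce the fore channel. */
            nfs4_drain_channel(&ses->fc_slot_table);
    }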
-
Committed by Andy Adamson
The sessions-based callback service is started prior to the CREATE_SESSION call so that it can handle CB_NULL requests, which can be sent before the CREATE_SESSION call returns and the session ID is known. Set the callback sessionid after a successful CREATE_SESSION. Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 25 October 2010, 1 commit
-
-
Committed by Andy Adamson
In particular, server reboot will invalidate all layouts. Note that in order to have an active layout, we must get a successful response from the server. To avoid adding that machinery, this patch just includes a stub that fakes up a successful return. Since the layout is never referenced for I/O, this is not a problem. Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Benny Halevy <bhalevy@panasas.com> Signed-off-by: Dean Hildebrand <dhildebz@umich.edu> Signed-off-by: Fred Isaman <iisaman@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 24 October 2010, 2 commits
-
-
Committed by Randy Dunlap
nfs4state.c uses interfaces from ratelimit.h. It needs to include that header file to fix build errors:
fs/nfs/nfs4state.c:1195: warning: type defaults to 'int' in declaration of 'DEFINE_RATELIMIT_STATE'
fs/nfs/nfs4state.c:1195: warning: parameter names (without types) in function declaration
fs/nfs/nfs4state.c:1195: error: invalid storage class for function 'DEFINE_RATELIMIT_STATE'
fs/nfs/nfs4state.c:1195: error: implicit declaration of function '__ratelimit'
fs/nfs/nfs4state.c:1195: error: '_rs' undeclared (first use in this function)
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Cc: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: linux-nfs@vger.kernel.org Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
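For context, DEFINE_RATELIMIT_STATE() and __ratelimit() are the interfaces declared in <linux/ratelimit.h>; a generic usage pattern (not the exact code at line 1195) looks like this, and fails to build much as shown above if the include is missing:

    #include <linux/kernel.h>
    #include <linux/ratelimit.h>

    static void warn_reclaim_failed(void)
    {
            /* Allow at most one warning every 30 seconds. */
            static DEFINE_RATELIMIT_STATE(_rs, 30 * HZ, 1);

            if (__ratelimit(&_rs))
                    printk(KERN_WARNING "NFS: lock reclaim failed!\n");
    }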
-
Committed by Trond Myklebust
Otherwise, we cannot recover state correctly. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 20 October 2010, 1 commit
-
-
Committed by Trond Myklebust
If the server sends us an NFS4ERR_STALE_CLIENTID while the state management thread is busy reclaiming state, we do want to treat all state that wasn't reclaimed before the STALE_CLIENTID as if a network partition occurred (see the edge conditions described in RFC3530 and RFC5661). What we do not want to do is to send an nfs4_reclaim_complete(), since we haven't yet even started reclaiming state after the server rebooted. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: stable@kernel.org
-
- 05 October 2010, 1 commit
-
-
Committed by Arnd Bergmann
This prepares the removal of the big kernel lock from the file locking code. We still use the BKL as long as fs/lockd uses it and ceph might sleep, but we can flip the definition to a private spinlock as soon as that's done. All users outside of fs/lockd get converted to use lock_flocks() instead of lock_kernel() where appropriate. Based on an earlier patch to use a spinlock from Matthew Wilcox, who has attempted this a few times before; the earliest patch from over 10 years ago turned it into a semaphore, which ended up being slower than the BKL and was subsequently reverted. Someone should do some serious performance testing when this becomes a spinlock, since this has caused problems before. Using a spinlock should be at least as good as the BKL in theory, but who knows... Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Matthew Wilcox <willy@linux.intel.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Trond Myklebust <trond.myklebust@fys.uio.no> Cc: "J. Bruce Fields" <bfields@fieldses.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Miklos Szeredi <mszeredi@suse.cz> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: John Kacur <jkacur@redhat.com> Cc: Sage Weil <sage@newdream.net> Cc: linux-kernel@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org
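The conversion for callers outside fs/lockd follows a simple before/after pattern; this is a schematic example around a walk of the inode's lock list, not a hunk from the patch:

    /* Before: the BKL protected traversal of inode->i_flock. */
    lock_kernel();
    for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next)
            /* inspect posix/flock locks */;
    unlock_kernel();

    /* After: the dedicated file-lock lock (still BKL-backed until
     * fs/lockd and ceph are converted). */
    lock_flocks();
    for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next)
            /* inspect posix/flock locks */;
    unlock_flocks();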
-
- 31 July 2010, 2 commits
-
-
Committed by Trond Myklebust
flock locks want to be labelled using the process pid, while posix locks want to be labelled using the fl_owner. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
This is needed by NFSv4.0 servers in order to keep the number of locking stateids at a manageable level. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 25 June 2010, 1 commit
-
-
Committed by Trond Myklebust
The 'so_delegations' list appears to be unused. Also eliminate so_client: if we already have so_server, we can get to the nfs_client structure. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 23 June 2010, 2 commits
-
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 15 May 2010, 2 commits
-
-
Committed by Trond Myklebust
We do not want to have the state recovery thread kick off and wait for a memory reclaim, since that may deadlock when the writebacks end up waiting for the state recovery thread to complete. The safe thing is therefore to use GFP_NOFS in all open, close, delegation return, lock, etc. operations that may be called by the state recovery thread. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
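Concretely, allocations reachable from the state recovery thread use GFP_NOFS instead of GFP_KERNEL, so the allocator cannot recurse into filesystem writeback; a generic example of the pattern (the helper function is illustrative, not a specific hunk):

    #include <linux/slab.h>

    static struct nfs4_lock_state *alloc_lock_state(void)
    {
            /* GFP_KERNEL could trigger writeback, which may end up waiting
             * on the state recovery thread doing this allocation (deadlock).
             * GFP_NOFS keeps the allocator out of filesystem code. */
            return kzalloc(sizeof(struct nfs4_lock_state), GFP_NOFS);
    }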
-
Committed by Trond Myklebust
Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 03 March 2010, 1 commit
-
-
Committed by Trond Myklebust
Ensure that we change the EXCHANGE_ID verifier (i.e. clp->cl_boot_time) when we want to reset all state. This is mainly needed when the server tells us that it is revoking our open or lock stateids. Handle revoking of recallable state by expiring the delegations. Handle callback path issues by expiring the delegations and then resetting the session. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 10 February 2010, 3 commits
-
-
Committed by Andy Adamson
Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Andy Adamson
Drain the fore channel and reset the max_slots to the new value. Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Jeff Layton
If a KRB5 TGT ticket expires, we don't want to return an error immediately. If someone has a long-running job and just forgets to run "kinit" in time, then this will make it fail. Instead, we want to treat this situation as we would NFS4ERR_DELAY and retry the upcall after delaying a bit, with an exponential backoff. This patch just makes any place that would handle NFS4ERR_DELAY also handle -EKEYEXPIRED the same way. In the future, we may want to be more sophisticated, however, and handle hard vs. soft mounts differently, or specify some upper limit on how long we'll wait for a new TGT to be acquired. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
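Schematically, the change adds -EKEYEXPIRED to every switch arm that already handles NFS4ERR_DELAY; a simplified sketch, not the exact kernel error handler:

    /* Simplified error handler: both errors mean "back off and retry". */
    static int nfs4_handle_transient_error(struct rpc_task *task, int err)
    {
            switch (err) {
            case -NFS4ERR_DELAY:
            case -EKEYEXPIRED:      /* expired TGT: treat like a server delay */
                    rpc_delay(task, NFS4_POLL_RETRY_MAX);
                    return -EAGAIN; /* caller retries after the delay */
            default:
                    return err;
            }
    }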
-
- 27 January 2010, 1 commit
-
-
Committed by Trond Myklebust
Even if the server is crazy, we should be able to mark the stateid as being bad, to ensure it gets recovered. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
-
- 16 December 2009, 4 commits
-
-
Committed by Trond Myklebust
Commit 5601a00d (nfs: run state manager in privileged mode) introduces a regression in the NFSv4 code when compiled with CONFIG_NFS_V4_1. The calls to nfs4_end_drain_session() from the main loop in nfs4_state_manager() Oops due to the lack of an NFSv4.1 session when running NFSv4.0. The fix is to move those two calls back into nfs41_init_clientid() and nfs4_reset_session(). The calls to nfs4_end_drain_session() that remain inside nfs4_state_manager() are safe, since the NFSv4.0 code will never set the NFS4CLNT_SESSION_DRAINING bit. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
If the CLOSE or OPEN_DOWNGRADE call triggers a state recovery and has to be resent, then we must release the seqid. Otherwise the open recovery will wait for the close to finish, which causes a deadlock. This is mainly an NFSv4.1 problem, although it can theoretically happen with NFSv4.0 too, in an OPEN_DOWNGRADE situation. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Alexandros Batsakis
Signed-off-by: Alexandros Batsakis <batsakis@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Alexandros Batsakis
Signed-off-by: Alexandros Batsakis <batsakis@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 09 December 2009, 1 commit
-
-
Committed by Ricardo Labiaga
The NFSv4.1 spec indicates RECLAIM_COMPLETE is to be issued whenever a client establishes a new client id, not only after detecting that the server has rebooted. Set the NFS4CLNT_RECLAIM_REBOOT bit after every new client id has been established. This enables us to issue RECLAIM_COMPLETE during the wrap-up of the NFS4CLNT_RECLAIM_REBOOT state. Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
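Put differently, the reboot-reclaim pass is now scheduled unconditionally once a client id is confirmed; a hedged sketch of the idea (the function name is invented for illustration):

    /* After the new client id is established, schedule the reboot-reclaim
     * pass so that RECLAIM_COMPLETE is sent when that pass wraps up, even
     * if there turns out to be no state to reclaim. */
    static void nfs4_clientid_established(struct nfs_client *clp)
    {
            set_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state);
    }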
-
- 07 December 2009, 3 commits
-
-
Committed by Ricardo Labiaga
The state manager was not marking the stateids as needing to be reclaimed after reestablishing the clientid. Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Ricardo Labiaga
If CREATE_SESSION fails with NFS4ERR_STALE_CLIENTID, don't clear the NFS4CLNT_SESSION_DRAINING flag and don't wake RPCs waiting for the session to be reestablished. We don't have a session yet, so there is no reason to wake other RPCs. This avoids sending spurious compounds with a bogus sequenceID during session and state recovery. Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com> [Trond.Myklebust@netapp.com: cleaned up patch by adding the nfs41_begin/end_drain_session() helpers] Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Ricardo Labiaga
Move the call to get the lease time and the setup of the state renewal out of nfs4_create_session so that it can be called after clearing the DRAINING flag. We use the getattr RPC to obtain the lease time, which requires a sequence slot. Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 06 December 2009, 6 commits
-
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
We should not assume that nfs41_init_clientid() will always want to initialise the session. If it is being called due to a server reboot, then we just want to reset the session after re-establishing the clientid. Fix this by getting rid of the 'reset' parameter in nfs4_proc_create_session(), and instead relying on whether or not the session slot table pointer is non-NULL. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Ricardo Labiaga
This patch invokes RECLAIM_COMPLETE after the client is done reclaiming state. There are interpretations of the spec that suggest that RECLAIM_COMPLETE should also be issued after a new clientid has been obtained from the server, even if there is no state to reclaim. This tells the server that the client has no state to reclaim, even if the client isn't aware the server may have rebooted. Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Ricardo Labiaga
Implements RECLAIM_COMPLETE as an asynchronous RPC. NFS4ERR_DELAY is retried; NFS4ERR_DEADSESSION invokes the error handling but does not result in a retry, since we don't want to have a lingering RECLAIM_COMPLETE call sent in the middle of a possible new state recovery cycle. If a session reset occurs, a new wave of reclaim operations will follow, containing their own RECLAIM_COMPLETE call. We don't want a retry to get in the way of recovery by incorrectly indicating to the server that we're done reclaiming state. A subsequent patch invokes the functionality. Signed-off-by: Ricardo Labiaga <Ricardo.Labiaga@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
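A sketch of what the asynchronous completion callback's error handling might look like under that policy; the function and its exact cases are illustrative, not taken from the patch:

    /* Illustrative rpc_call_done handler for the async RECLAIM_COMPLETE. */
    static void nfs4_reclaim_complete_done(struct rpc_task *task, void *calldata)
    {
            switch (task->tk_status) {
            case 0:
                    break;
            case -NFS4ERR_DELAY:
                    /* Transient: back off and resend this call. */
                    rpc_delay(task, NFS4_POLL_RETRY_MAX);
                    rpc_restart_call_prepare(task);
                    break;
            case -NFS4ERR_DEADSESSION:
                    /* Run the normal error handling, but do not retry: the
                     * session reset that follows will send its own
                     * RECLAIM_COMPLETE as part of the new reclaim pass. */
                    break;
            }
    }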
-
Committed by Trond Myklebust
Otherwise we have no guarantee that other processes won't start another RPC call while we're resetting the session. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Alexandros Batsakis
The server can indicate a number of error conditions by setting the appropriate bits in the SEQUENCE operation. The client re-establishes state with the server when it receives one of those, with the action depending on the specific case. Signed-off-by: Alexandros Batsakis <batsakis@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
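A condensed sketch of dispatching on the SEQ4_STATUS_* bits returned by SEQUENCE; the set of flags shown and the recovery actions in the comments are illustrative, not an exhaustive copy of the handler:

    /* Illustrative mapping from SEQUENCE status bits to recovery actions. */
    static void nfs41_handle_sequence_flags(struct nfs_client *clp, u32 flags)
    {
            if (flags & SEQ4_STATUS_CB_PATH_DOWN) {
                    /* re-establish the callback channel */
            }
            if (flags & (SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED |
                         SEQ4_STATUS_ADMIN_STATE_REVOKED)) {
                    /* all state revoked: recover the lease and reclaim state */
            }
            if (flags & SEQ4_STATUS_LEASE_MOVED) {
                    /* part of the lease has been migrated to another server */
            }
    }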
-
- 05 December 2009, 1 commit
-
-
Committed by Andy Adamson
Replace the sync and async handlers' setting of the NFS4CLNT_SESSION_SETUP bit with setting NFS4CLNT_CHECK_LEASE, and let the state manager decide to reset the session. Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-