- 03 March 2010, 2 commits
-
-
Committed by Alexandros Batsakis
If the renewd send queue gets backlogged (e.g., if the server goes down), we keep filling the queue with periodic RENEW/SEQUENCE requests. This patch schedules a new renewd request if and only if the previous one has returned (with either success or failure).
Signed-off-by: Alexandros Batsakis <batsakis@netapp.com>
[Trond.Myklebust@netapp.com: moved nfs4_schedule_state_renewal() into separate nfs4_renew_release() and nfs41_sequence_release() callbacks to ensure correct behaviour on call setup failure]
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
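A minimal sketch of the approach (the callback name comes from the note above; the body and signature are simplified assumptions, not the upstream diff): the next renewal is scheduled only from the RPC release callback, so at most one RENEW/SEQUENCE request is ever queued.

```c
/* Sketch: runs when the previous RENEW call completes (success, error,
 * or call setup failure), and only then queues the next renewal. */
static void nfs4_renew_release(void *calldata)
{
	struct nfs_client *clp = calldata;

	nfs4_schedule_state_renewal(clp);
}
```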
-
Committed by Alexandros Batsakis
renewd should be killed synchronously before we destroy the session in nfs4_clear_minor_version.
Signed-off-by: Alexandros Batsakis <batsakis@netapp.com>
[Trond.Myklebust@netapp.com: clean up to remove an 'unused function' warning when !CONFIG_NFS_V4]
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 10 February 2010, 21 commits
-
-
Committed by Chuck Lever
For NFSv2 and v3: O_DIRECT writes are always synchronous and are not cached, so nothing should be flushed when closing an NFS O_DIRECT file descriptor; thus there are no write errors to report on close(2). In addition, there is no cached data to verify on the next open(2), so we do not need fresh GETATTR results at close time to compare with, and there is no need for the nfs_revalidate_inode() call when closing an NFS O_DIRECT file. This reduces the number of synchronous on-the-wire requests for a simple open-write-close of an NFS O_DIRECT file by roughly 20%. For NFSv4: call nfs4_do_close() with wait set to zero when closing an NFS O_DIRECT file. The CLOSE still goes on the wire, but the application does not wait for it to complete.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
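An illustrative sketch of the close-path short-circuit (simplified; nfs_wb_all() stands in for whatever flushing the cached path normally does):

```c
/* Sketch: O_DIRECT writes are synchronous and uncached, so a flush on
 * close has nothing to write back and no deferred error to report. */
static int nfs_file_flush(struct file *file, fl_owner_t id)
{
	struct inode *inode = file->f_mapping->host;

	if (file->f_flags & O_DIRECT)
		return 0;		/* nothing cached: skip the flush */

	return nfs_wb_all(inode);	/* cached path: flush dirty pages */
}
```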
-
Committed by Chuck Lever
The bytes counted by the performance counters for NFS writes should reflect write and sync errors: if the write(2) system call reports an error, the bytes should not be counted, and if the write is short, the number of bytes actually written should be counted, not the number requested.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
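A hedged sketch of the accounting rule (NFSIOS_NORMALWRITTENBYTES is the existing counter name; the helper below is hypothetical):

```c
/* Sketch: add to the write byte counter only on success, and only the
 * number of bytes actually written, not the number requested. */
static void nfs_account_write_bytes(struct inode *inode, ssize_t result)
{
	if (result > 0)
		nfs_add_stats(inode, NFSIOS_NORMALWRITTENBYTES, result);
	/* result <= 0: the write failed, nothing is counted */
}
```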
-
Committed by Chuck Lever
Bytes read via the splice API should be accounted for in the NFS performance statistics.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Chuck Lever
Currently, the NFS I/O counters count the number of bytes requested by applications rather than the number of bytes actually read by the system calls. The requested byte count is not very useful for reads, because it is usually just the buffer size: it is a maximum, and frequently does not reflect the number of bytes actually read.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Chuck Lever
Nit: the VFSOPEN and VFSFLUSH counters are function call counters, so count every call to these routines.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Andy Adamson
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Andy Adamson
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Andy Adamson
Return NFS4_OK if the target high slotid equals the enforced high slotid. Fix an nfs_client reference leak.
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Andy Adamson
When the session is reset, the client can renegotiate the slot table size.
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Andy Adamson
Drain the fore channel and reset max_slots to the new value.
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Andy Adamson
For now the back channel ca_maxresponsesize_cached is 0 and there is no backchannel DRC. Return NFS4ERR_REP_TOO_BIG_TO_CACHE when the cb_sequence cachethis field is true. When it is false, return NFS4ERR_RETRY_UNCACHED_REP as the next operation's error. Remember the replay error across compound operation processing.
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
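A condensed sketch of the status selection (the error codes are the NFSv4.1 ones named above; the helper itself is illustrative):

```c
/* Sketch: with no backchannel DRC, a callback that asks to be cached
 * cannot be honoured, and an uncached reply must not be replayed. */
static __be32 nfs4_callback_replay_status(bool cachethis)
{
	if (cachethis)
		return htonl(NFS4ERR_REP_TOO_BIG_TO_CACHE);
	/* returned as the next operation's error, not the CB_SEQUENCE status */
	return htonl(NFS4ERR_RETRY_UNCACHED_REP);
}
```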
-
Committed by Andy Adamson
Make all cb_sequence arguments available to verify_seqid, which will make the replay decisions.
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Andy Adamson
All callback operations have arguments to decode and require processing. The preprocess_nfs4X_op functions catch unsupported or illegal ops, so the decode_args and process_op pointers are always non-NULL.
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Andy Adamson
Skip all other processing when an error is encountered.
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Andy Adamson
Set NFS4ERR_RESOURCE as the CB_COMPOUND status and do not return an op on decode_op_hdr or encode_op_hdr buffer overflow. NFS4ERR_RESOURCE is correct for v4.0; the return for v4.1 will be fixed along with all the other NFS4ERR_RESOURCE errors in a later patch.
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Mike Sager
If a CB_SEQUENCE referring call triple matches a slot table entry, the client is still waiting for a response to the original request. In this case, return NFS4ERR_DELAY as the response to the callback.
Signed-off-by: Mike Sager <sager@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Mike Sager
Traverse a list of referring calls and look for a session/slot/sequence number match.
Signed-off-by: Mike Sager <sager@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
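A simplified sketch of the traversal (the struct layout and field names are assumptions for illustration, not the kernel's exact definitions):

```c
/* Sketch: report whether any referring call in the CB_SEQUENCE arguments
 * names one of our outstanding session/slot/sequence triples. */
static bool find_referring_call(const struct referring_call_list *lists,
				unsigned int nlists,
				const struct nfs4_sessionid *sid,
				u32 slotid, u32 seqid)
{
	unsigned int i, j;

	for (i = 0; i < nlists; i++) {
		if (memcmp(&lists[i].rcl_sessionid, sid, sizeof(*sid)) != 0)
			continue;
		for (j = 0; j < lists[i].rcl_nrefcalls; j++) {
			const struct referring_call *rc =
					&lists[i].rcl_refcalls[j];
			if (rc->rc_slotid == slotid &&
			    rc->rc_sequenceid == seqid)
				return true;	/* original call still in flight */
		}
	}
	return false;
}
```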
-
Committed by Mike Sager
For the CREATE_SESSION attribute ca_maxresponsesize_cached, calculate the value as the RPC reply header size plus the maximum NFS compound reply size.
Signed-off-by: Mike Sager <sager@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
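The calculation itself is a plain sum; a sketch with illustrative parameter names (not the kernel's identifiers):

```c
/* Sketch: ca_maxresponsesize_cached = RPC reply header size
 *                                   + maximum NFS compound reply size */
static unsigned int nfs4_max_resp_sz_cached(unsigned int rpc_reply_hdr_sz,
					    unsigned int max_compound_resp_sz)
{
	return rpc_reply_hdr_sz + max_compound_resp_sz;
}
```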
-
Committed by Jeff Layton
Add a wrapper around rpc_call_sync that handles -EKEYEXPIRED errors from the RPC layer the way an -EJUKEBOX error would be handled if NFSv2 had such a thing. Also add a handler for that error for async calls that resubmits the RPC on -EKEYEXPIRED.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
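A sketch of the synchronous wrapper (the helper name and the five-second delay are assumptions; rpc_call_sync() is the existing RPC entry point):

```c
/* Sketch: retry while the krb5 ticket is expired, the way the v3 code
 * retries -EJUKEBOX, but give up if a fatal signal is pending. */
static int nfs_rpc_wrapper(struct rpc_clnt *clnt, struct rpc_message *msg,
			   int flags)
{
	int res;

	do {
		res = rpc_call_sync(clnt, msg, flags);
		if (res != -EKEYEXPIRED)
			break;
		schedule_timeout_killable(5 * HZ);	/* wait, then retry */
		res = -ERESTARTSYS;
	} while (!fatal_signal_pending(current));
	return res;
}
```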
-
Committed by Jeff Layton
We're using -EKEYEXPIRED to indicate that a krb5 credcache contains an expired ticket and that the NFS layer should retry the RPC call instead of returning an error to the caller. Handle this as we would an -EJUKEBOX error return.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Jeff Layton
If a krb5 TGT expires, we don't want to return an error immediately; if someone has a long-running job and simply forgets to run "kinit" in time, that would make it fail. Instead, treat this situation as we would NFS4ERR_DELAY and retry the upcall after delaying a bit with an exponential backoff. This patch just makes any place that would handle NFS4ERR_DELAY also handle -EKEYEXPIRED the same way. In the future we may want to be more sophisticated, however, and handle hard vs. soft mounts differently, or specify an upper limit on how long we will wait for a new TGT to be acquired.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
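A condensed sketch of the NFSv4 error-handler arm (not the literal upstream hunk; nfs4_delay() is the existing helper that doubles its timeout on each pass):

```c
/* Sketch: an expired krb5 TGT is handled exactly like NFS4ERR_DELAY -
 * sleep with exponential backoff and ask the caller to retry the RPC. */
switch (errorcode) {
case -NFS4ERR_DELAY:
case -EKEYEXPIRED:
	ret = nfs4_delay(server->client, &exception->timeout);
	if (ret == 0)
		exception->retry = 1;	/* caller re-issues the call */
	break;
}
```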
-
- 03 February 2010, 3 commits
-
-
Committed by Trond Myklebust
If the NFS_ATTR_FATTR_TYPE field isn't set in fattr->valid, then we should not set the S_IFMT part of inode->i_mode.
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
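A condensed sketch of the guard (the helper is hypothetical; the flag and mode macros are the real ones named above):

```c
/* Sketch: take the file-type bits from the fattr only when the server
 * actually reported a type; otherwise keep the type the inode has. */
static void nfs_set_inode_type(struct inode *inode,
			       const struct nfs_fattr *fattr)
{
	if (fattr->valid & NFS_ATTR_FATTR_TYPE)
		inode->i_mode = (inode->i_mode & ~S_IFMT) |
				(fattr->mode & S_IFMT);
}
```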
-
Committed by Trond Myklebust
Ensure that we unregister the bdi before kill_anon_super() calls ida_remove() on our device name.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@kernel.org
-
Committed by Trond Myklebust
The VM/VFS does not allow mapping->a_ops->invalidatepage() to fail. Unfortunately, nfs_wb_page_cancel() may fail if a fatal signal occurs. Since the NFS code assumes that the page stays mapped for as long as the writeback is active, we can end up Oopsing (among other things). The only safe fix here is to convert nfs_wait_on_request() to be uninterruptible (as is already the case with wait_on_page_writeback()).
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@kernel.org
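A sketch of the conversion (close to, but not necessarily identical to, the upstream change; the wait helper simply ignores signals, mirroring wait_on_page_writeback()):

```c
/* Sketch: wait for PG_BUSY to clear without honouring signals. */
static int nfs_wait_bit_uninterruptible(void *word)
{
	io_schedule();
	return 0;	/* never abandon the wait because of a signal */
}

int nfs_wait_on_request(struct nfs_page *req)
{
	return wait_on_bit(&req->wb_flags, PG_BUSY,
			   nfs_wait_bit_uninterruptible,
			   TASK_UNINTERRUPTIBLE);
}
```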
-
- 27 January 2010, 8 commits
-
-
Committed by Trond Myklebust
Even if the server is crazy, we should be able to mark the stateid as bad to ensure it gets recovered.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
-
Committed by Trond Myklebust
Currently, nfs4_handle_exception() will call it twice if called with an error of -NFS4ERR_STALE_CLIENTID, -NFS4ERR_STALE_STATEID or -NFS4ERR_EXPIRED.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@kernel.org
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
-
Committed by Trond Myklebust
In most cases, we just want to mark the lock_stateid sequence id as uninitialised.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@kernel.org
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
-
Committed by David Howells
Avoid the following warnings when CONFIG_NFS_V4=n:
fs/nfs/sysctl.c:19: warning: unused variable `nfs_set_port_max'
fs/nfs/sysctl.c:18: warning: unused variable `nfs_set_port_min'
by making those variables contingent on NFSv4 being configured.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
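The fix is a configuration guard; a sketch (the variable values are assumed from the warning messages above):

```c
/* Sketch: only define the sysctl port bounds when their sole user (NFSv4)
 * is actually configured, so !CONFIG_NFS_V4 builds stay warning-free. */
#ifdef CONFIG_NFS_V4
static const int nfs_set_port_min = 0;
static const int nfs_set_port_max = 65535;
#endif
```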
-
Committed by H Hartley Sweeten
The symbol nfs_commitdata_release is only used locally in this file. Make it static to prevent the following sparse warning:
warning: symbol 'nfs_commitdata_release' was not declared. Should it be static?
Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
-
Committed by Trond Myklebust
If someone calls nfs_release_page(), we presumably already know that the page is clean; however, it may still be holding an unstable write.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@kernel.org
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@kernel.org
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
-
- 07 January 2010, 1 commit
-
-
Committed by OGAWA Hirofumi
A recent change missed updating "rehash"; with that change, the dentry can be added to the hash twice. This explains the Oops (dereferencing a freed dentry in __d_lookup()) seen on my machine.
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Reported-by: Marvin <marvin24@gmx.de>
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 18 December 2009, 1 commit
-
-
Committed by Linus Torvalds
This reverts commit e9496ff4. Quoth Al: "it's dependent on a lot of other stuff not currently in mainline and badly broken with current fs/namespace.c. Sorry, badly out-of-order cherry-pick from old queue. PS: there's a large pending series reworking the refcounting and lifetime rules for vfsmounts that will, among other things, allow to rip a subtree away _without_ dissolving connections in it, to be garbage-collected when all active references are gone. It's considerably saner wrt "is the subtree busy" logics, but it's nowhere near being ready for merge at the moment; this changeset is one of the things becoming possible with that sucker, but it certainly shouldn't have been picked during this cycle. My apologies..."
Noticed-by: Eric Paris <eparis@redhat.com>
Requested-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 December 2009, 1 commit
-
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 16 December 2009, 3 commits
-
-
Committed by Trond Myklebust
Commit 5601a00d (nfs: run state manager in privileged mode) introduced a regression in the NFSv4 code when compiled with CONFIG_NFS_V4_1: the calls to nfs4_end_drain_session() from the main loop in nfs4_state_manager() Oops due to the lack of an NFSv4.1 session when running NFSv4.0. The fix is to move those two calls back into nfs41_init_clientid() and nfs4_reset_session(). The calls to nfs4_end_drain_session() that remain inside nfs4_state_manager() are safe, since the NFSv4.0 code never sets the NFS4CLNT_SESSION_DRAINING bit.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
If the CLOSE or OPEN_DOWNGRADE call triggers a state recovery and has to be resent, then we must release the seqid; otherwise the open recovery will wait for the close to finish, which causes a deadlock. This is mainly an NFSv4.1 problem, although it can theoretically happen with NFSv4.0 too in an OPEN_DOWNGRADE situation.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Andy Adamson
If the rsize or wsize is not set on the mount command, negotiate the highest supported rsize and wsize during session creation. This fixes a bug where the client negotiated nfs41_maxwrite_overhead as ca_maxrequestsize and nfs41_maxread_overhead as ca_maxresponsesize, resulting in NFS4ERR_REQ_TOO_BIG errors on writes.
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-