- 24 October 2010, 10 commits
-
-
Committed by Bryan Schumaker
Remove the page size checking code for a readdir decode. This is now done by decode_dirent with xdr_streams. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Bryan Schumaker
Convert nfs*xdr.c to use an xdr stream in decode_dirent. This will prevent a kernel oops that has been occurring when reading a vmapped page. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
We sometimes need to be able to read ahead in an xdr_stream without incrementing the current pointer position. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
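A minimal sketch of what such a non-consuming read looks like, modeled on xdr_inline_decode() from include/linux/sunrpc/xdr.h; the function name and body here are illustrative assumptions, not the exact helper this commit adds:

```c
#include <linux/sunrpc/xdr.h>

/*
 * Illustrative only: return a pointer to the next @nbytes of inline
 * data, but leave xdr->p alone so a later xdr_inline_decode() still
 * sees the same stream position.
 */
static __be32 *xdr_peek_bytes(struct xdr_stream *xdr, size_t nbytes)
{
        __be32 *p = xdr->p;
        __be32 *q = p + XDR_QUADLEN(nbytes);

        if (unlikely(q > xdr->end || q < p))
                return NULL;    /* not enough buffered data */
        return p;               /* cursor deliberately not advanced */
}
```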
-
Committed by Bryan Schumaker
We will now use readdir plus even on directories that are very large. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Bryan Schumaker
This patch adds readdir plus support to the cache array. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
If we're going through the loop in nfs_readdir() more than once, we usually do not want to restart searching from the beginning of the page cache. We only want to do that if the previous search failed... Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
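Paraphrased, the control flow becomes roughly the fragment below; the descriptor fields and function name follow fs/nfs/dir.c of this period, but the flow is a reconstruction from the commit text, not the literal patch:

```c
int res;

/* desc->page_index persists across nfs_readdir() loop iterations and
 * is rewound to the first page only after a search actually misses. */
res = readdir_search_pagecache(desc);
if (res == -EBADCOOKIE) {
        desc->page_index = 0;   /* previous search failed: restart */
        res = readdir_search_pagecache(desc);
}
```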
-
Committed by Bryan Schumaker
This patch adds the readdir cache array and functions to retrieve the array stored on a cache page, clear the array by freeing allocated memory, add an entry to the array, and search the array for a given cookie. It then modifies readdir to make use of the new cache array. With the new cache array method, we no longer need some of this code. Finally, nfs_llseek_dir() will set file->f_pos to a value greater than 0 and desc->dir_cookie to zero. When we see this, readdir needs to find the file at position file->f_pos from the start of the directory. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
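A sketch of roughly what the per-page array and its cookie search look like; the field and function names approximate fs/nfs/dir.c of this period and should be treated as assumptions:

```c
#include <linux/types.h>
#include <linux/dcache.h>       /* struct qstr */

/* Approximate shape: one such array lives at the start of every
 * page-cache page belonging to the directory mapping. */
struct nfs_cache_array_entry {
        u64 cookie;             /* server-issued readdir cookie */
        u64 ino;                /* inode number (for readdir plus) */
        struct qstr string;     /* entry name */
};

struct nfs_cache_array {
        unsigned int size;      /* number of valid entries on this page */
        int eof_index;          /* index of final entry if EOF seen, else -1 */
        u64 last_cookie;
        struct nfs_cache_array_entry array[];
};

/* Illustrative search: locate the entry matching a given cookie. */
static int nfs_readdir_search_array_for_cookie(struct nfs_cache_array *array,
                                               u64 cookie, unsigned int *index)
{
        unsigned int i;

        for (i = 0; i < array->size; i++) {
                if (array->array[i].cookie == cookie) {
                        *index = i;
                        return 0;
                }
        }
        return -EAGAIN;         /* not on this page; try the next one */
}
```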
-
Committed by Randy Dunlap
nfs4state.c uses interfaces from ratelimit.h. It needs to include that header file to fix build errors: fs/nfs/nfs4state.c:1195: warning: type defaults to 'int' in declaration of 'DEFINE_RATELIMIT_STATE' fs/nfs/nfs4state.c:1195: warning: parameter names (without types) in function declaration fs/nfs/nfs4state.c:1195: error: invalid storage class for function 'DEFINE_RATELIMIT_STATE' fs/nfs/nfs4state.c:1195: error: implicit declaration of function '__ratelimit' fs/nfs/nfs4state.c:1195: error: '_rs' undeclared (first use in this function) Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Cc: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: linux-nfs@vger.kernel.org Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
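For reference, both interfaces named in the errors are real linux/ratelimit.h facilities; a minimal usage sketch (the surrounding function and values are invented for illustration):

```c
#include <linux/kernel.h>
#include <linux/jiffies.h>
#include <linux/ratelimit.h>

static void nfs4_complain(void)         /* hypothetical caller */
{
        /* at most 3 messages per 5*HZ jiffies (illustrative values) */
        static DEFINE_RATELIMIT_STATE(_rs, 5 * HZ, 3);

        if (__ratelimit(&_rs))
                printk(KERN_WARNING "NFS: state manager complaint\n");
}
```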
-
Committed by Trond Myklebust
Otherwise, we cannot recover state correctly. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
If nfs_intent_set_file() returns an error, we usually want to pass that back up the stack. Also ensure that nfs_open_revalidate() returns '1' on success. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 20 October 2010, 4 commits
-
-
Committed by Trond Myklebust
If the server sends us an NFS4ERR_STALE_CLIENTID while the state management thread is busy reclaiming state, we do want to treat all state that wasn't reclaimed before the STALE_CLIENTID as if a network partition occurred (see the edge conditions described in RFC3530 and RFC5661). What we do not want to do is send an nfs4_reclaim_complete(), since we haven't yet even started reclaiming state after the server rebooted. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: stable@kernel.org
-
Committed by Trond Myklebust
In the case of a server reboot, the state recovery thread starts by calling nfs4_state_end_reclaim_reboot() in order to avoid edge conditions when the server reboots while the client is in the middle of recovery. However, if the client has already marked the nfs4_state as requiring reboot recovery, then the above behaviour will cause the recovery thread to treat the open as if it were part of such an edge condition: the open will be recovered as if it were part of a lease expiration (and all the locks will be lost). The fix is to remove the call to nfs4_state_mark_reclaim_reboot() from nfs4_async_handle_error() and nfs4_handle_exception(). Instead we leave it to the recovery thread to do this for us. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: stable@kernel.org
-
Committed by Trond Myklebust
NFSv4 open recovery is currently broken: since we do not clear the state->flags bits before attempting recovery, we end up with the 'can_open_cached()' function triggering. This again leads to no OPEN call being put on the wire. Reported-by: Sachin Prabhu <sprabhu@redhat.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: stable@kernel.org
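A hedged sketch of the kind of fix this implies: drop the bits that let can_open_cached() claim a usable open state before recovery starts. The flag names are the ones fs/nfs/nfs4_fs.h uses; the helper wrapping them is invented:

```c
#include <linux/bitops.h>
/* NFS_*_STATE flag constants live in fs/nfs/nfs4_fs.h */

/* Illustrative: before recovering an open, clear the cached-open bits
 * so can_open_cached() cannot short-circuit the on-the-wire OPEN. */
static void example_clear_open_state_flags(struct nfs4_state *state)
{
        clear_bit(NFS_DELEGATED_STATE, &state->flags);
        clear_bit(NFS_O_RDONLY_STATE, &state->flags);
        clear_bit(NFS_O_WRONLY_STATE, &state->flags);
        clear_bit(NFS_O_RDWR_STATE, &state->flags);
}
```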
-
Committed by Trond Myklebust
In the case where we lock the page and then find out that the page has been thrown out of the page cache, we should just return VM_FAULT_NOPAGE. This is what block_page_mkwrite() does in these situations. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: stable@kernel.org
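The adopted pattern, sketched as a generic ->page_mkwrite() handler (the surrounding function and variable names are illustrative, mirroring what block_page_mkwrite() does):

```c
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static int example_page_mkwrite(struct vm_area_struct *vma,
                                struct vm_fault *vmf)
{
        struct page *page = vmf->page;
        struct file *filp = vma->vm_file;

        lock_page(page);
        /* Was the page evicted while we waited for the page lock? */
        if (page->mapping != filp->f_mapping) {
                unlock_page(page);
                return VM_FAULT_NOPAGE; /* let the VM retry the fault */
        }
        /* ... proceed to make the locked page writable ... */
        return 0;
}
```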
-
- 08 October 2010, 1 commit
-
-
Committed by Bryan Schumaker
This patch creates a new idmapper system that uses the request-key mechanism to place a call into userspace to map user and group ids to names. The old idmapper was single threaded, which prevented more than one request from running at a time. This meant that a user would have to wait for an upcall to finish before accessing a cached result. The upcall result is stored on a keyring of type id_resolver. See the file Documentation/filesystems/nfs/idmapper.txt for instructions. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> [Trond: fix up the return value of nfs_idmap_lookup_name and clean up code] Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
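Userspace is wired in through /etc/request-key.conf. Something along these lines is typical (the nfs.idmap callout ships with nfs-utils), but the exact arguments here are from memory, so treat them as an assumption and consult idmapper.txt for the authoritative line:

```
#OP     TYPE         DESCRIPTION  CALLOUT-INFO  PROGRAM + ARGS
create  id_resolver  *            *             /usr/sbin/nfs.idmap %k %d 600
```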
-
- 30 September 2010, 3 commits
-
-
Committed by Trond Myklebust
We may end up removing the current entry from nfs_access_lru_list. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
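When a loop body may unlink the element it is visiting, the usual kernel remedy is the _safe list iterator; an illustrative fragment under that assumption (the reaping predicate is invented):

```c
#include <linux/list.h>

struct nfs_access_entry *cache, *next;

/* The _safe variant caches the next pointer up front, so deleting
 * 'cache' inside the loop body cannot corrupt the traversal. */
list_for_each_entry_safe(cache, next, &nfs_access_lru_list, lru) {
        if (entry_is_stale(cache))      /* hypothetical predicate */
                list_del_init(&cache->lru);
}
```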
-
Committed by Jeff Layton
WB_SYNC_NONE is supposed to mean "don't wait on anything". That should also include not waiting for COMMIT calls to complete. WB_SYNC_NONE is also implied when wbc->nonblocking and wbc->for_background are set, so we can replace those checks in nfs_commit_unstable_pages with a check for WB_SYNC_NONE. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
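A sketch of the resulting check, with the shape of the function assumed from the description (FLUSH_SYNC is the flag that makes the NFS COMMIT wait):

```c
#include <linux/nfs_fs.h>
#include <linux/writeback.h>

static int example_commit_unstable_pages(struct inode *inode,
                                         struct writeback_control *wbc)
{
        int flags = FLUSH_SYNC;

        /* WB_SYNC_NONE means "don't wait on anything", which now also
         * covers COMMIT; this one test subsumes the old nonblocking
         * and for_background checks. */
        if (wbc->sync_mode == WB_SYNC_NONE)
                flags = 0;

        return nfs_commit_inode(inode, flags);  /* simplified tail */
}
```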
-
Committed by Trond Myklebust
In nfs_open_revalidate(), if the open_context() call returns an inode that is not the same as dentry->d_inode, then we will call put_nfs_open_context() with a valid dentry->d_inode, but without the context being part of the nfsi->open_files list. In this case too, we want to just skip the list removal, but we do want to call the ->close_context() callback in order to close the NFSv4 state. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Acked-by: Jeff Layton <jlayton@redhat.com>
-
- 24 September 2010, 3 commits
-
-
Committed by Benny Halevy
Having to explicitly initialize sr_slotid to NFS4_MAX_SLOT_TABLE resulted in numerous bugs. Keeping the current slot as a pointer into the slot table is more straightforward and robust, as it is implicitly set to NULL wherever the seq_res member is initialized to zeroes. Signed-off-by: Benny Halevy <bhalevy@panasas.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
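In effect, the change looks approximately like this; the _old/_new suffixes and the elided fields are illustrative, not the real declarations from include/linux/nfs_xdr.h:

```c
/* Before: a sentinel value had to be stored explicitly everywhere. */
struct nfs4_sequence_res_old {
        u8 sr_slotid;           /* NFS4_MAX_SLOT_TABLE == "no slot" */
        /* ... */
};

/* After: zero-initialising the containing structure is enough,
 * because a NULL pointer naturally means "no slot in use". */
struct nfs4_sequence_res_new {
        struct nfs4_slot *sr_slot;      /* points into the slot table */
        /* ... */
};
```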
-
Committed by Suresh Jayaraman
Display the status of the 'local_lock' mount option in /proc/mounts. Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Benny Halevy
inode may be NULL when put_nfs_open_context() is called from nfs_atomic_lookup() before d_add_unique(dentry, inode). Signed-off-by: Benny Halevy <bhalevy@panasas.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 23 September 2010, 1 commit
-
-
Committed by Suresh Jayaraman
NFS clients since 2.6.12 support flock locks by emulating fcntl byte-range locks. Because of this, some Windows applications, which seem to use both flock (share-mode locks mapped to flock by Samba) and fcntl locks sequentially on the same file, cannot lock: they falsely assume the file is already locked. The problem was reported on a setup with Windows clients accessing Excel files on a Samba-exported share which is originally an NFS mount from a NetApp filer. Older NFS clients (< 2.6.12) did not see this problem, as flock locks were considered local. To support legacy flock behavior, this patch adds a mount option "-olocal_lock=" which can take the following values: 'none' - neither flock locks nor POSIX locks are local; 'flock' - flock locks are local; 'posix' - fcntl/POSIX locks are local; 'all' - both flock locks and POSIX locks are local. Testing: the patch was tested by using the -olocal_lock option with different values and noting the NLM calls in captured network traffic. 'none' - NLM calls were seen during both flock() and fcntl(); the flock lock was granted, fcntl was denied. 'flock' - no NLM calls for flock(); an NLM call was seen for fcntl() and granted. 'posix' - an NLM call was seen for flock() and granted; no NLM call for fcntl(). 'all' - no NLM calls were seen during either flock() or fcntl(). No bugs were seen during NFSv4 locking/unlocking in general or during NFSv4 reboot recovery. Cc: Neil Brown <neilb@suse.de> Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
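For example, a mount that keeps flock() locks client-local while fcntl()/POSIX locks still go to the server via NLM could look like this (server name and export path are placeholders):

```
mount -t nfs -o vers=3,local_lock=flock server:/export /mnt
```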
-
- 22 September 2010, 8 commits
-
-
Committed by Bryan Schumaker
This patch removes all calls to lock_kernel() from the NFS client. It should be applied after the "fs/lock.c: prepare for BKL removal" patch submitted by Arnd Bergmann on September 18. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Chuck Lever
Clean up: introduce a helper to '\0'-terminate XDR strings that are placed in a page in the page cache. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
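If memory serves, the helper is xdr_terminate_string(); a sketch under that assumption, using the kmap_atomic() calling convention of this era:

```c
#include <linux/highmem.h>
#include <linux/sunrpc/xdr.h>

/* Sketch: write a NUL just past the XDR string that decode placed in
 * the first page of @buf, so callers may treat it as a C string. */
void xdr_terminate_string(struct xdr_buf *buf, const u32 len)
{
        char *kaddr;

        kaddr = kmap_atomic(buf->pages[0], KM_USER0);
        kaddr[buf->page_base + len] = '\0';
        kunmap_atomic(kaddr, KM_USER0);
}
```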
-
Committed by Chuck Lever
Clean up. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Chuck Lever
Clean up. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
create_singlethread_workqueue() is deprecated. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
create_workqueue() is a deprecated function. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
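Both of the two preceding commits point at the alloc_workqueue() family as the replacement; a sketch (queue name, flags, and the wrapper function are illustrative):

```c
#include <linux/workqueue.h>

static struct workqueue_struct *nfsiod_workqueue;       /* illustrative */

static int example_nfsiod_start(void)
{
        struct workqueue_struct *wq;

        /* Replacement for create_workqueue() and
         * create_singlethread_workqueue(); WQ_MEM_RECLAIM gives the
         * queue a rescuer thread, which I/O paths need. */
        wq = alloc_workqueue("nfsiod", WQ_MEM_RECLAIM, 0);
        if (wq == NULL)
                return -ENOMEM;
        nfsiod_workqueue = wq;
        return 0;
}
```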
-
Committed by Trond Myklebust
This fixes an oopsable condition that was introduced by commit d3d4152a ("nfs: make sillyrename an async operation"). Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
The call to nfs_async_rename_release() after rpc_run_task() is incorrect: rpc_run_task() is always guaranteed to call the ->rpc_release() method. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
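In other words, the caller must not release the calldata itself; a sketch of the corrected ownership rule (setup details elided, wrapper function invented):

```c
#include <linux/err.h>
#include <linux/sunrpc/sched.h>

static int example_start_rename(struct rpc_task_setup *task_setup_data)
{
        struct rpc_task *task;

        /* rpc_run_task() owns task_setup_data->callback_data from here
         * on and guarantees ->rpc_release() runs exactly once, even on
         * failure, so no manual release call may follow it. */
        task = rpc_run_task(task_setup_data);
        if (IS_ERR(task))
                return PTR_ERR(task);   /* calldata already released */
        rpc_put_task(task);
        return 0;
}
```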
-
- 18 September 2010, 4 commits
-
-
Committed by Jeff Layton
A synchronous rename can be interrupted by a SIGKILL. If that happens during a sillyrename operation, it's possible for the rename call to be sent to the server but for the task to exit before the reply is processed. If this happens, the sillyrenamed file won't get cleaned up during nfs_dentry_iput and the server is left with a dangling .nfs* file hanging around. Fix this problem by turning sillyrename into an asynchronous operation and having the task doing the sillyrename just wait on the reply. If the task is killed before the sillyrename completes, it'll still proceed to completion. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
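The resulting tail of the sillyrename path, approximately (error handling trimmed; nfs_async_rename() is the helper this series introduces, while the wrapper function here is invented):

```c
#include <linux/err.h>
#include <linux/sunrpc/sched.h>

/* Assumed shape: dir is the parent inode, dentry the victim,
 * sdentry the generated .nfs* silly name. */
static int example_sillyrename_wait(struct inode *dir,
                                    struct dentry *dentry,
                                    struct dentry *sdentry)
{
        struct rpc_task *task;
        int error;

        task = nfs_async_rename(dir, dir, dentry, sdentry);
        if (IS_ERR(task))
                return PTR_ERR(task);

        /* Wait here; if we are killed, the RPC still runs to
         * completion in the background, so the server never keeps a
         * stale .nfs* file. */
        error = rpc_wait_for_completion_task(task);
        if (error == 0)
                error = task->tk_status;
        rpc_put_task(task);
        return error;
}
```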
-
Committed by Jeff Layton
...since that's where most of the sillyrenaming code lives. A comment block is added to the beginning as well to clarify how sillyrenaming works. Also, make nfs_async_unlink() static, as nfs_sillyrename() is its only caller. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Jeff Layton
Right now, v3 and v4 have their own variants. Create a standard struct that will work for both v3 and v4. v2 doesn't get anything back but a simple error, and so isn't affected by this. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Jeff Layton
Each NFS version has its own version of the rename args container. Standardize them on a common one that's identical to the one NFSv4 uses. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
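Together with the previous commit, the shared containers come out approximately like this (paraphrased from include/linux/nfs_xdr.h; sequence and bitmask fields are omitted, so treat the exact layout as an approximation):

```c
#include <linux/dcache.h>
#include <linux/nfs_xdr.h>

/* Approximate shape of the common argument container. */
struct example_renameargs {
        const struct nfs_fh     *old_dir;
        const struct nfs_fh     *new_dir;
        const struct qstr       *old_name;
        const struct qstr       *new_name;
};

/* Approximate shape of the common result container (v4-derived). */
struct example_renameres {
        const struct nfs_server *server;
        struct nfs4_change_info  old_cinfo;
        struct nfs_fattr        *old_fattr;
        struct nfs4_change_info  new_cinfo;
        struct nfs_fattr        *new_fattr;
};
```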
-
- 17 September 2010, 6 commits
-
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
Remove all remaining references to struct nameidata from the low-level NFS layers. Again pass down a partially initialised struct nfs_open_context when we want to do an atomic open+create. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
Remove references to 'struct nameidata' from the low-level open_revalidate code, and replace them with a struct nfs_open_context which will be correctly initialised upon success. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
Start moving the 'struct nameidata'-dependent code out of the lower-level NFS code in preparation for the removal of open intents. Instead of the struct nameidata, we pass down a partially initialised struct nfs_open_context that will be fully initialised by the atomic open upon success. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
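Conceptually, the calling convention across this series moves toward something like the sketch below; every identifier is an assumption reconstructed from the commit text, not verbatim kernel code:

```c
#include <linux/err.h>
#include <linux/nfs_fs.h>

/* Hypothetical caller: build a partially-initialised open context and
 * hand it to the protocol layer; the NFSv4 atomic open completes the
 * initialisation only if the OPEN succeeds. */
static struct inode *example_do_open(struct inode *dir, struct path *path,
                                     struct rpc_cred *cred, fmode_t fmode,
                                     int open_flags, struct iattr *attr)
{
        struct nfs_open_context *ctx;
        struct inode *inode;

        /* The partially-initialised context replaces nameidata... */
        ctx = alloc_nfs_open_context(path, cred, fmode);    /* name assumed */
        if (IS_ERR(ctx))
                return ERR_CAST(ctx);

        /* ...and the atomic open fills in the rest on success. */
        inode = NFS_PROTO(dir)->open_context(dir, ctx, open_flags, attr);
        put_nfs_open_context(ctx);
        return inode;
}
```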
-