- 11 March 2009, 1 commit
-
-
Committed by Trond Myklebust
Fix a memory leak due to allocation in the XDR layer. In cases where the RPC call needs to be retransmitted, we end up allocating new pages without clearing the old ones. Fix this by moving the allocation into nfs3_proc_setacls(). Also fix an issue discovered by Kevin Rudd, whereby the amount of memory reserved for the acls in the xdr_buf->head was miscalculated, causing corruption.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 24 December 2008, 2 commits
-
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 15 October 2008, 1 commit
-
-
Committed by Trond Myklebust
It appears that 'jiffies' timestamps do not have high enough resolution for nfs_inode_attrs_need_update(). One problem is that a GETATTR can be launched within < 1 jiffy of the last operation that updated the attribute. Another problem is that RPC calls can take < 1 jiffy to execute. We can fix this by switching the variables to use a simple global counter that gets incremented every time we start another GETATTR call.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
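A small standalone C sketch of the generation-counter idea, using illustrative names (attr_generation, attrs_need_update) rather than the kernel's; it shows how a counter bumped per GETATTR orders replies even when both calls complete within the same clock tick:

```c
#include <stdio.h>

static unsigned long attr_generation;   /* bumped before every GETATTR */

struct fake_inode {
    unsigned long attr_gencount;        /* generation of the cached attributes */
};

/* Called before issuing a GETATTR-like request. */
static unsigned long begin_attr_fetch(void)
{
    return ++attr_generation;
}

/* New attributes win only if they were fetched after the cached ones. */
static int attrs_need_update(const struct fake_inode *inode,
                             unsigned long fetched_gen)
{
    return (long)(fetched_gen - inode->attr_gencount) > 0;
}

int main(void)
{
    struct fake_inode inode = { .attr_gencount = 0 };

    unsigned long g1 = begin_attr_fetch();   /* first GETATTR starts */
    unsigned long g2 = begin_attr_fetch();   /* second GETATTR starts */

    /* The second reply lands first and updates the cache. */
    if (attrs_need_update(&inode, g2))
        inode.attr_gencount = g2;

    /* The stale first reply is now rejected, even though both calls
     * could have completed within a single jiffy. */
    printf("first reply stale: %s\n",
           attrs_need_update(&inode, g1) ? "no" : "yes");
    return 0;
}
```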
-
- 09 October 2008, 1 commit
-
-
Committed by Trond Myklebust
Peter Staubach suggested reducing NFS4_SETCLIENTID_NAMELEN by one byte so as to avoid 7 bytes of unnecessary padding.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
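An illustrative layout (not the actual nfs4_setclientid definition) showing how trimming a name buffer by one byte can eliminate the trailing padding before an 8-byte-aligned member:

```c
#include <stdio.h>
#include <stdint.h>

/* A 57-byte array followed by an 8-byte-aligned field leaves 7 bytes of
 * padding; trimming the array to 56 bytes removes it. */
struct id_padded {
    char     name[57];   /* 57 bytes -> 7 bytes of padding follow */
    uint64_t verifier;
};

struct id_packed {
    char     name[56];   /* one byte shorter -> no padding needed */
    uint64_t verifier;
};

int main(void)
{
    printf("padded: %zu bytes\n", sizeof(struct id_padded));  /* 72 */
    printf("packed: %zu bytes\n", sizeof(struct id_packed));  /* 64 */
    return 0;
}
```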
-
- 08 October 2008, 2 commits
-
-
Committed by Chuck Lever
The sc_name field is currently 56 bytes long. This is not large enough to hold a pair of IPv6 addresses, the authentication type, the protocol name, and a uniquifier number. The maximum possible size of the name string using IPv6 addresses is just under 110 bytes, so I increased the size of the sc_name field to accommodate this maximum.

In addition, the strings in the nfs4_setclientid structure are constructed with scnprintf(), which wants to terminate its output with '\0'. The sc_netid field was large enough only for a three-byte netid string and a '\0', so inet6 netids were being truncated. Perhaps we don't need the overhead of scnprintf() to do a simple string copy, but I fixed this by increasing the size of the buffer by one byte.

Since all three of the string buffers in nfs4_setclientid are constructed with scnprintf(), I increased the size of all three by one byte to document the requirement, although I don't think either the universal address field or the name field will be so small that these strings get truncated in this way.

The size of the Linux client's client ID on the wire will be larger than before. RFC 3530 suggests the size limit for client IDs is 1024, and we are still well below that.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
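A tiny userspace illustration of the truncation issue, using snprintf() in place of the kernel's scnprintf(): a buffer sized for a three-character netid plus NUL cannot hold "tcp6", so the sizing must account for the terminator:

```c
#include <stdio.h>

int main(void)
{
    /* Room for a 3-character netid plus the terminating NUL that
     * snprintf() always writes. */
    char small[4];
    char big[5];

    snprintf(small, sizeof(small), "%s", "tcp6");  /* truncated to "tcp" */
    snprintf(big,   sizeof(big),   "%s", "tcp6");  /* fits */

    printf("small=\"%s\" big=\"%s\"\n", small, big);
    return 0;
}
```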
-
Committed by Richard Kennedy
Remove 8 bytes of padding from struct nfs_fattr on 64-bit builds. This also removes padding from several NFS structures, including 16 bytes from nfs4_opendata, nfs4_createdata and nfs3_createdata, and 8 bytes from nfs_read_data, nfs_write_data, nfs_removeres and nfs4_closedata. This also reduces the reported stack usage of many NFS functions (30+).
Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
----
This patch is against the latest git 2.6.27-rc4. I've built and run this on my AMD64 desktop, and successfully run _simple_ tests with a 64-bit client => 32-bit server and a 32-bit client => 64-bit server. On Fedora with gcc (GCC) 4.3.0 20080428 (Red Hat 4.3.0-8), checkpatch reports 33 functions with reduced stack usage, e.g.:
__nfs_revalidate_inode [nfs] 216 => 200
_nfs4_proc_access [nfs] 304 => 288
_nfs4_proc_link [nfs] 536 => 504
_nfs4_proc_remove [nfs] 304 => 288
_nfs4_proc_rename [nfs] 584 => 552
nfs3_proc_access [nfs] 272 => 256
nfs3_proc_getacl [nfs] 384 => 368
nfs3_proc_link [nfs] 496 => 464
etc. I can supply the complete list if anyone is interested.
regards
Richard
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
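A minimal sketch (with made-up field names, not the real nfs_fattr layout) of why grouping 8-byte members ahead of 4-byte ones removes padding on 64-bit builds:

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative only: interleaving 4- and 8-byte members forces padding. */
struct attrs_before {
    uint32_t valid;
    uint64_t size;      /* 4 bytes of padding precede this field */
    uint32_t nlink;
    uint64_t change;    /* and another 4 precede this one */
};

struct attrs_after {    /* same members, 8-byte ones grouped first */
    uint64_t size;
    uint64_t change;
    uint32_t valid;
    uint32_t nlink;
};

int main(void)
{
    printf("before: %zu bytes\n", sizeof(struct attrs_before)); /* 32 on LP64 */
    printf("after:  %zu bytes\n", sizeof(struct attrs_after));  /* 24 */
    return 0;
}
```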
-
- 10 July 2008, 2 commits
-
-
Committed by Trond Myklebust
All instances are set to nfs_open(), so we should just remove the redundant indirection. Ditto for the file_release op.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
NFSv2 file locking currently fails the Connectathon tests, because the calls to the VFS locking code do not return an EINVAL error if the struct file_lock overflows the 32-bit boundaries. The problem is due to the fact that we occasionally call helpers from fs/locks.c in order to avoid RPC calls to the server when we know that a local process holds the lock. These helpers are, of course, always 64-bit enabled, so EINVAL is not returned in cases where it would be if the call had gone to the NLM code. For consistency, we therefore add support for a bounds-checking helper.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
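A hedged sketch of what such a bounds check might look like; the constant and function name are assumptions for illustration, not the actual helper added by the patch:

```c
#include <stdio.h>
#include <errno.h>
#include <stdint.h>

#define NFS2_MAX_OFFSET 0x7fffffffLL   /* assumed offset limit for the sketch */

/* Illustrative bounds check: reject lock ranges that cannot be expressed
 * in the 32-bit on-the-wire format before handing them to local helpers. */
static int check_lock_range(int64_t start, int64_t end)
{
    if (start < 0 || start > NFS2_MAX_OFFSET)
        return -EINVAL;
    if (end != -1 && (end < start || end > NFS2_MAX_OFFSET))
        return -EINVAL;
    return 0;
}

int main(void)
{
    printf("%d\n", check_lock_range(0, 100));            /* 0 */
    printf("%d\n", check_lock_range(0, 0x100000000LL));  /* -EINVAL */
    return 0;
}
```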
-
- 20 April 2008, 1 commit
-
-
Committed by Trond Myklebust
It is quite possible that the OPEN, CLOSE, LOCK, LOCKU, ... compounds fail before the actual stateful operation has been executed (for instance in the PUTFH call). There is no way to tell from the overall status result which operations of the COMPOUND were executed. The fix is to move the incrementing of the sequence id into the XDR layer, so that we do it as we process the results from the stateful operation.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 30 January 2008, 4 commits
-
-
Committed by Chuck Lever
RPC protocol version numbers are unsigned.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Chuck Lever
Ensure that the RPC buffer size specified for NFSv4 SETCLIENTID procedures matches what we are encoding into the buffer. See the definition of struct nfs4_setclientid {} and the encode_setclientid() function.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
Move the common code for setting up the nfs_write_data and nfs_read_data structures into fs/nfs/read.c, fs/nfs/write.c and fs/nfs/direct.c.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 10 October 2007, 2 commits
-
-
Committed by Trond Myklebust
NFSv2 and v4 don't offer weak cache consistency attributes on WRITE calls. In NFSv3, returning wcc data is optional. In all cases, we want to prevent the client from invalidating our cached data whenever ->write_done() attempts to update the inode attributes.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
It doesn't really make sense to cache an access call without also revalidating the attributes.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 20 July 2007, 2 commits
-
-
Committed by Trond Myklebust
Fix a couple of bugs:
- Don't rely on the parent dentry still being valid when the call completes. Fixes a race with shrink_dcache_for_umount_subtree().
- Don't remove the file if the filehandle has been labelled as stale.
Fix a couple of inefficiencies:
- Remove the global list of sillyrenamed files. Instead we can cache the sillyrename information in dentry->d_fsdata.
- Move common code from unlink_setup/unlink_done into fs/nfs/unlink.c.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
We need a common structure for setting up an unlink() RPC call in order to fix the asynchronous unlink code.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 11 July 2007, 2 commits
-
-
Committed by Trond Myklebust
Currently we just use a 32-bit counter.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Jeff Layton
The Linux NFS4 client simply skips over the bitmask in an O_EXCL open call, and so it doesn't bother to reset any fields that may be holding the verifier. This patch has us save the first two words of the bitmask (which is all the current client has #defines for). The client then later checks this bitmask and turns on the appropriate flags in the sattr->ia_verify field for the following SETATTR call. This patch currently only checks whether the server used the atime and mtime slots for the verifier (which is what the Linux server uses for this). I'm not sure what other fields the server could reasonably use, but adding checks for others should be trivial.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
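A sketch of the bitmask check, using hypothetical bit and flag names (the real FATTR4 word-1 bit values differ); it only illustrates mapping "the server stored the verifier in atime/mtime" onto flags for the follow-up SETATTR:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical bit positions for this sketch only, not the real FATTR4 values. */
#define ATTR_WORD1_TIME_ACCESS (1u << 3)
#define ATTR_WORD1_TIME_MODIFY (1u << 4)

#define RESET_ATIME 0x1
#define RESET_MTIME 0x2

/* If the server stored the exclusive-create verifier in atime/mtime,
 * remember to overwrite those fields in the follow-up SETATTR. */
static unsigned int verifier_attrs_to_reset(uint32_t bm_word0, uint32_t bm_word1)
{
    unsigned int flags = 0;

    (void)bm_word0;                       /* word 0 is unused in this sketch */
    if (bm_word1 & ATTR_WORD1_TIME_ACCESS)
        flags |= RESET_ATIME;
    if (bm_word1 & ATTR_WORD1_TIME_MODIFY)
        flags |= RESET_MTIME;
    return flags;
}

int main(void)
{
    unsigned int f = verifier_attrs_to_reset(0, ATTR_WORD1_TIME_ACCESS |
                                                ATTR_WORD1_TIME_MODIFY);
    printf("reset atime=%d mtime=%d\n", !!(f & RESET_ATIME), !!(f & RESET_MTIME));
    return 0;
}
```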
-
- 13 February 2007, 1 commit
-
-
Committed by Arjan van de Ven
Many struct inode_operations in the kernel can be "const". Marking them const moves these to the .rodata section, which avoids false sharing with potential dirty data. In addition, it'll catch accidental writes to these shared resources at compile time.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
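A self-contained example of the const-methods-table pattern the patch applies: declaring the ops structure const lets it be placed in read-only data, and any later assignment through it becomes a compile error:

```c
#include <stdio.h>

struct file_ops {
    int (*open)(const char *name);
};

static int demo_open(const char *name)
{
    printf("open %s\n", name);
    return 0;
}

/* const places the table in read-only data; an accidental assignment such
 * as demo_ops.open = NULL would now be rejected at compile time. */
static const struct file_ops demo_ops = {
    .open = demo_open,
};

int main(void)
{
    return demo_ops.open("example");
}
```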
-
- 04 February 2007, 1 commit
-
-
Committed by Trond Myklebust
It makes no sense to maintain 2 parallel systems for reading in pages.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 06 December 2006, 1 commit
-
-
Committed by Trond Myklebust
Maintaining two parallel ways of doing synchronous writes is rather pointless. This patch gets rid of the legacy nfs_writepage_sync(), and replaces it with the faster asynchronous writes.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 21 October 2006, 2 commits
-
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Trond Myklebust <trond.myklebust@fys.uio.no>
Acked-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Al Viro
On-the-wire data is big-endian. [In large part pulled from Alexey's patch.]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Trond Myklebust <trond.myklebust@fys.uio.no>
Acked-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
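A short userspace demonstration of the big-endian on-the-wire convention noted above: htonl() emits the most significant byte first regardless of the host's byte order:

```c
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
    uint32_t host = 0x12345678;
    uint32_t wire = htonl(host);     /* big-endian on-the-wire form */
    unsigned char *p = (unsigned char *)&wire;

    /* Regardless of host endianness, the bytes go out most-significant first. */
    printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);  /* 12 34 56 78 */
    return 0;
}
```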
-
- 23 September 2006, 6 commits
-
-
Committed by Chuck Lever
Now that we have a copy of the symlink path in the page cache, we can pass a struct page down to the XDR routines instead of a string buffer. Test plan: Connectathon, all NFS versions.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Chuck Lever
If the LOOKUP or GETATTR in nfs_instantiate fails, nfs_instantiate will do a d_drop before returning. But some callers already do a d_drop in the case of an error return. Make certain we do only one d_drop in all error paths.

This issue was introduced because, over time, the symlink proc API diverged slightly from the create/mkdir/mknod proc API. To prevent other coding mistakes of this type, change the symlink proc API to be more like create/mkdir/mknod and move the nfs_instantiate call into the symlink proc routines so it is used in exactly the same way for create, mkdir, mknod, and symlink.

Test plan: Connectathon, all versions of NFS.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Chuck Lever
include/linux/sunrpc/clnt.h already includes include/linux/sunrpc/xprt.h. We can remove xprt.h from source files that already include clnt.h. Likewise include/linux/sunrpc/timer.h. Test plan: Compile kernel with CONFIG_NFS enabled.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by David Howells
Add some extra const qualifiers into NFS.
Signed-Off-By: David Howells <dhowells@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by David Howells
Add a set_capabilities NFS RPC op so that the server capabilities can be set.
Signed-Off-By: David Howells <dhowells@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by David Howells
Add a lookup-filehandle NFS RPC op so that a file handle can be looked up without requiring dentries, inodes and other VFS machinery when doing an NFS4 pathwalk during mounting.
Signed-Off-By: David Howells <dhowells@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 09 September 2006, 1 commit
-
-
Committed by Trond Myklebust
The logic in nfs_direct_read_schedule and nfs_direct_write_schedule can allow data->npages to be one larger than rpages. This causes a page pointer to be written beyond the end of the pagevec in nfs_read_data (or nfs_write_data). Fix this by making nfs_(read|write)_alloc() calculate the size of the pagevec array and initialise data->npages. Also get rid of the redundant argument to nfs_commit_alloc().
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
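A standalone sketch of the pagevec sizing arithmetic, with assumed names; it shows why the offset of the first byte within its page has to enter the calculation, otherwise a request that straddles a page boundary needs one more page slot than a naive count provides:

```c
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* Number of pages an I/O of 'count' bytes starting at offset 'base' within
 * a page actually touches. Ignoring 'base' undercounts by one whenever the
 * request straddles a page boundary. */
static size_t pages_spanned(size_t base, size_t count)
{
    return (base + count + PAGE_SIZE - 1) / PAGE_SIZE;
}

int main(void)
{
    /* 4096 bytes starting 100 bytes into a page touch two pages, not one. */
    printf("naive: %zu\n", (4096UL + PAGE_SIZE - 1) / PAGE_SIZE);  /* 1 */
    printf("exact: %zu\n", pages_spanned(100, 4096));              /* 2 */
    return 0;
}
```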
-
- 25 August 2006, 1 commit
-
-
Committed by J. Bruce Fields
Neil Brown observed that the current limit of 32 bytes isn't enough to hold two IP addresses and the rest of the stuff we're putting in it, so it's often truncated to the point where it's unlikely to be unique. This can cause spurious CLID_INUSE errors from the server.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
(cherry picked from commit fc8c17ec251e984ab3df9182ed097aa5b577c915)
-
- 29 June 2006, 1 commit
-
-
Committed by Trond Myklebust
This reverts commit ccf01ef7. No idea how git managed this one: when I asked it to merge the odirect topic branch, it actually generated a patch which reverted the change. Reverting the 'merge' will once again reveal Chuck's recent NFS/O_DIRECT work to the world.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 25 June 2006, 2 commits
-
-
Committed by Trond Myklebust
-
Committed by Chuck Lever
Neil Brown observed that the kmalloc() in nfs_get_user_pages() is more likely to fail if the I/O is large enough to require the allocation of more than a single page to keep track of all the pinned pages in the user's buffer. Instead of tracking one large page array per dreq/iocb, track pages per nfs_read/write_data, just like the cached I/O path does. An array for pages is already allocated for us by nfs_readdata_alloc() (and the write and commit equivalents). This is also required for adding support for vectored I/O to the NFS direct I/O path.

The original reason to pin the user buffer and allocate all the NFS data structures before trying to schedule I/O was to ensure all needed resources are allocated on the client before starting to send requests. This reduces the chance that resource exhaustion on the client will cause a short read or write. On the other hand, for an application making very large I/O requests, this means that it will be nearly impossible for the application to make forward progress on a resource-limited client. Thus, moving the buffer-pinning functionality into the I/O scheduling loops should be good for scalability. The next patch will do the same for NFS data structure allocation.

Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
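A rough sketch of the chunking idea, with assumed constants (MAX_PAGES, PAGE_SIZE); it only illustrates issuing a large direct I/O as a series of requests whose per-request page arrays stay within a fixed bound, rather than pinning the whole buffer up front:

```c
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL
#define MAX_PAGES 256UL                   /* pages tracked per request (sketch) */

/* Instead of one array covering the whole user buffer, issue the I/O in
 * chunks small enough that each request's page array fits a fixed bound. */
int main(void)
{
    size_t total = 64UL * 1024 * 1024;    /* a 64 MiB direct I/O request */
    size_t chunk = MAX_PAGES * PAGE_SIZE; /* 1 MiB per request */
    size_t issued = 0, requests = 0;

    while (issued < total) {
        size_t len = total - issued < chunk ? total - issued : chunk;
        /* pin len/PAGE_SIZE pages, build one read/write request, send it */
        issued += len;
        requests++;
    }
    printf("%zu requests of at most %lu pages each\n", requests, MAX_PAGES);
    return 0;
}
```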
-
- 09 June 2006, 4 commits
-
-
Committed by Manoj Naik
Respond to a "moved" error on NFS lookup by setting up the referral. Note: we don't actually follow the referral during lookup/getattr, but later, when we detect an fsid mismatch in inode revalidation (similar to the processing done for cloning submounts). Referrals will have fake attributes until they are actually followed or traversed.
Signed-off-by: Manoj Naik <manoj@almaden.ibm.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Manoj Naik
Use component4-style formats for decoding the list of servers and pathnames in fs_locations.
Signed-off-by: Manoj Naik <manoj@almaden.ibm.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
NFSv4 allows for the fact that filesystems may be replicated across several servers, or that they may be migrated to a backup server in case of failure of the primary server. fs_locations is an NFSv4 operation for retrieving information about the location of migrated and/or replicated filesystems. Based on an initial implementation by Jiaying Zhang <jiayingz@citi.umich.edu>.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
This should enable us to detect whether we are crossing a mountpoint in the case where the server is exporting "nohide" mounts.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-