- 10 October 2007 (25 commits)
-
-
由 \"Talpey, Thomas\ 提交于
This file implements the configuration target, protocol template and constants for the rpcrdma transport framing, for use by the xprtrdma rpc transport implementation. Signed-off-by: NTom Talpey <talpey@netapp.com> Signed-off-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
-
由 \"Talpey, Thomas\ 提交于
Use the per-transport strings to display the transport protocol accurately. Signed-off-by: NTom Talpey <tmt@netapp.com> Signed-off-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
-
由 \"Talpey, Thomas\ 提交于
Instead of an { address family, raw IP protocol number }-tuple, use the newly-defined RPC identifier when creating clients in the upper layers. Signed-off-by: NTom Talpey <tmt@netapp.com> Signed-off-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
-
由 \"Talpey, Thomas\ 提交于
Adds a flag word to the xdrbuf struct which indicates any bulk disposition of the data. This enables RPC transport providers to marshal it efficiently/appropriately, and may enable other optimizations. Signed-off-by: NTom Talpey <tmt@netapp.com> Signed-off-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
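A minimal sketch of the idea follows; the flag field and XDRBUF_* names are assumptions for illustration, not necessarily the identifiers the patch adds.

```c
/*
 * Sketch only: names below are illustrative.  The point is a per-buffer
 * bulk-disposition hint that a transport (e.g. RPC/RDMA) can consult when
 * deciding how to marshal the payload.
 */
#define XDRBUF_READ	0x01	/* bulk payload flows toward the client */
#define XDRBUF_WRITE	0x02	/* bulk payload flows toward the server */

struct xdr_buf_sketch {
	/* ... head/page/tail segments as in the real struct ... */
	unsigned int flags;	/* XDRBUF_* bulk disposition hints */
};

static void mark_bulk_read(struct xdr_buf_sketch *buf)
{
	buf->flags |= XDRBUF_READ;	/* transport may register memory, chunk the reply, etc. */
}
```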
-
Committed by Trond Myklebust
The previous patch introduced a bug when copying the server address. Also clarify a copy into the auth_flavors array: currently the two size calculations are equivalent, but we may decide to change the size of auth_flavors[] at some point. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
由 \"Talpey, Thomas\ 提交于
The user-visible nfs4_mount_data does not contain sufficient data to describe new mount options, and also is now a legacy structure. Replace it with the internal nfs_parsed_mount_data for nfsv4 in-kernel use. Signed-off-by: NTom Talpey <tmt@netapp.com> Acked-by: NChuck Lever <chuck.lever@oracle.com> Signed-off-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
-
由 \"Talpey, Thomas\ 提交于
The user-visible nfs_mount_data does not contain sufficient data to describe new mount options, and also is now a legacy structure. Replace it with the internal nfs_parsed_mount_data for nfsv[23] in-kernel use. Signed-off-by: NTom Talpey <tmt@netapp.com> Acked-by: NChuck Lever <chuck.lever@oracle.com> Signed-off-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
-
由 \"Talpey, Thomas\ 提交于
In preparation for rearranging the nfs mount argument passing, make the nfs_parsed_mount_data struct visible across nfs kernel files. Signed-off-by: NTom Talpey <tmt@netapp.com> Acked-by: NChuck Lever <chuck.lever@oracle.com> Signed-off-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Chuck Lever
Follows the recent edict to remove or replace printk's that can flood the system log. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Chuck Lever
Follows the recent edict to remove or replace printk's that might flood the system log. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Chuck Lever
Follows the recent edict to replace or remove printk's that can be triggered en masse by remote misbehavior. A few that only occur just before a BUG are left in place. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Chuck Lever
I got the 'mounthost=' option wrong: it shouldn't look for an address value, but rather a hostname value. However, the in-kernel mount client and NFS client cannot resolve a hostname by themselves; they rely on user space to pass in the resolved address. Create a new mount option that does take an address, so that the mount program can pass in the mount server's resolved address. The mount hostname is now ignored by the kernel. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by James Lentini
If no mount server port number is specified, the previous change to the kernel mount client inadvertently allows the NFS server's port number to be used as the mount server's port number. If the user specifies an NFS server port (-o port=x), the mount will fail. The fix below sets the mount server's port to 0 if no mount server port is specified by the user. Signed-off-by: James Lentini <jlentini@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Chuck Lever
Simplify the in-kernel mount client by using autobind instead of an explicit call to rpc_getport_sync. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Jeff Layton
A minor thing, but useful when working with a server that has multiple addresses. This also looks like it might be necessary if Miklos' effort to eliminate /etc/mtab ever comes to fruition. When displaying mount options in /proc/mounts, the kernel prints "addr=hostname". This information is redundant, since the hostname is already displayed as part of the "device" section of the mount. This patch changes it to display the IP address to which the socket is connected. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Christoph Hellwig
There is no need to set up foo-objs these days. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Fabio Olive Leite
I would like to discuss the idea that the current checks for attribute timeout using time_after are inadequate for 32-bit architectures, since time_after works correctly only when the two timestamps being compared are within 2^31 jiffies of each other. The signed overflow caused by comparing values more than 2^31 jiffies apart flips the result, causing incorrect assumptions of validity. 2^31 jiffies is a fairly large period of time (~25 days) when compared to the lifetime of most kernel data structures, but for long-lived NFS mounts that can sit idle for months (think of a setup where, for some reason, autofs cannot be used), it is easy to compare inode attribute timestamps with very disparate or even bogus values (as in when jiffies have wrapped many times, where the comparison doesn't even make sense).

Currently the code tests for attribute timeout by simply adding the desired amount of jiffies to the stored timestamp and comparing that with the current timestamp of the obtained attribute data using time_after. This is incorrect: the check only evaluates true for the 2^31 range of jiffies following the timeout's expiry, and flips back to false beyond that.

In testing with artificial jumps of the jiffies (several small jumps, not one big crank) I was able to reproduce a problem found on a server with very long-lived NFS mounts, where attributes would not be refreshed even after touching files and directories on the server:

Initial uptime:
03:42:01 up 6 min, 0 users, load average: 0.01, 0.12, 0.07

NFS volume is mounted and time is advanced:
03:38:09 up 25 days, 2 min, 0 users, load average: 1.22, 1.05, 1.08
# ls -l /local/A/foo/bar /nfs/A/foo/bar
-rw-r--r-- 1 root root 0 Dec 17 03:38 /local/A/foo/bar
-rw-r--r-- 1 root root 0 Nov 22 00:36 /nfs/A/foo/bar
# touch /local/A/foo/bar
# ls -l /local/A/foo/bar /nfs/A/foo/bar
-rw-r--r-- 1 root root 0 Dec 17 03:47 /local/A/foo/bar
-rw-r--r-- 1 root root 0 Nov 22 00:36 /nfs/A/foo/bar

We can see the local mtime is updated, but the NFS mount still shows the old value. The patch below makes it work:

Initial setup:
07:11:02 up 25 days, 1 min, 0 users, load average: 0.15, 0.03, 0.04
# ls -l /local/A/foo/bar /nfs/A/foo/bar
-rw-r--r-- 1 root root 0 Jan 11 07:11 /local/A/foo/bar
-rw-r--r-- 1 root root 0 Jan 11 07:11 /nfs/A/foo/bar
# touch /local/A/foo/bar
# ls -l /local/A/foo/bar /nfs/A/foo/bar
-rw-r--r-- 1 root root 0 Jan 11 07:14 /local/A/foo/bar
-rw-r--r-- 1 root root 0 Jan 11 07:14 /nfs/A/foo/bar

Signed-off-by: Fabio Olive Leite <fleite@redhat.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
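A sketch of the broken check versus a wrap-safe one, assuming the time_in_range() helper from <linux/jiffies.h>; this shows the shape of the fix rather than the exact patch.

```c
#include <linux/jiffies.h>

/*
 * Sketch of the problem and of a wrap-safe check.  time_after() is only
 * meaningful while the two values are within 2^31 jiffies of each other.
 */
static int attr_timed_out_buggy(unsigned long timestamp, unsigned long timeo)
{
	/*
	 * True only while jiffies is within 2^31 of the expiry; once the gap
	 * grows larger, the signed comparison flips and ancient attributes
	 * look fresh again.
	 */
	return time_after(jiffies, timestamp + timeo);
}

static int attr_timed_out_fixed(unsigned long timestamp, unsigned long timeo)
{
	/*
	 * "Still fresh" is a bounded window; anything outside it is treated
	 * as expired, no matter how far jiffies has moved on.
	 */
	return !time_in_range(jiffies, timestamp, timestamp + timeo);
}
```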
-
Committed by Peter Staubach
Hi. Attached is a patch to modify the NFS client code to support 64-bit inode numbers, as appropriate for the system and the NFS protocol version. The code basically just expands the NFS interfaces for routines which handle inode numbers from using ino_t to u64, and then uses the fileid in the nfs_inode instead of i_ino in the inode. The code paths that were updated are the getattr and readdir methods. This should be no real change on 64-bit platforms: since ino_t is an unsigned long there, it is already 64 bits wide. Thanks, ps. Signed-off-by: Peter Staubach <staubach@redhat.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
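A sketch of the width issue, using illustrative names rather than the real nfs_inode layout:

```c
#include <linux/types.h>

/*
 * Sketch only (not the real nfs_inode layout).  On a 32-bit kernel, ino_t
 * and unsigned long are 32 bits wide, so pushing an NFS fileid (64 bits on
 * the wire for v3/v4) through inode->i_ino truncates it.  Carrying the
 * fileid as a u64 through getattr and readdir preserves the full value.
 */
struct nfs_inode_sketch {
	__u64 fileid;		/* server-assigned file id, full width */
};

static __u64 stat_ino_from_fileid(const struct nfs_inode_sketch *nfsi)
{
	return nfsi->fileid;	/* report the 64-bit fileid, not a truncated i_ino */
}
```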
-
Committed by Trond Myklebust
This helps prevent huge queues of background writes from building up whenever the server runs out of disk or quota space, or if someone changes the file access modes behind our backs. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
Schedule writes using WB_SYNC_NONE first, then come back for a second pass using WB_SYNC_ALL. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
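A sketch of the two-pass scheme; the helper name is invented, and do_writepages() is used only as a convenient per-pass worker.

```c
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/writeback.h>

/*
 * Sketch only.  Pass one starts as much I/O as possible without waiting,
 * pass two waits on (and catches) everything, so slow pages do not
 * serialise the whole flush.
 */
static int sketch_flush_two_pass(struct address_space *mapping)
{
	struct writeback_control wbc = {
		.sync_mode   = WB_SYNC_NONE,	/* pass 1: queue writes, don't wait */
		.nr_to_write = LONG_MAX,
	};
	int err = do_writepages(mapping, &wbc);

	if (err)
		return err;

	wbc.sync_mode   = WB_SYNC_ALL;		/* pass 2: wait for completion */
	wbc.nr_to_write = LONG_MAX;
	return do_writepages(mapping, &wbc);
}
```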
-
Committed by Trond Myklebust
The only user of nfs_sync_mapping_range() is nfs_getattr(), which uses it to flush out the entire inode without sending a commit. We therefore replace nfs_sync_mapping_range() with a more appropriate helper. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
Just call write_cache_pages() directly instead of hacking the writeback control structure in order to find out whether we were called from writepages() or directly from the VM. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
The addition of nfs_page_mkwrite() means that we should no longer need to create requests inside nfs_writepage(). Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Trond Myklebust
This is needed in order to set up a proper nfs_page request for mmapped files. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
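For context, the hook in question is ->page_mkwrite() in vm_operations_struct; the sketch below uses the 2.6.23-era callback signature, and the handler name and body are illustrative only.

```c
#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Sketch only.  ->page_mkwrite() is called when a read-only mmapped page is
 * about to become writable, which is the hook that lets the filesystem set
 * up a write request before the page is dirtied.
 */
static int sketch_page_mkwrite(struct vm_area_struct *vma, struct page *page)
{
	/* e.g. lock the page, create or update the nfs_page request, unlock */
	return 0;	/* 0 means: go ahead and make the page writable */
}

static struct vm_operations_struct sketch_file_vm_ops = {
	.fault		= filemap_fault,
	.page_mkwrite	= sketch_page_mkwrite,
};
```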
-
Committed by Trond Myklebust
The recent fix for a circular lock dependency unfortunately introduced a potential memory leak in the event that the call to nlmsvc_lookup_host() fails. Thanks to Roel Kluin for spotting this. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 09 October 2007 (1 commit)
-
-
Committed by Yan Zheng
When the IOCB_FLAG_RESFD flag is set and iocb->aio_resfd is invalid, the statement 'goto out_put_req' is executed. At the 'out_put_req' label, aio_put_req() is called, which requires 'req->ki_filp' to be set. Signed-off-by: Yan Zheng <yanzheng@21cn.com> Cc: Zach Brown <zach.brown@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
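A generic, userspace-C illustration of the ordering rule behind the fix; this is deliberately not the real fs/aio.c code.

```c
#include <stdio.h>
#include <stdlib.h>

/*
 * A shared error exit that releases an object must not be reachable before
 * every field the release routine touches has been initialised.
 */
struct request {
	FILE *filp;			/* stands in for req->ki_filp */
};

static void put_request(struct request *req)
{
	if (req->filp)			/* the release path dereferences filp */
		fclose(req->filp);
	free(req);
}

static int submit_one(const char *path, int resfd_is_valid)
{
	struct request *req = calloc(1, sizeof(*req));
	int ret;

	if (!req)
		return -1;

	req->filp = fopen(path, "r");	/* initialise filp first ... */
	if (!req->filp) {
		ret = -1;
		goto out_put_req;
	}

	if (!resfd_is_valid) {		/* mirrors the IOCB_FLAG_RESFD check */
		ret = -1;
		goto out_put_req;	/* ... so this shared exit is always safe */
	}

	return 0;			/* in the real code the request is handed off here */

out_put_req:
	put_request(req);
	return ret;
}
```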
-
- 04 October 2007 (1 commit)
-
-
Committed by Sunil Mushran
The fs was not unlocking the local alloc inode mutex in the code path in which it failed to find a window of free bits in the global bitmap. Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
- 02 October 2007 (1 commit)
-
-
Committed by Linus Torvalds
Nick Piggin points out that splice isn't being good about the mmap semaphore: while two readers can nest inside each other, it does leave a possible deadlock if a writer (i.e. a new mmap()) comes in during that nesting. Original "just move the locking" patch by Nick, replaced by one by me based on an optimistic pagefault_disable(). And then Jens tested and updated that patch. Reported-by: Nick Piggin <npiggin@suse.de> Tested-by: Jens Axboe <jens.axboe@oracle.com> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
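A schematic of the optimistic pagefault_disable() approach, not the actual fs/splice.c change:

```c
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/*
 * With page faults disabled, the copy can never recurse into the fault
 * handler (and thus never try to nest down_read(&mm->mmap_sem)); if the
 * pages are not resident, the copy fails fast and the caller retries with
 * an ordinary faulting copy once no locks are held.
 */
static int copy_optimistic(void *dst, const void __user *src, size_t len)
{
	unsigned long left;

	pagefault_disable();		/* faults fail immediately instead of sleeping */
	left = __copy_from_user_inatomic(dst, src, len);
	pagefault_enable();

	if (!left)
		return 0;		/* fast path: everything was resident */

	/* Slow path: caller has dropped its locks, so faulting here is fine. */
	return copy_from_user(dst, src, len) ? -EFAULT : 0;
}
```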
-
- 01 October 2007 (1 commit)
-
-
Committed by Tim Shimmin
This reverts commit b394e43e. Lachlan McIlroy says: it tried to fix an issue where log replay is replaying an inode cluster initialisation transaction that should not be replayed because the inode cluster on disk is more up to date. Since we don't log file sizes (we rely on inode flushing to get them to disk), we can't just replay all the transactions in the log and expect the inode to be completely restored; we lose file size updates. Unfortunately this fix is causing more (serious) problems than it is fixing. SGI-PV: 969656 SGI-Modid: xfs-linux-melb:xfs-kern:29804a Signed-off-by: Lachlan McIlroy <lachlan@sgi.com> Signed-off-by: Tim Shimmin <tes@sgi.com>
-
- 29 September 2007 (1 commit)
-
-
Committed by Trond Myklebust
It doesn't look as if the NFS file name limit is being initialised correctly in struct nfs_server. Make sure that we limit whatever is being set in nfs_probe_fsinfo() and nfs_init_server(). Also ensure that readdirplus and nfs4_path_walk respect our file name limits. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 27 September 2007 (1 commit)
-
-
Committed by Trond Myklebust
The problem is that the garbage collector for the 'host' structures, nlm_gc_hosts(), holds nlm_host_mutex while calling down to nlmsvc_mark_resources(), which eventually takes file->f_mutex. We cannot therefore call nlmsvc_lookup_host() from within nlmsvc_create_block(), since the caller will already hold file->f_mutex, so the attempt to grab nlm_host_mutex may deadlock. Fix the problem by calling nlmsvc_lookup_host() outside the file->f_mutex. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
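A generic illustration of the lock-ordering rule being restored, with pthread mutexes standing in for nlm_host_mutex and file->f_mutex; this is not the real lockd code.

```c
#include <pthread.h>

/*
 * The garbage collector path takes host_mutex and then file_mutex, so no
 * other path may request host_mutex while already holding file_mutex.
 * Doing the host lookup first keeps a single global lock order.
 */
static pthread_mutex_t host_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t file_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *lookup_host(void)
{
	void *host;

	pthread_mutex_lock(&host_mutex);
	host = (void *)1;		/* stand-in for the real lookup */
	pthread_mutex_unlock(&host_mutex);
	return host;
}

static void create_block_fixed(void)
{
	void *host = lookup_host();	/* host_mutex taken and released first */

	pthread_mutex_lock(&file_mutex);
	(void)host;			/* ... build the block under file_mutex only */
	pthread_mutex_unlock(&file_mutex);
}
```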
-
- 25 September 2007 (1 commit)
-
-
Committed by Evgeniy Dushistov
Different types of ufs hold their state in different places; to hide this complexity there is ufs_get_fs_state(), which returns the state according to "UFS_SB(sb)->s_flags". During mount, however, ufs_get_fs_state() is called before s_flags is set, which for ufs types such as Sun ufs causes the message "fs need fsck" and a remount in read-only state. Signed-off-by: Evgeniy Dushistov <dushistov@mail.ru> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 October 2007 (2 commits)
-
-
Committed by Andrew Morton
Cc: Bernd Schmidt <bernd.schmidt@analog.com> Cc: David McCullough <davidm@snapgear.com> Cc: Greg Ungerer <gerg@snapgear.com> Cc: Miles Bader <miles.bader@necel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Bryan Wu <bryan.wu@analog.com>
-
Committed by Bernd Schmidt
Add minimum support for the Blackfin relocations, since we don't have enough space in each reloc. The idea is to store a value with one relocation so that subsequent ones can access it. This patch is actually required for Blackfin: currently, if BINFMT_FLAT is enabled, the kernel in the git tree fails to compile. Signed-off-by: Bernd Schmidt <bernd.schmidt@analog.com> Signed-off-by: Bryan Wu <bryan.wu@analog.com> Cc: David McCullough <davidm@snapgear.com> Cc: Greg Ungerer <gerg@snapgear.com> Cc: Miles Bader <miles.bader@necel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 21 September 2007 (6 commits)
-
-
Committed by Jean Tourrilhes
Johannes just found that we are missing a compat-ioctl declaration. The fix is trivial. As with previous compat-ioctl patches, this should also go to stable. More info: http://marc.info/?l=linux-wireless&m=119029667902588&w=2 Signed-off-by: Jean Tourrilhes <jt@hpl.hp.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Sunil Mushran
The ocfs2_vote_msg and ocfs2_response_msg structs needed to be packed to ensure identical sizeofs on 32-bit and 64-bit arches. Without this, we had inadvertently broken 32/64-bit cross mounts. Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
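A standalone illustration of why packing matters for structures exchanged between architectures; the fields are invented, not the real ocfs2_vote_msg layout.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Without packing, the padding inserted around the 64-bit member depends on
 * the architecture's alignment rules, so sizeof differs between 32-bit and
 * 64-bit builds of the same structure.
 */
struct msg_unpacked {
	uint32_t seq;
	uint64_t blkno;		/* 64-bit ABIs insert 4 bytes of padding before this */
	uint16_t flags;
};

struct msg_packed {
	uint32_t seq;
	uint64_t blkno;
	uint16_t flags;
} __attribute__((packed));	/* identical layout on every arch */

int main(void)
{
	printf("unpacked: %zu bytes, packed: %zu bytes\n",
	       sizeof(struct msg_unpacked), sizeof(struct msg_packed));
	return 0;
}
```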
-
Committed by Mark Fasheh
The target page offsets were being incorrectly set a second time in ocfs2_prepare_page_for_write(), which was causing problems on a 16k page size kernel. Additionally, ocfs2_write_failure() was incorrectly using those parameters instead of the parameters for the individual page being cleaned up. Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Mark Fasheh
This was broken for file systems whose cluster size is greater than the page size. Pos needs to be incremented as we loop through the descriptors, and len needs to be capped to the size of a single cluster. Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Mark Fasheh
The ocfs2 write code loops through a page much like the block code, except that ocfs2 allocation units can be any size, including larger than the page size. Typically the allocation size is equal to or larger than the page size - most kernels run 4k pages, the minimum ocfs2 allocation (cluster) size. Some changes introduced during 2.6.23 changed the way writes to pages are handled, and inadvertently broke support for > 4k page size. Instead of just writing one cluster at a time, we now handle the whole page in one pass. This means that multiple (small) separate allocations might happen in the same pass. The allocation code, however, typically optimizes by getting the maximum which was reserved. This triggered a BUG_ON in the extend code where it would ask for a single bit (for one part of a > 4k page) and get back more than it asked for. Fix this by providing a variant of the high-level allocation function which allows the caller to specify a maximum. The traditional function remains and just calls the new one with a maximum determined from the initial reservation. Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
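A sketch of the resulting interface shape, with made-up names rather than the real ocfs2 functions:

```c
/*
 * Sketch only: the allocator gains a variant that caps how many bits the
 * caller can be handed back, and the traditional entry point becomes a
 * wrapper whose cap is "everything that was reserved".
 */
struct alloc_ctxt_sketch {
	unsigned int bits_reserved;	/* what the initial reservation asked for */
};

static int claim_bits_max(struct alloc_ctxt_sketch *ac, unsigned int wanted,
			  unsigned int max, unsigned int *got)
{
	/* search the bitmap, but never hand back more than 'max' bits */
	*got = wanted <= max ? wanted : max;
	return 0;
}

static int claim_bits(struct alloc_ctxt_sketch *ac, unsigned int wanted,
		      unsigned int *got)
{
	/* legacy behaviour: the cap is the whole up-front reservation */
	return claim_bits_max(ac, wanted, ac->bits_reserved, got);
}
```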
-
Committed by Davide Libenzi
This simplifies the signalfd code by avoiding keeping it attached to the sighand for its whole lifetime. This way, the signalfd remains attached to the sighand only during poll(2) (and select and epoll) and read(2). This also allows removing all the custom "tsk == current" checks in kernel/signal.c, since dequeue_signal() will only be called by "current". I think this is also what Ben was suggesting some time ago. The external effect of this is that a thread can extract only its own private signals and the group ones. I think this is acceptable behaviour, in that those are the signals the thread would be able to fetch without signalfd. Signed-off-by: Davide Libenzi <davidel@xmailserver.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
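For reference, a minimal userspace example of the behaviour described (standard signalfd(2) usage):

```c
#include <signal.h>
#include <stdio.h>
#include <sys/signalfd.h>
#include <unistd.h>

/*
 * With the change above, the read() dequeues only signals pending for the
 * calling thread plus the shared group signals, i.e. exactly what the
 * thread could fetch without signalfd.
 */
int main(void)
{
	sigset_t mask;
	struct signalfd_siginfo si;
	int fd;

	sigemptyset(&mask);
	sigaddset(&mask, SIGINT);
	sigprocmask(SIG_BLOCK, &mask, NULL);	/* keep SIGINT queued for the fd */

	fd = signalfd(-1, &mask, 0);
	if (fd < 0)
		return 1;

	if (read(fd, &si, sizeof(si)) == sizeof(si))
		printf("got signal %u\n", si.ssi_signo);

	close(fd);
	return 0;
}
```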
-