- 19 May 2011, 4 commits
-
-
Submitted by Shirish Pargaonkar
Define global data structures to store the ids (uids and gids) to which SIDs map. There are two separate trees: one for SID-to-uid mappings and another for SID-to-gid. A new key type, cifs_idmap_key_type, is used. Keys are instantiated and searched under root's credentials by overriding and then restoring the credentials of the caller requesting the key. The id mapping functions are only invoked when the cifs ACL config option is enabled. Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
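A minimal sketch of the credential-override lookup described above, in kernel C. The cifs_idmap_key_type name comes from the commit; the helper name and the root_cred argument are illustrative assumptions, not the exact cifs symbols.

    #include <linux/cred.h>
    #include <linux/key.h>

    extern struct key_type cifs_idmap_key_type;   /* registered by this patch */

    /* Hedged sketch: look up a SID mapping key while temporarily acting
     * as root, then restore the requesting caller's credentials. */
    static struct key *lookup_sid_key(const char *sid_str,
                                      const struct cred *root_cred)
    {
            const struct cred *saved_cred;
            struct key *sidkey;

            saved_cred = override_creds(root_cred);   /* act as root */
            sidkey = request_key(&cifs_idmap_key_type, sid_str, "");
            revert_creds(saved_cred);                 /* back to the caller */

            return sidkey;                            /* ERR_PTR on failure */
    }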
-
Submitted by Shirish Pargaonkar
Remove the config flag CIFS_EXPERIMENTAL. Export operations are now built under a new config flag, CIFS_NFSD_EXPORT. Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Submitted by Steve French
The CIFSSMBNotify worker is unused, pending changes to allow it to be called via inotify, so move it into its own experimental config option so that it does not get built in until the necessary VFS support is in place. It used to be used by dnotify but, according to Jeff, inotify needs minor changes before we can re-enable this. CC: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Submitted by Shirish Pargaonkar
ino is unused in function cifs_root_iget(). Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
- 12 April 2011, 4 commits
-
-
Submitted by Jeff Layton
This is more or less the same patch as before, but with some merge conflicts fixed up. If a process has a dirty page mapped into its page tables, it has the ability to change the page while the client is trying to write the data out to the server. If that happens after the signature has been calculated, the signature will be wrong and the server will likely reset the TCP connection. This patch adds a page_mkwrite handler for CIFS that simply takes the page lock. Because the page lock is held over the life of writepage and writepages, this prevents the page from becoming writeable until the write call has completed. With this, we can also remove the "sign_zero_copy" module option and always inline the pages when writing. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
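A minimal sketch of the handler described above, assuming the vm_operations_struct page_mkwrite signature of that era; the function name is illustrative.

    #include <linux/mm.h>
    #include <linux/pagemap.h>

    /* Hedged sketch: lock_page() blocks while writepage/writepages hold
     * the page lock, so the page cannot be made writeable (and modified)
     * while its contents are being signed and sent. */
    static int cifs_page_mkwrite_sketch(struct vm_area_struct *vma,
                                        struct vm_fault *vmf)
    {
            lock_page(vmf->page);
            return VM_FAULT_LOCKED;   /* we return with the page still locked */
    }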
-
Submitted by Jeff Layton
Commit 522440ed made cifs set backing_dev_info on the mapping attached to new inodes. This change caused a fairly significant read performance regression, as cifs started doing page-sized reads exclusively. Because they are allocated as part of cifs_sb_info by kzalloc, the ra_pages on cifs BDIs get set to 0, which prevents any readahead. This forces the normal read codepaths to use readpage instead of readpages, causing a four-fold increase in the number of read calls with the default rsize. Fix it by setting ra_pages in the BDI to the same value as that in the default_backing_dev_info. Fixes https://bugzilla.kernel.org/show_bug.cgi?id=31662 Cc: stable@kernel.org Reported-and-Tested-by: Till <till2.schaefer@uni-dortmund.de> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
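The fix described above is essentially a one-liner; a hedged sketch, assuming the BDI embedded in cifs_sb_info as it was at the time:

    #include <linux/backing-dev.h>

    /* Hedged sketch: kzalloc left ra_pages at 0 (no readahead), so copy
     * the default readahead window onto the per-superblock BDI. */
    static void cifs_restore_readahead(struct cifs_sb_info *cifs_sb)
    {
            cifs_sb->bdi.ra_pages = default_backing_dev_info.ra_pages;
    }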
-
Submitted by Steve French
We artificially limited the user name to 32 bytes, but modern servers handle larger names. Set the maximum length to a reasonable 256 and make the user name string dynamically allocated rather than a fixed size in the session structure. Also clean up an old checkpatch warning. Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Submitted by Jeff Layton
This flag currently only affects whether we allow "zero-copy" writes with signing enabled. Typically we map pages in the pagecache directly into the write request. If signing is enabled, however, and the contents of the page change after the signature is calculated but before the write is sent, then the signature will be wrong. Servers typically respond to this by closing down the socket. Still, this can provide a performance benefit, so the "Experimental" flag was overloaded to allow it. That's really not a good place for this option, however, since it's not clear what that flag does. Move the flag instead to a new module parameter that better describes its purpose. That's also better since it can be set at module insertion time by configuring modprobe.d. Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
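A hedged sketch of such a module parameter; the name matches the sign_zero_copy option mentioned in the page_mkwrite entry above, but the default and description text are assumptions.

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    /* Hedged sketch: expose the zero-copy-while-signing tradeoff as a
     * module parameter instead of overloading the Experimental flag. */
    static bool sign_zero_copy;   /* default off: copy pages when signing */
    module_param(sign_zero_copy, bool, 0644);
    MODULE_PARM_DESC(sign_zero_copy,
                     "Don't copy pages on signed writes; may cause spurious "
                     "signature errors if pages change mid-write. Default: n");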
-
- 26 January 2011, 1 commit
-
-
Submitted by Pavel Shilovsky
If we don't have an Exclusive oplock, we write the data to the server. Also set the invalidate_mapping flag on the inode if we wrote something to the server. Add cifs_iovec_write to let the client write iovec buffers through CIFSSMBWrite2. Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com> Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
- 21 January 2011, 4 commits
-
-
Submitted by Pavel Shilovsky
Read from the cache if we have at least a Level II oplock; otherwise read from the server. Add cifs_user_readv to let the client read into iovec buffers. Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Submitted by Pavel Shilovsky
Invalidate the inode mapping if we don't have at least a Level II oplock. Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Submitted by Pavel Shilovsky
Invalidate the inode mapping in cifs_strict_fsync if we don't have at least a Level II oplock. Also remove the filemap_write_and_wait call from cifs_fsync because it is already called from vfs_fsync_range. Add file operations structures for strict cache mode. Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Submitted by Steve French
If the server isn't responding to echoes, we don't want to leave tasks hung waiting for it to reply. At that point we'll want to reconnect, so that soft mounts can return an error to userspace quickly. If the client hasn't received a reply after a specified number of echo intervals, assume that the transport is down and attempt to reconnect the socket. The number of echo intervals to wait before attempting to reconnect is tunable via a module parameter. Setting it to 0 means that the client will never attempt to reconnect. The default is 5. Signed-off-by: Jeff Layton <jlayton@redhat.com>
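A hedged sketch of the tunable and the staleness check it drives; the variable names and the exact expiry test are illustrative.

    #include <linux/module.h>
    #include <linux/jiffies.h>

    /* Hedged sketch: echo intervals to wait before assuming the transport
     * is down and reconnecting the socket (0 = never reconnect). */
    static unsigned short echo_retries = 5;
    module_param(echo_retries, ushort, 0644);
    MODULE_PARM_DESC(echo_retries,
                     "Number of echo intervals before reconnecting an "
                     "unresponsive server. Default: 5. 0 = never reconnect");

    /* lstrp: jiffies timestamp of the last response from the server. */
    static bool server_unresponsive(unsigned long lstrp,
                                    unsigned long echo_interval)
    {
            return echo_retries &&
                   time_after(jiffies, lstrp + echo_retries * echo_interval);
    }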
-
- 13 January 2011, 1 commit
-
-
Submitted by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 10 January 2011, 1 commit
-
-
Submitted by Jeff Layton
Reduce false inode collisions by using the CreationTime like an i_generation field. This way, even if the server ends up reusing a uniqueid after a delete/create cycle, we can avoid matching the inode incorrectly. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
- 07 January 2011, 3 commits
-
-
Submitted by Nick Piggin
Signed-off-by: Nick Piggin <npiggin@kernel.dk>
-
Submitted by Nick Piggin
RCU-free the struct inode. This will allow: the subsequent store-free path walking patch (the inode must be consulted for permissions when walking, so an RCU inode reference is a must); sb_inode_list_lock to be moved inside i_lock, because sb list walkers who want to take i_lock no longer need to take sb_inode_list_lock to walk the list in the first place, which will simplify and optimize locking; removal of some nested trylock loops in dcache code; and potential simplification in VM land, since we would not need to take the page lock to follow page->mapping. The downside of this is the performance cost of using RCU. In a simple creat/unlink microbenchmark, performance drops by about 10% due to the inability to reuse cache-hot slab objects. As iterations increase and RCU freeing starts kicking over, this increases to about 20%. In cases where inode lifetimes are longer (i.e. many inodes may be allocated during the average life span of a single inode), a lot of this cache reuse is not applicable, so the regression caused by this patch is smaller. The cache-hot regression could largely be avoided by using SLAB_DESTROY_BY_RCU, but that adds some complexity to list walking and store-free path walking, so I prefer to implement it at a later date if it is shown to be a win in real situations. I haven't found a regression in any non-micro benchmark, so I doubt it will be a problem. Signed-off-by: Nick Piggin <npiggin@kernel.dk>
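For a filesystem like cifs, the change boils down to deferring the final free with call_rcu(); a hedged sketch, with the inode cache and helper names treated as assumptions:

    #include <linux/fs.h>
    #include <linux/slab.h>
    #include <linux/rcupdate.h>

    /* Hedged sketch: free the cifs inode only after an RCU grace period,
     * so store-free path walkers can still safely dereference it. */
    static void cifs_i_callback(struct rcu_head *head)
    {
            struct inode *inode = container_of(head, struct inode, i_rcu);

            kmem_cache_free(cifs_inode_cachep, CIFS_I(inode));
    }

    static void cifs_destroy_inode(struct inode *inode)
    {
            call_rcu(&inode->i_rcu, cifs_i_callback);
    }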
-
Submitted by Pavel Shilovsky
Make the connect logic more IP-protocol independent and move the RFC1001 stuff into a separate function. Also replace the union addr in the TCP_Server_Info structure with sockaddr_storage. Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com> Reviewed-and-Tested-by: Jeff Layton <jlayton@samba.org> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
- 07 December 2010, 1 commit
-
-
Submitted by Jeff Layton
...this string is zeroed out and nothing ever changes it. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
- 03 December 2010, 1 commit
-
-
Submitted by Suresh Jayaraman
Currently, the attribute cache timeout for CIFS is hardcoded to 1 second. This means that the client might have to issue a QPATHINFO/QFILEINFO call every second to verify whether something has changed, which seems too expensive. On the other hand, if the timeout is hardcoded to a higher value, workloads that expect strict cache coherency might see unexpected results. Making the attribute cache timeout tunable allows us to trade off between performance and cache metadata correctness depending on the application/workload needs. Add an 'actimeo' tunable that can be used to tune the attribute cache timeout. The default timeout is set to 1 second. Also, display the actimeo option value in /proc/mounts. It appears to me that 'actimeo' and the proposed (but not yet merged) 'strictcache' option cannot coexist, so care must be taken that we reset the other option if one of them is set. Changes since last post: fix option parsing and handle possible values correctly. Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de> Signed-off-by: Steve French <sfrench@us.ibm.com>
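A hedged sketch of how such a timeout is typically consulted during revalidation; the field names here are assumptions based on the description.

    #include <linux/jiffies.h>

    /* Hedged sketch: cached attributes are stale once actimeo jiffies have
     * passed since the last revalidation stamped in cifs_i->time. */
    static bool cifs_attrs_expired(const struct cifsInodeInfo *cifs_i,
                                   const struct cifs_sb_info *cifs_sb)
    {
            return time_after_eq(jiffies, cifs_i->time + cifs_sb->actimeo);
    }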
-
- 30 November 2010, 1 commit
-
-
Submitted by Suresh Jayaraman
Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
- 06 November 2010, 1 commit
-
-
Submitted by Pavel Shilovsky
All the callers already have a pointer to struct cifsInodeInfo. Use it. Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de> Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
- 03 November 2010, 1 commit
-
-
Submitted by Jeff Layton
Radix trees are ideal when you want to track a bunch of pointers and can't embed a tracking structure within the target of those pointers. The tradeoff is an increase in memory, particularly if the tree is sparse. In CIFS, we use the tlink_tree to track tcon_link structs. A tcon_link can never be in more than one tlink_tree, so there's no impediment to using an rb_tree here instead of a radix tree. Convert the new multiuser mount code to use an rb_tree instead; this should reduce the memory required to manage the tlink_tree. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
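A hedged sketch of the standard rb_tree insertion such a conversion relies on, keyed here by the owning uid; the struct layout and field names are illustrative.

    #include <linux/rbtree.h>
    #include <linux/types.h>

    /* Hedged sketch: a tcon_link carries its own rb_node instead of
     * occupying a radix tree slot. */
    struct tcon_link_sketch {
            struct rb_node tl_rbnode;
            uid_t          tl_uid;
    };

    static void tlink_rb_insert(struct rb_root *root,
                                struct tcon_link_sketch *new_tlink)
    {
            struct rb_node **new = &root->rb_node, *parent = NULL;

            while (*new) {
                    struct tcon_link_sketch *tlink =
                            rb_entry(*new, struct tcon_link_sketch, tl_rbnode);

                    parent = *new;
                    if (new_tlink->tl_uid < tlink->tl_uid)
                            new = &((*new)->rb_left);
                    else
                            new = &((*new)->rb_right);
            }
            rb_link_node(&new_tlink->tl_rbnode, parent, new);
            rb_insert_color(&new_tlink->tl_rbnode, root);
    }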
-
- 31 October 2010, 2 commits
-
-
Submitted by Christoph Hellwig
The caller allocated it, so the caller should free it. The only issue so far is that we could change the flp pointer even on an error return if the fl_change callback failed. But we can simply move the flp assignment after the fl_change invocation, as the callers don't care about the flp return value if the setlease call failed. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by J. Bruce Fields
We modified setlease to require the caller to allocate the new lease in the case of creating a new lease, but forgot to fix up the filesystem methods. Cc: Steven Whitehouse <swhiteho@redhat.com> Cc: Steve French <sfrench@samba.org> Cc: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 29 October 2010, 1 commit
-
-
Submitted by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 25 October 2010, 1 commit
-
-
Submitted by Jeff Layton
write_behind_rc is redundant and just adds complexity to the code. What we really want to do instead is to use mapping_set_error to record the error on the mapping when we find a writeback error and can't report it to userspace yet. For cifs_flush and cifs_fsync, we shouldn't reset the flags, since errors returned there do get reported to userspace. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de> Reviewed-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
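A hedged sketch of the mapping_set_error() pattern the patch moves to; the helper and call site are illustrative.

    #include <linux/pagemap.h>

    /* Hedged sketch: instead of stashing a write_behind_rc, record the
     * writeback error on the mapping (AS_EIO/AS_ENOSPC) so a later
     * fsync or close reports it to userspace. */
    static void cifs_note_writeback_error(struct address_space *mapping, int rc)
    {
            if (rc < 0)
                    mapping_set_error(mapping, rc);
    }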
-
- 21 October 2010, 1 commit
-
-
Submitted by Suresh Jayaraman
cifs_tcp_ses_lock is a rwlock that protects the cifs_tcp_ses_list, server->smb_ses_list and ses->tcon_list. It also protects a few ref counters in server, ses and tcon. In most cases the critical section doesn't seem to be large, and in the few cases where it is slightly larger, there seems to be no real benefit from concurrent access. I briefly considered an RCU mechanism, but it appears to me that there is no real need. Replace it with a spinlock and get rid of the last rwlock in the cifs code. Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
- 18 October 2010, 1 commit
-
-
Submitted by Jeff Layton
Convert this lock to a regular spinlock. A rwlock_t offers little value here: it's more expensive than a regular spinlock unless you have a fairly large section of code that runs under the read lock and can benefit from the concurrency. Additionally, we need to ensure that the refcounting for files isn't racy, and to do that we need to lock for write in the areas that can increment it. That means the areas that can actually use a read_lock are very few and relatively infrequently used. While we're at it, change the name to something easier to type, and fix a bug in find_writable_file: cifsFileInfo_put can sleep and shouldn't be called while holding the lock. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
- 13 October 2010, 1 commit
-
-
Submitted by Jeff Layton
Filesystems aren't really supposed to do anything with a vfsmount. It's considered a layering violation since vfsmounts are entirely managed at the VFS layer. CIFS currently keeps an active reference to a vfsmount in order to prevent the superblock vanishing before an oplock break has completed. What we really want to do instead is to keep sb->s_active high until the oplock break has completed. This patch borrows the scheme that NFS uses for handling sillyrenames. An atomic_t is added to the cifs_sb_info. When it transitions from 0 to 1, an extra reference to the superblock is taken (by bumping the s_active value). When it transitions from 1 to 0, that reference is dropped and the superblock teardown may proceed if there are no more references to it. Also, the vfsmount pointer is removed from cifsFileInfo and from cifs_new_fileinfo, and some bogus forward declarations are removed from cifsfs.h. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de> Acked-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
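A hedged sketch of the 0-to-1/1-to-0 transition scheme described above, closely mirroring the NFS sillyrename approach; the counter name is an assumption.

    #include <linux/fs.h>
    #include <linux/atomic.h>

    /* Hedged sketch: pin the superblock while an oplock break is in flight
     * by taking an extra s_active reference on the 0 -> 1 transition and
     * dropping it on 1 -> 0. */
    static void cifs_sb_active(struct super_block *sb)
    {
            struct cifs_sb_info *cifs_sb = CIFS_SB(sb);

            if (atomic_inc_return(&cifs_sb->active) == 1)
                    atomic_inc(&sb->s_active);
    }

    static void cifs_sb_deactive(struct super_block *sb)
    {
            struct cifs_sb_info *cifs_sb = CIFS_SB(sb);

            if (atomic_dec_and_test(&cifs_sb->active))
                    deactivate_super(sb);
    }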
-
- 09 October 2010, 1 commit
-
-
Submitted by Jeff Layton
Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
- 07 October 2010, 2 commits
-
-
Submitted by Jeff Layton
...based on the CIFS_MOUNT_MULTIUSER flag. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Submitted by Jeff Layton
cifsFileInfo needs a pointer to a tcon, but it doesn't currently hold a reference to it. Change it to keep a pointer to a tcon_link instead and hold a reference to that. This will keep the tcon from being freed until the file is closed. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
- 05 October 2010, 3 commits
-
-
Submitted by Arnd Bergmann
This prepares the removal of the big kernel lock from the file locking code. We still use the BKL as long as fs/lockd uses it and ceph might sleep, but we can flip the definition to a private spinlock as soon as that's done. All users outside of fs/lockd get converted to use lock_flocks() instead of lock_kernel() where appropriate. Based on an earlier patch to use a spinlock from Matthew Wilcox, who has attempted this a few times before; the earliest patch, from over 10 years ago, turned it into a semaphore, which ended up being slower than the BKL and was subsequently reverted. Someone should do some serious performance testing when this becomes a spinlock, since this has caused problems before. Using a spinlock should be at least as good as the BKL in theory, but who knows... Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Matthew Wilcox <willy@linux.intel.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Trond Myklebust <trond.myklebust@fys.uio.no> Cc: "J. Bruce Fields" <bfields@fieldses.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Miklos Szeredi <mszeredi@suse.cz> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: John Kacur <jkacur@redhat.com> Cc: Sage Weil <sage@newdream.net> Cc: linux-kernel@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org
-
Submitted by Jan Blunck
The BKL is only used in put_super and fill_super, both of which are protected by the superblock's s_umount rw_semaphore. Therefore it is safe to remove the BKL entirely. Signed-off-by: Jan Blunck <jblunck@infradead.org> Cc: Steve French <smfrench@gmail.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Submitted by Jan Blunck
This patch is a preparation necessary to remove the BKL from do_new_mount(). It explicitly adds calls to lock_kernel()/unlock_kernel() around get_sb/fill_super operations for filesystems that still use the BKL. I've read through all the code formerly covered by the BKL inside do_kern_mount() and have satisfied myself that it doesn't need the BKL any more. do_kern_mount() is already called without the BKL when mounting the rootfs and in nfsctl. do_kern_mount() calls vfs_kern_mount(), which is called from various places without the BKL: simple_pin_fs(), nfs_do_clone_mount() through nfs_follow_mountpoint(), and afs_mntpt_do_automount() through afs_mntpt_follow_link(). The latter two functions are actually the filesystems' follow_link inode operation. vfs_kern_mount() calls the specified get_sb function and lets the filesystem do its job by calling the given fill_super function. Therefore I think it is safe to push the BKL down from the VFS to the low-level filesystems' get_sb/fill_super operations. [arnd: do not add the BKL to those file systems that already don't use it elsewhere] Signed-off-by: Jan Blunck <jblunck@infradead.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Cc: Matthew Wilcox <matthew@wil.cx> Cc: Christoph Hellwig <hch@infradead.org>
-
- 30 September 2010, 4 commits
-
-
Submitted by Jeff Layton
At mount time, we'll always need to create a tcon that will serve as a template for others that are associated with the mount. This tcon is known as the "master" tcon. In some cases we'll need to use that tcon regardless of who's accessing the mount. Add an accessor function for the master tcon and go ahead and switch the appropriate places to use it. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Submitted by Jeff Layton
When we convert cifs to do multiple sessions per mount, we'll need more than one tcon per superblock. At that point "cifs_sb->tcon" will make no sense. Add a new accessor function that gets a tcon given a cifs_sb. For now it just returns cifs_sb->tcon; later it will do more. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
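A hedged sketch of the interim accessor the commit describes; the function name is illustrative and at this stage it is just a wrapper around the direct dereference.

    /* Hedged sketch: callers go through an accessor so the backing storage
     * can change once multiple tcons per superblock are supported. */
    static inline struct cifsTconInfo *
    cifs_sb_tcon(struct cifs_sb_info *cifs_sb)
    {
            return cifs_sb->tcon;
    }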
-
Submitted by Jeff Layton
...where it's available and appropriate. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
-
Submitted by Steve French
If registering the fs cache failed, we weren't cleaning up proc. Acked-by: Jeff Layton <jlayton@redhat.com> CC: Suresh Jayaraman <sjayaraman@suse.de> Signed-off-by: Steve French <sfrench@us.ibm.com>
-