- 16 Nov 2013, 1 commit

Committed by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

- 04 Nov 2013, 3 commits

Committed by Steven Whitehouse

By using the generic list_lru code, we can now separate the per-sb quota list locking from the LRU locking. The LRU lock is made the innermost lock. As a result of this new lock order, we may occasionally see items on the per-sb quota list which are "dead", so the two places where we traverse that list are updated to take account of that. As a result of this patch, the gfs2 quota shrinker is now NUMA-zone aware, and we are also laying the foundations for further improvements in due course.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Signed-off-by: Abhijith Das <adas@redhat.com>
Tested-by: Abhijith Das <adas@redhat.com>
Cc: Dave Chinner <dchinner@redhat.com>

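As a rough sketch of the list_lru pattern this patch adopts (the LRU name and the qd_lru list field below follow the 3.12-era gfs2 code, but treat them as illustrative):

    /* Hedged sketch: parking quota data items on a generic list_lru.
     * list_lru_add()/list_lru_del() take the internal LRU lock
     * themselves, so the per-sb quota list lock stays outside it. */
    #include <linux/list_lru.h>

    static struct list_lru gfs2_qd_lru;     /* NUMA-node aware LRU */

    /* Last reference dropped: park the item on the LRU. */
    static void qd_park(struct gfs2_quota_data *qd)
    {
            list_lru_add(&gfs2_qd_lru, &qd->qd_lru);
    }

    /* Reusing a cached item: pull it back off the LRU. */
    static void qd_unpark(struct gfs2_quota_data *qd)
    {
            list_lru_del(&gfs2_qd_lru, &qd->qd_lru);
    }
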
Committed by Steven Whitehouse

This is a straightforward rename, in preparation for introducing the generic list_lru infrastructure in the following patch.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Signed-off-by: Abhijith Das <adas@redhat.com>
Tested-by: Abhijith Das <adas@redhat.com>

Committed by Steven Whitehouse

This patch adds reflink support to the quota data cache. It looks a bit strange because we still don't have a sensible split between the lookup by id and the lru list. That is coming in later patches though. The intent here is just to swap the current ref count for reflinks in all cases, with as little other change as possible.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Signed-off-by: Abhijith Das <adas@redhat.com>
Tested-by: Abhijith Das <adas@redhat.com>

- 25 Oct 2013, 1 commit

Committed by Al Viro

duplicated to hell and back...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

- 15 Oct 2013, 1 commit

Committed by Steven Whitehouse

Currently glocks have an atomic reference count and also a spinlock which covers various internal fields, such as the state. The intent of this patch is to replace the spinlock and the atomic reference count with a lockref structure. This contains a spinlock which we can continue to use as before, and a reference counter which is used in conjunction with the spinlock to replace the previous atomic counter.

As a result of this there are some new rules for reference counting on glocks. We need to distinguish between reference count changes under gl_spin (which are now just increment or decrement of the new counter, provided the count cannot hit zero) and those which are outside of gl_spin, but which now take gl_spin internally.

The conversion is relatively straightforward. There is probably some further clean-up which can be done, but the priority at this stage is to make the change in as simple a manner as possible.

A consequence of this change is that the reference count is being decoupled from the lru list processing. This should allow future adoption of the lru_list code with glocks in due course.

The reason for using the "dead" state and not just relying on 0 being the "invalid state" is so that in due course 0 ref counts can be allowable. The intent is to eventually be able to remove the ref count changes which are currently hidden away in state_change().

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

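For orientation, a minimal sketch of the two reference-counting cases described above, assuming the glock embeds a struct lockref named gl_lockref (the <linux/lockref.h> API is real; the helper names are illustrative):

    /* Case 1: caller already holds gl_lockref.lock (the old gl_spin).
     * A bare increment is fine as long as the count cannot hit zero. */
    static void glock_hold_locked(struct gfs2_glock *gl)
    {
            gl->gl_lockref.count++;
    }

    /* Case 2: caller is outside the lock.  lockref_get() takes the
     * spinlock internally (or uses a lockless cmpxchg when it can). */
    static void glock_hold(struct gfs2_glock *gl)
    {
            lockref_get(&gl->gl_lockref);
    }
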
- 04 Oct 2013, 4 commits

Committed by Steven Whitehouse

Now that gfs2_quota_sync can potentially be called from multiple threads, we should protect this bit of code, and the sync generation number in particular, in order to ensure that there are no races when syncing quotas.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Abhijith Das <adas@redhat.com>

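A hedged sketch of the kind of serialisation this implies; the mutex and field names here are illustrative stand-ins, not necessarily what the patch uses:

    /* Serialise concurrent gfs2_quota_sync() callers so the bump of
     * the sync generation and the scan that consumes it cannot race. */
    mutex_lock(&sdp->sd_quota_sync_mutex);
    sdp->sd_quota_sync_gen++;
    /* ... walk and sync dirty quota entries against the new generation ... */
    mutex_unlock(&sdp->sd_quota_sync_mutex);
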
Committed by Steven Whitehouse

The function qd_trylock was not a trylock despite its name, and can be inlined into gfs2_quota_unlock in order to make the code a bit clearer. There should be no functional change as a result of this patch.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Abhijith Das <adas@redhat.com>

Committed by Steven Whitehouse

This merges the common code from qd_fish and qd_trylock into a single function and calls it from both those places. There should be no functional change, bar the removal of a test of the MS_READONLY flag which would never be reachable.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Abhijith Das <adas@redhat.com>

Committed by Steven Whitehouse

There is no need for a parameter which relates to the internals of quota to be exposed to users. The only possible use would be to turn it up so large that the memory allocation fails. So let's remove it and set it to a sensible value which ensures that we don't ask for multi-page allocations. Currently the size of struct gfs2_holder means that the calculated value is identical to the previous default value, so there should be no functional change.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Abhijith Das <adas@redhat.com>

- 02 Oct 2013, 3 commits

Committed by Steven Whitehouse

This function is only called twice, and both callers are quota related, so let's move it into quota.c and make it static.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

Committed by Steven Whitehouse

When setting the starting point for block allocation, there were calls to both gfs2_rbm_to_block() and gfs2_rbm_from_block() in the common case of there being an active reservation. The gfs2_rbm_from_block() function can be quite slow, and since the two conversions were effectively a no-op, it makes sense to avoid them entirely in this case. There is no functional change here, but the code should be a bit more efficient after this patch.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

Committed by Steven Whitehouse

This patch adds a structure to contain allocation parameters, with the intention of future expansion of this structure. The idea is that we should be able to add more information about the allocation in the future, in order to allow the allocator to do a better job of placing the requests on-disk. There is no functional difference from applying this patch.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

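A minimal sketch of what such a container looks like; the two fields are a plausible starting layout, and anything further is exactly the kind of future expansion the patch has in mind (field names are illustrative):

    /* Illustrative allocation-parameters container: call sites pass
     * one of these instead of a growing list of arguments, so new
     * placement hints can be added without touching every caller. */
    struct gfs2_alloc_parms {
            u64 target;     /* number of blocks requested */
            u32 aflags;     /* allocation flags */
    };
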
- 27 Sep 2013, 1 commit

Committed by Steven Whitehouse

The reservation for an inode should be cleared when it is truncated, so that we can start again at a different offset for future allocations. We could try to do better than that, by resetting the search based on where the truncation started from, but this is only a first step.

In addition, there are three callers of gfs2_rs_delete() but only one of those should really be testing the value of i_writecount. While we get away with that in the other cases currently, I think it would be better if we made that test specific to the one case which requires it.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

- 23 Sep 2013, 1 commit

Committed by Miklos Szeredi

We need to dput() the result of d_splice_alias(), unless it is passed to finish_no_open().

Edited by Steven Whitehouse in order to make it apply to the current GFS2 git tree, and taking account of a prerequisite patch which hasn't been applied.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: stable@vger.kernel.org

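A hedged sketch of the rule, as it applies in an atomic_open-style lookup path (control flow simplified; the `file` test is illustrative):

    struct dentry *d = d_splice_alias(inode, dentry);

    if (IS_ERR(d))
            return PTR_ERR(d);
    if (file)
            /* finish_no_open() takes over the reference to d */
            return finish_no_open(file, d);
    /* every other path must drop it; dput(NULL) is a safe no-op */
    dput(d);
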
- 18 Sep 2013, 2 commits

Committed by Bob Peterson

Since the previous patch eliminated bi in favor of bii, this follow-on patch needed to be adjusted accordingly. Here is the revised version.

This patch adds a new function, gfs2_rbm_incr, which increments an rbm structure. This is more efficient than calling gfs2_rbm_to_block, incrementing, then calling gfs2_rbm_from_block.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

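A sketch of what such an increment looks like, assuming an rbm that carries a bitmap index (bii) and an offset within that bitmap, plus a per-bitmap block count (bi_blocks, introduced in another patch in this series); the details are illustrative:

    /* Advance the rbm by one block without the round trip through a
     * disk address.  Returns true when the rgrp has been exhausted. */
    static bool rbm_incr(struct gfs2_rbm *rbm)
    {
            if (rbm->offset + 1 < rbm_bi(rbm)->bi_blocks) {
                    rbm->offset++;                  /* same bitmap */
                    return false;
            }
            if (rbm->bii == rbm->rgd->rd_length - 1)
                    return true;                    /* last bitmap */
            rbm->offset = 0;                        /* next bitmap */
            rbm->bii++;
            return false;
    }
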
Committed by Bob Peterson

This is a respin of the original patch. As Steve pointed out, the introduction of the field bii makes it easy to eliminate bi itself. This revised patch does just that, replacing bi with bii.

This patch adds a new field to the rbm structure, called bii, which is an index into the array of bitmaps for an rgrp. This replaces *bi, which was a pointer to the bitmap. This is being done for further optimizations.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

- 17 Sep 2013, 5 commits

Committed by Bob Peterson

When we used try locks for rgrps on block allocations, it was important to clear the flags field so that we used a blocking hold on the glock. Now that we're not doing try locks, clearing flags is unnecessary, and a waste of time. In fact, it's probably doing the wrong thing, because it clears the GL_SKIP bit that was set for lvb tracking purposes.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

Committed by Bob Peterson

This patch introduces a new field in the bitmap structure called bi_blocks. Its purpose is to save us from constantly multiplying bi_len by the constant GFS2_NBBY. It also paves the way for more optimization in a future patch.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

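The cached product is simply the bitmap's byte length times GFS2_NBBY, the number of blocks represented per byte of bitmap (four, at two bits per block); a sketch of the one-time setup:

    /* Computed once when the rgrp bitmaps are initialised, instead of
     * on every bitmap lookup: blocks covered = bytes * blocks-per-byte. */
    bi->bi_blocks = bi->bi_len * GFS2_NBBY;
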
Committed by Bob Peterson

The function gfs2_rbm_from_block starts by checking whether the block falls within the first bitmap. It does so by checking if the rbm's offset is less than (rbm->bi->bi_start + rbm->bi->bi_len) * GFS2_NBBY. However, the first bitmap will always have bi_start == 0, so this is an unnecessary calculation in a function that gets called billions of times. This patch removes the reference to bi_start.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

Committed by Miklos Szeredi

d_splice_alias() can't return an error unless it was given an IS_ERR(inode), which isn't the case here. So clean up the unnecessary error handling in gfs2_create_inode(). This paves the way for real fixes (hence the stable Cc).

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: stable@vger.kernel.org

Committed by Miklos Szeredi

In gfs2_create_inode(), set FILE_CREATED in *opened.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

- 13 Sep 2013, 1 commit

Committed by Kirill A. Shutemov

truncate_pagecache() doesn't care about the old size since commit cedabed4 ("vfs: Fix vmtruncate() regression"). Let's drop it.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 11 Sep 2013, 2 commits

Committed by Dave Chinner

Convert the filesystem shrinkers to use the new API, and standardise some of the behaviours of the shrinkers at the same time. For example, nr_to_scan means the number of objects to scan, not the number of objects to free.

I refactored the CIFS idmap shrinker a little - it really needs to be broken up into a shrinker per tree and keep an item count with the tree root so that we don't need to walk the tree every time the shrinker needs to count the number of objects in the tree (i.e. all the time under memory pressure).

[glommer@openvz.org: fixes for ext4, ubifs, nfs, cifs and glock. Fixes are needed mainly due to new code merged in the tree]
[assorted fixes folded in]

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Glauber Costa <glommer@openvz.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Acked-by: Jan Kara <jack@suse.cz>
Acked-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Carlos Maiolino <cmaiolino@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Rientjes <rientjes@google.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: J. Bruce Fields <bfields@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Kent Overstreet <koverstreet@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

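For reference, a sketch of the split count/scan interface the text describes, against the 3.12-era API (the two helper functions are hypothetical stand-ins):

    static unsigned long demo_count_objects(struct shrinker *shrink,
                                            struct shrink_control *sc)
    {
            /* cheap count only; no freeing happens here */
            return count_cached_objects();          /* hypothetical */
    }

    static unsigned long demo_scan_objects(struct shrinker *shrink,
                                           struct shrink_control *sc)
    {
            if (!(sc->gfp_mask & __GFP_FS))
                    return SHRINK_STOP;     /* can't recurse into the fs */
            /* scan up to sc->nr_to_scan objects, return number freed */
            return free_cached_objects(sc->nr_to_scan);  /* hypothetical */
    }

    static struct shrinker demo_shrinker = {
            .count_objects  = demo_count_objects,
            .scan_objects   = demo_scan_objects,
            .seeks          = DEFAULT_SEEKS,
    };
    /* registered once at init with register_shrinker(&demo_shrinker) */
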
Committed by Glauber Costa

The sysctl knob sysctl_vfs_cache_pressure is used to determine which percentage of the shrinkable objects in our cache we should actively try to shrink. It works great in situations in which we have many objects (at least more than 100), because the approximation errors will be negligible. But if this is not the case, especially when total_objects < 100, we may end up concluding that we have no objects at all (total / 100 = 0, if total < 100). This is certainly not the biggest killer in the world, but may matter in very low kernel memory situations.

Signed-off-by: Glauber Costa <glommer@openvz.org>
Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Carlos Maiolino <cmaiolino@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Rientjes <rientjes@google.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: J. Bruce Fields <bfields@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Kent Overstreet <koverstreet@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

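The rounding issue is plain integer arithmetic; a worked example (the remedy is to multiply before dividing, e.g. with the kernel's mult_frac() helper):

    /* total_objects = 90, sysctl_vfs_cache_pressure = 100 */
    old_way = (total_objects / 100) * sysctl_vfs_cache_pressure;
                                            /* 0 * 100 = 0: "no objects" */
    new_way = mult_frac(total_objects, sysctl_vfs_cache_pressure, 100);
                                            /* 9000 / 100 = 90 */
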
- 06 Sep 2013, 1 commit

Committed by Miklos Szeredi

Do have_submounts(), shrink_dcache_parent() and d_drop() atomically. check_submounts_and_drop() can deal with negative dentries and non-directories as well. Non-directories can also be mounted on, and just like directories we don't want these to disappear with invalidation.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

- 05 Sep 2013, 2 commits

Committed by Benjamin Marzinski

GFS2 was only setting I_DIRTY_DATASYNC on files that it wrote to when it actually increased the file size. If gfs2_fsync was called without I_DIRTY_DATASYNC set, it didn't flush the incore data to the log before returning, so any metadata or journaled data changes were not getting fsynced. This meant that writes to the middle of files were not always getting fsynced properly.

This patch makes gfs2 set I_DIRTY_DATASYNC whenever metadata has been updated during a write. It also makes gfs2_fsync flush the incore log if I_DIRTY_PAGES is set and the file is using data journalling. This will make sure that all incore logged data gets written to disk before returning from an fsync.

Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

Committed by Bob Peterson

This patch checks for the first mounter being a spectator. If so, it makes sure all the journals are clean. If there's a dirty journal, the mount fails.

Testing results:

    # insmod gfs2.ko
    # mount -tgfs2 -o spectator /dev/sasdrives/scratch /mnt/gfs2
    mount: permission denied
    # dmesg | tail -2
    [ 3390.655996] GFS2: fsid=MUSKETEER:home: Now mounting FS...
    [ 3390.841336] GFS2: fsid=MUSKETEER:home.s: jid=0: Journal is dirty, so the first mounter must not be a spectator.
    # mount -tgfs2 /dev/sasdrives/scratch /mnt/gfs2
    # umount /mnt/gfs2
    # mount -tgfs2 -o spectator /dev/sasdrives/scratch /mnt/gfs2
    # ls /mnt/gfs2 | wc -l
    352
    # umount /mnt/gfs2

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

- 04 Sep 2013, 1 commit

Committed by Bob Peterson

Function test_and_clear_bit implies a memory barrier, so subsequent memory barriers are unnecessary.

Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

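In other words, a pattern like the following carries a redundant barrier (sketch; the flag name is hypothetical):

    if (test_and_clear_bit(MY_FLAG, &flags)) {
            smp_mb();       /* redundant: test_and_clear_bit() already
                             * implies a full memory barrier when it
                             * returns the old bit value */
            /* ... do the work guarded by the flag ... */
    }
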
- 28 Aug 2013, 1 commit

Committed by Steven Whitehouse

The writepages function was recently merged between writeback and ordered mode. This completes the change by doing the same with writepage. The remaining differences in writepage were left over from some earlier time and were not actually doing anything useful.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

- 20 Aug 2013, 3 commits

Committed by Joe Perches

Don't emit OOM warnings when k.alloc calls fail when there is a v.alloc immediately afterwards. Converted a kmalloc/vmalloc with memset to kzalloc/vzalloc.

Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>

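A sketch of the resulting pattern (the GFP flags are illustrative): the kmalloc side is allowed to fail quietly because the vmalloc side immediately covers it:

    /* __GFP_NOWARN suppresses the OOM warning from the kzalloc
     * attempt; vzalloc() is the fallback for large allocations. */
    void *buf = kzalloc(size, GFP_NOFS | __GFP_NOWARN);
    if (!buf)
            buf = vzalloc(size);
    if (!buf)
            return -ENOMEM;
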
Committed by Steven Whitehouse

We need to check the glock ref counter in a race-free way in order to ensure that the gfs2_glock_hold() call will succeed. The easiest way to do that is to simply take the reference count early in the common code of examine_bucket, skipping any glocks with a zero ref count. That means that the examiner functions all need to put their reference on the glock once they've performed their function.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Reported-by: David Teigland <teigland@redhat.com>
Tested-by: David Teigland <teigland@redhat.com>

Committed by Steven Whitehouse

Since gfs2_sync_meta() is only called from a single file, let's move it to lops.c where it is used, and mark it static. At the same time, we can clean up the meta_io.h header too.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

- 19 Aug 2013, 5 commits

Committed by Steven Whitehouse

Since the introduction of atomic_open, gfs2_getxattr can be called with the glock already held, so we need to allow for this.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Reported-by: David Teigland <teigland@redhat.com>
Tested-by: David Teigland <teigland@redhat.com>

Committed by Dan Carpenter

alloc_workqueue() returns NULL on error; it doesn't return an ERR_PTR.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

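So the error check must test for NULL, not IS_ERR() (sketch; the workqueue name and flags are illustrative):

    wq = alloc_workqueue("gfs2_control", WQ_UNBOUND | WQ_FREEZABLE, 0);
    if (!wq)                        /* right: NULL on failure */
            return -ENOMEM;
    /* wrong: IS_ERR(wq) is never true here, so an error-pointer
     * check would silently let a NULL workqueue through */
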
Committed by Benjamin Marzinski

When run during fsync, a gfs2_log_flush could happen between the time when gfs2_ail_flush checked the number of blocks to revoke and when it actually started the transaction to do those revokes. This occasionally caused it to need more revokes than it had reserved, causing gfs2 to crash.

Instead of just reserving enough revokes to handle the blocks that currently need them, this patch makes gfs2_ail_flush reserve the maximum number of revokes it can, without increasing the total number of reserved log blocks. This patch also passes the number of reserved revokes to __gfs2_ail_flush() so that it doesn't go over its limit and cause a crash like we're seeing. Non-fsync calls to __gfs2_ail_flush will still cause a BUG() if necessary revokes are skipped.

Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

Committed by Tejun Heo

dbf2576e ("workqueue: make all workqueues non-reentrant") made WQ_NON_REENTRANT a no-op and the flag is going away. Remove its usages.

This patch doesn't introduce any behavior changes.

Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: cluster-devel@redhat.com

Committed by Steven Whitehouse

PTR_RET should be PTR_ERR.

Reported-by: Sachin Kamat <sachin.kamat@linaro.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

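For reference, the distinction (sketch): PTR_ERR() extracts the error code from an error pointer, while PTR_RET() (later renamed PTR_ERR_OR_ZERO()) returns 0 when the pointer is valid. In a branch that only runs when IS_ERR() is already known to be true, the error itself is wanted:

    if (IS_ERR(p))
            return PTR_ERR(p);      /* not PTR_RET(p): we already know
                                     * p is an error pointer here */
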
- 22 Jul 2013, 1 commit

Committed by Steven Whitehouse

PTR_RET should be PTR_ERR.

Reported-by: Sachin Kamat <sachin.kamat@linaro.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

- 29 Jun 2013, 1 commit

Committed by Jeff Layton

Having a global lock that protects all of this code is a clear scalability problem. Instead of doing that, move most of the code to be protected by the i_lock instead. The exceptions are the global lists that the ->fl_link sits on, and the ->fl_block list. ->fl_link is what connects these structures to the global lists, so we must ensure that we hold those locks when iterating over or updating these lists.

Furthermore, sound deadlock detection requires that we hold the blocked_list state steady while checking for loops. We also must ensure that the search and update to the list are atomic. For the checking and insertion side of the blocked_list, push the acquisition of the global lock into __posix_lock_file and ensure that checking and update of the blocked_list is done without dropping the lock in between. On the removal side, when waking up blocked lock waiters, take the global lock before walking the blocked list and dequeue the waiters from the global list prior to removal from the fl_block list.

With this, deadlock detection should be race-free while we minimize excessive file_lock_lock thrashing. Finally, in order to avoid a lock inversion problem when handling /proc/locks output, we must ensure that manipulations of the fl_block list are also protected by the file_lock_lock.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>