- 03 June 2015, 1 commit
-
-
Committed by Abhi Das
For smaller block sizes (512B, 1K, 2K), some quotas straddle block boundaries such that the usage value is in one block and the rest of the quota is in the previous block. In such cases, the value does not get updated correctly. This patch fixes that by addressing the boundary conditions correctly. This patch also adds an (s64) cast that was missing in a call to gfs2_quota_change() in inode.c. Signed-off-by: Abhi Das <adas@redhat.com> Signed-off-by: Bob Peterson <rpeterso@redhat.com>
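A minimal illustration of the boundary handling being described; the helper, its arguments, and the layout are hypothetical, not the actual gfs2 quota code:

```c
#include <linux/string.h>
#include <linux/types.h>

/* Hypothetical helper: write an 8-byte usage value at byte offset 'loc'
 * of the quota file, where blk0/blk1 are the in-memory buffers for the
 * block containing 'loc' and the block after it, and 'bsize' is the
 * filesystem block size (512B, 1K, 2K, ...). */
static void write_usage(char *blk0, char *blk1, unsigned int bsize,
                        unsigned int loc, u64 value)
{
        unsigned int off   = loc % bsize;   /* offset within the first block */
        unsigned int first = bsize - off;   /* bytes that fit in that block  */

        if (first >= sizeof(value)) {
                /* Common case: the value lies entirely within one block. */
                memcpy(blk0 + off, &value, sizeof(value));
        } else {
                /* Straddling case: tail of one block, head of the next. */
                memcpy(blk0 + off, &value, first);
                memcpy(blk1, (char *)&value + first, sizeof(value) - first);
        }
}
```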
-
- 06 May 2015, 1 commit
-
-
Committed by Benjamin Marzinski
gfs2 now uses the rename2 directory iop, and supports the RENAME_EXCHANGE flag (as well as RENAME_NOREPLACE, which the vfs takes care of). Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com> Signed-off-by: Bob Peterson <rpeterso@redhat.com>
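As a rough sketch of how a ->rename2 inode operation of that era dispatches on these flags; the example_do_rename()/example_do_exchange() helpers are hypothetical placeholders rather than real gfs2 functions:

```c
#include <linux/fs.h>

/* Hypothetical helpers standing in for the real rename/exchange paths. */
static int example_do_rename(struct inode *odir, struct dentry *odentry,
                             struct inode *ndir, struct dentry *ndentry);
static int example_do_exchange(struct inode *odir, struct dentry *odentry,
                               struct inode *ndir, struct dentry *ndentry);

static int example_rename2(struct inode *odir, struct dentry *odentry,
                           struct inode *ndir, struct dentry *ndentry,
                           unsigned int flags)
{
        /* Reject any flag we do not understand. */
        if (flags & ~(RENAME_NOREPLACE | RENAME_EXCHANGE))
                return -EINVAL;

        /* RENAME_NOREPLACE is enforced by the VFS before ->rename2 is
         * called, so only RENAME_EXCHANGE needs handling here. */
        if (flags & RENAME_EXCHANGE)
                return example_do_exchange(odir, odentry, ndir, ndentry);

        return example_do_rename(odir, odentry, ndir, ndentry);
}
```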
-
- 02 May 2015, 1 commit
-
-
Committed by Antonio Ospite
Follow the same style used for the other functions in the same file. Signed-off-by: Antonio Ospite <ao2@ao2.it> Signed-off-by: Bob Peterson <rpeterso@redhat.com>
-
- 19 March 2015, 1 commit
-
-
Committed by Abhi Das
Use struct gfs2_alloc_parms as an argument to gfs2_quota_check() and gfs2_quota_lock_check() to check for quota violations while accounting for the new blocks requested by the current operation in ap->target. Previously, the number of new blocks requested during an operation was not accounted for during quota_check and would allow these operations to exceed quota. This was not very apparent since most operations allocated only 1 block at a time and quotas would get violated in the next operation, i.e. quota excess would only be by 1 block or so. With fallocate (where we allocate a bunch of blocks at once) the quota excess is non-trivial and is addressed by this patch. Signed-off-by: Abhi Das <adas@redhat.com> Signed-off-by: Bob Peterson <rpeterso@redhat.com> Acked-by: Steven Whitehouse <swhiteho@redhat.com>
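A hedged sketch of the calling pattern this describes; the updated gfs2_quota_lock_check() signature and the surrounding wrapper are assumptions for illustration, not quotes from the patch:

```c
#include <linux/fs.h>
/* gfs2-private declarations (gfs2_inode, gfs2_alloc_parms, quota/rgrp
 * functions) are assumed to be in scope via the gfs2 headers. */

/* Hypothetical wrapper showing the intended calling pattern: ap.target
 * carries the blocks this operation is about to allocate, so the quota
 * check can account for them up front. */
static int reserve_with_quota_check(struct gfs2_inode *ip, u64 blocks_wanted)
{
        struct gfs2_alloc_parms ap = { .target = blocks_wanted, .aflags = 0 };
        int error;

        error = gfs2_quota_lock_check(ip, &ap);  /* assumed updated signature */
        if (error)
                return error;                    /* would exceed quota */

        error = gfs2_inplace_reserve(ip, &ap);
        if (error)
                gfs2_quota_unlock(ip);
        return error;
}
```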
-
- 27 January 2015, 1 commit
-
-
Committed by Bob Peterson
This patch just removes a goto that did nothing. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
- 20 November 2014, 6 commits
-
-
Committed by Al Viro
In ->atomic_open(inode, dentry, file, opened), calling finish_no_open(file, NULL) is equivalent to "dget(dentry); return finish_no_open(file, dentry);", so there is no need to open-code that. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
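A before/after sketch of the simplification in a generic ->atomic_open() implementation, using the atomic_open prototype of that era (illustrative, not the gfs2 diff itself):

```c
#include <linux/dcache.h>
#include <linux/fs.h>

static int example_atomic_open(struct inode *dir, struct dentry *dentry,
                               struct file *file, unsigned open_flag,
                               umode_t mode, int *opened)
{
        /* ... the lookup/create attempt decides not to open here ... */

        /* Before: open-coded reference handling.
         *   dget(dentry);
         *   return finish_no_open(file, dentry);
         */

        /* After: passing NULL makes the VFS fall back to the dentry it
         * gave us and take its own reference, which is equivalent. */
        return finish_no_open(file, NULL);
}
```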
-
Committed by Al Viro
dentry is always hashed and negative; inode is non-error, non-NULL and non-directory. Under those conditions d_splice_alias() is equivalent to "d_instantiate(dentry, inode) and return NULL", which simplifies the downstream code and is consistent with the "have to create a new object" case. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
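A sketch of the equivalence being relied on, in a generic helper (illustrative code, not the gfs2 patch):

```c
#include <linux/dcache.h>
#include <linux/fs.h>

/* Precondition: dentry is hashed and negative; inode is non-NULL, not an
 * ERR_PTR, and not a directory. */
static struct dentry *attach_inode(struct dentry *dentry, struct inode *inode)
{
        /* Before: d_splice_alias() can only return NULL here, so ...
         *   return d_splice_alias(inode, dentry);
         */

        /* ... it is equivalent to instantiating in place and returning
         * NULL, matching the "have to create a new object" path. */
        d_instantiate(dentry, inode);
        return NULL;
}
```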
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
Committed by Al Viro
In ->atomic_open(inode, dentry, file, opened), calling finish_no_open(file, NULL) is equivalent to "dget(dentry); return finish_no_open(file, dentry);", so there is no need to open-code that. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
dentry is always hashed and negative; inode is non-error, non-NULL and non-directory. Under those conditions d_splice_alias() is equivalent to "d_instantiate(dentry, inode) and return NULL", which simplifies the downstream code and is consistent with the "have to create a new object" case. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 17 November 2014, 1 commit
-
-
Committed by Benjamin Marzinski
The current gfs2 freezing code is considerably more complicated than it should be because it doesn't use the vfs freezing code on any node except the one that begins the freeze. This is because it needs to acquire a cluster glock before calling the vfs code to prevent a deadlock, and without the new freeze_super and thaw_super hooks, that was impossible. To deal with the issue, gfs2 had to do some hacky locking tricks to make sure that a frozen node couldn't be holding a lock it needed in order to do the unfreeze ioctl. This patch makes use of the new hooks to simplify the gfs2 locking code. Now, all the nodes in the cluster freeze and thaw in exactly the same way. Every node in the cluster caches the freeze glock in the shared state. The new freeze_super hook allows the freezing node to grab this freeze glock in the exclusive state without first calling the vfs freeze_super function. All the nodes in the cluster see this lock change, and call the vfs freeze_super function. The vfs locking code guarantees that the nodes can't get stuck holding the glocks necessary to unfreeze the system. To unfreeze, the freezing node uses the new thaw_super hook to drop the freeze glock. Again, all the nodes notice this, reacquire the glock in shared mode and call the vfs thaw_super function. Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
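A rough sketch of how a filesystem wires up these super_operations hooks; take_cluster_lock_ex()/drop_cluster_lock() are hypothetical stand-ins for the gfs2 freeze-glock handling, and the real code coordinates the per-node VFS freeze via the lock-change notification rather than this simplified direct call:

```c
#include <linux/fs.h>

/* Hypothetical stand-ins for acquiring/releasing the cluster freeze lock. */
static int take_cluster_lock_ex(struct super_block *sb);
static void drop_cluster_lock(struct super_block *sb);

static int example_freeze_super(struct super_block *sb)
{
        int error;

        /* Grab the cluster-wide freeze lock exclusively first; every other
         * node sees the state change and runs its own VFS freeze path. */
        error = take_cluster_lock_ex(sb);
        if (error)
                return error;

        return freeze_super(sb);        /* generic VFS freeze on this node */
}

static int example_thaw_super(struct super_block *sb)
{
        /* Dropping the lock is what tells the other nodes to thaw. */
        drop_cluster_lock(sb);
        return thaw_super(sb);          /* generic VFS thaw on this node */
}

static const struct super_operations example_super_ops = {
        .freeze_super   = example_freeze_super,
        .thaw_super     = example_thaw_super,
        /* ... other operations ... */
};
```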
-
- 04 November 2014, 1 commit
-
-
Committed by Fabian Frederick
No need to store the gfs2_dir_check result and test it before returning. Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
- 09 October 2014, 1 commit
-
-
Committed by Al Viro
A hashed dentry can be passed to ->atomic_open() only if a) it has just passed revalidation and b) it's negative. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 01 October 2014, 1 commit
-
-
Committed by Bob Peterson
This patch fixes a regression in the patch "GFS2: Remember directory insert point", commit 2b47dad8. The problem had to do with the rename function: the function found space for the new dirent and remembered that location. But then the old dirent was removed, which often moved the eligible location for the renamed dirent. Putting the new dirent at the saved location caused file system corruption. This patch adds a new "save_loc" variable to struct gfs2_diradd. If 1, the dirent location is saved. If 0, the dirent location is not saved and the buffer_head is released as per previous behavior. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
- 19 September 2014, 1 commit
-
-
Committed by Abhi Das
This patch checks if i_goal is either zero or doesn't exist within any rgrp (i.e. gfs2_blk2rgrpd() returns NULL). If so, it assigns the ip->i_no_addr block as the i_goal. There are two scenarios where a bad i_goal can result in a -EBADSLT error. 1. Attempting to allocate to an existing inode: control reaches gfs2_inplace_reserve() and ip->i_goal is bad. We need to fix i_goal here. 2. A new inode is created in a directory whose i_goal is hosed: in this case, the parent dir's i_goal is copied onto the new inode. Since the new inode is not yet created, the ip->i_no_addr field is invalid and so the fix in gfs2_inplace_reserve() as per 1) won't work in this scenario. We need to catch and fix it sooner, in the parent dir itself (gfs2_create_inode()), before it is copied to the new inode. Signed-off-by: Abhi Das <adas@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
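Roughly, the check described amounts to the following; the gfs2_blk2rgrpd() argument list and the GFS2_SB() accessor are assumed from their usual shape rather than quoted from the patch:

```c
/* gfs2-private declarations (gfs2_inode, gfs2_sbd, GFS2_SB,
 * gfs2_blk2rgrpd) assumed in scope. */

/* Sketch of the i_goal repair: if the goal block is zero or falls
 * outside every resource group, fall back to the inode's own address. */
static void fixup_alloc_goal(struct gfs2_inode *ip)
{
        struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);   /* assumed accessor */

        if (!ip->i_goal || gfs2_blk2rgrpd(sdp, ip->i_goal, 1) == NULL)
                ip->i_goal = ip->i_no_addr;
}
```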
-
- 13 September 2014, 1 commit
-
-
Committed by Al Viro
Callers of d_splice_alias(dentry, inode) don't need iput(), neither on success nor on failure. Either the reference to inode is stored in a previously negative dentry, or it's dropped. In either case the inode reference the caller used to hold is consumed. __gfs2_lookup() does iput() in the case when d_splice_alias() has failed, giving a double iput() if we ever hit that. And gfs2_create_inode() ends up not only with a double iput(), but with the link count dropped to zero, on an inode it has just found in the directory. Cc: stable@vger.kernel.org # v3.14+ Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
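A sketch of the calling convention being enforced, in a generic lookup-style path (illustrative, not the gfs2 fix itself):

```c
#include <linux/dcache.h>
#include <linux/err.h>
#include <linux/fs.h>

static struct dentry *example_lookup_finish(struct dentry *dentry,
                                            struct inode *inode)
{
        if (IS_ERR(inode))
                return ERR_CAST(inode);

        /* d_splice_alias() consumes the inode reference in every case:
         * either it ends up held by the (previously negative) dentry, or
         * it is dropped internally. Do NOT iput(inode) afterwards, even
         * if an error dentry comes back. */
        return d_splice_alias(inode, dentry);
}
```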
-
- 11 September 2014, 1 commit
-
-
Committed by Benjamin Coddington
Fix a regression introduced by commit 6d4ade98, "GFS2: Add atomic_open support", where an early return misses d_splice_alias(), which had been adding the negative dentry. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
- 14 May 2014, 1 commit
-
-
Committed by Benjamin Marzinski
GFS2 has a transaction glock, which must be grabbed for every transaction, whose purpose is to deal with freezing the filesystem. Aside from this involving a large amount of locking, it is very easy to make the current fsfreeze code hang on unfreezing. This patch rewrites how gfs2 handles freezing the filesystem. The transaction glock is removed. In its place is a freeze glock, which is cached (but not held) in a shared state by every node in the cluster when the filesystem is mounted. This lock only needs to be grabbed on freezing, and for actions which need to be safe from freezing, like recovery. When a node wants to freeze the filesystem, it grabs this glock exclusively. When the freeze glock state changes on the nodes (either from shared to unlocked, or shared to exclusive), the filesystem does a special log flush. gfs2_log_flush() does all the work of flushing out and shutting down the incore log, and then it tries to grab the freeze glock in a shared state again. Since the filesystem is stuck in gfs2_log_flush, no new transaction can start, and nothing can be written to disk. Unfreezing the filesystem simply involves dropping the freeze glock, allowing gfs2_log_flush() to grab and then release the shared lock, so it is cached for next time. However, in order for the unfreeze ioctl to occur, gfs2 needs to get a shared lock on the filesystem root directory inode to check permissions. If that glock has already been grabbed exclusively, fsfreeze will be unable to get the shared lock and unfreeze the filesystem. In order to allow the unfreeze, this patch makes gfs2 grab a shared lock on the filesystem root directory during the freeze, and hold it until it unfreezes the filesystem. The functions which need to grab a shared lock in order to allow the unfreeze ioctl to be issued now use the lock grabbed by the freeze code instead. The freeze and unfreeze code take care to make sure that this shared lock will not be dropped while another process is using it. Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
- 31 March 2014, 1 commit
-
-
Committed by Abhi Das
When gfs2_create_inode() fails due to a quota violation, the VFS inode is not completely uninitialized. This can cause a list corruption error. This patch correctly uninitializes the VFS inode when a quota violation occurs in the gfs2_create_inode codepath. Resolves: rhbz#1059808 Signed-off-by: Abhi Das <adas@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
- 19 March 2014, 1 commit
-
-
Committed by Bob Peterson
This patch eliminates function gfs2_security_init in favor of just calling security_inode_init_security directly. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
- 12 March 2014, 1 commit
-
-
Committed by Abhi Das
gfs2_lookupi() can return NULL if the path to the root is broken by another rename/rmdir. In this case gfs2_ok_to_move() must check for this NULL pointer and return an error. Resolves: rhbz#1060246 Signed-off-by: Abhi Das <adas@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
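A minimal sketch of the kind of check this implies during the ancestor walk; the -ENOENT choice, the helper shape, and the use of gfs2_qdotdot are assumptions for illustration:

```c
#include <linux/err.h>
#include <linux/fs.h>
/* gfs2-private declarations (gfs2_lookupi, gfs2_qdotdot) assumed in scope. */

/* Sketch: look up ".." of 'dir' and treat a NULL result (path to root
 * broken by a concurrent rename/rmdir) as an error rather than a valid
 * parent. The -ENOENT value is an assumption, not quoted from the patch. */
static struct inode *lookup_parent_checked(struct inode *dir, int *error)
{
        struct inode *parent = gfs2_lookupi(dir, &gfs2_qdotdot, 1);

        if (!parent) {
                *error = -ENOENT;
                return NULL;
        }
        if (IS_ERR(parent)) {
                *error = PTR_ERR(parent);
                return NULL;
        }
        *error = 0;
        return parent;
}
```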
-
- 04 February 2014, 1 commit
-
-
Committed by Steven Whitehouse
This is another step towards improving the allocation of xattr blocks at inode allocation time. Here we take advantage of Christoph's recent work on ACLs to allocate a block for the xattrs early if we know that we will be adding ACLs to the inode later on. The advantage of that is that it is much more likely that we'll get a contiguous run of two blocks where the first is the inode and the second is the xattr block. We still have to fall back to the original system in case we don't get the requested two contiguous blocks, or in case the ACLs are too large to fit into the block. Future patches will move more of the ACL setting code further up the gfs2_inode_create() function. Also, I'd like to be able to do the same thing with the xattrs from LSMs in due course, too. That way we should be able to slowly reduce the number of independent transactions, at least in the most common cases. Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
- 26 January 2014, 1 commit
-
-
Committed by Christoph Hellwig
This contains some major refactoring for the create path so that inodes are created with the right mode to start with, instead of fixing it up later. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 18 January 2014, 1 commit
-
-
Committed by J. Bruce Fields
Commit 0d0d1107 asserts that "d_splice_alias() can't return error unless it was given an IS_ERR(inode)". That was true of the implementation of d_splice_alias at the time, but it is really a shortcoming of d_splice_alias itself: at a minimum it should be able to return -ELOOP in the case where inserting the given dentry would cause a directory loop. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
- 16 January 2014, 1 commit
-
-
Committed by Steven Whitehouse
Al Viro has tactfully pointed out that we are using the incorrect error code in some cases. This patch fixes that, and also removes the (unused) return value for glock dumping. > * gfs2_iget() - ENOBUFS instead of ENOMEM. ENOBUFS is "No buffer space available (POSIX.1 (XSI STREAMS option))" and since we don't support STREAMS it's probably fair game, but... what the hell? Signed-off-by: Steven Whitehouse <swhiteho@redhat.com> Cc: Al Viro <viro@ZenIV.linux.org.uk>
-
- 07 January 2014, 1 commit
-
-
Committed by Bob Peterson
This patch calls get_write_access in function gfs2_setattr_chown, which merely increases inode->i_writecount for the duration of the function. That will ensure that any file closes won't delete the inode's multi-block reservation while the function is running. It also ensures that a multi-block reservation exists when needed for quota change operations during the chown. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
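A sketch of the pattern described, using the generic VFS helpers; the surrounding function body is illustrative:

```c
#include <linux/fs.h>

/* Sketch: pin inode->i_writecount around an operation so that a racing
 * final close cannot tear down per-inode allocation state meanwhile. */
static int chown_like_operation(struct inode *inode)
{
        int error;

        error = get_write_access(inode);   /* bumps i_writecount, or -ETXTBSY */
        if (error)
                return error;

        /* ... do the quota/ownership work that relies on the
         *     multi-block reservation staying alive ... */

        put_write_access(inode);           /* drop i_writecount again */
        return 0;
}
```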
-
- 06 January 2014, 3 commits
-
-
Committed by Steven Whitehouse
When we look to see if there is enough space to add a dir entry without allocation, we have until now been repeating the same search later when we do the actual insertion. This patch caches the details of the location in the gfs2_diradd structure, so that we do not have to repeat the search. This will provide a performance improvement which grows with the size of the directory. Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
Committed by Steven Whitehouse
There are three cases where we need to calculate the number of blocks to reserve in a transaction involving linking an inode into a directory. The one in rename is a bit more complicated, but the basis of it is the same as for link and create. So it makes sense to move this calculation into a single function rather than repeating it three times. Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
Committed by Steven Whitehouse
The intent is that this structure will hold the information required when adding entries to a directory (linking). To start with, it will contain only the number of blocks which are required to link the new entry into the directory. The current calculation returns either 0 or the maximum number of blocks that can ever be requested by such a transaction. The intent is that in a later patch we can update the dir code to calculate this value more accurately. In addition, further patches will add more fields to the new structure to increase its utility. This patch also fixes a bug where the link used during inode creation was requesting too many blocks in some cases. This is harmless unless the fs is close to being full. Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
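Conceptually, the structure introduced here starts small and is then extended by the later patches that appear earlier in this log (the directory insert point cache and the save_loc flag); the field names below are a hedged reconstruction, not a verbatim copy of the header:

```c
#include <linux/buffer_head.h>
/* struct gfs2_dirent is gfs2-private (on-disk directory entry). */

/* Sketch of struct gfs2_diradd as described in this series. */
struct gfs2_diradd {
        int nr_blocks;            /* blocks needed to link the new entry   */
        /* Added by the later "Remember directory insert point" work:      */
        struct gfs2_dirent *dent; /* cached location of the free dirent    */
        struct buffer_head *bh;   /* buffer holding that location          */
        int save_loc;             /* 1 = keep the cached location,
                                     0 = release bh as before              */
};
```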
-
- 22 November 2013, 1 commit
-
-
Committed by Steven Whitehouse
In the case that atomic_open calls finish_no_open() with the dentry that was supplied to gfs2_atomic_open(), an extra reference count is required. This patch fixes that issue, preventing a bug trap from triggering at umount time. Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
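The fix amounts to the pattern sketched below inside an ->atomic_open() implementation; note that one of the Al Viro cleanups near the top of this log later replaces the open-coded form with finish_no_open(file, NULL):

```c
#include <linux/dcache.h>
#include <linux/fs.h>

/* Tail of an ->atomic_open() implementation that falls back to the
 * ordinary lookup path. 'd' is the result of d_splice_alias() (may be
 * NULL); 'dentry' is the dentry the VFS passed in. */
static int no_open_tail(struct file *file, struct dentry *dentry,
                        struct dentry *d)
{
        if (d == NULL) {
                /* Returning the caller-supplied dentry: the VFS consumes a
                 * reference on whatever finish_no_open() stores, so take
                 * an extra one here. */
                dget(dentry);
                return finish_no_open(file, dentry);
        }
        /* d already carries the reference from d_splice_alias(). */
        return finish_no_open(file, d);
}
```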
-
- 25 October 2013, 1 commit
-
-
Committed by Al Viro
duplicated to hell and back... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 02 October 2013, 1 commit
-
-
Committed by Steven Whitehouse
This patch adds a structure to contain allocation parameters, with the intention of future expansion of this structure. The idea is that we should be able to add more information about the allocation in the future in order to allow the allocator to do a better job of placing the requests on-disk. There is no functional difference from applying this patch. Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
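The structure starts out as little more than a wrapper around the requested block count; the fields shown are a hedged reconstruction of its initial shape (the quota-check commit near the top of this log is one later consumer of ap->target):

```c
#include <linux/types.h>

/* Sketch of the allocation-parameters structure as introduced here. */
struct gfs2_alloc_parms {
        u64 target;   /* number of blocks the caller wants to allocate */
        u32 aflags;   /* allocation flags, room for future hints       */
};
```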
-
- 27 September 2013, 1 commit
-
-
Committed by Steven Whitehouse
The reservation for an inode should be cleared when it is truncated, so that we can start again at a different offset for future allocations. We could try to do better than that, by resetting the search based on where the truncation started from, but this is only a first step. In addition, there are three callers of gfs2_rs_delete() but only one of those should really be testing the value of i_writecount. While we get away with that in the other cases currently, I think it would be better if we made that test specific to the one case which requires it. Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
-
- 23 September 2013, 1 commit
-
-
Committed by Miklos Szeredi
We need to dput() the result of d_splice_alias(), unless it is passed to finish_no_open(). Edited by Steven Whitehouse in order to make it apply to the current GFS2 git tree, and taking account of a prerequisite patch which hasn't been applied. Signed-off-by: Miklos Szeredi <mszeredi@suse.cz> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com> Cc: stable@vger.kernel.org
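In sketch form, the reference-counting rule stated here looks like this (illustrative code, assuming the atomic_open context of the surrounding commits):

```c
#include <linux/dcache.h>
#include <linux/err.h>
#include <linux/fs.h>

/* Sketch: whoever ends up owning the dentry reference returned by
 * d_splice_alias() must release it exactly once. */
static int splice_and_maybe_open(struct file *file, struct dentry *dentry,
                                 struct inode *inode, bool no_open)
{
        struct dentry *d = d_splice_alias(inode, dentry);

        if (IS_ERR(d))
                return PTR_ERR(d);

        if (no_open)
                return finish_no_open(file, d); /* VFS releases the reference */

        /* ... open via the original dentry instead ... */
        dput(d);                                /* we must drop it ourselves */
        return 0;
}
```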
-
- 17 September 2013, 2 commits
-
-
Committed by Miklos Szeredi
d_splice_alias() can't return an error unless it was given an IS_ERR(inode), which isn't the case here. So clean up the unnecessary error handling in gfs2_create_inode(). This paves the way for real fixes (hence the stable Cc). Signed-off-by: Miklos Szeredi <mszeredi@suse.cz> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com> Cc: stable@vger.kernel.org
-
Committed by Miklos Szeredi
In gfs2_create_inode() set FILE_CREATED in *opened. Signed-off-by: Miklos Szeredi <mszeredi@suse.cz> Cc: Steven Whitehouse <swhiteho@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
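For context, this is the era's mechanism for reporting creation back to the VFS from an ->atomic_open()/create path (sketch only; later kernels use FMODE_CREATED on the struct file instead):

```c
#include <linux/fs.h>

/* Sketch: report back to the VFS that the inode was freshly created.
 * 'opened' is the int pointer that ->atomic_open() received in this era. */
static void mark_created(int *opened, bool newly_created)
{
        if (newly_created)
                *opened |= FILE_CREATED;
}
```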
-
- 19 August 2013, 2 commits
-
-
Committed by Steven Whitehouse
Since the introduction of atomic_open, gfs2_getxattr can be called with the glock already held, so we need to allow for this. Signed-off-by: Steven Whitehouse <swhiteho@redhat.com> Reported-by: David Teigland <teigland@redhat.com> Tested-by: David Teigland <teigland@redhat.com>
-
Committed by Steven Whitehouse
PTR_RET should be PTR_ERR. Reported-by: Sachin Kamat <sachin.kamat@linaro.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
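For reference, a minimal illustration of the two helpers (not the gfs2 code): PTR_ERR() extracts the errno encoded in an ERR_PTR value, while PTR_RET() (since renamed PTR_ERR_OR_ZERO()) additionally maps a valid pointer to 0 and is meant for call sites that discard the pointer entirely, so PTR_ERR() is the clearer spelling where an error is being propagated:

```c
#include <linux/err.h>

/* 'p' stands for any pointer-or-error return value. */
static int extract_error(void *p)
{
        if (IS_ERR(p))
                return PTR_ERR(p);   /* propagate the encoded errno */
        return 0;                    /* what PTR_RET(p) would return here */
}
```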
-
- 22 July 2013, 1 commit
-
-
Committed by Steven Whitehouse
PTR_RET should be PTR_ERR. Reported-by: Sachin Kamat <sachin.kamat@linaro.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Steven Whitehouse <swhiteho@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-