- 01 July 2013, 11 commits
-
-
Submitted by Lukas Czerner
Currently, if we pass a range into ext4_zero_partial_blocks() which covers an entire block, we would attempt to zero it even though we should only zero the unaligned part of the block. Fix this by checking whether the range covers the whole block and skipping the zeroing if so. Signed-off-by: Lukas Czerner <lczerner@redhat.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
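A minimal sketch of the check described above, assuming hypothetical helper and variable names rather than the exact upstream diff: a range that both starts and ends on a block boundary has no unaligned part, so there is nothing to zero.

    /* Sketch: zero only the unaligned head/tail of [offset, offset + length). */
    static int zero_partial_blocks_sketch(struct inode *inode,
                                          loff_t offset, loff_t length)
    {
            unsigned int blocksize = 1 << inode->i_blkbits;
            loff_t start = offset;
            loff_t end = offset + length;

            /* Whole-block range: nothing unaligned to zero, skip it. */
            if (((start | end) & (blocksize - 1)) == 0)
                    return 0;

            /* ... zero the partial block(s) at the head and/or tail ... */
            return 0;
    }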
-
Submitted by Theodore Ts'o
The function ext4_write_inline_data_end() can return an error. So we need to assign it to a signed integer variable to check for an error return (since copied is an unsigned int). Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Cc: Zheng Liu <wenqing.lz@taobao.com> Cc: stable@vger.kernel.org
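A hedged illustration of the pattern being fixed (the call's argument list here is written from memory and may differ): the return value has to land in a signed variable before it is compared against zero, otherwise a negative error code is silently lost.

    /* Sketch, not the exact upstream hunk. */
    int ret;

    ret = ext4_write_inline_data_end(inode, pos, len, copied, page);
    if (ret < 0)            /* would always be false if ret were unsigned */
            goto errout;
    copied = ret;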
-
Submitted by Jon Ernst
Comparing an unsigned variable with < 0 always returns false. err = 0 is duplicated and unnecessary. [ tytso: Also cleaned up error handling in ext4_block_zero_page_range() ] Signed-off-by: "Jon Ernst" <jonernst07@gmx.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
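The bug class can be shown in a few lines; the function and variable names below are illustrative only, not the actual ext4 ones.

    #include <linux/errno.h>

    static int unsigned_error_demo(void)
    {
            unsigned int err;

            err = -EIO;     /* the error code wraps to a huge positive value */
            if (err < 0)    /* always false, so the error is silently lost   */
                    return -1;

            /* Fix: use a plain (signed) int for values that may carry
             * negative error codes. */
            return 0;
    }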
-
Submitted by Al Viro
In both ext3 and ext4, htree_dirblock_to_tree() is just filling the in-core rbtree for use by call_filldir(). All updates of ->f_pos are done by the latter; bumping it here (on error) is obviously wrong, since we might very well have it nowhere near the block we'd found an error in. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Cc: stable@vger.kernel.org
-
Submitted by Theodore Ts'o
Some of the functions which modify the jbd2 superblock were not updating the checksum before calling jbd2_write_superblock(). Move the call to jbd2_superblock_csum_set() into jbd2_write_superblock(), so that the checksum is calculated consistently. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Cc: Darrick J. Wong <darrick.wong@oracle.com> Cc: stable@vger.kernel.org
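A hedged sketch of the consolidation: compute the checksum in one place, inside jbd2_write_superblock(), rather than relying on each caller to remember it. Everything except the two function names quoted in the commit text is an assumed shape, not the real diff.

    static void write_superblock_sketch(journal_t *journal)
    {
            journal_superblock_t *sb = journal->j_superblock;

            /* Recompute the checksum here so every path that writes the
             * superblock gets a consistent value. */
            jbd2_superblock_csum_set(journal, sb);

            /* ... submit the buffer_head holding the superblock ... */
    }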
-
Submitted by Ashish Sangwan
No need to pass a file pointer when we can directly pass an inode pointer. Signed-off-by: Ashish Sangwan <a.sangwan@samsung.com> Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Boxi Liu
The ext4 inline_data feature uses the xattr space in the inode to store inline data. When we account for the inline data as an xattr, we add the pad, but in get_max_inline_xattr_value_size() we count the free space without the pad. This causes some contents to be moved out to a block even though they could be stored in the inode. Signed-off-by: liulei <lewis.liulei@huawei.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Reviewed-by: Tao Ma <boyu.mt@taobao.com>
-
Submitted by Joe Perches
Reduce the object size by ~10%, which could be useful for embedded systems. Add #ifdef CONFIG_PRINTK / #else / #endif blocks to hold formats and arguments, passing " " to functions when !CONFIG_PRINTK and still verifying format and arguments with no_printk.
$ size fs/ext4/built-in.o*
   text    data     bss     dec     hex filename
 239375     610     888  240873   3ace9 fs/ext4/built-in.o.new
 264167     738     888  265793   40e41 fs/ext4/built-in.o.old
$ grep -E "CONFIG_EXT4|CONFIG_PRINTK" .config
# CONFIG_PRINTK is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_USE_FOR_EXT23=y
CONFIG_EXT4_FS_POSIX_ACL=y
# CONFIG_EXT4_FS_SECURITY is not set
# CONFIG_EXT4_DEBUG is not set
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
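The kept-but-silent format checking that no_printk() provides looks roughly like this; the macro name is illustrative, not necessarily the one used in fs/ext4.

    #ifdef CONFIG_PRINTK
    #define ext4_dbg(fmt, ...) \
            printk(KERN_DEBUG fmt, ##__VA_ARGS__)
    #else
    /* Format and arguments are still type-checked, but no code or format
     * strings end up in the object file. */
    #define ext4_dbg(fmt, ...) \
            no_printk(fmt, ##__VA_ARGS__)
    #endif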
-
Submitted by Zheng Liu
Currently we maintain a properly ordered LRU list in ext4 to reclaim entries from the extent status tree when we are under heavy memory pressure. To keep this ordering, a spin lock is used to protect the list, but that lock burns a lot of CPU time. The following steps trigger it:
% cd /dev/shm
% dd if=/dev/zero of=ext4-img bs=1M count=2k
% mkfs.ext4 ext4-img
% mount -t ext4 -o loop ext4-img /mnt
% cd /mnt
% for ((i=0;i<160;i++)); do truncate -s 64g $i; done
% for ((i=0;i<160;i++)); do cp $i /dev/null &; done
% perf record -a -g
% perf report
This commit tries to fix this problem. A new member called i_touch_when is added to ext4_inode_info to record the last access time of an inode, so we no longer need to keep the LRU list properly ordered at all times, which avoids burning CPU time. When we try to reclaim entries from the extent status tree, we use list_sort() to get a properly ordered list and then traverse it to discard entries. In ext4_sb_info, s_es_last_sorted records the last time this list was sorted. While traversing the list, we skip inodes that are newer than this time and move them to the tail of the LRU list. When the head of the list is newer than s_es_last_sorted, we sort the LRU list again. We break out of the loop if s_extent_cache_cnt == 0, because that means all extents in the extent status tree have been reclaimed. This commit also changes the prototype of ext4_es_{un}register_shrinker() to save a local variable in these functions. Reported-by: Dave Hansen <dave.hansen@intel.com> Signed-off-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
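A hedged sketch of the lazy-sorting idea: record a last-touch timestamp per inode and let the shrinker sort the LRU with list_sort() only when needed. The field and list names follow the commit description but are written from memory.

    #include <linux/list_sort.h>

    /* Order the LRU by last-touch time, oldest first (assumed field names). */
    static int es_lru_cmp(void *priv, struct list_head *a, struct list_head *b)
    {
            struct ext4_inode_info *ia = list_entry(a, struct ext4_inode_info,
                                                    i_es_lru);
            struct ext4_inode_info *ib = list_entry(b, struct ext4_inode_info,
                                                    i_es_lru);

            if (ia->i_touch_when == ib->i_touch_when)
                    return 0;
            return ia->i_touch_when < ib->i_touch_when ? -1 : 1;
    }

    /* In the shrinker, only when the list head is newer than
     * s_es_last_sorted:
     *         list_sort(NULL, &sbi->s_es_lru, es_lru_cmp);
     *         sbi->s_es_last_sorted = jiffies;
     */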
-
Submitted by Alexey Khoroshilov
If the memory allocation in ext4_mb_new_group_pa() fails, it returns an error code, ext4_mb_new_preallocation() propagates it, but ext4_mb_new_blocks() ignores it. An observed result was:
- the allocation failure means ext4_mb_new_group_pa() does not update ext4_allocation_context;
- ext4_mb_new_blocks() sets ext4_allocation_request->len (ar->len = ac->ac_b_ex.fe_len;) to the number of blocks preallocated (512) instead of the number of blocks requested (1);
- that activates the update cycle in ext4_splice_branch():
      for (i = 1; i < blks; i++)  <-- blks is 512 instead of 1 here
              *(where->p + i) = cpu_to_le32(current_block++);
- it iterates 511 times and corrupts a chunk of memory including the inode structure;
- a page fault happens at EXT4_SB(inode->i_sb) in ext4_mark_inode_dirty();
- the system hangs with a 'scheduling while atomic' BUG.
The patch implements a check for the ext4_mb_new_preallocation() error code and handles its failure as if ext4_mb_regular_allocator() had failed. Found by the Linux File System Verification project (linuxtesting.org). [ Patch restructured by tytso to make the flow of control easier to follow. ] Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
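The shape of the fix, as a hedged sketch with the surrounding control flow of ext4_mb_new_blocks() abbreviated: the return value of ext4_mb_new_preallocation() is propagated instead of being discarded.

    /* Sketch: treat a preallocation failure like a failure of
     * ext4_mb_regular_allocator(). */
    *errp = ext4_mb_new_preallocation(ac);
    if (*errp) {
            ext4_discard_allocated_blocks(ac);
            goto errout;
    }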
-
Submitted by Maarten ter Huurne
Subtracting the number of the first data block places the superblock backups one block too early, corrupting the file system. When the block size is larger than 1K, the first data block is 0, so the subtraction has no effect and no corruption occurs. Signed-off-by: Maarten ter Huurne <maarten@treewalker.org> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu> Reviewed-by: Jan Kara <jack@suse.cz> CC: stable@vger.kernel.org
-
- 17 June 2013, 1 commit
-
-
Submitted by Jon Ernst
This patch removes several unused variables. Signed-off-by: Jon Ernst <jonernst07@gmx.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
- 13 June 2013, 10 commits
-
-
Submitted by Jie Liu
Return the FIEMAP_EXTENT_UNKNOWN flag in addition to FIEMAP_EXTENT_DELALLOC, because the data location of a delayed allocation extent is unknown. Signed-off-by: Jie Liu <jeff.liu@oracle.com>
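In FIEMAP terms the change amounts to reporting both flags for a delalloc extent. A hedged fragment (the local variables are placeholders):

    #include <linux/fiemap.h>

    /* A delayed-allocation extent has no on-disk location yet, so report
     * it as both delalloc and unknown. */
    if (is_delalloc_extent)
            flags |= FIEMAP_EXTENT_DELALLOC | FIEMAP_EXTENT_UNKNOWN;

    error = fiemap_fill_next_extent(fieinfo, logical, physical, length, flags);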
-
Submitted by Paul Gortmaker
Commit b6e96d00 ("jbd2: use module parameters instead of debugfs for jbd_debug") removed any need for a dependency on DEBUG_FS. It also moved the /sys variables out from underneath the typical debugfs mount point. Delete the dependency and update the /sys path to where the debug settings currently live. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Paul Gortmaker
Since jbd_debug() is implemented with two separate printk() calls, it can lead to corrupted and misleading debug output like the following (see the lines marked with "*"):
[ 290.339362] (fs/jbd2/journal.c, 203): kjournald2: kjournald2 wakes
[ 290.339365] (fs/jbd2/journal.c, 155): kjournald2: commit_sequence=42103, commit_request=42104
[ 290.339369] (fs/jbd2/journal.c, 158): kjournald2: OK, requests differ
[* 290.339376] (fs/jbd2/journal.c, 648): jbd2_log_wait_commit:
[* 290.339379] (fs/jbd2/commit.c, 370): jbd2_journal_commit_transaction: JBD2: want 42104, j_commit_sequence=42103
[* 290.339382] JBD2: starting commit of transaction 42104
[ 290.339410] (fs/jbd2/revoke.c, 566): jbd2_journal_write_revoke_records: Wrote 0 revoke records
[ 290.376555] (fs/jbd2/commit.c, 1088): jbd2_journal_commit_transaction: JBD2: commit 42104 complete, head 42079
i.e. the debug output from log_wait_commit and journal_commit_transaction has become interleaved. The output should have been:
(fs/jbd2/journal.c, 648): jbd2_log_wait_commit: JBD2: want 42104, j_commit_sequence=42103
(fs/jbd2/commit.c, 370): jbd2_journal_commit_transaction: JBD2: starting commit of transaction 42104
It is expected that this is not easy to replicate; I was only able to cause it on preempt-rt kernels, and even then only under heavy I/O load. Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com> Suggested-by: "Theodore Ts'o" <tytso@mit.edu> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
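One common way to avoid the interleaving is to emit the whole line with a single printk() call, folding the file/line/function prefix into the same format string. A hedged sketch of such a jbd_debug() definition, not necessarily the exact macro that was merged:

    #define jbd_debug(n, fmt, a...)                                        \
            do {                                                           \
                    if ((n) <= jbd2_journal_enable_debug)                  \
                            printk(KERN_DEBUG "(%s, %d): %s: " fmt,        \
                                   __FILE__, __LINE__, __func__, ##a);     \
            } while (0)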
-
Submitted by Paul Gortmaker
The bit_spinlock functions are only used for the jbd_lock_bh_state functions (and friends) in jbd_common.h and are not directly used by the content of either jbd.h or jbd2.h. The jbd_common file is new as of commit 44606672 ("jdb/jbd2: factor out common functions from the jbd[2] header files"), but common (and isolated) headers were not considered for factoring at that time. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Paul Gortmaker
Currently we see this output:
$ git grep phase fs/jbd2
fs/jbd2/commit.c: jbd_debug(3, "JBD2: commit phase 1\n");
fs/jbd2/commit.c: jbd_debug(3, "JBD2: commit phase 2\n");
fs/jbd2/commit.c: jbd_debug(3, "JBD2: commit phase 2\n");
fs/jbd2/commit.c: jbd_debug(3, "JBD2: commit phase 3\n");
fs/jbd2/commit.c: jbd_debug(3, "JBD2: commit phase 4\n");
[...]
There is clearly a duplicate label for phase 2, and both are active (i.e. not inside an #if ... #else block). Rename them to "2a" and "2b" so the debug output is unambiguous. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Paul Gortmaker
While trying to debug an issue under extreme I/O loading on preempt-rt kernels, the following backtrace was observed via SysRq output:
rm D ffff8802203afbc0 4600 4878 4748 0x00000000
ffff8802217bfb78 0000000000000082 ffff88021fc2bb80 ffff88021fc2bb80
ffff88021fc2bb80 ffff8802217bffd8 ffff8802217bffd8 ffff8802217bffd8
ffff88021f1d4c80 ffff88021fc2bb80 ffff8802217bfb88 ffff88022437b000
Call Trace:
[<ffffffff8172dc34>] schedule+0x24/0x70
[<ffffffff81225b5d>] jbd2_log_wait_commit+0xbd/0x140
[<ffffffff81060390>] ? __init_waitqueue_head+0x50/0x50
[<ffffffff81223635>] jbd2_log_do_checkpoint+0xf5/0x520
[<ffffffff81223b09>] __jbd2_log_wait_for_space+0xa9/0x1f0
[<ffffffff8121dc40>] start_this_handle.isra.10+0x2e0/0x530
[<ffffffff81060390>] ? __init_waitqueue_head+0x50/0x50
[<ffffffff8121e0a3>] jbd2__journal_start+0xc3/0x110
[<ffffffff811de7ce>] ? ext4_rmdir+0x6e/0x230
[<ffffffff8121e0fe>] jbd2_journal_start+0xe/0x10
[<ffffffff811f308b>] ext4_journal_start_sb+0x5b/0x160
[<ffffffff811de7ce>] ext4_rmdir+0x6e/0x230
[<ffffffff811435c5>] vfs_rmdir+0xd5/0x140
[<ffffffff8114370f>] do_rmdir+0xdf/0x120
[<ffffffff8105c6b4>] ? task_work_run+0x44/0x80
[<ffffffff81002889>] ? do_notify_resume+0x89/0x100
[<ffffffff817361ae>] ? int_signal+0x12/0x17
[<ffffffff81145d85>] sys_unlinkat+0x25/0x40
[<ffffffff81735f22>] system_call_fastpath+0x16/0x1b
What is interesting here is that we call log_wait_commit from within wait_for_space, but we are still holding the checkpoint_mutex, as it surrounds mostly the whole of wait_for_space. And then, while we are waiting, journal_commit_transaction can run, and if the JBD2_FLUSHED bit is set, it will also try to take the same checkpoint_mutex. It seems that we need to drop the checkpoint_mutex while sitting in jbd2_log_wait_commit if we want to guarantee that progress can be made by jbd2_journal_commit_transaction(). There does not seem to be anything preempt-rt specific about this, other than perhaps increasing the odds of it happening. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
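The remedy the text points at can be sketched as releasing the mutex around the sleep; this is an assumed shape, and the real code would need to re-validate state after retaking the lock.

    /* Sketch: don't sleep in jbd2_log_wait_commit() while holding
     * j_checkpoint_mutex, or the committing thread can block on the
     * same mutex and no progress is ever made. */
    mutex_unlock(&journal->j_checkpoint_mutex);
    jbd2_log_wait_commit(journal, tid);
    mutex_lock(&journal->j_checkpoint_mutex);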
-
Submitted by Paul Gortmaker
The state lock is taken after we do an assert on the state value, not before, so we might in fact be asserting on a transient value. Ensure the state check is within the scope of the state lock. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
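In pattern form the fix is simply to move the assertion inside the locked region so it cannot observe a transient value; the particular lock and assertion below are illustrative, not the exact ones touched by the patch.

    write_lock(&journal->j_state_lock);
    /* Asserting here, under the lock, rules out a transient value. */
    J_ASSERT(journal->j_running_transaction != NULL);
    /* ... rest of the locked section ... */
    write_unlock(&journal->j_state_lock);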
-
Submitted by Dmitry Monakhov
If the filesystem was aborted after an inode's writeback completed but before its metadata was updated, we may return success, which results in data loss. In order to handle fs abort correctly, we have to check the fs state once we discover that it is in the MS_RDONLY state. Test case: http://patchwork.ozlabs.org/patch/244297 Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Dmitry Monakhov
An inode's data or non-journaled quota may be written without the journal, so we _must_ send a barrier at the end of ext4_sync_fs. But it can be skipped if the journal commit will do it for us. Also fix data integrity for nojournal mode. Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Dmitry Monakhov
The current implementation of jbd2_journal_force_commit() is suboptimal because it results in empty and useless commits, while callers just want to force and wait for any unfinished commits. We already have jbd2_journal_force_commit_nested(), which does exactly what we want, except there we are guaranteed not to hold a journal transaction open. Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
- 12 June 2013, 2 commits
-
-
Submitted by Theodore Ts'o
Commit 18888cf0 ("ext4: speed up truncate/unlink by not using bforget() unless needed") removed the use of EXT4_FREE_BLOCKS_FORGET in the most important codepath for file systems using extents, but a similar optimization can also be done for file systems using indirect blocks, and for the two special cases in the ext4 extents code. Cc: Andrey Sidorov <qrxd43@motorola.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Theodore Ts'o
For file systems with a very large number of block groups, if all of the block group bitmaps are in memory and the file system is relatively badly fragmented, it's possible for ext4_mb_regular_allocator() to take a long time trying to find a good match. This is especially true if the tuning parameter mb_max_to_scan has been set to a very large number. So add a cond_resched() to avoid soft lockup warnings and to provide better system responsiveness. For ext4_free_blocks(), if we are deleting a large range of blocks and data=journal is enabled so that EXT4_FREE_BLOCKS_FORGET is passed, the loop that calls sb_find_get_block() and ext4_forget() can take 10-15 milliseconds or more. So it's better to add a cond_resched() here as well. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
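The change itself is tiny; as a generic sketch, a cond_resched() dropped into a potentially long-running loop yields the CPU when a reschedule is pending without otherwise altering the logic.

    #include <linux/sched.h>

    for (i = 0; i < count; i++) {
            /* potentially expensive per-iteration work, e.g. the
             * sb_find_get_block() + ext4_forget() calls mentioned above */
            cond_resched();
    }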
-
- 07 June 2013, 1 commit
-
-
Submitted by Theodore Ts'o
Rename ext4_da_writepages() to ext4_writepages() and use it for all modes. We still need to iterate over all the pages in the case of data=journalling, but in the case of nodelalloc/data=ordered (which is what file systems mounted using ext3 backwards compatibility will use) this allows us to use a much more efficient I/O submission path. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
- 06 June 2013, 4 commits
-
-
Submitted by Theodore Ts'o
The test_root() function could potentially loop forever due to overflow issues. So rewrite test_root() to avoid this issue; as a bonus, it is 38% faster when benchmarked via a test loop:
int main(int argc, char **argv)
{
        int i;

        for (i = 0; i < 1 << 24; i++) {
                if (test_root(i, 7))
                        printf("%d\n", i);
        }
}
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
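One overflow-safe way to write such a test_root() is to divide the candidate down instead of multiplying the base up, so no intermediate value can wrap. A sketch under that assumption; it may not match the merged version line for line.

    /* Returns 1 if a is b, b^2, b^3, ... (a, b > 1), with no risk of
     * overflowing an intermediate product. */
    static int test_root(unsigned int a, int b)
    {
            while (1) {
                    if (a < b)
                            return 0;
                    if (a == b)
                            return 1;
                    if (a % b)
                            return 0;
                    a /= b;
            }
    }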
-
Submitted by Theodore Ts'o
The group number passed to ext4_get_group_info() should be valid, but let's add an assert to check this before we start creating a pointer based on that group number and dereferencing it. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Theodore Ts'o
Check the group number for sanity earlier, before calling routines such as ext4_bg_has_super() or ext4_group_overhead_blocks(). Reported-by: Jonathan Salwan <jonathan.salwan@gmail.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Theodore Ts'o
The bio_alloc() function can return NULL if the memory allocation fails, so we need to check for this. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
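The defensive pattern in question, as a hedged fragment (the GFP flags, vector count, and error path are assumptions, not the exact upstream hunk):

    struct bio *bio;

    bio = bio_alloc(GFP_NOIO, nvecs);
    if (!bio)
            return -ENOMEM; /* bio_alloc() may fail; don't dereference NULL */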
-
- 05 June 2013, 11 commits
-
-
Submitted by Jan Kara
Now that we clear PageWriteback after extent conversion, there's no need to wait for io_end processing in ext4_evict_inode(). Running AIO/DIO keeps a file reference until aio_complete() is called, so ext4_evict_inode() cannot be called. For io_end structures resulting from buffered IO, waiting happens because we wait for PageWriteback in truncate_inode_pages(). Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Jan Kara
We don't have to wait for extent conversion in ext4_punch_hole(), since buffered IO for the punched range has been flushed and waited upon (thus all extent conversions for that range have completed). We also wait for all DIO to finish using inode_dio_wait(), so there cannot be any extent conversions pending due to direct IO. Also remove ext4_flush_unwritten_io() since it's unused now. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Jan Kara
We don't have to wait for unwritten extent conversion in ext4_ind_direct_IO(), as all writes that happened before the DIO are flushed by the generic code and extent conversion has happened before we cleared the PageWriteback bit. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Jan Kara
After removal of the ext4_flush_unwritten_io() call, ext4_file_sync() doesn't need i_mutex anymore. Forcing transaction commits doesn't need i_mutex either, as there's nothing inode-specific in that code apart from grabbing transaction ids from the inode. So remove the lock. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Jan Kara
Just use the generic function instead of duplicating it. We only need to reshuffle the read-only check a bit (which is there to prevent writing to a filesystem that has been remounted read-only after an error, I assume). Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Jan Kara
Since the PageWriteback bit is now cleared after extents are converted from unwritten to written ones, we have full exclusion of the writeback path from truncate (truncate_inode_pages() waits for the PageWriteback bit to be cleared on all invalidated pages). Exclusion from the DIO path is achieved by the inode_dio_wait() call in ext4_setattr(). So there's no need to wait for extent conversion in ext4_truncate() anymore. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Jan Kara
Make sure extent conversion after DIO happens while i_dio_count is still elevated, so that inode_dio_wait() waits until extent conversion is done. This removes the need for explicit waiting for extent conversion in some cases. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Jan Kara
Currently the PageWriteback bit gets cleared from put_io_page() called from ext4_end_bio(). This is somewhat inconvenient, as the extent tree is not fully updated at that time (unwritten extents are not yet marked as written), so we cannot read the data back yet. This design was dictated by lock ordering, since we cannot start a transaction while the PageWriteback bit is set (we could easily deadlock with ext4_da_writepages()). But now that we use transaction reservation for extent conversion, the locking issues are solved and we can move the PageWriteback bit clearing to after extent conversion is done. As a result we can remove the wait for unwritten extent conversion from ext4_sync_file(), because it already implicitly happens through wait_on_page_writeback(). We implement deferring of PageWriteback clearing by queueing completed bios to the appropriate io_end and processing all the pages when the io_end is going to be freed, instead of at the moment ext4_io_end() is called. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Jan Kara
Now that we have extent conversions with a reserved transaction, we have to prevent extent conversions without a reserved transaction (from the DIO code) from blocking them (as that would effectively void any transaction reservation we did). So split the lists, work items, and work queues into reserved and unreserved parts. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Jan Kara
Later we would like to clear the PageWriteback bit only after extent conversion from unwritten to written extents is performed. However, it is not possible to start a transaction after PageWriteback is set, because that violates lock ordering (and is easy to deadlock). So we have to reserve a transaction before locking pages and sending them for IO, and later we use the transaction for extent conversion from ext4_end_io(). Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Submitted by Jan Kara
There isn't any need for setting BH_Uninit on buffers anymore. It was only used to signal that we need to mark the io_end as needing extent conversion in add_bh_to_extent(), but now we can mark the io_end directly when mapping the extent. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-