- 10 July 2014, 5 commits
-
-
Committed by Jaegeuk Kim
This patch cleans up some simple, unnecessary code.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
Committed by Jaegeuk Kim
This patch adds f2fs_do_tmpfile to eliminate the redundant init_inode_metadata flow. Through this, we can provide consistent lock usage, e.g. fi->i_sem, which will enable better debugging support.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
Committed by Chao Yu
Add the function f2fs_tmpfile() to support O_TMPFILE file creation, and modify the logic of init_inode_metadata so that a temporary file can later be linked with linkat().
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
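For context, this is the userspace pattern the feature enables; a minimal sketch, not taken from the patch, and the mount point /mnt/f2fs is an assumed path:
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Create an unnamed file inside the directory; it disappears on
     * close unless it is linked into the namespace below. */
    int fd = open("/mnt/f2fs", O_TMPFILE | O_WRONLY, 0600);
    if (fd < 0) {
        perror("open(O_TMPFILE)");
        return 1;
    }

    /* Give the temporary file a name via the /proc/self/fd pattern
     * documented in open(2); this is the linkat() case the commit
     * message refers to. */
    char path[64];
    snprintf(path, sizeof(path), "/proc/self/fd/%d", fd);
    if (linkat(AT_FDCWD, path, AT_FDCWD, "/mnt/f2fs/now-visible",
               AT_SYMLINK_FOLLOW) < 0)
        perror("linkat");

    close(fd);
    return 0;
}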
-
Committed by Chao Yu
After we call find_data_page in truncate_partial_data_page, we cannot guarantee that the page is up to date, since an error may have occurred in a lower layer. We had better check the status of the page to avoid writing a non-up-to-date page back to the device.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
Committed by Chao Yu
We have already set the page up to date in ->write_begin, so the redundant SetPageUptodate in ->write_end should be removed.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
- 09 July 2014, 7 commits
-
-
Committed by Chao Yu
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=75861
Denis reported on 2014-05-10 11:28:59 UTC:
"F2FS-fs (mmcblk0p28): mounting..
Unable to handle kernel NULL pointer dereference at virtual address 00000018
...
[<c0a2f678>] (_raw_spin_lock+0x3c/0x70) from [<c03a0330>] (issue_flush_thread+0x50/0x17c)
[<c03a0330>] (issue_flush_thread+0x50/0x17c) from [<c01b4064>] (kthread+0x98/0xa4)
[<c01b4064>] (kthread+0x98/0xa4) from [<c0108060>] (kernel_thread_exit+0x0/0x8)"
This patch assigns cmd_control_info in sm_info before issue_flush_thread is created, which makes sure the issue-flush thread has no chance to access invalid info in fcc.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Reviewed-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
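The ordering rule behind the fix generalizes: publish shared state before starting the thread that consumes it. A minimal pthread sketch of the pattern; the struct fields and names are illustrative, not the f2fs definitions:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct flush_cmd_control {
    int issue_list_len;          /* stand-in for the real fields */
};

struct sm_info {
    struct flush_cmd_control *cmd_control_info;
};

static void *issue_flush_thread(void *arg)
{
    struct sm_info *sm = arg;
    /* Safe only because cmd_control_info was set before pthread_create(). */
    printf("flush thread sees %d queued commands\n",
           sm->cmd_control_info->issue_list_len);
    return NULL;
}

int main(void)
{
    static struct sm_info sm;
    struct flush_cmd_control *fcc = calloc(1, sizeof(*fcc));

    sm.cmd_control_info = fcc;   /* 1) publish the control block first */

    pthread_t tid;               /* 2) only then start the consumer    */
    pthread_create(&tid, NULL, issue_flush_thread, &sm);
    pthread_join(tid, NULL);
    free(fcc);
    return 0;
}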
-
Committed by Jaegeuk Kim
If we don't check the current backing device status, balance_dirty_pages can fall into an infinite pausing routine. This can occur when a lot of directories produce a small number of dirty dentry pages, including files.
Reported-by: Brian Chadwick <brianchad@westnet.com.au>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
Committed by Jaegeuk Kim
If an inode is renamed, it should be registered as file_lost_pino to conduct a checkpoint at f2fs_sync_file. Otherwise, the inode cannot be recovered, due to no dent_mark, in the following scenario. Note that this scenario is from xfstests/322.
1. create "a"
2. fsync "a"
3. rename "a" to "b"
4. fsync "b"
5. sudden power-cut
After recovery is done, "b" should be seen. However, the result shows "a", since the recovery procedure does not enter recover_dentry due to no dent_mark. The reason is as below.
- The nid of "a" is checkpointed during #2, f2fs_sync_file.
- The inode page for "b" produced by #3 is written without dent_mark by sync_node_pages.
So, this patch fixes this bug by assigning file_lost_pino to "a"'s inode. If the pino is lost, f2fs_sync_file conducts a checkpoint, and then recovers the latest pino and its dentry information for further recovery.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
Committed by Chao Yu
This patch corrects the code releasing new_page to avoid a BUG_ON in the error path of f2fs_rename.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
Committed by Chao Yu
If we fail in this path:
->init_inode_metadata
  ->make_empty_dir
    ->get_new_data_page
      ->grab_cache_page returns -ENOMEM
we will hit a BUG_ON in the error path of init_inode_metadata when remove_inode_page is called, because i_block = 2 (one inode block, which will be released later, plus one dentry block). We should release the dentry block in init_inode_metadata to avoid this BUG_ON and to avoid leaking the dentry block, because we never get a second chance to release that block in ->evict_inode: in the upper error path we mark this inode 'bad'.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
Committed by Chao Yu
This patch adds lower-bound verification for nid in check_nid_range, so that reserved nids (0, the node nid, and the meta nid) passed by a caller can be caught there. check_nid_range can then be used in f2fs_nfs_get_inode to simplify the code.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
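The shape of the check is plain range validation, sketched below in standalone C; the reserved-nid count and the max_nid bound are stand-ins, not the exact f2fs constants:
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int nid_t;

/* Illustrative value only: in f2fs the reserved nids are nid 0 and the
 * special node/meta inode numbers, and max_nid comes from the NAT size. */
#define RESERVED_NIDS 3u

static bool check_nid_range(nid_t nid, nid_t max_nid)
{
    if (nid < RESERVED_NIDS)    /* new lower bound: rejects 0/node/meta nids */
        return false;
    if (nid >= max_nid)         /* existing upper bound */
        return false;
    return true;
}

int main(void)
{
    printf("%d %d %d\n",
           check_nid_range(0, 1000),     /* 0: reserved        */
           check_nid_range(5, 1000),     /* 1: valid           */
           check_nid_range(2000, 1000)); /* 0: beyond max_nid  */
    return 0;
}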
-
Committed by Chao Yu
Remove unused variables in struct f2fs_sm_info.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
- 23 June 2014, 3 commits
-
-
Committed by Jaegeuk Kim
This patch fixes the fallocate bug below (see xfstests/255). In fallocate(fd, 0, 20480), expand_inode_data processes:
for (index = pg_start; index <= pg_end; index++) {
	f2fs_reserve_block();
	...
}
So, even though fallocate requests 20480 bytes, i.e. 5 blocks, f2fs allocates 6 blocks, including pg_end. This patch adds one condition to avoid that extra block allocation.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
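The arithmetic behind the off-by-one can be shown standalone; this sketch assumes 4 KB blocks (as f2fs uses) and counts how many blocks the inclusive loop touches versus how many the request actually needs:
#include <stdio.h>

#define BLKSIZE 4096UL

int main(void)
{
    unsigned long offset = 0, len = 20480;        /* the xfstests/255 case */
    unsigned long pg_start = offset / BLKSIZE;
    unsigned long pg_end   = (offset + len) / BLKSIZE;
    unsigned long off_end  = (offset + len) % BLKSIZE;

    unsigned long loop_blocks = pg_end - pg_start + 1;  /* inclusive loop  */
    unsigned long needed      = loop_blocks - (off_end == 0 ? 1 : 0);

    /* Prints "loop allocates 6 blocks, request needs 5": when the end is
     * block-aligned, the block at pg_end must be skipped. */
    printf("loop allocates %lu blocks, request needs %lu\n",
           loop_blocks, needed);
    return 0;
}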
-
Committed by Jaegeuk Kim
This patch arranges the f2fs_locks to cover the fallocated data and its i_size.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
Committed by Jaegeuk Kim
The previous get_block in f2fs didn't report a newly allocated region that has NEW_ADDR. For the read path it should not be reported, but fiemap needs it. So, this patch introduces two get_block variants sharing a core function.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
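A minimal sketch of the "two thin wrappers around one core" pattern described above; the names, the flag, and the NEW_ADDR placeholder are illustrative, not the exact f2fs functions:
#include <stdbool.h>
#include <stdio.h>

#define NEW_ADDR 0xFFFFFFFFu   /* placeholder for a reserved block address */

struct map { unsigned int blkaddr; bool mapped; };

/* Core lookup: 'report_unwritten' decides whether blocks that are
 * allocated but not yet written (NEW_ADDR) count as mapped. */
static void get_block_core(unsigned int blkaddr, bool report_unwritten,
                           struct map *m)
{
    m->blkaddr = blkaddr;
    m->mapped = (blkaddr != 0) &&
                (blkaddr != NEW_ADDR || report_unwritten);
}

/* Reader path: treat NEW_ADDR regions as holes. */
static void get_data_block(unsigned int blkaddr, struct map *m)
{
    get_block_core(blkaddr, false, m);
}

/* fiemap path: report NEW_ADDR regions so userspace sees the allocation. */
static void get_data_block_fiemap(unsigned int blkaddr, struct map *m)
{
    get_block_core(blkaddr, true, m);
}

int main(void)
{
    struct map a, b;
    get_data_block(NEW_ADDR, &a);
    get_data_block_fiemap(NEW_ADDR, &b);
    printf("reader sees mapped=%d, fiemap sees mapped=%d\n", a.mapped, b.mapped);
    return 0;
}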
-
- 12 June 2014, 1 commit
-
-
Committed by Al Viro
iter_file_splice_write() - a ->splice_write() instance that gathers the pipe buffers, builds a bio_vec-based iov_iter covering those and feeds it to ->write_iter(). A bunch of simple cases converted to that... [AV: fixed the braino spotted by Cyrill]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 08 June 2014, 1 commit
-
-
Committed by Jaegeuk Kim
This patch hooks f2fs_fiemap up to the generic fiemap helper using get_block.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
- 07 June 2014, 2 commits
-
-
Committed by Jaegeuk Kim
There is an erroneous case during recovery, like below. In recovery_dentry:
1) dir = f2fs_iget();
2) mark the dir with FI_DELAY_IPUT
3) goto unmap_out
After the recovery routine ends, there are no dirty dentries, so the dir cannot be released by iput in remove_dirty_dir_inode. This patch fixes this bug case by handling the iget and iput inside the recovery_dentry procedure.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
Committed by Jaegeuk Kim
If a fallocated file is fsynced, we should recover the i_size after a sudden power cut.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
- 05 June 2014, 1 commit
-
-
Committed by Mel Gorman
aops->write_begin may allocate a new page and make it visible only to have mark_page_accessed called almost immediately after. Once the page is visible, the atomic operations are necessary, which is noticeable overhead when writing to an in-memory filesystem like tmpfs, but should also be noticeable with fast storage. The objective of the patch is to initialise the accessed information with non-atomic operations before the page is visible.
The bulk of filesystems directly or indirectly use grab_cache_page_write_begin or find_or_create_page for the initial allocation of a page cache page. This patch adds an init_page_accessed() helper which behaves like the first call to mark_page_accessed() but may be called before the page is visible and can be done non-atomically.
The primary APIs of concern in this case are the following, and they are used by most filesystems:
 find_get_page
 find_lock_page
 find_or_create_page
 grab_cache_page_nowait
 grab_cache_page_write_begin
All of them are very similar in detail, so the patch creates a core helper pagecache_get_page() which takes a flags parameter that affects its behaviour, such as whether the page should be marked accessed or not. The old APIs are preserved but are basically thin wrappers around this core function. Each of the filesystems is then updated to avoid calling mark_page_accessed when it is known that the VM interfaces have already done the job.
There is a slight snag in that the timing of the mark_page_accessed() has now changed, so in rare cases it is possible a page gets to the end of the LRU as PageReferenced whereas previously it might have been repromoted. This is expected to be rare, but it is worth the filesystem people thinking about it in case they see a problem with the timing change. It is also the case that some filesystems may now be marking pages accessed that previously did not, but it makes sense that filesystems have consistent behaviour in this regard.
The test case used to evaluate this is a simple dd of a large file done multiple times with the file deleted on each iteration. The size of the file is 1/10th of physical memory to avoid dirty page balancing. In the async case it is possible that the workload completes without even hitting the disk and will have variable results, but it highlights the impact of mark_page_accessed for async IO. The sync results are expected to be more stable. The exception is tmpfs, where the normal case is for the "IO" to not hit the disk.
The test machine was single socket and UMA to avoid any scheduling or NUMA artifacts. Throughput and wall times are presented for sync IO; only wall times are shown for async, as the granularity reported by dd and the variability make them unsuitable for comparison. As the async results were variable due to writeback timings, I'm only reporting the maximum figures. The sync results were stable enough to make the mean and stddev uninteresting.
The performance results are reported based on a run with no profiling. Profile data is based on a separate run with oprofile running.
async dd
                                 3.15.0-rc3            3.15.0-rc3
                                    vanilla           accessed-v2
ext3    Max      elapsed    13.9900 (  0.00%)     11.5900 ( 17.16%)
tmpfs   Max      elapsed     0.5100 (  0.00%)      0.4900 (  3.92%)
btrfs   Max      elapsed    12.8100 (  0.00%)     12.7800 (  0.23%)
ext4    Max      elapsed    18.6000 (  0.00%)     13.3400 ( 28.28%)
xfs     Max      elapsed    12.5600 (  0.00%)      2.0900 ( 83.36%)
The XFS figure is a bit strange as it managed to avoid a worst case by sheer luck, but the average figures looked reasonable.
        samples  percentage
ext3      86107      0.9783  vmlinux-3.15.0-rc4-vanilla         mark_page_accessed
ext3      23833      0.2710  vmlinux-3.15.0-rc4-accessed-v3r25  mark_page_accessed
ext3       5036      0.0573  vmlinux-3.15.0-rc4-accessed-v3r25  init_page_accessed
ext4      64566      0.8961  vmlinux-3.15.0-rc4-vanilla         mark_page_accessed
ext4       5322      0.0713  vmlinux-3.15.0-rc4-accessed-v3r25  mark_page_accessed
ext4       2869      0.0384  vmlinux-3.15.0-rc4-accessed-v3r25  init_page_accessed
xfs       62126      1.7675  vmlinux-3.15.0-rc4-vanilla         mark_page_accessed
xfs        1904      0.0554  vmlinux-3.15.0-rc4-accessed-v3r25  init_page_accessed
xfs         103      0.0030  vmlinux-3.15.0-rc4-accessed-v3r25  mark_page_accessed
btrfs     10655      0.1338  vmlinux-3.15.0-rc4-vanilla         mark_page_accessed
btrfs      2020      0.0273  vmlinux-3.15.0-rc4-accessed-v3r25  init_page_accessed
btrfs       587      0.0079  vmlinux-3.15.0-rc4-accessed-v3r25  mark_page_accessed
tmpfs     59562      3.2628  vmlinux-3.15.0-rc4-vanilla         mark_page_accessed
tmpfs      1210      0.0696  vmlinux-3.15.0-rc4-accessed-v3r25  init_page_accessed
tmpfs        94      0.0054  vmlinux-3.15.0-rc4-accessed-v3r25  mark_page_accessed
[akpm@linux-foundation.org: don't run init_page_accessed() against an uninitialised pointer]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Jan Kara <jack@suse.cz>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Tested-by: Prabhakar Lad <prabhakar.csengg@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
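The core design point above, one getter parameterised by flags with the old APIs kept as thin wrappers, can be sketched outside the kernel. The FGP_* values, the toy struct page, and the 'present' bookkeeping below are illustrative stand-ins, not the kernel's definitions:
#include <stdbool.h>
#include <stdio.h>

/* Illustrative flag values; the kernel defines its own FGP_* constants. */
#define FGP_LOCK     0x1
#define FGP_ACCESSED 0x2
#define FGP_CREAT    0x4

struct page { bool present; bool locked; bool referenced; };

/* Core helper: one lookup path whose behaviour (lock, mark accessed,
 * create on miss) is selected by flags, so callers that know the VM
 * already marked the page accessed can skip that work. */
static struct page *pagecache_get_page(struct page *cache, int index,
                                       int fgp_flags)
{
    struct page *page = &cache[index];

    if (!page->present) {
        if (!(fgp_flags & FGP_CREAT))
            return NULL;            /* miss, and not asked to create    */
        page->present = true;       /* "allocate" the page on a miss    */
    }
    if (fgp_flags & FGP_LOCK)
        page->locked = true;
    if (fgp_flags & FGP_ACCESSED)
        page->referenced = true;    /* set before the page is "visible" */
    return page;
}

/* Old APIs survive as thin wrappers around the core function. */
static struct page *find_get_page(struct page *c, int i)
{ return pagecache_get_page(c, i, 0); }

static struct page *find_or_create_page(struct page *c, int i)
{ return pagecache_get_page(c, i, FGP_LOCK | FGP_ACCESSED | FGP_CREAT); }

int main(void)
{
    struct page cache[4] = { { false, false, false } };
    struct page *p = find_or_create_page(cache, 1);
    printf("created: locked=%d referenced=%d, plain lookup hit=%d\n",
           p->locked, p->referenced, find_get_page(cache, 1) != NULL);
    return 0;
}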
-
- 04 June 2014, 4 commits
-
-
Committed by Jaegeuk Kim
If data are overwritten through dio, f2fs previously did not retain the fsync mark, because no additional node writes happen. Note that this patch should resolve xfstests:311.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
Committed by Changman Lee
f2fs's cp has one page, which consists of struct f2fs_checkpoint and the version bitmaps of sit and nat. To support lots of segments, we need more blocks for the sit bitmap. So let's arrange the sit bitmap as follows:
+-----------------+------------+
| f2fs_checkpoint | sit bitmap |
|  + nat bitmap   |            |
+-----------------+------------+
0                4k    N blocks
Signed-off-by: Changman Lee <cm224.lee@samsung.com>
[Jaegeuk Kim: simple code change for readability]
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
Committed by Chao Yu
Previously we allocated pages with no mapping in ra_sum_pages(), so we may encounter a crash in the event trace of f2fs_submit_page_mbio, where we access the mapping data of the page. We had better allocate pages in the bd_inode mapping and invalidate these pages after we restore data from them. This avoids the crash in the scenario below.
Changes from V1:
 o remove redundant code in ra_sum_pages() suggested by Jaegeuk Kim.
Call Trace:
 [<f1031630>] ? ftrace_raw_event_f2fs_write_checkpoint+0x80/0x80 [f2fs]
 [<f10377bb>] f2fs_submit_page_mbio+0x1cb/0x200 [f2fs]
 [<f103c5da>] restore_node_summary+0x13a/0x280 [f2fs]
 [<f103e22d>] build_curseg+0x2bd/0x620 [f2fs]
 [<f104043b>] build_segment_manager+0x1cb/0x920 [f2fs]
 [<f1032c85>] f2fs_fill_super+0x535/0x8e0 [f2fs]
 [<c115b66a>] mount_bdev+0x16a/0x1a0
 [<f102f63f>] f2fs_mount+0x1f/0x30 [f2fs]
 [<c115c096>] mount_fs+0x36/0x170
 [<c1173635>] vfs_kern_mount+0x55/0xe0
 [<c1175388>] do_mount+0x1e8/0x900
 [<c1175d72>] SyS_mount+0x82/0xc0
 [<c16059cc>] sysenter_do_call+0x12/0x22
Suggested-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
Committed by Chao Yu
When the large directory feature is enabled, we have one case that can cause an overflow in dir_buckets(), namely the special case: level + dir_level >= 32 and level < MAX_DIR_HASH_DEPTH / 2. Here we define MAX_DIR_BUCKETS to limit the return value when that condition could trigger the potential overflow.
Changes from V1:
 o modify the description of the calculation in f2fs.txt as suggested by Changman Lee.
Suggested-by: Changman Lee <cm224.lee@samsung.com>
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
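The overflow itself is plain shift arithmetic: the bucket count grows as a power of two of (level + dir_level), which wraps once the exponent reaches the width of a 32-bit value. A standalone sketch; the cap value and exact expression are illustrative, not necessarily what the patch chose:
#include <stdio.h>

/* Illustrative cap; the patch introduces MAX_DIR_BUCKETS with its own
 * value in f2fs. */
#define MAX_DIR_BUCKETS (1U << 30)

/* Once level + dir_level reaches 32, a 32-bit shift would overflow, so
 * the count must be capped instead of computed. */
static unsigned int dir_buckets(unsigned int level, unsigned int dir_level)
{
    if (level + dir_level >= 32)
        return MAX_DIR_BUCKETS;
    unsigned long long full = 1ULL << (level + dir_level);
    return full > MAX_DIR_BUCKETS ? MAX_DIR_BUCKETS : (unsigned int)full;
}

int main(void)
{
    printf("level 3,  dir_level 2  -> %u buckets\n", dir_buckets(3, 2));
    printf("level 10, dir_level 25 -> capped at %u\n", dir_buckets(10, 25));
    return 0;
}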
-
- 02 June 2014, 1 commit
-
-
Committed by Jaegeuk Kim
This patch should resolve the following recursive lock.
 [<ffffffff8135a9c3>] call_rwsem_down_write_failed+0x13/0x20
 [<ffffffffa01749dc>] f2fs_setxattr+0x5c/0xa0 [f2fs]
 [<ffffffffa0174c99>] __f2fs_set_acl+0x1b9/0x340 [f2fs]
 [<ffffffffa017515a>] f2fs_init_acl+0x4a/0xcb [f2fs]
 [<ffffffffa0159abe>] __f2fs_add_link+0x26e/0x780 [f2fs]
 [<ffffffffa015d4d8>] f2fs_mkdir+0xb8/0x150 [f2fs]
 [<ffffffff811cebd7>] vfs_mkdir+0xb7/0x160
 [<ffffffff811cf89b>] SyS_mkdir+0xab/0xe0
 [<ffffffff817244bf>] tracesys+0xe1/0xe6
 [<ffffffffffffffff>] 0xffffffffffffffff
The call path indicates:
 - f2fs_add_link        : down_write(&fi->i_sem);
   - init_inode_metadata
     - f2fs_init_acl
       - __f2fs_set_acl
         - f2fs_setxattr : down_write(&fi->i_sem);
Here we should not call f2fs_setxattr but __f2fs_setxattr. However, __f2fs_setxattr is a static function in xattr.c, so I found the other, generic approach that keeps using f2fs_setxattr. In f2fs_setxattr, the page pointer is only given from init_inode_metadata, so this patch adds that condition to avoid the recursive lock in f2fs_setxattr.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
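The general shape of the fix, skipping a lock when the caller context proves it is already held, can be sketched with pthreads; the 'caller_page' argument standing in for the ipage hint is illustrative, not the f2fs interface:
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t i_sem = PTHREAD_MUTEX_INITIALIZER;

/* 'caller_page' plays the role of the page hint: it is non-NULL only when
 * the call comes from a path (init_inode_metadata in the patch) that
 * already holds i_sem, so taking it again would self-deadlock. */
static void set_xattr(const char *name, const void *caller_page)
{
    int need_lock = (caller_page == NULL);

    if (need_lock)
        pthread_mutex_lock(&i_sem);

    printf("setting xattr %s (lock %s)\n",
           name, need_lock ? "taken here" : "already held by caller");

    if (need_lock)
        pthread_mutex_unlock(&i_sem);
}

int main(void)
{
    int dummy_page;

    /* Normal setxattr path: nobody holds i_sem yet. */
    set_xattr("user.test", NULL);

    /* init_inode_metadata-like path: the lock is already held. */
    pthread_mutex_lock(&i_sem);
    set_xattr("system.posix_acl_default", &dummy_page);
    pthread_mutex_unlock(&i_sem);
    return 0;
}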
-
- 08 May 2014, 2 commits
-
-
Committed by Chao Yu
This patch uses the exported inode_init_owner() to simplify the code in f2fs_new_inode().
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
-
Committed by Chao Yu
If we use slab memory in f2fs_issue_flush(), we face memory pressure and extra latency caused by racing kmem_cache_{alloc,free} calls. Let's allocate the memory on the stack instead of from the slab.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
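The pattern is an allocation-free fast path: a short-lived, bounded-size command descriptor lives on the caller's stack rather than being heap-allocated per call. A minimal userspace analogue, with placeholder fields:
#include <stdio.h>

struct flush_cmd {
    int ret;                 /* placeholder fields for the real command */
    const char *reason;
};

static void issue_flush(struct flush_cmd *cmd)
{
    /* Pretend to queue the command and complete it. */
    cmd->ret = 0;
}

int main(void)
{
    /* Before: struct flush_cmd *cmd = malloc(sizeof(*cmd)); ... free(cmd);
     * After: the object is short-lived and bounded in size, so it can
     * simply live on the stack for the duration of the call. */
    struct flush_cmd cmd = { .ret = -1, .reason = "fsync" };

    issue_flush(&cmd);
    printf("flush for %s returned %d\n", cmd.reason, cmd.ret);
    return 0;
}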
-
- 07 May 2014, 13 commits
-
-
Committed by Chao Yu
This patch adds a tracepoint for f2fs_read_data_page to trace when a page is read by a user.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
-
Committed by Chao Yu
This patch adds a tracepoint for f2fs_write_{meta,node,data}_pages to trace when pages are being fsynced or flushed.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
-
Committed by Chao Yu
This patch adds a tracepoint for f2fs_write_{meta,node,data}_page to trace when a page is being written out.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
-
Committed by Chao Yu
This patch adds a tracepoint for f2fs_write_end to trace users' write operations.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
-
Committed by Chao Yu
This patch adds a tracepoint for f2fs_write_begin to trace users' write operations.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
-
Committed by Zhang Zhen
Fix the following checkpatch warning:
WARNING: do {} while (0) macros should not be semicolon terminated
Signed-off-by: Zhang Zhen <zhenzhang.zhang@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
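For reference, the warning is about a macro definition carrying its own trailing semicolon; a small illustrative example, with a made-up macro name:
#include <stdio.h>

/* Bad: a trailing semicolon inside the macro means an expansion followed
 * by ';' produces an extra empty statement, which breaks if/else bodies
 * without braces. */
/* #define update_stat(x) do { (x)++; } while (0); */

/* Good: no trailing semicolon; the caller supplies it. */
#define update_stat(x) do { (x)++; } while (0)

int main(void)
{
    int hits = 0;

    if (hits == 0)
        update_stat(hits);   /* would not compile next to the else below
                              * if the macro were semicolon terminated */
    else
        printf("unreachable\n");

    printf("hits = %d\n", hits);
    return 0;
}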
-
Committed by Jaegeuk Kim
If the inode page is clean during inode eviction, it is better to drop the page to reduce further memory pressure.
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
-
Committed by Jaegeuk Kim
This patch reduces the lock granularity during write_begin. When the system is under memory pressure, it is better to reduce the time the data pages are kept locked.
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
-
Committed by Jaegeuk Kim
This patch removes grab_cache_page_write_begin for meta pages.
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
-
Committed by Jaegeuk Kim
We don't need to wait on page writeback for these cases.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
-
Committed by Jaegeuk Kim
This patch splits grab_cache_page_write_begin into grab_cache_page and wait_on_page_writeback for node pages. It intends to improve the latency of getting node pages by avoiding unnecessary wait_on_page_writeback calls.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
-
Committed by Chao Yu
Previously we did not truncate the inline data in the inode page on setattr, so the following case could still read inline data that had already been truncated:
1. write inline data
2. ftruncate size to 0
3. ftruncate size to max inline data size
4. read from offset 0
This patch introduces truncate_inline_data() to fix this problem.
Change log from v1:
 o fix a bug and do not truncate the first page data after truncating inline data.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
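Conceptually, truncating inline data means zeroing the inline region from the truncation offset onward, so a later read cannot see stale bytes. A standalone sketch; the inline-area size and struct layout are placeholders, not f2fs's exact on-disk format:
#include <stdio.h>
#include <string.h>

#define MAX_INLINE_DATA 128   /* placeholder; f2fs derives its own limit */

struct inline_inode {
    size_t i_size;
    char inline_data[MAX_INLINE_DATA];
};

/* Zero everything past 'from' so that growing the file again (step 3 in
 * the commit message) exposes zeroes, not the old contents. */
static void truncate_inline_data(struct inline_inode *inode, size_t from)
{
    if (from < MAX_INLINE_DATA)
        memset(inode->inline_data + from, 0, MAX_INLINE_DATA - from);
    inode->i_size = from;
}

int main(void)
{
    struct inline_inode inode = { .i_size = 5 };
    memcpy(inode.inline_data, "hello", 5);

    truncate_inline_data(&inode, 0);   /* ftruncate to 0             */
    inode.i_size = MAX_INLINE_DATA;    /* ftruncate back up (step 3) */

    printf("byte at offset 0 after re-extend: %d\n", inode.inline_data[0]);
    return 0;
}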
-
Committed by Chao Yu
We have no readahead mechanism in the ->iterate() path like the one in the ->read() path, which causes low performance when we read a large directory. This patch adds readahead in f2fs_readdir() for better performance.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
-