Commit d54d35c5 authored by Linus Torvalds

Merge tag 'f2fs-for-4.18' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs updates from Jaegeuk Kim:
 "In this round, we've mainly focused on discard, aka unmap, control
  along with fstrim for the Android-specific usage model. In addition,
  we've fixed the writepage flow, which previously returned EAGAIN and
  thus caused fsync(2) to fail with EIO due to the mapping's error
  state. To avoid an old MM bug [1], we decided not to use __GFP_ZERO
  for the node and meta page cache mappings. As always, we've cleaned
  up many places to prepare for future fsverity support and to avoid
  symbol conflicts.

  Enhancements:
   - do discard/fstrim at lower priority, considering fs utilization
   - split large discard commands into smaller ones for better responsiveness
   - add more sanity checks to address syzbot reports
   - add a mount option, fsync_mode=nobarrier, which can reduce the number of cache flushes
   - clean up symbol namespace with modified function names
   - be strict on block allocation and IO control in corner cases

  Bug fixes:
   - don't use __GFP_ZERO for mappings
   - fix error reports in writepage to avoid fsync() failure
   - avoid selinux denial on CAP_RESOURCE on resgid/resuid
   - fix some subtle race conditions in GC/atomic writes/shutdown
   - fix overflow bugs in sanity_check_raw_super
   - fix missing bits on get_flags

  Clean-ups:
   - prepare the generic flow for future fsverity integration
   - fix some coding standard violations"

[1] https://lkml.org/lkml/2018/4/8/661

* tag 'f2fs-for-4.18' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (79 commits)
  f2fs: fix to clear FI_VOLATILE_FILE correctly
  f2fs: let sync node IO interrupt async one
  f2fs: don't change wbc->sync_mode
  f2fs: fix to update mtime correctly
  fs: f2fs: insert space around that ':' and ', '
  fs: f2fs: add missing blank lines after declarations
  fs: f2fs: changed variable type of offset "unsigned" to "loff_t"
  f2fs: clean up symbol namespace
  f2fs: make set_de_type() static
  f2fs: make __f2fs_write_data_pages() static
  f2fs: fix to avoid accessing cross the boundary
  f2fs: fix to let caller retry allocating block address
  disable loading f2fs module on PAGE_SIZE > 4KB
  f2fs: fix error path of move_data_page
  f2fs: don't drop dentry pages after fs shutdown
  f2fs: fix to avoid race during access gc_thread pointer
  f2fs: clean up with clear_radix_tree_dirty_tag
  f2fs: fix to don't trigger writeback during recovery
  f2fs: clear discard_wake earlier
  f2fs: let discard thread wait a little longer if dev is busy
  ...
@@ -101,6 +101,7 @@ Date: February 2015
 Contact: "Jaegeuk Kim" <jaegeuk@kernel.org>
 Description:
 	Controls the trimming rate in batch mode.
+	<deprecated>
 
 What:	/sys/fs/f2fs/<disk>/cp_interval
 Date:	October 2015
@@ -140,7 +141,7 @@ Contact: "Shuoran Liu" <liushuoran@huawei.com>
 Description:
 	Shows total written kbytes issued to disk.
 
-What:	/sys/fs/f2fs/<disk>/feature
+What:	/sys/fs/f2fs/<disk>/features
 Date:	July 2017
 Contact: "Jaegeuk Kim" <jaegeuk@kernel.org>
 Description:
...
@@ -182,13 +182,15 @@ whint_mode=%s	Control which write hints are passed down to block
 			passes down hints with its policy.
 alloc_mode=%s		Adjust block allocation policy, which supports "reuse"
 			and "default".
-fsync_mode=%s		Control the policy of fsync. Currently supports "posix"
-			and "strict". In "posix" mode, which is default, fsync
-			will follow POSIX semantics and does a light operation
-			to improve the filesystem performance. In "strict" mode,
-			fsync will be heavy and behaves in line with xfs, ext4
-			and btrfs, where xfstest generic/342 will pass, but the
-			performance will regress.
+fsync_mode=%s		Control the policy of fsync. Currently supports "posix",
+			"strict", and "nobarrier". In "posix" mode, which is the
+			default, fsync will follow POSIX semantics and does a
+			light operation to improve the filesystem performance.
+			In "strict" mode, fsync will be heavy and behaves in line
+			with xfs, ext4 and btrfs, where xfstest generic/342 will
+			pass, but the performance will regress. "nobarrier" is
+			based on "posix", but doesn't issue a flush command for
+			non-atomic files, like the "nobarrier" mount option.
 test_dummy_encryption	Enable dummy encryption, which provides a fake fscrypt
 			context. The fake fscrypt context is used by xfstests.
...
@@ -26,15 +26,8 @@
 #include <linux/namei.h>
 #include "fscrypt_private.h"
 
-/*
- * Call fscrypt_decrypt_page on every single page, reusing the encryption
- * context.
- */
-static void completion_pages(struct work_struct *work)
+static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
 {
-	struct fscrypt_ctx *ctx =
-		container_of(work, struct fscrypt_ctx, r.work);
-	struct bio *bio = ctx->r.bio;
 	struct bio_vec *bv;
 	int i;
@@ -46,22 +39,38 @@ static void completion_pages(struct work_struct *work)
 		if (ret) {
 			WARN_ON_ONCE(1);
 			SetPageError(page);
-		} else {
+		} else if (done) {
 			SetPageUptodate(page);
 		}
-		unlock_page(page);
+		if (done)
+			unlock_page(page);
 	}
+}
+
+void fscrypt_decrypt_bio(struct bio *bio)
+{
+	__fscrypt_decrypt_bio(bio, false);
+}
+EXPORT_SYMBOL(fscrypt_decrypt_bio);
+
+static void completion_pages(struct work_struct *work)
+{
+	struct fscrypt_ctx *ctx =
+		container_of(work, struct fscrypt_ctx, r.work);
+	struct bio *bio = ctx->r.bio;
+
+	__fscrypt_decrypt_bio(bio, true);
 	fscrypt_release_ctx(ctx);
 	bio_put(bio);
 }
 
-void fscrypt_decrypt_bio_pages(struct fscrypt_ctx *ctx, struct bio *bio)
+void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx, struct bio *bio)
 {
 	INIT_WORK(&ctx->r.work, completion_pages);
 	ctx->r.bio = bio;
-	queue_work(fscrypt_read_workqueue, &ctx->r.work);
+	fscrypt_enqueue_decrypt_work(&ctx->r.work);
 }
-EXPORT_SYMBOL(fscrypt_decrypt_bio_pages);
+EXPORT_SYMBOL(fscrypt_enqueue_decrypt_bio);
 
 void fscrypt_pullback_bio_page(struct page **page, bool restore)
 {
...
@@ -45,12 +45,18 @@ static mempool_t *fscrypt_bounce_page_pool = NULL;
 static LIST_HEAD(fscrypt_free_ctxs);
 static DEFINE_SPINLOCK(fscrypt_ctx_lock);
 
-struct workqueue_struct *fscrypt_read_workqueue;
+static struct workqueue_struct *fscrypt_read_workqueue;
 static DEFINE_MUTEX(fscrypt_init_mutex);
 static struct kmem_cache *fscrypt_ctx_cachep;
 struct kmem_cache *fscrypt_info_cachep;
 
+void fscrypt_enqueue_decrypt_work(struct work_struct *work)
+{
+	queue_work(fscrypt_read_workqueue, work);
+}
+EXPORT_SYMBOL(fscrypt_enqueue_decrypt_work);
+
 /**
  * fscrypt_release_ctx() - Releases an encryption context
  * @ctx: The encryption context to release.
...
@@ -93,7 +93,6 @@ static inline bool fscrypt_valid_enc_modes(u32 contents_mode,
 /* crypto.c */
 extern struct kmem_cache *fscrypt_info_cachep;
 extern int fscrypt_initialize(unsigned int cop_flags);
-extern struct workqueue_struct *fscrypt_read_workqueue;
 extern int fscrypt_do_page_crypto(const struct inode *inode,
 				fscrypt_direction_t rw, u64 lblk_num,
 				struct page *src_page,
...
@@ -77,7 +77,7 @@ static void mpage_end_io(struct bio *bio)
 		if (bio->bi_status) {
 			fscrypt_release_ctx(bio->bi_private);
 		} else {
-			fscrypt_decrypt_bio_pages(bio->bi_private, bio);
+			fscrypt_enqueue_decrypt_bio(bio->bi_private, bio);
 			return;
 		}
 	}
...
@@ -24,7 +24,7 @@
 #include <trace/events/f2fs.h>
 
 static struct kmem_cache *ino_entry_slab;
-struct kmem_cache *inode_entry_slab;
+struct kmem_cache *f2fs_inode_entry_slab;
 
 void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io)
 {
@@ -36,7 +36,7 @@ void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io)
 /*
  * We guarantee no failure on the returned page.
  */
-struct page *grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
+struct page *f2fs_grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
 {
 	struct address_space *mapping = META_MAPPING(sbi);
 	struct page *page = NULL;
@@ -100,24 +100,27 @@ static struct page *__get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index,
 	 * readonly and make sure do not write checkpoint with non-uptodate
 	 * meta page.
 	 */
-	if (unlikely(!PageUptodate(page)))
+	if (unlikely(!PageUptodate(page))) {
+		memset(page_address(page), 0, PAGE_SIZE);
 		f2fs_stop_checkpoint(sbi, false);
+	}
 out:
 	return page;
 }
 
-struct page *get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
+struct page *f2fs_get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
 {
 	return __get_meta_page(sbi, index, true);
 }
 
 /* for POR only */
-struct page *get_tmp_page(struct f2fs_sb_info *sbi, pgoff_t index)
+struct page *f2fs_get_tmp_page(struct f2fs_sb_info *sbi, pgoff_t index)
 {
 	return __get_meta_page(sbi, index, false);
 }
 
-bool is_valid_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr, int type)
+bool f2fs_is_valid_meta_blkaddr(struct f2fs_sb_info *sbi,
+					block_t blkaddr, int type)
 {
 	switch (type) {
 	case META_NAT:
@@ -151,7 +154,7 @@ bool is_valid_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr, int type)
 /*
  * Readahead CP/NAT/SIT/SSA pages
  */
-int ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
+int f2fs_ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
 							int type, bool sync)
 {
 	struct page *page;
@@ -173,7 +176,7 @@ int ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
 	blk_start_plug(&plug);
 	for (; nrpages-- > 0; blkno++) {
 
-		if (!is_valid_blkaddr(sbi, blkno, type))
+		if (!f2fs_is_valid_meta_blkaddr(sbi, blkno, type))
 			goto out;
 
 		switch (type) {
@@ -217,7 +220,7 @@ int ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
 	return blkno - start;
 }
 
-void ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index)
+void f2fs_ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index)
 {
 	struct page *page;
 	bool readahead = false;
@@ -228,7 +231,7 @@ void ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index)
 	f2fs_put_page(page, 0);
 
 	if (readahead)
-		ra_meta_pages(sbi, index, BIO_MAX_PAGES, META_POR, true);
+		f2fs_ra_meta_pages(sbi, index, BIO_MAX_PAGES, META_POR, true);
 }
 
 static int __f2fs_write_meta_page(struct page *page,
@@ -249,7 +252,7 @@ static int __f2fs_write_meta_page(struct page *page,
 	if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
 		goto redirty_out;
 
-	write_meta_page(sbi, page, io_type);
+	f2fs_do_write_meta_page(sbi, page, io_type);
 	dec_page_count(sbi, F2FS_DIRTY_META);
 
 	if (wbc->for_reclaim)
@@ -294,7 +297,7 @@ static int f2fs_write_meta_pages(struct address_space *mapping,
 	trace_f2fs_writepages(mapping->host, wbc, META);
 	diff = nr_pages_to_write(sbi, META, wbc);
-	written = sync_meta_pages(sbi, META, wbc->nr_to_write, FS_META_IO);
+	written = f2fs_sync_meta_pages(sbi, META, wbc->nr_to_write, FS_META_IO);
 	mutex_unlock(&sbi->cp_mutex);
 	wbc->nr_to_write = max((long)0, wbc->nr_to_write - written - diff);
 	return 0;
@@ -305,7 +308,7 @@ static int f2fs_write_meta_pages(struct address_space *mapping,
 	return 0;
 }
 
-long sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
+long f2fs_sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
 				long nr_to_write, enum iostat_type io_type)
 {
 	struct address_space *mapping = META_MAPPING(sbi);
@@ -382,7 +385,7 @@ static int f2fs_set_meta_page_dirty(struct page *page)
 	if (!PageUptodate(page))
 		SetPageUptodate(page);
 	if (!PageDirty(page)) {
-		f2fs_set_page_dirty_nobuffers(page);
+		__set_page_dirty_nobuffers(page);
 		inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_META);
 		SetPagePrivate(page);
 		f2fs_trace_pid(page);
@@ -455,20 +458,20 @@ static void __remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
 	spin_unlock(&im->ino_lock);
 }
 
-void add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
+void f2fs_add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
 {
 	/* add new dirty ino entry into list */
 	__add_ino_entry(sbi, ino, 0, type);
 }
 
-void remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
+void f2fs_remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
 {
 	/* remove dirty ino entry from list */
 	__remove_ino_entry(sbi, ino, type);
 }
 
 /* mode should be APPEND_INO or UPDATE_INO */
-bool exist_written_data(struct f2fs_sb_info *sbi, nid_t ino, int mode)
+bool f2fs_exist_written_data(struct f2fs_sb_info *sbi, nid_t ino, int mode)
 {
 	struct inode_management *im = &sbi->im[mode];
 	struct ino_entry *e;
@@ -479,7 +482,7 @@ bool exist_written_data(struct f2fs_sb_info *sbi, nid_t ino, int mode)
 	return e ? true : false;
 }
 
-void release_ino_entry(struct f2fs_sb_info *sbi, bool all)
+void f2fs_release_ino_entry(struct f2fs_sb_info *sbi, bool all)
 {
 	struct ino_entry *e, *tmp;
 	int i;
@@ -498,13 +501,13 @@ void release_ino_entry(struct f2fs_sb_info *sbi, bool all)
 	}
 }
 
-void set_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
+void f2fs_set_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
 					unsigned int devidx, int type)
 {
 	__add_ino_entry(sbi, ino, devidx, type);
 }
 
-bool is_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
+bool f2fs_is_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
 					unsigned int devidx, int type)
 {
 	struct inode_management *im = &sbi->im[type];
@@ -519,7 +522,7 @@ bool is_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
 	return is_dirty;
 }
 
-int acquire_orphan_inode(struct f2fs_sb_info *sbi)
+int f2fs_acquire_orphan_inode(struct f2fs_sb_info *sbi)
 {
 	struct inode_management *im = &sbi->im[ORPHAN_INO];
 	int err = 0;
@@ -542,7 +545,7 @@ int acquire_orphan_inode(struct f2fs_sb_info *sbi)
 	return err;
 }
 
-void release_orphan_inode(struct f2fs_sb_info *sbi)
+void f2fs_release_orphan_inode(struct f2fs_sb_info *sbi)
 {
 	struct inode_management *im = &sbi->im[ORPHAN_INO];
@@ -552,14 +555,14 @@ void release_orphan_inode(struct f2fs_sb_info *sbi)
 	spin_unlock(&im->ino_lock);
 }
-void add_orphan_inode(struct inode *inode)
+void f2fs_add_orphan_inode(struct inode *inode)
 {
 	/* add new orphan ino entry into list */
 	__add_ino_entry(F2FS_I_SB(inode), inode->i_ino, 0, ORPHAN_INO);
-	update_inode_page(inode);
+	f2fs_update_inode_page(inode);
 }
 
-void remove_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
+void f2fs_remove_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
 {
 	/* remove orphan entry from orphan list */
 	__remove_ino_entry(sbi, ino, ORPHAN_INO);
@@ -569,7 +572,7 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
 {
 	struct inode *inode;
 	struct node_info ni;
-	int err = acquire_orphan_inode(sbi);
+	int err = f2fs_acquire_orphan_inode(sbi);
 
 	if (err)
 		goto err_out;
@@ -587,16 +590,17 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
 	}
 
 	err = dquot_initialize(inode);
-	if (err)
+	if (err) {
+		iput(inode);
 		goto err_out;
+	}
 
-	dquot_initialize(inode);
 	clear_nlink(inode);
 
 	/* truncate all the data during iput */
 	iput(inode);
 
-	get_node_info(sbi, ino, &ni);
+	f2fs_get_node_info(sbi, ino, &ni);
 
 	/* ENOMEM was fully retried in f2fs_evict_inode. */
 	if (ni.blk_addr != NULL_ADDR) {
@@ -614,7 +618,7 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
 	return err;
 }
-int recover_orphan_inodes(struct f2fs_sb_info *sbi)
+int f2fs_recover_orphan_inodes(struct f2fs_sb_info *sbi)
 {
 	block_t start_blk, orphan_blocks, i, j;
 	unsigned int s_flags = sbi->sb->s_flags;
@@ -642,10 +646,10 @@ int recover_orphan_inodes(struct f2fs_sb_info *sbi)
 	start_blk = __start_cp_addr(sbi) + 1 + __cp_payload(sbi);
 	orphan_blocks = __start_sum_addr(sbi) - 1 - __cp_payload(sbi);
 
-	ra_meta_pages(sbi, start_blk, orphan_blocks, META_CP, true);
+	f2fs_ra_meta_pages(sbi, start_blk, orphan_blocks, META_CP, true);
 
 	for (i = 0; i < orphan_blocks; i++) {
-		struct page *page = get_meta_page(sbi, start_blk + i);
+		struct page *page = f2fs_get_meta_page(sbi, start_blk + i);
 		struct f2fs_orphan_block *orphan_blk;
 
 		orphan_blk = (struct f2fs_orphan_block *)page_address(page);
@@ -695,7 +699,7 @@ static void write_orphan_inodes(struct f2fs_sb_info *sbi, block_t start_blk)
 	/* loop for each orphan inode entry and write them in Jornal block */
 	list_for_each_entry(orphan, head, list) {
 		if (!page) {
-			page = grab_meta_page(sbi, start_blk++);
+			page = f2fs_grab_meta_page(sbi, start_blk++);
 			orphan_blk =
 				(struct f2fs_orphan_block *)page_address(page);
 			memset(orphan_blk, 0, sizeof(*orphan_blk));
@@ -737,7 +741,7 @@ static int get_checkpoint_version(struct f2fs_sb_info *sbi, block_t cp_addr,
 	size_t crc_offset = 0;
 	__u32 crc = 0;
 
-	*cp_page = get_meta_page(sbi, cp_addr);
+	*cp_page = f2fs_get_meta_page(sbi, cp_addr);
 	*cp_block = (struct f2fs_checkpoint *)page_address(*cp_page);
 
 	crc_offset = le32_to_cpu((*cp_block)->checksum_offset);
@@ -790,7 +794,7 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
 	return NULL;
 }
 
-int get_valid_checkpoint(struct f2fs_sb_info *sbi)
+int f2fs_get_valid_checkpoint(struct f2fs_sb_info *sbi)
 {
 	struct f2fs_checkpoint *cp_block;
 	struct f2fs_super_block *fsb = sbi->raw_super;
@@ -834,7 +838,7 @@ int get_valid_checkpoint(struct f2fs_sb_info *sbi)
 	memcpy(sbi->ckpt, cp_block, blk_size);
 
 	/* Sanity checking of checkpoint */
-	if (sanity_check_ckpt(sbi))
+	if (f2fs_sanity_check_ckpt(sbi))
 		goto free_fail_no_cp;
 
 	if (cur_page == cp1)
@@ -853,7 +857,7 @@ int get_valid_checkpoint(struct f2fs_sb_info *sbi)
 		void *sit_bitmap_ptr;
 		unsigned char *ckpt = (unsigned char *)sbi->ckpt;
 
-		cur_page = get_meta_page(sbi, cp_blk_no + i);
+		cur_page = f2fs_get_meta_page(sbi, cp_blk_no + i);
 		sit_bitmap_ptr = page_address(cur_page);
 		memcpy(ckpt + i * blk_size, sit_bitmap_ptr, blk_size);
 		f2fs_put_page(cur_page, 1);
@@ -898,7 +902,7 @@ static void __remove_dirty_inode(struct inode *inode, enum inode_type type)
 	stat_dec_dirty_inode(F2FS_I_SB(inode), type);
 }
 
-void update_dirty_page(struct inode *inode, struct page *page)
+void f2fs_update_dirty_page(struct inode *inode, struct page *page)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	enum inode_type type = S_ISDIR(inode->i_mode) ? DIR_INODE : FILE_INODE;
@@ -917,7 +921,7 @@ void update_dirty_page(struct inode *inode, struct page *page)
 	f2fs_trace_pid(page);
 }
 
-void remove_dirty_inode(struct inode *inode)
+void f2fs_remove_dirty_inode(struct inode *inode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	enum inode_type type = S_ISDIR(inode->i_mode) ? DIR_INODE : FILE_INODE;
@@ -934,7 +938,7 @@ void remove_dirty_inode(struct inode *inode)
 	spin_unlock(&sbi->inode_lock[type]);
 }
 
-int sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type)
+int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type)
 {
 	struct list_head *head;
 	struct inode *inode;
@@ -1017,7 +1021,7 @@ int f2fs_sync_inode_meta(struct f2fs_sb_info *sbi)
 			/* it's on eviction */
 			if (is_inode_flag_set(inode, FI_DIRTY_INODE))
-				update_inode_page(inode);
+				f2fs_update_inode_page(inode);
 			iput(inode);
 		}
 	}
@@ -1057,7 +1061,7 @@ static int block_operations(struct f2fs_sb_info *sbi)
 	/* write all the dirty dentry pages */
 	if (get_pages(sbi, F2FS_DIRTY_DENTS)) {
 		f2fs_unlock_all(sbi);
-		err = sync_dirty_inodes(sbi, DIR_INODE);
+		err = f2fs_sync_dirty_inodes(sbi, DIR_INODE);
 		if (err)
 			goto out;
 		cond_resched();
@@ -1085,7 +1089,9 @@ static int block_operations(struct f2fs_sb_info *sbi)
 	if (get_pages(sbi, F2FS_DIRTY_NODES)) {
 		up_write(&sbi->node_write);
-		err = sync_node_pages(sbi, &wbc, false, FS_CP_NODE_IO);
+		atomic_inc(&sbi->wb_sync_req[NODE]);
+		err = f2fs_sync_node_pages(sbi, &wbc, false, FS_CP_NODE_IO);
+		atomic_dec(&sbi->wb_sync_req[NODE]);
 		if (err) {
 			up_write(&sbi->node_change);
 			f2fs_unlock_all(sbi);
@@ -1179,10 +1185,10 @@ static void commit_checkpoint(struct f2fs_sb_info *sbi,
 	/*
 	 * pagevec_lookup_tag and lock_page again will take
-	 * some extra time. Therefore, update_meta_pages and
-	 * sync_meta_pages are combined in this function.
+	 * some extra time. Therefore, f2fs_update_meta_pages and
+	 * f2fs_sync_meta_pages are combined in this function.
 	 */
-	struct page *page = grab_meta_page(sbi, blk_addr);
+	struct page *page = f2fs_grab_meta_page(sbi, blk_addr);
 	int err;
 
 	memcpy(page_address(page), src, PAGE_SIZE);
@@ -1220,7 +1226,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	/* Flush all the NAT/SIT pages */
 	while (get_pages(sbi, F2FS_DIRTY_META)) {
-		sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
+		f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
 		if (unlikely(f2fs_cp_error(sbi)))
 			return -EIO;
 	}
@@ -1229,7 +1235,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	 * modify checkpoint
 	 * version number is already updated
 	 */
-	ckpt->elapsed_time = cpu_to_le64(get_mtime(sbi));
+	ckpt->elapsed_time = cpu_to_le64(get_mtime(sbi, true));
 	ckpt->free_segment_count = cpu_to_le32(free_segments(sbi));
 	for (i = 0; i < NR_CURSEG_NODE_TYPE; i++) {
 		ckpt->cur_node_segno[i] =
@@ -1249,7 +1255,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	}
 
 	/* 2 cp + n data seg summary + orphan inode blocks */
-	data_sum_blocks = npages_for_summary_flush(sbi, false);
+	data_sum_blocks = f2fs_npages_for_summary_flush(sbi, false);
 	spin_lock_irqsave(&sbi->cp_lock, flags);
 	if (data_sum_blocks < NR_CURSEG_DATA_TYPE)
 		__set_ckpt_flags(ckpt, CP_COMPACT_SUM_FLAG);
@@ -1294,22 +1300,23 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 		blk = start_blk + sbi->blocks_per_seg - nm_i->nat_bits_blocks;
 		for (i = 0; i < nm_i->nat_bits_blocks; i++)
-			update_meta_page(sbi, nm_i->nat_bits +
+			f2fs_update_meta_page(sbi, nm_i->nat_bits +
 					(i << F2FS_BLKSIZE_BITS), blk + i);
 
 		/* Flush all the NAT BITS pages */
 		while (get_pages(sbi, F2FS_DIRTY_META)) {
-			sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
+			f2fs_sync_meta_pages(sbi, META, LONG_MAX,
+							FS_CP_META_IO);
 			if (unlikely(f2fs_cp_error(sbi)))
 				return -EIO;
 		}
 	}
 
 	/* write out checkpoint buffer at block 0 */
-	update_meta_page(sbi, ckpt, start_blk++);
+	f2fs_update_meta_page(sbi, ckpt, start_blk++);
 
 	for (i = 1; i < 1 + cp_payload_blks; i++)
-		update_meta_page(sbi, (char *)ckpt + i * F2FS_BLKSIZE,
+		f2fs_update_meta_page(sbi, (char *)ckpt + i * F2FS_BLKSIZE,
 							start_blk++);
 
 	if (orphan_num) {
@@ -1317,7 +1324,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 		start_blk += orphan_blocks;
 	}
 
-	write_data_summaries(sbi, start_blk);
+	f2fs_write_data_summaries(sbi, start_blk);
 	start_blk += data_sum_blocks;
 
 	/* Record write statistics in the hot node summary */
@@ -1328,7 +1335,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	seg_i->journal->info.kbytes_written = cpu_to_le64(kbytes_written);
 
 	if (__remain_node_summaries(cpc->reason)) {
-		write_node_summaries(sbi, start_blk);
+		f2fs_write_node_summaries(sbi, start_blk);
 		start_blk += NR_CURSEG_NODE_TYPE;
 	}
@@ -1337,7 +1344,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
 
 	/* Here, we have one bio having CP pack except cp pack 2 page */
-	sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
+	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
 
 	/* wait for previous submitted meta pages writeback */
wait_on_all_pages_writeback(sbi); wait_on_all_pages_writeback(sbi);
...@@ -1354,7 +1361,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) ...@@ -1354,7 +1361,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
commit_checkpoint(sbi, ckpt, start_blk); commit_checkpoint(sbi, ckpt, start_blk);
wait_on_all_pages_writeback(sbi); wait_on_all_pages_writeback(sbi);
release_ino_entry(sbi, false); f2fs_release_ino_entry(sbi, false);
if (unlikely(f2fs_cp_error(sbi))) if (unlikely(f2fs_cp_error(sbi)))
return -EIO; return -EIO;
...@@ -1379,7 +1386,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) ...@@ -1379,7 +1386,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
/* /*
* We guarantee that this checkpoint procedure will not fail. * We guarantee that this checkpoint procedure will not fail.
*/ */
int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
{ {
struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi); struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
unsigned long long ckpt_ver; unsigned long long ckpt_ver;
...@@ -1412,7 +1419,7 @@ int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) ...@@ -1412,7 +1419,7 @@ int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
/* this is the case of multiple fstrims without any changes */ /* this is the case of multiple fstrims without any changes */
if (cpc->reason & CP_DISCARD) { if (cpc->reason & CP_DISCARD) {
if (!exist_trim_candidates(sbi, cpc)) { if (!f2fs_exist_trim_candidates(sbi, cpc)) {
unblock_operations(sbi); unblock_operations(sbi);
goto out; goto out;
} }
...@@ -1420,8 +1427,8 @@ int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) ...@@ -1420,8 +1427,8 @@ int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
if (NM_I(sbi)->dirty_nat_cnt == 0 && if (NM_I(sbi)->dirty_nat_cnt == 0 &&
SIT_I(sbi)->dirty_sentries == 0 && SIT_I(sbi)->dirty_sentries == 0 &&
prefree_segments(sbi) == 0) { prefree_segments(sbi) == 0) {
flush_sit_entries(sbi, cpc); f2fs_flush_sit_entries(sbi, cpc);
clear_prefree_segments(sbi, cpc); f2fs_clear_prefree_segments(sbi, cpc);
unblock_operations(sbi); unblock_operations(sbi);
goto out; goto out;
} }
...@@ -1436,15 +1443,15 @@ int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) ...@@ -1436,15 +1443,15 @@ int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
ckpt->checkpoint_ver = cpu_to_le64(++ckpt_ver); ckpt->checkpoint_ver = cpu_to_le64(++ckpt_ver);
/* write cached NAT/SIT entries to NAT/SIT area */ /* write cached NAT/SIT entries to NAT/SIT area */
flush_nat_entries(sbi, cpc); f2fs_flush_nat_entries(sbi, cpc);
flush_sit_entries(sbi, cpc); f2fs_flush_sit_entries(sbi, cpc);
/* unlock all the fs_lock[] in do_checkpoint() */ /* unlock all the fs_lock[] in do_checkpoint() */
err = do_checkpoint(sbi, cpc); err = do_checkpoint(sbi, cpc);
if (err) if (err)
release_discard_addrs(sbi); f2fs_release_discard_addrs(sbi);
else else
clear_prefree_segments(sbi, cpc); f2fs_clear_prefree_segments(sbi, cpc);
unblock_operations(sbi); unblock_operations(sbi);
stat_inc_cp_count(sbi->stat_info); stat_inc_cp_count(sbi->stat_info);
...@@ -1461,7 +1468,7 @@ int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) ...@@ -1461,7 +1468,7 @@ int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
return err; return err;
} }
void init_ino_entry_info(struct f2fs_sb_info *sbi) void f2fs_init_ino_entry_info(struct f2fs_sb_info *sbi)
{ {
int i; int i;
...@@ -1479,23 +1486,23 @@ void init_ino_entry_info(struct f2fs_sb_info *sbi) ...@@ -1479,23 +1486,23 @@ void init_ino_entry_info(struct f2fs_sb_info *sbi)
F2FS_ORPHANS_PER_BLOCK; F2FS_ORPHANS_PER_BLOCK;
} }
int __init create_checkpoint_caches(void) int __init f2fs_create_checkpoint_caches(void)
{ {
ino_entry_slab = f2fs_kmem_cache_create("f2fs_ino_entry", ino_entry_slab = f2fs_kmem_cache_create("f2fs_ino_entry",
sizeof(struct ino_entry)); sizeof(struct ino_entry));
if (!ino_entry_slab) if (!ino_entry_slab)
return -ENOMEM; return -ENOMEM;
inode_entry_slab = f2fs_kmem_cache_create("f2fs_inode_entry", f2fs_inode_entry_slab = f2fs_kmem_cache_create("f2fs_inode_entry",
sizeof(struct inode_entry)); sizeof(struct inode_entry));
if (!inode_entry_slab) { if (!f2fs_inode_entry_slab) {
kmem_cache_destroy(ino_entry_slab); kmem_cache_destroy(ino_entry_slab);
return -ENOMEM; return -ENOMEM;
} }
return 0; return 0;
} }
void destroy_checkpoint_caches(void) void f2fs_destroy_checkpoint_caches(void)
{ {
kmem_cache_destroy(ino_entry_slab); kmem_cache_destroy(ino_entry_slab);
kmem_cache_destroy(inode_entry_slab); kmem_cache_destroy(f2fs_inode_entry_slab);
} }
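The cache create/destroy pairing above follows the usual unwind pattern: if the second slab allocation fails, the first is torn down before returning `-ENOMEM`. A minimal userspace model of that pattern (the `_model` names are illustrative, and plain `malloc` stands in for `f2fs_kmem_cache_create`):

```c
#include <stdlib.h>
#include <assert.h>

/* Userspace model of the two-cache init/teardown pairing above:
 * if the second allocation fails, the first must be released before
 * returning an error, so a failed init leaves nothing behind. */
static void *ino_entry_slab;
static void *inode_entry_slab;

static int create_checkpoint_caches_model(int fail_second)
{
	ino_entry_slab = malloc(64);
	if (!ino_entry_slab)
		return -1;
	inode_entry_slab = fail_second ? NULL : malloc(64);
	if (!inode_entry_slab) {
		free(ino_entry_slab);	/* unwind the first cache */
		ino_entry_slab = NULL;
		return -1;
	}
	return 0;
}

static void destroy_checkpoint_caches_model(void)
{
	free(ino_entry_slab);
	free(inode_entry_slab);
	ino_entry_slab = inode_entry_slab = NULL;
}
```

The same shape explains why `f2fs_destroy_checkpoint_caches()` can unconditionally destroy both caches: a successful init guarantees both exist.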
@@ -19,8 +19,6 @@
 #include <linux/bio.h>
 #include <linux/prefetch.h>
 #include <linux/uio.h>
-#include <linux/mm.h>
-#include <linux/memcontrol.h>
 #include <linux/cleancache.h>
 #include <linux/sched/signal.h>
@@ -30,6 +28,11 @@
 #include "trace.h"
 #include <trace/events/f2fs.h>

+#define NUM_PREALLOC_POST_READ_CTXS	128
+
+static struct kmem_cache *bio_post_read_ctx_cache;
+static mempool_t *bio_post_read_ctx_pool;
+
 static bool __is_cp_guaranteed(struct page *page)
 {
 	struct address_space *mapping = page->mapping;
@@ -45,16 +48,84 @@ static bool __is_cp_guaranteed(struct page *page)
 	if (inode->i_ino == F2FS_META_INO(sbi) ||
 			inode->i_ino == F2FS_NODE_INO(sbi) ||
 			S_ISDIR(inode->i_mode) ||
+			(S_ISREG(inode->i_mode) &&
+			is_inode_flag_set(inode, FI_ATOMIC_FILE)) ||
 			is_cold_data(page))
 		return true;
 	return false;
 }

+/* postprocessing steps for read bios */
+enum bio_post_read_step {
+	STEP_INITIAL = 0,
+	STEP_DECRYPT,
+};
+
+struct bio_post_read_ctx {
+	struct bio *bio;
+	struct work_struct work;
+	unsigned int cur_step;
+	unsigned int enabled_steps;
+};
+
+static void __read_end_io(struct bio *bio)
-static void f2fs_read_end_io(struct bio *bio)
 {
-	struct bio_vec *bvec;
+	struct page *page;
+	struct bio_vec *bv;
 	int i;

+	bio_for_each_segment_all(bv, bio, i) {
+		page = bv->bv_page;
+
+		/* PG_error was set if any post_read step failed */
+		if (bio->bi_status || PageError(page)) {
+			ClearPageUptodate(page);
+			SetPageError(page);
+		} else {
+			SetPageUptodate(page);
+		}
+		unlock_page(page);
+	}
+	if (bio->bi_private)
+		mempool_free(bio->bi_private, bio_post_read_ctx_pool);
+	bio_put(bio);
+}
+
+static void bio_post_read_processing(struct bio_post_read_ctx *ctx);
+
+static void decrypt_work(struct work_struct *work)
+{
+	struct bio_post_read_ctx *ctx =
+		container_of(work, struct bio_post_read_ctx, work);
+
+	fscrypt_decrypt_bio(ctx->bio);
+
+	bio_post_read_processing(ctx);
+}
+
+static void bio_post_read_processing(struct bio_post_read_ctx *ctx)
+{
+	switch (++ctx->cur_step) {
+	case STEP_DECRYPT:
+		if (ctx->enabled_steps & (1 << STEP_DECRYPT)) {
+			INIT_WORK(&ctx->work, decrypt_work);
+			fscrypt_enqueue_decrypt_work(&ctx->work);
+			return;
+		}
+		ctx->cur_step++;
+		/* fall-through */
+	default:
+		__read_end_io(ctx->bio);
+	}
+}
+
+static bool f2fs_bio_post_read_required(struct bio *bio)
+{
+	return bio->bi_private && !bio->bi_status;
+}
+
+static void f2fs_read_end_io(struct bio *bio)
+{
 #ifdef CONFIG_F2FS_FAULT_INJECTION
 	if (time_to_inject(F2FS_P_SB(bio_first_page_all(bio)), FAULT_IO)) {
 		f2fs_show_injection_info(FAULT_IO);
@@ -62,28 +133,15 @@ static void f2fs_read_end_io(struct bio *bio)
 	}
 #endif

-	if (f2fs_bio_encrypted(bio)) {
-		if (bio->bi_status) {
-			fscrypt_release_ctx(bio->bi_private);
-		} else {
-			fscrypt_decrypt_bio_pages(bio->bi_private, bio);
-			return;
-		}
-	}
-
-	bio_for_each_segment_all(bvec, bio, i) {
-		struct page *page = bvec->bv_page;
+	if (f2fs_bio_post_read_required(bio)) {
+		struct bio_post_read_ctx *ctx = bio->bi_private;

-		if (!bio->bi_status) {
-			if (!PageUptodate(page))
-				SetPageUptodate(page);
-		} else {
-			ClearPageUptodate(page);
-			SetPageError(page);
-		}
-		unlock_page(page);
+		ctx->cur_step = STEP_INITIAL;
+		bio_post_read_processing(ctx);
+		return;
 	}
-	bio_put(bio);
+
+	__read_end_io(bio);
 }
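The post-read pipeline introduced above is a small state machine: `cur_step` is advanced past every step whose bit is not set in `enabled_steps`, and processing stops at the first enabled step (queued as work) or completes the bio. That stepping logic can be modeled in isolation; the `next_enabled_step` helper and the `STEP_DONE` sentinel below are illustrative, not part of f2fs:

```c
#include <assert.h>

/* Userspace model of the post-read step machine above: advance past
 * every step whose bit is clear in enabled_steps, stop at the first
 * enabled step, and report STEP_DONE when no step remains. */
enum step { STEP_INITIAL = 0, STEP_DECRYPT, STEP_DONE };

static enum step next_enabled_step(enum step cur, unsigned int enabled_steps)
{
	while (++cur < STEP_DONE) {
		if (enabled_steps & (1U << cur))
			return cur;	/* this step must run next */
	}
	return STEP_DONE;	/* no more work: complete the bio */
}
```

This is why an unencrypted read with no context skips straight to `__read_end_io()`, while an encrypted one detours through the decrypt workqueue exactly once.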
 static void f2fs_write_end_io(struct bio *bio)
@@ -189,7 +247,7 @@ static struct bio *__bio_alloc(struct f2fs_sb_info *sbi, block_t blk_addr,
 	} else {
 		bio->bi_end_io = f2fs_write_end_io;
 		bio->bi_private = sbi;
-		bio->bi_write_hint = io_type_to_rw_hint(sbi, type, temp);
+		bio->bi_write_hint = f2fs_io_type_to_rw_hint(sbi, type, temp);
 	}
 	if (wbc)
 		wbc_init_bio(wbc, bio);
@@ -404,13 +462,12 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
 	return 0;
 }

-int f2fs_submit_page_write(struct f2fs_io_info *fio)
+void f2fs_submit_page_write(struct f2fs_io_info *fio)
 {
 	struct f2fs_sb_info *sbi = fio->sbi;
 	enum page_type btype = PAGE_TYPE_OF_BIO(fio->type);
 	struct f2fs_bio_info *io = sbi->write_io[btype] + fio->temp;
 	struct page *bio_page;
-	int err = 0;

 	f2fs_bug_on(sbi, is_read_io(fio->op));
@@ -420,7 +477,7 @@ int f2fs_submit_page_write(struct f2fs_io_info *fio)
 		spin_lock(&io->io_lock);
 		if (list_empty(&io->io_list)) {
 			spin_unlock(&io->io_lock);
-			goto out_fail;
+			goto out;
 		}
 		fio = list_first_entry(&io->io_list,
 						struct f2fs_io_info, list);
@@ -428,7 +485,7 @@ int f2fs_submit_page_write(struct f2fs_io_info *fio)
 		spin_unlock(&io->io_lock);
 	}

-	if (fio->old_blkaddr != NEW_ADDR)
+	if (is_valid_blkaddr(fio->old_blkaddr))
 		verify_block_addr(fio, fio->old_blkaddr);
 	verify_block_addr(fio, fio->new_blkaddr);
@@ -447,9 +504,9 @@ int f2fs_submit_page_write(struct f2fs_io_info *fio)
 	if (io->bio == NULL) {
 		if ((fio->type == DATA || fio->type == NODE) &&
 				fio->new_blkaddr & F2FS_IO_SIZE_MASK(sbi)) {
-			err = -EAGAIN;
 			dec_page_count(sbi, WB_DATA_TYPE(bio_page));
-			goto out_fail;
+			fio->retry = true;
+			goto skip;
 		}
 		io->bio = __bio_alloc(sbi, fio->new_blkaddr, fio->io_wbc,
 						BIO_MAX_PAGES, false,
@@ -469,41 +526,44 @@ int f2fs_submit_page_write(struct f2fs_io_info *fio)
 	f2fs_trace_ios(fio, 0);

 	trace_f2fs_submit_page_write(fio->page, fio);
+skip:
 	if (fio->in_list)
 		goto next;
-out_fail:
+out:
 	up_write(&io->io_rwsem);
-	return err;
 }

 static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
 							unsigned nr_pages)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
-	struct fscrypt_ctx *ctx = NULL;
 	struct bio *bio;
+	struct bio_post_read_ctx *ctx;
+	unsigned int post_read_steps = 0;

-	if (f2fs_encrypted_file(inode)) {
-		ctx = fscrypt_get_ctx(inode, GFP_NOFS);
-		if (IS_ERR(ctx))
-			return ERR_CAST(ctx);
-
-		/* wait the page to be moved by cleaning */
-		f2fs_wait_on_block_writeback(sbi, blkaddr);
-	}
-
 	bio = f2fs_bio_alloc(sbi, min_t(int, nr_pages, BIO_MAX_PAGES), false);
-	if (!bio) {
-		if (ctx)
-			fscrypt_release_ctx(ctx);
+	if (!bio)
 		return ERR_PTR(-ENOMEM);
-	}
 	f2fs_target_device(sbi, blkaddr, bio);
 	bio->bi_end_io = f2fs_read_end_io;
-	bio->bi_private = ctx;
 	bio_set_op_attrs(bio, REQ_OP_READ, 0);

+	if (f2fs_encrypted_file(inode))
+		post_read_steps |= 1 << STEP_DECRYPT;
+	if (post_read_steps) {
+		ctx = mempool_alloc(bio_post_read_ctx_pool, GFP_NOFS);
+		if (!ctx) {
+			bio_put(bio);
+			return ERR_PTR(-ENOMEM);
+		}
+		ctx->bio = bio;
+		ctx->enabled_steps = post_read_steps;
+		bio->bi_private = ctx;
+
+		/* wait the page to be moved by cleaning */
+		f2fs_wait_on_block_writeback(sbi, blkaddr);
+	}
+
 	return bio;
 }
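The `f2fs_submit_page_write()` signature change above (`int` to `void`, with `fio->retry` replacing the `-EAGAIN` return) matters for the fsync fix mentioned in the merge message: an error return from the submit path could propagate into the mapping's error state. A minimal model of the retry-flag pattern, with an illustrative struct and a made-up 8-block alignment rule standing in for `F2FS_IO_SIZE_MASK`:

```c
#include <stdbool.h>
#include <assert.h>

/* Model of the retry-flag pattern above: instead of returning -EAGAIN
 * (which callers may surface as an IO error), the submit path records
 * "could not submit now, resubmit later" in the request itself. */
struct io_req {
	unsigned int new_blkaddr;
	bool retry;
};

#define IO_SIZE_MASK 0x7	/* pretend 8-block write unit */

static void submit_page_write_model(struct io_req *req)
{
	req->retry = false;
	if (req->new_blkaddr & IO_SIZE_MASK) {
		/* not aligned to the IO unit: caller must resubmit */
		req->retry = true;
		return;
	}
	/* ... would build and submit the bio here ... */
}
```

The caller then loops on `retry` instead of treating the condition as a failure, so no spurious error ever reaches `fsync(2)`.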
@@ -544,7 +604,7 @@ static void __set_data_blkaddr(struct dnode_of_data *dn)
  *  ->node_page
  *    update block addresses in the node page
  */
-void set_data_blkaddr(struct dnode_of_data *dn)
+void f2fs_set_data_blkaddr(struct dnode_of_data *dn)
 {
 	f2fs_wait_on_page_writeback(dn->node_page, NODE, true);
 	__set_data_blkaddr(dn);
@@ -555,12 +615,12 @@ void set_data_blkaddr(struct dnode_of_data *dn)
 void f2fs_update_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr)
 {
 	dn->data_blkaddr = blkaddr;
-	set_data_blkaddr(dn);
+	f2fs_set_data_blkaddr(dn);
 	f2fs_update_extent_cache(dn);
 }

 /* dn->ofs_in_node will be returned with up-to-date last block pointer */
-int reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count)
+int f2fs_reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
 	int err;
@@ -594,12 +654,12 @@ int reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count)
 }

 /* Should keep dn->ofs_in_node unchanged */
-int reserve_new_block(struct dnode_of_data *dn)
+int f2fs_reserve_new_block(struct dnode_of_data *dn)
 {
 	unsigned int ofs_in_node = dn->ofs_in_node;
 	int ret;

-	ret = reserve_new_blocks(dn, 1);
+	ret = f2fs_reserve_new_blocks(dn, 1);
 	dn->ofs_in_node = ofs_in_node;
 	return ret;
 }
@@ -609,12 +669,12 @@ int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index)
 	bool need_put = dn->inode_page ? false : true;
 	int err;

-	err = get_dnode_of_data(dn, index, ALLOC_NODE);
+	err = f2fs_get_dnode_of_data(dn, index, ALLOC_NODE);
 	if (err)
 		return err;

 	if (dn->data_blkaddr == NULL_ADDR)
-		err = reserve_new_block(dn);
+		err = f2fs_reserve_new_block(dn);
 	if (err || need_put)
 		f2fs_put_dnode(dn);
 	return err;
@@ -633,7 +693,7 @@ int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index)
 	return f2fs_reserve_block(dn, index);
 }

-struct page *get_read_data_page(struct inode *inode, pgoff_t index,
+struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index,
 						int op_flags, bool for_write)
 {
 	struct address_space *mapping = inode->i_mapping;
@@ -652,7 +712,7 @@ struct page *get_read_data_page(struct inode *inode, pgoff_t index,
 	}

 	set_new_dnode(&dn, inode, NULL, NULL, 0);
-	err = get_dnode_of_data(&dn, index, LOOKUP_NODE);
+	err = f2fs_get_dnode_of_data(&dn, index, LOOKUP_NODE);
 	if (err)
 		goto put_err;
 	f2fs_put_dnode(&dn);
@@ -671,7 +731,8 @@ struct page *get_read_data_page(struct inode *inode, pgoff_t index,
 	 * A new dentry page is allocated but not able to be written, since its
 	 * new inode page couldn't be allocated due to -ENOSPC.
 	 * In such the case, its blkaddr can be remained as NEW_ADDR.
-	 * see, f2fs_add_link -> get_new_data_page -> init_inode_metadata.
+	 * see, f2fs_add_link -> f2fs_get_new_data_page ->
+	 * f2fs_init_inode_metadata.
 	 */
 	if (dn.data_blkaddr == NEW_ADDR) {
 		zero_user_segment(page, 0, PAGE_SIZE);
@@ -691,7 +752,7 @@ struct page *get_read_data_page(struct inode *inode, pgoff_t index,
 	return ERR_PTR(err);
 }

-struct page *find_data_page(struct inode *inode, pgoff_t index)
+struct page *f2fs_find_data_page(struct inode *inode, pgoff_t index)
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct page *page;
@@ -701,7 +762,7 @@ struct page *find_data_page(struct inode *inode, pgoff_t index)
 		return page;
 	f2fs_put_page(page, 0);

-	page = get_read_data_page(inode, index, 0, false);
+	page = f2fs_get_read_data_page(inode, index, 0, false);
 	if (IS_ERR(page))
 		return page;
@@ -721,13 +782,13 @@ struct page *find_data_page(struct inode *inode, pgoff_t index)
  * Because, the callers, functions in dir.c and GC, should be able to know
  * whether this page exists or not.
  */
-struct page *get_lock_data_page(struct inode *inode, pgoff_t index,
+struct page *f2fs_get_lock_data_page(struct inode *inode, pgoff_t index,
 							bool for_write)
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct page *page;
 repeat:
-	page = get_read_data_page(inode, index, 0, for_write);
+	page = f2fs_get_read_data_page(inode, index, 0, for_write);
 	if (IS_ERR(page))
 		return page;
@@ -753,7 +814,7 @@ struct page *get_lock_data_page(struct inode *inode, pgoff_t index,
  * Note that, ipage is set only by make_empty_dir, and if any error occur,
  * ipage should be released by this function.
  */
-struct page *get_new_data_page(struct inode *inode,
+struct page *f2fs_get_new_data_page(struct inode *inode,
 		struct page *ipage, pgoff_t index, bool new_i_size)
 {
 	struct address_space *mapping = inode->i_mapping;
@@ -792,7 +853,7 @@ struct page *get_new_data_page(struct inode *inode,
 		/* if ipage exists, blkaddr should be NEW_ADDR */
 		f2fs_bug_on(F2FS_I_SB(inode), ipage);

-		page = get_lock_data_page(inode, index, true);
+		page = f2fs_get_lock_data_page(inode, index, true);
 		if (IS_ERR(page))
 			return page;
 	}
@@ -824,15 +885,15 @@ static int __allocate_data_block(struct dnode_of_data *dn, int seg_type)
 		return err;

 alloc:
-	get_node_info(sbi, dn->nid, &ni);
+	f2fs_get_node_info(sbi, dn->nid, &ni);
 	set_summary(&sum, dn->nid, dn->ofs_in_node, ni.version);

-	allocate_data_block(sbi, NULL, dn->data_blkaddr, &dn->data_blkaddr,
+	f2fs_allocate_data_block(sbi, NULL, dn->data_blkaddr, &dn->data_blkaddr,
 					&sum, seg_type, NULL, false);
-	set_data_blkaddr(dn);
+	f2fs_set_data_blkaddr(dn);

 	/* update i_size */
-	fofs = start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) +
+	fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) +
 							dn->ofs_in_node;
 	if (i_size_read(dn->inode) < ((loff_t)(fofs + 1) << PAGE_SHIFT))
 		f2fs_i_size_write(dn->inode,
@@ -870,7 +931,7 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
 	map.m_seg_type = NO_CHECK_TYPE;

 	if (direct_io) {
-		map.m_seg_type = rw_hint_to_seg_type(iocb->ki_hint);
+		map.m_seg_type = f2fs_rw_hint_to_seg_type(iocb->ki_hint);
 		flag = f2fs_force_buffered_io(inode, WRITE) ?
 				F2FS_GET_BLOCK_PRE_AIO :
 				F2FS_GET_BLOCK_PRE_DIO;
@@ -960,7 +1021,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,

 	/* When reading holes, we need its node page */
 	set_new_dnode(&dn, inode, NULL, NULL, 0);
-	err = get_dnode_of_data(&dn, pgofs, mode);
+	err = f2fs_get_dnode_of_data(&dn, pgofs, mode);
 	if (err) {
 		if (flag == F2FS_GET_BLOCK_BMAP)
 			map->m_pblk = 0;
@@ -968,10 +1029,10 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
 			err = 0;
 			if (map->m_next_pgofs)
 				*map->m_next_pgofs =
-					get_next_page_offset(&dn, pgofs);
+					f2fs_get_next_page_offset(&dn, pgofs);
 			if (map->m_next_extent)
 				*map->m_next_extent =
-					get_next_page_offset(&dn, pgofs);
+					f2fs_get_next_page_offset(&dn, pgofs);
 		}
 		goto unlock_out;
 	}
@@ -984,7 +1045,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
 next_block:
 	blkaddr = datablock_addr(dn.inode, dn.node_page, dn.ofs_in_node);

-	if (blkaddr == NEW_ADDR || blkaddr == NULL_ADDR) {
+	if (!is_valid_blkaddr(blkaddr)) {
 		if (create) {
 			if (unlikely(f2fs_cp_error(sbi))) {
 				err = -EIO;
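Several open-coded `NEW_ADDR`/`NULL_ADDR` comparisons in this series are folded into a single `is_valid_blkaddr()` predicate. A minimal model of it; the sentinel values mirror f2fs's `NULL_ADDR`/`NEW_ADDR` definitions, but the standalone typedef here is for illustration only:

```c
#include <stdbool.h>
#include <assert.h>

typedef unsigned int block_t;

/* Model of the is_valid_blkaddr() helper this series substitutes for
 * open-coded checks: a block address is "valid" only if it is neither
 * the unallocated sentinel nor the allocated-but-unwritten one. */
#define NULL_ADDR	((block_t)0)	/* no block allocated yet */
#define NEW_ADDR	((block_t)-1)	/* allocated, not yet written */

static bool is_valid_blkaddr(block_t blkaddr)
{
	return blkaddr != NEW_ADDR && blkaddr != NULL_ADDR;
}
```

Centralizing the check means every caller treats both sentinels consistently, which is harder to guarantee when each site spells out its own comparisons.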
@@ -1057,7 +1118,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
 			(pgofs == end || dn.ofs_in_node == end_offset)) {

 		dn.ofs_in_node = ofs_in_node;
-		err = reserve_new_blocks(&dn, prealloc);
+		err = f2fs_reserve_new_blocks(&dn, prealloc);
 		if (err)
 			goto sync_out;
@@ -1176,7 +1237,7 @@ static int get_data_block_dio(struct inode *inode, sector_t iblock,
 {
 	return __get_data_block(inode, iblock, bh_result, create,
 						F2FS_GET_BLOCK_DEFAULT, NULL,
-						rw_hint_to_seg_type(
+						f2fs_rw_hint_to_seg_type(
 							inode->i_write_hint));
 }
@@ -1221,7 +1282,7 @@ static int f2fs_xattr_fiemap(struct inode *inode,
 		if (!page)
 			return -ENOMEM;

-		get_node_info(sbi, inode->i_ino, &ni);
+		f2fs_get_node_info(sbi, inode->i_ino, &ni);

 		phys = (__u64)blk_to_logical(inode, ni.blk_addr);
 		offset = offsetof(struct f2fs_inode, i_addr) +
@@ -1248,7 +1309,7 @@ static int f2fs_xattr_fiemap(struct inode *inode,
 		if (!page)
 			return -ENOMEM;

-		get_node_info(sbi, xnid, &ni);
+		f2fs_get_node_info(sbi, xnid, &ni);

 		phys = (__u64)blk_to_logical(inode, ni.blk_addr);
 		len = inode->i_sb->s_blocksize;
@@ -1525,7 +1586,7 @@ static int encrypt_one_page(struct f2fs_io_info *fio)
 	if (!f2fs_encrypted_file(inode))
 		return 0;

-	/* wait for GCed encrypted page writeback */
+	/* wait for GCed page writeback via META_MAPPING */
 	f2fs_wait_on_block_writeback(fio->sbi, fio->old_blkaddr);

 retry_encrypt:
@@ -1552,12 +1613,12 @@ static inline bool check_inplace_update_policy(struct inode *inode,
 	if (policy & (0x1 << F2FS_IPU_FORCE))
 		return true;
-	if (policy & (0x1 << F2FS_IPU_SSR) && need_SSR(sbi))
+	if (policy & (0x1 << F2FS_IPU_SSR) && f2fs_need_SSR(sbi))
 		return true;
 	if (policy & (0x1 << F2FS_IPU_UTIL) &&
 			utilization(sbi) > SM_I(sbi)->min_ipu_util)
 		return true;
-	if (policy & (0x1 << F2FS_IPU_SSR_UTIL) && need_SSR(sbi) &&
+	if (policy & (0x1 << F2FS_IPU_SSR_UTIL) && f2fs_need_SSR(sbi) &&
 			utilization(sbi) > SM_I(sbi)->min_ipu_util)
 		return true;
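The in-place-update policy checks above combine a per-bit policy mask with runtime conditions: a bit alone is not enough, its condition must hold too (except the force bit). A compact model, with illustrative bit names and boolean stand-ins for `f2fs_need_SSR()` and the utilization threshold:

```c
#include <stdbool.h>
#include <assert.h>

/* Model of the IPU policy test above: each policy is one bit, and a
 * bit only triggers in-place update when its runtime condition also
 * holds. Bit positions and condition flags are illustrative. */
enum ipu_policy { IPU_FORCE, IPU_SSR, IPU_UTIL, IPU_SSR_UTIL };

static bool check_ipu_model(unsigned int policy, bool need_ssr, bool high_util)
{
	if (policy & (1U << IPU_FORCE))
		return true;			/* unconditional */
	if ((policy & (1U << IPU_SSR)) && need_ssr)
		return true;
	if ((policy & (1U << IPU_UTIL)) && high_util)
		return true;
	if ((policy & (1U << IPU_SSR_UTIL)) && need_ssr && high_util)
		return true;
	return false;
}
```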
...@@ -1578,7 +1639,7 @@ static inline bool check_inplace_update_policy(struct inode *inode, ...@@ -1578,7 +1639,7 @@ static inline bool check_inplace_update_policy(struct inode *inode,
return false; return false;
} }
bool should_update_inplace(struct inode *inode, struct f2fs_io_info *fio) bool f2fs_should_update_inplace(struct inode *inode, struct f2fs_io_info *fio)
{ {
if (f2fs_is_pinned_file(inode)) if (f2fs_is_pinned_file(inode))
return true; return true;
...@@ -1590,7 +1651,7 @@ bool should_update_inplace(struct inode *inode, struct f2fs_io_info *fio) ...@@ -1590,7 +1651,7 @@ bool should_update_inplace(struct inode *inode, struct f2fs_io_info *fio)
return check_inplace_update_policy(inode, fio); return check_inplace_update_policy(inode, fio);
} }
bool should_update_outplace(struct inode *inode, struct f2fs_io_info *fio) bool f2fs_should_update_outplace(struct inode *inode, struct f2fs_io_info *fio)
{ {
struct f2fs_sb_info *sbi = F2FS_I_SB(inode); struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
...@@ -1613,22 +1674,13 @@ static inline bool need_inplace_update(struct f2fs_io_info *fio) ...@@ -1613,22 +1674,13 @@ static inline bool need_inplace_update(struct f2fs_io_info *fio)
{ {
struct inode *inode = fio->page->mapping->host; struct inode *inode = fio->page->mapping->host;
if (should_update_outplace(inode, fio)) if (f2fs_should_update_outplace(inode, fio))
return false; return false;
return should_update_inplace(inode, fio); return f2fs_should_update_inplace(inode, fio);
} }
static inline bool valid_ipu_blkaddr(struct f2fs_io_info *fio) int f2fs_do_write_data_page(struct f2fs_io_info *fio)
{
if (fio->old_blkaddr == NEW_ADDR)
return false;
if (fio->old_blkaddr == NULL_ADDR)
return false;
return true;
}
int do_write_data_page(struct f2fs_io_info *fio)
{ {
struct page *page = fio->page; struct page *page = fio->page;
struct inode *inode = page->mapping->host; struct inode *inode = page->mapping->host;
...@@ -1642,7 +1694,7 @@ int do_write_data_page(struct f2fs_io_info *fio) ...@@ -1642,7 +1694,7 @@ int do_write_data_page(struct f2fs_io_info *fio)
			f2fs_lookup_extent_cache(inode, page->index, &ei)) {
		fio->old_blkaddr = ei.blk + page->index - ei.fofs;
-		if (valid_ipu_blkaddr(fio)) {
+		if (is_valid_blkaddr(fio->old_blkaddr)) {
			ipu_force = true;
			fio->need_lock = LOCK_DONE;
			goto got_it;
@@ -1653,7 +1705,7 @@ int do_write_data_page(struct f2fs_io_info *fio)
	if (fio->need_lock == LOCK_REQ && !f2fs_trylock_op(fio->sbi))
		return -EAGAIN;

-	err = get_dnode_of_data(&dn, page->index, LOOKUP_NODE);
+	err = f2fs_get_dnode_of_data(&dn, page->index, LOOKUP_NODE);
	if (err)
		goto out;
@@ -1669,16 +1721,18 @@ int do_write_data_page(struct f2fs_io_info *fio)
	 * If current allocation needs SSR,
	 * it had better in-place writes for updated data.
	 */
-	if (ipu_force || (valid_ipu_blkaddr(fio) && need_inplace_update(fio))) {
+	if (ipu_force || (is_valid_blkaddr(fio->old_blkaddr) &&
+					need_inplace_update(fio))) {
		err = encrypt_one_page(fio);
		if (err)
			goto out_writepage;

		set_page_writeback(page);
+		ClearPageError(page);
		f2fs_put_dnode(&dn);
		if (fio->need_lock == LOCK_REQ)
			f2fs_unlock_op(fio->sbi);
-		err = rewrite_data_page(fio);
+		err = f2fs_inplace_write_data(fio);
		trace_f2fs_do_write_data_page(fio->page, IPU);
		set_inode_flag(inode, FI_UPDATE_WRITE);
		return err;
@@ -1697,9 +1751,10 @@ int do_write_data_page(struct f2fs_io_info *fio)
		goto out_writepage;

	set_page_writeback(page);
+	ClearPageError(page);

	/* LFS mode write path */
-	write_data_page(&dn, fio);
+	f2fs_outplace_write_data(&dn, fio);
	trace_f2fs_do_write_data_page(page, OPU);
	set_inode_flag(inode, FI_APPEND_WRITE);
	if (page->index == 0)
@@ -1745,6 +1800,12 @@ static int __write_data_page(struct page *page, bool *submitted,
	/* we should bypass data pages to proceed the kworker jobs */
	if (unlikely(f2fs_cp_error(sbi))) {
		mapping_set_error(page->mapping, -EIO);
+		/*
+		 * don't drop any dirty dentry pages for keeping latest
+		 * directory structure.
+		 */
+		if (S_ISDIR(inode->i_mode))
+			goto redirty_out;
		goto out;
	}
@@ -1769,13 +1830,13 @@ static int __write_data_page(struct page *page, bool *submitted,
	/* we should not write 0'th page having journal header */
	if (f2fs_is_volatile_file(inode) && (!page->index ||
			(!wbc->for_reclaim &&
-			available_free_memory(sbi, BASE_CHECK))))
+			f2fs_available_free_memory(sbi, BASE_CHECK))))
		goto redirty_out;

	/* Dentry blocks are controlled by checkpoint */
	if (S_ISDIR(inode->i_mode)) {
		fio.need_lock = LOCK_DONE;
-		err = do_write_data_page(&fio);
+		err = f2fs_do_write_data_page(&fio);
		goto done;
	}
@@ -1794,10 +1855,10 @@ static int __write_data_page(struct page *page, bool *submitted,
	}

	if (err == -EAGAIN) {
-		err = do_write_data_page(&fio);
+		err = f2fs_do_write_data_page(&fio);
		if (err == -EAGAIN) {
			fio.need_lock = LOCK_REQ;
-			err = do_write_data_page(&fio);
+			err = f2fs_do_write_data_page(&fio);
		}
	}
@@ -1822,7 +1883,7 @@ static int __write_data_page(struct page *page, bool *submitted,
	if (wbc->for_reclaim) {
		f2fs_submit_merged_write_cond(sbi, inode, 0, page->index, DATA);
		clear_inode_flag(inode, FI_HOT_DATA);
-		remove_dirty_inode(inode);
+		f2fs_remove_dirty_inode(inode);
		submitted = NULL;
	}
@@ -1842,7 +1903,13 @@ static int __write_data_page(struct page *page, bool *submitted,
redirty_out:
	redirty_page_for_writepage(wbc, page);
-	if (!err)
+	/*
+	 * pageout() in MM translates EAGAIN, so calls handle_write_error()
+	 * -> mapping_set_error() -> set_bit(AS_EIO, ...).
+	 * file_write_and_wait_range() will see EIO error, which is critical
+	 * to return value of fsync() followed by atomic_write failure to user.
+	 */
+	if (!err || wbc->for_reclaim)
		return AOP_WRITEPAGE_ACTIVATE;
	unlock_page(page);
	return err;
@@ -1866,6 +1933,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
	int ret = 0;
	int done = 0;
	struct pagevec pvec;
+	struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
	int nr_pages;
	pgoff_t uninitialized_var(writeback_index);
	pgoff_t index;
@@ -1919,6 +1987,13 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
			struct page *page = pvec.pages[i];
			bool submitted = false;

+			/* give a priority to WB_SYNC threads */
+			if (atomic_read(&sbi->wb_sync_req[DATA]) &&
+					wbc->sync_mode == WB_SYNC_NONE) {
+				done = 1;
+				break;
+			}
			done_index = page->index;
retry_write:
			lock_page(page);
@@ -1973,9 +2048,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
				last_idx = page->index;
			}

-			/* give a priority to WB_SYNC threads */
-			if ((atomic_read(&F2FS_M_SB(mapping)->wb_sync_req) ||
-					--wbc->nr_to_write <= 0) &&
+			if (--wbc->nr_to_write <= 0 &&
					wbc->sync_mode == WB_SYNC_NONE) {
				done = 1;
				break;
@@ -2001,7 +2074,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
	return ret;
}
-int __f2fs_write_data_pages(struct address_space *mapping,
+static int __f2fs_write_data_pages(struct address_space *mapping,
						struct writeback_control *wbc,
						enum iostat_type io_type)
{
@@ -2024,7 +2097,7 @@ int __f2fs_write_data_pages(struct address_space *mapping,
	if (S_ISDIR(inode->i_mode) && wbc->sync_mode == WB_SYNC_NONE &&
			get_dirty_pages(inode) < nr_pages_to_skip(sbi, DATA) &&
-			available_free_memory(sbi, DIRTY_DENTS))
+			f2fs_available_free_memory(sbi, DIRTY_DENTS))
		goto skip_write;

	/* skip writing during file defragment */
@@ -2035,8 +2108,8 @@ int __f2fs_write_data_pages(struct address_space *mapping,
	/* to avoid splitting IOs due to mixed WB_SYNC_ALL and WB_SYNC_NONE */
	if (wbc->sync_mode == WB_SYNC_ALL)
-		atomic_inc(&sbi->wb_sync_req);
-	else if (atomic_read(&sbi->wb_sync_req))
+		atomic_inc(&sbi->wb_sync_req[DATA]);
+	else if (atomic_read(&sbi->wb_sync_req[DATA]))
		goto skip_write;

	blk_start_plug(&plug);
@@ -2044,13 +2117,13 @@ int __f2fs_write_data_pages(struct address_space *mapping,
	blk_finish_plug(&plug);

	if (wbc->sync_mode == WB_SYNC_ALL)
-		atomic_dec(&sbi->wb_sync_req);
+		atomic_dec(&sbi->wb_sync_req[DATA]);
	/*
	 * if some pages were truncated, we cannot guarantee its mapping->host
	 * to detect pending bios.
	 */
-	remove_dirty_inode(inode);
+	f2fs_remove_dirty_inode(inode);
	return ret;

skip_write:
@@ -2077,7 +2150,7 @@ static void f2fs_write_failed(struct address_space *mapping, loff_t to)
	if (to > i_size) {
		down_write(&F2FS_I(inode)->i_mmap_sem);
		truncate_pagecache(inode, i_size);
-		truncate_blocks(inode, i_size, true);
+		f2fs_truncate_blocks(inode, i_size, true);
		up_write(&F2FS_I(inode)->i_mmap_sem);
	}
}
@@ -2109,7 +2182,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi,
	}
restart:
	/* check inline_data */
-	ipage = get_node_page(sbi, inode->i_ino);
+	ipage = f2fs_get_node_page(sbi, inode->i_ino);
	if (IS_ERR(ipage)) {
		err = PTR_ERR(ipage);
		goto unlock_out;
@@ -2119,7 +2192,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi,
	if (f2fs_has_inline_data(inode)) {
		if (pos + len <= MAX_INLINE_DATA(inode)) {
-			read_inline_data(page, ipage);
+			f2fs_do_read_inline_data(page, ipage);
			set_inode_flag(inode, FI_DATA_EXIST);
			if (inode->i_nlink)
				set_inline_node(ipage);
@@ -2137,7 +2210,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi,
			dn.data_blkaddr = ei.blk + index - ei.fofs;
		} else {
			/* hole case */
-			err = get_dnode_of_data(&dn, index, LOOKUP_NODE);
+			err = f2fs_get_dnode_of_data(&dn, index, LOOKUP_NODE);
			if (err || dn.data_blkaddr == NULL_ADDR) {
				f2fs_put_dnode(&dn);
				__do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO,
@@ -2174,7 +2247,7 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
	trace_f2fs_write_begin(inode, pos, len, flags);

	if (f2fs_is_atomic_file(inode) &&
-			!available_free_memory(sbi, INMEM_PAGES)) {
+			!f2fs_available_free_memory(sbi, INMEM_PAGES)) {
		err = -ENOMEM;
		drop_atomic = true;
		goto fail;
@@ -2222,8 +2295,8 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
	f2fs_wait_on_page_writeback(page, DATA, false);

-	/* wait for GCed encrypted page writeback */
-	if (f2fs_encrypted_file(inode))
+	/* wait for GCed page writeback via META_MAPPING */
+	if (f2fs_post_read_required(inode))
		f2fs_wait_on_block_writeback(sbi, blkaddr);

	if (len == PAGE_SIZE || PageUptodate(page))
@@ -2258,7 +2331,7 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
	f2fs_put_page(page, 1);
	f2fs_write_failed(mapping, pos + len);
	if (drop_atomic)
-		drop_inmem_pages_all(sbi);
+		f2fs_drop_inmem_pages_all(sbi, false);
	return err;
}
@@ -2333,17 +2406,17 @@ static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
	if (rw == WRITE && whint_mode == WHINT_MODE_OFF)
		iocb->ki_hint = WRITE_LIFE_NOT_SET;

-	if (!down_read_trylock(&F2FS_I(inode)->dio_rwsem[rw])) {
+	if (!down_read_trylock(&F2FS_I(inode)->i_gc_rwsem[rw])) {
		if (iocb->ki_flags & IOCB_NOWAIT) {
			iocb->ki_hint = hint;
			err = -EAGAIN;
			goto out;
		}
-		down_read(&F2FS_I(inode)->dio_rwsem[rw]);
+		down_read(&F2FS_I(inode)->i_gc_rwsem[rw]);
	}

	err = blockdev_direct_IO(iocb, inode, iter, get_data_block_dio);
-	up_read(&F2FS_I(inode)->dio_rwsem[rw]);
+	up_read(&F2FS_I(inode)->i_gc_rwsem[rw]);

	if (rw == WRITE) {
		if (whint_mode == WHINT_MODE_OFF)
@@ -2380,13 +2453,13 @@ void f2fs_invalidate_page(struct page *page, unsigned int offset,
			dec_page_count(sbi, F2FS_DIRTY_NODES);
		} else {
			inode_dec_dirty_pages(inode);
-			remove_dirty_inode(inode);
+			f2fs_remove_dirty_inode(inode);
		}
	}

	/* This is atomic written page, keep Private */
	if (IS_ATOMIC_WRITTEN_PAGE(page))
-		return drop_inmem_page(inode, page);
+		return f2fs_drop_inmem_page(inode, page);

	set_page_private(page, 0);
	ClearPagePrivate(page);
@@ -2407,35 +2480,6 @@ int f2fs_release_page(struct page *page, gfp_t wait)
	return 1;
}
-/*
- * This was copied from __set_page_dirty_buffers which gives higher performance
- * in very high speed storages. (e.g., pmem)
- */
-void f2fs_set_page_dirty_nobuffers(struct page *page)
-{
-	struct address_space *mapping = page->mapping;
-	unsigned long flags;
-
-	if (unlikely(!mapping))
-		return;
-
-	spin_lock(&mapping->private_lock);
-	lock_page_memcg(page);
-	SetPageDirty(page);
-	spin_unlock(&mapping->private_lock);
-
-	xa_lock_irqsave(&mapping->i_pages, flags);
-	WARN_ON_ONCE(!PageUptodate(page));
-	account_page_dirtied(page, mapping);
-	radix_tree_tag_set(&mapping->i_pages,
-			page_index(page), PAGECACHE_TAG_DIRTY);
-	xa_unlock_irqrestore(&mapping->i_pages, flags);
-	unlock_page_memcg(page);
-
-	__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
-	return;
-}
-
static int f2fs_set_data_page_dirty(struct page *page)
{
	struct address_space *mapping = page->mapping;
@@ -2448,7 +2492,7 @@ static int f2fs_set_data_page_dirty(struct page *page)
	if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) {
		if (!IS_ATOMIC_WRITTEN_PAGE(page)) {
-			register_inmem_page(inode, page);
+			f2fs_register_inmem_page(inode, page);
			return 1;
		}
		/*
@@ -2459,8 +2503,8 @@ static int f2fs_set_data_page_dirty(struct page *page)
	}

	if (!PageDirty(page)) {
-		f2fs_set_page_dirty_nobuffers(page);
-		update_dirty_page(inode, page);
+		__set_page_dirty_nobuffers(page);
+		f2fs_update_dirty_page(inode, page);
		return 1;
	}
	return 0;
@@ -2555,3 +2599,38 @@ const struct address_space_operations f2fs_dblock_aops = {
	.migratepage	= f2fs_migrate_page,
#endif
};
+
+void f2fs_clear_radix_tree_dirty_tag(struct page *page)
+{
+	struct address_space *mapping = page_mapping(page);
+	unsigned long flags;
+
+	xa_lock_irqsave(&mapping->i_pages, flags);
+	radix_tree_tag_clear(&mapping->i_pages, page_index(page),
+				PAGECACHE_TAG_DIRTY);
+	xa_unlock_irqrestore(&mapping->i_pages, flags);
+}
+
+int __init f2fs_init_post_read_processing(void)
+{
+	bio_post_read_ctx_cache = KMEM_CACHE(bio_post_read_ctx, 0);
+	if (!bio_post_read_ctx_cache)
+		goto fail;
+	bio_post_read_ctx_pool =
+		mempool_create_slab_pool(NUM_PREALLOC_POST_READ_CTXS,
+					 bio_post_read_ctx_cache);
+	if (!bio_post_read_ctx_pool)
+		goto fail_free_cache;
+	return 0;
+
+fail_free_cache:
+	kmem_cache_destroy(bio_post_read_ctx_cache);
+fail:
+	return -ENOMEM;
+}
+
+void __exit f2fs_destroy_post_read_processing(void)
+{
+	mempool_destroy(bio_post_read_ctx_pool);
+	kmem_cache_destroy(bio_post_read_ctx_cache);
+}
@@ -104,6 +104,8 @@ static void update_general_status(struct f2fs_sb_info *sbi)
	si->avail_nids = NM_I(sbi)->available_nids;
	si->alloc_nids = NM_I(sbi)->nid_cnt[PREALLOC_NID];
	si->bg_gc = sbi->bg_gc;
+	si->skipped_atomic_files[BG_GC] = sbi->skipped_atomic_files[BG_GC];
+	si->skipped_atomic_files[FG_GC] = sbi->skipped_atomic_files[FG_GC];
	si->util_free = (int)(free_user_blocks(sbi) >> sbi->log_blocks_per_seg)
		* 100 / (int)(sbi->user_block_count >> sbi->log_blocks_per_seg)
		/ 2;
@@ -342,6 +344,10 @@ static int stat_show(struct seq_file *s, void *v)
			si->bg_data_blks);
		seq_printf(s, "  - node blocks : %d (%d)\n", si->node_blks,
			si->bg_node_blks);
+		seq_printf(s, "Skipped : atomic write %llu (%llu)\n",
+			si->skipped_atomic_files[BG_GC] +
+			si->skipped_atomic_files[FG_GC],
+			si->skipped_atomic_files[BG_GC]);
		seq_puts(s, "\nExtent Cache:\n");
		seq_printf(s, "  - Hit Count: L1-1:%llu L1-2:%llu L2:%llu\n",
			si->hit_largest, si->hit_cached,
......
@@ -60,12 +60,12 @@ static unsigned char f2fs_type_by_mode[S_IFMT >> S_SHIFT] = {
	[S_IFLNK >> S_SHIFT]	= F2FS_FT_SYMLINK,
};

-void set_de_type(struct f2fs_dir_entry *de, umode_t mode)
+static void set_de_type(struct f2fs_dir_entry *de, umode_t mode)
{
	de->file_type = f2fs_type_by_mode[(mode & S_IFMT) >> S_SHIFT];
}

-unsigned char get_de_type(struct f2fs_dir_entry *de)
+unsigned char f2fs_get_de_type(struct f2fs_dir_entry *de)
{
	if (de->file_type < F2FS_FT_MAX)
		return f2fs_filetype_table[de->file_type];
@@ -97,14 +97,14 @@ static struct f2fs_dir_entry *find_in_block(struct page *dentry_page,
	dentry_blk = (struct f2fs_dentry_block *)page_address(dentry_page);

	make_dentry_ptr_block(NULL, &d, dentry_blk);
-	de = find_target_dentry(fname, namehash, max_slots, &d);
+	de = f2fs_find_target_dentry(fname, namehash, max_slots, &d);
	if (de)
		*res_page = dentry_page;

	return de;
}

-struct f2fs_dir_entry *find_target_dentry(struct fscrypt_name *fname,
+struct f2fs_dir_entry *f2fs_find_target_dentry(struct fscrypt_name *fname,
			f2fs_hash_t namehash, int *max_slots,
			struct f2fs_dentry_ptr *d)
{
@@ -171,7 +171,7 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir,
	for (; bidx < end_block; bidx++) {
		/* no need to allocate new dentry pages to all the indices */
-		dentry_page = find_data_page(dir, bidx);
+		dentry_page = f2fs_find_data_page(dir, bidx);
		if (IS_ERR(dentry_page)) {
			if (PTR_ERR(dentry_page) == -ENOENT) {
				room = true;
@@ -210,7 +210,7 @@ struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir,
	if (f2fs_has_inline_dentry(dir)) {
		*res_page = NULL;
-		de = find_in_inline_dir(dir, fname, res_page);
+		de = f2fs_find_in_inline_dir(dir, fname, res_page);
		goto out;
	}
@@ -319,7 +319,7 @@ static void init_dent_inode(const struct qstr *name, struct page *ipage)
	set_page_dirty(ipage);
}

-void do_make_empty_dir(struct inode *inode, struct inode *parent,
+void f2fs_do_make_empty_dir(struct inode *inode, struct inode *parent,
						struct f2fs_dentry_ptr *d)
{
	struct qstr dot = QSTR_INIT(".", 1);
@@ -340,23 +340,23 @@ static int make_empty_dir(struct inode *inode,
	struct f2fs_dentry_ptr d;

	if (f2fs_has_inline_dentry(inode))
-		return make_empty_inline_dir(inode, parent, page);
+		return f2fs_make_empty_inline_dir(inode, parent, page);

-	dentry_page = get_new_data_page(inode, page, 0, true);
+	dentry_page = f2fs_get_new_data_page(inode, page, 0, true);
	if (IS_ERR(dentry_page))
		return PTR_ERR(dentry_page);

	dentry_blk = page_address(dentry_page);

	make_dentry_ptr_block(NULL, &d, dentry_blk);
-	do_make_empty_dir(inode, parent, &d);
+	f2fs_do_make_empty_dir(inode, parent, &d);

	set_page_dirty(dentry_page);
	f2fs_put_page(dentry_page, 1);
	return 0;
}

-struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
+struct page *f2fs_init_inode_metadata(struct inode *inode, struct inode *dir,
			const struct qstr *new_name, const struct qstr *orig_name,
			struct page *dpage)
{
@@ -365,7 +365,7 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
	int err;

	if (is_inode_flag_set(inode, FI_NEW_INODE)) {
-		page = new_inode_page(inode);
+		page = f2fs_new_inode_page(inode);
		if (IS_ERR(page))
			return page;
@@ -395,7 +395,7 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
			goto put_error;
		}
	} else {
-		page = get_node_page(F2FS_I_SB(dir), inode->i_ino);
+		page = f2fs_get_node_page(F2FS_I_SB(dir), inode->i_ino);
		if (IS_ERR(page))
			return page;
	}
@@ -418,19 +418,19 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
		 * we should remove this inode from orphan list.
		 */
		if (inode->i_nlink == 0)
-			remove_orphan_inode(F2FS_I_SB(dir), inode->i_ino);
+			f2fs_remove_orphan_inode(F2FS_I_SB(dir), inode->i_ino);
		f2fs_i_links_write(inode, true);
	}
	return page;

put_error:
	clear_nlink(inode);
-	update_inode(inode, page);
+	f2fs_update_inode(inode, page);
	f2fs_put_page(page, 1);
	return ERR_PTR(err);
}

-void update_parent_metadata(struct inode *dir, struct inode *inode,
+void f2fs_update_parent_metadata(struct inode *dir, struct inode *inode,
						unsigned int current_depth)
{
	if (inode && is_inode_flag_set(inode, FI_NEW_INODE)) {
@@ -448,7 +448,7 @@ void update_parent_metadata(struct inode *dir, struct inode *inode,
		clear_inode_flag(inode, FI_INC_LINK);
}

-int room_for_filename(const void *bitmap, int slots, int max_slots)
+int f2fs_room_for_filename(const void *bitmap, int slots, int max_slots)
{
	int bit_start = 0;
	int zero_start, zero_end;
@@ -537,12 +537,12 @@ int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name,
				(le32_to_cpu(dentry_hash) % nbucket));

	for (block = bidx; block <= (bidx + nblock - 1); block++) {
-		dentry_page = get_new_data_page(dir, NULL, block, true);
+		dentry_page = f2fs_get_new_data_page(dir, NULL, block, true);
		if (IS_ERR(dentry_page))
			return PTR_ERR(dentry_page);

		dentry_blk = page_address(dentry_page);
-		bit_pos = room_for_filename(&dentry_blk->dentry_bitmap,
+		bit_pos = f2fs_room_for_filename(&dentry_blk->dentry_bitmap,
						slots, NR_DENTRY_IN_BLOCK);
		if (bit_pos < NR_DENTRY_IN_BLOCK)
			goto add_dentry;
@@ -558,7 +558,7 @@ int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name,
	if (inode) {
		down_write(&F2FS_I(inode)->i_sem);
-		page = init_inode_metadata(inode, dir, new_name,
+		page = f2fs_init_inode_metadata(inode, dir, new_name,
						orig_name, NULL);
		if (IS_ERR(page)) {
			err = PTR_ERR(page);
@@ -576,7 +576,7 @@ int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name,
		f2fs_put_page(page, 1);
	}

-	update_parent_metadata(dir, inode, current_depth);
+	f2fs_update_parent_metadata(dir, inode, current_depth);
fail:
	if (inode)
		up_write(&F2FS_I(inode)->i_sem);
...@@ -586,7 +586,7 @@ int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name, ...@@ -586,7 +586,7 @@ int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name,
return err; return err;
} }
int __f2fs_do_add_link(struct inode *dir, struct fscrypt_name *fname, int f2fs_add_dentry(struct inode *dir, struct fscrypt_name *fname,
struct inode *inode, nid_t ino, umode_t mode) struct inode *inode, nid_t ino, umode_t mode)
{ {
struct qstr new_name; struct qstr new_name;
...@@ -610,7 +610,7 @@ int __f2fs_do_add_link(struct inode *dir, struct fscrypt_name *fname, ...@@ -610,7 +610,7 @@ int __f2fs_do_add_link(struct inode *dir, struct fscrypt_name *fname,
* Caller should grab and release a rwsem by calling f2fs_lock_op() and * Caller should grab and release a rwsem by calling f2fs_lock_op() and
* f2fs_unlock_op(). * f2fs_unlock_op().
*/ */
int __f2fs_add_link(struct inode *dir, const struct qstr *name, int f2fs_do_add_link(struct inode *dir, const struct qstr *name,
struct inode *inode, nid_t ino, umode_t mode) struct inode *inode, nid_t ino, umode_t mode)
{ {
struct fscrypt_name fname; struct fscrypt_name fname;
...@@ -639,7 +639,7 @@ int __f2fs_add_link(struct inode *dir, const struct qstr *name, ...@@ -639,7 +639,7 @@ int __f2fs_add_link(struct inode *dir, const struct qstr *name,
} else if (IS_ERR(page)) { } else if (IS_ERR(page)) {
err = PTR_ERR(page); err = PTR_ERR(page);
} else { } else {
err = __f2fs_do_add_link(dir, &fname, inode, ino, mode); err = f2fs_add_dentry(dir, &fname, inode, ino, mode);
} }
fscrypt_free_filename(&fname); fscrypt_free_filename(&fname);
return err; return err;
@@ -651,7 +651,7 @@ int f2fs_do_tmpfile(struct inode *inode, struct inode *dir)
	int err = 0;

	down_write(&F2FS_I(inode)->i_sem);
-	page = init_inode_metadata(inode, dir, NULL, NULL, NULL);
+	page = f2fs_init_inode_metadata(inode, dir, NULL, NULL, NULL);
	if (IS_ERR(page)) {
		err = PTR_ERR(page);
		goto fail;
@@ -683,9 +683,9 @@ void f2fs_drop_nlink(struct inode *dir, struct inode *inode)
	up_write(&F2FS_I(inode)->i_sem);

	if (inode->i_nlink == 0)
-		add_orphan_inode(inode);
+		f2fs_add_orphan_inode(inode);
	else
-		release_orphan_inode(sbi);
+		f2fs_release_orphan_inode(sbi);
}

/*
@@ -698,14 +698,12 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
	struct f2fs_dentry_block *dentry_blk;
	unsigned int bit_pos;
	int slots = GET_DENTRY_SLOTS(le16_to_cpu(dentry->name_len));
-	struct address_space *mapping = page_mapping(page);
-	unsigned long flags;
	int i;

	f2fs_update_time(F2FS_I_SB(dir), REQ_TIME);

	if (F2FS_OPTION(F2FS_I_SB(dir)).fsync_mode == FSYNC_MODE_STRICT)
-		add_ino_entry(F2FS_I_SB(dir), dir->i_ino, TRANS_DIR_INO);
+		f2fs_add_ino_entry(F2FS_I_SB(dir), dir->i_ino, TRANS_DIR_INO);

	if (f2fs_has_inline_dentry(dir))
		return f2fs_delete_inline_entry(dentry, page, dir, inode);
@@ -731,17 +729,13 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
		f2fs_drop_nlink(dir, inode);

	if (bit_pos == NR_DENTRY_IN_BLOCK &&
-			!truncate_hole(dir, page->index, page->index + 1)) {
-		xa_lock_irqsave(&mapping->i_pages, flags);
-		radix_tree_tag_clear(&mapping->i_pages, page_index(page),
-				     PAGECACHE_TAG_DIRTY);
-		xa_unlock_irqrestore(&mapping->i_pages, flags);
+			!f2fs_truncate_hole(dir, page->index, page->index + 1)) {
+		f2fs_clear_radix_tree_dirty_tag(page);
		clear_page_dirty_for_io(page);
		ClearPagePrivate(page);
		ClearPageUptodate(page);
		inode_dec_dirty_pages(dir);
-		remove_dirty_inode(dir);
+		f2fs_remove_dirty_inode(dir);
	}
	f2fs_put_page(page, 1);
}
...@@ -758,7 +752,7 @@ bool f2fs_empty_dir(struct inode *dir) ...@@ -758,7 +752,7 @@ bool f2fs_empty_dir(struct inode *dir)
return f2fs_empty_inline_dir(dir); return f2fs_empty_inline_dir(dir);
for (bidx = 0; bidx < nblock; bidx++) { for (bidx = 0; bidx < nblock; bidx++) {
dentry_page = get_lock_data_page(dir, bidx, false); dentry_page = f2fs_get_lock_data_page(dir, bidx, false);
if (IS_ERR(dentry_page)) { if (IS_ERR(dentry_page)) {
if (PTR_ERR(dentry_page) == -ENOENT) if (PTR_ERR(dentry_page) == -ENOENT)
continue; continue;
...@@ -806,7 +800,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d, ...@@ -806,7 +800,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
continue; continue;
} }
d_type = get_de_type(de); d_type = f2fs_get_de_type(de);
de_name.name = d->filename[bit_pos]; de_name.name = d->filename[bit_pos];
de_name.len = le16_to_cpu(de->name_len); de_name.len = le16_to_cpu(de->name_len);
...@@ -830,7 +824,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d, ...@@ -830,7 +824,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
return 1; return 1;
if (sbi->readdir_ra == 1) if (sbi->readdir_ra == 1)
ra_node_page(sbi, le32_to_cpu(de->ino)); f2fs_ra_node_page(sbi, le32_to_cpu(de->ino));
bit_pos += GET_DENTRY_SLOTS(le16_to_cpu(de->name_len)); bit_pos += GET_DENTRY_SLOTS(le16_to_cpu(de->name_len));
ctx->pos = start_pos + bit_pos; ctx->pos = start_pos + bit_pos;
...@@ -880,7 +874,7 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx) ...@@ -880,7 +874,7 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx)
page_cache_sync_readahead(inode->i_mapping, ra, file, n, page_cache_sync_readahead(inode->i_mapping, ra, file, n,
min(npages - n, (pgoff_t)MAX_DIR_RA_PAGES)); min(npages - n, (pgoff_t)MAX_DIR_RA_PAGES));
dentry_page = get_lock_data_page(inode, n, false); dentry_page = f2fs_get_lock_data_page(inode, n, false);
if (IS_ERR(dentry_page)) { if (IS_ERR(dentry_page)) {
err = PTR_ERR(dentry_page); err = PTR_ERR(dentry_page);
if (err == -ENOENT) { if (err == -ENOENT) {
......
@@ -49,7 +49,7 @@ static struct rb_entry *__lookup_rb_tree_slow(struct rb_root *root,
 	return NULL;
 }
-struct rb_entry *__lookup_rb_tree(struct rb_root *root,
+struct rb_entry *f2fs_lookup_rb_tree(struct rb_root *root,
 				struct rb_entry *cached_re, unsigned int ofs)
 {
 	struct rb_entry *re;
@@ -61,7 +61,7 @@ struct rb_entry *__lookup_rb_tree(struct rb_root *root,
 	return re;
 }
-struct rb_node **__lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
+struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
 				struct rb_root *root, struct rb_node **parent,
 				unsigned int ofs)
 {
@@ -92,7 +92,7 @@ struct rb_node **__lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
  * in order to simpfy the insertion after.
  * tree must stay unchanged between lookup and insertion.
  */
-struct rb_entry *__lookup_rb_tree_ret(struct rb_root *root,
+struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root *root,
 				struct rb_entry *cached_re,
 				unsigned int ofs,
 				struct rb_entry **prev_entry,
@@ -159,7 +159,7 @@ struct rb_entry *__lookup_rb_tree_ret(struct rb_root *root,
 	return re;
 }
-bool __check_rb_tree_consistence(struct f2fs_sb_info *sbi,
+bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi,
 						struct rb_root *root)
 {
 #ifdef CONFIG_F2FS_CHECK_FS
@@ -390,7 +390,7 @@ static bool f2fs_lookup_extent_tree(struct inode *inode, pgoff_t pgofs,
 		goto out;
 	}
-	en = (struct extent_node *)__lookup_rb_tree(&et->root,
+	en = (struct extent_node *)f2fs_lookup_rb_tree(&et->root,
 				(struct rb_entry *)et->cached_en, pgofs);
 	if (!en)
 		goto out;
@@ -470,7 +470,7 @@ static struct extent_node *__insert_extent_tree(struct inode *inode,
 		goto do_insert;
 	}
-	p = __lookup_rb_tree_for_insert(sbi, &et->root, &parent, ei->fofs);
+	p = f2fs_lookup_rb_tree_for_insert(sbi, &et->root, &parent, ei->fofs);
 do_insert:
 	en = __attach_extent_node(sbi, et, ei, parent, p);
 	if (!en)
@@ -520,7 +520,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
 	__drop_largest_extent(inode, fofs, len);
 	/* 1. lookup first extent node in range [fofs, fofs + len - 1] */
-	en = (struct extent_node *)__lookup_rb_tree_ret(&et->root,
+	en = (struct extent_node *)f2fs_lookup_rb_tree_ret(&et->root,
 				(struct rb_entry *)et->cached_en, fofs,
 				(struct rb_entry **)&prev_en,
 				(struct rb_entry **)&next_en,
@@ -773,7 +773,7 @@ void f2fs_update_extent_cache(struct dnode_of_data *dn)
 	else
 		blkaddr = dn->data_blkaddr;
-	fofs = start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) +
+	fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) +
 								dn->ofs_in_node;
 	f2fs_update_extent_tree_range(dn->inode, fofs, blkaddr, 1);
 }
@@ -788,7 +788,7 @@ void f2fs_update_extent_cache_range(struct dnode_of_data *dn,
 	f2fs_update_extent_tree_range(dn->inode, fofs, blkaddr, len);
 }
-void init_extent_cache_info(struct f2fs_sb_info *sbi)
+void f2fs_init_extent_cache_info(struct f2fs_sb_info *sbi)
 {
 	INIT_RADIX_TREE(&sbi->extent_tree_root, GFP_NOIO);
 	mutex_init(&sbi->extent_tree_lock);
@@ -800,7 +800,7 @@ void init_extent_cache_info(struct f2fs_sb_info *sbi)
 	atomic_set(&sbi->total_ext_node, 0);
 }
-int __init create_extent_cache(void)
+int __init f2fs_create_extent_cache(void)
 {
 	extent_tree_slab = f2fs_kmem_cache_create("f2fs_extent_tree",
 			sizeof(struct extent_tree));
@@ -815,7 +815,7 @@ int __init create_extent_cache(void)
 	return 0;
 }
-void destroy_extent_cache(void)
+void f2fs_destroy_extent_cache(void)
 {
 	kmem_cache_destroy(extent_node_slab);
 	kmem_cache_destroy(extent_tree_slab);
...
@@ -176,15 +176,13 @@ enum {
 #define CP_DISCARD	0x00000010
 #define CP_TRIMMED	0x00000020
-#define DEF_BATCHED_TRIM_SECTIONS	2048
-#define BATCHED_TRIM_SEGMENTS(sbi)	\
-		(GET_SEG_FROM_SEC(sbi, SM_I(sbi)->trim_sections))
-#define BATCHED_TRIM_BLOCKS(sbi)	\
-		(BATCHED_TRIM_SEGMENTS(sbi) << (sbi)->log_blocks_per_seg)
 #define MAX_DISCARD_BLOCKS(sbi)		BLKS_PER_SEC(sbi)
 #define DEF_MAX_DISCARD_REQUEST		8	/* issue 8 discards per round */
+#define DEF_MAX_DISCARD_LEN		512	/* Max. 2MB per discard */
 #define DEF_MIN_DISCARD_ISSUE_TIME	50	/* 50 ms, if exists */
+#define DEF_MID_DISCARD_ISSUE_TIME	500	/* 500 ms, if device busy */
 #define DEF_MAX_DISCARD_ISSUE_TIME	60000	/* 60 s, if no candidates */
+#define DEF_DISCARD_URGENT_UTIL		80	/* do more discard over 80% */
 #define DEF_CP_INTERVAL			60	/* 60 secs */
 #define DEF_IDLE_INTERVAL		5	/* 5 secs */
@@ -285,6 +283,7 @@ enum {
 struct discard_policy {
 	int type;			/* type of discard */
 	unsigned int min_interval;	/* used for candidates exist */
+	unsigned int mid_interval;	/* used for device busy */
 	unsigned int max_interval;	/* used for candidates not exist */
 	unsigned int max_requests;	/* # of discards issued per round */
 	unsigned int io_aware_gran;	/* minimum granularity discard not be aware of I/O */
@@ -620,15 +619,20 @@ enum {
 #define DEF_DIR_LEVEL		0
+enum {
+	GC_FAILURE_PIN,
+	GC_FAILURE_ATOMIC,
+	MAX_GC_FAILURE
+};
 struct f2fs_inode_info {
 	struct inode vfs_inode;		/* serve a vfs inode */
 	unsigned long i_flags;		/* keep an inode flags for ioctl */
 	unsigned char i_advise;		/* use to give file attribute hints */
 	unsigned char i_dir_level;	/* use for dentry level for large dir */
-	union {
-		unsigned int i_current_depth;	/* only for directory depth */
-		unsigned short i_gc_failures;	/* only for regular file */
-	};
+	unsigned int i_current_depth;	/* only for directory depth */
+	/* for gc failure statistic */
+	unsigned int i_gc_failures[MAX_GC_FAILURE];
 	unsigned int i_pino;		/* parent inode number */
 	umode_t i_acl_mode;		/* keep file acl mode temporarily */
@@ -656,7 +660,9 @@ struct f2fs_inode_info {
 	struct task_struct *inmem_task;	/* store inmemory task */
 	struct mutex inmem_lock;	/* lock for inmemory pages */
 	struct extent_tree *extent_tree;	/* cached extent_tree entry */
-	struct rw_semaphore dio_rwsem[2];/* avoid racing between dio and gc */
+	/* avoid racing between foreground op and gc */
+	struct rw_semaphore i_gc_rwsem[2];
 	struct rw_semaphore i_mmap_sem;
 	struct rw_semaphore i_xattr_sem; /* avoid racing between reading and changing EAs */
@@ -694,7 +700,8 @@ static inline void set_extent_info(struct extent_info *ei, unsigned int fofs,
 static inline bool __is_discard_mergeable(struct discard_info *back,
 						struct discard_info *front)
 {
-	return back->lstart + back->len == front->lstart;
+	return (back->lstart + back->len == front->lstart) &&
+		(back->len + front->len < DEF_MAX_DISCARD_LEN);
 }
 static inline bool __is_discard_back_mergeable(struct discard_info *cur,
@@ -1005,6 +1012,7 @@ struct f2fs_io_info {
 	int need_lock;		/* indicate we need to lock cp_rwsem */
 	bool in_list;		/* indicate fio is in io_list */
 	bool is_meta;		/* indicate borrow meta inode mapping or not */
+	bool retry;		/* need to reallocate block address */
 	enum iostat_type io_type;	/* io type */
 	struct writeback_control *io_wbc; /* writeback control */
 };
@@ -1066,6 +1074,13 @@ enum {
 	MAX_TIME,
 };
+enum {
+	GC_NORMAL,
+	GC_IDLE_CB,
+	GC_IDLE_GREEDY,
+	GC_URGENT,
+};
 enum {
 	WHINT_MODE_OFF,		/* not pass down write hints */
 	WHINT_MODE_USER,	/* try to pass down hints given by users */
@@ -1080,6 +1095,7 @@ enum {
 enum fsync_mode {
 	FSYNC_MODE_POSIX,	/* fsync follows posix semantics */
 	FSYNC_MODE_STRICT,	/* fsync behaves in line with ext4 */
+	FSYNC_MODE_NOBARRIER,	/* fsync behaves nobarrier based on posix */
 };
 #ifdef CONFIG_F2FS_FS_ENCRYPTION
@@ -1113,6 +1129,8 @@ struct f2fs_sb_info {
 	struct f2fs_bio_info *write_io[NR_PAGE_TYPE];	/* for write bios */
 	struct mutex wio_mutex[NR_PAGE_TYPE - 1][NR_TEMP_TYPE];
 						/* bio ordering for NODE/DATA */
+	/* keep migration IO order for LFS mode */
+	struct rw_semaphore io_order_lock;
 	mempool_t *write_io_dummy;		/* Dummy pages */
 	/* for checkpoint */
@@ -1183,7 +1201,7 @@ struct f2fs_sb_info {
 	struct percpu_counter alloc_valid_block_count;
 	/* writeback control */
-	atomic_t wb_sync_req;			/* count # of WB_SYNC threads */
+	atomic_t wb_sync_req[META];		/* count # of WB_SYNC threads */
 	/* valid inode count */
 	struct percpu_counter total_valid_inode_count;
@@ -1194,9 +1212,9 @@ struct f2fs_sb_info {
 	struct mutex gc_mutex;			/* mutex for GC */
 	struct f2fs_gc_kthread	*gc_thread;	/* GC thread */
 	unsigned int cur_victim_sec;		/* current victim section num */
+	unsigned int gc_mode;			/* current GC state */
-	/* threshold for converting bg victims for fg */
-	u64 fggc_threshold;
+	/* for skip statistic */
+	unsigned long long skipped_atomic_files[2];	/* FG_GC and BG_GC */
 	/* threshold for gc trials on pinned files */
 	u64 gc_pin_file_threshold;
@@ -1586,18 +1604,6 @@ static inline bool __exist_node_summaries(struct f2fs_sb_info *sbi)
 			is_set_ckpt_flags(sbi, CP_FASTBOOT_FLAG));
 }
-/*
- * Check whether the given nid is within node id range.
- */
-static inline int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid)
-{
-	if (unlikely(nid < F2FS_ROOT_INO(sbi)))
-		return -EINVAL;
-	if (unlikely(nid >= NM_I(sbi)->max_nid))
-		return -EINVAL;
-	return 0;
-}
 /*
  * Check whether the inode has blocks or not
  */
@@ -1614,7 +1620,7 @@ static inline bool f2fs_has_xattr_block(unsigned int ofs)
 }
 static inline bool __allow_reserved_blocks(struct f2fs_sb_info *sbi,
-					struct inode *inode)
+					struct inode *inode, bool cap)
 {
 	if (!inode)
 		return true;
@@ -1627,7 +1633,7 @@ static inline bool __allow_reserved_blocks(struct f2fs_sb_info *sbi,
 	if (!gid_eq(F2FS_OPTION(sbi).s_resgid, GLOBAL_ROOT_GID) &&
 					in_group_p(F2FS_OPTION(sbi).s_resgid))
 		return true;
-	if (capable(CAP_SYS_RESOURCE))
+	if (cap && capable(CAP_SYS_RESOURCE))
 		return true;
 	return false;
 }
@@ -1662,7 +1668,7 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
 	avail_user_block_count = sbi->user_block_count -
 					sbi->current_reserved_blocks;
-	if (!__allow_reserved_blocks(sbi, inode))
+	if (!__allow_reserved_blocks(sbi, inode, true))
 		avail_user_block_count -= F2FS_OPTION(sbi).root_reserved_blocks;
 	if (unlikely(sbi->total_valid_block_count > avail_user_block_count)) {
@@ -1869,7 +1875,7 @@ static inline int inc_valid_node_count(struct f2fs_sb_info *sbi,
 	valid_block_count = sbi->total_valid_block_count +
 					sbi->current_reserved_blocks + 1;
-	if (!__allow_reserved_blocks(sbi, inode))
+	if (!__allow_reserved_blocks(sbi, inode, false))
 		valid_block_count += F2FS_OPTION(sbi).root_reserved_blocks;
 	if (unlikely(valid_block_count > sbi->user_block_count)) {
@@ -2156,9 +2162,60 @@ static inline void f2fs_change_bit(unsigned int nr, char *addr)
 	*addr ^= mask;
 }
-#define F2FS_REG_FLMASK		(~(FS_DIRSYNC_FL | FS_TOPDIR_FL))
-#define F2FS_OTHER_FLMASK	(FS_NODUMP_FL | FS_NOATIME_FL)
-#define F2FS_FL_INHERITED	(FS_PROJINHERIT_FL)
+/*
+ * Inode flags
+ */
+#define F2FS_SECRM_FL			0x00000001 /* Secure deletion */
+#define F2FS_UNRM_FL			0x00000002 /* Undelete */
+#define F2FS_COMPR_FL			0x00000004 /* Compress file */
+#define F2FS_SYNC_FL			0x00000008 /* Synchronous updates */
+#define F2FS_IMMUTABLE_FL		0x00000010 /* Immutable file */
+#define F2FS_APPEND_FL			0x00000020 /* writes to file may only append */
+#define F2FS_NODUMP_FL			0x00000040 /* do not dump file */
+#define F2FS_NOATIME_FL			0x00000080 /* do not update atime */
+/* Reserved for compression usage... */
+#define F2FS_DIRTY_FL			0x00000100
+#define F2FS_COMPRBLK_FL		0x00000200 /* One or more compressed clusters */
+#define F2FS_NOCOMPR_FL			0x00000400 /* Don't compress */
+#define F2FS_ENCRYPT_FL			0x00000800 /* encrypted file */
+/* End compression flags --- maybe not all used */
+#define F2FS_INDEX_FL			0x00001000 /* hash-indexed directory */
+#define F2FS_IMAGIC_FL			0x00002000 /* AFS directory */
+#define F2FS_JOURNAL_DATA_FL		0x00004000 /* file data should be journaled */
+#define F2FS_NOTAIL_FL			0x00008000 /* file tail should not be merged */
+#define F2FS_DIRSYNC_FL			0x00010000 /* dirsync behaviour (directories only) */
+#define F2FS_TOPDIR_FL			0x00020000 /* Top of directory hierarchies*/
+#define F2FS_HUGE_FILE_FL		0x00040000 /* Set to each huge file */
+#define F2FS_EXTENTS_FL			0x00080000 /* Inode uses extents */
+#define F2FS_EA_INODE_FL		0x00200000 /* Inode used for large EA */
+#define F2FS_EOFBLOCKS_FL		0x00400000 /* Blocks allocated beyond EOF */
+#define F2FS_INLINE_DATA_FL		0x10000000 /* Inode has inline data. */
+#define F2FS_PROJINHERIT_FL		0x20000000 /* Create with parents projid */
+#define F2FS_RESERVED_FL		0x80000000 /* reserved for ext4 lib */
+#define F2FS_FL_USER_VISIBLE		0x304BDFFF /* User visible flags */
+#define F2FS_FL_USER_MODIFIABLE		0x204BC0FF /* User modifiable flags */
+/* Flags we can manipulate with through F2FS_IOC_FSSETXATTR */
+#define F2FS_FL_XFLAG_VISIBLE		(F2FS_SYNC_FL | \
+					 F2FS_IMMUTABLE_FL | \
+					 F2FS_APPEND_FL | \
+					 F2FS_NODUMP_FL | \
+					 F2FS_NOATIME_FL | \
+					 F2FS_PROJINHERIT_FL)
+/* Flags that should be inherited by new inodes from their parent. */
+#define F2FS_FL_INHERITED (F2FS_SECRM_FL | F2FS_UNRM_FL | F2FS_COMPR_FL |\
+			   F2FS_SYNC_FL | F2FS_NODUMP_FL | F2FS_NOATIME_FL |\
+			   F2FS_NOCOMPR_FL | F2FS_JOURNAL_DATA_FL |\
+			   F2FS_NOTAIL_FL | F2FS_DIRSYNC_FL |\
+			   F2FS_PROJINHERIT_FL)
+/* Flags that are appropriate for regular files (all but dir-specific ones). */
+#define F2FS_REG_FLMASK		(~(F2FS_DIRSYNC_FL | F2FS_TOPDIR_FL))
+/* Flags that are appropriate for non-directories/regular files. */
+#define F2FS_OTHER_FLMASK	(F2FS_NODUMP_FL | F2FS_NOATIME_FL)
static inline __u32 f2fs_mask_flags(umode_t mode, __u32 flags) static inline __u32 f2fs_mask_flags(umode_t mode, __u32 flags)
{ {
...@@ -2201,6 +2258,7 @@ enum { ...@@ -2201,6 +2258,7 @@ enum {
FI_EXTRA_ATTR, /* indicate file has extra attribute */ FI_EXTRA_ATTR, /* indicate file has extra attribute */
FI_PROJ_INHERIT, /* indicate file inherits projectid */ FI_PROJ_INHERIT, /* indicate file inherits projectid */
FI_PIN_FILE, /* indicate file should not be gced */ FI_PIN_FILE, /* indicate file should not be gced */
FI_ATOMIC_REVOKE_REQUEST, /* request to drop atomic data */
}; };
static inline void __mark_inode_dirty_flag(struct inode *inode, static inline void __mark_inode_dirty_flag(struct inode *inode,
...@@ -2299,7 +2357,7 @@ static inline void f2fs_i_depth_write(struct inode *inode, unsigned int depth) ...@@ -2299,7 +2357,7 @@ static inline void f2fs_i_depth_write(struct inode *inode, unsigned int depth)
static inline void f2fs_i_gc_failures_write(struct inode *inode, static inline void f2fs_i_gc_failures_write(struct inode *inode,
unsigned int count) unsigned int count)
{ {
F2FS_I(inode)->i_gc_failures = count; F2FS_I(inode)->i_gc_failures[GC_FAILURE_PIN] = count;
f2fs_mark_inode_dirty_sync(inode, true); f2fs_mark_inode_dirty_sync(inode, true);
} }
...@@ -2568,7 +2626,7 @@ static inline int get_inline_xattr_addrs(struct inode *inode) ...@@ -2568,7 +2626,7 @@ static inline int get_inline_xattr_addrs(struct inode *inode)
return F2FS_I(inode)->i_inline_xattr_size; return F2FS_I(inode)->i_inline_xattr_size;
} }
#define get_inode_mode(i) \ #define f2fs_get_inode_mode(i) \
((is_inode_flag_set(i, FI_ACL_MODE)) ? \ ((is_inode_flag_set(i, FI_ACL_MODE)) ? \
(F2FS_I(i)->i_acl_mode) : ((i)->i_mode)) (F2FS_I(i)->i_acl_mode) : ((i)->i_mode))
...@@ -2607,18 +2665,25 @@ static inline void f2fs_update_iostat(struct f2fs_sb_info *sbi, ...@@ -2607,18 +2665,25 @@ static inline void f2fs_update_iostat(struct f2fs_sb_info *sbi,
spin_unlock(&sbi->iostat_lock); spin_unlock(&sbi->iostat_lock);
} }
static inline bool is_valid_blkaddr(block_t blkaddr)
{
if (blkaddr == NEW_ADDR || blkaddr == NULL_ADDR)
return false;
return true;
}
/* /*
* file.c * file.c
*/ */
int f2fs_sync_file(struct file *file, loff_t start, loff_t end, int datasync); int f2fs_sync_file(struct file *file, loff_t start, loff_t end, int datasync);
void truncate_data_blocks(struct dnode_of_data *dn); void f2fs_truncate_data_blocks(struct dnode_of_data *dn);
int truncate_blocks(struct inode *inode, u64 from, bool lock); int f2fs_truncate_blocks(struct inode *inode, u64 from, bool lock);
int f2fs_truncate(struct inode *inode); int f2fs_truncate(struct inode *inode);
int f2fs_getattr(const struct path *path, struct kstat *stat, int f2fs_getattr(const struct path *path, struct kstat *stat,
u32 request_mask, unsigned int flags); u32 request_mask, unsigned int flags);
int f2fs_setattr(struct dentry *dentry, struct iattr *attr); int f2fs_setattr(struct dentry *dentry, struct iattr *attr);
int truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end); int f2fs_truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end);
void truncate_data_blocks_range(struct dnode_of_data *dn, int count); void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count);
int f2fs_precache_extents(struct inode *inode); int f2fs_precache_extents(struct inode *inode);
long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg); long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);
long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg); long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
...@@ -2632,38 +2697,37 @@ bool f2fs_inode_chksum_verify(struct f2fs_sb_info *sbi, struct page *page); ...@@ -2632,38 +2697,37 @@ bool f2fs_inode_chksum_verify(struct f2fs_sb_info *sbi, struct page *page);
void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct page *page); void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct page *page);
struct inode *f2fs_iget(struct super_block *sb, unsigned long ino); struct inode *f2fs_iget(struct super_block *sb, unsigned long ino);
struct inode *f2fs_iget_retry(struct super_block *sb, unsigned long ino); struct inode *f2fs_iget_retry(struct super_block *sb, unsigned long ino);
int try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink); int f2fs_try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink);
void update_inode(struct inode *inode, struct page *node_page); void f2fs_update_inode(struct inode *inode, struct page *node_page);
void update_inode_page(struct inode *inode); void f2fs_update_inode_page(struct inode *inode);
int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc); int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc);
void f2fs_evict_inode(struct inode *inode); void f2fs_evict_inode(struct inode *inode);
void handle_failed_inode(struct inode *inode); void f2fs_handle_failed_inode(struct inode *inode);
/* /*
* namei.c * namei.c
*/ */
int update_extension_list(struct f2fs_sb_info *sbi, const char *name, int f2fs_update_extension_list(struct f2fs_sb_info *sbi, const char *name,
bool hot, bool set); bool hot, bool set);
struct dentry *f2fs_get_parent(struct dentry *child); struct dentry *f2fs_get_parent(struct dentry *child);
/* /*
* dir.c * dir.c
*/ */
void set_de_type(struct f2fs_dir_entry *de, umode_t mode); unsigned char f2fs_get_de_type(struct f2fs_dir_entry *de);
unsigned char get_de_type(struct f2fs_dir_entry *de); struct f2fs_dir_entry *f2fs_find_target_dentry(struct fscrypt_name *fname,
struct f2fs_dir_entry *find_target_dentry(struct fscrypt_name *fname,
f2fs_hash_t namehash, int *max_slots, f2fs_hash_t namehash, int *max_slots,
struct f2fs_dentry_ptr *d); struct f2fs_dentry_ptr *d);
int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d, int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
unsigned int start_pos, struct fscrypt_str *fstr); unsigned int start_pos, struct fscrypt_str *fstr);
void do_make_empty_dir(struct inode *inode, struct inode *parent, void f2fs_do_make_empty_dir(struct inode *inode, struct inode *parent,
struct f2fs_dentry_ptr *d); struct f2fs_dentry_ptr *d);
struct page *init_inode_metadata(struct inode *inode, struct inode *dir, struct page *f2fs_init_inode_metadata(struct inode *inode, struct inode *dir,
const struct qstr *new_name, const struct qstr *new_name,
const struct qstr *orig_name, struct page *dpage); const struct qstr *orig_name, struct page *dpage);
void update_parent_metadata(struct inode *dir, struct inode *inode, void f2fs_update_parent_metadata(struct inode *dir, struct inode *inode,
unsigned int current_depth); unsigned int current_depth);
int room_for_filename(const void *bitmap, int slots, int max_slots); int f2fs_room_for_filename(const void *bitmap, int slots, int max_slots);
void f2fs_drop_nlink(struct inode *dir, struct inode *inode); void f2fs_drop_nlink(struct inode *dir, struct inode *inode);
struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir,
        struct fscrypt_name *fname, struct page **res_page);
@@ -2680,9 +2744,9 @@ void f2fs_update_dentry(nid_t ino, umode_t mode, struct f2fs_dentry_ptr *d,
int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name,
        const struct qstr *orig_name,
        struct inode *inode, nid_t ino, umode_t mode);
-int __f2fs_do_add_link(struct inode *dir, struct fscrypt_name *fname,
+int f2fs_add_dentry(struct inode *dir, struct fscrypt_name *fname,
        struct inode *inode, nid_t ino, umode_t mode);
-int __f2fs_add_link(struct inode *dir, const struct qstr *name,
+int f2fs_do_add_link(struct inode *dir, const struct qstr *name,
        struct inode *inode, nid_t ino, umode_t mode);
void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
        struct inode *dir, struct inode *inode);
@@ -2691,7 +2755,7 @@ bool f2fs_empty_dir(struct inode *dir);
static inline int f2fs_add_link(struct dentry *dentry, struct inode *inode)
{
-       return __f2fs_add_link(d_inode(dentry->d_parent), &dentry->d_name,
+       return f2fs_do_add_link(d_inode(dentry->d_parent), &dentry->d_name,
                inode, inode->i_ino, inode->i_mode);
}
@@ -2706,7 +2770,7 @@ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover);
int f2fs_sync_fs(struct super_block *sb, int sync);
extern __printf(3, 4)
void f2fs_msg(struct super_block *sb, const char *level, const char *fmt, ...);
-int sanity_check_ckpt(struct f2fs_sb_info *sbi);
+int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi);
/*
 * hash.c
@@ -2720,179 +2784,183 @@ f2fs_hash_t f2fs_dentry_hash(const struct qstr *name_info,
struct dnode_of_data;
struct node_info;
+int f2fs_check_nid_range(struct f2fs_sb_info *sbi, nid_t nid);
-bool available_free_memory(struct f2fs_sb_info *sbi, int type);
+bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type);
-int need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid);
+int f2fs_need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid);
-bool is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid);
+bool f2fs_is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid);
-bool need_inode_block_update(struct f2fs_sb_info *sbi, nid_t ino);
+bool f2fs_need_inode_block_update(struct f2fs_sb_info *sbi, nid_t ino);
-void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni);
+void f2fs_get_node_info(struct f2fs_sb_info *sbi, nid_t nid,
+        struct node_info *ni);
-pgoff_t get_next_page_offset(struct dnode_of_data *dn, pgoff_t pgofs);
+pgoff_t f2fs_get_next_page_offset(struct dnode_of_data *dn, pgoff_t pgofs);
-int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode);
+int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode);
-int truncate_inode_blocks(struct inode *inode, pgoff_t from);
+int f2fs_truncate_inode_blocks(struct inode *inode, pgoff_t from);
-int truncate_xattr_node(struct inode *inode);
+int f2fs_truncate_xattr_node(struct inode *inode);
-int wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, nid_t ino);
+int f2fs_wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, nid_t ino);
-int remove_inode_page(struct inode *inode);
+int f2fs_remove_inode_page(struct inode *inode);
-struct page *new_inode_page(struct inode *inode);
+struct page *f2fs_new_inode_page(struct inode *inode);
-struct page *new_node_page(struct dnode_of_data *dn, unsigned int ofs);
+struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs);
-void ra_node_page(struct f2fs_sb_info *sbi, nid_t nid);
+void f2fs_ra_node_page(struct f2fs_sb_info *sbi, nid_t nid);
-struct page *get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid);
+struct page *f2fs_get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid);
-struct page *get_node_page_ra(struct page *parent, int start);
+struct page *f2fs_get_node_page_ra(struct page *parent, int start);
-void move_node_page(struct page *node_page, int gc_type);
+void f2fs_move_node_page(struct page *node_page, int gc_type);
-int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
+int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
        struct writeback_control *wbc, bool atomic);
-int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc,
+int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
+        struct writeback_control *wbc,
        bool do_balance, enum iostat_type io_type);
-void build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount);
+void f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount);
-bool alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid);
+bool f2fs_alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid);
-void alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid);
+void f2fs_alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid);
-void alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid);
+void f2fs_alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid);
-int try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink);
+int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink);
-void recover_inline_xattr(struct inode *inode, struct page *page);
+void f2fs_recover_inline_xattr(struct inode *inode, struct page *page);
-int recover_xattr_data(struct inode *inode, struct page *page);
+int f2fs_recover_xattr_data(struct inode *inode, struct page *page);
-int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page);
+int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct page *page);
-void restore_node_summary(struct f2fs_sb_info *sbi,
+void f2fs_restore_node_summary(struct f2fs_sb_info *sbi,
        unsigned int segno, struct f2fs_summary_block *sum);
-void flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+void f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc);
-int build_node_manager(struct f2fs_sb_info *sbi);
+int f2fs_build_node_manager(struct f2fs_sb_info *sbi);
-void destroy_node_manager(struct f2fs_sb_info *sbi);
+void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi);
-int __init create_node_manager_caches(void);
+int __init f2fs_create_node_manager_caches(void);
-void destroy_node_manager_caches(void);
+void f2fs_destroy_node_manager_caches(void);
/*
 * segment.c
 */
-bool need_SSR(struct f2fs_sb_info *sbi);
+bool f2fs_need_SSR(struct f2fs_sb_info *sbi);
-void register_inmem_page(struct inode *inode, struct page *page);
+void f2fs_register_inmem_page(struct inode *inode, struct page *page);
-void drop_inmem_pages_all(struct f2fs_sb_info *sbi);
+void f2fs_drop_inmem_pages_all(struct f2fs_sb_info *sbi, bool gc_failure);
-void drop_inmem_pages(struct inode *inode);
+void f2fs_drop_inmem_pages(struct inode *inode);
-void drop_inmem_page(struct inode *inode, struct page *page);
+void f2fs_drop_inmem_page(struct inode *inode, struct page *page);
-int commit_inmem_pages(struct inode *inode);
+int f2fs_commit_inmem_pages(struct inode *inode);
void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need);
void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi);
int f2fs_issue_flush(struct f2fs_sb_info *sbi, nid_t ino);
-int create_flush_cmd_control(struct f2fs_sb_info *sbi);
+int f2fs_create_flush_cmd_control(struct f2fs_sb_info *sbi);
int f2fs_flush_device_cache(struct f2fs_sb_info *sbi);
-void destroy_flush_cmd_control(struct f2fs_sb_info *sbi, bool free);
+void f2fs_destroy_flush_cmd_control(struct f2fs_sb_info *sbi, bool free);
-void invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr);
+void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr);
-bool is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr);
+bool f2fs_is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr);
-void init_discard_policy(struct discard_policy *dpolicy, int discard_type,
-        unsigned int granularity);
-void drop_discard_cmd(struct f2fs_sb_info *sbi);
+void f2fs_drop_discard_cmd(struct f2fs_sb_info *sbi);
-void stop_discard_thread(struct f2fs_sb_info *sbi);
+void f2fs_stop_discard_thread(struct f2fs_sb_info *sbi);
bool f2fs_wait_discard_bios(struct f2fs_sb_info *sbi);
-void clear_prefree_segments(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+void f2fs_clear_prefree_segments(struct f2fs_sb_info *sbi,
+        struct cp_control *cpc);
-void release_discard_addrs(struct f2fs_sb_info *sbi);
+void f2fs_release_discard_addrs(struct f2fs_sb_info *sbi);
-int npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra);
+int f2fs_npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra);
-void allocate_new_segments(struct f2fs_sb_info *sbi);
+void f2fs_allocate_new_segments(struct f2fs_sb_info *sbi);
int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range);
-bool exist_trim_candidates(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+bool f2fs_exist_trim_candidates(struct f2fs_sb_info *sbi,
+        struct cp_control *cpc);
-struct page *get_sum_page(struct f2fs_sb_info *sbi, unsigned int segno);
+struct page *f2fs_get_sum_page(struct f2fs_sb_info *sbi, unsigned int segno);
-void update_meta_page(struct f2fs_sb_info *sbi, void *src, block_t blk_addr);
+void f2fs_update_meta_page(struct f2fs_sb_info *sbi, void *src,
+        block_t blk_addr);
-void write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
+void f2fs_do_write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
        enum iostat_type io_type);
-void write_node_page(unsigned int nid, struct f2fs_io_info *fio);
+void f2fs_do_write_node_page(unsigned int nid, struct f2fs_io_info *fio);
-void write_data_page(struct dnode_of_data *dn, struct f2fs_io_info *fio);
+void f2fs_outplace_write_data(struct dnode_of_data *dn,
+        struct f2fs_io_info *fio);
-int rewrite_data_page(struct f2fs_io_info *fio);
+int f2fs_inplace_write_data(struct f2fs_io_info *fio);
-void __f2fs_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
        block_t old_blkaddr, block_t new_blkaddr,
        bool recover_curseg, bool recover_newaddr);
void f2fs_replace_block(struct f2fs_sb_info *sbi, struct dnode_of_data *dn,
        block_t old_addr, block_t new_addr,
        unsigned char version, bool recover_curseg,
        bool recover_newaddr);
-void allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
+void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
        block_t old_blkaddr, block_t *new_blkaddr,
        struct f2fs_summary *sum, int type,
        struct f2fs_io_info *fio, bool add_list);
void f2fs_wait_on_page_writeback(struct page *page,
        enum page_type type, bool ordered);
void f2fs_wait_on_block_writeback(struct f2fs_sb_info *sbi, block_t blkaddr);
-void write_data_summaries(struct f2fs_sb_info *sbi, block_t start_blk);
+void f2fs_write_data_summaries(struct f2fs_sb_info *sbi, block_t start_blk);
-void write_node_summaries(struct f2fs_sb_info *sbi, block_t start_blk);
+void f2fs_write_node_summaries(struct f2fs_sb_info *sbi, block_t start_blk);
-int lookup_journal_in_cursum(struct f2fs_journal *journal, int type,
+int f2fs_lookup_journal_in_cursum(struct f2fs_journal *journal, int type,
        unsigned int val, int alloc);
-void flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+void f2fs_flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc);
-int build_segment_manager(struct f2fs_sb_info *sbi);
+int f2fs_build_segment_manager(struct f2fs_sb_info *sbi);
-void destroy_segment_manager(struct f2fs_sb_info *sbi);
+void f2fs_destroy_segment_manager(struct f2fs_sb_info *sbi);
-int __init create_segment_manager_caches(void);
+int __init f2fs_create_segment_manager_caches(void);
-void destroy_segment_manager_caches(void);
+void f2fs_destroy_segment_manager_caches(void);
-int rw_hint_to_seg_type(enum rw_hint hint);
+int f2fs_rw_hint_to_seg_type(enum rw_hint hint);
-enum rw_hint io_type_to_rw_hint(struct f2fs_sb_info *sbi, enum page_type type,
-        enum temp_type temp);
+enum rw_hint f2fs_io_type_to_rw_hint(struct f2fs_sb_info *sbi,
+        enum page_type type, enum temp_type temp);
/*
 * checkpoint.c
 */
void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io);
-struct page *grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index);
+struct page *f2fs_grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index);
-struct page *get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index);
+struct page *f2fs_get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index);
-struct page *get_tmp_page(struct f2fs_sb_info *sbi, pgoff_t index);
+struct page *f2fs_get_tmp_page(struct f2fs_sb_info *sbi, pgoff_t index);
-bool is_valid_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr, int type);
+bool f2fs_is_valid_meta_blkaddr(struct f2fs_sb_info *sbi,
+        block_t blkaddr, int type);
-int ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
+int f2fs_ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
        int type, bool sync);
-void ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index);
+void f2fs_ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index);
-long sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
+long f2fs_sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
        long nr_to_write, enum iostat_type io_type);
-void add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type);
+void f2fs_add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type);
-void remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type);
+void f2fs_remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type);
-void release_ino_entry(struct f2fs_sb_info *sbi, bool all);
+void f2fs_release_ino_entry(struct f2fs_sb_info *sbi, bool all);
-bool exist_written_data(struct f2fs_sb_info *sbi, nid_t ino, int mode);
+bool f2fs_exist_written_data(struct f2fs_sb_info *sbi, nid_t ino, int mode);
-void set_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
+void f2fs_set_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
        unsigned int devidx, int type);
-bool is_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
+bool f2fs_is_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
        unsigned int devidx, int type);
int f2fs_sync_inode_meta(struct f2fs_sb_info *sbi);
-int acquire_orphan_inode(struct f2fs_sb_info *sbi);
+int f2fs_acquire_orphan_inode(struct f2fs_sb_info *sbi);
-void release_orphan_inode(struct f2fs_sb_info *sbi);
+void f2fs_release_orphan_inode(struct f2fs_sb_info *sbi);
-void add_orphan_inode(struct inode *inode);
+void f2fs_add_orphan_inode(struct inode *inode);
-void remove_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino);
+void f2fs_remove_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino);
-int recover_orphan_inodes(struct f2fs_sb_info *sbi);
+int f2fs_recover_orphan_inodes(struct f2fs_sb_info *sbi);
-int get_valid_checkpoint(struct f2fs_sb_info *sbi);
+int f2fs_get_valid_checkpoint(struct f2fs_sb_info *sbi);
-void update_dirty_page(struct inode *inode, struct page *page);
+void f2fs_update_dirty_page(struct inode *inode, struct page *page);
-void remove_dirty_inode(struct inode *inode);
+void f2fs_remove_dirty_inode(struct inode *inode);
-int sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type);
+int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type);
-int write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc);
-void init_ino_entry_info(struct f2fs_sb_info *sbi);
+void f2fs_init_ino_entry_info(struct f2fs_sb_info *sbi);
-int __init create_checkpoint_caches(void);
+int __init f2fs_create_checkpoint_caches(void);
-void destroy_checkpoint_caches(void);
+void f2fs_destroy_checkpoint_caches(void);
/*
 * data.c
 */
+int f2fs_init_post_read_processing(void);
+void f2fs_destroy_post_read_processing(void);
void f2fs_submit_merged_write(struct f2fs_sb_info *sbi, enum page_type type);
void f2fs_submit_merged_write_cond(struct f2fs_sb_info *sbi,
        struct inode *inode, nid_t ino, pgoff_t idx,
        enum page_type type);
void f2fs_flush_merged_writes(struct f2fs_sb_info *sbi);
int f2fs_submit_page_bio(struct f2fs_io_info *fio);
-int f2fs_submit_page_write(struct f2fs_io_info *fio);
+void f2fs_submit_page_write(struct f2fs_io_info *fio);
struct block_device *f2fs_target_device(struct f2fs_sb_info *sbi,
        block_t blk_addr, struct bio *bio);
int f2fs_target_device_index(struct f2fs_sb_info *sbi, block_t blkaddr);
-void set_data_blkaddr(struct dnode_of_data *dn);
+void f2fs_set_data_blkaddr(struct dnode_of_data *dn);
void f2fs_update_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr);
-int reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count);
+int f2fs_reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count);
-int reserve_new_block(struct dnode_of_data *dn);
+int f2fs_reserve_new_block(struct dnode_of_data *dn);
int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index);
int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from);
int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index);
-struct page *get_read_data_page(struct inode *inode, pgoff_t index,
+struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index,
        int op_flags, bool for_write);
-struct page *find_data_page(struct inode *inode, pgoff_t index);
+struct page *f2fs_find_data_page(struct inode *inode, pgoff_t index);
-struct page *get_lock_data_page(struct inode *inode, pgoff_t index,
+struct page *f2fs_get_lock_data_page(struct inode *inode, pgoff_t index,
        bool for_write);
-struct page *get_new_data_page(struct inode *inode,
+struct page *f2fs_get_new_data_page(struct inode *inode,
        struct page *ipage, pgoff_t index, bool new_i_size);
-int do_write_data_page(struct f2fs_io_info *fio);
+int f2fs_do_write_data_page(struct f2fs_io_info *fio);
int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
        int create, int flag);
int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
        u64 start, u64 len);
-bool should_update_inplace(struct inode *inode, struct f2fs_io_info *fio);
+bool f2fs_should_update_inplace(struct inode *inode, struct f2fs_io_info *fio);
-bool should_update_outplace(struct inode *inode, struct f2fs_io_info *fio);
+bool f2fs_should_update_outplace(struct inode *inode, struct f2fs_io_info *fio);
+void f2fs_set_page_dirty_nobuffers(struct page *page);
+int __f2fs_write_data_pages(struct address_space *mapping,
+        struct writeback_control *wbc,
+        enum iostat_type io_type);
void f2fs_invalidate_page(struct page *page, unsigned int offset,
        unsigned int length);
int f2fs_release_page(struct page *page, gfp_t wait);
@@ -2901,22 +2969,23 @@ int f2fs_migrate_page(struct address_space *mapping, struct page *newpage,
        struct page *page, enum migrate_mode mode);
#endif
bool f2fs_overwrite_io(struct inode *inode, loff_t pos, size_t len);
+void f2fs_clear_radix_tree_dirty_tag(struct page *page);
/*
 * gc.c
 */
-int start_gc_thread(struct f2fs_sb_info *sbi);
+int f2fs_start_gc_thread(struct f2fs_sb_info *sbi);
-void stop_gc_thread(struct f2fs_sb_info *sbi);
+void f2fs_stop_gc_thread(struct f2fs_sb_info *sbi);
-block_t start_bidx_of_node(unsigned int node_ofs, struct inode *inode);
+block_t f2fs_start_bidx_of_node(unsigned int node_ofs, struct inode *inode);
int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, bool background,
        unsigned int segno);
-void build_gc_manager(struct f2fs_sb_info *sbi);
+void f2fs_build_gc_manager(struct f2fs_sb_info *sbi);
/*
 * recovery.c
 */
-int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only);
+int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only);
-bool space_for_roll_forward(struct f2fs_sb_info *sbi);
+bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi);
/*
 * debug.c
@@ -2954,6 +3023,7 @@ struct f2fs_stat_info {
        int bg_node_segs, bg_data_segs;
        int tot_blks, data_blks, node_blks;
        int bg_data_blks, bg_node_blks;
+       unsigned long long skipped_atomic_files[2];
        int curseg[NR_CURSEG_TYPE];
        int cursec[NR_CURSEG_TYPE];
        int curzone[NR_CURSEG_TYPE];
@@ -3120,29 +3190,31 @@ extern const struct inode_operations f2fs_dir_inode_operations;
extern const struct inode_operations f2fs_symlink_inode_operations;
extern const struct inode_operations f2fs_encrypted_symlink_inode_operations;
extern const struct inode_operations f2fs_special_inode_operations;
-extern struct kmem_cache *inode_entry_slab;
+extern struct kmem_cache *f2fs_inode_entry_slab;
/*
 * inline.c
 */
bool f2fs_may_inline_data(struct inode *inode);
bool f2fs_may_inline_dentry(struct inode *inode);
-void read_inline_data(struct page *page, struct page *ipage);
+void f2fs_do_read_inline_data(struct page *page, struct page *ipage);
-void truncate_inline_inode(struct inode *inode, struct page *ipage, u64 from);
+void f2fs_truncate_inline_inode(struct inode *inode,
+        struct page *ipage, u64 from);
int f2fs_read_inline_data(struct inode *inode, struct page *page);
int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page);
int f2fs_convert_inline_inode(struct inode *inode);
int f2fs_write_inline_data(struct inode *inode, struct page *page);
-bool recover_inline_data(struct inode *inode, struct page *npage);
+bool f2fs_recover_inline_data(struct inode *inode, struct page *npage);
-struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir,
+struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
        struct fscrypt_name *fname, struct page **res_page);
-int make_empty_inline_dir(struct inode *inode, struct inode *parent,
+int f2fs_make_empty_inline_dir(struct inode *inode, struct inode *parent,
        struct page *ipage);
int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
        const struct qstr *orig_name,
        struct inode *inode, nid_t ino, umode_t mode);
-void f2fs_delete_inline_entry(struct f2fs_dir_entry *dentry, struct page *page,
-        struct inode *dir, struct inode *inode);
+void f2fs_delete_inline_entry(struct f2fs_dir_entry *dentry,
+        struct page *page, struct inode *dir,
+        struct inode *inode);
bool f2fs_empty_inline_dir(struct inode *dir);
int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
        struct fscrypt_str *fstr);
@@ -3163,17 +3235,17 @@ void f2fs_leave_shrinker(struct f2fs_sb_info *sbi);
/*
 * extent_cache.c
 */
-struct rb_entry *__lookup_rb_tree(struct rb_root *root,
+struct rb_entry *f2fs_lookup_rb_tree(struct rb_root *root,
        struct rb_entry *cached_re, unsigned int ofs);
-struct rb_node **__lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
+struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
        struct rb_root *root, struct rb_node **parent,
        unsigned int ofs);
-struct rb_entry *__lookup_rb_tree_ret(struct rb_root *root,
+struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root *root,
        struct rb_entry *cached_re, unsigned int ofs,
        struct rb_entry **prev_entry, struct rb_entry **next_entry,
        struct rb_node ***insert_p, struct rb_node **insert_parent,
        bool force);
-bool __check_rb_tree_consistence(struct f2fs_sb_info *sbi,
+bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi,
        struct rb_root *root);
unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink);
bool f2fs_init_extent_tree(struct inode *inode, struct f2fs_extent *i_ext);
@@ -3185,9 +3257,9 @@ bool f2fs_lookup_extent_cache(struct inode *inode, pgoff_t pgofs,
void f2fs_update_extent_cache(struct dnode_of_data *dn);
void f2fs_update_extent_cache_range(struct dnode_of_data *dn,
        pgoff_t fofs, block_t blkaddr, unsigned int len);
-void init_extent_cache_info(struct f2fs_sb_info *sbi);
+void f2fs_init_extent_cache_info(struct f2fs_sb_info *sbi);
-int __init create_extent_cache(void);
+int __init f2fs_create_extent_cache(void);
-void destroy_extent_cache(void);
+void f2fs_destroy_extent_cache(void);
/*
 * sysfs.c
@@ -3218,9 +3290,13 @@ static inline void f2fs_set_encrypted_inode(struct inode *inode)
#endif
}
-static inline bool f2fs_bio_encrypted(struct bio *bio)
+/*
+ * Returns true if the reads of the inode's data need to undergo some
+ * postprocessing step, like decryption or authenticity verification.
+ */
+static inline bool f2fs_post_read_required(struct inode *inode)
{
-       return bio->bi_private != NULL;
+       return f2fs_encrypted_file(inode);
}
#define F2FS_FEATURE_FUNCS(name, flagname) \
@@ -3288,7 +3364,7 @@ static inline bool f2fs_may_encrypt(struct inode *inode)
static inline bool f2fs_force_buffered_io(struct inode *inode, int rw)
{
-       return (f2fs_encrypted_file(inode) ||
+       return (f2fs_post_read_required(inode) ||
                (rw == WRITE && test_opt(F2FS_I_SB(inode), LFS)) ||
                F2FS_I_SB(inode)->s_ndevs);
}
...
@@ -33,19 +33,19 @@
#include "trace.h"
#include <trace/events/f2fs.h>
-static int f2fs_filemap_fault(struct vm_fault *vmf)
+static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf)
{
        struct inode *inode = file_inode(vmf->vma->vm_file);
-       int err;
+       vm_fault_t ret;
        down_read(&F2FS_I(inode)->i_mmap_sem);
-       err = filemap_fault(vmf);
+       ret = filemap_fault(vmf);
        up_read(&F2FS_I(inode)->i_mmap_sem);
-       return err;
+       return ret;
}
-static int f2fs_vm_page_mkwrite(struct vm_fault *vmf)
+static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
{
        struct page *page = vmf->page;
        struct inode *inode = file_inode(vmf->vma->vm_file);
@@ -95,7 +95,8 @@ static int f2fs_vm_page_mkwrite(struct vm_fault *vmf)
        /* page is wholly or partially inside EOF */
        if (((loff_t)(page->index + 1) << PAGE_SHIFT) >
                        i_size_read(inode)) {
-               unsigned offset;
+               loff_t offset;
+
                offset = i_size_read(inode) & ~PAGE_MASK;
                zero_user_segment(page, offset, PAGE_SIZE);
        }
@@ -110,8 +111,8 @@ static int f2fs_vm_page_mkwrite(struct vm_fault *vmf)
        /* fill the page */
        f2fs_wait_on_page_writeback(page, DATA, false);
-       /* wait for GCed encrypted page writeback */
-       if (f2fs_encrypted_file(inode))
+       /* wait for GCed page writeback via META_MAPPING */
+       if (f2fs_post_read_required(inode))
                f2fs_wait_on_block_writeback(sbi, dn.data_blkaddr);
out_sem:
@@ -157,17 +158,18 @@ static inline enum cp_reason_type need_do_checkpoint(struct inode *inode)
                cp_reason = CP_SB_NEED_CP;
        else if (file_wrong_pino(inode))
                cp_reason = CP_WRONG_PINO;
-       else if (!space_for_roll_forward(sbi))
+       else if (!f2fs_space_for_roll_forward(sbi))
                cp_reason = CP_NO_SPC_ROLL;
-       else if (!is_checkpointed_node(sbi, F2FS_I(inode)->i_pino))
+       else if (!f2fs_is_checkpointed_node(sbi, F2FS_I(inode)->i_pino))
                cp_reason = CP_NODE_NEED_CP;
        else if (test_opt(sbi, FASTBOOT))
                cp_reason = CP_FASTBOOT_MODE;
        else if (F2FS_OPTION(sbi).active_logs == 2)
                cp_reason = CP_SPEC_LOG_NUM;
        else if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT &&
                f2fs_need_dentry_mark(sbi, inode->i_ino) &&
-               exist_written_data(sbi, F2FS_I(inode)->i_pino, TRANS_DIR_INO))
+               f2fs_exist_written_data(sbi, F2FS_I(inode)->i_pino,
+                               TRANS_DIR_INO))
                cp_reason = CP_RECOVER_DIR;
        return cp_reason;
@@ -178,7 +180,7 @@ static bool need_inode_page_update(struct f2fs_sb_info *sbi, nid_t ino)
 	struct page *i = find_get_page(NODE_MAPPING(sbi), ino);
 	bool ret = false;
 	/* But we need to avoid that there are some inode updates */
-	if ((i && PageDirty(i)) || need_inode_block_update(sbi, ino))
+	if ((i && PageDirty(i)) || f2fs_need_inode_block_update(sbi, ino))
 		ret = true;
 	f2fs_put_page(i, 0);
 	return ret;
@@ -238,14 +240,14 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
 	 * if there is no written data, don't waste time to write recovery info.
 	 */
 	if (!is_inode_flag_set(inode, FI_APPEND_WRITE) &&
-			!exist_written_data(sbi, ino, APPEND_INO)) {
+			!f2fs_exist_written_data(sbi, ino, APPEND_INO)) {
 		/* it may call write_inode just prior to fsync */
 		if (need_inode_page_update(sbi, ino))
 			goto go_write;

 		if (is_inode_flag_set(inode, FI_UPDATE_WRITE) ||
-				exist_written_data(sbi, ino, UPDATE_INO))
+				f2fs_exist_written_data(sbi, ino, UPDATE_INO))
 			goto flush_out;
 		goto out;
 	}
@@ -272,7 +274,9 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
 		goto out;
 	}
 sync_nodes:
-	ret = fsync_node_pages(sbi, inode, &wbc, atomic);
+	atomic_inc(&sbi->wb_sync_req[NODE]);
+	ret = f2fs_fsync_node_pages(sbi, inode, &wbc, atomic);
+	atomic_dec(&sbi->wb_sync_req[NODE]);
 	if (ret)
 		goto out;
@@ -282,7 +286,7 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
 		goto out;
 	}

-	if (need_inode_block_update(sbi, ino)) {
+	if (f2fs_need_inode_block_update(sbi, ino)) {
 		f2fs_mark_inode_dirty_sync(inode, true);
 		f2fs_write_inode(inode, NULL);
 		goto sync_nodes;
@@ -297,21 +301,21 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
 	 * given fsync mark.
 	 */
 	if (!atomic) {
-		ret = wait_on_node_pages_writeback(sbi, ino);
+		ret = f2fs_wait_on_node_pages_writeback(sbi, ino);
 		if (ret)
 			goto out;
 	}

 	/* once recovery info is written, don't need to tack this */
-	remove_ino_entry(sbi, ino, APPEND_INO);
+	f2fs_remove_ino_entry(sbi, ino, APPEND_INO);
 	clear_inode_flag(inode, FI_APPEND_WRITE);
 flush_out:
-	if (!atomic)
+	if (!atomic && F2FS_OPTION(sbi).fsync_mode != FSYNC_MODE_NOBARRIER)
 		ret = f2fs_issue_flush(sbi, inode->i_ino);
 	if (!ret) {
-		remove_ino_entry(sbi, ino, UPDATE_INO);
+		f2fs_remove_ino_entry(sbi, ino, UPDATE_INO);
 		clear_inode_flag(inode, FI_UPDATE_WRITE);
-		remove_ino_entry(sbi, ino, FLUSH_INO);
+		f2fs_remove_ino_entry(sbi, ino, FLUSH_INO);
 	}
 	f2fs_update_time(sbi, REQ_TIME);
 out:
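The hunk above adds the `fsync_mode=nobarrier` behavior: the preflush/cache flush issued at `flush_out:` is now skipped when the mount option requests no barriers (in addition to the existing atomic-write case). A minimal standalone model of that decision, with illustrative names rather than kernel symbols:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the flush decision at flush_out: in
 * f2fs_do_sync_file(). Enum values mirror the mount option semantics
 * described in the patch; they are not the kernel's definitions. */
enum fsync_mode { FSYNC_MODE_POSIX, FSYNC_MODE_STRICT, FSYNC_MODE_NOBARRIER };

static bool should_issue_flush(bool atomic_write, enum fsync_mode mode)
{
	/* flush only for non-atomic fsync, and only when barriers are on */
	return !atomic_write && mode != FSYNC_MODE_NOBARRIER;
}
```

The trade-off is the usual one for "nobarrier" options: fewer device cache flushes per fsync(2), at the cost of relying on the device (or a power-loss-protected cache) to persist writes in order.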
@@ -352,7 +356,7 @@ static bool __found_offset(block_t blkaddr, pgoff_t dirty, pgoff_t pgofs,
 	switch (whence) {
 	case SEEK_DATA:
 		if ((blkaddr == NEW_ADDR && dirty == pgofs) ||
-			(blkaddr != NEW_ADDR && blkaddr != NULL_ADDR))
+			is_valid_blkaddr(blkaddr))
 			return true;
 		break;
 	case SEEK_HOLE:
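The hunk above folds the open-coded address check into `is_valid_blkaddr()` without changing the SEEK_DATA predicate: an offset counts as data if it is a reserved-but-dirty page, or maps to any valid on-disk block. A self-contained sketch of that logic (placeholder `NEW_ADDR`/`NULL_ADDR` values, not the real f2fs constants):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t block_t;

/* Placeholder reserved addresses for the example only. */
#define NULL_ADDR ((block_t)0)   /* no block allocated */
#define NEW_ADDR  ((block_t)-1)  /* preallocated, not yet written */

static bool is_valid_blkaddr(block_t blkaddr)
{
	return blkaddr != NEW_ADDR && blkaddr != NULL_ADDR;
}

/* Model of the SEEK_DATA arm of __found_offset(): dirty == pgofs means the
 * page at this offset is dirty in the page cache. */
static bool found_data(block_t blkaddr, uint64_t dirty, uint64_t pgofs)
{
	return (blkaddr == NEW_ADDR && dirty == pgofs) ||
	       is_valid_blkaddr(blkaddr);
}
```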
@@ -392,13 +396,13 @@ static loff_t f2fs_seek_block(struct file *file, loff_t offset, int whence)
 	for (; data_ofs < isize; data_ofs = (loff_t)pgofs << PAGE_SHIFT) {
 		set_new_dnode(&dn, inode, NULL, NULL, 0);
-		err = get_dnode_of_data(&dn, pgofs, LOOKUP_NODE);
+		err = f2fs_get_dnode_of_data(&dn, pgofs, LOOKUP_NODE);
 		if (err && err != -ENOENT) {
 			goto fail;
 		} else if (err == -ENOENT) {
 			/* direct node does not exists */
 			if (whence == SEEK_DATA) {
-				pgofs = get_next_page_offset(&dn, pgofs);
+				pgofs = f2fs_get_next_page_offset(&dn, pgofs);
 				continue;
 			} else {
 				goto found;
@@ -412,6 +416,7 @@ static loff_t f2fs_seek_block(struct file *file, loff_t offset, int whence)
 				dn.ofs_in_node++, pgofs++,
 				data_ofs = (loff_t)pgofs << PAGE_SHIFT) {
 			block_t blkaddr;
+
 			blkaddr = datablock_addr(dn.inode,
 					dn.node_page, dn.ofs_in_node);
@@ -486,7 +491,7 @@ static int f2fs_file_open(struct inode *inode, struct file *filp)
 	return dquot_file_open(inode, filp);
 }

-void truncate_data_blocks_range(struct dnode_of_data *dn, int count)
+void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
 	struct f2fs_node *raw_node;
@@ -502,12 +507,13 @@ void truncate_data_blocks_range(struct dnode_of_data *dn, int count)
 	for (; count > 0; count--, addr++, dn->ofs_in_node++) {
 		block_t blkaddr = le32_to_cpu(*addr);
+
 		if (blkaddr == NULL_ADDR)
 			continue;

 		dn->data_blkaddr = NULL_ADDR;
-		set_data_blkaddr(dn);
-		invalidate_blocks(sbi, blkaddr);
+		f2fs_set_data_blkaddr(dn);
+		f2fs_invalidate_blocks(sbi, blkaddr);
 		if (dn->ofs_in_node == 0 && IS_INODE(dn->node_page))
 			clear_inode_flag(dn->inode, FI_FIRST_BLOCK_WRITTEN);
 		nr_free++;
@@ -519,7 +525,7 @@ void truncate_data_blocks_range(struct dnode_of_data *dn, int count)
 	 * once we invalidate valid blkaddr in range [ofs, ofs + count],
 	 * we will invalidate all blkaddr in the whole range.
 	 */
-	fofs = start_bidx_of_node(ofs_of_node(dn->node_page),
+	fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page),
 						dn->inode) + ofs;
 	f2fs_update_extent_cache_range(dn, fofs, 0, len);
 	dec_valid_block_count(sbi, dn->inode, nr_free);
@@ -531,15 +537,15 @@ void truncate_data_blocks_range(struct dnode_of_data *dn, int count)
 					 dn->ofs_in_node, nr_free);
 }

-void truncate_data_blocks(struct dnode_of_data *dn)
+void f2fs_truncate_data_blocks(struct dnode_of_data *dn)
 {
-	truncate_data_blocks_range(dn, ADDRS_PER_BLOCK);
+	f2fs_truncate_data_blocks_range(dn, ADDRS_PER_BLOCK);
 }

 static int truncate_partial_data_page(struct inode *inode, u64 from,
 								bool cache_only)
 {
-	unsigned offset = from & (PAGE_SIZE - 1);
+	loff_t offset = from & (PAGE_SIZE - 1);
 	pgoff_t index = from >> PAGE_SHIFT;
 	struct address_space *mapping = inode->i_mapping;
 	struct page *page;
@@ -555,7 +561,7 @@ static int truncate_partial_data_page(struct inode *inode, u64 from,
 		return 0;
 	}

-	page = get_lock_data_page(inode, index, true);
+	page = f2fs_get_lock_data_page(inode, index, true);
 	if (IS_ERR(page))
 		return PTR_ERR(page) == -ENOENT ? 0 : PTR_ERR(page);
 truncate_out:
@@ -570,7 +576,7 @@ static int truncate_partial_data_page(struct inode *inode, u64 from,
 	return 0;
 }

-int truncate_blocks(struct inode *inode, u64 from, bool lock)
+int f2fs_truncate_blocks(struct inode *inode, u64 from, bool lock)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct dnode_of_data dn;
@@ -589,21 +595,21 @@ int truncate_blocks(struct inode *inode, u64 from, bool lock)
 	if (lock)
 		f2fs_lock_op(sbi);

-	ipage = get_node_page(sbi, inode->i_ino);
+	ipage = f2fs_get_node_page(sbi, inode->i_ino);
 	if (IS_ERR(ipage)) {
 		err = PTR_ERR(ipage);
 		goto out;
 	}

 	if (f2fs_has_inline_data(inode)) {
-		truncate_inline_inode(inode, ipage, from);
+		f2fs_truncate_inline_inode(inode, ipage, from);
 		f2fs_put_page(ipage, 1);
 		truncate_page = true;
 		goto out;
 	}

 	set_new_dnode(&dn, inode, ipage, NULL, 0);
-	err = get_dnode_of_data(&dn, free_from, LOOKUP_NODE_RA);
+	err = f2fs_get_dnode_of_data(&dn, free_from, LOOKUP_NODE_RA);
 	if (err) {
 		if (err == -ENOENT)
 			goto free_next;
@@ -616,13 +622,13 @@ int truncate_blocks(struct inode *inode, u64 from, bool lock)
 		f2fs_bug_on(sbi, count < 0);

 		if (dn.ofs_in_node || IS_INODE(dn.node_page)) {
-			truncate_data_blocks_range(&dn, count);
+			f2fs_truncate_data_blocks_range(&dn, count);
 			free_from += count;
 		}

 	f2fs_put_dnode(&dn);
 free_next:
-	err = truncate_inode_blocks(inode, free_from);
+	err = f2fs_truncate_inode_blocks(inode, free_from);
 out:
 	if (lock)
 		f2fs_unlock_op(sbi);
@@ -661,7 +667,7 @@ int f2fs_truncate(struct inode *inode)
 			return err;
 	}

-	err = truncate_blocks(inode, i_size_read(inode), true);
+	err = f2fs_truncate_blocks(inode, i_size_read(inode), true);
 	if (err)
 		return err;
@@ -686,16 +692,16 @@ int f2fs_getattr(const struct path *path, struct kstat *stat,
 		stat->btime.tv_nsec = fi->i_crtime.tv_nsec;
 	}

-	flags = fi->i_flags & (FS_FL_USER_VISIBLE | FS_PROJINHERIT_FL);
-	if (flags & FS_APPEND_FL)
+	flags = fi->i_flags & F2FS_FL_USER_VISIBLE;
+	if (flags & F2FS_APPEND_FL)
 		stat->attributes |= STATX_ATTR_APPEND;
-	if (flags & FS_COMPR_FL)
+	if (flags & F2FS_COMPR_FL)
 		stat->attributes |= STATX_ATTR_COMPRESSED;
 	if (f2fs_encrypted_inode(inode))
 		stat->attributes |= STATX_ATTR_ENCRYPTED;
-	if (flags & FS_IMMUTABLE_FL)
+	if (flags & F2FS_IMMUTABLE_FL)
 		stat->attributes |= STATX_ATTR_IMMUTABLE;
-	if (flags & FS_NODUMP_FL)
+	if (flags & F2FS_NODUMP_FL)
 		stat->attributes |= STATX_ATTR_NODUMP;

 	stat->attributes_mask |= (STATX_ATTR_APPEND |
@@ -811,7 +817,7 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr)
 		__setattr_copy(inode, attr);

 		if (attr->ia_valid & ATTR_MODE) {
-			err = posix_acl_chmod(inode, get_inode_mode(inode));
+			err = posix_acl_chmod(inode, f2fs_get_inode_mode(inode));
 			if (err || is_inode_flag_set(inode, FI_ACL_MODE)) {
 				inode->i_mode = F2FS_I(inode)->i_acl_mode;
 				clear_inode_flag(inode, FI_ACL_MODE);
@@ -850,7 +856,7 @@ static int fill_zero(struct inode *inode, pgoff_t index,
 	f2fs_balance_fs(sbi, true);

 	f2fs_lock_op(sbi);
-	page = get_new_data_page(inode, NULL, index, false);
+	page = f2fs_get_new_data_page(inode, NULL, index, false);
 	f2fs_unlock_op(sbi);

 	if (IS_ERR(page))
@@ -863,7 +869,7 @@ static int fill_zero(struct inode *inode, pgoff_t index,
 	return 0;
 }

-int truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end)
+int f2fs_truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end)
 {
 	int err;
@@ -872,10 +878,11 @@ int truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end)
 		pgoff_t end_offset, count;

 		set_new_dnode(&dn, inode, NULL, NULL, 0);
-		err = get_dnode_of_data(&dn, pg_start, LOOKUP_NODE);
+		err = f2fs_get_dnode_of_data(&dn, pg_start, LOOKUP_NODE);
 		if (err) {
 			if (err == -ENOENT) {
-				pg_start = get_next_page_offset(&dn, pg_start);
+				pg_start = f2fs_get_next_page_offset(&dn,
								pg_start);
 				continue;
 			}
 			return err;
@@ -886,7 +893,7 @@ int truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end)
 		f2fs_bug_on(F2FS_I_SB(inode), count == 0 || count > end_offset);

-		truncate_data_blocks_range(&dn, count);
+		f2fs_truncate_data_blocks_range(&dn, count);
 		f2fs_put_dnode(&dn);

 		pg_start += count;
@@ -942,7 +949,7 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
 							blk_end - 1);

 			f2fs_lock_op(sbi);
-			ret = truncate_hole(inode, pg_start, pg_end);
+			ret = f2fs_truncate_hole(inode, pg_start, pg_end);
 			f2fs_unlock_op(sbi);
 			up_write(&F2FS_I(inode)->i_mmap_sem);
 		}
@@ -960,7 +967,7 @@ static int __read_out_blkaddrs(struct inode *inode, block_t *blkaddr,
 next_dnode:
 	set_new_dnode(&dn, inode, NULL, NULL, 0);
-	ret = get_dnode_of_data(&dn, off, LOOKUP_NODE_RA);
+	ret = f2fs_get_dnode_of_data(&dn, off, LOOKUP_NODE_RA);
 	if (ret && ret != -ENOENT) {
 		return ret;
 	} else if (ret == -ENOENT) {
@@ -977,7 +984,7 @@ static int __read_out_blkaddrs(struct inode *inode, block_t *blkaddr,
 	for (i = 0; i < done; i++, blkaddr++, do_replace++, dn.ofs_in_node++) {
 		*blkaddr = datablock_addr(dn.inode,
 					dn.node_page, dn.ofs_in_node);
-		if (!is_checkpointed_data(sbi, *blkaddr)) {
+		if (!f2fs_is_checkpointed_data(sbi, *blkaddr)) {

 			if (test_opt(sbi, LFS)) {
 				f2fs_put_dnode(&dn);
@@ -1010,10 +1017,10 @@ static int __roll_back_blkaddrs(struct inode *inode, block_t *blkaddr,
 			continue;
 		set_new_dnode(&dn, inode, NULL, NULL, 0);
-		ret = get_dnode_of_data(&dn, off + i, LOOKUP_NODE_RA);
+		ret = f2fs_get_dnode_of_data(&dn, off + i, LOOKUP_NODE_RA);
 		if (ret) {
 			dec_valid_block_count(sbi, inode, 1);
-			invalidate_blocks(sbi, *blkaddr);
+			f2fs_invalidate_blocks(sbi, *blkaddr);
 		} else {
 			f2fs_update_data_blkaddr(&dn, *blkaddr);
 		}
@@ -1043,18 +1050,18 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
 			pgoff_t ilen;

 			set_new_dnode(&dn, dst_inode, NULL, NULL, 0);
-			ret = get_dnode_of_data(&dn, dst + i, ALLOC_NODE);
+			ret = f2fs_get_dnode_of_data(&dn, dst + i, ALLOC_NODE);
 			if (ret)
 				return ret;

-			get_node_info(sbi, dn.nid, &ni);
+			f2fs_get_node_info(sbi, dn.nid, &ni);
 			ilen = min((pgoff_t)
 				ADDRS_PER_PAGE(dn.node_page, dst_inode) -
 						dn.ofs_in_node, len - i);
 			do {
 				dn.data_blkaddr = datablock_addr(dn.inode,
 						dn.node_page, dn.ofs_in_node);
-				truncate_data_blocks_range(&dn, 1);
+				f2fs_truncate_data_blocks_range(&dn, 1);

 				if (do_replace[i]) {
 					f2fs_i_blocks_write(src_inode,
@@ -1077,10 +1084,11 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
 		} else {
 			struct page *psrc, *pdst;

-			psrc = get_lock_data_page(src_inode, src + i, true);
+			psrc = f2fs_get_lock_data_page(src_inode,
							src + i, true);
 			if (IS_ERR(psrc))
 				return PTR_ERR(psrc);
-			pdst = get_new_data_page(dst_inode, NULL, dst + i,
+			pdst = f2fs_get_new_data_page(dst_inode, NULL, dst + i,
 								true);
 			if (IS_ERR(pdst)) {
 				f2fs_put_page(psrc, 1);
@@ -1091,7 +1099,8 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
 			f2fs_put_page(pdst, 1);
 			f2fs_put_page(psrc, 1);

-			ret = truncate_hole(src_inode, src + i, src + i + 1);
+			ret = f2fs_truncate_hole(src_inode,
						src + i, src + i + 1);
 			if (ret)
 				return ret;
 			i++;
@@ -1144,7 +1153,7 @@ static int __exchange_data_block(struct inode *src_inode,
 	return 0;

 roll_back:
-	__roll_back_blkaddrs(src_inode, src_blkaddr, do_replace, src, len);
+	__roll_back_blkaddrs(src_inode, src_blkaddr, do_replace, src, olen);
 	kvfree(src_blkaddr);
 	kvfree(do_replace);
 	return ret;
@@ -1187,7 +1196,7 @@ static int f2fs_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 	pg_end = (offset + len) >> PAGE_SHIFT;

 	/* avoid gc operation during block exchange */
-	down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 	down_write(&F2FS_I(inode)->i_mmap_sem);

 	/* write out all dirty pages from offset */
@@ -1208,12 +1217,12 @@ static int f2fs_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 	new_size = i_size_read(inode) - len;
 	truncate_pagecache(inode, new_size);

-	ret = truncate_blocks(inode, new_size, true);
+	ret = f2fs_truncate_blocks(inode, new_size, true);
 	if (!ret)
 		f2fs_i_size_write(inode, new_size);
 out_unlock:
 	up_write(&F2FS_I(inode)->i_mmap_sem);
-	up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 	return ret;
 }
@@ -1233,7 +1242,7 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
 	}

 	dn->ofs_in_node = ofs_in_node;
-	ret = reserve_new_blocks(dn, count);
+	ret = f2fs_reserve_new_blocks(dn, count);
 	if (ret)
 		return ret;
@@ -1242,7 +1251,7 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
 		dn->data_blkaddr = datablock_addr(dn->inode,
 					dn->node_page, dn->ofs_in_node);
 		/*
-		 * reserve_new_blocks will not guarantee entire block
+		 * f2fs_reserve_new_blocks will not guarantee entire block
 		 * allocation.
 		 */
 		if (dn->data_blkaddr == NULL_ADDR) {
@@ -1250,9 +1259,9 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
 			break;
 		}
 		if (dn->data_blkaddr != NEW_ADDR) {
-			invalidate_blocks(sbi, dn->data_blkaddr);
+			f2fs_invalidate_blocks(sbi, dn->data_blkaddr);
 			dn->data_blkaddr = NEW_ADDR;
-			set_data_blkaddr(dn);
+			f2fs_set_data_blkaddr(dn);
 		}
 	}
@@ -1318,7 +1327,7 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
 			f2fs_lock_op(sbi);

 			set_new_dnode(&dn, inode, NULL, NULL, 0);
-			ret = get_dnode_of_data(&dn, index, ALLOC_NODE);
+			ret = f2fs_get_dnode_of_data(&dn, index, ALLOC_NODE);
 			if (ret) {
 				f2fs_unlock_op(sbi);
 				goto out;
@@ -1389,10 +1398,10 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
 	f2fs_balance_fs(sbi, true);

 	/* avoid gc operation during block exchange */
-	down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 	down_write(&F2FS_I(inode)->i_mmap_sem);

-	ret = truncate_blocks(inode, i_size_read(inode), true);
+	ret = f2fs_truncate_blocks(inode, i_size_read(inode), true);
 	if (ret)
 		goto out;
@@ -1430,7 +1439,7 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
 		f2fs_i_size_write(inode, new_size);
 out:
 	up_write(&F2FS_I(inode)->i_mmap_sem);
-	up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 	return ret;
 }
@@ -1473,7 +1482,7 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
 		last_off = map.m_lblk + map.m_len - 1;

 		/* update new size to the failed position */
-		new_size = (last_off == pg_end) ? offset + len:
+		new_size = (last_off == pg_end) ? offset + len :
 			(loff_t)(last_off + 1) << PAGE_SHIFT;
 	} else {
 		new_size = ((loff_t)pg_end << PAGE_SHIFT) + off_end;
@@ -1553,13 +1562,13 @@ static int f2fs_release_file(struct inode *inode, struct file *filp)
 	/* some remained atomic pages should discarded */
 	if (f2fs_is_atomic_file(inode))
-		drop_inmem_pages(inode);
+		f2fs_drop_inmem_pages(inode);
 	if (f2fs_is_volatile_file(inode)) {
-		clear_inode_flag(inode, FI_VOLATILE_FILE);
-		stat_dec_volatile_write(inode);
 		set_inode_flag(inode, FI_DROP_CACHE);
 		filemap_fdatawrite(inode->i_mapping);
 		clear_inode_flag(inode, FI_DROP_CACHE);
+		clear_inode_flag(inode, FI_VOLATILE_FILE);
+		stat_dec_volatile_write(inode);
 	}
 	return 0;
 }
@@ -1576,7 +1585,7 @@ static int f2fs_file_flush(struct file *file, fl_owner_t id)
 	 */
 	if (f2fs_is_atomic_file(inode) &&
 			F2FS_I(inode)->inmem_task == current)
-		drop_inmem_pages(inode);
+		f2fs_drop_inmem_pages(inode);
 	return 0;
 }
@@ -1584,8 +1593,15 @@ static int f2fs_ioc_getflags(struct file *filp, unsigned long arg)
 {
 	struct inode *inode = file_inode(filp);
 	struct f2fs_inode_info *fi = F2FS_I(inode);
-	unsigned int flags = fi->i_flags &
-			(FS_FL_USER_VISIBLE | FS_PROJINHERIT_FL);
+	unsigned int flags = fi->i_flags;
+
+	if (file_is_encrypt(inode))
+		flags |= F2FS_ENCRYPT_FL;
+	if (f2fs_has_inline_data(inode) || f2fs_has_inline_dentry(inode))
+		flags |= F2FS_INLINE_DATA_FL;
+
+	flags &= F2FS_FL_USER_VISIBLE;
+
 	return put_user(flags, (int __user *)arg);
 }
@@ -1602,15 +1618,15 @@ static int __f2fs_ioc_setflags(struct inode *inode, unsigned int flags)
 	oldflags = fi->i_flags;

-	if ((flags ^ oldflags) & (FS_APPEND_FL | FS_IMMUTABLE_FL))
+	if ((flags ^ oldflags) & (F2FS_APPEND_FL | F2FS_IMMUTABLE_FL))
 		if (!capable(CAP_LINUX_IMMUTABLE))
 			return -EPERM;

-	flags = flags & (FS_FL_USER_MODIFIABLE | FS_PROJINHERIT_FL);
-	flags |= oldflags & ~(FS_FL_USER_MODIFIABLE | FS_PROJINHERIT_FL);
+	flags = flags & F2FS_FL_USER_MODIFIABLE;
+	flags |= oldflags & ~F2FS_FL_USER_MODIFIABLE;
 	fi->i_flags = flags;

-	if (fi->i_flags & FS_PROJINHERIT_FL)
+	if (fi->i_flags & F2FS_PROJINHERIT_FL)
 		set_inode_flag(inode, FI_PROJ_INHERIT);
 	else
 		clear_inode_flag(inode, FI_PROJ_INHERIT);
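The SETFLAGS path above keeps its invariant while switching to the f2fs-private flag namespace: only user-modifiable bits are taken from the ioctl argument, and every other bit is carried over from the inode's current flags. A minimal sketch of that masking (the mask value here is made up for the example; the kernel uses `F2FS_FL_USER_MODIFIABLE`):

```c
#include <assert.h>

/* Illustrative mask standing in for F2FS_FL_USER_MODIFIABLE. */
#define USER_MODIFIABLE 0x00ffu

/* Model of the merge done in __f2fs_ioc_setflags(): userspace controls only
 * the modifiable bits; kernel-owned bits in oldflags are preserved. */
static unsigned int merge_flags(unsigned int oldflags, unsigned int requested)
{
	unsigned int flags = requested & USER_MODIFIABLE;

	flags |= oldflags & ~USER_MODIFIABLE;
	return flags;
}
```

This is why the patch can widen or narrow the visible flag set (as the GETFLAGS hunk does with `F2FS_ENCRYPT_FL` and `F2FS_INLINE_DATA_FL`) without letting userspace toggle kernel-owned bits.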
@@ -1670,6 +1686,8 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
 	inode_lock(inode);

+	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+
 	if (f2fs_is_atomic_file(inode))
 		goto out;
@@ -1677,28 +1695,25 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
 	if (ret)
 		goto out;

-	set_inode_flag(inode, FI_ATOMIC_FILE);
-	set_inode_flag(inode, FI_HOT_DATA);
-	f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
-
 	if (!get_dirty_pages(inode))
-		goto inc_stat;
+		goto skip_flush;

 	f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
 		"Unexpected flush for atomic writes: ino=%lu, npages=%u",
 					inode->i_ino, get_dirty_pages(inode));
 	ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
-	if (ret) {
-		clear_inode_flag(inode, FI_ATOMIC_FILE);
-		clear_inode_flag(inode, FI_HOT_DATA);
+	if (ret)
 		goto out;
-	}
+skip_flush:
+	set_inode_flag(inode, FI_ATOMIC_FILE);
+	clear_inode_flag(inode, FI_ATOMIC_REVOKE_REQUEST);
+	f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);

-inc_stat:
 	F2FS_I(inode)->inmem_task = current;
 	stat_inc_atomic_write(inode);
 	stat_update_max_atomic_write(inode);
 out:
+	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 	inode_unlock(inode);
 	mnt_drop_write_file(filp);
 	return ret;
@@ -1718,27 +1733,33 @@ static int f2fs_ioc_commit_atomic_write(struct file *filp)
 	inode_lock(inode);

-	down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);

-	if (f2fs_is_volatile_file(inode))
+	if (f2fs_is_volatile_file(inode)) {
+		ret = -EINVAL;
 		goto err_out;
+	}

 	if (f2fs_is_atomic_file(inode)) {
-		ret = commit_inmem_pages(inode);
+		ret = f2fs_commit_inmem_pages(inode);
 		if (ret)
 			goto err_out;

 		ret = f2fs_do_sync_file(filp, 0, LLONG_MAX, 0, true);
 		if (!ret) {
 			clear_inode_flag(inode, FI_ATOMIC_FILE);
-			clear_inode_flag(inode, FI_HOT_DATA);
+			F2FS_I(inode)->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
 			stat_dec_atomic_write(inode);
 		}
 	} else {
 		ret = f2fs_do_sync_file(filp, 0, LLONG_MAX, 1, false);
 	}
 err_out:
-	up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+	if (is_inode_flag_set(inode, FI_ATOMIC_REVOKE_REQUEST)) {
+		clear_inode_flag(inode, FI_ATOMIC_REVOKE_REQUEST);
+		ret = -EINVAL;
+	}
+	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 	inode_unlock(inode);
 	mnt_drop_write_file(filp);
 	return ret;
@@ -1823,7 +1844,7 @@ static int f2fs_ioc_abort_volatile_write(struct file *filp)
 	inode_lock(inode);
 	if (f2fs_is_atomic_file(inode))
-		drop_inmem_pages(inode);
+		f2fs_drop_inmem_pages(inode);
 	if (f2fs_is_volatile_file(inode)) {
 		clear_inode_flag(inode, FI_VOLATILE_FILE);
 		stat_dec_volatile_write(inode);
@@ -1851,9 +1872,11 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
 	if (get_user(in, (__u32 __user *)arg))
 		return -EFAULT;
-	ret = mnt_want_write_file(filp);
-	if (ret)
-		return ret;
+	if (in != F2FS_GOING_DOWN_FULLSYNC) {
+		ret = mnt_want_write_file(filp);
+		if (ret)
+			return ret;
+	}
 	switch (in) {
 	case F2FS_GOING_DOWN_FULLSYNC:
@@ -1878,7 +1901,7 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
 		f2fs_stop_checkpoint(sbi, false);
 		break;
 	case F2FS_GOING_DOWN_METAFLUSH:
-		sync_meta_pages(sbi, META, LONG_MAX, FS_META_IO);
+		f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_META_IO);
 		f2fs_stop_checkpoint(sbi, false);
 		break;
 	default:
@@ -1886,15 +1909,16 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
 		goto out;
 	}
-	stop_gc_thread(sbi);
-	stop_discard_thread(sbi);
-	drop_discard_cmd(sbi);
+	f2fs_stop_gc_thread(sbi);
+	f2fs_stop_discard_thread(sbi);
+	f2fs_drop_discard_cmd(sbi);
 	clear_opt(sbi, DISCARD);
 	f2fs_update_time(sbi, REQ_TIME);
 out:
-	mnt_drop_write_file(filp);
+	if (in != F2FS_GOING_DOWN_FULLSYNC)
+		mnt_drop_write_file(filp);
 	return ret;
 }
@@ -2053,15 +2077,15 @@ static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
 	if (f2fs_readonly(sbi->sb))
 		return -EROFS;
-	end = range.start + range.len;
-	if (range.start < MAIN_BLKADDR(sbi) || end >= MAX_BLKADDR(sbi)) {
-		return -EINVAL;
-	}
 	ret = mnt_want_write_file(filp);
 	if (ret)
 		return ret;
+	end = range.start + range.len;
+	if (range.start < MAIN_BLKADDR(sbi) || end >= MAX_BLKADDR(sbi)) {
+		ret = -EINVAL;
+		goto out;
+	}
do_more:
 	if (!range.sync) {
 		if (!mutex_trylock(&sbi->gc_mutex)) {
@@ -2081,7 +2105,7 @@ static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
 	return ret;
 }
-static int f2fs_ioc_write_checkpoint(struct file *filp, unsigned long arg)
+static int f2fs_ioc_f2fs_write_checkpoint(struct file *filp, unsigned long arg)
 {
 	struct inode *inode = file_inode(filp);
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
@@ -2110,7 +2134,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
 	struct inode *inode = file_inode(filp);
 	struct f2fs_map_blocks map = { .m_next_extent = NULL,
 					.m_seg_type = NO_CHECK_TYPE };
-	struct extent_info ei = {0,0,0};
+	struct extent_info ei = {0, 0, 0};
 	pgoff_t pg_start, pg_end, next_pgofs;
 	unsigned int blk_per_seg = sbi->blocks_per_seg;
 	unsigned int total = 0, sec_num;
@@ -2119,7 +2143,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
 	int err;
 	/* if in-place-update policy is enabled, don't waste time here */
-	if (should_update_inplace(inode, NULL))
+	if (f2fs_should_update_inplace(inode, NULL))
 		return -EINVAL;
 	pg_start = range->start >> PAGE_SHIFT;
@@ -2214,7 +2238,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
 	while (idx < map.m_lblk + map.m_len && cnt < blk_per_seg) {
 		struct page *page;
-		page = get_lock_data_page(inode, idx, true);
+		page = f2fs_get_lock_data_page(inode, idx, true);
 		if (IS_ERR(page)) {
 			err = PTR_ERR(page);
 			goto clear_out;
@@ -2325,12 +2349,12 @@ static int f2fs_move_file_range(struct file *file_in, loff_t pos_in,
 	}
 	inode_lock(src);
-	down_write(&F2FS_I(src)->dio_rwsem[WRITE]);
+	down_write(&F2FS_I(src)->i_gc_rwsem[WRITE]);
 	if (src != dst) {
 		ret = -EBUSY;
 		if (!inode_trylock(dst))
 			goto out;
-		if (!down_write_trylock(&F2FS_I(dst)->dio_rwsem[WRITE])) {
+		if (!down_write_trylock(&F2FS_I(dst)->i_gc_rwsem[WRITE])) {
 			inode_unlock(dst);
 			goto out;
 		}
@@ -2392,11 +2416,11 @@ static int f2fs_move_file_range(struct file *file_in, loff_t pos_in,
 	f2fs_unlock_op(sbi);
out_unlock:
 	if (src != dst) {
-		up_write(&F2FS_I(dst)->dio_rwsem[WRITE]);
+		up_write(&F2FS_I(dst)->i_gc_rwsem[WRITE]);
 		inode_unlock(dst);
 	}
out:
-	up_write(&F2FS_I(src)->dio_rwsem[WRITE]);
+	up_write(&F2FS_I(src)->i_gc_rwsem[WRITE]);
 	inode_unlock(src);
 	return ret;
 }
@@ -2554,7 +2578,7 @@ static int f2fs_ioc_setproject(struct file *filp, __u32 projid)
 	if (IS_NOQUOTA(inode))
 		goto out_unlock;
-	ipage = get_node_page(sbi, inode->i_ino);
+	ipage = f2fs_get_node_page(sbi, inode->i_ino);
 	if (IS_ERR(ipage)) {
 		err = PTR_ERR(ipage);
 		goto out_unlock;
@@ -2568,7 +2592,9 @@ static int f2fs_ioc_setproject(struct file *filp, __u32 projid)
 	}
 	f2fs_put_page(ipage, 1);
-	dquot_initialize(inode);
+	err = dquot_initialize(inode);
+	if (err)
+		goto out_unlock;
 	transfer_to[PRJQUOTA] = dqget(sb, make_kqid_projid(kprojid));
 	if (!IS_ERR(transfer_to[PRJQUOTA])) {
@@ -2601,17 +2627,17 @@ static inline __u32 f2fs_iflags_to_xflags(unsigned long iflags)
 {
 	__u32 xflags = 0;
-	if (iflags & FS_SYNC_FL)
+	if (iflags & F2FS_SYNC_FL)
 		xflags |= FS_XFLAG_SYNC;
-	if (iflags & FS_IMMUTABLE_FL)
+	if (iflags & F2FS_IMMUTABLE_FL)
 		xflags |= FS_XFLAG_IMMUTABLE;
-	if (iflags & FS_APPEND_FL)
+	if (iflags & F2FS_APPEND_FL)
 		xflags |= FS_XFLAG_APPEND;
-	if (iflags & FS_NODUMP_FL)
+	if (iflags & F2FS_NODUMP_FL)
 		xflags |= FS_XFLAG_NODUMP;
-	if (iflags & FS_NOATIME_FL)
+	if (iflags & F2FS_NOATIME_FL)
 		xflags |= FS_XFLAG_NOATIME;
-	if (iflags & FS_PROJINHERIT_FL)
+	if (iflags & F2FS_PROJINHERIT_FL)
 		xflags |= FS_XFLAG_PROJINHERIT;
 	return xflags;
 }
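The hunk above renames the inode flags the translation reads from the generic `FS_*_FL` names to F2FS-prefixed copies, keeping f2fs's on-disk flag namespace separate from the VFS `FS_XFLAG_*` values it maps to. The same pairing can be sketched outside the kernel as a single table driving both directions, which cannot drift apart the way two hand-written if-chains can. The `DEMO_*` constants below are hypothetical stand-ins, not the real values from `f2fs.h` or `include/uapi/linux/fs.h`:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit values standing in for F2FS_*_FL (on-disk) and
 * FS_XFLAG_* (ioctl ABI); the real constants differ. */
#define DEMO_SYNC_FL		0x00000008u
#define DEMO_IMMUTABLE_FL	0x00000010u
#define DEMO_XFLAG_SYNC		0x00000020u
#define DEMO_XFLAG_IMMUTABLE	0x00000008u

/* One table defines the mapping once, for both directions. */
static const struct { uint32_t iflag, xflag; } demo_map[] = {
	{ DEMO_SYNC_FL,      DEMO_XFLAG_SYNC },
	{ DEMO_IMMUTABLE_FL, DEMO_XFLAG_IMMUTABLE },
};

static uint32_t demo_iflags_to_xflags(uint32_t iflags)
{
	uint32_t xflags = 0;

	for (unsigned int i = 0; i < sizeof(demo_map) / sizeof(demo_map[0]); i++)
		if (iflags & demo_map[i].iflag)
			xflags |= demo_map[i].xflag;
	return xflags;
}

static uint32_t demo_xflags_to_iflags(uint32_t xflags)
{
	uint32_t iflags = 0;

	for (unsigned int i = 0; i < sizeof(demo_map) / sizeof(demo_map[0]); i++)
		if (xflags & demo_map[i].xflag)
			iflags |= demo_map[i].iflag;
	return iflags;
}
```

Any flag the table covers round-trips through both helpers unchanged, which is the invariant the two kernel functions maintain by hand.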
@@ -2620,31 +2646,23 @@ static inline __u32 f2fs_iflags_to_xflags(unsigned long iflags)
 			   FS_XFLAG_APPEND | FS_XFLAG_NODUMP | \
 			   FS_XFLAG_NOATIME | FS_XFLAG_PROJINHERIT)
-/* Flags we can manipulate with through EXT4_IOC_FSSETXATTR */
-#define F2FS_FL_XFLAG_VISIBLE	(FS_SYNC_FL | \
-				 FS_IMMUTABLE_FL | \
-				 FS_APPEND_FL | \
-				 FS_NODUMP_FL | \
-				 FS_NOATIME_FL | \
-				 FS_PROJINHERIT_FL)
 /* Transfer xflags flags to internal */
 static inline unsigned long f2fs_xflags_to_iflags(__u32 xflags)
 {
 	unsigned long iflags = 0;
 	if (xflags & FS_XFLAG_SYNC)
-		iflags |= FS_SYNC_FL;
+		iflags |= F2FS_SYNC_FL;
 	if (xflags & FS_XFLAG_IMMUTABLE)
-		iflags |= FS_IMMUTABLE_FL;
+		iflags |= F2FS_IMMUTABLE_FL;
 	if (xflags & FS_XFLAG_APPEND)
-		iflags |= FS_APPEND_FL;
+		iflags |= F2FS_APPEND_FL;
 	if (xflags & FS_XFLAG_NODUMP)
-		iflags |= FS_NODUMP_FL;
+		iflags |= F2FS_NODUMP_FL;
 	if (xflags & FS_XFLAG_NOATIME)
-		iflags |= FS_NOATIME_FL;
+		iflags |= F2FS_NOATIME_FL;
 	if (xflags & FS_XFLAG_PROJINHERIT)
-		iflags |= FS_PROJINHERIT_FL;
+		iflags |= F2FS_PROJINHERIT_FL;
 	return iflags;
 }
@@ -2657,7 +2675,7 @@ static int f2fs_ioc_fsgetxattr(struct file *filp, unsigned long arg)
 	memset(&fa, 0, sizeof(struct fsxattr));
 	fa.fsx_xflags = f2fs_iflags_to_xflags(fi->i_flags &
-				(FS_FL_USER_VISIBLE | FS_PROJINHERIT_FL));
+				F2FS_FL_USER_VISIBLE);
 	if (f2fs_sb_has_project_quota(inode->i_sb))
 		fa.fsx_projid = (__u32)from_kprojid(&init_user_ns,
@@ -2717,12 +2735,14 @@ int f2fs_pin_file_control(struct inode *inode, bool inc)
 	/* Use i_gc_failures for normal file as a risk signal. */
 	if (inc)
-		f2fs_i_gc_failures_write(inode, fi->i_gc_failures + 1);
+		f2fs_i_gc_failures_write(inode,
+				fi->i_gc_failures[GC_FAILURE_PIN] + 1);
-	if (fi->i_gc_failures > sbi->gc_pin_file_threshold) {
+	if (fi->i_gc_failures[GC_FAILURE_PIN] > sbi->gc_pin_file_threshold) {
 		f2fs_msg(sbi->sb, KERN_WARNING,
 			"%s: Enable GC = ino %lx after %x GC trials\n",
-			__func__, inode->i_ino, fi->i_gc_failures);
+			__func__, inode->i_ino,
+			fi->i_gc_failures[GC_FAILURE_PIN]);
 		clear_inode_flag(inode, FI_PIN_FILE);
 		return -EAGAIN;
 	}
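f2fs_pin_file_control() above now reads the pin-failure counter from the per-type `i_gc_failures[]` array instead of a single scalar, so pin failures (`GC_FAILURE_PIN`) no longer share a counter with the atomic-write failures (`GC_FAILURE_ATOMIC`) introduced elsewhere in this merge. A minimal userspace sketch of the threshold logic, with `demo_*` names standing in for the kernel structures:

```c
#include <assert.h>

/* Illustrative stand-ins for the per-inode failure counters; not the
 * real f2fs definitions. */
enum { DEMO_GC_FAILURE_PIN, DEMO_GC_FAILURE_ATOMIC, DEMO_MAX_GC_FAILURE };

struct demo_inode_info {
	unsigned int i_gc_failures[DEMO_MAX_GC_FAILURE];
	int pinned;
};

/* Mirrors the shape of f2fs_pin_file_control(): bump the PIN counter
 * and unpin the file once failures exceed the threshold. */
static int demo_pin_file_control(struct demo_inode_info *fi,
				 unsigned int threshold, int inc)
{
	if (inc)
		fi->i_gc_failures[DEMO_GC_FAILURE_PIN]++;

	if (fi->i_gc_failures[DEMO_GC_FAILURE_PIN] > threshold) {
		fi->pinned = 0;	/* kernel: clear_inode_flag(FI_PIN_FILE) */
		return -1;	/* stand-in for -EAGAIN */
	}
	return 0;
}
```

With a threshold of 2 the third recorded failure unpins the file; the atomic counter in the same array stays untouched.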
@@ -2753,14 +2773,14 @@ static int f2fs_ioc_set_pin_file(struct file *filp, unsigned long arg)
 	inode_lock(inode);
-	if (should_update_outplace(inode, NULL)) {
+	if (f2fs_should_update_outplace(inode, NULL)) {
 		ret = -EINVAL;
 		goto out;
 	}
 	if (!pin) {
 		clear_inode_flag(inode, FI_PIN_FILE);
-		F2FS_I(inode)->i_gc_failures = 1;
+		F2FS_I(inode)->i_gc_failures[GC_FAILURE_PIN] = 1;
 		goto done;
 	}
@@ -2773,7 +2793,7 @@ static int f2fs_ioc_set_pin_file(struct file *filp, unsigned long arg)
 		goto out;
 	set_inode_flag(inode, FI_PIN_FILE);
-	ret = F2FS_I(inode)->i_gc_failures;
+	ret = F2FS_I(inode)->i_gc_failures[GC_FAILURE_PIN];
done:
 	f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
out:
@@ -2788,7 +2808,7 @@ static int f2fs_ioc_get_pin_file(struct file *filp, unsigned long arg)
 	__u32 pin = 0;
 	if (is_inode_flag_set(inode, FI_PIN_FILE))
-		pin = F2FS_I(inode)->i_gc_failures;
+		pin = F2FS_I(inode)->i_gc_failures[GC_FAILURE_PIN];
 	return put_user(pin, (u32 __user *)arg);
 }
@@ -2812,9 +2832,9 @@ int f2fs_precache_extents(struct inode *inode)
 	while (map.m_lblk < end) {
 		map.m_len = end - map.m_lblk;
-		down_write(&fi->dio_rwsem[WRITE]);
+		down_write(&fi->i_gc_rwsem[WRITE]);
 		err = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_PRECACHE);
-		up_write(&fi->dio_rwsem[WRITE]);
+		up_write(&fi->i_gc_rwsem[WRITE]);
 		if (err)
 			return err;
@@ -2866,7 +2886,7 @@ long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 	case F2FS_IOC_GARBAGE_COLLECT_RANGE:
 		return f2fs_ioc_gc_range(filp, arg);
 	case F2FS_IOC_WRITE_CHECKPOINT:
-		return f2fs_ioc_write_checkpoint(filp, arg);
+		return f2fs_ioc_f2fs_write_checkpoint(filp, arg);
 	case F2FS_IOC_DEFRAGMENT:
 		return f2fs_ioc_defragment(filp, arg);
 	case F2FS_IOC_MOVE_RANGE:
@@ -2894,7 +2914,6 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
 	struct file *file = iocb->ki_filp;
 	struct inode *inode = file_inode(file);
-	struct blk_plug plug;
 	ssize_t ret;
 	if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
@@ -2924,6 +2943,8 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 				iov_iter_count(from)) ||
 				f2fs_has_inline_data(inode) ||
 				f2fs_force_buffered_io(inode, WRITE)) {
+			clear_inode_flag(inode,
+					FI_NO_PREALLOC);
 			inode_unlock(inode);
 			return -EAGAIN;
 		}
@@ -2939,9 +2960,7 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 				return err;
 			}
 		}
-	blk_start_plug(&plug);
 	ret = __generic_file_write_iter(iocb, from);
-	blk_finish_plug(&plug);
 	clear_inode_flag(inode, FI_NO_PREALLOC);
 	/* if we couldn't write data, we should deallocate blocks. */
@@ -76,7 +76,7 @@ static int gc_thread_func(void *data)
 		 * invalidated soon after by user update or deletion.
 		 * So, I'd like to wait some time to collect dirty segments.
 		 */
-		if (gc_th->gc_urgent) {
+		if (sbi->gc_mode == GC_URGENT) {
 			wait_ms = gc_th->urgent_sleep_time;
 			mutex_lock(&sbi->gc_mutex);
 			goto do_gc;
@@ -114,7 +114,7 @@ static int gc_thread_func(void *data)
 	return 0;
 }
-int start_gc_thread(struct f2fs_sb_info *sbi)
+int f2fs_start_gc_thread(struct f2fs_sb_info *sbi)
 {
 	struct f2fs_gc_kthread *gc_th;
 	dev_t dev = sbi->sb->s_bdev->bd_dev;
@@ -131,8 +131,6 @@ int start_gc_thread(struct f2fs_sb_info *sbi)
 	gc_th->max_sleep_time = DEF_GC_THREAD_MAX_SLEEP_TIME;
 	gc_th->no_gc_sleep_time = DEF_GC_THREAD_NOGC_SLEEP_TIME;
-	gc_th->gc_idle = 0;
-	gc_th->gc_urgent = 0;
 	gc_th->gc_wake= 0;
 	sbi->gc_thread = gc_th;
@@ -148,7 +146,7 @@ int start_gc_thread(struct f2fs_sb_info *sbi)
 	return err;
 }
-void stop_gc_thread(struct f2fs_sb_info *sbi)
+void f2fs_stop_gc_thread(struct f2fs_sb_info *sbi)
 {
 	struct f2fs_gc_kthread *gc_th = sbi->gc_thread;
 	if (!gc_th)
@@ -158,21 +156,19 @@ void stop_gc_thread(struct f2fs_sb_info *sbi)
 	sbi->gc_thread = NULL;
 }
-static int select_gc_type(struct f2fs_gc_kthread *gc_th, int gc_type)
+static int select_gc_type(struct f2fs_sb_info *sbi, int gc_type)
 {
 	int gc_mode = (gc_type == BG_GC) ? GC_CB : GC_GREEDY;
-	if (!gc_th)
-		return gc_mode;
-
-	if (gc_th->gc_idle) {
-		if (gc_th->gc_idle == 1)
-			gc_mode = GC_CB;
-		else if (gc_th->gc_idle == 2)
-			gc_mode = GC_GREEDY;
-	}
-	if (gc_th->gc_urgent)
-		gc_mode = GC_GREEDY;
+	switch (sbi->gc_mode) {
+	case GC_IDLE_CB:
+		gc_mode = GC_CB;
+		break;
+	case GC_IDLE_GREEDY:
+	case GC_URGENT:
+		gc_mode = GC_GREEDY;
+		break;
+	}
 	return gc_mode;
 }
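The rewrite above collapses the per-thread `gc_idle`/`gc_urgent` flags into a single `sbi->gc_mode` switch, which also removes the old `!gc_th` special case since the mode lives on the superblock rather than on the (possibly absent) GC thread. A compile-alone sketch of the new selection logic; the `DEMO_*` enums mirror the names in the diff but are redefined here so the snippet stands on its own:

```c
#include <assert.h>

/* Illustrative copies of the modes named in the diff. */
enum { DEMO_BG_GC, DEMO_FG_GC };			/* caller's GC type */
enum { DEMO_GC_CB, DEMO_GC_GREEDY };			/* victim policy   */
enum { DEMO_GC_NORMAL, DEMO_GC_IDLE_CB,
       DEMO_GC_IDLE_GREEDY, DEMO_GC_URGENT };		/* sbi->gc_mode    */

/* Same shape as the rewritten select_gc_type(): background GC defaults
 * to cost-benefit, foreground to greedy, and an explicit gc_mode
 * overrides the default. */
static int demo_select_gc_type(int sbi_gc_mode, int gc_type)
{
	int gc_mode = (gc_type == DEMO_BG_GC) ? DEMO_GC_CB : DEMO_GC_GREEDY;

	switch (sbi_gc_mode) {
	case DEMO_GC_IDLE_CB:
		gc_mode = DEMO_GC_CB;
		break;
	case DEMO_GC_IDLE_GREEDY:
	case DEMO_GC_URGENT:
		gc_mode = DEMO_GC_GREEDY;
		break;
	}
	return gc_mode;
}
```

The fall-through of `GC_IDLE_GREEDY` into `GC_URGENT` is deliberate: both modes want the greedy victim policy.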
@@ -187,7 +183,7 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,
 		p->max_search = dirty_i->nr_dirty[type];
 		p->ofs_unit = 1;
 	} else {
-		p->gc_mode = select_gc_type(sbi->gc_thread, gc_type);
+		p->gc_mode = select_gc_type(sbi, gc_type);
 		p->dirty_segmap = dirty_i->dirty_segmap[DIRTY];
 		p->max_search = dirty_i->nr_dirty[DIRTY];
 		p->ofs_unit = sbi->segs_per_sec;
@@ -195,7 +191,7 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,
 	/* we need to check every dirty segments in the FG_GC case */
 	if (gc_type != FG_GC &&
-			(sbi->gc_thread && !sbi->gc_thread->gc_urgent) &&
+			(sbi->gc_mode != GC_URGENT) &&
 			p->max_search > sbi->max_victim_search)
 		p->max_search = sbi->max_victim_search;
@@ -234,10 +230,6 @@ static unsigned int check_bg_victims(struct f2fs_sb_info *sbi)
 	for_each_set_bit(secno, dirty_i->victim_secmap, MAIN_SECS(sbi)) {
 		if (sec_usage_check(sbi, secno))
 			continue;
-		if (no_fggc_candidate(sbi, secno))
-			continue;
 		clear_bit(secno, dirty_i->victim_secmap);
 		return GET_SEG_FROM_SEC(sbi, secno);
 	}
@@ -377,9 +369,6 @@ static int get_victim_by_default(struct f2fs_sb_info *sbi,
 			goto next;
 		if (gc_type == BG_GC && test_bit(secno, dirty_i->victim_secmap))
 			goto next;
-		if (gc_type == FG_GC && p.alloc_mode == LFS &&
-				no_fggc_candidate(sbi, secno))
-			goto next;
 		cost = get_gc_cost(sbi, segno, &p);
@@ -440,7 +429,7 @@ static void add_gc_inode(struct gc_inode_list *gc_list, struct inode *inode)
 		iput(inode);
 		return;
 	}
-	new_ie = f2fs_kmem_cache_alloc(inode_entry_slab, GFP_NOFS);
+	new_ie = f2fs_kmem_cache_alloc(f2fs_inode_entry_slab, GFP_NOFS);
 	new_ie->inode = inode;
 	f2fs_radix_tree_insert(&gc_list->iroot, inode->i_ino, new_ie);
@@ -454,7 +443,7 @@ static void put_gc_inode(struct gc_inode_list *gc_list)
 		radix_tree_delete(&gc_list->iroot, ie->inode->i_ino);
 		iput(ie->inode);
 		list_del(&ie->list);
-		kmem_cache_free(inode_entry_slab, ie);
+		kmem_cache_free(f2fs_inode_entry_slab, ie);
 	}
 }
@@ -484,12 +473,16 @@ static void gc_node_segment(struct f2fs_sb_info *sbi,
 	block_t start_addr;
 	int off;
 	int phase = 0;
+	bool fggc = (gc_type == FG_GC);
 	start_addr = START_BLOCK(sbi, segno);
next_step:
 	entry = sum;
+	if (fggc && phase == 2)
+		atomic_inc(&sbi->wb_sync_req[NODE]);
 	for (off = 0; off < sbi->blocks_per_seg; off++, entry++) {
 		nid_t nid = le32_to_cpu(entry->nid);
 		struct page *node_page;
@@ -503,39 +496,42 @@ static void gc_node_segment(struct f2fs_sb_info *sbi,
 			continue;
 		if (phase == 0) {
-			ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), 1,
+			f2fs_ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), 1,
 							META_NAT, true);
 			continue;
 		}
 		if (phase == 1) {
-			ra_node_page(sbi, nid);
+			f2fs_ra_node_page(sbi, nid);
 			continue;
 		}
 		/* phase == 2 */
-		node_page = get_node_page(sbi, nid);
+		node_page = f2fs_get_node_page(sbi, nid);
 		if (IS_ERR(node_page))
 			continue;
-		/* block may become invalid during get_node_page */
+		/* block may become invalid during f2fs_get_node_page */
 		if (check_valid_map(sbi, segno, off) == 0) {
 			f2fs_put_page(node_page, 1);
 			continue;
 		}
-		get_node_info(sbi, nid, &ni);
+		f2fs_get_node_info(sbi, nid, &ni);
 		if (ni.blk_addr != start_addr + off) {
 			f2fs_put_page(node_page, 1);
 			continue;
 		}
-		move_node_page(node_page, gc_type);
+		f2fs_move_node_page(node_page, gc_type);
 		stat_inc_node_blk_count(sbi, 1, gc_type);
 	}
 	if (++phase < 3)
 		goto next_step;
+	if (fggc)
+		atomic_dec(&sbi->wb_sync_req[NODE]);
 }
 /*
@@ -545,7 +541,7 @@ static void gc_node_segment(struct f2fs_sb_info *sbi,
  * as indirect or double indirect node blocks, are given, it must be a caller's
  * bug.
  */
-block_t start_bidx_of_node(unsigned int node_ofs, struct inode *inode)
+block_t f2fs_start_bidx_of_node(unsigned int node_ofs, struct inode *inode)
 {
 	unsigned int indirect_blks = 2 * NIDS_PER_BLOCK + 4;
 	unsigned int bidx;
@@ -576,11 +572,11 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
 	nid = le32_to_cpu(sum->nid);
 	ofs_in_node = le16_to_cpu(sum->ofs_in_node);
-	node_page = get_node_page(sbi, nid);
+	node_page = f2fs_get_node_page(sbi, nid);
 	if (IS_ERR(node_page))
 		return false;
-	get_node_info(sbi, nid, dni);
+	f2fs_get_node_info(sbi, nid, dni);
 	if (sum->version != dni->version) {
 		f2fs_msg(sbi->sb, KERN_WARNING,
@@ -603,7 +599,7 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
  * This can be used to move blocks, aka LBAs, directly on disk.
  */
 static void move_data_block(struct inode *inode, block_t bidx,
-					unsigned int segno, int off)
+				int gc_type, unsigned int segno, int off)
 {
 	struct f2fs_io_info fio = {
 		.sbi = F2FS_I_SB(inode),
@@ -614,6 +610,7 @@ static void move_data_block(struct inode *inode, block_t bidx,
 		.op_flags = 0,
 		.encrypted_page = NULL,
 		.in_list = false,
+		.retry = false,
 	};
 	struct dnode_of_data dn;
 	struct f2fs_summary sum;
@@ -621,6 +618,7 @@ static void move_data_block(struct inode *inode, block_t bidx,
 	struct page *page;
 	block_t newaddr;
 	int err;
+	bool lfs_mode = test_opt(fio.sbi, LFS);
 	/* do not read out */
 	page = f2fs_grab_cache_page(inode->i_mapping, bidx, false);
@@ -630,8 +628,11 @@ static void move_data_block(struct inode *inode, block_t bidx,
 	if (!check_valid_map(F2FS_I_SB(inode), segno, off))
 		goto out;
-	if (f2fs_is_atomic_file(inode))
+	if (f2fs_is_atomic_file(inode)) {
+		F2FS_I(inode)->i_gc_failures[GC_FAILURE_ATOMIC]++;
+		F2FS_I_SB(inode)->skipped_atomic_files[gc_type]++;
 		goto out;
+	}
 	if (f2fs_is_pinned_file(inode)) {
 		f2fs_pin_file_control(inode, true);
@@ -639,7 +640,7 @@ static void move_data_block(struct inode *inode, block_t bidx,
 	}
 	set_new_dnode(&dn, inode, NULL, NULL, 0);
-	err = get_dnode_of_data(&dn, bidx, LOOKUP_NODE);
+	err = f2fs_get_dnode_of_data(&dn, bidx, LOOKUP_NODE);
 	if (err)
 		goto out;
@@ -654,14 +655,17 @@ static void move_data_block(struct inode *inode, block_t bidx,
 	 */
 	f2fs_wait_on_page_writeback(page, DATA, true);
-	get_node_info(fio.sbi, dn.nid, &ni);
+	f2fs_get_node_info(fio.sbi, dn.nid, &ni);
 	set_summary(&sum, dn.nid, dn.ofs_in_node, ni.version);
 	/* read page */
 	fio.page = page;
 	fio.new_blkaddr = fio.old_blkaddr = dn.data_blkaddr;
-	allocate_data_block(fio.sbi, NULL, fio.old_blkaddr, &newaddr,
+	if (lfs_mode)
+		down_write(&fio.sbi->io_order_lock);
+	f2fs_allocate_data_block(fio.sbi, NULL, fio.old_blkaddr, &newaddr,
 					&sum, CURSEG_COLD_DATA, NULL, false);
 	fio.encrypted_page = f2fs_pagecache_get_page(META_MAPPING(fio.sbi),
@@ -693,6 +697,7 @@ static void move_data_block(struct inode *inode, block_t bidx,
 	dec_page_count(fio.sbi, F2FS_DIRTY_META);
 	set_page_writeback(fio.encrypted_page);
+	ClearPageError(page);
 	/* allocate block address */
 	f2fs_wait_on_page_writeback(dn.node_page, NODE, true);
@@ -700,8 +705,8 @@ static void move_data_block(struct inode *inode, block_t bidx,
 	fio.op = REQ_OP_WRITE;
 	fio.op_flags = REQ_SYNC;
 	fio.new_blkaddr = newaddr;
-	err = f2fs_submit_page_write(&fio);
-	if (err) {
+	f2fs_submit_page_write(&fio);
+	if (fio.retry) {
 		if (PageWriteback(fio.encrypted_page))
 			end_page_writeback(fio.encrypted_page);
 		goto put_page_out;
@@ -716,8 +721,10 @@ static void move_data_block(struct inode *inode, block_t bidx,
put_page_out:
 	f2fs_put_page(fio.encrypted_page, 1);
recover_block:
+	if (lfs_mode)
+		up_write(&fio.sbi->io_order_lock);
 	if (err)
-		__f2fs_replace_block(fio.sbi, &sum, newaddr, fio.old_blkaddr,
+		f2fs_do_replace_block(fio.sbi, &sum, newaddr, fio.old_blkaddr,
 								true, true);
put_out:
 	f2fs_put_dnode(&dn);
@@ -730,15 +737,18 @@ static void move_data_page(struct inode *inode, block_t bidx, int gc_type,
 {
 	struct page *page;
-	page = get_lock_data_page(inode, bidx, true);
+	page = f2fs_get_lock_data_page(inode, bidx, true);
 	if (IS_ERR(page))
 		return;
 	if (!check_valid_map(F2FS_I_SB(inode), segno, off))
 		goto out;
-	if (f2fs_is_atomic_file(inode))
+	if (f2fs_is_atomic_file(inode)) {
+		F2FS_I(inode)->i_gc_failures[GC_FAILURE_ATOMIC]++;
+		F2FS_I_SB(inode)->skipped_atomic_files[gc_type]++;
 		goto out;
+	}
 	if (f2fs_is_pinned_file(inode)) {
 		if (gc_type == FG_GC)
 			f2fs_pin_file_control(inode, true);
@@ -772,15 +782,20 @@ static void move_data_page(struct inode *inode, block_t bidx, int gc_type,
 		f2fs_wait_on_page_writeback(page, DATA, true);
 		if (clear_page_dirty_for_io(page)) {
 			inode_dec_dirty_pages(inode);
-			remove_dirty_inode(inode);
+			f2fs_remove_dirty_inode(inode);
 		}
 		set_cold_data(page);
-		err = do_write_data_page(&fio);
-		if (err == -ENOMEM && is_dirty) {
-			congestion_wait(BLK_RW_ASYNC, HZ/50);
-			goto retry;
+		err = f2fs_do_write_data_page(&fio);
+		if (err) {
+			clear_cold_data(page);
+			if (err == -ENOMEM) {
+				congestion_wait(BLK_RW_ASYNC, HZ/50);
+				goto retry;
+			}
+			if (is_dirty)
+				set_page_dirty(page);
 		}
 	}
out:
...@@ -824,13 +839,13 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, ...@@ -824,13 +839,13 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
continue; continue;
if (phase == 0) { if (phase == 0) {
ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), 1, f2fs_ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), 1,
META_NAT, true); META_NAT, true);
continue; continue;
} }
if (phase == 1) { if (phase == 1) {
ra_node_page(sbi, nid); f2fs_ra_node_page(sbi, nid);
continue; continue;
} }
...@@ -839,7 +854,7 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, ...@@ -839,7 +854,7 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
continue; continue;
if (phase == 2) { if (phase == 2) {
ra_node_page(sbi, dni.ino); f2fs_ra_node_page(sbi, dni.ino);
continue; continue;
} }
...@@ -850,23 +865,23 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, ...@@ -850,23 +865,23 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
if (IS_ERR(inode) || is_bad_inode(inode)) if (IS_ERR(inode) || is_bad_inode(inode))
continue; continue;
/* if encrypted inode, let's go phase 3 */ /* if inode uses special I/O path, let's go phase 3 */
if (f2fs_encrypted_file(inode)) { if (f2fs_post_read_required(inode)) {
add_gc_inode(gc_list, inode); add_gc_inode(gc_list, inode);
continue; continue;
} }
if (!down_write_trylock( if (!down_write_trylock(
&F2FS_I(inode)->dio_rwsem[WRITE])) { &F2FS_I(inode)->i_gc_rwsem[WRITE])) {
iput(inode); iput(inode);
continue; continue;
} }
start_bidx = start_bidx_of_node(nofs, inode); start_bidx = f2fs_start_bidx_of_node(nofs, inode);
data_page = get_read_data_page(inode, data_page = f2fs_get_read_data_page(inode,
start_bidx + ofs_in_node, REQ_RAHEAD, start_bidx + ofs_in_node, REQ_RAHEAD,
true); true);
up_write(&F2FS_I(inode)->dio_rwsem[WRITE]); up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
if (IS_ERR(data_page)) { if (IS_ERR(data_page)) {
iput(inode); iput(inode);
continue; continue;
...@@ -884,11 +899,11 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, ...@@ -884,11 +899,11 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
bool locked = false; bool locked = false;
if (S_ISREG(inode->i_mode)) { if (S_ISREG(inode->i_mode)) {
if (!down_write_trylock(&fi->dio_rwsem[READ])) if (!down_write_trylock(&fi->i_gc_rwsem[READ]))
continue; continue;
if (!down_write_trylock( if (!down_write_trylock(
&fi->dio_rwsem[WRITE])) { &fi->i_gc_rwsem[WRITE])) {
up_write(&fi->dio_rwsem[READ]); up_write(&fi->i_gc_rwsem[READ]);
continue; continue;
} }
locked = true; locked = true;
...@@ -897,17 +912,18 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, ...@@ -897,17 +912,18 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
inode_dio_wait(inode); inode_dio_wait(inode);
} }
start_bidx = start_bidx_of_node(nofs, inode) start_bidx = f2fs_start_bidx_of_node(nofs, inode)
+ ofs_in_node; + ofs_in_node;
if (f2fs_encrypted_file(inode)) if (f2fs_post_read_required(inode))
move_data_block(inode, start_bidx, segno, off); move_data_block(inode, start_bidx, gc_type,
segno, off);
else else
move_data_page(inode, start_bidx, gc_type, move_data_page(inode, start_bidx, gc_type,
segno, off); segno, off);
if (locked) { if (locked) {
up_write(&fi->dio_rwsem[WRITE]); up_write(&fi->i_gc_rwsem[WRITE]);
up_write(&fi->dio_rwsem[READ]); up_write(&fi->i_gc_rwsem[READ]);
} }
stat_inc_data_blk_count(sbi, 1, gc_type); stat_inc_data_blk_count(sbi, 1, gc_type);
...@@ -946,12 +962,12 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi, ...@@ -946,12 +962,12 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
/* readahead multi ssa blocks those have contiguous address */ /* readahead multi ssa blocks those have contiguous address */
if (sbi->segs_per_sec > 1) if (sbi->segs_per_sec > 1)
ra_meta_pages(sbi, GET_SUM_BLOCK(sbi, segno), f2fs_ra_meta_pages(sbi, GET_SUM_BLOCK(sbi, segno),
sbi->segs_per_sec, META_SSA, true); sbi->segs_per_sec, META_SSA, true);
/* reference all summary page */ /* reference all summary page */
while (segno < end_segno) { while (segno < end_segno) {
sum_page = get_sum_page(sbi, segno++); sum_page = f2fs_get_sum_page(sbi, segno++);
unlock_page(sum_page); unlock_page(sum_page);
} }
...@@ -1017,6 +1033,8 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, ...@@ -1017,6 +1033,8 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
.ilist = LIST_HEAD_INIT(gc_list.ilist), .ilist = LIST_HEAD_INIT(gc_list.ilist),
.iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS), .iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
}; };
unsigned long long last_skipped = sbi->skipped_atomic_files[FG_GC];
unsigned int skipped_round = 0, round = 0;
trace_f2fs_gc_begin(sbi->sb, sync, background, trace_f2fs_gc_begin(sbi->sb, sync, background,
get_pages(sbi, F2FS_DIRTY_NODES), get_pages(sbi, F2FS_DIRTY_NODES),
...@@ -1045,7 +1063,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, ...@@ -1045,7 +1063,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
* secure free segments which doesn't need fggc any more. * secure free segments which doesn't need fggc any more.
*/ */
if (prefree_segments(sbi)) { if (prefree_segments(sbi)) {
ret = write_checkpoint(sbi, &cpc); ret = f2fs_write_checkpoint(sbi, &cpc);
if (ret) if (ret)
goto stop; goto stop;
} }
...@@ -1068,17 +1086,27 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, ...@@ -1068,17 +1086,27 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
sec_freed++; sec_freed++;
total_freed += seg_freed; total_freed += seg_freed;
if (gc_type == FG_GC) {
if (sbi->skipped_atomic_files[FG_GC] > last_skipped)
skipped_round++;
last_skipped = sbi->skipped_atomic_files[FG_GC];
round++;
}
if (gc_type == FG_GC) if (gc_type == FG_GC)
sbi->cur_victim_sec = NULL_SEGNO; sbi->cur_victim_sec = NULL_SEGNO;
if (!sync) { if (!sync) {
if (has_not_enough_free_secs(sbi, sec_freed, 0)) { if (has_not_enough_free_secs(sbi, sec_freed, 0)) {
if (skipped_round > MAX_SKIP_ATOMIC_COUNT &&
skipped_round * 2 >= round)
f2fs_drop_inmem_pages_all(sbi, true);
segno = NULL_SEGNO; segno = NULL_SEGNO;
goto gc_more; goto gc_more;
} }
if (gc_type == FG_GC) if (gc_type == FG_GC)
ret = write_checkpoint(sbi, &cpc); ret = f2fs_write_checkpoint(sbi, &cpc);
} }
stop: stop:
SIT_I(sbi)->last_victim[ALLOC_NEXT] = 0; SIT_I(sbi)->last_victim[ALLOC_NEXT] = 0;
...@@ -1102,19 +1130,10 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, ...@@ -1102,19 +1130,10 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
return ret; return ret;
} }
void build_gc_manager(struct f2fs_sb_info *sbi) void f2fs_build_gc_manager(struct f2fs_sb_info *sbi)
{ {
u64 main_count, resv_count, ovp_count;
DIRTY_I(sbi)->v_ops = &default_v_ops; DIRTY_I(sbi)->v_ops = &default_v_ops;
/* threshold of # of valid blocks in a section for victims of FG_GC */
main_count = SM_I(sbi)->main_segments << sbi->log_blocks_per_seg;
resv_count = SM_I(sbi)->reserved_segments << sbi->log_blocks_per_seg;
ovp_count = SM_I(sbi)->ovp_segments << sbi->log_blocks_per_seg;
sbi->fggc_threshold = div64_u64((main_count - ovp_count) *
BLKS_PER_SEC(sbi), (main_count - resv_count));
sbi->gc_pin_file_threshold = DEF_GC_FAILED_PINNED_FILES; sbi->gc_pin_file_threshold = DEF_GC_FAILED_PINNED_FILES;
/* give warm/cold data area from slower device */ /* give warm/cold data area from slower device */
......
@@ -36,8 +36,6 @@ struct f2fs_gc_kthread {
 	unsigned int no_gc_sleep_time;
 	/* for changing gc mode */
-	unsigned int gc_idle;
-	unsigned int gc_urgent;
 	unsigned int gc_wake;
 };
...
@@ -25,7 +25,7 @@ bool f2fs_may_inline_data(struct inode *inode)
 	if (i_size_read(inode) > MAX_INLINE_DATA(inode))
 		return false;
-	if (f2fs_encrypted_file(inode))
+	if (f2fs_post_read_required(inode))
 		return false;
 	return true;
@@ -42,7 +42,7 @@ bool f2fs_may_inline_dentry(struct inode *inode)
 	return true;
 }
-void read_inline_data(struct page *page, struct page *ipage)
+void f2fs_do_read_inline_data(struct page *page, struct page *ipage)
 {
 	struct inode *inode = page->mapping->host;
 	void *src_addr, *dst_addr;
@@ -64,7 +64,8 @@ void read_inline_data(struct page *page, struct page *ipage)
 	SetPageUptodate(page);
 }
-void truncate_inline_inode(struct inode *inode, struct page *ipage, u64 from)
+void f2fs_truncate_inline_inode(struct inode *inode,
					struct page *ipage, u64 from)
 {
 	void *addr;
@@ -85,7 +86,7 @@ int f2fs_read_inline_data(struct inode *inode, struct page *page)
 {
 	struct page *ipage;
-	ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino);
+	ipage = f2fs_get_node_page(F2FS_I_SB(inode), inode->i_ino);
 	if (IS_ERR(ipage)) {
 		unlock_page(page);
 		return PTR_ERR(ipage);
@@ -99,7 +100,7 @@ int f2fs_read_inline_data(struct inode *inode, struct page *page)
 	if (page->index)
 		zero_user_segment(page, 0, PAGE_SIZE);
 	else
-		read_inline_data(page, ipage);
+		f2fs_do_read_inline_data(page, ipage);
 	if (!PageUptodate(page))
 		SetPageUptodate(page);
@@ -131,7 +132,7 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page)
 	f2fs_bug_on(F2FS_P_SB(page), PageWriteback(page));
-	read_inline_data(page, dn->inode_page);
+	f2fs_do_read_inline_data(page, dn->inode_page);
 	set_page_dirty(page);
 	/* clear dirty state */
@@ -139,20 +140,21 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page)
 	/* write data page to try to make data consistent */
 	set_page_writeback(page);
+	ClearPageError(page);
 	fio.old_blkaddr = dn->data_blkaddr;
 	set_inode_flag(dn->inode, FI_HOT_DATA);
-	write_data_page(dn, &fio);
+	f2fs_outplace_write_data(dn, &fio);
 	f2fs_wait_on_page_writeback(page, DATA, true);
 	if (dirty) {
 		inode_dec_dirty_pages(dn->inode);
-		remove_dirty_inode(dn->inode);
+		f2fs_remove_dirty_inode(dn->inode);
 	}
 	/* this converted inline_data should be recovered. */
 	set_inode_flag(dn->inode, FI_APPEND_WRITE);
 	/* clear inline data and flag after data writeback */
-	truncate_inline_inode(dn->inode, dn->inode_page, 0);
+	f2fs_truncate_inline_inode(dn->inode, dn->inode_page, 0);
 	clear_inline_node(dn->inode_page);
 clear_out:
 	stat_dec_inline_inode(dn->inode);
@@ -177,7 +179,7 @@ int f2fs_convert_inline_inode(struct inode *inode)
 	f2fs_lock_op(sbi);
-	ipage = get_node_page(sbi, inode->i_ino);
+	ipage = f2fs_get_node_page(sbi, inode->i_ino);
 	if (IS_ERR(ipage)) {
 		err = PTR_ERR(ipage);
 		goto out;
@@ -203,12 +205,10 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
 {
 	void *src_addr, *dst_addr;
 	struct dnode_of_data dn;
-	struct address_space *mapping = page_mapping(page);
-	unsigned long flags;
 	int err;
 	set_new_dnode(&dn, inode, NULL, NULL, 0);
-	err = get_dnode_of_data(&dn, 0, LOOKUP_NODE);
+	err = f2fs_get_dnode_of_data(&dn, 0, LOOKUP_NODE);
 	if (err)
 		return err;
@@ -226,10 +226,7 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
 	kunmap_atomic(src_addr);
 	set_page_dirty(dn.inode_page);
-	xa_lock_irqsave(&mapping->i_pages, flags);
-	radix_tree_tag_clear(&mapping->i_pages, page_index(page),
-				PAGECACHE_TAG_DIRTY);
-	xa_unlock_irqrestore(&mapping->i_pages, flags);
+	f2fs_clear_radix_tree_dirty_tag(page);
 	set_inode_flag(inode, FI_APPEND_WRITE);
 	set_inode_flag(inode, FI_DATA_EXIST);
@@ -239,7 +236,7 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
 	return 0;
 }
-bool recover_inline_data(struct inode *inode, struct page *npage)
+bool f2fs_recover_inline_data(struct inode *inode, struct page *npage)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct f2fs_inode *ri = NULL;
@@ -260,7 +257,7 @@ bool recover_inline_data(struct inode *inode, struct page *npage)
 	if (f2fs_has_inline_data(inode) &&
			ri && (ri->i_inline & F2FS_INLINE_DATA)) {
 process_inline:
-		ipage = get_node_page(sbi, inode->i_ino);
+		ipage = f2fs_get_node_page(sbi, inode->i_ino);
 		f2fs_bug_on(sbi, IS_ERR(ipage));
 		f2fs_wait_on_page_writeback(ipage, NODE, true);
@@ -278,20 +275,20 @@ bool recover_inline_data(struct inode *inode, struct page *npage)
 	}
 	if (f2fs_has_inline_data(inode)) {
-		ipage = get_node_page(sbi, inode->i_ino);
+		ipage = f2fs_get_node_page(sbi, inode->i_ino);
 		f2fs_bug_on(sbi, IS_ERR(ipage));
-		truncate_inline_inode(inode, ipage, 0);
+		f2fs_truncate_inline_inode(inode, ipage, 0);
 		clear_inode_flag(inode, FI_INLINE_DATA);
 		f2fs_put_page(ipage, 1);
 	} else if (ri && (ri->i_inline & F2FS_INLINE_DATA)) {
-		if (truncate_blocks(inode, 0, false))
+		if (f2fs_truncate_blocks(inode, 0, false))
 			return false;
 		goto process_inline;
 	}
 	return false;
 }
-struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir,
+struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
			struct fscrypt_name *fname, struct page **res_page)
 {
 	struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
@@ -302,7 +299,7 @@ struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir,
 	void *inline_dentry;
 	f2fs_hash_t namehash;
-	ipage = get_node_page(sbi, dir->i_ino);
+	ipage = f2fs_get_node_page(sbi, dir->i_ino);
 	if (IS_ERR(ipage)) {
 		*res_page = ipage;
 		return NULL;
@@ -313,7 +310,7 @@ struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir,
 	inline_dentry = inline_data_addr(dir, ipage);
 	make_dentry_ptr_inline(dir, &d, inline_dentry);
-	de = find_target_dentry(fname, namehash, NULL, &d);
+	de = f2fs_find_target_dentry(fname, namehash, NULL, &d);
 	unlock_page(ipage);
 	if (de)
 		*res_page = ipage;
@@ -323,7 +320,7 @@ struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir,
 	return de;
 }
-int make_empty_inline_dir(struct inode *inode, struct inode *parent,
+int f2fs_make_empty_inline_dir(struct inode *inode, struct inode *parent,
							struct page *ipage)
 {
 	struct f2fs_dentry_ptr d;
@@ -332,7 +329,7 @@ int make_empty_inline_dir(struct inode *inode, struct inode *parent,
 	inline_dentry = inline_data_addr(inode, ipage);
 	make_dentry_ptr_inline(inode, &d, inline_dentry);
-	do_make_empty_dir(inode, parent, &d);
+	f2fs_do_make_empty_dir(inode, parent, &d);
 	set_page_dirty(ipage);
@@ -367,7 +364,6 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
 		goto out;
 	f2fs_wait_on_page_writeback(page, DATA, true);
-	zero_user_segment(page, MAX_INLINE_DATA(dir), PAGE_SIZE);
 	dentry_blk = page_address(page);
@@ -391,7 +387,7 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
 	set_page_dirty(page);
 	/* clear inline dir and flag after data writeback */
-	truncate_inline_inode(dir, ipage, 0);
+	f2fs_truncate_inline_inode(dir, ipage, 0);
 	stat_dec_inline_dir(dir);
 	clear_inode_flag(dir, FI_INLINE_DENTRY);
@@ -434,7 +430,7 @@ static int f2fs_add_inline_entries(struct inode *dir, void *inline_dentry)
 		new_name.len = le16_to_cpu(de->name_len);
 		ino = le32_to_cpu(de->ino);
-		fake_mode = get_de_type(de) << S_SHIFT;
+		fake_mode = f2fs_get_de_type(de) << S_SHIFT;
 		err = f2fs_add_regular_entry(dir, &new_name, NULL, NULL,
							ino, fake_mode);
@@ -446,8 +442,8 @@ static int f2fs_add_inline_entries(struct inode *dir, void *inline_dentry)
 	return 0;
punch_dentry_pages:
 	truncate_inode_pages(&dir->i_data, 0);
-	truncate_blocks(dir, 0, false);
-	remove_dirty_inode(dir);
+	f2fs_truncate_blocks(dir, 0, false);
+	f2fs_remove_dirty_inode(dir);
 	return err;
 }
@@ -465,7 +461,7 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage,
 	}
 	memcpy(backup_dentry, inline_dentry, MAX_INLINE_DATA(dir));
-	truncate_inline_inode(dir, ipage, 0);
+	f2fs_truncate_inline_inode(dir, ipage, 0);
 	unlock_page(ipage);
@@ -514,14 +510,14 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
 	struct page *page = NULL;
 	int err = 0;
-	ipage = get_node_page(sbi, dir->i_ino);
+	ipage = f2fs_get_node_page(sbi, dir->i_ino);
 	if (IS_ERR(ipage))
 		return PTR_ERR(ipage);
 	inline_dentry = inline_data_addr(dir, ipage);
 	make_dentry_ptr_inline(dir, &d, inline_dentry);
-	bit_pos = room_for_filename(d.bitmap, slots, d.max);
+	bit_pos = f2fs_room_for_filename(d.bitmap, slots, d.max);
 	if (bit_pos >= d.max) {
 		err = f2fs_convert_inline_dir(dir, ipage, inline_dentry);
 		if (err)
@@ -532,7 +528,7 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
 	if (inode) {
 		down_write(&F2FS_I(inode)->i_sem);
-		page = init_inode_metadata(inode, dir, new_name,
+		page = f2fs_init_inode_metadata(inode, dir, new_name,
						orig_name, ipage);
 		if (IS_ERR(page)) {
 			err = PTR_ERR(page);
@@ -553,7 +549,7 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
 		f2fs_put_page(page, 1);
 	}
-	update_parent_metadata(dir, inode, 0);
+	f2fs_update_parent_metadata(dir, inode, 0);
fail:
 	if (inode)
 		up_write(&F2FS_I(inode)->i_sem);
@@ -599,7 +595,7 @@ bool f2fs_empty_inline_dir(struct inode *dir)
 	void *inline_dentry;
 	struct f2fs_dentry_ptr d;
-	ipage = get_node_page(sbi, dir->i_ino);
+	ipage = f2fs_get_node_page(sbi, dir->i_ino);
 	if (IS_ERR(ipage))
 		return false;
@@ -630,7 +626,7 @@ int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
 	if (ctx->pos == d.max)
 		return 0;
-	ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino);
+	ipage = f2fs_get_node_page(F2FS_I_SB(inode), inode->i_ino);
 	if (IS_ERR(ipage))
 		return PTR_ERR(ipage);
@@ -656,7 +652,7 @@ int f2fs_inline_data_fiemap(struct inode *inode,
 	struct page *ipage;
 	int err = 0;
-	ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino);
+	ipage = f2fs_get_node_page(F2FS_I_SB(inode), inode->i_ino);
 	if (IS_ERR(ipage))
 		return PTR_ERR(ipage);
@@ -672,7 +668,7 @@ int f2fs_inline_data_fiemap(struct inode *inode,
 		ilen = start + len;
 	ilen -= start;
-	get_node_info(F2FS_I_SB(inode), inode->i_ino, &ni);
+	f2fs_get_node_info(F2FS_I_SB(inode), inode->i_ino, &ni);
 	byteaddr = (__u64)ni.blk_addr << inode->i_sb->s_blocksize_bits;
 	byteaddr += (char *)inline_data_addr(inode, ipage) -
					(char *)F2FS_INODE(ipage);
...
@@ -36,15 +36,15 @@ void f2fs_set_inode_flags(struct inode *inode)
 	unsigned int flags = F2FS_I(inode)->i_flags;
 	unsigned int new_fl = 0;
-	if (flags & FS_SYNC_FL)
+	if (flags & F2FS_SYNC_FL)
 		new_fl |= S_SYNC;
-	if (flags & FS_APPEND_FL)
+	if (flags & F2FS_APPEND_FL)
 		new_fl |= S_APPEND;
-	if (flags & FS_IMMUTABLE_FL)
+	if (flags & F2FS_IMMUTABLE_FL)
 		new_fl |= S_IMMUTABLE;
-	if (flags & FS_NOATIME_FL)
+	if (flags & F2FS_NOATIME_FL)
 		new_fl |= S_NOATIME;
-	if (flags & FS_DIRSYNC_FL)
+	if (flags & F2FS_DIRSYNC_FL)
 		new_fl |= S_DIRSYNC;
 	if (f2fs_encrypted_inode(inode))
 		new_fl |= S_ENCRYPTED;
@@ -72,7 +72,7 @@ static bool __written_first_block(struct f2fs_inode *ri)
 {
 	block_t addr = le32_to_cpu(ri->i_addr[offset_in_addr(ri)]);
-	if (addr != NEW_ADDR && addr != NULL_ADDR)
+	if (is_valid_blkaddr(addr))
 		return true;
 	return false;
 }
@@ -117,7 +117,6 @@ static void __recover_inline_status(struct inode *inode, struct page *ipage)
 static bool f2fs_enable_inode_chksum(struct f2fs_sb_info *sbi, struct page *page)
 {
 	struct f2fs_inode *ri = &F2FS_NODE(page)->i;
-	int extra_isize = le32_to_cpu(ri->i_extra_isize);
 	if (!f2fs_sb_has_inode_chksum(sbi->sb))
 		return false;
@@ -125,7 +124,8 @@ static bool f2fs_enable_inode_chksum(struct f2fs_sb_info *sbi, struct page *page
 	if (!RAW_IS_INODE(F2FS_NODE(page)) || !(ri->i_inline & F2FS_EXTRA_ATTR))
 		return false;
-	if (!F2FS_FITS_IN_INODE(ri, extra_isize, i_inode_checksum))
+	if (!F2FS_FITS_IN_INODE(ri, le16_to_cpu(ri->i_extra_isize),
				i_inode_checksum))
 		return false;
 	return true;
@@ -185,6 +185,21 @@ void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct page *page)
 	ri->i_inode_checksum = cpu_to_le32(f2fs_inode_chksum(sbi, page));
 }
+static bool sanity_check_inode(struct inode *inode)
+{
+	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+	if (f2fs_sb_has_flexible_inline_xattr(sbi->sb)
			&& !f2fs_has_extra_attr(inode)) {
+		set_sbi_flag(sbi, SBI_NEED_FSCK);
+		f2fs_msg(sbi->sb, KERN_WARNING,
+			"%s: corrupted inode ino=%lx, run fsck to fix.",
+			__func__, inode->i_ino);
+		return false;
+	}
+	return true;
+}
 static int do_read_inode(struct inode *inode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
@@ -194,14 +209,10 @@ static int do_read_inode(struct inode *inode)
 	projid_t i_projid;
 	/* Check if ino is within scope */
-	if (check_nid_range(sbi, inode->i_ino)) {
-		f2fs_msg(inode->i_sb, KERN_ERR, "bad inode number: %lu",
-				(unsigned long) inode->i_ino);
-		WARN_ON(1);
+	if (f2fs_check_nid_range(sbi, inode->i_ino))
 		return -EINVAL;
-	}
-	node_page = get_node_page(sbi, inode->i_ino);
+	node_page = f2fs_get_node_page(sbi, inode->i_ino);
 	if (IS_ERR(node_page))
 		return PTR_ERR(node_page);
@@ -221,8 +232,11 @@ static int do_read_inode(struct inode *inode)
 	inode->i_ctime.tv_nsec = le32_to_cpu(ri->i_ctime_nsec);
 	inode->i_mtime.tv_nsec = le32_to_cpu(ri->i_mtime_nsec);
 	inode->i_generation = le32_to_cpu(ri->i_generation);
+	if (S_ISDIR(inode->i_mode))
 		fi->i_current_depth = le32_to_cpu(ri->i_current_depth);
+	else if (S_ISREG(inode->i_mode))
+		fi->i_gc_failures[GC_FAILURE_PIN] =
					le16_to_cpu(ri->i_gc_failures);
 	fi->i_xattr_nid = le32_to_cpu(ri->i_xattr_nid);
 	fi->i_flags = le32_to_cpu(ri->i_flags);
 	fi->flags = 0;
@@ -239,7 +253,6 @@ static int do_read_inode(struct inode *inode)
					le16_to_cpu(ri->i_extra_isize) : 0;
 	if (f2fs_sb_has_flexible_inline_xattr(sbi->sb)) {
-		f2fs_bug_on(sbi, !f2fs_has_extra_attr(inode));
 		fi->i_inline_xattr_size = le16_to_cpu(ri->i_inline_xattr_size);
 	} else if (f2fs_has_inline_xattr(inode) ||
				f2fs_has_inline_dentry(inode)) {
@@ -265,10 +278,10 @@ static int do_read_inode(struct inode *inode)
 	if (__written_first_block(ri))
 		set_inode_flag(inode, FI_FIRST_BLOCK_WRITTEN);
-	if (!need_inode_block_update(sbi, inode->i_ino))
+	if (!f2fs_need_inode_block_update(sbi, inode->i_ino))
 		fi->last_disk_size = inode->i_size;
-	if (fi->i_flags & FS_PROJINHERIT_FL)
+	if (fi->i_flags & F2FS_PROJINHERIT_FL)
 		set_inode_flag(inode, FI_PROJ_INHERIT);
 	if (f2fs_has_extra_attr(inode) && f2fs_sb_has_project_quota(sbi->sb) &&
@@ -317,13 +330,17 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino)
 	ret = do_read_inode(inode);
 	if (ret)
 		goto bad_inode;
+	if (!sanity_check_inode(inode)) {
+		ret = -EINVAL;
+		goto bad_inode;
+	}
make_now:
 	if (ino == F2FS_NODE_INO(sbi)) {
 		inode->i_mapping->a_ops = &f2fs_node_aops;
-		mapping_set_gfp_mask(inode->i_mapping, GFP_F2FS_ZERO);
+		mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
 	} else if (ino == F2FS_META_INO(sbi)) {
 		inode->i_mapping->a_ops = &f2fs_meta_aops;
-		mapping_set_gfp_mask(inode->i_mapping, GFP_F2FS_ZERO);
+		mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
 	} else if (S_ISREG(inode->i_mode)) {
 		inode->i_op = &f2fs_file_inode_operations;
 		inode->i_fop = &f2fs_file_operations;
@@ -373,7 +390,7 @@ struct inode *f2fs_iget_retry(struct super_block *sb, unsigned long ino)
 	return inode;
 }
-void update_inode(struct inode *inode, struct page *node_page)
+void f2fs_update_inode(struct inode *inode, struct page *node_page)
 {
 	struct f2fs_inode *ri;
 	struct extent_tree *et = F2FS_I(inode)->extent_tree;
@@ -408,7 +425,12 @@ void update_inode(struct inode *inode, struct page *node_page)
 	ri->i_atime_nsec = cpu_to_le32(inode->i_atime.tv_nsec);
 	ri->i_ctime_nsec = cpu_to_le32(inode->i_ctime.tv_nsec);
 	ri->i_mtime_nsec = cpu_to_le32(inode->i_mtime.tv_nsec);
-	ri->i_current_depth = cpu_to_le32(F2FS_I(inode)->i_current_depth);
+	if (S_ISDIR(inode->i_mode))
+		ri->i_current_depth =
			cpu_to_le32(F2FS_I(inode)->i_current_depth);
+	else if (S_ISREG(inode->i_mode))
+		ri->i_gc_failures =
			cpu_to_le16(F2FS_I(inode)->i_gc_failures[GC_FAILURE_PIN]);
 	ri->i_xattr_nid = cpu_to_le32(F2FS_I(inode)->i_xattr_nid);
 	ri->i_flags = cpu_to_le32(F2FS_I(inode)->i_flags);
 	ri->i_pino = cpu_to_le32(F2FS_I(inode)->i_pino);
@@ -454,12 +476,12 @@ void update_inode(struct inode *inode, struct page *node_page)
 	F2FS_I(inode)->i_disk_time[3] = F2FS_I(inode)->i_crtime;
 }
-void update_inode_page(struct inode *inode)
+void f2fs_update_inode_page(struct inode *inode)
{ {
struct f2fs_sb_info *sbi = F2FS_I_SB(inode); struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct page *node_page; struct page *node_page;
retry: retry:
node_page = get_node_page(sbi, inode->i_ino); node_page = f2fs_get_node_page(sbi, inode->i_ino);
if (IS_ERR(node_page)) { if (IS_ERR(node_page)) {
int err = PTR_ERR(node_page); int err = PTR_ERR(node_page);
if (err == -ENOMEM) { if (err == -ENOMEM) {
...@@ -470,7 +492,7 @@ void update_inode_page(struct inode *inode) ...@@ -470,7 +492,7 @@ void update_inode_page(struct inode *inode)
} }
return; return;
} }
update_inode(inode, node_page); f2fs_update_inode(inode, node_page);
f2fs_put_page(node_page, 1); f2fs_put_page(node_page, 1);
} }
...@@ -489,7 +511,7 @@ int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc) ...@@ -489,7 +511,7 @@ int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc)
* We need to balance fs here to prevent from producing dirty node pages * We need to balance fs here to prevent from producing dirty node pages
* during the urgent cleaning time when runing out of free sections. * during the urgent cleaning time when runing out of free sections.
*/ */
update_inode_page(inode); f2fs_update_inode_page(inode);
if (wbc && wbc->nr_to_write) if (wbc && wbc->nr_to_write)
f2fs_balance_fs(sbi, true); f2fs_balance_fs(sbi, true);
return 0; return 0;
...@@ -506,7 +528,7 @@ void f2fs_evict_inode(struct inode *inode) ...@@ -506,7 +528,7 @@ void f2fs_evict_inode(struct inode *inode)
/* some remained atomic pages should discarded */ /* some remained atomic pages should discarded */
if (f2fs_is_atomic_file(inode)) if (f2fs_is_atomic_file(inode))
drop_inmem_pages(inode); f2fs_drop_inmem_pages(inode);
trace_f2fs_evict_inode(inode); trace_f2fs_evict_inode(inode);
truncate_inode_pages_final(&inode->i_data); truncate_inode_pages_final(&inode->i_data);
...@@ -516,7 +538,7 @@ void f2fs_evict_inode(struct inode *inode) ...@@ -516,7 +538,7 @@ void f2fs_evict_inode(struct inode *inode)
goto out_clear; goto out_clear;
f2fs_bug_on(sbi, get_dirty_pages(inode)); f2fs_bug_on(sbi, get_dirty_pages(inode));
remove_dirty_inode(inode); f2fs_remove_dirty_inode(inode);
f2fs_destroy_extent_tree(inode); f2fs_destroy_extent_tree(inode);
...@@ -525,9 +547,9 @@ void f2fs_evict_inode(struct inode *inode) ...@@ -525,9 +547,9 @@ void f2fs_evict_inode(struct inode *inode)
dquot_initialize(inode); dquot_initialize(inode);
remove_ino_entry(sbi, inode->i_ino, APPEND_INO); f2fs_remove_ino_entry(sbi, inode->i_ino, APPEND_INO);
remove_ino_entry(sbi, inode->i_ino, UPDATE_INO); f2fs_remove_ino_entry(sbi, inode->i_ino, UPDATE_INO);
remove_ino_entry(sbi, inode->i_ino, FLUSH_INO); f2fs_remove_ino_entry(sbi, inode->i_ino, FLUSH_INO);
sb_start_intwrite(inode->i_sb); sb_start_intwrite(inode->i_sb);
set_inode_flag(inode, FI_NO_ALLOC); set_inode_flag(inode, FI_NO_ALLOC);
...@@ -544,7 +566,7 @@ void f2fs_evict_inode(struct inode *inode) ...@@ -544,7 +566,7 @@ void f2fs_evict_inode(struct inode *inode)
#endif #endif
if (!err) { if (!err) {
f2fs_lock_op(sbi); f2fs_lock_op(sbi);
err = remove_inode_page(inode); err = f2fs_remove_inode_page(inode);
f2fs_unlock_op(sbi); f2fs_unlock_op(sbi);
if (err == -ENOENT) if (err == -ENOENT)
err = 0; err = 0;
...@@ -557,7 +579,7 @@ void f2fs_evict_inode(struct inode *inode) ...@@ -557,7 +579,7 @@ void f2fs_evict_inode(struct inode *inode)
} }
if (err) if (err)
update_inode_page(inode); f2fs_update_inode_page(inode);
dquot_free_inode(inode); dquot_free_inode(inode);
sb_end_intwrite(inode->i_sb); sb_end_intwrite(inode->i_sb);
no_delete: no_delete:
...@@ -580,16 +602,19 @@ void f2fs_evict_inode(struct inode *inode) ...@@ -580,16 +602,19 @@ void f2fs_evict_inode(struct inode *inode)
invalidate_mapping_pages(NODE_MAPPING(sbi), xnid, xnid); invalidate_mapping_pages(NODE_MAPPING(sbi), xnid, xnid);
if (inode->i_nlink) { if (inode->i_nlink) {
if (is_inode_flag_set(inode, FI_APPEND_WRITE)) if (is_inode_flag_set(inode, FI_APPEND_WRITE))
add_ino_entry(sbi, inode->i_ino, APPEND_INO); f2fs_add_ino_entry(sbi, inode->i_ino, APPEND_INO);
if (is_inode_flag_set(inode, FI_UPDATE_WRITE)) if (is_inode_flag_set(inode, FI_UPDATE_WRITE))
add_ino_entry(sbi, inode->i_ino, UPDATE_INO); f2fs_add_ino_entry(sbi, inode->i_ino, UPDATE_INO);
} }
if (is_inode_flag_set(inode, FI_FREE_NID)) { if (is_inode_flag_set(inode, FI_FREE_NID)) {
alloc_nid_failed(sbi, inode->i_ino); f2fs_alloc_nid_failed(sbi, inode->i_ino);
clear_inode_flag(inode, FI_FREE_NID); clear_inode_flag(inode, FI_FREE_NID);
} else { } else {
f2fs_bug_on(sbi, err && /*
!exist_written_data(sbi, inode->i_ino, ORPHAN_INO)); * If xattr nid is corrupted, we can reach out error condition,
* err & !f2fs_exist_written_data(sbi, inode->i_ino, ORPHAN_INO)).
* In that case, f2fs_check_nid_range() is enough to give a clue.
*/
} }
out_clear: out_clear:
fscrypt_put_encryption_info(inode); fscrypt_put_encryption_info(inode);
...@@ -597,7 +622,7 @@ void f2fs_evict_inode(struct inode *inode) ...@@ -597,7 +622,7 @@ void f2fs_evict_inode(struct inode *inode)
} }
/* caller should call f2fs_lock_op() */ /* caller should call f2fs_lock_op() */
void handle_failed_inode(struct inode *inode) void f2fs_handle_failed_inode(struct inode *inode)
{ {
struct f2fs_sb_info *sbi = F2FS_I_SB(inode); struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct node_info ni; struct node_info ni;
...@@ -612,7 +637,7 @@ void handle_failed_inode(struct inode *inode) ...@@ -612,7 +637,7 @@ void handle_failed_inode(struct inode *inode)
* we must call this to avoid inode being remained as dirty, resulting * we must call this to avoid inode being remained as dirty, resulting
* in a panic when flushing dirty inodes in gdirty_list. * in a panic when flushing dirty inodes in gdirty_list.
*/ */
update_inode_page(inode); f2fs_update_inode_page(inode);
f2fs_inode_synced(inode); f2fs_inode_synced(inode);
/* don't make bad inode, since it becomes a regular file. */ /* don't make bad inode, since it becomes a regular file. */
...@@ -623,18 +648,18 @@ void handle_failed_inode(struct inode *inode) ...@@ -623,18 +648,18 @@ void handle_failed_inode(struct inode *inode)
* so we can prevent losing this orphan when encoutering checkpoint * so we can prevent losing this orphan when encoutering checkpoint
* and following suddenly power-off. * and following suddenly power-off.
*/ */
get_node_info(sbi, inode->i_ino, &ni); f2fs_get_node_info(sbi, inode->i_ino, &ni);
if (ni.blk_addr != NULL_ADDR) { if (ni.blk_addr != NULL_ADDR) {
int err = acquire_orphan_inode(sbi); int err = f2fs_acquire_orphan_inode(sbi);
if (err) { if (err) {
set_sbi_flag(sbi, SBI_NEED_FSCK); set_sbi_flag(sbi, SBI_NEED_FSCK);
f2fs_msg(sbi->sb, KERN_WARNING, f2fs_msg(sbi->sb, KERN_WARNING,
"Too many orphan inodes, run fsck to fix."); "Too many orphan inodes, run fsck to fix.");
} else { } else {
add_orphan_inode(inode); f2fs_add_orphan_inode(inode);
} }
alloc_nid_done(sbi, inode->i_ino); f2fs_alloc_nid_done(sbi, inode->i_ino);
} else { } else {
set_inode_flag(inode, FI_FREE_NID); set_inode_flag(inode, FI_FREE_NID);
} }
......
@@ -37,7 +37,7 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
 		return ERR_PTR(-ENOMEM);
 	f2fs_lock_op(sbi);
-	if (!alloc_nid(sbi, &ino)) {
+	if (!f2fs_alloc_nid(sbi, &ino)) {
 		f2fs_unlock_op(sbi);
 		err = -ENOSPC;
 		goto fail;
@@ -54,6 +54,9 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
 	F2FS_I(inode)->i_crtime = current_time(inode);
 	inode->i_generation = sbi->s_next_generation++;
+	if (S_ISDIR(inode->i_mode))
+		F2FS_I(inode)->i_current_depth = 1;
+
 	err = insert_inode_locked(inode);
 	if (err) {
 		err = -EINVAL;
@@ -61,7 +64,7 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
 	}
 	if (f2fs_sb_has_project_quota(sbi->sb) &&
-		(F2FS_I(dir)->i_flags & FS_PROJINHERIT_FL))
+		(F2FS_I(dir)->i_flags & F2FS_PROJINHERIT_FL))
 		F2FS_I(inode)->i_projid = F2FS_I(dir)->i_projid;
 	else
 		F2FS_I(inode)->i_projid = make_kprojid(&init_user_ns,
@@ -116,9 +119,9 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
 		f2fs_mask_flags(mode, F2FS_I(dir)->i_flags & F2FS_FL_INHERITED);
 	if (S_ISDIR(inode->i_mode))
-		F2FS_I(inode)->i_flags |= FS_INDEX_FL;
-	if (F2FS_I(inode)->i_flags & FS_PROJINHERIT_FL)
+		F2FS_I(inode)->i_flags |= F2FS_INDEX_FL;
+	if (F2FS_I(inode)->i_flags & F2FS_PROJINHERIT_FL)
 		set_inode_flag(inode, FI_PROJ_INHERIT);
 	trace_f2fs_new_inode(inode, 0);
@@ -193,7 +196,7 @@ static inline void set_file_temperature(struct f2fs_sb_info *sbi, struct inode *
 	up_read(&sbi->sb_lock);
 }
 
-int update_extension_list(struct f2fs_sb_info *sbi, const char *name,
+int f2fs_update_extension_list(struct f2fs_sb_info *sbi, const char *name,
 							bool hot, bool set)
 {
 	__u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
@@ -292,7 +295,7 @@ static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
 		goto out;
 	f2fs_unlock_op(sbi);
-	alloc_nid_done(sbi, ino);
+	f2fs_alloc_nid_done(sbi, ino);
 	d_instantiate_new(dentry, inode);
@@ -302,7 +305,7 @@ static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
 		f2fs_balance_fs(sbi, true);
 	return 0;
 out:
-	handle_failed_inode(inode);
+	f2fs_handle_failed_inode(inode);
 	return err;
 }
@@ -397,7 +400,7 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino)
 		err = PTR_ERR(page);
 		goto out;
 	} else {
-		err = __f2fs_add_link(dir, &dot, NULL, dir->i_ino, S_IFDIR);
+		err = f2fs_do_add_link(dir, &dot, NULL, dir->i_ino, S_IFDIR);
 		if (err)
 			goto out;
 	}
@@ -408,7 +411,7 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino)
 	else if (IS_ERR(page))
 		err = PTR_ERR(page);
 	else
-		err = __f2fs_add_link(dir, &dotdot, NULL, pino, S_IFDIR);
+		err = f2fs_do_add_link(dir, &dotdot, NULL, pino, S_IFDIR);
 out:
 	if (!err)
 		clear_inode_flag(dir, FI_INLINE_DOTS);
@@ -520,7 +523,7 @@ static int f2fs_unlink(struct inode *dir, struct dentry *dentry)
 	f2fs_balance_fs(sbi, true);
 	f2fs_lock_op(sbi);
-	err = acquire_orphan_inode(sbi);
+	err = f2fs_acquire_orphan_inode(sbi);
 	if (err) {
 		f2fs_unlock_op(sbi);
 		f2fs_put_page(page, 0);
@@ -585,9 +588,9 @@ static int f2fs_symlink(struct inode *dir, struct dentry *dentry,
 	f2fs_lock_op(sbi);
 	err = f2fs_add_link(dentry, inode);
 	if (err)
-		goto out_handle_failed_inode;
+		goto out_f2fs_handle_failed_inode;
 	f2fs_unlock_op(sbi);
-	alloc_nid_done(sbi, inode->i_ino);
+	f2fs_alloc_nid_done(sbi, inode->i_ino);
 	err = fscrypt_encrypt_symlink(inode, symname, len, &disk_link);
 	if (err)
@@ -620,8 +623,8 @@ static int f2fs_symlink(struct inode *dir, struct dentry *dentry,
 	f2fs_balance_fs(sbi, true);
 	goto out_free_encrypted_link;
-out_handle_failed_inode:
-	handle_failed_inode(inode);
+out_f2fs_handle_failed_inode:
+	f2fs_handle_failed_inode(inode);
 out_free_encrypted_link:
 	if (disk_link.name != (unsigned char *)symname)
 		kfree(disk_link.name);
@@ -657,7 +660,7 @@ static int f2fs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
 		goto out_fail;
 	f2fs_unlock_op(sbi);
-	alloc_nid_done(sbi, inode->i_ino);
+	f2fs_alloc_nid_done(sbi, inode->i_ino);
 	d_instantiate_new(dentry, inode);
@@ -669,7 +672,7 @@ static int f2fs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
 out_fail:
 	clear_inode_flag(inode, FI_INC_LINK);
-	handle_failed_inode(inode);
+	f2fs_handle_failed_inode(inode);
 	return err;
 }
@@ -708,7 +711,7 @@ static int f2fs_mknod(struct inode *dir, struct dentry *dentry,
 		goto out;
 	f2fs_unlock_op(sbi);
-	alloc_nid_done(sbi, inode->i_ino);
+	f2fs_alloc_nid_done(sbi, inode->i_ino);
 	d_instantiate_new(dentry, inode);
@@ -718,7 +721,7 @@ static int f2fs_mknod(struct inode *dir, struct dentry *dentry,
 	f2fs_balance_fs(sbi, true);
 	return 0;
 out:
-	handle_failed_inode(inode);
+	f2fs_handle_failed_inode(inode);
 	return err;
 }
@@ -747,7 +750,7 @@ static int __f2fs_tmpfile(struct inode *dir, struct dentry *dentry,
 	}
 	f2fs_lock_op(sbi);
-	err = acquire_orphan_inode(sbi);
+	err = f2fs_acquire_orphan_inode(sbi);
 	if (err)
 		goto out;
@@ -759,8 +762,8 @@ static int __f2fs_tmpfile(struct inode *dir, struct dentry *dentry,
 	 * add this non-linked tmpfile to orphan list, in this way we could
 	 * remove all unused data of tmpfile after abnormal power-off.
 	 */
-	add_orphan_inode(inode);
-	alloc_nid_done(sbi, inode->i_ino);
+	f2fs_add_orphan_inode(inode);
+	f2fs_alloc_nid_done(sbi, inode->i_ino);
 	if (whiteout) {
 		f2fs_i_links_write(inode, false);
@@ -776,9 +779,9 @@ static int __f2fs_tmpfile(struct inode *dir, struct dentry *dentry,
 	return 0;
 release_out:
-	release_orphan_inode(sbi);
+	f2fs_release_orphan_inode(sbi);
 out:
-	handle_failed_inode(inode);
+	f2fs_handle_failed_inode(inode);
 	return err;
 }
@@ -885,7 +888,7 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
 	f2fs_lock_op(sbi);
-	err = acquire_orphan_inode(sbi);
+	err = f2fs_acquire_orphan_inode(sbi);
 	if (err)
 		goto put_out_dir;
@@ -899,9 +902,9 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
 		up_write(&F2FS_I(new_inode)->i_sem);
 		if (!new_inode->i_nlink)
-			add_orphan_inode(new_inode);
+			f2fs_add_orphan_inode(new_inode);
 		else
-			release_orphan_inode(sbi);
+			f2fs_release_orphan_inode(sbi);
 	} else {
 		f2fs_balance_fs(sbi, true);
@@ -969,8 +972,12 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
 		f2fs_put_page(old_dir_page, 0);
 		f2fs_i_links_write(old_dir, false);
 	}
-	if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT)
-		add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
+	if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT) {
+		f2fs_add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
+		if (S_ISDIR(old_inode->i_mode))
+			f2fs_add_ino_entry(sbi, old_inode->i_ino,
+							TRANS_DIR_INO);
+	}
 	f2fs_unlock_op(sbi);
@@ -1121,8 +1128,8 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
 	f2fs_mark_inode_dirty_sync(new_dir, false);
 	if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT) {
-		add_ino_entry(sbi, old_dir->i_ino, TRANS_DIR_INO);
-		add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
+		f2fs_add_ino_entry(sbi, old_dir->i_ino, TRANS_DIR_INO);
+		f2fs_add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
 	}
 	f2fs_unlock_op(sbi);
...
...@@ -23,13 +23,28 @@ ...@@ -23,13 +23,28 @@
#include "trace.h" #include "trace.h"
#include <trace/events/f2fs.h> #include <trace/events/f2fs.h>
#define on_build_free_nids(nmi) mutex_is_locked(&(nm_i)->build_lock) #define on_f2fs_build_free_nids(nmi) mutex_is_locked(&(nm_i)->build_lock)
static struct kmem_cache *nat_entry_slab; static struct kmem_cache *nat_entry_slab;
static struct kmem_cache *free_nid_slab; static struct kmem_cache *free_nid_slab;
static struct kmem_cache *nat_entry_set_slab; static struct kmem_cache *nat_entry_set_slab;
bool available_free_memory(struct f2fs_sb_info *sbi, int type) /*
* Check whether the given nid is within node id range.
*/
int f2fs_check_nid_range(struct f2fs_sb_info *sbi, nid_t nid)
{
if (unlikely(nid < F2FS_ROOT_INO(sbi) || nid >= NM_I(sbi)->max_nid)) {
set_sbi_flag(sbi, SBI_NEED_FSCK);
f2fs_msg(sbi->sb, KERN_WARNING,
"%s: out-of-range nid=%x, run fsck to fix.",
__func__, nid);
return -EINVAL;
}
return 0;
}
bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type)
{ {
struct f2fs_nm_info *nm_i = NM_I(sbi); struct f2fs_nm_info *nm_i = NM_I(sbi);
struct sysinfo val; struct sysinfo val;
...@@ -87,18 +102,10 @@ bool available_free_memory(struct f2fs_sb_info *sbi, int type) ...@@ -87,18 +102,10 @@ bool available_free_memory(struct f2fs_sb_info *sbi, int type)
static void clear_node_page_dirty(struct page *page) static void clear_node_page_dirty(struct page *page)
{ {
struct address_space *mapping = page->mapping;
unsigned int long flags;
if (PageDirty(page)) { if (PageDirty(page)) {
xa_lock_irqsave(&mapping->i_pages, flags); f2fs_clear_radix_tree_dirty_tag(page);
radix_tree_tag_clear(&mapping->i_pages,
page_index(page),
PAGECACHE_TAG_DIRTY);
xa_unlock_irqrestore(&mapping->i_pages, flags);
clear_page_dirty_for_io(page); clear_page_dirty_for_io(page);
dec_page_count(F2FS_M_SB(mapping), F2FS_DIRTY_NODES); dec_page_count(F2FS_P_SB(page), F2FS_DIRTY_NODES);
} }
ClearPageUptodate(page); ClearPageUptodate(page);
} }
...@@ -106,7 +113,7 @@ static void clear_node_page_dirty(struct page *page) ...@@ -106,7 +113,7 @@ static void clear_node_page_dirty(struct page *page)
static struct page *get_current_nat_page(struct f2fs_sb_info *sbi, nid_t nid) static struct page *get_current_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
{ {
pgoff_t index = current_nat_addr(sbi, nid); pgoff_t index = current_nat_addr(sbi, nid);
return get_meta_page(sbi, index); return f2fs_get_meta_page(sbi, index);
} }
static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid) static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
...@@ -123,8 +130,8 @@ static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid) ...@@ -123,8 +130,8 @@ static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
dst_off = next_nat_addr(sbi, src_off); dst_off = next_nat_addr(sbi, src_off);
/* get current nat block page with lock */ /* get current nat block page with lock */
src_page = get_meta_page(sbi, src_off); src_page = f2fs_get_meta_page(sbi, src_off);
dst_page = grab_meta_page(sbi, dst_off); dst_page = f2fs_grab_meta_page(sbi, dst_off);
f2fs_bug_on(sbi, PageDirty(src_page)); f2fs_bug_on(sbi, PageDirty(src_page));
src_addr = page_address(src_page); src_addr = page_address(src_page);
...@@ -260,7 +267,7 @@ static unsigned int __gang_lookup_nat_set(struct f2fs_nm_info *nm_i, ...@@ -260,7 +267,7 @@ static unsigned int __gang_lookup_nat_set(struct f2fs_nm_info *nm_i,
start, nr); start, nr);
} }
int need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid) int f2fs_need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid)
{ {
struct f2fs_nm_info *nm_i = NM_I(sbi); struct f2fs_nm_info *nm_i = NM_I(sbi);
struct nat_entry *e; struct nat_entry *e;
...@@ -277,7 +284,7 @@ int need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid) ...@@ -277,7 +284,7 @@ int need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid)
return need; return need;
} }
bool is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid) bool f2fs_is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid)
{ {
struct f2fs_nm_info *nm_i = NM_I(sbi); struct f2fs_nm_info *nm_i = NM_I(sbi);
struct nat_entry *e; struct nat_entry *e;
...@@ -291,7 +298,7 @@ bool is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid) ...@@ -291,7 +298,7 @@ bool is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid)
return is_cp; return is_cp;
} }
bool need_inode_block_update(struct f2fs_sb_info *sbi, nid_t ino) bool f2fs_need_inode_block_update(struct f2fs_sb_info *sbi, nid_t ino)
{ {
struct f2fs_nm_info *nm_i = NM_I(sbi); struct f2fs_nm_info *nm_i = NM_I(sbi);
struct nat_entry *e; struct nat_entry *e;
...@@ -364,8 +371,7 @@ static void set_node_addr(struct f2fs_sb_info *sbi, struct node_info *ni, ...@@ -364,8 +371,7 @@ static void set_node_addr(struct f2fs_sb_info *sbi, struct node_info *ni,
new_blkaddr == NULL_ADDR); new_blkaddr == NULL_ADDR);
f2fs_bug_on(sbi, nat_get_blkaddr(e) == NEW_ADDR && f2fs_bug_on(sbi, nat_get_blkaddr(e) == NEW_ADDR &&
new_blkaddr == NEW_ADDR); new_blkaddr == NEW_ADDR);
f2fs_bug_on(sbi, nat_get_blkaddr(e) != NEW_ADDR && f2fs_bug_on(sbi, is_valid_blkaddr(nat_get_blkaddr(e)) &&
nat_get_blkaddr(e) != NULL_ADDR &&
new_blkaddr == NEW_ADDR); new_blkaddr == NEW_ADDR);
/* increment version no as node is removed */ /* increment version no as node is removed */
...@@ -376,7 +382,7 @@ static void set_node_addr(struct f2fs_sb_info *sbi, struct node_info *ni, ...@@ -376,7 +382,7 @@ static void set_node_addr(struct f2fs_sb_info *sbi, struct node_info *ni,
/* change address */ /* change address */
nat_set_blkaddr(e, new_blkaddr); nat_set_blkaddr(e, new_blkaddr);
if (new_blkaddr == NEW_ADDR || new_blkaddr == NULL_ADDR) if (!is_valid_blkaddr(new_blkaddr))
set_nat_flag(e, IS_CHECKPOINTED, false); set_nat_flag(e, IS_CHECKPOINTED, false);
__set_nat_cache_dirty(nm_i, e); __set_nat_cache_dirty(nm_i, e);
...@@ -391,7 +397,7 @@ static void set_node_addr(struct f2fs_sb_info *sbi, struct node_info *ni, ...@@ -391,7 +397,7 @@ static void set_node_addr(struct f2fs_sb_info *sbi, struct node_info *ni,
up_write(&nm_i->nat_tree_lock); up_write(&nm_i->nat_tree_lock);
} }
int try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink) int f2fs_try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink)
{ {
struct f2fs_nm_info *nm_i = NM_I(sbi); struct f2fs_nm_info *nm_i = NM_I(sbi);
int nr = nr_shrink; int nr = nr_shrink;
...@@ -413,7 +419,8 @@ int try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink) ...@@ -413,7 +419,8 @@ int try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink)
/* /*
* This function always returns success * This function always returns success
*/ */
void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni) void f2fs_get_node_info(struct f2fs_sb_info *sbi, nid_t nid,
struct node_info *ni)
{ {
struct f2fs_nm_info *nm_i = NM_I(sbi); struct f2fs_nm_info *nm_i = NM_I(sbi);
struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_HOT_DATA); struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_HOT_DATA);
...@@ -443,7 +450,7 @@ void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni) ...@@ -443,7 +450,7 @@ void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni)
/* Check current segment summary */ /* Check current segment summary */
down_read(&curseg->journal_rwsem); down_read(&curseg->journal_rwsem);
i = lookup_journal_in_cursum(journal, NAT_JOURNAL, nid, 0); i = f2fs_lookup_journal_in_cursum(journal, NAT_JOURNAL, nid, 0);
if (i >= 0) { if (i >= 0) {
ne = nat_in_journal(journal, i); ne = nat_in_journal(journal, i);
node_info_from_raw_nat(ni, &ne); node_info_from_raw_nat(ni, &ne);
...@@ -458,7 +465,7 @@ void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni) ...@@ -458,7 +465,7 @@ void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni)
index = current_nat_addr(sbi, nid); index = current_nat_addr(sbi, nid);
up_read(&nm_i->nat_tree_lock); up_read(&nm_i->nat_tree_lock);
page = get_meta_page(sbi, index); page = f2fs_get_meta_page(sbi, index);
nat_blk = (struct f2fs_nat_block *)page_address(page); nat_blk = (struct f2fs_nat_block *)page_address(page);
ne = nat_blk->entries[nid - start_nid]; ne = nat_blk->entries[nid - start_nid];
node_info_from_raw_nat(ni, &ne); node_info_from_raw_nat(ni, &ne);
...@@ -471,7 +478,7 @@ void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni) ...@@ -471,7 +478,7 @@ void get_node_info(struct f2fs_sb_info *sbi, nid_t nid, struct node_info *ni)
/* /*
* readahead MAX_RA_NODE number of node pages. * readahead MAX_RA_NODE number of node pages.
*/ */
static void ra_node_pages(struct page *parent, int start, int n) static void f2fs_ra_node_pages(struct page *parent, int start, int n)
{ {
struct f2fs_sb_info *sbi = F2FS_P_SB(parent); struct f2fs_sb_info *sbi = F2FS_P_SB(parent);
struct blk_plug plug; struct blk_plug plug;
...@@ -485,13 +492,13 @@ static void ra_node_pages(struct page *parent, int start, int n) ...@@ -485,13 +492,13 @@ static void ra_node_pages(struct page *parent, int start, int n)
end = min(end, NIDS_PER_BLOCK); end = min(end, NIDS_PER_BLOCK);
for (i = start; i < end; i++) { for (i = start; i < end; i++) {
nid = get_nid(parent, i, false); nid = get_nid(parent, i, false);
ra_node_page(sbi, nid); f2fs_ra_node_page(sbi, nid);
} }
blk_finish_plug(&plug); blk_finish_plug(&plug);
} }
pgoff_t get_next_page_offset(struct dnode_of_data *dn, pgoff_t pgofs) pgoff_t f2fs_get_next_page_offset(struct dnode_of_data *dn, pgoff_t pgofs)
{ {
 	const long direct_index = ADDRS_PER_INODE(dn->inode);
 	const long direct_blks = ADDRS_PER_BLOCK;
@@ -606,7 +613,7 @@ static int get_node_path(struct inode *inode, long block,
  * f2fs_unlock_op() only if ro is not set RDONLY_NODE.
  * In the case of RDONLY_NODE, we don't need to care about mutex.
  */
-int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
+int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
 	struct page *npage[4];
@@ -625,7 +632,7 @@ int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
 	npage[0] = dn->inode_page;

 	if (!npage[0]) {
-		npage[0] = get_node_page(sbi, nids[0]);
+		npage[0] = f2fs_get_node_page(sbi, nids[0]);
 		if (IS_ERR(npage[0]))
 			return PTR_ERR(npage[0]);
 	}
@@ -649,24 +656,24 @@ int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
 		if (!nids[i] && mode == ALLOC_NODE) {
 			/* alloc new node */
-			if (!alloc_nid(sbi, &(nids[i]))) {
+			if (!f2fs_alloc_nid(sbi, &(nids[i]))) {
 				err = -ENOSPC;
 				goto release_pages;
 			}

 			dn->nid = nids[i];
-			npage[i] = new_node_page(dn, noffset[i]);
+			npage[i] = f2fs_new_node_page(dn, noffset[i]);
 			if (IS_ERR(npage[i])) {
-				alloc_nid_failed(sbi, nids[i]);
+				f2fs_alloc_nid_failed(sbi, nids[i]);
 				err = PTR_ERR(npage[i]);
 				goto release_pages;
 			}

 			set_nid(parent, offset[i - 1], nids[i], i == 1);
-			alloc_nid_done(sbi, nids[i]);
+			f2fs_alloc_nid_done(sbi, nids[i]);
 			done = true;
 		} else if (mode == LOOKUP_NODE_RA && i == level && level > 1) {
-			npage[i] = get_node_page_ra(parent, offset[i - 1]);
+			npage[i] = f2fs_get_node_page_ra(parent, offset[i - 1]);
 			if (IS_ERR(npage[i])) {
 				err = PTR_ERR(npage[i]);
 				goto release_pages;
@@ -681,7 +688,7 @@ int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
 		}

 		if (!done) {
-			npage[i] = get_node_page(sbi, nids[i]);
+			npage[i] = f2fs_get_node_page(sbi, nids[i]);
 			if (IS_ERR(npage[i])) {
 				err = PTR_ERR(npage[i]);
 				f2fs_put_page(npage[0], 0);
@@ -720,15 +727,15 @@ static void truncate_node(struct dnode_of_data *dn)
 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
 	struct node_info ni;

-	get_node_info(sbi, dn->nid, &ni);
+	f2fs_get_node_info(sbi, dn->nid, &ni);

 	/* Deallocate node address */
-	invalidate_blocks(sbi, ni.blk_addr);
+	f2fs_invalidate_blocks(sbi, ni.blk_addr);
 	dec_valid_node_count(sbi, dn->inode, dn->nid == dn->inode->i_ino);
 	set_node_addr(sbi, &ni, NULL_ADDR, false);

 	if (dn->nid == dn->inode->i_ino) {
-		remove_orphan_inode(sbi, dn->nid);
+		f2fs_remove_orphan_inode(sbi, dn->nid);
 		dec_valid_inode_count(sbi);
 		f2fs_inode_synced(dn->inode);
 	}
@@ -753,7 +760,7 @@ static int truncate_dnode(struct dnode_of_data *dn)
 		return 1;

 	/* get direct node */
-	page = get_node_page(F2FS_I_SB(dn->inode), dn->nid);
+	page = f2fs_get_node_page(F2FS_I_SB(dn->inode), dn->nid);
 	if (IS_ERR(page) && PTR_ERR(page) == -ENOENT)
 		return 1;
 	else if (IS_ERR(page))
@@ -762,7 +769,7 @@ static int truncate_dnode(struct dnode_of_data *dn)
 	/* Make dnode_of_data for parameter */
 	dn->node_page = page;
 	dn->ofs_in_node = 0;
-	truncate_data_blocks(dn);
+	f2fs_truncate_data_blocks(dn);
 	truncate_node(dn);
 	return 1;
 }
@@ -783,13 +790,13 @@ static int truncate_nodes(struct dnode_of_data *dn, unsigned int nofs,
 	trace_f2fs_truncate_nodes_enter(dn->inode, dn->nid, dn->data_blkaddr);

-	page = get_node_page(F2FS_I_SB(dn->inode), dn->nid);
+	page = f2fs_get_node_page(F2FS_I_SB(dn->inode), dn->nid);
 	if (IS_ERR(page)) {
 		trace_f2fs_truncate_nodes_exit(dn->inode, PTR_ERR(page));
 		return PTR_ERR(page);
 	}

-	ra_node_pages(page, ofs, NIDS_PER_BLOCK);
+	f2fs_ra_node_pages(page, ofs, NIDS_PER_BLOCK);

 	rn = F2FS_NODE(page);
 	if (depth < 3) {
@@ -859,7 +866,7 @@ static int truncate_partial_nodes(struct dnode_of_data *dn,
 	/* get indirect nodes in the path */
 	for (i = 0; i < idx + 1; i++) {
 		/* reference count'll be increased */
-		pages[i] = get_node_page(F2FS_I_SB(dn->inode), nid[i]);
+		pages[i] = f2fs_get_node_page(F2FS_I_SB(dn->inode), nid[i]);
 		if (IS_ERR(pages[i])) {
 			err = PTR_ERR(pages[i]);
 			idx = i - 1;
@@ -868,7 +875,7 @@ static int truncate_partial_nodes(struct dnode_of_data *dn,
 		nid[i + 1] = get_nid(pages[i], offset[i + 1], false);
 	}

-	ra_node_pages(pages[idx], offset[idx + 1], NIDS_PER_BLOCK);
+	f2fs_ra_node_pages(pages[idx], offset[idx + 1], NIDS_PER_BLOCK);

 	/* free direct nodes linked to a partial indirect node */
 	for (i = offset[idx + 1]; i < NIDS_PER_BLOCK; i++) {
@@ -905,7 +912,7 @@ static int truncate_partial_nodes(struct dnode_of_data *dn,
 /*
  * All the block addresses of data and nodes should be nullified.
  */
-int truncate_inode_blocks(struct inode *inode, pgoff_t from)
+int f2fs_truncate_inode_blocks(struct inode *inode, pgoff_t from)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	int err = 0, cont = 1;
@@ -921,7 +928,7 @@ int truncate_inode_blocks(struct inode *inode, pgoff_t from)
 	if (level < 0)
 		return level;

-	page = get_node_page(sbi, inode->i_ino);
+	page = f2fs_get_node_page(sbi, inode->i_ino);
 	if (IS_ERR(page)) {
 		trace_f2fs_truncate_inode_blocks_exit(inode, PTR_ERR(page));
 		return PTR_ERR(page);
@@ -1001,7 +1008,7 @@ int truncate_inode_blocks(struct inode *inode, pgoff_t from)
 }

 /* caller must lock inode page */
-int truncate_xattr_node(struct inode *inode)
+int f2fs_truncate_xattr_node(struct inode *inode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	nid_t nid = F2FS_I(inode)->i_xattr_nid;
@@ -1011,7 +1018,7 @@ int truncate_xattr_node(struct inode *inode)
 	if (!nid)
 		return 0;

-	npage = get_node_page(sbi, nid);
+	npage = f2fs_get_node_page(sbi, nid);
 	if (IS_ERR(npage))
 		return PTR_ERR(npage);
@@ -1026,17 +1033,17 @@ int truncate_xattr_node(struct inode *inode)
  * Caller should grab and release a rwsem by calling f2fs_lock_op() and
  * f2fs_unlock_op().
  */
-int remove_inode_page(struct inode *inode)
+int f2fs_remove_inode_page(struct inode *inode)
 {
 	struct dnode_of_data dn;
 	int err;

 	set_new_dnode(&dn, inode, NULL, NULL, inode->i_ino);
-	err = get_dnode_of_data(&dn, 0, LOOKUP_NODE);
+	err = f2fs_get_dnode_of_data(&dn, 0, LOOKUP_NODE);
 	if (err)
 		return err;

-	err = truncate_xattr_node(inode);
+	err = f2fs_truncate_xattr_node(inode);
 	if (err) {
 		f2fs_put_dnode(&dn);
 		return err;
@@ -1045,7 +1052,7 @@ int remove_inode_page(struct inode *inode)
 	/* remove potential inline_data blocks */
 	if (S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode) ||
 				S_ISLNK(inode->i_mode))
-		truncate_data_blocks_range(&dn, 1);
+		f2fs_truncate_data_blocks_range(&dn, 1);

 	/* 0 is possible, after f2fs_new_inode() has failed */
 	f2fs_bug_on(F2FS_I_SB(inode),
@@ -1056,7 +1063,7 @@ int remove_inode_page(struct inode *inode)
 	return 0;
 }

-struct page *new_inode_page(struct inode *inode)
+struct page *f2fs_new_inode_page(struct inode *inode)
 {
 	struct dnode_of_data dn;
@@ -1064,10 +1071,10 @@ struct page *new_inode_page(struct inode *inode)
 	set_new_dnode(&dn, inode, NULL, NULL, inode->i_ino);

 	/* caller should f2fs_put_page(page, 1); */
-	return new_node_page(&dn, 0);
+	return f2fs_new_node_page(&dn, 0);
 }

-struct page *new_node_page(struct dnode_of_data *dn, unsigned int ofs)
+struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
 	struct node_info new_ni;
@@ -1085,7 +1092,7 @@ struct page *new_node_page(struct dnode_of_data *dn, unsigned int ofs)
 		goto fail;

 #ifdef CONFIG_F2FS_CHECK_FS
-	get_node_info(sbi, dn->nid, &new_ni);
+	f2fs_get_node_info(sbi, dn->nid, &new_ni);
 	f2fs_bug_on(sbi, new_ni.blk_addr != NULL_ADDR);
 #endif
 	new_ni.nid = dn->nid;
@@ -1137,7 +1144,7 @@ static int read_node_page(struct page *page, int op_flags)
 	if (PageUptodate(page))
 		return LOCKED_PAGE;

-	get_node_info(sbi, page->index, &ni);
+	f2fs_get_node_info(sbi, page->index, &ni);

 	if (unlikely(ni.blk_addr == NULL_ADDR)) {
 		ClearPageUptodate(page);
@@ -1151,14 +1158,15 @@ static int read_node_page(struct page *page, int op_flags)
 /*
  * Readahead a node page
  */
-void ra_node_page(struct f2fs_sb_info *sbi, nid_t nid)
+void f2fs_ra_node_page(struct f2fs_sb_info *sbi, nid_t nid)
 {
 	struct page *apage;
 	int err;

 	if (!nid)
 		return;
-	f2fs_bug_on(sbi, check_nid_range(sbi, nid));
+	if (f2fs_check_nid_range(sbi, nid))
+		return;

 	rcu_read_lock();
 	apage = radix_tree_lookup(&NODE_MAPPING(sbi)->i_pages, nid);
@@ -1182,7 +1190,8 @@ static struct page *__get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid,
 	if (!nid)
 		return ERR_PTR(-ENOENT);
-	f2fs_bug_on(sbi, check_nid_range(sbi, nid));
+	if (f2fs_check_nid_range(sbi, nid))
+		return ERR_PTR(-EINVAL);
 repeat:
 	page = f2fs_grab_cache_page(NODE_MAPPING(sbi), nid, false);
 	if (!page)
@@ -1198,7 +1207,7 @@ static struct page *__get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid,
 	}

 	if (parent)
-		ra_node_pages(parent, start + 1, MAX_RA_NODE);
+		f2fs_ra_node_pages(parent, start + 1, MAX_RA_NODE);

 	lock_page(page);
@@ -1232,12 +1241,12 @@ static struct page *__get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid,
 	return page;
 }

-struct page *get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid)
+struct page *f2fs_get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid)
 {
 	return __get_node_page(sbi, nid, NULL, 0);
 }

-struct page *get_node_page_ra(struct page *parent, int start)
+struct page *f2fs_get_node_page_ra(struct page *parent, int start)
 {
 	struct f2fs_sb_info *sbi = F2FS_P_SB(parent);
 	nid_t nid = get_nid(parent, start, false);
@@ -1272,7 +1281,7 @@ static void flush_inline_data(struct f2fs_sb_info *sbi, nid_t ino)
 	ret = f2fs_write_inline_data(inode, page);
 	inode_dec_dirty_pages(inode);
-	remove_dirty_inode(inode);
+	f2fs_remove_dirty_inode(inode);
 	if (ret)
 		set_page_dirty(page);
 page_out:
@@ -1359,11 +1368,8 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,

 	trace_f2fs_writepage(page, NODE);

-	if (unlikely(f2fs_cp_error(sbi))) {
-		dec_page_count(sbi, F2FS_DIRTY_NODES);
-		unlock_page(page);
-		return 0;
-	}
+	if (unlikely(f2fs_cp_error(sbi)))
+		goto redirty_out;

 	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
 		goto redirty_out;
@@ -1379,7 +1385,7 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
 		down_read(&sbi->node_write);
 	}

-	get_node_info(sbi, nid, &ni);
+	f2fs_get_node_info(sbi, nid, &ni);

 	/* This page is already truncated */
 	if (unlikely(ni.blk_addr == NULL_ADDR)) {
@@ -1394,8 +1400,9 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
 		fio.op_flags |= REQ_PREFLUSH | REQ_FUA;

 	set_page_writeback(page);
+	ClearPageError(page);
 	fio.old_blkaddr = ni.blk_addr;
-	write_node_page(nid, &fio);
+	f2fs_do_write_node_page(nid, &fio);
 	set_node_addr(sbi, &ni, fio.new_blkaddr, is_fsync_dnode(page));
 	dec_page_count(sbi, F2FS_DIRTY_NODES);
 	up_read(&sbi->node_write);
@@ -1424,7 +1431,7 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
 	return AOP_WRITEPAGE_ACTIVATE;
 }

-void move_node_page(struct page *node_page, int gc_type)
+void f2fs_move_node_page(struct page *node_page, int gc_type)
 {
 	if (gc_type == FG_GC) {
 		struct writeback_control wbc = {
@@ -1461,7 +1468,7 @@ static int f2fs_write_node_page(struct page *page,
 	return __write_node_page(page, false, NULL, wbc, false, FS_NODE_IO);
 }

-int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
+int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
 			struct writeback_control *wbc, bool atomic)
 {
 	pgoff_t index;
@@ -1528,9 +1535,9 @@ int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
 			if (IS_INODE(page)) {
 				if (is_inode_flag_set(inode,
 							FI_DIRTY_INODE))
-					update_inode(inode, page);
+					f2fs_update_inode(inode, page);
 				set_dentry_mark(page,
-					need_dentry_mark(sbi, ino));
+					f2fs_need_dentry_mark(sbi, ino));
 			}
 			/* may be written by other thread */
 			if (!PageDirty(page))
@@ -1580,7 +1587,8 @@ int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
 	return ret ? -EIO: 0;
 }

-int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc,
+int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
+				struct writeback_control *wbc,
 				bool do_balance, enum iostat_type io_type)
 {
 	pgoff_t index;
@@ -1588,21 +1596,28 @@ int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc,
 	int step = 0;
 	int nwritten = 0;
 	int ret = 0;
-	int nr_pages;
+	int nr_pages, done = 0;

 	pagevec_init(&pvec);

 next_step:
 	index = 0;

-	while ((nr_pages = pagevec_lookup_tag(&pvec, NODE_MAPPING(sbi), &index,
-			PAGECACHE_TAG_DIRTY))) {
+	while (!done && (nr_pages = pagevec_lookup_tag(&pvec,
+			NODE_MAPPING(sbi), &index, PAGECACHE_TAG_DIRTY))) {
 		int i;

 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];
 			bool submitted = false;

+			/* give a priority to WB_SYNC threads */
+			if (atomic_read(&sbi->wb_sync_req[NODE]) &&
+					wbc->sync_mode == WB_SYNC_NONE) {
+				done = 1;
+				break;
+			}
+
 			/*
 			 * flushing sequence with step:
 			 * 0. indirect nodes
@@ -1681,7 +1696,7 @@ int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc,
 	return ret;
 }

-int wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, nid_t ino)
+int f2fs_wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, nid_t ino)
 {
 	pgoff_t index = 0;
 	struct pagevec pvec;
@@ -1730,14 +1745,21 @@ static int f2fs_write_node_pages(struct address_space *mapping,
 	if (get_pages(sbi, F2FS_DIRTY_NODES) < nr_pages_to_skip(sbi, NODE))
 		goto skip_write;

+	if (wbc->sync_mode == WB_SYNC_ALL)
+		atomic_inc(&sbi->wb_sync_req[NODE]);
+	else if (atomic_read(&sbi->wb_sync_req[NODE]))
+		goto skip_write;
+
 	trace_f2fs_writepages(mapping->host, wbc, NODE);

 	diff = nr_pages_to_write(sbi, NODE, wbc);
-	wbc->sync_mode = WB_SYNC_NONE;
 	blk_start_plug(&plug);
-	sync_node_pages(sbi, wbc, true, FS_NODE_IO);
+	f2fs_sync_node_pages(sbi, wbc, true, FS_NODE_IO);
 	blk_finish_plug(&plug);
 	wbc->nr_to_write = max((long)0, wbc->nr_to_write - diff);
+
+	if (wbc->sync_mode == WB_SYNC_ALL)
+		atomic_dec(&sbi->wb_sync_req[NODE]);
 	return 0;

 skip_write:
@@ -1753,7 +1775,7 @@ static int f2fs_set_node_page_dirty(struct page *page)
 	if (!PageUptodate(page))
 		SetPageUptodate(page);
 	if (!PageDirty(page)) {
-		f2fs_set_page_dirty_nobuffers(page);
+		__set_page_dirty_nobuffers(page);
 		inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_NODES);
 		SetPagePrivate(page);
 		f2fs_trace_pid(page);
@@ -1883,20 +1905,20 @@ static bool add_free_nid(struct f2fs_sb_info *sbi,
  * Thread A                Thread B
  * - f2fs_create
  *  - f2fs_new_inode
- *   - alloc_nid
+ *   - f2fs_alloc_nid
  *    - __insert_nid_to_list(PREALLOC_NID)
  *                     - f2fs_balance_fs_bg
- *                      - build_free_nids
- *                       - __build_free_nids
+ *                      - f2fs_build_free_nids
+ *                       - __f2fs_build_free_nids
  *                        - scan_nat_page
 *                         - add_free_nid
 *                          - __lookup_nat_cache
 *  - f2fs_add_link
- *   - init_inode_metadata
- *    - new_inode_page
- *     - new_node_page
+ *   - f2fs_init_inode_metadata
+ *    - f2fs_new_inode_page
+ *     - f2fs_new_node_page
 *      - set_node_addr
- *  - alloc_nid_done
+ *  - f2fs_alloc_nid_done
 *   - __remove_nid_from_list(PREALLOC_NID)
 *                         - __insert_nid_to_list(FREE_NID)
 */
@@ -2028,7 +2050,8 @@ static void scan_free_nid_bits(struct f2fs_sb_info *sbi)
 	up_read(&nm_i->nat_tree_lock);
 }

-static void __build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
+static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+						bool sync, bool mount)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	int i = 0;
@@ -2041,7 +2064,7 @@ static void __build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
 	if (nm_i->nid_cnt[FREE_NID] >= NAT_ENTRY_PER_BLOCK)
 		return;

-	if (!sync && !available_free_memory(sbi, FREE_NIDS))
+	if (!sync && !f2fs_available_free_memory(sbi, FREE_NIDS))
 		return;

 	if (!mount) {
@@ -2053,7 +2076,7 @@ static void __build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
 	}

 	/* readahead nat pages to be scanned */
-	ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), FREE_NID_PAGES,
+	f2fs_ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), FREE_NID_PAGES,
 							META_NAT, true);

 	down_read(&nm_i->nat_tree_lock);
@@ -2083,14 +2106,14 @@ static void __build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
 	up_read(&nm_i->nat_tree_lock);

-	ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nm_i->next_scan_nid),
+	f2fs_ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nm_i->next_scan_nid),
 					nm_i->ra_nid_pages, META_NAT, false);
 }

-void build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
+void f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
 {
 	mutex_lock(&NM_I(sbi)->build_lock);
-	__build_free_nids(sbi, sync, mount);
+	__f2fs_build_free_nids(sbi, sync, mount);
 	mutex_unlock(&NM_I(sbi)->build_lock);
 }
@@ -2099,7 +2122,7 @@ void build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
  * from second parameter of this function.
  * The returned nid could be used ino as well as nid when inode is created.
  */
-bool alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid)
+bool f2fs_alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	struct free_nid *i = NULL;
@@ -2117,8 +2140,8 @@ bool alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid)
 		return false;
 	}

-	/* We should not use stale free nids created by build_free_nids */
-	if (nm_i->nid_cnt[FREE_NID] && !on_build_free_nids(nm_i)) {
+	/* We should not use stale free nids created by f2fs_build_free_nids */
+	if (nm_i->nid_cnt[FREE_NID] && !on_f2fs_build_free_nids(nm_i)) {
 		f2fs_bug_on(sbi, list_empty(&nm_i->free_nid_list));
 		i = list_first_entry(&nm_i->free_nid_list,
 					struct free_nid, list);
@@ -2135,14 +2158,14 @@ bool alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid)
 	spin_unlock(&nm_i->nid_list_lock);

 	/* Let's scan nat pages and its caches to get free nids */
-	build_free_nids(sbi, true, false);
+	f2fs_build_free_nids(sbi, true, false);
 	goto retry;
 }

 /*
- * alloc_nid() should be called prior to this function.
+ * f2fs_alloc_nid() should be called prior to this function.
  */
-void alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid)
+void f2fs_alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	struct free_nid *i;
@@ -2157,9 +2180,9 @@ void alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid)
 }

 /*
- * alloc_nid() should be called prior to this function.
+ * f2fs_alloc_nid() should be called prior to this function.
  */
-void alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid)
+void f2fs_alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	struct free_nid *i;
@@ -2172,7 +2195,7 @@ void alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid)
 	i = __lookup_free_nid_list(nm_i, nid);
 	f2fs_bug_on(sbi, !i);

-	if (!available_free_memory(sbi, FREE_NIDS)) {
+	if (!f2fs_available_free_memory(sbi, FREE_NIDS)) {
 		__remove_free_nid(sbi, i, PREALLOC_NID);
 		need_free = true;
 	} else {
@@ -2189,7 +2212,7 @@ void alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid)
 		kmem_cache_free(free_nid_slab, i);
 }

-int try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
+int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	struct free_nid *i, *next;
@@ -2217,14 +2240,14 @@ int try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
 	return nr - nr_shrink;
 }

-void recover_inline_xattr(struct inode *inode, struct page *page)
+void f2fs_recover_inline_xattr(struct inode *inode, struct page *page)
 {
 	void *src_addr, *dst_addr;
 	size_t inline_size;
 	struct page *ipage;
 	struct f2fs_inode *ri;

-	ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino);
+	ipage = f2fs_get_node_page(F2FS_I_SB(inode), inode->i_ino);
 	f2fs_bug_on(F2FS_I_SB(inode), IS_ERR(ipage));

 	ri = F2FS_INODE(page);
@@ -2242,11 +2265,11 @@ void recover_inline_xattr(struct inode *inode, struct page *page)
 	f2fs_wait_on_page_writeback(ipage, NODE, true);
 	memcpy(dst_addr, src_addr, inline_size);
 update_inode:
-	update_inode(inode, ipage);
+	f2fs_update_inode(inode, ipage);
 	f2fs_put_page(ipage, 1);
 }

-int recover_xattr_data(struct inode *inode, struct page *page)
+int f2fs_recover_xattr_data(struct inode *inode, struct page *page)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	nid_t prev_xnid = F2FS_I(inode)->i_xattr_nid;
@@ -2259,25 +2282,25 @@ int recover_xattr_data(struct inode *inode, struct page *page)
 		goto recover_xnid;

 	/* 1: invalidate the previous xattr nid */
-	get_node_info(sbi, prev_xnid, &ni);
-	invalidate_blocks(sbi, ni.blk_addr);
+	f2fs_get_node_info(sbi, prev_xnid, &ni);
+	f2fs_invalidate_blocks(sbi, ni.blk_addr);
 	dec_valid_node_count(sbi, inode, false);
 	set_node_addr(sbi, &ni, NULL_ADDR, false);

 recover_xnid:
 	/* 2: update xattr nid in inode */
-	if (!alloc_nid(sbi, &new_xnid))
+	if (!f2fs_alloc_nid(sbi, &new_xnid))
 		return -ENOSPC;

 	set_new_dnode(&dn, inode, NULL, NULL, new_xnid);
-	xpage = new_node_page(&dn, XATTR_NODE_OFFSET);
+	xpage = f2fs_new_node_page(&dn, XATTR_NODE_OFFSET);
 	if (IS_ERR(xpage)) {
-		alloc_nid_failed(sbi, new_xnid);
+		f2fs_alloc_nid_failed(sbi, new_xnid);
 		return PTR_ERR(xpage);
 	}

-	alloc_nid_done(sbi, new_xnid);
-	update_inode_page(inode);
+	f2fs_alloc_nid_done(sbi, new_xnid);
+	f2fs_update_inode_page(inode);

 	/* 3: update and set xattr node page dirty */
 	memcpy(F2FS_NODE(xpage), F2FS_NODE(page), VALID_XATTR_BLOCK_SIZE);
@@ -2288,14 +2311,14 @@ int recover_xattr_data(struct inode *inode, struct page *page)
 	return 0;
 }

-int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
+int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
 {
 	struct f2fs_inode *src, *dst;
 	nid_t ino = ino_of_node(page);
 	struct node_info old_ni, new_ni;
 	struct page *ipage;

-	get_node_info(sbi, ino, &old_ni);
+	f2fs_get_node_info(sbi, ino, &old_ni);

 	if (unlikely(old_ni.blk_addr != NULL_ADDR))
 		return -EINVAL;
@@ -2349,7 +2372,7 @@ int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
 	return 0;
 }

-void restore_node_summary(struct f2fs_sb_info *sbi,
+void f2fs_restore_node_summary(struct f2fs_sb_info *sbi,
 			unsigned int segno, struct f2fs_summary_block *sum)
 {
 	struct f2fs_node *rn;
@@ -2366,10 +2389,10 @@ void restore_node_summary(struct f2fs_sb_info *sbi,
 		nrpages = min(last_offset - i, BIO_MAX_PAGES);

 		/* readahead node pages */
-		ra_meta_pages(sbi, addr, nrpages, META_POR, true);
+		f2fs_ra_meta_pages(sbi, addr, nrpages, META_POR, true);

 		for (idx = addr; idx < addr + nrpages; idx++) {
-			struct page *page = get_tmp_page(sbi, idx);
+			struct page *page = f2fs_get_tmp_page(sbi, idx);

 			rn = F2FS_NODE(page);
 			sum_entry->nid = rn->footer.nid;
@@ -2511,7 +2534,7 @@ static void __flush_nat_entry_set(struct f2fs_sb_info *sbi,
 		f2fs_bug_on(sbi, nat_get_blkaddr(ne) == NEW_ADDR);

 		if (to_journal) {
-			offset = lookup_journal_in_cursum(journal,
+			offset = f2fs_lookup_journal_in_cursum(journal,
 							NAT_JOURNAL, nid, 1);
 			f2fs_bug_on(sbi, offset < 0);
 			raw_ne = &nat_in_journal(journal, offset);
@@ -2548,7 +2571,7 @@ static void __flush_nat_entry_set(struct f2fs_sb_info *sbi,
 /*
  * This function is called during the checkpointing process.
  */
-void flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+void f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_HOT_DATA);
...@@ -2611,7 +2634,7 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi) ...@@ -2611,7 +2634,7 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
nat_bits_addr = __start_cp_addr(sbi) + sbi->blocks_per_seg - nat_bits_addr = __start_cp_addr(sbi) + sbi->blocks_per_seg -
nm_i->nat_bits_blocks; nm_i->nat_bits_blocks;
for (i = 0; i < nm_i->nat_bits_blocks; i++) { for (i = 0; i < nm_i->nat_bits_blocks; i++) {
struct page *page = get_meta_page(sbi, nat_bits_addr++); struct page *page = f2fs_get_meta_page(sbi, nat_bits_addr++);
memcpy(nm_i->nat_bits + (i << F2FS_BLKSIZE_BITS), memcpy(nm_i->nat_bits + (i << F2FS_BLKSIZE_BITS),
page_address(page), F2FS_BLKSIZE); page_address(page), F2FS_BLKSIZE);
...@@ -2754,7 +2777,7 @@ static int init_free_nid_cache(struct f2fs_sb_info *sbi) ...@@ -2754,7 +2777,7 @@ static int init_free_nid_cache(struct f2fs_sb_info *sbi)
return 0; return 0;
} }
int build_node_manager(struct f2fs_sb_info *sbi) int f2fs_build_node_manager(struct f2fs_sb_info *sbi)
{ {
int err; int err;
...@@ -2774,11 +2797,11 @@ int build_node_manager(struct f2fs_sb_info *sbi) ...@@ -2774,11 +2797,11 @@ int build_node_manager(struct f2fs_sb_info *sbi)
/* load free nid status from nat_bits table */ /* load free nid status from nat_bits table */
load_free_nid_bitmap(sbi); load_free_nid_bitmap(sbi);
build_free_nids(sbi, true, true); f2fs_build_free_nids(sbi, true, true);
return 0; return 0;
} }
void destroy_node_manager(struct f2fs_sb_info *sbi) void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi)
{ {
struct f2fs_nm_info *nm_i = NM_I(sbi); struct f2fs_nm_info *nm_i = NM_I(sbi);
struct free_nid *i, *next_i; struct free_nid *i, *next_i;
...@@ -2850,7 +2873,7 @@ void destroy_node_manager(struct f2fs_sb_info *sbi) ...@@ -2850,7 +2873,7 @@ void destroy_node_manager(struct f2fs_sb_info *sbi)
kfree(nm_i); kfree(nm_i);
} }
int __init create_node_manager_caches(void) int __init f2fs_create_node_manager_caches(void)
{ {
nat_entry_slab = f2fs_kmem_cache_create("nat_entry", nat_entry_slab = f2fs_kmem_cache_create("nat_entry",
sizeof(struct nat_entry)); sizeof(struct nat_entry));
...@@ -2876,7 +2899,7 @@ int __init create_node_manager_caches(void) ...@@ -2876,7 +2899,7 @@ int __init create_node_manager_caches(void)
return -ENOMEM; return -ENOMEM;
} }
void destroy_node_manager_caches(void) void f2fs_destroy_node_manager_caches(void)
{ {
kmem_cache_destroy(nat_entry_set_slab); kmem_cache_destroy(nat_entry_set_slab);
kmem_cache_destroy(free_nid_slab); kmem_cache_destroy(free_nid_slab);
......
@@ -47,7 +47,7 @@
 static struct kmem_cache *fsync_entry_slab;
-bool space_for_roll_forward(struct f2fs_sb_info *sbi)
+bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi)
 {
 	s64 nalloc = percpu_counter_sum_positive(&sbi->alloc_valid_block_count);
@@ -162,7 +162,7 @@ static int recover_dentry(struct inode *inode, struct page *ipage,
 			goto out_put;
 		}
-		err = acquire_orphan_inode(F2FS_I_SB(inode));
+		err = f2fs_acquire_orphan_inode(F2FS_I_SB(inode));
 		if (err) {
 			iput(einode);
 			goto out_put;
@@ -173,7 +173,7 @@ static int recover_dentry(struct inode *inode, struct page *ipage,
 	} else if (IS_ERR(page)) {
 		err = PTR_ERR(page);
 	} else {
-		err = __f2fs_do_add_link(dir, &fname, inode,
+		err = f2fs_add_dentry(dir, &fname, inode,
 					inode->i_ino, inode->i_mode);
 	}
 	if (err == -ENOMEM)
@@ -204,8 +204,6 @@ static void recover_inline_flags(struct inode *inode, struct f2fs_inode *ri)
 		set_inode_flag(inode, FI_DATA_EXIST);
 	else
 		clear_inode_flag(inode, FI_DATA_EXIST);
-	if (!(ri->i_inline & F2FS_INLINE_DOTS))
-		clear_inode_flag(inode, FI_INLINE_DOTS);
 }
 static void recover_inode(struct inode *inode, struct page *page)
@@ -254,10 +252,10 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
 	while (1) {
 		struct fsync_inode_entry *entry;
-		if (!is_valid_blkaddr(sbi, blkaddr, META_POR))
+		if (!f2fs_is_valid_meta_blkaddr(sbi, blkaddr, META_POR))
 			return 0;
-		page = get_tmp_page(sbi, blkaddr);
+		page = f2fs_get_tmp_page(sbi, blkaddr);
 		if (!is_recoverable_dnode(page))
 			break;
@@ -271,7 +269,7 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
 		if (!check_only &&
 				IS_INODE(page) && is_dent_dnode(page)) {
-			err = recover_inode_page(sbi, page);
+			err = f2fs_recover_inode_page(sbi, page);
 			if (err)
 				break;
 			quota_inode = true;
@@ -312,7 +310,7 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
 		blkaddr = next_blkaddr_of_node(page);
 		f2fs_put_page(page, 1);
-		ra_meta_pages_cond(sbi, blkaddr);
+		f2fs_ra_meta_pages_cond(sbi, blkaddr);
 	}
 	f2fs_put_page(page, 1);
 	return err;
@@ -355,7 +353,7 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
 		}
 	}
-	sum_page = get_sum_page(sbi, segno);
+	sum_page = f2fs_get_sum_page(sbi, segno);
 	sum_node = (struct f2fs_summary_block *)page_address(sum_page);
 	sum = sum_node->entries[blkoff];
 	f2fs_put_page(sum_page, 1);
@@ -375,7 +373,7 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
 	}
 	/* Get the node page */
-	node_page = get_node_page(sbi, nid);
+	node_page = f2fs_get_node_page(sbi, nid);
 	if (IS_ERR(node_page))
 		return PTR_ERR(node_page);
@@ -400,7 +398,8 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
 		inode = dn->inode;
 	}
-	bidx = start_bidx_of_node(offset, inode) + le16_to_cpu(sum.ofs_in_node);
+	bidx = f2fs_start_bidx_of_node(offset, inode) +
+				le16_to_cpu(sum.ofs_in_node);
 	/*
 	 * if inode page is locked, unlock temporarily, but its reference
@@ -410,11 +409,11 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
 		unlock_page(dn->inode_page);
 	set_new_dnode(&tdn, inode, NULL, NULL, 0);
-	if (get_dnode_of_data(&tdn, bidx, LOOKUP_NODE))
+	if (f2fs_get_dnode_of_data(&tdn, bidx, LOOKUP_NODE))
 		goto out;
 	if (tdn.data_blkaddr == blkaddr)
-		truncate_data_blocks_range(&tdn, 1);
+		f2fs_truncate_data_blocks_range(&tdn, 1);
 	f2fs_put_dnode(&tdn);
 out:
@@ -427,7 +426,7 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
 truncate_out:
 	if (datablock_addr(tdn.inode, tdn.node_page,
 					tdn.ofs_in_node) == blkaddr)
-		truncate_data_blocks_range(&tdn, 1);
+		f2fs_truncate_data_blocks_range(&tdn, 1);
 	if (dn->inode->i_ino == nid && !dn->inode_page_locked)
 		unlock_page(dn->inode_page);
 	return 0;
@@ -443,25 +442,25 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
 	/* step 1: recover xattr */
 	if (IS_INODE(page)) {
-		recover_inline_xattr(inode, page);
+		f2fs_recover_inline_xattr(inode, page);
 	} else if (f2fs_has_xattr_block(ofs_of_node(page))) {
-		err = recover_xattr_data(inode, page);
+		err = f2fs_recover_xattr_data(inode, page);
 		if (!err)
 			recovered++;
 		goto out;
 	}
 	/* step 2: recover inline data */
-	if (recover_inline_data(inode, page))
+	if (f2fs_recover_inline_data(inode, page))
 		goto out;
 	/* step 3: recover data indices */
-	start = start_bidx_of_node(ofs_of_node(page), inode);
+	start = f2fs_start_bidx_of_node(ofs_of_node(page), inode);
 	end = start + ADDRS_PER_PAGE(page, inode);
 	set_new_dnode(&dn, inode, NULL, NULL, 0);
 retry_dn:
-	err = get_dnode_of_data(&dn, start, ALLOC_NODE);
+	err = f2fs_get_dnode_of_data(&dn, start, ALLOC_NODE);
 	if (err) {
 		if (err == -ENOMEM) {
 			congestion_wait(BLK_RW_ASYNC, HZ/50);
@@ -472,7 +471,7 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
 	f2fs_wait_on_page_writeback(dn.node_page, NODE, true);
-	get_node_info(sbi, dn.nid, &ni);
+	f2fs_get_node_info(sbi, dn.nid, &ni);
 	f2fs_bug_on(sbi, ni.ino != ino_of_node(page));
 	f2fs_bug_on(sbi, ofs_of_node(dn.node_page) != ofs_of_node(page));
@@ -488,7 +487,7 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
 		/* dest is invalid, just invalidate src block */
 		if (dest == NULL_ADDR) {
-			truncate_data_blocks_range(&dn, 1);
+			f2fs_truncate_data_blocks_range(&dn, 1);
 			continue;
 		}
@@ -502,19 +501,19 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
 		 * and then reserve one new block in dnode page.
 		 */
 		if (dest == NEW_ADDR) {
-			truncate_data_blocks_range(&dn, 1);
-			reserve_new_block(&dn);
+			f2fs_truncate_data_blocks_range(&dn, 1);
+			f2fs_reserve_new_block(&dn);
 			continue;
 		}
 		/* dest is valid block, try to recover from src to dest */
-		if (is_valid_blkaddr(sbi, dest, META_POR)) {
+		if (f2fs_is_valid_meta_blkaddr(sbi, dest, META_POR)) {
 			if (src == NULL_ADDR) {
-				err = reserve_new_block(&dn);
+				err = f2fs_reserve_new_block(&dn);
 #ifdef CONFIG_F2FS_FAULT_INJECTION
 				while (err)
-					err = reserve_new_block(&dn);
+					err = f2fs_reserve_new_block(&dn);
 #endif
 				/* We should not get -ENOSPC */
 				f2fs_bug_on(sbi, err);
@@ -569,12 +568,12 @@ static int recover_data(struct f2fs_sb_info *sbi, struct list_head *inode_list,
 	while (1) {
 		struct fsync_inode_entry *entry;
-		if (!is_valid_blkaddr(sbi, blkaddr, META_POR))
+		if (!f2fs_is_valid_meta_blkaddr(sbi, blkaddr, META_POR))
 			break;
-		ra_meta_pages_cond(sbi, blkaddr);
-		page = get_tmp_page(sbi, blkaddr);
+		f2fs_ra_meta_pages_cond(sbi, blkaddr);
+		page = f2fs_get_tmp_page(sbi, blkaddr);
 		if (!is_recoverable_dnode(page)) {
 			f2fs_put_page(page, 1);
@@ -612,11 +611,11 @@ static int recover_data(struct f2fs_sb_info *sbi, struct list_head *inode_list,
 		f2fs_put_page(page, 1);
 	}
 	if (!err)
-		allocate_new_segments(sbi);
+		f2fs_allocate_new_segments(sbi);
 	return err;
 }
-int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
+int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
 {
 	struct list_head inode_list;
 	struct list_head dir_list;
@@ -691,7 +690,7 @@ int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
 		struct cp_control cpc = {
 			.reason = CP_RECOVERY,
 		};
-		err = write_checkpoint(sbi, &cpc);
+		err = f2fs_write_checkpoint(sbi, &cpc);
 	}
 	kmem_cache_destroy(fsync_entry_slab);
......
@@ -169,7 +169,7 @@ static unsigned long __find_rev_next_zero_bit(const unsigned long *addr,
 	return result - size + __reverse_ffz(tmp);
 }
-bool need_SSR(struct f2fs_sb_info *sbi)
+bool f2fs_need_SSR(struct f2fs_sb_info *sbi)
 {
 	int node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES);
 	int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS);
@@ -177,14 +177,14 @@ bool need_SSR(struct f2fs_sb_info *sbi)
 	if (test_opt(sbi, LFS))
 		return false;
-	if (sbi->gc_thread && sbi->gc_thread->gc_urgent)
+	if (sbi->gc_mode == GC_URGENT)
 		return true;
 	return free_sections(sbi) <= (node_secs + 2 * dent_secs + imeta_secs +
 			SM_I(sbi)->min_ssr_sections + reserved_sections(sbi));
 }
-void register_inmem_page(struct inode *inode, struct page *page)
+void f2fs_register_inmem_page(struct inode *inode, struct page *page)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct f2fs_inode_info *fi = F2FS_I(inode);
@@ -230,6 +230,8 @@ static int __revoke_inmem_pages(struct inode *inode,
 		lock_page(page);
+		f2fs_wait_on_page_writeback(page, DATA, true);
+
 		if (recover) {
 			struct dnode_of_data dn;
 			struct node_info ni;
@@ -237,7 +239,8 @@ static int __revoke_inmem_pages(struct inode *inode,
 			trace_f2fs_commit_inmem_page(page, INMEM_REVOKE);
 retry:
 			set_new_dnode(&dn, inode, NULL, NULL, 0);
-			err = get_dnode_of_data(&dn, page->index, LOOKUP_NODE);
+			err = f2fs_get_dnode_of_data(&dn, page->index,
+								LOOKUP_NODE);
 			if (err) {
 				if (err == -ENOMEM) {
 					congestion_wait(BLK_RW_ASYNC, HZ/50);
@@ -247,9 +250,9 @@ static int __revoke_inmem_pages(struct inode *inode,
 				err = -EAGAIN;
 				goto next;
 			}
-			get_node_info(sbi, dn.nid, &ni);
+			f2fs_get_node_info(sbi, dn.nid, &ni);
 			if (cur->old_addr == NEW_ADDR) {
-				invalidate_blocks(sbi, dn.data_blkaddr);
+				f2fs_invalidate_blocks(sbi, dn.data_blkaddr);
 				f2fs_update_data_blkaddr(&dn, NEW_ADDR);
 			} else
 				f2fs_replace_block(sbi, &dn, dn.data_blkaddr,
@@ -271,7 +274,7 @@ static int __revoke_inmem_pages(struct inode *inode,
 	return err;
 }
-void drop_inmem_pages_all(struct f2fs_sb_info *sbi)
+void f2fs_drop_inmem_pages_all(struct f2fs_sb_info *sbi, bool gc_failure)
 {
 	struct list_head *head = &sbi->inode_list[ATOMIC_FILE];
 	struct inode *inode;
@@ -287,15 +290,23 @@ void drop_inmem_pages_all(struct f2fs_sb_info *sbi)
 	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
 	if (inode) {
-		drop_inmem_pages(inode);
+		if (gc_failure) {
+			if (fi->i_gc_failures[GC_FAILURE_ATOMIC])
+				goto drop;
+			goto skip;
+		}
+drop:
+		set_inode_flag(inode, FI_ATOMIC_REVOKE_REQUEST);
+		f2fs_drop_inmem_pages(inode);
 		iput(inode);
 	}
+skip:
 	congestion_wait(BLK_RW_ASYNC, HZ/50);
 	cond_resched();
 	goto next;
 }
-void drop_inmem_pages(struct inode *inode)
+void f2fs_drop_inmem_pages(struct inode *inode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct f2fs_inode_info *fi = F2FS_I(inode);
@@ -309,11 +320,11 @@ void drop_inmem_pages(struct inode *inode)
 	mutex_unlock(&fi->inmem_lock);
 	clear_inode_flag(inode, FI_ATOMIC_FILE);
-	clear_inode_flag(inode, FI_HOT_DATA);
+	fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
 	stat_dec_atomic_write(inode);
 }
-void drop_inmem_page(struct inode *inode, struct page *page)
+void f2fs_drop_inmem_page(struct inode *inode, struct page *page)
 {
 	struct f2fs_inode_info *fi = F2FS_I(inode);
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
@@ -328,7 +339,7 @@ void drop_inmem_page(struct inode *inode, struct page *page)
 			break;
 	}
-	f2fs_bug_on(sbi, !cur || cur->page != page);
+	f2fs_bug_on(sbi, list_empty(head) || cur->page != page);
 	list_del(&cur->list);
 	mutex_unlock(&fi->inmem_lock);
@@ -343,8 +354,7 @@ void drop_inmem_page(struct inode *inode, struct page *page)
 	trace_f2fs_commit_inmem_page(page, INMEM_INVALIDATE);
 }
-static int __commit_inmem_pages(struct inode *inode,
-					struct list_head *revoke_list)
+static int __f2fs_commit_inmem_pages(struct inode *inode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct f2fs_inode_info *fi = F2FS_I(inode);
@@ -357,9 +367,12 @@ static int __commit_inmem_pages(struct inode *inode,
 		.op_flags = REQ_SYNC | REQ_PRIO,
 		.io_type = FS_DATA_IO,
 	};
+	struct list_head revoke_list;
 	pgoff_t last_idx = ULONG_MAX;
 	int err = 0;
+	INIT_LIST_HEAD(&revoke_list);
+
 	list_for_each_entry_safe(cur, tmp, &fi->inmem_pages, list) {
 		struct page *page = cur->page;
@@ -371,14 +384,14 @@ static int __commit_inmem_pages(struct inode *inode,
 			f2fs_wait_on_page_writeback(page, DATA, true);
 			if (clear_page_dirty_for_io(page)) {
 				inode_dec_dirty_pages(inode);
-				remove_dirty_inode(inode);
+				f2fs_remove_dirty_inode(inode);
 			}
 retry:
 			fio.page = page;
 			fio.old_blkaddr = NULL_ADDR;
 			fio.encrypted_page = NULL;
 			fio.need_lock = LOCK_DONE;
-			err = do_write_data_page(&fio);
+			err = f2fs_do_write_data_page(&fio);
 			if (err) {
 				if (err == -ENOMEM) {
 					congestion_wait(BLK_RW_ASYNC, HZ/50);
@@ -393,50 +406,46 @@ static int __commit_inmem_pages(struct inode *inode,
 			last_idx = page->index;
 		}
 		unlock_page(page);
-		list_move_tail(&cur->list, revoke_list);
+		list_move_tail(&cur->list, &revoke_list);
 	}
 	if (last_idx != ULONG_MAX)
 		f2fs_submit_merged_write_cond(sbi, inode, 0, last_idx, DATA);
-	if (!err)
-		__revoke_inmem_pages(inode, revoke_list, false, false);
+	if (err) {
+		/*
+		 * try to revoke all committed pages, but still we could fail
+		 * due to no memory or other reason, if that happened, EAGAIN
+		 * will be returned, which means in such case, transaction is
+		 * already not integrity, caller should use journal to do the
+		 * recovery or rewrite & commit last transaction. For other
+		 * error number, revoking was done by filesystem itself.
+		 */
+		err = __revoke_inmem_pages(inode, &revoke_list, false, true);
+
+		/* drop all uncommitted pages */
+		__revoke_inmem_pages(inode, &fi->inmem_pages, true, false);
+	} else {
+		__revoke_inmem_pages(inode, &revoke_list, false, false);
+	}
 	return err;
 }
-int commit_inmem_pages(struct inode *inode)
+int f2fs_commit_inmem_pages(struct inode *inode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct f2fs_inode_info *fi = F2FS_I(inode);
-	struct list_head revoke_list;
 	int err;
-	INIT_LIST_HEAD(&revoke_list);
 	f2fs_balance_fs(sbi, true);
 	f2fs_lock_op(sbi);
 	set_inode_flag(inode, FI_ATOMIC_COMMIT);
 	mutex_lock(&fi->inmem_lock);
-	err = __commit_inmem_pages(inode, &revoke_list);
-	if (err) {
-		int ret;
-		/*
-		 * try to revoke all committed pages, but still we could fail
-		 * due to no memory or other reason, if that happened, EAGAIN
-		 * will be returned, which means in such case, transaction is
-		 * already not integrity, caller should use journal to do the
-		 * recovery or rewrite & commit last transaction. For other
-		 * error number, revoking was done by filesystem itself.
-		 */
-		ret = __revoke_inmem_pages(inode, &revoke_list, false, true);
-		if (ret)
-			err = ret;
-		/* drop all uncommitted pages */
-		__revoke_inmem_pages(inode, &fi->inmem_pages, true, false);
-	}
+	err = __f2fs_commit_inmem_pages(inode);
 	spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
 	if (!list_empty(&fi->inmem_ilist))
 		list_del_init(&fi->inmem_ilist);
@@ -478,25 +487,28 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
 void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi)
 {
+	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
+		return;
+
 	/* try to shrink extent cache when there is no enough memory */
-	if (!available_free_memory(sbi, EXTENT_CACHE))
+	if (!f2fs_available_free_memory(sbi, EXTENT_CACHE))
 		f2fs_shrink_extent_tree(sbi, EXTENT_CACHE_SHRINK_NUMBER);
 	/* check the # of cached NAT entries */
-	if (!available_free_memory(sbi, NAT_ENTRIES))
-		try_to_free_nats(sbi, NAT_ENTRY_PER_BLOCK);
-	if (!available_free_memory(sbi, FREE_NIDS))
-		try_to_free_nids(sbi, MAX_FREE_NIDS);
+	if (!f2fs_available_free_memory(sbi, NAT_ENTRIES))
+		f2fs_try_to_free_nats(sbi, NAT_ENTRY_PER_BLOCK);
+	if (!f2fs_available_free_memory(sbi, FREE_NIDS))
+		f2fs_try_to_free_nids(sbi, MAX_FREE_NIDS);
 	else
-		build_free_nids(sbi, false, false);
+		f2fs_build_free_nids(sbi, false, false);
 	if (!is_idle(sbi) && !excess_dirty_nats(sbi))
 		return;
 	/* checkpoint is the only way to shrink partial cached entries */
-	if (!available_free_memory(sbi, NAT_ENTRIES) ||
-			!available_free_memory(sbi, INO_ENTRIES) ||
+	if (!f2fs_available_free_memory(sbi, NAT_ENTRIES) ||
+			!f2fs_available_free_memory(sbi, INO_ENTRIES) ||
 			excess_prefree_segs(sbi) ||
 			excess_dirty_nats(sbi) ||
 			f2fs_time_over(sbi, CP_TIME)) {
@@ -504,7 +516,7 @@ void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi)
 		struct blk_plug plug;
 		blk_start_plug(&plug);
-		sync_dirty_inodes(sbi, FILE_INODE);
+		f2fs_sync_dirty_inodes(sbi, FILE_INODE);
 		blk_finish_plug(&plug);
 	}
 	f2fs_sync_fs(sbi->sb, true);
@@ -537,7 +549,7 @@ static int submit_flush_wait(struct f2fs_sb_info *sbi, nid_t ino)
 		return __submit_flush_wait(sbi, sbi->sb->s_bdev);
 	for (i = 0; i < sbi->s_ndevs; i++) {
-		if (!is_dirty_device(sbi, ino, i, FLUSH_INO))
+		if (!f2fs_is_dirty_device(sbi, ino, i, FLUSH_INO))
 			continue;
 		ret = __submit_flush_wait(sbi, FDEV(i).bdev);
 		if (ret)
@@ -648,7 +660,7 @@ int f2fs_issue_flush(struct f2fs_sb_info *sbi, nid_t ino)
 	return cmd.ret;
 }
-int create_flush_cmd_control(struct f2fs_sb_info *sbi)
+int f2fs_create_flush_cmd_control(struct f2fs_sb_info *sbi)
 {
 	dev_t dev = sbi->sb->s_bdev->bd_dev;
 	struct flush_cmd_control *fcc;
@@ -685,7 +697,7 @@ int create_flush_cmd_control(struct f2fs_sb_info *sbi)
 	return err;
 }
-void destroy_flush_cmd_control(struct f2fs_sb_info *sbi, bool free)
+void f2fs_destroy_flush_cmd_control(struct f2fs_sb_info *sbi, bool free)
 {
 	struct flush_cmd_control *fcc = SM_I(sbi)->fcc_info;
@@ -915,6 +927,42 @@ static void __check_sit_bitmap(struct f2fs_sb_info *sbi,
 #endif
 }
+static void __init_discard_policy(struct f2fs_sb_info *sbi,
+				struct discard_policy *dpolicy,
+				int discard_type, unsigned int granularity)
+{
+	/* common policy */
+	dpolicy->type = discard_type;
+	dpolicy->sync = true;
+	dpolicy->granularity = granularity;
+	dpolicy->max_requests = DEF_MAX_DISCARD_REQUEST;
+	dpolicy->io_aware_gran = MAX_PLIST_NUM;
+
+	if (discard_type == DPOLICY_BG) {
+		dpolicy->min_interval = DEF_MIN_DISCARD_ISSUE_TIME;
+		dpolicy->mid_interval = DEF_MID_DISCARD_ISSUE_TIME;
+		dpolicy->max_interval = DEF_MAX_DISCARD_ISSUE_TIME;
+		dpolicy->io_aware = true;
+		dpolicy->sync = false;
+		if (utilization(sbi) > DEF_DISCARD_URGENT_UTIL) {
+			dpolicy->granularity = 1;
+			dpolicy->max_interval = DEF_MIN_DISCARD_ISSUE_TIME;
+		}
+	} else if (discard_type == DPOLICY_FORCE) {
+		dpolicy->min_interval = DEF_MIN_DISCARD_ISSUE_TIME;
+		dpolicy->mid_interval = DEF_MID_DISCARD_ISSUE_TIME;
+		dpolicy->max_interval = DEF_MAX_DISCARD_ISSUE_TIME;
+		dpolicy->io_aware = false;
+	} else if (discard_type == DPOLICY_FSTRIM) {
+		dpolicy->io_aware = false;
+	} else if (discard_type == DPOLICY_UMOUNT) {
+		dpolicy->max_requests = UINT_MAX;
+		dpolicy->io_aware = false;
+	}
+}
/* this function is copied from blkdev_issue_discard from block/blk-lib.c */ /* this function is copied from blkdev_issue_discard from block/blk-lib.c */
static void __submit_discard_cmd(struct f2fs_sb_info *sbi, static void __submit_discard_cmd(struct f2fs_sb_info *sbi,
struct discard_policy *dpolicy, struct discard_policy *dpolicy,
...@@ -929,6 +977,9 @@ static void __submit_discard_cmd(struct f2fs_sb_info *sbi, ...@@ -929,6 +977,9 @@ static void __submit_discard_cmd(struct f2fs_sb_info *sbi,
if (dc->state != D_PREP) if (dc->state != D_PREP)
return; return;
if (is_sbi_flag_set(sbi, SBI_NEED_FSCK))
return;
trace_f2fs_issue_discard(dc->bdev, dc->start, dc->len);
dc->error = __blkdev_issue_discard(dc->bdev,
@@ -972,7 +1023,7 @@ static struct discard_cmd *__insert_discard_tree(struct f2fs_sb_info *sbi,
goto do_insert;
}
p = __lookup_rb_tree_for_insert(sbi, &dcc->root, &parent, lstart);
p = f2fs_lookup_rb_tree_for_insert(sbi, &dcc->root, &parent, lstart);
do_insert:
dc = __attach_discard_cmd(sbi, bdev, lstart, start, len, parent, p);
if (!dc)
@@ -1037,7 +1088,7 @@ static void __update_discard_tree_range(struct f2fs_sb_info *sbi,
mutex_lock(&dcc->cmd_lock);
dc = (struct discard_cmd *)__lookup_rb_tree_ret(&dcc->root,
dc = (struct discard_cmd *)f2fs_lookup_rb_tree_ret(&dcc->root,
NULL, lstart,
(struct rb_entry **)&prev_dc,
(struct rb_entry **)&next_dc,
@@ -1130,68 +1181,6 @@ static int __queue_discard_cmd(struct f2fs_sb_info *sbi,
return 0;
}
static void __issue_discard_cmd_range(struct f2fs_sb_info *sbi,
struct discard_policy *dpolicy,
unsigned int start, unsigned int end)
{
struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
struct discard_cmd *prev_dc = NULL, *next_dc = NULL;
struct rb_node **insert_p = NULL, *insert_parent = NULL;
struct discard_cmd *dc;
struct blk_plug plug;
int issued;
next:
issued = 0;
mutex_lock(&dcc->cmd_lock);
f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
dc = (struct discard_cmd *)__lookup_rb_tree_ret(&dcc->root,
NULL, start,
(struct rb_entry **)&prev_dc,
(struct rb_entry **)&next_dc,
&insert_p, &insert_parent, true);
if (!dc)
dc = next_dc;
blk_start_plug(&plug);
while (dc && dc->lstart <= end) {
struct rb_node *node;
if (dc->len < dpolicy->granularity)
goto skip;
if (dc->state != D_PREP) {
list_move_tail(&dc->list, &dcc->fstrim_list);
goto skip;
}
__submit_discard_cmd(sbi, dpolicy, dc);
if (++issued >= dpolicy->max_requests) {
start = dc->lstart + dc->len;
blk_finish_plug(&plug);
mutex_unlock(&dcc->cmd_lock);
schedule();
goto next;
}
skip:
node = rb_next(&dc->rb_node);
dc = rb_entry_safe(node, struct discard_cmd, rb_node);
if (fatal_signal_pending(current))
break;
}
blk_finish_plug(&plug);
mutex_unlock(&dcc->cmd_lock);
}
static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
struct discard_policy *dpolicy)
{
@@ -1210,7 +1199,8 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
mutex_lock(&dcc->cmd_lock);
if (list_empty(pend_list))
goto next;
f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
f2fs_bug_on(sbi,
!f2fs_check_rb_tree_consistence(sbi, &dcc->root));
blk_start_plug(&plug);
list_for_each_entry_safe(dc, tmp, pend_list, list) {
f2fs_bug_on(sbi, dc->state != D_PREP);
@@ -1263,7 +1253,7 @@ static bool __drop_discard_cmd(struct f2fs_sb_info *sbi)
return dropped;
}
void drop_discard_cmd(struct f2fs_sb_info *sbi)
void f2fs_drop_discard_cmd(struct f2fs_sb_info *sbi)
{
__drop_discard_cmd(sbi);
}
@@ -1332,7 +1322,18 @@ static unsigned int __wait_discard_cmd_range(struct f2fs_sb_info *sbi,
static void __wait_all_discard_cmd(struct f2fs_sb_info *sbi,
struct discard_policy *dpolicy)
{
__wait_discard_cmd_range(sbi, dpolicy, 0, UINT_MAX);
struct discard_policy dp;
if (dpolicy) {
__wait_discard_cmd_range(sbi, dpolicy, 0, UINT_MAX);
return;
}
/* wait all */
__init_discard_policy(sbi, &dp, DPOLICY_FSTRIM, 1);
__wait_discard_cmd_range(sbi, &dp, 0, UINT_MAX);
__init_discard_policy(sbi, &dp, DPOLICY_UMOUNT, 1);
__wait_discard_cmd_range(sbi, &dp, 0, UINT_MAX);
}
/* This should be covered by global mutex, &sit_i->sentry_lock */
@@ -1343,7 +1344,8 @@ static void f2fs_wait_discard_bio(struct f2fs_sb_info *sbi, block_t blkaddr)
bool need_wait = false;
mutex_lock(&dcc->cmd_lock);
dc = (struct discard_cmd *)__lookup_rb_tree(&dcc->root, NULL, blkaddr);
dc = (struct discard_cmd *)f2fs_lookup_rb_tree(&dcc->root,
NULL, blkaddr);
if (dc) {
if (dc->state == D_PREP) {
__punch_discard_cmd(sbi, dc, blkaddr);
@@ -1358,7 +1360,7 @@ static void f2fs_wait_discard_bio(struct f2fs_sb_info *sbi, block_t blkaddr)
__wait_one_discard_bio(sbi, dc);
}
void stop_discard_thread(struct f2fs_sb_info *sbi)
void f2fs_stop_discard_thread(struct f2fs_sb_info *sbi)
{
struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
@@ -1377,11 +1379,13 @@ bool f2fs_wait_discard_bios(struct f2fs_sb_info *sbi)
struct discard_policy dpolicy;
bool dropped;
init_discard_policy(&dpolicy, DPOLICY_UMOUNT, dcc->discard_granularity);
__init_discard_policy(sbi, &dpolicy, DPOLICY_UMOUNT,
dcc->discard_granularity);
__issue_discard_cmd(sbi, &dpolicy);
dropped = __drop_discard_cmd(sbi);
__wait_all_discard_cmd(sbi, &dpolicy);
/* just to make sure there is no pending discard commands */
__wait_all_discard_cmd(sbi, NULL);
return dropped;
}
@@ -1397,32 +1401,39 @@ static int issue_discard_thread(void *data)
set_freezable();
do {
init_discard_policy(&dpolicy, DPOLICY_BG,
__init_discard_policy(sbi, &dpolicy, DPOLICY_BG,
dcc->discard_granularity);
wait_event_interruptible_timeout(*q,
kthread_should_stop() || freezing(current) ||
dcc->discard_wake,
msecs_to_jiffies(wait_ms));
if (dcc->discard_wake)
dcc->discard_wake = 0;
if (try_to_freeze())
continue;
if (f2fs_readonly(sbi->sb))
continue;
if (kthread_should_stop())
return 0;
if (is_sbi_flag_set(sbi, SBI_NEED_FSCK)) {
wait_ms = dpolicy.max_interval;
continue;
}
if (dcc->discard_wake)
dcc->discard_wake = 0;
if (sbi->gc_mode == GC_URGENT)
__init_discard_policy(sbi, &dpolicy, DPOLICY_FORCE, 1);
if (sbi->gc_thread && sbi->gc_thread->gc_urgent)
init_discard_policy(&dpolicy, DPOLICY_FORCE, 1);
sb_start_intwrite(sbi->sb);
issued = __issue_discard_cmd(sbi, &dpolicy);
if (issued) {
if (issued > 0) {
__wait_all_discard_cmd(sbi, &dpolicy);
wait_ms = dpolicy.min_interval;
} else if (issued == -1){
wait_ms = dpolicy.mid_interval;
} else {
wait_ms = dpolicy.max_interval;
}
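For orientation, the sleep-interval selection at the end of the discard-thread loop above can be modeled as a pure function. This is an illustrative sketch, not kernel code: the struct is invented, the millisecond values in the test are placeholders, and the meaning of `issued == -1` (issuing skipped, e.g. by the IO-aware check) is inferred from this hunk.

```c
/* Illustrative stand-in for the kernel's discard_policy intervals (ms). */
struct policy_intervals {
	unsigned int min_interval;	/* work remains: poll again soon */
	unsigned int mid_interval;	/* issuing was skipped this round */
	unsigned int max_interval;	/* nothing pending: sleep long */
};

/* issued > 0: commands were submitted; issued == -1: issuing was skipped;
 * issued == 0: the pending lists were empty. */
static unsigned int next_wait_ms(int issued, const struct policy_intervals *p)
{
	if (issued > 0)
		return p->min_interval;
	if (issued == -1)
		return p->mid_interval;
	return p->max_interval;
}
```

This three-way backoff is what lets the thread drain quickly while there is work, yield when the device is busy, and stay asleep when idle.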
@@ -1591,20 +1602,24 @@ static bool add_discard_addrs(struct f2fs_sb_info *sbi, struct cp_control *cpc,
return false;
}
void release_discard_addrs(struct f2fs_sb_info *sbi)
static void release_discard_addr(struct discard_entry *entry)
{
list_del(&entry->list);
kmem_cache_free(discard_entry_slab, entry);
}
void f2fs_release_discard_addrs(struct f2fs_sb_info *sbi)
{
struct list_head *head = &(SM_I(sbi)->dcc_info->entry_list);
struct discard_entry *entry, *this;
/* drop caches */
list_for_each_entry_safe(entry, this, head, list) {
list_del(&entry->list);
kmem_cache_free(discard_entry_slab, entry);
}
list_for_each_entry_safe(entry, this, head, list)
release_discard_addr(entry);
}
/*
* Should call clear_prefree_segments after checkpoint is done.
* Should call f2fs_clear_prefree_segments after checkpoint is done.
*/
static void set_prefree_as_free_segments(struct f2fs_sb_info *sbi)
{
@@ -1617,7 +1632,8 @@ static void set_prefree_as_free_segments(struct f2fs_sb_info *sbi)
mutex_unlock(&dirty_i->seglist_lock);
}
void clear_prefree_segments(struct f2fs_sb_info *sbi, struct cp_control *cpc)
void f2fs_clear_prefree_segments(struct f2fs_sb_info *sbi,
struct cp_control *cpc)
{
struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
struct list_head *head = &dcc->entry_list;
@@ -1700,40 +1716,13 @@ void clear_prefree_segments(struct f2fs_sb_info *sbi, struct cp_control *cpc)
if (cur_pos < sbi->blocks_per_seg)
goto find_next;
list_del(&entry->list);
release_discard_addr(entry);
dcc->nr_discards -= total_len;
kmem_cache_free(discard_entry_slab, entry);
}
wake_up_discard_thread(sbi, false);
}
void init_discard_policy(struct discard_policy *dpolicy,
int discard_type, unsigned int granularity)
{
/* common policy */
dpolicy->type = discard_type;
dpolicy->sync = true;
dpolicy->granularity = granularity;
dpolicy->max_requests = DEF_MAX_DISCARD_REQUEST;
dpolicy->io_aware_gran = MAX_PLIST_NUM;
if (discard_type == DPOLICY_BG) {
dpolicy->min_interval = DEF_MIN_DISCARD_ISSUE_TIME;
dpolicy->max_interval = DEF_MAX_DISCARD_ISSUE_TIME;
dpolicy->io_aware = true;
} else if (discard_type == DPOLICY_FORCE) {
dpolicy->min_interval = DEF_MIN_DISCARD_ISSUE_TIME;
dpolicy->max_interval = DEF_MAX_DISCARD_ISSUE_TIME;
dpolicy->io_aware = false;
} else if (discard_type == DPOLICY_FSTRIM) {
dpolicy->io_aware = false;
} else if (discard_type == DPOLICY_UMOUNT) {
dpolicy->io_aware = false;
}
}
static int create_discard_cmd_control(struct f2fs_sb_info *sbi)
{
dev_t dev = sbi->sb->s_bdev->bd_dev;
@@ -1786,7 +1775,7 @@ static void destroy_discard_cmd_control(struct f2fs_sb_info *sbi)
if (!dcc)
return;
stop_discard_thread(sbi);
f2fs_stop_discard_thread(sbi);
kfree(dcc);
SM_I(sbi)->dcc_info = NULL;
@@ -1833,8 +1822,9 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
(new_vblocks > sbi->blocks_per_seg)));
se->valid_blocks = new_vblocks;
se->mtime = get_mtime(sbi);
se->mtime = get_mtime(sbi, false);
SIT_I(sbi)->max_mtime = se->mtime;
if (se->mtime > SIT_I(sbi)->max_mtime)
SIT_I(sbi)->max_mtime = se->mtime;
/* Update valid block bitmap */
if (del > 0) {
@@ -1902,7 +1892,7 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
get_sec_entry(sbi, segno)->valid_blocks += del;
}
void invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr)
void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr)
{
unsigned int segno = GET_SEGNO(sbi, addr);
struct sit_info *sit_i = SIT_I(sbi);
@@ -1922,14 +1912,14 @@ void invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr)
up_write(&sit_i->sentry_lock);
}
bool is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr)
bool f2fs_is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr)
{
struct sit_info *sit_i = SIT_I(sbi);
unsigned int segno, offset;
struct seg_entry *se;
bool is_cp = false;
if (blkaddr == NEW_ADDR || blkaddr == NULL_ADDR)
if (!is_valid_blkaddr(blkaddr))
return true;
down_read(&sit_i->sentry_lock);
@@ -1961,7 +1951,7 @@ static void __add_sum_entry(struct f2fs_sb_info *sbi, int type,
/*
* Calculate the number of current summary pages for writing
*/
int npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra)
int f2fs_npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra)
{
int valid_sum_count = 0;
int i, sum_in_page;
@@ -1991,14 +1981,15 @@ int npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra)
/*
* Caller should put this summary page
*/
struct page *get_sum_page(struct f2fs_sb_info *sbi, unsigned int segno)
struct page *f2fs_get_sum_page(struct f2fs_sb_info *sbi, unsigned int segno)
{
return get_meta_page(sbi, GET_SUM_BLOCK(sbi, segno));
return f2fs_get_meta_page(sbi, GET_SUM_BLOCK(sbi, segno));
}
void update_meta_page(struct f2fs_sb_info *sbi, void *src, block_t blk_addr)
void f2fs_update_meta_page(struct f2fs_sb_info *sbi,
void *src, block_t blk_addr)
{
struct page *page = grab_meta_page(sbi, blk_addr);
struct page *page = f2fs_grab_meta_page(sbi, blk_addr);
memcpy(page_address(page), src, PAGE_SIZE);
set_page_dirty(page);
@@ -2008,18 +1999,19 @@ void update_meta_page(struct f2fs_sb_info *sbi, void *src, block_t blk_addr)
static void write_sum_page(struct f2fs_sb_info *sbi,
struct f2fs_summary_block *sum_blk, block_t blk_addr)
{
update_meta_page(sbi, (void *)sum_blk, blk_addr);
f2fs_update_meta_page(sbi, (void *)sum_blk, blk_addr);
}
static void write_current_sum_page(struct f2fs_sb_info *sbi,
int type, block_t blk_addr)
{
struct curseg_info *curseg = CURSEG_I(sbi, type);
struct page *page = grab_meta_page(sbi, blk_addr);
struct page *page = f2fs_grab_meta_page(sbi, blk_addr);
struct f2fs_summary_block *src = curseg->sum_blk;
struct f2fs_summary_block *dst;
dst = (struct f2fs_summary_block *)page_address(page);
memset(dst, 0, PAGE_SIZE);
mutex_lock(&curseg->curseg_mutex);
@@ -2259,7 +2251,7 @@ static void change_curseg(struct f2fs_sb_info *sbi, int type)
curseg->alloc_type = SSR;
__next_free_blkoff(sbi, curseg, 0);
sum_page = get_sum_page(sbi, new_segno);
sum_page = f2fs_get_sum_page(sbi, new_segno);
sum_node = (struct f2fs_summary_block *)page_address(sum_page);
memcpy(curseg->sum_blk, sum_node, SUM_ENTRY_SIZE);
f2fs_put_page(sum_page, 1);
@@ -2273,7 +2265,7 @@ static int get_ssr_segment(struct f2fs_sb_info *sbi, int type)
int i, cnt;
bool reversed = false;
/* need_SSR() already forces to do this */
/* f2fs_need_SSR() already forces to do this */
if (v_ops->get_victim(sbi, &segno, BG_GC, type, SSR)) {
curseg->next_segno = segno;
return 1;
@@ -2325,7 +2317,7 @@ static void allocate_segment_by_default(struct f2fs_sb_info *sbi,
new_curseg(sbi, type, false);
else if (curseg->alloc_type == LFS && is_next_segment_free(sbi, type))
new_curseg(sbi, type, false);
else if (need_SSR(sbi) && get_ssr_segment(sbi, type))
else if (f2fs_need_SSR(sbi) && get_ssr_segment(sbi, type))
change_curseg(sbi, type);
else
new_curseg(sbi, type, false);
@@ -2333,7 +2325,7 @@ static void allocate_segment_by_default(struct f2fs_sb_info *sbi,
stat_inc_seg_type(sbi, curseg);
}
void allocate_new_segments(struct f2fs_sb_info *sbi)
void f2fs_allocate_new_segments(struct f2fs_sb_info *sbi)
{
struct curseg_info *curseg;
unsigned int old_segno;
@@ -2355,7 +2347,8 @@ static const struct segment_allocation default_salloc_ops = {
.allocate_segment = allocate_segment_by_default,
};
bool exist_trim_candidates(struct f2fs_sb_info *sbi, struct cp_control *cpc)
bool f2fs_exist_trim_candidates(struct f2fs_sb_info *sbi,
struct cp_control *cpc)
{
__u64 trim_start = cpc->trim_start;
bool has_candidate = false;
@@ -2373,11 +2366,72 @@ bool exist_trim_candidates(struct f2fs_sb_info *sbi, struct cp_control *cpc)
return has_candidate;
}
static void __issue_discard_cmd_range(struct f2fs_sb_info *sbi,
struct discard_policy *dpolicy,
unsigned int start, unsigned int end)
{
struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
struct discard_cmd *prev_dc = NULL, *next_dc = NULL;
struct rb_node **insert_p = NULL, *insert_parent = NULL;
struct discard_cmd *dc;
struct blk_plug plug;
int issued;
next:
issued = 0;
mutex_lock(&dcc->cmd_lock);
f2fs_bug_on(sbi, !f2fs_check_rb_tree_consistence(sbi, &dcc->root));
dc = (struct discard_cmd *)f2fs_lookup_rb_tree_ret(&dcc->root,
NULL, start,
(struct rb_entry **)&prev_dc,
(struct rb_entry **)&next_dc,
&insert_p, &insert_parent, true);
if (!dc)
dc = next_dc;
blk_start_plug(&plug);
while (dc && dc->lstart <= end) {
struct rb_node *node;
if (dc->len < dpolicy->granularity)
goto skip;
if (dc->state != D_PREP) {
list_move_tail(&dc->list, &dcc->fstrim_list);
goto skip;
}
__submit_discard_cmd(sbi, dpolicy, dc);
if (++issued >= dpolicy->max_requests) {
start = dc->lstart + dc->len;
blk_finish_plug(&plug);
mutex_unlock(&dcc->cmd_lock);
__wait_all_discard_cmd(sbi, NULL);
congestion_wait(BLK_RW_ASYNC, HZ/50);
goto next;
}
skip:
node = rb_next(&dc->rb_node);
dc = rb_entry_safe(node, struct discard_cmd, rb_node);
if (fatal_signal_pending(current))
break;
}
blk_finish_plug(&plug);
mutex_unlock(&dcc->cmd_lock);
}
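The batching pattern in the function above (issue at most `max_requests` commands per pass, drop `cmd_lock`, wait, then re-look-up the saved resume point) can be sketched in isolation. This is a user-space illustration only: a plain sorted array stands in for the rb-tree, the locking and waiting are elided, and all names are invented.

```c
#include <stddef.h>

/* Minimal stand-in for a discard command: logical start and length. */
struct cmd { unsigned int lstart, len; };

/* Returns how many commands in [start, end] were "issued". At most
 * max_requests are processed per pass; after a full batch we remember
 * the resume point and start a fresh pass, the way the kernel code
 * re-takes cmd_lock and re-looks-up `start` after its goto next. */
static unsigned int issue_range(const struct cmd *cmds, size_t n,
				unsigned int start, unsigned int end,
				unsigned int max_requests)
{
	unsigned int total = 0;

	for (;;) {
		unsigned int issued = 0;
		size_t i;

		/* re-lookup: first command at or after `start` */
		for (i = 0; i < n && cmds[i].lstart < start; i++)
			;
		for (; i < n && cmds[i].lstart <= end; i++) {
			/* __submit_discard_cmd() would go here */
			total++;
			if (++issued >= max_requests) {
				/* batch full: save resume point, then
				 * (in the kernel) drop the lock and wait */
				start = cmds[i].lstart + cmds[i].len;
				break;
			}
		}
		if (issued < max_requests)
			return total;	/* reached the end of the range */
	}
}
```

Capping each pass keeps the lock hold time and the queue depth bounded, which is the responsiveness goal this series states for splitting large discards.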
int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
{
__u64 start = F2FS_BYTES_TO_BLK(range->start);
__u64 end = start + F2FS_BYTES_TO_BLK(range->len) - 1;
unsigned int start_segno, end_segno, cur_segno;
unsigned int start_segno, end_segno;
block_t start_block, end_block;
struct cp_control cpc;
struct discard_policy dpolicy;
@@ -2388,12 +2442,12 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
return -EINVAL;
if (end <= MAIN_BLKADDR(sbi))
goto out;
return -EINVAL;
if (is_sbi_flag_set(sbi, SBI_NEED_FSCK)) {
f2fs_msg(sbi->sb, KERN_WARNING,
"Found FS corruption, run fsck to fix.");
goto out;
return -EIO;
}
/* start/end segment number in main_area */
@@ -2403,40 +2457,36 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
cpc.reason = CP_DISCARD;
cpc.trim_minlen = max_t(__u64, 1, F2FS_BYTES_TO_BLK(range->minlen));
cpc.trim_start = start_segno;
cpc.trim_end = end_segno;
/* do checkpoint to issue discard commands safely */
for (cur_segno = start_segno; cur_segno <= end_segno;
if (sbi->discard_blks == 0)
goto out;
cur_segno = cpc.trim_end + 1) {
cpc.trim_start = cur_segno;
if (sbi->discard_blks == 0)
break;
else if (sbi->discard_blks < BATCHED_TRIM_BLOCKS(sbi))
cpc.trim_end = end_segno;
else
cpc.trim_end = min_t(unsigned int,
rounddown(cur_segno +
BATCHED_TRIM_SEGMENTS(sbi),
sbi->segs_per_sec) - 1, end_segno);
mutex_lock(&sbi->gc_mutex);
err = write_checkpoint(sbi, &cpc);
mutex_unlock(&sbi->gc_mutex);
if (err)
break;
schedule();
}
mutex_lock(&sbi->gc_mutex);
err = f2fs_write_checkpoint(sbi, &cpc);
mutex_unlock(&sbi->gc_mutex);
if (err)
goto out;
start_block = START_BLOCK(sbi, start_segno);
end_block = START_BLOCK(sbi, min(cur_segno, end_segno) + 1);
end_block = START_BLOCK(sbi, end_segno + 1);
init_discard_policy(&dpolicy, DPOLICY_FSTRIM, cpc.trim_minlen);
__init_discard_policy(sbi, &dpolicy, DPOLICY_FSTRIM, cpc.trim_minlen);
__issue_discard_cmd_range(sbi, &dpolicy, start_block, end_block);
trimmed = __wait_discard_cmd_range(sbi, &dpolicy,
/*
* We filed discard candidates, but actually we don't need to wait for
* all of them, since they'll be issued in idle time along with runtime
* discard option. User configuration looks like using runtime discard
* or periodic fstrim instead of it.
*/
if (!test_opt(sbi, DISCARD)) {
trimmed = __wait_discard_cmd_range(sbi, &dpolicy,
start_block, end_block);
range->len = F2FS_BLK_TO_BYTES(trimmed);
}
out:
range->len = F2FS_BLK_TO_BYTES(trimmed);
return err;
}
@@ -2448,7 +2498,7 @@ static bool __has_curseg_space(struct f2fs_sb_info *sbi, int type)
return false;
}
int rw_hint_to_seg_type(enum rw_hint hint)
int f2fs_rw_hint_to_seg_type(enum rw_hint hint)
{
switch (hint) {
case WRITE_LIFE_SHORT:
@@ -2521,7 +2571,7 @@ int rw_hint_to_seg_type(enum rw_hint hint)
* WRITE_LIFE_LONG " WRITE_LIFE_LONG
*/
enum rw_hint io_type_to_rw_hint(struct f2fs_sb_info *sbi,
enum rw_hint f2fs_io_type_to_rw_hint(struct f2fs_sb_info *sbi,
enum page_type type, enum temp_type temp)
{
if (F2FS_OPTION(sbi).whint_mode == WHINT_MODE_USER) {
@@ -2588,9 +2638,11 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
if (is_cold_data(fio->page) || file_is_cold(inode))
return CURSEG_COLD_DATA;
if (file_is_hot(inode) ||
is_inode_flag_set(inode, FI_HOT_DATA))
is_inode_flag_set(inode, FI_HOT_DATA) ||
is_inode_flag_set(inode, FI_ATOMIC_FILE) ||
is_inode_flag_set(inode, FI_VOLATILE_FILE))
return CURSEG_HOT_DATA;
return rw_hint_to_seg_type(inode->i_write_hint);
return f2fs_rw_hint_to_seg_type(inode->i_write_hint);
} else {
if (IS_DNODE(fio->page))
return is_cold_node(fio->page) ? CURSEG_WARM_NODE :
@@ -2626,7 +2678,7 @@ static int __get_segment_type(struct f2fs_io_info *fio)
return type;
}
void allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
block_t old_blkaddr, block_t *new_blkaddr,
struct f2fs_summary *sum, int type,
struct f2fs_io_info *fio, bool add_list)
@@ -2686,6 +2738,7 @@ void allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
INIT_LIST_HEAD(&fio->list);
fio->in_list = true;
fio->retry = false;
io = sbi->write_io[fio->type] + fio->temp;
spin_lock(&io->io_lock);
list_add_tail(&fio->list, &io->io_list);
@@ -2708,7 +2761,7 @@ static void update_device_state(struct f2fs_io_info *fio)
devidx = f2fs_target_device_index(sbi, fio->new_blkaddr);
/* update device state for fsync */
set_dirty_device(sbi, fio->ino, devidx, FLUSH_INO);
f2fs_set_dirty_device(sbi, fio->ino, devidx, FLUSH_INO);
/* update device state for checkpoint */
if (!f2fs_test_bit(devidx, (char *)&sbi->dirty_device)) {
@@ -2721,23 +2774,28 @@ static void update_device_state(struct f2fs_io_info *fio)
static void do_write_page(struct f2fs_summary *sum, struct f2fs_io_info *fio)
{
int type = __get_segment_type(fio);
int err;
bool keep_order = (test_opt(fio->sbi, LFS) && type == CURSEG_COLD_DATA);
if (keep_order)
down_read(&fio->sbi->io_order_lock);
reallocate:
allocate_data_block(fio->sbi, fio->page, fio->old_blkaddr,
f2fs_allocate_data_block(fio->sbi, fio->page, fio->old_blkaddr,
&fio->new_blkaddr, sum, type, fio, true);
/* writeout dirty page into bdev */
err = f2fs_submit_page_write(fio);
f2fs_submit_page_write(fio);
if (err == -EAGAIN) {
if (fio->retry) {
fio->old_blkaddr = fio->new_blkaddr;
goto reallocate;
} else if (!err) {
update_device_state(fio);
}
update_device_state(fio);
if (keep_order)
up_read(&fio->sbi->io_order_lock);
}
void write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
void f2fs_do_write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
enum iostat_type io_type)
{
struct f2fs_io_info fio = {
@@ -2757,12 +2815,13 @@ void write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
fio.op_flags &= ~REQ_META;
set_page_writeback(page);
ClearPageError(page);
f2fs_submit_page_write(&fio);
f2fs_update_iostat(sbi, io_type, F2FS_BLKSIZE);
}
void write_node_page(unsigned int nid, struct f2fs_io_info *fio)
void f2fs_do_write_node_page(unsigned int nid, struct f2fs_io_info *fio)
{
struct f2fs_summary sum;
@@ -2772,14 +2831,15 @@ void write_node_page(unsigned int nid, struct f2fs_io_info *fio)
f2fs_update_iostat(fio->sbi, fio->io_type, F2FS_BLKSIZE);
}
void write_data_page(struct dnode_of_data *dn, struct f2fs_io_info *fio)
void f2fs_outplace_write_data(struct dnode_of_data *dn,
struct f2fs_io_info *fio)
{
struct f2fs_sb_info *sbi = fio->sbi;
struct f2fs_summary sum;
struct node_info ni;
f2fs_bug_on(sbi, dn->data_blkaddr == NULL_ADDR);
get_node_info(sbi, dn->nid, &ni);
f2fs_get_node_info(sbi, dn->nid, &ni);
set_summary(&sum, dn->nid, dn->ofs_in_node, ni.version);
do_write_page(&sum, fio);
f2fs_update_data_blkaddr(dn, fio->new_blkaddr);
@@ -2787,7 +2847,7 @@ void write_data_page(struct dnode_of_data *dn, struct f2fs_io_info *fio)
f2fs_update_iostat(sbi, fio->io_type, F2FS_BLKSIZE);
}
int rewrite_data_page(struct f2fs_io_info *fio)
int f2fs_inplace_write_data(struct f2fs_io_info *fio)
{
int err;
struct f2fs_sb_info *sbi = fio->sbi;
@@ -2822,7 +2882,7 @@ static inline int __f2fs_get_curseg(struct f2fs_sb_info *sbi,
return i;
}
void __f2fs_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
block_t old_blkaddr, block_t new_blkaddr,
bool recover_curseg, bool recover_newaddr)
{
@@ -2907,7 +2967,7 @@ void f2fs_replace_block(struct f2fs_sb_info *sbi, struct dnode_of_data *dn,
set_summary(&sum, dn->nid, dn->ofs_in_node, version);
__f2fs_replace_block(sbi, &sum, old_addr, new_addr,
f2fs_do_replace_block(sbi, &sum, old_addr, new_addr,
recover_curseg, recover_newaddr);
f2fs_update_data_blkaddr(dn, new_addr);
@@ -2932,7 +2992,7 @@ void f2fs_wait_on_block_writeback(struct f2fs_sb_info *sbi, block_t blkaddr)
{
struct page *cpage;
if (blkaddr == NEW_ADDR || blkaddr == NULL_ADDR)
if (!is_valid_blkaddr(blkaddr))
return;
cpage = find_lock_page(META_MAPPING(sbi), blkaddr);
@@ -2953,7 +3013,7 @@ static void read_compacted_summaries(struct f2fs_sb_info *sbi)
start = start_sum_block(sbi);
page = get_meta_page(sbi, start++);
page = f2fs_get_meta_page(sbi, start++);
kaddr = (unsigned char *)page_address(page);
/* Step 1: restore nat cache */
@@ -2993,7 +3053,7 @@ static void read_compacted_summaries(struct f2fs_sb_info *sbi)
f2fs_put_page(page, 1);
page = NULL;
page = get_meta_page(sbi, start++);
page = f2fs_get_meta_page(sbi, start++);
kaddr = (unsigned char *)page_address(page);
offset = 0; offset = 0;
} }
...@@ -3032,7 +3092,7 @@ static int read_normal_summaries(struct f2fs_sb_info *sbi, int type) ...@@ -3032,7 +3092,7 @@ static int read_normal_summaries(struct f2fs_sb_info *sbi, int type)
blk_addr = GET_SUM_BLOCK(sbi, segno); blk_addr = GET_SUM_BLOCK(sbi, segno);
} }
new = get_meta_page(sbi, blk_addr); new = f2fs_get_meta_page(sbi, blk_addr);
sum = (struct f2fs_summary_block *)page_address(new); sum = (struct f2fs_summary_block *)page_address(new);
if (IS_NODESEG(type)) { if (IS_NODESEG(type)) {
...@@ -3044,7 +3104,7 @@ static int read_normal_summaries(struct f2fs_sb_info *sbi, int type) ...@@ -3044,7 +3104,7 @@ static int read_normal_summaries(struct f2fs_sb_info *sbi, int type)
ns->ofs_in_node = 0; ns->ofs_in_node = 0;
} }
} else { } else {
restore_node_summary(sbi, segno, sum); f2fs_restore_node_summary(sbi, segno, sum);
} }
} }
...@@ -3076,10 +3136,10 @@ static int restore_curseg_summaries(struct f2fs_sb_info *sbi) ...@@ -3076,10 +3136,10 @@ static int restore_curseg_summaries(struct f2fs_sb_info *sbi)
int err; int err;
if (is_set_ckpt_flags(sbi, CP_COMPACT_SUM_FLAG)) { if (is_set_ckpt_flags(sbi, CP_COMPACT_SUM_FLAG)) {
int npages = npages_for_summary_flush(sbi, true); int npages = f2fs_npages_for_summary_flush(sbi, true);
if (npages >= 2) if (npages >= 2)
ra_meta_pages(sbi, start_sum_block(sbi), npages, f2fs_ra_meta_pages(sbi, start_sum_block(sbi), npages,
META_CP, true); META_CP, true);
/* restore for compacted data summary */ /* restore for compacted data summary */
...@@ -3088,7 +3148,7 @@ static int restore_curseg_summaries(struct f2fs_sb_info *sbi) ...@@ -3088,7 +3148,7 @@ static int restore_curseg_summaries(struct f2fs_sb_info *sbi)
} }
if (__exist_node_summaries(sbi)) if (__exist_node_summaries(sbi))
ra_meta_pages(sbi, sum_blk_addr(sbi, NR_CURSEG_TYPE, type), f2fs_ra_meta_pages(sbi, sum_blk_addr(sbi, NR_CURSEG_TYPE, type),
NR_CURSEG_TYPE - type, META_CP, true); NR_CURSEG_TYPE - type, META_CP, true);
for (; type <= CURSEG_COLD_NODE; type++) { for (; type <= CURSEG_COLD_NODE; type++) {
...@@ -3114,8 +3174,9 @@ static void write_compacted_summaries(struct f2fs_sb_info *sbi, block_t blkaddr) ...@@ -3114,8 +3174,9 @@ static void write_compacted_summaries(struct f2fs_sb_info *sbi, block_t blkaddr)
int written_size = 0; int written_size = 0;
int i, j; int i, j;
page = grab_meta_page(sbi, blkaddr++); page = f2fs_grab_meta_page(sbi, blkaddr++);
kaddr = (unsigned char *)page_address(page); kaddr = (unsigned char *)page_address(page);
memset(kaddr, 0, PAGE_SIZE);
/* Step 1: write nat cache */ /* Step 1: write nat cache */
seg_i = CURSEG_I(sbi, CURSEG_HOT_DATA); seg_i = CURSEG_I(sbi, CURSEG_HOT_DATA);
...@@ -3138,8 +3199,9 @@ static void write_compacted_summaries(struct f2fs_sb_info *sbi, block_t blkaddr) ...@@ -3138,8 +3199,9 @@ static void write_compacted_summaries(struct f2fs_sb_info *sbi, block_t blkaddr)
for (j = 0; j < blkoff; j++) { for (j = 0; j < blkoff; j++) {
if (!page) { if (!page) {
page = grab_meta_page(sbi, blkaddr++); page = f2fs_grab_meta_page(sbi, blkaddr++);
kaddr = (unsigned char *)page_address(page); kaddr = (unsigned char *)page_address(page);
memset(kaddr, 0, PAGE_SIZE);
written_size = 0; written_size = 0;
} }
summary = (struct f2fs_summary *)(kaddr + written_size); summary = (struct f2fs_summary *)(kaddr + written_size);
...@@ -3174,7 +3236,7 @@ static void write_normal_summaries(struct f2fs_sb_info *sbi, ...@@ -3174,7 +3236,7 @@ static void write_normal_summaries(struct f2fs_sb_info *sbi,
write_current_sum_page(sbi, i, blkaddr + (i - type)); write_current_sum_page(sbi, i, blkaddr + (i - type));
} }
void write_data_summaries(struct f2fs_sb_info *sbi, block_t start_blk) void f2fs_write_data_summaries(struct f2fs_sb_info *sbi, block_t start_blk)
{ {
if (is_set_ckpt_flags(sbi, CP_COMPACT_SUM_FLAG)) if (is_set_ckpt_flags(sbi, CP_COMPACT_SUM_FLAG))
write_compacted_summaries(sbi, start_blk); write_compacted_summaries(sbi, start_blk);
...@@ -3182,12 +3244,12 @@ void write_data_summaries(struct f2fs_sb_info *sbi, block_t start_blk) ...@@ -3182,12 +3244,12 @@ void write_data_summaries(struct f2fs_sb_info *sbi, block_t start_blk)
write_normal_summaries(sbi, start_blk, CURSEG_HOT_DATA); write_normal_summaries(sbi, start_blk, CURSEG_HOT_DATA);
} }
void write_node_summaries(struct f2fs_sb_info *sbi, block_t start_blk) void f2fs_write_node_summaries(struct f2fs_sb_info *sbi, block_t start_blk)
{ {
write_normal_summaries(sbi, start_blk, CURSEG_HOT_NODE); write_normal_summaries(sbi, start_blk, CURSEG_HOT_NODE);
} }
int lookup_journal_in_cursum(struct f2fs_journal *journal, int type, int f2fs_lookup_journal_in_cursum(struct f2fs_journal *journal, int type,
unsigned int val, int alloc) unsigned int val, int alloc)
{ {
int i; int i;
...@@ -3212,7 +3274,7 @@ int lookup_journal_in_cursum(struct f2fs_journal *journal, int type, ...@@ -3212,7 +3274,7 @@ int lookup_journal_in_cursum(struct f2fs_journal *journal, int type,
static struct page *get_current_sit_page(struct f2fs_sb_info *sbi, static struct page *get_current_sit_page(struct f2fs_sb_info *sbi,
unsigned int segno) unsigned int segno)
{ {
return get_meta_page(sbi, current_sit_addr(sbi, segno)); return f2fs_get_meta_page(sbi, current_sit_addr(sbi, segno));
} }
static struct page *get_next_sit_page(struct f2fs_sb_info *sbi, static struct page *get_next_sit_page(struct f2fs_sb_info *sbi,
...@@ -3225,7 +3287,7 @@ static struct page *get_next_sit_page(struct f2fs_sb_info *sbi, ...@@ -3225,7 +3287,7 @@ static struct page *get_next_sit_page(struct f2fs_sb_info *sbi,
src_off = current_sit_addr(sbi, start); src_off = current_sit_addr(sbi, start);
dst_off = next_sit_addr(sbi, src_off); dst_off = next_sit_addr(sbi, src_off);
page = grab_meta_page(sbi, dst_off); page = f2fs_grab_meta_page(sbi, dst_off);
seg_info_to_sit_page(sbi, page, start); seg_info_to_sit_page(sbi, page, start);
set_page_dirty(page); set_page_dirty(page);
...@@ -3321,7 +3383,7 @@ static void remove_sits_in_journal(struct f2fs_sb_info *sbi) ...@@ -3321,7 +3383,7 @@ static void remove_sits_in_journal(struct f2fs_sb_info *sbi)
* CP calls this function, which flushes SIT entries including sit_journal, * CP calls this function, which flushes SIT entries including sit_journal,
* and moves prefree segs to free segs. * and moves prefree segs to free segs.
*/ */
void flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc) void f2fs_flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
{ {
struct sit_info *sit_i = SIT_I(sbi); struct sit_info *sit_i = SIT_I(sbi);
unsigned long *bitmap = sit_i->dirty_sentries_bitmap; unsigned long *bitmap = sit_i->dirty_sentries_bitmap;
...@@ -3380,6 +3442,11 @@ void flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc) ...@@ -3380,6 +3442,11 @@ void flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
int offset, sit_offset; int offset, sit_offset;
se = get_seg_entry(sbi, segno); se = get_seg_entry(sbi, segno);
#ifdef CONFIG_F2FS_CHECK_FS
if (memcmp(se->cur_valid_map, se->cur_valid_map_mir,
SIT_VBLOCK_MAP_SIZE))
f2fs_bug_on(sbi, 1);
#endif
/* add discard candidates */ /* add discard candidates */
if (!(cpc->reason & CP_DISCARD)) { if (!(cpc->reason & CP_DISCARD)) {
...@@ -3388,17 +3455,21 @@ void flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc) ...@@ -3388,17 +3455,21 @@ void flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
} }
if (to_journal) { if (to_journal) {
offset = lookup_journal_in_cursum(journal, offset = f2fs_lookup_journal_in_cursum(journal,
SIT_JOURNAL, segno, 1); SIT_JOURNAL, segno, 1);
f2fs_bug_on(sbi, offset < 0); f2fs_bug_on(sbi, offset < 0);
segno_in_journal(journal, offset) = segno_in_journal(journal, offset) =
cpu_to_le32(segno); cpu_to_le32(segno);
seg_info_to_raw_sit(se, seg_info_to_raw_sit(se,
&sit_in_journal(journal, offset)); &sit_in_journal(journal, offset));
check_block_count(sbi, segno,
&sit_in_journal(journal, offset));
} else { } else {
sit_offset = SIT_ENTRY_OFFSET(sit_i, segno); sit_offset = SIT_ENTRY_OFFSET(sit_i, segno);
seg_info_to_raw_sit(se, seg_info_to_raw_sit(se,
&raw_sit->entries[sit_offset]); &raw_sit->entries[sit_offset]);
check_block_count(sbi, segno,
&raw_sit->entries[sit_offset]);
} }
__clear_bit(segno, bitmap); __clear_bit(segno, bitmap);
...@@ -3597,9 +3668,10 @@ static int build_sit_entries(struct f2fs_sb_info *sbi) ...@@ -3597,9 +3668,10 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
unsigned int i, start, end; unsigned int i, start, end;
unsigned int readed, start_blk = 0; unsigned int readed, start_blk = 0;
int err = 0; int err = 0;
block_t total_node_blocks = 0;
do { do {
readed = ra_meta_pages(sbi, start_blk, BIO_MAX_PAGES, readed = f2fs_ra_meta_pages(sbi, start_blk, BIO_MAX_PAGES,
META_SIT, true); META_SIT, true);
start = start_blk * sit_i->sents_per_block; start = start_blk * sit_i->sents_per_block;
...@@ -3619,6 +3691,8 @@ static int build_sit_entries(struct f2fs_sb_info *sbi) ...@@ -3619,6 +3691,8 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
if (err) if (err)
return err; return err;
seg_info_from_raw_sit(se, &sit); seg_info_from_raw_sit(se, &sit);
if (IS_NODESEG(se->type))
total_node_blocks += se->valid_blocks;
/* build discard map only one time */ /* build discard map only one time */
if (f2fs_discard_en(sbi)) { if (f2fs_discard_en(sbi)) {
...@@ -3647,15 +3721,28 @@ static int build_sit_entries(struct f2fs_sb_info *sbi) ...@@ -3647,15 +3721,28 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
unsigned int old_valid_blocks; unsigned int old_valid_blocks;
start = le32_to_cpu(segno_in_journal(journal, i)); start = le32_to_cpu(segno_in_journal(journal, i));
if (start >= MAIN_SEGS(sbi)) {
f2fs_msg(sbi->sb, KERN_ERR,
"Wrong journal entry on segno %u",
start);
set_sbi_flag(sbi, SBI_NEED_FSCK);
err = -EINVAL;
break;
}
se = &sit_i->sentries[start]; se = &sit_i->sentries[start];
sit = sit_in_journal(journal, i); sit = sit_in_journal(journal, i);
old_valid_blocks = se->valid_blocks; old_valid_blocks = se->valid_blocks;
if (IS_NODESEG(se->type))
total_node_blocks -= old_valid_blocks;
err = check_block_count(sbi, start, &sit); err = check_block_count(sbi, start, &sit);
if (err) if (err)
break; break;
seg_info_from_raw_sit(se, &sit); seg_info_from_raw_sit(se, &sit);
if (IS_NODESEG(se->type))
total_node_blocks += se->valid_blocks;
if (f2fs_discard_en(sbi)) { if (f2fs_discard_en(sbi)) {
if (is_set_ckpt_flags(sbi, CP_TRIMMED_FLAG)) { if (is_set_ckpt_flags(sbi, CP_TRIMMED_FLAG)) {
...@@ -3664,16 +3751,28 @@ static int build_sit_entries(struct f2fs_sb_info *sbi) ...@@ -3664,16 +3751,28 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
} else { } else {
memcpy(se->discard_map, se->cur_valid_map, memcpy(se->discard_map, se->cur_valid_map,
SIT_VBLOCK_MAP_SIZE); SIT_VBLOCK_MAP_SIZE);
sbi->discard_blks += old_valid_blocks - sbi->discard_blks += old_valid_blocks;
se->valid_blocks; sbi->discard_blks -= se->valid_blocks;
} }
} }
if (sbi->segs_per_sec > 1) if (sbi->segs_per_sec > 1) {
get_sec_entry(sbi, start)->valid_blocks += get_sec_entry(sbi, start)->valid_blocks +=
se->valid_blocks - old_valid_blocks; se->valid_blocks;
get_sec_entry(sbi, start)->valid_blocks -=
old_valid_blocks;
}
} }
up_read(&curseg->journal_rwsem); up_read(&curseg->journal_rwsem);
if (!err && total_node_blocks != valid_node_count(sbi)) {
f2fs_msg(sbi->sb, KERN_ERR,
"SIT is corrupted node# %u vs %u",
total_node_blocks, valid_node_count(sbi));
set_sbi_flag(sbi, SBI_NEED_FSCK);
err = -EINVAL;
}
return err; return err;
} }
...@@ -3772,7 +3871,7 @@ static void init_min_max_mtime(struct f2fs_sb_info *sbi) ...@@ -3772,7 +3871,7 @@ static void init_min_max_mtime(struct f2fs_sb_info *sbi)
down_write(&sit_i->sentry_lock); down_write(&sit_i->sentry_lock);
sit_i->min_mtime = LLONG_MAX; sit_i->min_mtime = ULLONG_MAX;
for (segno = 0; segno < MAIN_SEGS(sbi); segno += sbi->segs_per_sec) { for (segno = 0; segno < MAIN_SEGS(sbi); segno += sbi->segs_per_sec) {
unsigned int i; unsigned int i;
...@@ -3786,11 +3885,11 @@ static void init_min_max_mtime(struct f2fs_sb_info *sbi) ...@@ -3786,11 +3885,11 @@ static void init_min_max_mtime(struct f2fs_sb_info *sbi)
if (sit_i->min_mtime > mtime) if (sit_i->min_mtime > mtime)
sit_i->min_mtime = mtime; sit_i->min_mtime = mtime;
} }
sit_i->max_mtime = get_mtime(sbi); sit_i->max_mtime = get_mtime(sbi, false);
up_write(&sit_i->sentry_lock); up_write(&sit_i->sentry_lock);
} }
int build_segment_manager(struct f2fs_sb_info *sbi) int f2fs_build_segment_manager(struct f2fs_sb_info *sbi)
{ {
struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi); struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi); struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
...@@ -3822,14 +3921,12 @@ int build_segment_manager(struct f2fs_sb_info *sbi) ...@@ -3822,14 +3921,12 @@ int build_segment_manager(struct f2fs_sb_info *sbi)
sm_info->min_hot_blocks = DEF_MIN_HOT_BLOCKS; sm_info->min_hot_blocks = DEF_MIN_HOT_BLOCKS;
sm_info->min_ssr_sections = reserved_sections(sbi); sm_info->min_ssr_sections = reserved_sections(sbi);
sm_info->trim_sections = DEF_BATCHED_TRIM_SECTIONS;
INIT_LIST_HEAD(&sm_info->sit_entry_set); INIT_LIST_HEAD(&sm_info->sit_entry_set);
init_rwsem(&sm_info->curseg_lock); init_rwsem(&sm_info->curseg_lock);
if (!f2fs_readonly(sbi->sb)) { if (!f2fs_readonly(sbi->sb)) {
err = create_flush_cmd_control(sbi); err = f2fs_create_flush_cmd_control(sbi);
if (err) if (err)
return err; return err;
} }
...@@ -3954,13 +4051,13 @@ static void destroy_sit_info(struct f2fs_sb_info *sbi) ...@@ -3954,13 +4051,13 @@ static void destroy_sit_info(struct f2fs_sb_info *sbi)
kfree(sit_i); kfree(sit_i);
} }
void destroy_segment_manager(struct f2fs_sb_info *sbi) void f2fs_destroy_segment_manager(struct f2fs_sb_info *sbi)
{ {
struct f2fs_sm_info *sm_info = SM_I(sbi); struct f2fs_sm_info *sm_info = SM_I(sbi);
if (!sm_info) if (!sm_info)
return; return;
destroy_flush_cmd_control(sbi, true); f2fs_destroy_flush_cmd_control(sbi, true);
destroy_discard_cmd_control(sbi); destroy_discard_cmd_control(sbi);
destroy_dirty_segmap(sbi); destroy_dirty_segmap(sbi);
destroy_curseg(sbi); destroy_curseg(sbi);
...@@ -3970,7 +4067,7 @@ void destroy_segment_manager(struct f2fs_sb_info *sbi) ...@@ -3970,7 +4067,7 @@ void destroy_segment_manager(struct f2fs_sb_info *sbi)
kfree(sm_info); kfree(sm_info);
} }
int __init create_segment_manager_caches(void) int __init f2fs_create_segment_manager_caches(void)
{ {
discard_entry_slab = f2fs_kmem_cache_create("discard_entry", discard_entry_slab = f2fs_kmem_cache_create("discard_entry",
sizeof(struct discard_entry)); sizeof(struct discard_entry));
...@@ -4003,7 +4100,7 @@ int __init create_segment_manager_caches(void) ...@@ -4003,7 +4100,7 @@ int __init create_segment_manager_caches(void)
return -ENOMEM; return -ENOMEM;
} }
void destroy_segment_manager_caches(void) void f2fs_destroy_segment_manager_caches(void)
{ {
kmem_cache_destroy(sit_entry_set_slab); kmem_cache_destroy(sit_entry_set_slab);
kmem_cache_destroy(discard_cmd_slab); kmem_cache_destroy(discard_cmd_slab);
......
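The build_sit_entries() hunks above add a cross-check: node blocks are summed while SIT entries load, adjusted for journal entries, and finally compared against the checkpoint's valid node count. A standalone miniature of that accounting (hypothetical names, not the kernel code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical miniature of the accounting added to build_sit_entries():
 * sum valid blocks of node segments, then compare the total against the
 * checkpoint's recorded valid node count; a mismatch means the SIT area
 * is corrupted and fsck should be requested. */
struct mini_seg {
	int is_node;            /* stand-in for IS_NODESEG(se->type) */
	uint32_t valid_blocks;
};

static int sit_node_blocks_consistent(const struct mini_seg *segs,
				      size_t nsegs,
				      uint32_t ckpt_valid_node_count)
{
	uint32_t total_node_blocks = 0;
	size_t i;

	for (i = 0; i < nsegs; i++)
		if (segs[i].is_node)
			total_node_blocks += segs[i].valid_blocks;

	return total_node_blocks == ckpt_valid_node_count;
}
```

In the kernel version, entries overridden by the SIT journal first subtract the stale count and re-add the journaled one, so the running total stays exact across both passes.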
@@ -85,7 +85,7 @@
 	(GET_SEGOFF_FROM_SEG0(sbi, blk_addr) & ((sbi)->blocks_per_seg - 1))
 
 #define GET_SEGNO(sbi, blk_addr)					\
-	((((blk_addr) == NULL_ADDR) || ((blk_addr) == NEW_ADDR)) ?	\
+	((!is_valid_blkaddr(blk_addr)) ?				\
 	NULL_SEGNO : GET_L2R_SEGNO(FREE_I(sbi),				\
 		GET_SEGNO_FROM_SEG0(sbi, blk_addr)))
 #define BLKS_PER_SEC(sbi)						\
@@ -215,6 +215,8 @@ struct segment_allocation {
 #define IS_DUMMY_WRITTEN_PAGE(page)			\
 		(page_private(page) == (unsigned long)DUMMY_WRITTEN_PAGE)
 
+#define MAX_SKIP_ATOMIC_COUNT			16
+
 struct inmem_pages {
 	struct list_head list;
 	struct page *page;
@@ -375,6 +377,7 @@ static inline void seg_info_to_sit_page(struct f2fs_sb_info *sbi,
 	int i;
 
 	raw_sit = (struct f2fs_sit_block *)page_address(page);
+	memset(raw_sit, 0, PAGE_SIZE);
 	for (i = 0; i < end - start; i++) {
 		rs = &raw_sit->entries[i];
 		se = get_seg_entry(sbi, start + i);
@@ -742,12 +745,23 @@ static inline void set_to_next_sit(struct sit_info *sit_i, unsigned int start)
 #endif
 }
 
-static inline unsigned long long get_mtime(struct f2fs_sb_info *sbi)
+static inline unsigned long long get_mtime(struct f2fs_sb_info *sbi,
+						bool base_time)
 {
 	struct sit_info *sit_i = SIT_I(sbi);
-	time64_t now = ktime_get_real_seconds();
+	time64_t diff, now = ktime_get_real_seconds();
+
+	if (now >= sit_i->mounted_time)
+		return sit_i->elapsed_time + now - sit_i->mounted_time;
 
-	return sit_i->elapsed_time + now - sit_i->mounted_time;
+	/* system time is set to the past */
+	if (!base_time) {
+		diff = sit_i->mounted_time - now;
+		if (sit_i->elapsed_time >= diff)
+			return sit_i->elapsed_time - diff;
+		return 0;
+	}
+	return sit_i->elapsed_time;
 }
 
 static inline void set_summary(struct f2fs_summary *sum, nid_t nid,
@@ -771,15 +785,6 @@ static inline block_t sum_blk_addr(struct f2fs_sb_info *sbi, int base, int type)
 			- (base + 1) + type;
 }
 
-static inline bool no_fggc_candidate(struct f2fs_sb_info *sbi,
-						unsigned int secno)
-{
-	if (get_valid_blocks(sbi, GET_SEG_FROM_SEC(sbi, secno), true) >
-						sbi->fggc_threshold)
-		return true;
-	return false;
-}
-
 static inline bool sec_usage_check(struct f2fs_sb_info *sbi, unsigned int secno)
 {
 	if (IS_CURSEC(sbi, secno) || (sbi->cur_victim_sec == secno))
...
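The reworked get_mtime() above guards against the wall clock being set backwards past the recorded mount time: instead of producing a huge unsigned value, it subtracts the (bounded) difference from the saved elapsed time. A standalone sketch of the same arithmetic (a hypothetical helper mirroring the kernel fields, not the kernel function itself):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the clamped mtime computation. elapsed_time and mounted_time
 * stand in for the sit_info fields; now stands in for
 * ktime_get_real_seconds(). */
static uint64_t mtime_sketch(uint64_t elapsed_time, int64_t mounted_time,
			     int64_t now, int base_time)
{
	int64_t diff;

	/* normal case: clock moved forward since mount */
	if (now >= mounted_time)
		return elapsed_time + (uint64_t)(now - mounted_time);

	/* system time was set to the past */
	if (!base_time) {
		diff = mounted_time - now;
		if (elapsed_time >= (uint64_t)diff)
			return elapsed_time - (uint64_t)diff;
		return 0;	/* never go below zero */
	}
	return elapsed_time;	/* base_time: ignore the backwards jump */
}
```

With the old one-liner, `now < mounted_time` would underflow into an enormous elapsed time and skew segment-age decisions; the clamp keeps mtimes monotonic-ish even after an ill-timed clock change.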
@@ -109,11 +109,11 @@ unsigned long f2fs_shrink_scan(struct shrinker *shrink,
 
 		/* shrink clean nat cache entries */
 		if (freed < nr)
-			freed += try_to_free_nats(sbi, nr - freed);
+			freed += f2fs_try_to_free_nats(sbi, nr - freed);
 
 		/* shrink free nids cache entries */
 		if (freed < nr)
-			freed += try_to_free_nids(sbi, nr - freed);
+			freed += f2fs_try_to_free_nids(sbi, nr - freed);
 
 		spin_lock(&f2fs_list_lock);
 		p = p->next;
...
...@@ -740,6 +740,10 @@ static int parse_options(struct super_block *sb, char *options) ...@@ -740,6 +740,10 @@ static int parse_options(struct super_block *sb, char *options)
} else if (strlen(name) == 6 && } else if (strlen(name) == 6 &&
!strncmp(name, "strict", 6)) { !strncmp(name, "strict", 6)) {
F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_STRICT; F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_STRICT;
} else if (strlen(name) == 9 &&
!strncmp(name, "nobarrier", 9)) {
F2FS_OPTION(sbi).fsync_mode =
FSYNC_MODE_NOBARRIER;
} else { } else {
kfree(name); kfree(name);
return -EINVAL; return -EINVAL;
...@@ -826,15 +830,14 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb) ...@@ -826,15 +830,14 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
/* Initialize f2fs-specific inode info */ /* Initialize f2fs-specific inode info */
atomic_set(&fi->dirty_pages, 0); atomic_set(&fi->dirty_pages, 0);
fi->i_current_depth = 1;
init_rwsem(&fi->i_sem); init_rwsem(&fi->i_sem);
INIT_LIST_HEAD(&fi->dirty_list); INIT_LIST_HEAD(&fi->dirty_list);
INIT_LIST_HEAD(&fi->gdirty_list); INIT_LIST_HEAD(&fi->gdirty_list);
INIT_LIST_HEAD(&fi->inmem_ilist); INIT_LIST_HEAD(&fi->inmem_ilist);
INIT_LIST_HEAD(&fi->inmem_pages); INIT_LIST_HEAD(&fi->inmem_pages);
mutex_init(&fi->inmem_lock); mutex_init(&fi->inmem_lock);
init_rwsem(&fi->dio_rwsem[READ]); init_rwsem(&fi->i_gc_rwsem[READ]);
init_rwsem(&fi->dio_rwsem[WRITE]); init_rwsem(&fi->i_gc_rwsem[WRITE]);
init_rwsem(&fi->i_mmap_sem); init_rwsem(&fi->i_mmap_sem);
init_rwsem(&fi->i_xattr_sem); init_rwsem(&fi->i_xattr_sem);
...@@ -862,7 +865,7 @@ static int f2fs_drop_inode(struct inode *inode) ...@@ -862,7 +865,7 @@ static int f2fs_drop_inode(struct inode *inode)
/* some remained atomic pages should discarded */ /* some remained atomic pages should discarded */
if (f2fs_is_atomic_file(inode)) if (f2fs_is_atomic_file(inode))
drop_inmem_pages(inode); f2fs_drop_inmem_pages(inode);
/* should remain fi->extent_tree for writepage */ /* should remain fi->extent_tree for writepage */
f2fs_destroy_extent_node(inode); f2fs_destroy_extent_node(inode);
...@@ -999,7 +1002,7 @@ static void f2fs_put_super(struct super_block *sb) ...@@ -999,7 +1002,7 @@ static void f2fs_put_super(struct super_block *sb)
struct cp_control cpc = { struct cp_control cpc = {
.reason = CP_UMOUNT, .reason = CP_UMOUNT,
}; };
write_checkpoint(sbi, &cpc); f2fs_write_checkpoint(sbi, &cpc);
} }
/* be sure to wait for any on-going discard commands */ /* be sure to wait for any on-going discard commands */
...@@ -1009,17 +1012,17 @@ static void f2fs_put_super(struct super_block *sb) ...@@ -1009,17 +1012,17 @@ static void f2fs_put_super(struct super_block *sb)
struct cp_control cpc = { struct cp_control cpc = {
.reason = CP_UMOUNT | CP_TRIMMED, .reason = CP_UMOUNT | CP_TRIMMED,
}; };
write_checkpoint(sbi, &cpc); f2fs_write_checkpoint(sbi, &cpc);
} }
/* write_checkpoint can update stat informaion */ /* f2fs_write_checkpoint can update stat informaion */
f2fs_destroy_stats(sbi); f2fs_destroy_stats(sbi);
/* /*
* normally superblock is clean, so we need to release this. * normally superblock is clean, so we need to release this.
* In addition, EIO will skip do checkpoint, we need this as well. * In addition, EIO will skip do checkpoint, we need this as well.
*/ */
release_ino_entry(sbi, true); f2fs_release_ino_entry(sbi, true);
f2fs_leave_shrinker(sbi); f2fs_leave_shrinker(sbi);
mutex_unlock(&sbi->umount_mutex); mutex_unlock(&sbi->umount_mutex);
...@@ -1031,8 +1034,8 @@ static void f2fs_put_super(struct super_block *sb) ...@@ -1031,8 +1034,8 @@ static void f2fs_put_super(struct super_block *sb)
iput(sbi->meta_inode); iput(sbi->meta_inode);
/* destroy f2fs internal modules */ /* destroy f2fs internal modules */
destroy_node_manager(sbi); f2fs_destroy_node_manager(sbi);
destroy_segment_manager(sbi); f2fs_destroy_segment_manager(sbi);
kfree(sbi->ckpt); kfree(sbi->ckpt);
...@@ -1074,7 +1077,7 @@ int f2fs_sync_fs(struct super_block *sb, int sync) ...@@ -1074,7 +1077,7 @@ int f2fs_sync_fs(struct super_block *sb, int sync)
cpc.reason = __get_cp_reason(sbi); cpc.reason = __get_cp_reason(sbi);
mutex_lock(&sbi->gc_mutex); mutex_lock(&sbi->gc_mutex);
err = write_checkpoint(sbi, &cpc); err = f2fs_write_checkpoint(sbi, &cpc);
mutex_unlock(&sbi->gc_mutex); mutex_unlock(&sbi->gc_mutex);
} }
f2fs_trace_ios(NULL, 1); f2fs_trace_ios(NULL, 1);
...@@ -1477,11 +1480,11 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) ...@@ -1477,11 +1480,11 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
*/ */
if ((*flags & SB_RDONLY) || !test_opt(sbi, BG_GC)) { if ((*flags & SB_RDONLY) || !test_opt(sbi, BG_GC)) {
if (sbi->gc_thread) { if (sbi->gc_thread) {
stop_gc_thread(sbi); f2fs_stop_gc_thread(sbi);
need_restart_gc = true; need_restart_gc = true;
} }
} else if (!sbi->gc_thread) { } else if (!sbi->gc_thread) {
err = start_gc_thread(sbi); err = f2fs_start_gc_thread(sbi);
if (err) if (err)
goto restore_opts; goto restore_opts;
need_stop_gc = true; need_stop_gc = true;
...@@ -1504,9 +1507,9 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) ...@@ -1504,9 +1507,9 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
*/ */
if ((*flags & SB_RDONLY) || !test_opt(sbi, FLUSH_MERGE)) { if ((*flags & SB_RDONLY) || !test_opt(sbi, FLUSH_MERGE)) {
clear_opt(sbi, FLUSH_MERGE); clear_opt(sbi, FLUSH_MERGE);
destroy_flush_cmd_control(sbi, false); f2fs_destroy_flush_cmd_control(sbi, false);
} else { } else {
err = create_flush_cmd_control(sbi); err = f2fs_create_flush_cmd_control(sbi);
if (err) if (err)
goto restore_gc; goto restore_gc;
} }
...@@ -1524,11 +1527,11 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) ...@@ -1524,11 +1527,11 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
return 0; return 0;
restore_gc: restore_gc:
if (need_restart_gc) { if (need_restart_gc) {
if (start_gc_thread(sbi)) if (f2fs_start_gc_thread(sbi))
f2fs_msg(sbi->sb, KERN_WARNING, f2fs_msg(sbi->sb, KERN_WARNING,
"background gc thread has stopped"); "background gc thread has stopped");
} else if (need_stop_gc) { } else if (need_stop_gc) {
stop_gc_thread(sbi); f2fs_stop_gc_thread(sbi);
} }
restore_opts: restore_opts:
#ifdef CONFIG_QUOTA #ifdef CONFIG_QUOTA
...@@ -1800,7 +1803,7 @@ static int f2fs_quota_on(struct super_block *sb, int type, int format_id, ...@@ -1800,7 +1803,7 @@ static int f2fs_quota_on(struct super_block *sb, int type, int format_id,
inode = d_inode(path->dentry); inode = d_inode(path->dentry);
inode_lock(inode); inode_lock(inode);
F2FS_I(inode)->i_flags |= FS_NOATIME_FL | FS_IMMUTABLE_FL; F2FS_I(inode)->i_flags |= F2FS_NOATIME_FL | F2FS_IMMUTABLE_FL;
inode_set_flags(inode, S_NOATIME | S_IMMUTABLE, inode_set_flags(inode, S_NOATIME | S_IMMUTABLE,
S_NOATIME | S_IMMUTABLE); S_NOATIME | S_IMMUTABLE);
inode_unlock(inode); inode_unlock(inode);
...@@ -1824,7 +1827,7 @@ static int f2fs_quota_off(struct super_block *sb, int type) ...@@ -1824,7 +1827,7 @@ static int f2fs_quota_off(struct super_block *sb, int type)
goto out_put; goto out_put;
inode_lock(inode); inode_lock(inode);
F2FS_I(inode)->i_flags &= ~(FS_NOATIME_FL | FS_IMMUTABLE_FL); F2FS_I(inode)->i_flags &= ~(F2FS_NOATIME_FL | F2FS_IMMUTABLE_FL);
inode_set_flags(inode, 0, S_NOATIME | S_IMMUTABLE); inode_set_flags(inode, 0, S_NOATIME | S_IMMUTABLE);
inode_unlock(inode); inode_unlock(inode);
f2fs_mark_inode_dirty_sync(inode, false); f2fs_mark_inode_dirty_sync(inode, false);
...@@ -1946,7 +1949,7 @@ static struct inode *f2fs_nfs_get_inode(struct super_block *sb, ...@@ -1946,7 +1949,7 @@ static struct inode *f2fs_nfs_get_inode(struct super_block *sb,
struct f2fs_sb_info *sbi = F2FS_SB(sb); struct f2fs_sb_info *sbi = F2FS_SB(sb);
struct inode *inode; struct inode *inode;
if (check_nid_range(sbi, ino)) if (f2fs_check_nid_range(sbi, ino))
return ERR_PTR(-ESTALE); return ERR_PTR(-ESTALE);
/* /*
...@@ -2129,6 +2132,8 @@ static inline bool sanity_check_area_boundary(struct f2fs_sb_info *sbi, ...@@ -2129,6 +2132,8 @@ static inline bool sanity_check_area_boundary(struct f2fs_sb_info *sbi,
static int sanity_check_raw_super(struct f2fs_sb_info *sbi, static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
struct buffer_head *bh) struct buffer_head *bh)
{ {
block_t segment_count, segs_per_sec, secs_per_zone;
block_t total_sections, blocks_per_seg;
struct f2fs_super_block *raw_super = (struct f2fs_super_block *) struct f2fs_super_block *raw_super = (struct f2fs_super_block *)
(bh->b_data + F2FS_SUPER_OFFSET); (bh->b_data + F2FS_SUPER_OFFSET);
struct super_block *sb = sbi->sb; struct super_block *sb = sbi->sb;
...@@ -2185,6 +2190,72 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi, ...@@ -2185,6 +2190,72 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
return 1; return 1;
} }
segment_count = le32_to_cpu(raw_super->segment_count);
segs_per_sec = le32_to_cpu(raw_super->segs_per_sec);
secs_per_zone = le32_to_cpu(raw_super->secs_per_zone);
total_sections = le32_to_cpu(raw_super->section_count);
/* blocks_per_seg should be 512, given the above check */
blocks_per_seg = 1 << le32_to_cpu(raw_super->log_blocks_per_seg);
if (segment_count > F2FS_MAX_SEGMENT ||
segment_count < F2FS_MIN_SEGMENTS) {
f2fs_msg(sb, KERN_INFO,
"Invalid segment count (%u)",
segment_count);
return 1;
}
if (total_sections > segment_count ||
total_sections < F2FS_MIN_SEGMENTS ||
segs_per_sec > segment_count || !segs_per_sec) {
f2fs_msg(sb, KERN_INFO,
"Invalid segment/section count (%u, %u x %u)",
segment_count, total_sections, segs_per_sec);
return 1;
}
if ((segment_count / segs_per_sec) < total_sections) {
f2fs_msg(sb, KERN_INFO,
"Small segment_count (%u < %u * %u)",
segment_count, segs_per_sec, total_sections);
return 1;
}
if (segment_count > (le32_to_cpu(raw_super->block_count) >> 9)) {
f2fs_msg(sb, KERN_INFO,
"Wrong segment_count / block_count (%u > %u)",
segment_count, le32_to_cpu(raw_super->block_count));
return 1;
}
if (secs_per_zone > total_sections) {
f2fs_msg(sb, KERN_INFO,
"Wrong secs_per_zone (%u > %u)",
secs_per_zone, total_sections);
return 1;
}
if (le32_to_cpu(raw_super->extension_count) > F2FS_MAX_EXTENSION ||
raw_super->hot_ext_count > F2FS_MAX_EXTENSION ||
(le32_to_cpu(raw_super->extension_count) +
raw_super->hot_ext_count) > F2FS_MAX_EXTENSION) {
f2fs_msg(sb, KERN_INFO,
"Corrupted extension count (%u + %u > %u)",
le32_to_cpu(raw_super->extension_count),
raw_super->hot_ext_count,
F2FS_MAX_EXTENSION);
return 1;
}
if (le32_to_cpu(raw_super->cp_payload) >
(blocks_per_seg - F2FS_CP_PACKS)) {
f2fs_msg(sb, KERN_INFO,
"Insane cp_payload (%u > %u)",
le32_to_cpu(raw_super->cp_payload),
blocks_per_seg - F2FS_CP_PACKS);
return 1;
}
	/* check reserved ino info */
	if (le32_to_cpu(raw_super->node_ino) != 1 ||
		le32_to_cpu(raw_super->meta_ino) != 2 ||
@@ -2197,13 +2268,6 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
		return 1;
	}

-	if (le32_to_cpu(raw_super->segment_count) > F2FS_MAX_SEGMENT) {
-		f2fs_msg(sb, KERN_INFO,
-			"Invalid segment count (%u)",
-			le32_to_cpu(raw_super->segment_count));
-		return 1;
-	}
-
	/* check CP/SIT/NAT/SSA/MAIN_AREA area boundary */
	if (sanity_check_area_boundary(sbi, bh))
		return 1;
@@ -2211,7 +2275,7 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
	return 0;
}

-int sanity_check_ckpt(struct f2fs_sb_info *sbi)
+int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
{
	unsigned int total, fsmeta;
	struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
@@ -2292,13 +2356,15 @@ static void init_sb_info(struct f2fs_sb_info *sbi)
	for (i = 0; i < NR_COUNT_TYPE; i++)
		atomic_set(&sbi->nr_pages[i], 0);

-	atomic_set(&sbi->wb_sync_req, 0);
+	for (i = 0; i < META; i++)
+		atomic_set(&sbi->wb_sync_req[i], 0);

	INIT_LIST_HEAD(&sbi->s_list);
	mutex_init(&sbi->umount_mutex);
	for (i = 0; i < NR_PAGE_TYPE - 1; i++)
		for (j = HOT; j < NR_TEMP_TYPE; j++)
			mutex_init(&sbi->wio_mutex[i][j]);
+	init_rwsem(&sbi->io_order_lock);
	spin_lock_init(&sbi->cp_lock);

	sbi->dirty_device = 0;
@@ -2759,7 +2825,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
		goto free_io_dummy;
	}

-	err = get_valid_checkpoint(sbi);
+	err = f2fs_get_valid_checkpoint(sbi);
	if (err) {
		f2fs_msg(sb, KERN_ERR, "Failed to get valid F2FS checkpoint");
		goto free_meta_inode;
@@ -2789,18 +2855,18 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
		spin_lock_init(&sbi->inode_lock[i]);
	}

-	init_extent_cache_info(sbi);
+	f2fs_init_extent_cache_info(sbi);

-	init_ino_entry_info(sbi);
+	f2fs_init_ino_entry_info(sbi);

	/* setup f2fs internal modules */
-	err = build_segment_manager(sbi);
+	err = f2fs_build_segment_manager(sbi);
	if (err) {
		f2fs_msg(sb, KERN_ERR,
			"Failed to initialize F2FS segment manager");
		goto free_sm;
	}
-	err = build_node_manager(sbi);
+	err = f2fs_build_node_manager(sbi);
	if (err) {
		f2fs_msg(sb, KERN_ERR,
			"Failed to initialize F2FS node manager");
@@ -2818,7 +2884,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
	sbi->kbytes_written =
		le64_to_cpu(seg_i->journal->info.kbytes_written);

-	build_gc_manager(sbi);
+	f2fs_build_gc_manager(sbi);

	/* get an inode for node space */
	sbi->node_inode = f2fs_iget(sb, F2FS_NODE_INO(sbi));
@@ -2870,7 +2936,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
	}
#endif
	/* if there are nt orphan nodes free them */
-	err = recover_orphan_inodes(sbi);
+	err = f2fs_recover_orphan_inodes(sbi);
	if (err)
		goto free_meta;
@@ -2892,7 +2958,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
		if (!retry)
			goto skip_recovery;

-		err = recover_fsync_data(sbi, false);
+		err = f2fs_recover_fsync_data(sbi, false);
		if (err < 0) {
			need_fsck = true;
			f2fs_msg(sb, KERN_ERR,
@@ -2900,7 +2966,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
			goto free_meta;
		}
	} else {
-		err = recover_fsync_data(sbi, true);
+		err = f2fs_recover_fsync_data(sbi, true);

		if (!f2fs_readonly(sb) && err > 0) {
			err = -EINVAL;
@@ -2910,7 +2976,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
		}
	}
skip_recovery:
-	/* recover_fsync_data() cleared this already */
+	/* f2fs_recover_fsync_data() cleared this already */
	clear_sbi_flag(sbi, SBI_POR_DOING);

	/*
@@ -2919,7 +2985,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
	 */
	if (test_opt(sbi, BG_GC) && !f2fs_readonly(sb)) {
		/* After POR, we can run background GC thread.*/
-		err = start_gc_thread(sbi);
+		err = f2fs_start_gc_thread(sbi);
		if (err)
			goto free_meta;
	}
@@ -2950,10 +3016,10 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
#endif
	f2fs_sync_inode_meta(sbi);
	/*
-	 * Some dirty meta pages can be produced by recover_orphan_inodes()
+	 * Some dirty meta pages can be produced by f2fs_recover_orphan_inodes()
	 * failed by EIO. Then, iput(node_inode) can trigger balance_fs_bg()
-	 * followed by write_checkpoint() through f2fs_write_node_pages(), which
-	 * falls into an infinite loop in sync_meta_pages().
+	 * followed by f2fs_write_checkpoint() through f2fs_write_node_pages(), which
+	 * falls into an infinite loop in f2fs_sync_meta_pages().
	 */
	truncate_inode_pages_final(META_MAPPING(sbi));
#ifdef CONFIG_QUOTA
@@ -2966,13 +3032,13 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
free_stats:
	f2fs_destroy_stats(sbi);
free_node_inode:
-	release_ino_entry(sbi, true);
+	f2fs_release_ino_entry(sbi, true);
	truncate_inode_pages_final(NODE_MAPPING(sbi));
	iput(sbi->node_inode);
free_nm:
-	destroy_node_manager(sbi);
+	f2fs_destroy_node_manager(sbi);
free_sm:
-	destroy_segment_manager(sbi);
+	f2fs_destroy_segment_manager(sbi);
free_devices:
	destroy_device_list(sbi);
	kfree(sbi->ckpt);
@@ -3018,8 +3084,8 @@ static void kill_f2fs_super(struct super_block *sb)
{
	if (sb->s_root) {
		set_sbi_flag(F2FS_SB(sb), SBI_IS_CLOSE);
-		stop_gc_thread(F2FS_SB(sb));
-		stop_discard_thread(F2FS_SB(sb));
+		f2fs_stop_gc_thread(F2FS_SB(sb));
+		f2fs_stop_discard_thread(F2FS_SB(sb));
	}
	kill_block_super(sb);
}
@@ -3057,21 +3123,27 @@ static int __init init_f2fs_fs(void)
{
	int err;

+	if (PAGE_SIZE != F2FS_BLKSIZE) {
+		printk("F2FS not supported on PAGE_SIZE(%lu) != %d\n",
+				PAGE_SIZE, F2FS_BLKSIZE);
+		return -EINVAL;
+	}
+
	f2fs_build_trace_ios();

	err = init_inodecache();
	if (err)
		goto fail;
-	err = create_node_manager_caches();
+	err = f2fs_create_node_manager_caches();
	if (err)
		goto free_inodecache;
-	err = create_segment_manager_caches();
+	err = f2fs_create_segment_manager_caches();
	if (err)
		goto free_node_manager_caches;
-	err = create_checkpoint_caches();
+	err = f2fs_create_checkpoint_caches();
	if (err)
		goto free_segment_manager_caches;
-	err = create_extent_cache();
+	err = f2fs_create_extent_cache();
	if (err)
		goto free_checkpoint_caches;
	err = f2fs_init_sysfs();
@@ -3086,8 +3158,13 @@ static int __init init_f2fs_fs(void)
	err = f2fs_create_root_stats();
	if (err)
		goto free_filesystem;
+	err = f2fs_init_post_read_processing();
+	if (err)
+		goto free_root_stats;
	return 0;

+free_root_stats:
+	f2fs_destroy_root_stats();
free_filesystem:
	unregister_filesystem(&f2fs_fs_type);
free_shrinker:
@@ -3095,13 +3172,13 @@ static int __init init_f2fs_fs(void)
free_sysfs:
	f2fs_exit_sysfs();
free_extent_cache:
-	destroy_extent_cache();
+	f2fs_destroy_extent_cache();
free_checkpoint_caches:
-	destroy_checkpoint_caches();
+	f2fs_destroy_checkpoint_caches();
free_segment_manager_caches:
-	destroy_segment_manager_caches();
+	f2fs_destroy_segment_manager_caches();
free_node_manager_caches:
-	destroy_node_manager_caches();
+	f2fs_destroy_node_manager_caches();
free_inodecache:
	destroy_inodecache();
fail:
@@ -3110,14 +3187,15 @@ static int __init init_f2fs_fs(void)
static void __exit exit_f2fs_fs(void)
{
+	f2fs_destroy_post_read_processing();
	f2fs_destroy_root_stats();
	unregister_filesystem(&f2fs_fs_type);
	unregister_shrinker(&f2fs_shrinker_info);
	f2fs_exit_sysfs();
-	destroy_extent_cache();
-	destroy_checkpoint_caches();
-	destroy_segment_manager_caches();
-	destroy_node_manager_caches();
+	f2fs_destroy_extent_cache();
+	f2fs_destroy_checkpoint_caches();
+	f2fs_destroy_segment_manager_caches();
+	f2fs_destroy_node_manager_caches();
	destroy_inodecache();
	f2fs_destroy_trace_ios();
}
...
@@ -147,13 +147,13 @@ static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
		int len = 0, i;

		len += snprintf(buf + len, PAGE_SIZE - len,
-						"cold file extenstion:\n");
+						"cold file extension:\n");
		for (i = 0; i < cold_count; i++)
			len += snprintf(buf + len, PAGE_SIZE - len, "%s\n",
								extlist[i]);

		len += snprintf(buf + len, PAGE_SIZE - len,
-						"hot file extenstion:\n");
+						"hot file extension:\n");
		for (i = cold_count; i < cold_count + hot_count; i++)
			len += snprintf(buf + len, PAGE_SIZE - len, "%s\n",
								extlist[i]);
@@ -165,7 +165,7 @@ static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
	return snprintf(buf, PAGE_SIZE, "%u\n", *ui);
}

-static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
+static ssize_t __sbi_store(struct f2fs_attr *a,
			struct f2fs_sb_info *sbi,
			const char *buf, size_t count)
{
@@ -201,13 +201,13 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
		down_write(&sbi->sb_lock);

-		ret = update_extension_list(sbi, name, hot, set);
+		ret = f2fs_update_extension_list(sbi, name, hot, set);
		if (ret)
			goto out;

		ret = f2fs_commit_super(sbi, false);
		if (ret)
-			update_extension_list(sbi, name, hot, !set);
+			f2fs_update_extension_list(sbi, name, hot, !set);
out:
		up_write(&sbi->sb_lock);
		return ret ? ret : count;
@@ -245,19 +245,56 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
		return count;
	}

+	if (!strcmp(a->attr.name, "trim_sections"))
+		return -EINVAL;
+
+	if (!strcmp(a->attr.name, "gc_urgent")) {
+		if (t >= 1) {
+			sbi->gc_mode = GC_URGENT;
+			if (sbi->gc_thread) {
+				wake_up_interruptible_all(
+					&sbi->gc_thread->gc_wait_queue_head);
+				wake_up_discard_thread(sbi, true);
+			}
+		} else {
+			sbi->gc_mode = GC_NORMAL;
+		}
+		return count;
+	}
+	if (!strcmp(a->attr.name, "gc_idle")) {
+		if (t == GC_IDLE_CB)
+			sbi->gc_mode = GC_IDLE_CB;
+		else if (t == GC_IDLE_GREEDY)
+			sbi->gc_mode = GC_IDLE_GREEDY;
+		else
+			sbi->gc_mode = GC_NORMAL;
+		return count;
+	}
+
	*ui = t;

	if (!strcmp(a->attr.name, "iostat_enable") && *ui == 0)
		f2fs_reset_iostat(sbi);
-	if (!strcmp(a->attr.name, "gc_urgent") && t == 1 && sbi->gc_thread) {
-		sbi->gc_thread->gc_wake = 1;
-		wake_up_interruptible_all(&sbi->gc_thread->gc_wait_queue_head);
-		wake_up_discard_thread(sbi, true);
-	}
	return count;
}

+static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
+			struct f2fs_sb_info *sbi,
+			const char *buf, size_t count)
+{
+	ssize_t ret;
+	bool gc_entry = (!strcmp(a->attr.name, "gc_urgent") ||
+					a->struct_type == GC_THREAD);
+
+	if (gc_entry)
+		down_read(&sbi->sb->s_umount);
+	ret = __sbi_store(a, sbi, buf, count);
+	if (gc_entry)
+		up_read(&sbi->sb->s_umount);
+	return ret;
+}
+
static ssize_t f2fs_attr_show(struct kobject *kobj,
				struct attribute *attr, char *buf)
{
@@ -346,8 +383,8 @@ F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_urgent_sleep_time,
F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_min_sleep_time, min_sleep_time);
F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_max_sleep_time, max_sleep_time);
F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_no_gc_sleep_time, no_gc_sleep_time);
-F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_idle, gc_idle);
-F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_urgent, gc_urgent);
+F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_idle, gc_mode);
+F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_urgent, gc_mode);
F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, reclaim_segments, rec_prefree_segments);
F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, max_small_discards, max_discards);
F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, discard_granularity, discard_granularity);
...
@@ -252,7 +252,7 @@ static int read_inline_xattr(struct inode *inode, struct page *ipage,
	if (ipage) {
		inline_addr = inline_xattr_addr(inode, ipage);
	} else {
-		page = get_node_page(sbi, inode->i_ino);
+		page = f2fs_get_node_page(sbi, inode->i_ino);
		if (IS_ERR(page))
			return PTR_ERR(page);
@@ -273,7 +273,7 @@ static int read_xattr_block(struct inode *inode, void *txattr_addr)
	void *xattr_addr;

	/* The inode already has an extended attribute block. */
-	xpage = get_node_page(sbi, xnid);
+	xpage = f2fs_get_node_page(sbi, xnid);
	if (IS_ERR(xpage))
		return PTR_ERR(xpage);
@@ -397,7 +397,7 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
	int err = 0;

	if (hsize > inline_size && !F2FS_I(inode)->i_xattr_nid)
-		if (!alloc_nid(sbi, &new_nid))
+		if (!f2fs_alloc_nid(sbi, &new_nid))
			return -ENOSPC;

	/* write to inline xattr */
@@ -405,9 +405,9 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
	if (ipage) {
		inline_addr = inline_xattr_addr(inode, ipage);
	} else {
-		in_page = get_node_page(sbi, inode->i_ino);
+		in_page = f2fs_get_node_page(sbi, inode->i_ino);
		if (IS_ERR(in_page)) {
-			alloc_nid_failed(sbi, new_nid);
+			f2fs_alloc_nid_failed(sbi, new_nid);
			return PTR_ERR(in_page);
		}
		inline_addr = inline_xattr_addr(inode, in_page);
@@ -417,8 +417,8 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
							NODE, true);
	/* no need to use xattr node block */
	if (hsize <= inline_size) {
-		err = truncate_xattr_node(inode);
-		alloc_nid_failed(sbi, new_nid);
+		err = f2fs_truncate_xattr_node(inode);
+		f2fs_alloc_nid_failed(sbi, new_nid);
		if (err) {
			f2fs_put_page(in_page, 1);
			return err;
@@ -431,10 +431,10 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
	/* write to xattr node block */
	if (F2FS_I(inode)->i_xattr_nid) {
-		xpage = get_node_page(sbi, F2FS_I(inode)->i_xattr_nid);
+		xpage = f2fs_get_node_page(sbi, F2FS_I(inode)->i_xattr_nid);
		if (IS_ERR(xpage)) {
			err = PTR_ERR(xpage);
-			alloc_nid_failed(sbi, new_nid);
+			f2fs_alloc_nid_failed(sbi, new_nid);
			goto in_page_out;
		}
		f2fs_bug_on(sbi, new_nid);
@@ -442,13 +442,13 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
	} else {
		struct dnode_of_data dn;
		set_new_dnode(&dn, inode, NULL, NULL, new_nid);
-		xpage = new_node_page(&dn, XATTR_NODE_OFFSET);
+		xpage = f2fs_new_node_page(&dn, XATTR_NODE_OFFSET);
		if (IS_ERR(xpage)) {
			err = PTR_ERR(xpage);
-			alloc_nid_failed(sbi, new_nid);
+			f2fs_alloc_nid_failed(sbi, new_nid);
			goto in_page_out;
		}
-		alloc_nid_done(sbi, new_nid);
+		f2fs_alloc_nid_done(sbi, new_nid);
	}

	xattr_addr = page_address(xpage);
@@ -693,7 +693,7 @@ int f2fs_setxattr(struct inode *inode, int index, const char *name,
	if (err)
		return err;

-	/* this case is only from init_inode_metadata */
+	/* this case is only from f2fs_init_inode_metadata */
	if (ipage)
		return __f2fs_setxattr(inode, index, name, value,
						size, ipage, flags);
...
@@ -25,6 +25,10 @@ static inline bool fscrypt_dummy_context_enabled(struct inode *inode)
}

/* crypto.c */
+static inline void fscrypt_enqueue_decrypt_work(struct work_struct *work)
+{
+}
+
static inline struct fscrypt_ctx *fscrypt_get_ctx(const struct inode *inode,
						gfp_t gfp_flags)
{
@@ -150,10 +154,13 @@ static inline bool fscrypt_match_name(const struct fscrypt_name *fname,
}

/* bio.c */
-static inline void fscrypt_decrypt_bio_pages(struct fscrypt_ctx *ctx,
-					struct bio *bio)
+static inline void fscrypt_decrypt_bio(struct bio *bio)
+{
+}
+
+static inline void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx,
+					struct bio *bio)
{
-	return;
}

static inline void fscrypt_pullback_bio_page(struct page **page, bool restore)
...
@@ -59,6 +59,7 @@ static inline bool fscrypt_dummy_context_enabled(struct inode *inode)
}

/* crypto.c */
+extern void fscrypt_enqueue_decrypt_work(struct work_struct *);
extern struct fscrypt_ctx *fscrypt_get_ctx(const struct inode *, gfp_t);
extern void fscrypt_release_ctx(struct fscrypt_ctx *);
extern struct page *fscrypt_encrypt_page(const struct inode *, struct page *,
@@ -174,7 +175,9 @@ static inline bool fscrypt_match_name(const struct fscrypt_name *fname,
}

/* bio.c */
-extern void fscrypt_decrypt_bio_pages(struct fscrypt_ctx *, struct bio *);
+extern void fscrypt_decrypt_bio(struct bio *);
+extern void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx,
+				struct bio *bio);
extern void fscrypt_pullback_bio_page(struct page **, bool);
extern int fscrypt_zeroout_range(const struct inode *, pgoff_t, sector_t,
				unsigned int);
...