1. 02 June 2015, 8 commits
    • writeback: implement WB_has_dirty_io wb_state flag · d6c10f1f
      Authored by Tejun Heo
      Currently, wb_has_dirty_io() determines whether a wb (bdi_writeback)
      has any dirty inode by testing all three IO lists on each invocation
      without actively keeping track.  For cgroup writeback support, a
      single bdi will host multiple wb's each of which will host dirty
      inodes separately and we'll need to make bdi_has_dirty_io(), which
      currently only represents the root wb, aggregate has_dirty_io from all
      member wb's, which requires tracking transitions in has_dirty_io state
      on each wb.
      
      This patch introduces inode_wb_list_{move|del}_locked() to consolidate
      IO list operations leaving queue_io() the only other function which
      directly manipulates IO lists (via move_expired_inodes()).  All three
      functions are updated to call wb_io_lists_[de]populated() which keep
      track of whether the wb has dirty inodes or not and record it using
      the new WB_has_dirty_io flag.  inode_wb_list_move_locked()'s return
      value indicates whether the wb had no dirty inodes before.
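
      In sketch form (the helper names are from the patch; the bodies here
      are illustrative rather than the exact kernel code):

        static bool wb_io_lists_populated(struct bdi_writeback *wb)
        {
                if (test_bit(WB_has_dirty_io, &wb->state))
                        return false;           /* already had dirty IO */
                set_bit(WB_has_dirty_io, &wb->state);
                return true;                    /* 0 -> 1 transition: wb was idle */
        }

        static void wb_io_lists_depopulated(struct bdi_writeback *wb)
        {
                /* clear the flag only once all three IO lists are empty */
                if (wb_has_dirty_io(wb) && list_empty(&wb->b_dirty) &&
                    list_empty(&wb->b_io) && list_empty(&wb->b_more_io))
                        clear_bit(WB_has_dirty_io, &wb->state);
        }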
      
      mark_inode_dirty() is restructured so that the return value of
      inode_wb_list_move_locked() can be used for deciding whether to wake
      up the wb.
      
      While at it, change {bdi|wb}_has_dirty_io()'s return values to bool.
      These functions were returning 0 and 1 before.  Also, add a comment
      explaining the synchronization of wb_state flags.
      
      v2: Updated to accommodate b_dirty_time.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • writeback: implement and use inode_congested() · 703c2708
      Authored by Tejun Heo
      In several places, bdi_congested() and its wrappers are used to
      determine whether more IOs should be issued.  With cgroup writeback
      support, this question can't be answered solely based on the bdi
      (backing_dev_info).  It's dependent on whether the filesystem and bdi
      support cgroup writeback and the blkcg the inode is associated with.
      
      This patch implements inode_congested() and its wrappers, which take
      @inode and determine the congestion state considering cgroup
      writeback.  The new functions replace bdi_*congested() calls in places
      where the query is about a specific inode and task.
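
      A sketch of the shape of the new helpers (the wrapper names appear in
      the patch; the details are illustrative):

        /* answer congestion for the wb serving @inode, not the whole bdi */
        int inode_congested(struct inode *inode, int cong_bits);

        static inline int inode_read_congested(struct inode *inode)
        {
                return inode_congested(inode, 1 << WB_sync_congested);
        }

        static inline int inode_write_congested(struct inode *inode)
        {
                return inode_congested(inode, 1 << WB_async_congested);
        }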
      
      There are several filesystem users which also fit these criteria but
      they should be updated when each filesystem implements cgroup
      writeback support.
      
      v2: Now that a given inode is associated with only one wb, congestion
          state can be determined independent from the asking task.  Drop
          @task.  Spotted by Vivek.  Also, converted to take @inode instead
          of @mapping and renamed to inode_congested().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • writeback: make backing_dev_info host cgroup-specific bdi_writebacks · 52ebea74
      Authored by Tejun Heo
      For the planned cgroup writeback support, on each bdi
      (backing_dev_info), each memcg will be served by a separate wb
      (bdi_writeback).  This patch updates bdi so that a bdi can host
      multiple wbs (bdi_writebacks).
      
      On the default hierarchy, blkcg implicitly enables memcg.  This allows
      using memcg's page ownership for attributing writeback IOs, and every
      memcg - blkcg combination can be served by its own wb by assigning a
      dedicated wb to each memcg.  This means that there may be multiple
      wb's of a bdi mapped to the same blkcg.  As congested state is per
      blkcg - bdi combination, those wb's should share the same congested
      state.  This is achieved by tracking congested state via
      bdi_writeback_congested structs which are keyed by blkcg.
      
      bdi->wb remains unchanged and will keep serving the root cgroup.
      cgwb's (cgroup wb's) for non-root cgroups are created on-demand or
      looked up while dirtying an inode according to the memcg of the page
      being dirtied or current task.  Each cgwb is indexed on bdi->cgwb_tree
      by its memcg id.  Once an inode is associated with its wb, it can be
      retrieved using inode_to_wb().
      
      Currently, none of the filesystems has FS_CGROUP_WRITEBACK and all
      pages will keep being associated with bdi->wb.
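
      Roughly, the per-memcg lookup described above might look like the
      following sketch (cgwb_create() here is a stand-in name for the
      on-demand creation path, not necessarily the exact function):

        /* find (or create) the cgwb serving @memcg_css on @bdi */
        wb = radix_tree_lookup(&bdi->cgwb_tree, memcg_css->id);
        if (!wb)
                wb = cgwb_create(bdi, memcg_css, gfp);  /* created on demand */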
      
      v3: inode_attach_wb() in account_page_dirtied() moved inside
          mapping_cap_account_dirty() block where it's known to be !NULL.
          Also, an unnecessary NULL check before kfree() removed.  Both
          detected by the kbuild bot.
      
      v2: Updated so that wb association is per inode and wb is per memcg
          rather than blkcg.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: kbuild test robot <fengguang.wu@intel.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • bdi: make inode_to_bdi() inline · a212b105
      Authored by Tejun Heo
      Now that bdi definitions are moved to backing-dev-defs.h,
      backing-dev.h can include blkdev.h and inline inode_to_bdi() without
      worrying about introducing a circular include dependency.  The
      function gets called from hot paths and is fairly trivial.

      This patch makes inode_to_bdi() and the sb_is_blkdev_sb() it calls
      inline.  blockdev_superblock and noop_backing_dev_info are
      EXPORT_GPL'd to allow the inline functions to be used from modules.
      
      While at it, make sb_is_blkdev_sb() return bool instead of int.
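
      A sketch of the inlined function's post-patch shape (illustrative,
      including the NULL punt added by the earlier b520252a fix below):

        static inline struct backing_dev_info *inode_to_bdi(struct inode *inode)
        {
                struct super_block *sb;

                if (!inode)
                        return &noop_backing_dev_info;

                sb = inode->i_sb;
        #ifdef CONFIG_BLOCK
                if (sb_is_blkdev_sb(sb))
                        return blk_get_backing_dev_info(I_BDEV(inode));
        #endif
                return sb->s_bdi;
        }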
      
      v2: Fixed typo in description as suggested by Jan.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Jens Axboe <axboe@kernel.dk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • writeback: move backing_dev_info->wb_lock and ->worklist into bdi_writeback · f0054bb1
      Authored by Tejun Heo
      Currently, a bdi (backing_dev_info) embeds a single wb (bdi_writeback)
      and the role of the separation is unclear.  For cgroup support for
      writeback IOs, a bdi will be updated to host multiple wb's where each
      wb serves writeback IOs of a different cgroup on the bdi.  To achieve
      that, a wb should carry all states necessary for servicing writeback
      IOs for a cgroup independently.
      
      This patch moves bdi->wb_lock and ->worklist into wb.
      
      * The lock protects bdi->worklist and bdi->wb.dwork scheduling.  While
        moving, rename it to wb->work_lock as wb->wb_lock is confusing.
        Also, move wb->dwork downwards so that it's colocated with the new
        ->work_lock and ->work_list fields.
      
      * bdi_writeback_workfn()		-> wb_workfn()
        bdi_wakeup_thread_delayed(bdi)	-> wb_wakeup_delayed(wb)
        bdi_wakeup_thread(bdi)		-> wb_wakeup(wb)
        bdi_queue_work(bdi, ...)		-> wb_queue_work(wb, ...)
        __bdi_start_writeback(bdi, ...)	-> __wb_start_writeback(wb, ...)
        get_next_work_item(bdi)		-> get_next_work_item(wb)
      
      * bdi_wb_shutdown() is renamed to wb_shutdown() and now takes @wb.
        The function contained parts which belong to the containing bdi
        rather than the wb itself - testing cap_writeback_dirty and
        bdi_remove_from_list() invocation.  Those are moved to
        bdi_unregister().
      
      * bdi_wb_{init|exit}() are renamed to wb_{init|exit}().
        Initializations of the moved bdi->wb_lock and ->work_list are
        relocated from bdi_init() to wb_init().
      
      * As there's still only one bdi_writeback per backing_dev_info, all
        uses of bdi->state are mechanically replaced with bdi->wb.state
        introducing no behavior changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • writeback: move bandwidth related fields from backing_dev_info into bdi_writeback · a88a341a
      Authored by Tejun Heo
      Currently, a bdi (backing_dev_info) embeds a single wb (bdi_writeback)
      and the role of the separation is unclear.  For cgroup support for
      writeback IOs, a bdi will be updated to host multiple wb's where each
      wb serves writeback IOs of a different cgroup on the bdi.  To achieve
      that, a wb should carry all states necessary for servicing writeback
      IOs for a cgroup independently.
      
      This patch moves bandwidth related fields from backing_dev_info into
      bdi_writeback.
      
      * The moved fields are: bw_time_stamp, dirtied_stamp, written_stamp,
        write_bandwidth, avg_write_bandwidth, dirty_ratelimit,
        balanced_dirty_ratelimit, completions and dirty_exceeded.
      
      * writeback_chunk_size() and over_bground_thresh() now take @wb
        instead of @bdi.
      
      * bdi_writeout_fraction(bdi, ...)	-> wb_writeout_fraction(wb, ...)
        bdi_dirty_limit(bdi, ...)		-> wb_dirty_limit(wb, ...)
        bdi_position_ratio(bdi, ...)		-> wb_position_ratio(wb, ...)
        bdi_update_write_bandwidth(bdi, ...)	-> wb_update_write_bandwidth(wb, ...)
        [__]bdi_update_bandwidth(bdi, ...)	-> [__]wb_update_bandwidth(wb, ...)
        bdi_{max|min}_pause(bdi, ...)		-> wb_{max|min}_pause(wb, ...)
        bdi_dirty_limits(bdi, ...)		-> wb_dirty_limits(wb, ...)
      
      * Init/exits of the relocated fields are moved to bdi_wb_init/exit()
        respectively.  Note that explicit zeroing is dropped in the process
        as wb's are cleared in entirety anyway.
      
      * As there's still only one bdi_writeback per backing_dev_info, all
        uses of bdi->stat[] are mechanically replaced with bdi->wb.stat[]
        introducing no behavior changes.
      
      v2: Typo in description fixed as suggested by Jan.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Jaegeuk Kim <jaegeuk@kernel.org>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • writeback: move backing_dev_info->bdi_stat[] into bdi_writeback · 93f78d88
      Authored by Tejun Heo
      Currently, a bdi (backing_dev_info) embeds a single wb (bdi_writeback)
      and the role of the separation is unclear.  For cgroup support for
      writeback IOs, a bdi will be updated to host multiple wb's where each
      wb serves writeback IOs of a different cgroup on the bdi.  To achieve
      that, a wb should carry all states necessary for servicing writeback
      IOs for a cgroup independently.
      
      This patch moves bdi->bdi_stat[] into wb.
      
      * enum bdi_stat_item is renamed to wb_stat_item and the prefix of all
        enums is changed from BDI_ to WB_.
      
      * BDI_STAT_BATCH() -> WB_STAT_BATCH()
      
      * [__]{add|inc|dec|sum}_bdi_stat(bdi, ...) -> [__]{add|inc|dec|sum}_wb_stat(wb, ...)
      
      * bdi_stat[_error]() -> wb_stat[_error]()
      
      * bdi_writeout_inc() -> wb_writeout_inc()
      
      * stat init is moved to bdi_wb_init() and bdi_wb_exit() is added and
        frees stat.
      
      * As there's still only one bdi_writeback per backing_dev_info, all
        uses of bdi->stat[] are mechanically replaced with bdi->wb.stat[]
        introducing no behavior changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Miklos Szeredi <miklos@szeredi.hu>
      Cc: Trond Myklebust <trond.myklebust@primarydata.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • writeback: move backing_dev_info->state into bdi_writeback · 4452226e
      Authored by Tejun Heo
      Currently, a bdi (backing_dev_info) embeds a single wb (bdi_writeback)
      and the role of the separation is unclear.  For cgroup support for
      writeback IOs, a bdi will be updated to host multiple wb's where each
      wb serves writeback IOs of a different cgroup on the bdi.  To achieve
      that, a wb should carry all states necessary for servicing writeback
      IOs for a cgroup independently.
      
      This patch moves bdi->state into wb.
      
      * enum bdi_state is renamed to wb_state and the prefix of all enums is
        changed from BDI_ to WB_.
      
      * Explicit zeroing of bdi->state is removed without adding zeroing of
        wb->state as the whole data structure is zeroed on init anyway.
      
      * As there's still only one bdi_writeback per backing_dev_info, all
        uses of bdi->state are mechanically replaced with bdi->wb.state
        introducing no behavior changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: drbd-dev@lists.linbit.com
      Cc: Neil Brown <neilb@suse.de>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  2. 18 March 2015, 2 commits
    • fs: add dirtytime_expire_seconds sysctl · 1efff914
      Authored by Theodore Ts'o
      Add a tuning knob so we can adjust the dirtytime expiration timeout,
      which is very useful for testing lazytime.
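
      For example (assuming the knob sits under /proc/sys/vm/ alongside the
      other writeback tunables):

        # expire dirtytime inodes after 60 seconds instead of the default
        echo 60 > /proc/sys/vm/dirtytime_expire_seconds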
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Jan Kara <jack@suse.cz>
    • fs: make sure the timestamps for lazytime inodes eventually get written · a2f48706
      Authored by Theodore Ts'o
      Jan Kara pointed out that if there is an inode which is constantly
      getting dirtied with I_DIRTY_PAGES, an inode with an updated timestamp
      will never be written since inode->dirtied_when is constantly getting
      updated.  We fix this by adding an extra field to the inode,
      dirtied_time_when, so inodes with a stale dirtytime can get detected
      and handled.
      
      In addition, if we have a dirtytime inode caused by an atime update,
      and there is no write activity on the file system, we need to have a
      secondary system to make sure these inodes get written out.  We do
      this by setting up a second delayed work structure which wakes up the
      CPU much more rarely compared to writeback_expire_centisecs.
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Jan Kara <jack@suse.cz>
  3. 23 February 2015, 1 commit
    • trylock_super(): replacement for grab_super_passive() · eb6ef3df
      Authored by Konstantin Khlebnikov
      I've noticed significant locking contention in the memory reclaimer
      around sb_lock inside grab_super_passive().  grab_super_passive() is
      called from two places: the icache/dcache shrinkers (via
      super_cache_scan) and writeback (via __writeback_inodes_wb).  Both
      are required for progress in the memory allocator.
      
      grab_super_passive() acquires sb_lock to increment sb->s_count and
      check sb->s_instances.  It seems sb->s_umount locked for read is
      enough here: super-block deactivation always runs under sb->s_umount
      locked for write.  Protecting the super-block itself isn't a problem:
      in super_cache_scan() the sb is protected by shrinker_rwsem; it cannot
      be freed while its slab shrinkers are still active.  Inside writeback,
      the super-block comes from an inode on the bdi writeback list under
      wb->list_lock.
      
      This patch removes the sb_lock locking and checks s_instances under
      s_umount instead: generic_shutdown_super() unlinks it under
      sb->s_umount locked for write.  The new variant is called
      trylock_super(), and since it only takes the semaphore, callers must
      call up_read(&sb->s_umount) instead of drop_super(sb) when they're
      done.
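
      A sketch of the new helper, modelled on the description above (flag
      and field names may differ slightly by kernel version):

        bool trylock_super(struct super_block *sb)
        {
                if (down_read_trylock(&sb->s_umount)) {
                        if (!hlist_unhashed(&sb->s_instances) &&
                            sb->s_root && (sb->s_flags & MS_BORN))
                                return true;    /* sb alive and read-locked */
                        up_read(&sb->s_umount);
                }
                return false;
        }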
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  4. 05 February 2015, 1 commit
    • vfs: add support for a lazytime mount option · 0ae45f63
      Authored by Theodore Ts'o
      Add a new mount option which enables a new "lazytime" mode.  This mode
      causes atime, mtime, and ctime updates to only be made to the
      in-memory version of the inode.  The on-disk times will only get
      updated when (a) the inode needs to be updated for some non-time
      related change, (b) userspace calls fsync(), syncfs() or sync(), or
      (c) just before an undeleted inode is evicted from memory.
      
      This is OK according to POSIX because there are no guarantees after a
      crash unless userspace explicitly requests them via an fsync(2) call.
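
      Typical usage:

        mount -o lazytime /dev/sda1 /mnt
        # or enable it on an already-mounted filesystem:
        mount -o remount,lazytime /mnt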
      
      For workloads which feature a large number of random writes to a
      preallocated file, the lazytime mount option significantly reduces
      writes to the inode table.  The repeated 4k writes to a single block
      will result in undesirable stress on flash devices and SMR disk
      drives.  Even on conventional HDD's, the repeated writes to the inode
      table block will trigger Adjacent Track Interference (ATI) remediation
      latencies, which very negatively impact long tail latencies --- which
      is a very big deal for web serving tiers (for example).
      
      Google-Bug-Id: 18297052
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  5. 22 January 2015, 1 commit
    • fs: make inode_to_bdi() handle NULL inode · b520252a
      Authored by Jens Axboe
      Running a heavy fs workload, I ran into a situation where we pass
      down a page for writeback/swap that doesn't have an inode mapping:
      
      BUG: unable to handle kernel NULL pointer dereference at 0000000000000028
      IP: [<ffffffff8119589f>] inode_to_bdi+0xf/0x50
      PGD 0
      Oops: 0000 [#1] PREEMPT SMP
      Modules linked in: wl(O) tun cfg80211 btusb joydev hid_apple hid_generic usbhid hid bcm5974 usb_storage nouveau snd_hda_codec_hdmi snd_hda_codec_cirrus snd_hda_codec_generic x86_pkg_temp_thermal snd_hda_intel kvm_intel snd_hda_controller snd_hda_codec kvm snd_hwdep snd_pcm applesmc input_polldev snd_seq_midi snd_seq_midi_event snd_rawmidi snd_seq snd_timer snd_seq_device snd xhci_pci xhci_hcd ttm thunderbolt soundcore apple_gmux apple_bl bluetooth binfmt_misc fuse nls_iso8859_1 nls_cp437 vfat fat [last unloaded: wl]
      CPU: 4 PID: 50 Comm: kswapd0 Tainted: G     U     O   3.19.0-rc5+ #60
      Hardware name: Apple Inc. MacBookPro11,3/Mac-2BD1B31983FE1663, BIOS MBP112.88Z.0138.B02.1310181745 10/18/2013
      task: ffff880462e917f0 ti: ffff880462edc000 task.ti: ffff880462edc000
      RIP: 0010:[<ffffffff8119589f>]  [<ffffffff8119589f>] inode_to_bdi+0xf/0x50
      RSP: 0000:ffff880462edf8e8  EFLAGS: 00010282
      RAX: ffffffff81c4cd80 RBX: ffffea0001b3abc0 RCX: 0000000000000000
      RDX: 0000000000000001 RSI: 0000000000000000 RDI: 0000000000000000
      RBP: ffff880462edf8f8 R08: 00000000001e8500 R09: ffff880460f7cb68
      R10: ffff880462edfa00 R11: 0000000000000101 R12: 0000000000000000
      R13: ffffffff81c4cd98 R14: 0000000000000000 R15: ffff880460f7c9c0
      FS:  0000000000000000(0000) GS:ffff88047f300000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 0000000000000028 CR3: 00000002b6341000 CR4: 00000000001407e0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      Stack:
       ffffea0001b3abc0 ffffffff81c4cd80 ffff880462edf948 ffffffff811244aa
       ffffffff811565b0 ffff880460f7c9c0 ffff880462edf948 ffffea0001b3abc0
       0000000000000001 ffff880462edfb40 ffff880008b999c0 ffff880460f7c9c0
      Call Trace:
       [<ffffffff811244aa>] __test_set_page_writeback+0x3a/0x170
       [<ffffffff811565b0>] ? SyS_madvise+0x790/0x790
       [<ffffffff81156bb6>] __swap_writepage+0x216/0x280
       [<ffffffff8133d592>] ? radix_tree_insert+0x32/0xe0
       [<ffffffff81157741>] ? swap_info_get+0x61/0xf0
       [<ffffffff81159bfc>] ? page_swapcount+0x4c/0x60
       [<ffffffff81156c4d>] swap_writepage+0x2d/0x50
       [<ffffffff81131658>] shmem_writepage+0x198/0x2c0
       [<ffffffff8112cae4>] shrink_page_list+0x464/0xa00
       [<ffffffff8112d666>] shrink_inactive_list+0x266/0x500
       [<ffffffff8112e215>] shrink_lruvec+0x5d5/0x720
       [<ffffffff8112e3bb>] shrink_zone+0x5b/0x190
       [<ffffffff8112ee3f>] kswapd+0x48f/0x8d0
       [<ffffffff8112e9b0>] ? try_to_free_pages+0x4c0/0x4c0
       [<ffffffff81067be2>] kthread+0xd2/0xf0
       [<ffffffff81060000>] ? workqueue_congested+0x30/0x80
       [<ffffffff81067b10>] ? kthread_create_on_node+0x180/0x180
       [<ffffffff816b556c>] ret_from_fork+0x7c/0xb0
       [<ffffffff81067b10>] ? kthread_create_on_node+0x180/0x180
      Code: 00 48 c7 c7 8d 8d a4 81 e8 3f 62 eb ff e9 fc fe ff ff 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 41 54 49 89 fc 53 <48> 8b 5f 28 48 89 df e8 15 f8 00 00 85 c0 75 11 48 8b 83 d8 00
      RIP  [<ffffffff8119589f>] inode_to_bdi+0xf/0x50
       RSP <ffff880462edf8e8>
      CR2: 0000000000000028
      ---[ end trace eb0e21aa7dad3ddf ]---
      
      Handle this in inode_to_bdi() by punting it to noop_backing_dev_info,
      if mapping->host is NULL.
      Signed-off-by: Jens Axboe <axboe@fb.com>
  6. 21 January 2015, 2 commits
  7. 05 November 2014, 1 commit
    • writeback: fix a subtle race condition in I_DIRTY clearing · 9c6ac78e
      Authored by Tejun Heo
      After invoking ->dirty_inode(), __mark_inode_dirty() does smp_mb() and
      tests inode->i_state locklessly to see whether it already has all the
      necessary I_DIRTY bits set.  The comment above the barrier doesn't
      contain any useful information - a memory barrier can't ensure that
      "changes are seen by all cpus" by itself.
      
      And it sure enough was broken.  Please consider the following
      scenario.
      
       CPU 0					CPU 1
       -------------------------------------------------------------------------------
      
      					enters __writeback_single_inode()
      					grabs inode->i_lock
      					tests PAGECACHE_TAG_DIRTY which is clear
       enters __set_page_dirty()
       grabs mapping->tree_lock
       sets PAGECACHE_TAG_DIRTY
       releases mapping->tree_lock
       leaves __set_page_dirty()
      
       enters __mark_inode_dirty()
       smp_mb()
       sees I_DIRTY_PAGES set
       leaves __mark_inode_dirty()
      					clears I_DIRTY_PAGES
      					releases inode->i_lock
      
      Now @inode has dirty pages w/ I_DIRTY_PAGES clear.  This doesn't seem
      to lead to an immediately critical problem because requeue_inode()
      later checks PAGECACHE_TAG_DIRTY instead of I_DIRTY_PAGES when
      deciding whether the inode needs to be requeued for IO and there are
      enough unintentional memory barriers in between, so while the inode
      ends up with inconsistent I_DIRTY_PAGES flag, it doesn't fall off the
      IO list.
      
      The lack of explicit barrier may also theoretically affect the other
      I_DIRTY bits which deal with metadata dirtiness.  There is no
      guarantee that a strong enough barrier exists between
      I_DIRTY_[DATA]SYNC clearing and write_inode() writing out the dirtied
      inode.  Filesystem inode writeout path likely has enough stuff which
      can behave as full barrier but it's theoretically possible that the
      writeout may not see all the updates from ->dirty_inode().
      
      Fix it by adding an explicit smp_mb() after I_DIRTY clearing.  Note
      that I_DIRTY_PAGES needs a special treatment as it always needs to be
      cleared to be interlocked with the lockless test on
      __mark_inode_dirty() side.  It's cleared unconditionally and
      reinstated after smp_mb() if the mapping still has dirty pages.
      
      Also add comments explaining how and why the barriers are paired.
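
      The fix, in sketch form (the exact code lives in
      __writeback_single_inode()):

        /* writer side, under inode->i_lock */
        inode->i_state &= ~I_DIRTY;     /* I_DIRTY_PAGES always cleared */
        smp_mb();                       /* pairs with smp_mb() in __mark_inode_dirty() */
        if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
                inode->i_state |= I_DIRTY_PAGES;  /* reinstate if still dirty */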
      
      Lightly tested.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Mikulas Patocka <mpatocka@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: stable@vger.kernel.org
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  8. 16 July 2014, 1 commit
    • sched: Remove proliferation of wait_on_bit() action functions · 74316201
      Authored by NeilBrown
      The current "wait_on_bit" interface requires an 'action'
      function to be provided which does the actual waiting.
      There are over 20 such functions, many of them identical.
      Most cases can be satisfied by one of just two functions, one
      which uses io_schedule() and one which just uses schedule().
      
      So:
       Rename wait_on_bit and        wait_on_bit_lock to
              wait_on_bit_action and wait_on_bit_lock_action
       to make it explicit that they need an action function.
      
       Introduce new wait_on_bit{,_lock} and wait_on_bit{,_lock}_io
       which are *not* given an action function but implicitly use
       a standard one.
       The decision to error-out if a signal is pending is now made
       based on the 'mode' argument rather than being encoded in the action
       function.
      
       All instances of the old wait_on_bit and wait_on_bit_lock which
       can use the new version have been changed accordingly and their
       action functions have been discarded.
       wait_on_bit{_lock} does not return any specific error code in the
       event of a signal so the caller must check for non-zero and
       interpolate their own error code as appropriate.
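
       In sketch form, a conversion looks like this (names illustrative):

        /* old: caller had to pass an action function */
        wait_on_bit(&word, bit, my_wait_action, TASK_UNINTERRUPTIBLE);

        /* new: the mode alone selects the standard schedule()-based wait */
        wait_on_bit(&word, bit, TASK_UNINTERRUPTIBLE);

        /* interruptible waits report a pending signal via a non-zero return */
        if (wait_on_bit(&word, bit, TASK_INTERRUPTIBLE))
                return -ERESTARTSYS;    /* caller picks its own error code */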
      
      The wait_on_bit() call in __fscache_wait_on_invalidate() was
      ambiguous as it specified TASK_UNINTERRUPTIBLE but used
      fscache_wait_bit_interruptible as an action function.
      David Howells confirms this should be uniformly
      "uninterruptible"
      
      The main remaining user of wait_on_bit{,_lock}_action is NFS
      which needs to use a freezer-aware schedule() call.
      
      A comment in fs/gfs2/glock.c notes that having multiple 'action'
      functions is useful as they display differently in the 'wchan'
      field of 'ps'. (and /proc/$PID/wchan).
      As the new bit_wait{,_io} functions are tagged "__sched", they
      will not show up at all, but something higher in the stack.  So
      the distinction will still be visible, only with different
      function names (gfs2_glock_wait versus gfs2_glock_dq_wait in the
      gfs2/glock.c case).
      
      Since the first version of this patch (against 3.15) two new action
      functions appeared, one in NFS and one in CIFS.  CIFS also now
      uses an action function that makes the same freezer aware
      schedule call as NFS.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Acked-by: David Howells <dhowells@redhat.com> (fscache, keys)
      Acked-by: Steven Whitehouse <swhiteho@redhat.com> (gfs2)
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Steve French <sfrench@samba.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20140707051603.28027.72349.stgit@notabene.brown
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  9. 04 April 2014, 2 commits
    • bdi: avoid oops on device removal · 5acda9d1
      Authored by Jan Kara
      After commit 839a8e86 ("writeback: replace custom worker pool
      implementation with unbound workqueue") when device is removed while we
      are writing to it we crash in bdi_writeback_workfn() ->
      set_worker_desc() because bdi->dev is NULL.
      
      This can happen because even though bdi_unregister() cancels all pending
      flushing work, nothing really prevents new ones from being queued from
      balance_dirty_pages() or other places.
      
      Fix the problem by clearing BDI_registered bit in bdi_unregister() and
      checking it before scheduling of any flushing work.
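
      In sketch form, the guarded scheduling looks like this (the lock used
      around the check may differ in the actual patch):

        spin_lock_bh(&bdi->wb_lock);
        if (test_bit(BDI_registered, &bdi->state))  /* cleared by bdi_unregister() */
                queue_delayed_work(bdi_wq, &bdi->wb.dwork, timeout);
        spin_unlock_bh(&bdi->wb_lock);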
      
      Fixes: 839a8e86
      Reviewed-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Derek Basehore <dbasehore@chromium.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • backing_dev: fix hung task on sync · 6ca738d6
      Authored by Derek Basehore
      bdi_wakeup_thread_delayed() used the mod_delayed_work() function to
      schedule work to writeback dirty inodes.  The problem with this is that
      it can delay work that is scheduled for immediate execution, such as the
      work from sync_inodes_sb().  This can happen since mod_delayed_work()
      can now steal work from a work_queue.  This fixes the problem by using
      queue_delayed_work() instead.  This is a regression caused by commit
      839a8e86 ("writeback: replace custom worker pool implementation with
      unbound workqueue").
      
      The reason that this causes a problem is that laptop-mode will change
      the delay, dirty_writeback_centisecs, to 60000 (10 minutes) by default.
      In the case that bdi_wakeup_thread_delayed() races with
      sync_inodes_sb(), sync will be stopped for 10 minutes and trigger a hung
      task.  Even if dirty_writeback_centisecs is not long enough to cause a
      hung task, we still don't want to delay sync for that long.
      
      We fix the problem by using queue_delayed_work() when we want to
      schedule writeback sometime in future.  This function doesn't change the
      timer if it is already armed.
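
      The change, in sketch form:

        /* before: could postpone already-queued immediate work */
        mod_delayed_work(bdi_wq, &bdi->wb.dwork, timeout);

        /* after: a no-op if the work is already pending */
        queue_delayed_work(bdi_wq, &bdi->wb.dwork, timeout);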
      
      For the same reason, we also change bdi_writeback_workfn() to
      immediately queue the work again in the case that the work_list is not
      empty.  The same problem can happen if the sync work is run on the
      rescue worker.
      
      [jack@suse.cz: update changelog, add comment, use bdi_wakeup_thread_delayed()]
      Signed-off-by: Derek Basehore <dbasehore@chromium.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Alexander Viro <viro@zento.linux.org.uk>
      Reviewed-by: Tejun Heo <tj@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
      Cc: Derek Basehore <dbasehore@chromium.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Benson Leung <bleung@chromium.org>
      Cc: Sonny Rao <sonnyrao@chromium.org>
      Cc: Luigi Semenzato <semenzato@chromium.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 22 February 2014, 1 commit
    • Revert "writeback: do not sync data dirtied after sync start" · 0dc83bd3
      Authored by Jan Kara
      This reverts commit c4a391b5. Dave
      Chinner <david@fromorbit.com> has reported the commit may cause some
      inodes to be left out from sync(2). This is because we can call
      redirty_tail() for some inode (which sets i_dirtied_when to current time)
      after sync(2) has started or similarly requeue_inode() can set
      i_dirtied_when to current time if writeback had to skip some pages. The
      real problem is in the functions clobbering i_dirtied_when but fixing
      that isn't trivial so revert is a safer choice for now.
      
      CC: stable@vger.kernel.org # >= 3.13
      Signed-off-by: Jan Kara <jack@suse.cz>
  11. 06 February 2014, 1 commit
    • GFS2: journal data writepages update · 774016b2
      Authored by Steven Whitehouse
      GFS2 has carried what is more or less a copy of
      write_cache_pages() for some time. It seems that this
      copy has slipped behind the core code over time. This
      patch brings it back up to date, and in addition adds the
      tracepoint which would otherwise be missing.
      
      We could go further, and eliminate some or all of the
      code duplication here. The issue is that if we do that,
      then the function we need to split out from the existing
      write_cache_pages(), which will look a lot like
      gfs2_jdata_write_pagevec(), would land up putting quite a
      lot of extra variables on the stack. I know that has been
      a problem in the past in the writeback code path, which
      is why I've hesitated to do it here.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  12. 14 December 2013, 1 commit
    • writeback: Fix data corruption on NFS · f9b0e058
      Authored by Jan Kara
      Commit 4f8ad655 "writeback: Refactor writeback_single_inode()" added
      a condition to skip clean inodes.  However this is wrong in WB_SYNC_ALL
      mode, because there we also want to wait for outstanding writeback on a
      possibly clean inode.  This was causing occasional data corruption
      issues on NFS, because it uses sync_inode() to make sure all
      outstanding writes are flushed to the server before truncating the
      inode, and with sync_inode() returning prematurely the file was
      sometimes extended back by an outstanding write after it was truncated.
      
      So modify the test to also check for pages under writeback in
      WB_SYNC_ALL mode.
      
      CC: stable@vger.kernel.org # >= 3.5
      Fixes: 4f8ad655
      Reported-and-tested-by: Dan Duval <dan.duval@oracle.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
  13. 13 November 2013, 1 commit
    • writeback: do not sync data dirtied after sync start · c4a391b5
      Authored by Jan Kara
      When there are processes heavily creating small files while sync(2) is
      running, it can easily happen that quite some new files are created
      between WB_SYNC_NONE and WB_SYNC_ALL pass of sync(2).  That can happen
      especially if there are several busy filesystems (remember that sync
      traverses filesystems sequentially and waits in WB_SYNC_ALL phase on one
      fs before starting it on another fs).  Because WB_SYNC_ALL pass is slow
      (e.g.  causes a transaction commit and cache flush for each inode in
      ext3), resulting sync(2) times are rather large.
      
      The following script reproduces the problem:
      
        function run_writers
        {
          for (( i = 0; i < 10; i++ )); do
            mkdir $1/dir$i
            for (( j = 0; j < 40000; j++ )); do
              dd if=/dev/zero of=$1/dir$i/$j bs=4k count=4 &>/dev/null
            done &
          done
        }
      
        for dir in "$@"; do
          run_writers $dir
        done
      
        sleep 40
        time sync
      
      Fix the problem by disregarding inodes dirtied after sync(2) was called
      in the WB_SYNC_ALL pass.  To allow for this, sync_inodes_sb() now takes
      a time stamp when sync has started which is used for setting up work for
      flusher threads.
      
      To give some numbers, when above script is run on two ext4 filesystems
      on simple SATA drive, the average sync time from 10 runs is 267.549
      seconds with standard deviation 104.799426.  With the patched kernel,
      the average sync time from 10 runs is 2.995 seconds with standard
      deviation 0.096.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Fengguang Wu <fengguang.wu@intel.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 25 October 2013, 1 commit
  15. 12 September 2013, 3 commits
    • writeback: fix race that cause writeback hung · 146d7009
      Authored by Junxiao Bi
      There is a race between marking an inode dirty and the writeback
      thread; see the following scenario.  In this case, the writeback
      thread will not run even though there is dirty_io.
      
      __mark_inode_dirty()                                          bdi_writeback_workfn()
      	...                                                       	...
      	spin_lock(&inode->i_lock);
      	...
      	if (bdi_cap_writeback_dirty(bdi)) {
      	    <<< assume wb has dirty_io, so wakeup_bdi is false.
      	    <<< the following inode_dirty also have wakeup_bdi false.
      	    if (!wb_has_dirty_io(&bdi->wb))
      		    wakeup_bdi = true;
      	}
      	spin_unlock(&inode->i_lock);
      	                                                            <<< assume last dirty_io is removed here.
      	                                                            pages_written = wb_do_writeback(wb);
      	                                                            ...
      	                                                            <<< work_list empty and wb has no dirty_io,
      	                                                            <<< delayed_work will not be queued.
      	                                                            if (!list_empty(&bdi->work_list) ||
      	                                                                (wb_has_dirty_io(wb) && dirty_writeback_interval))
      	                                                                queue_delayed_work(bdi_wq, &wb->dwork,
      	                                                                    msecs_to_jiffies(dirty_writeback_interval * 10));
      	spin_lock(&bdi->wb.list_lock);
      	inode->dirtied_when = jiffies;
      	<<< new dirty_io is added.
      	list_move(&inode->i_wb_list, &bdi->wb.b_dirty);
      	spin_unlock(&bdi->wb.list_lock);
      
      	<<< though there is dirty_io, but wakeup_bdi is false,
      	<<< so writeback thread will not be waked up and
      	<<< the new dirty_io will not be flushed.
      	if (wakeup_bdi)
      	    bdi_wakeup_thread_delayed(bdi);
      
      Writeback will not run until there is a new flush work queued.  This
      may cause a lot of dirty pages to stay in memory for a long time.
      Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/writeback: make writeback_inodes_wb static · 7d9f073b
      Authored by Wanpeng Li
      It's not used globally and could be static.
      Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • writeback: fix occasional slow sync(1) · 47df3dde
      Authored by Jan Kara
      When the system contains no dirty pages, wakeup_flusher_threads() will
      submit WB_SYNC_NONE writeback for 0 pages, so wb_writeback() exits
      immediately without doing anything even though there are dirty inodes
      in the system.  Thus sync(1) writes all the dirty inodes from the
      WB_SYNC_ALL writeback pass, which is slow.
      
      Fix the problem by using get_nr_dirty_pages() in wakeup_flusher_threads()
      instead of calculating number of dirty pages manually.  That function also
      takes number of dirty inodes into account.
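
      A sketch of the fix:

        void wakeup_flusher_threads(long nr_pages, enum wb_reason reason)
        {
                if (!nr_pages)
                        nr_pages = get_nr_dirty_pages();  /* counts dirty inodes too */
                /* ... kick each bdi's flusher for nr_pages ... */
        }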
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reported-by: Paul Taysom <taysom@chromium.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  16. 10 July 2013, 2 commits
  17. 09 July 2013, 1 commit
    • writeback: Do not sort b_io list only because of block device inode · a8855990
      Authored by Jan Kara
      It is very likely that the block device inode will be part of the BDI
      dirty list as well.  However it doesn't make sense to sort inodes on
      the b_io list just because of this inode (as it contains buffers all
      over the device anyway).  So save some CPU cycles, which is valuable
      since we hold the relatively contended wb->list_lock.
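
      In sketch form, the sort decision in move_expired_inodes() simply
      ignores the block device superblock (illustrative):

        if (sb_is_blkdev_sb(inode->i_sb))
                continue;               /* don't let the blkdev inode force a sort */
        if (sb && sb != inode->i_sb)
                do_sb_sort = 1;         /* only real sb changes trigger sorting */
        sb = inode->i_sb;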
      Signed-off-by: Jan Kara <jack@suse.cz>
  18. 03 July 2013, 1 commit
    • sync: don't block the flusher thread waiting on IO · 7747bd4b
      Authored by Dave Chinner
      When sync does its WB_SYNC_ALL writeback, it issues data IO and
      then immediately waits for IO completion. This is done in the
      context of the flusher thread, and hence completely ties up the
      flusher thread for the backing device until all the dirty inodes
      have been synced. On filesystems that are dirtying inodes constantly
      and quickly, this means the flusher thread can be tied up for
      minutes per sync call and hence badly affect system level write IO
      performance as the page cache cannot be cleaned quickly.
      
      We already have a wait loop for IO completion for sync(2), so cut
      this out of the flusher thread and delegate it to wait_sb_inodes().
      Hence we can do rapid IO submission, and then wait for it all to
      complete.
      
      Effect of sync on fsmark before the patch:
      
      FSUse%        Count         Size    Files/sec     App Overhead
      .....
           0       640000         4096      35154.6          1026984
           0       720000         4096      36740.3          1023844
           0       800000         4096      36184.6           916599
           0       880000         4096       1282.7          1054367
           0       960000         4096       3951.3           918773
           0      1040000         4096      40646.2           996448
           0      1120000         4096      43610.1           895647
           0      1200000         4096      40333.1           921048
      
      And a single sync pass took:
      
        real    0m52.407s
        user    0m0.000s
        sys     0m0.090s
      
      After the patch, there is no impact on fsmark results, and each
      individual sync(2) operation run concurrently with the same fsmark
      workload takes roughly 7s:
      
        real    0m6.930s
        user    0m0.000s
        sys     0m0.039s
      
      IOWs, sync is 7-8x faster on a busy filesystem and does not have an
      adverse impact on ongoing async data write operations.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  19. 01 May 2013, 1 commit
    • writeback: set worker desc to identify writeback workers in task dumps · ef3b1019
      Authored by Tejun Heo
      Writeback has recently been converted to use a workqueue instead of
      its private thread pool implementation.  One negative side effect of
      this conversion is that there's no easy way to tell which backing
      device a writeback work item was working on at the time of a task
      dump, be it sysrq-t, BUG, WARN or whatever, which, according to our
      writeback brethren, is important in tracking down issues with a lot
      of mounted file systems on a lot of different devices.

      This patch restores that information using the new worker description
      facility.  bdi_writeback_workfn() calls set_worker_desc() to identify
      which bdi it's working on.  The description is printed out together
      with the workqueue name and worker function as in the following
      example dump.
      
       WARNING: at fs/fs-writeback.c:1015 bdi_writeback_workfn+0x2b4/0x3c0()
       Modules linked in:
       Pid: 28, comm: kworker/u18:0 Not tainted 3.9.0-rc1-work+ #24 empty empty/S3992
       Workqueue: writeback bdi_writeback_workfn (flush-8:16)
        ffffffff820a3a98 ffff88015b927cb8 ffffffff81c61855 ffff88015b927cf8
        ffffffff8108f500 0000000000000000 ffff88007a171948 ffff88007a1716b0
        ffff88015b49df00 ffff88015b8d3940 0000000000000000 ffff88015b927d08
       Call Trace:
        [<ffffffff81c61855>] dump_stack+0x19/0x1b
        [<ffffffff8108f500>] warn_slowpath_common+0x70/0xa0
        [<ffffffff8108f54a>] warn_slowpath_null+0x1a/0x20
        [<ffffffff81200144>] bdi_writeback_workfn+0x2b4/0x3c0
        [<ffffffff810b4c87>] process_one_work+0x1d7/0x660
        [<ffffffff810b5c72>] worker_thread+0x122/0x380
        [<ffffffff810bdfea>] kthread+0xea/0xf0
        [<ffffffff81c6cedc>] ret_from_fork+0x7c/0xb0
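
      The hook itself is essentially a one-liner in bdi_writeback_workfn()
      (a sketch):

        set_worker_desc("flush-%s", dev_name(bdi->dev));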
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  20. 02 April 2013, 1 commit
    • writeback: replace custom worker pool implementation with unbound workqueue · 839a8e86
      Authored by Tejun Heo
      Writeback implements its own worker pool - each bdi can be associated
      with a worker thread which is created and destroyed dynamically.  The
      worker thread for the default bdi is always present and serves as the
      "forker" thread which forks off worker threads for other bdis.
      
      There's no reason for writeback to implement its own worker pool when
      using an unbound workqueue instead is much simpler and more efficient.
      This patch replaces the custom worker pool implementation in writeback
      with an unbound workqueue.

      The conversion isn't too complicated, but the following points are
      worth mentioning (a sketch of the resulting queueing calls follows
      the list).
      
      * bdi_writeback->last_active, task and wakeup_timer are removed.
        delayed_work ->dwork is added instead.  Explicit timer handling is
        no longer necessary.  Everything works by either queueing / modding
        / flushing / canceling the delayed_work item.
      
      * bdi_writeback_thread() becomes bdi_writeback_workfn() which runs off
        bdi_writeback->dwork.  On each execution, it processes
        bdi->work_list and reschedules itself if there are more things to
        do.
      
        The function also handles low-mem condition, which used to be
        handled by the forker thread.  If the function is running off a
        rescuer thread, it only writes out limited number of pages so that
        the rescuer can serve other bdis too.  This preserves the flusher
        creation failure behavior of the forker thread.
      
      * INIT_LIST_HEAD(&bdi->bdi_list) is used to tell
        bdi_writeback_workfn() about on-going bdi unregistration so that it
        always drains work_list even if it's running off the rescuer.  Note
        that the original code was broken in this regard.  Under memory
        pressure, a bdi could finish unregistration with non-empty
        work_list.
      
      * The default bdi is no longer special.  It now is treated the same as
        any other bdi and bdi_cap_flush_forker() is removed.
      
      * BDI_pending is no longer used.  Removed.
      
      * Some tracepoints become non-applicable.  The following TPs are
        removed - writeback_nothread, writeback_wake_thread,
        writeback_wake_forker_thread, writeback_thread_start,
        writeback_thread_stop.
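
      A sketch of the resulting setup and queueing calls (illustrative;
      exact flags per the patch):

        /* one unbound, memory-reclaim-safe workqueue replaces the pool */
        bdi_wq = alloc_workqueue("writeback", WQ_MEM_RECLAIM | WQ_FREEZABLE |
                                 WQ_UNBOUND | WQ_SYSFS, 0);

        /* "waking the flusher" is now just (re)queueing its delayed work */
        mod_delayed_work(bdi_wq, &bdi->wb.dwork, 0);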
      
      Everything, including devices coming and going away and rescuer
      operation under simulated memory pressure, seems to work fine in my
      test setup.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Jeff Moyer <jmoyer@redhat.com>
  21. 14 January 2013, 1 commit
    • writeback: add more tracepoints · 9fb0a7da
      Authored by Tejun Heo
      Add tracepoints for page dirtying, writeback_single_inode start, inode
      dirtying and writeback.  For the latter two inode events, a pair of
      events are defined to denote start and end of the operations (the
      starting one has _start suffix and the one w/o suffix happens after
      the operation is complete).  These inode ops are FS specific and can
      be non-trivial and having enclosing tracepoints is useful for external
      tracers.
      
      This is part of tracepoint additions to improve visibility into
      dirtying / writeback operations for io tracer and userland.
      
      v2: writeback_dirty_inode[_start] TPs may be called for files on
          pseudo FSes w/ unregistered bdi.  Check whether bdi->dev is %NULL
          before dereferencing.
      
      v3: buffer dirtying moved to a block TP.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  22. 12 January 2013, 1 commit
    • vfs: re-implement writeback_inodes_sb(_nr)_if_idle() and rename them · 10ee27a0
      Authored by Miao Xie
      writeback_inodes_sb(_nr)_if_idle() is re-implemented by replacing down_read()
      with down_read_trylock() because
      
      - If ->s_umount is write locked, then the sb is not idle. That is
        writeback_inodes_sb(_nr)_if_idle() needn't wait for the lock.
      
      - writeback_inodes_sb(_nr)_if_idle() grabs the s_umount lock when it
        wants to start writeback, which may cause a deadlock when doing
        umount. In order to fix the problem, ext4 and btrfs implemented
        their own writeback functions instead of
        writeback_inodes_sb(_nr)_if_idle(), but that introduced redundant
        code; it is better to implement a new
        writeback_inodes_sb(_nr)_if_idle().
      
      The name of these two functions is cumbersome, so rename them to
      try_to_writeback_inodes_sb(_nr).
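
      A sketch of the renamed helper, per the description above (the real
      function may additionally bail out early if writeback is already in
      progress on the bdi):

        int try_to_writeback_inodes_sb_nr(struct super_block *sb,
                                          unsigned long nr,
                                          enum wb_reason reason)
        {
                if (!down_read_trylock(&sb->s_umount))
                        return 0;       /* sb busy (e.g. unmounting): not idle */
                writeback_inodes_sb_nr(sb, nr, reason);
                up_read(&sb->s_umount);
                return 1;
        }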
      
      This idea came from Christoph Hellwig.
      Some code is from the patch of Kamal Mostafa.
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
  23. 13 December 2012, 1 commit
  24. 27 November 2012, 1 commit
    • writeback: put unused inodes to LRU after writeback completion · 4eff96dd
      Authored by Jan Kara
      Commit 169ebd90 ("writeback: Avoid iput() from flusher thread")
      removed iget-iput pair from inode writeback.  As a side effect, inodes
      that are dirty during iput_final() call won't be ever added to inode LRU
      (iput_final() doesn't add dirty inodes to LRU and later when the inode
      is cleaned there's no one to add the inode there).  Thus inodes are
      effectively unreclaimable until someone looks them up again.
      
      The practical effect of this bug is limited by the fact that inodes are
      pinned by a dentry for long enough that the inode gets cleaned.  But
      still the bug can have nasty consequences leading up to OOM conditions
      under certain circumstances.  Following can easily reproduce the
      problem:
      
        for (( i = 0; i < 1000; i++ )); do
          mkdir $i
          for (( j = 0; j < 1000; j++ )); do
            touch $i/$j
            echo 2 > /proc/sys/vm/drop_caches
          done
        done
      
      then one needs to run 'sync; ls -lR' to make inodes reclaimable again.
      
      We fix the issue by inserting unused clean inodes into the LRU after
      writeback finishes in inode_sync_complete().
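
      In sketch form:

        static void inode_sync_complete(struct inode *inode)
        {
                inode->i_state &= ~I_SYNC;
                /* if the inode is clean and unused, put it on the LRU now */
                inode_add_lru(inode);
                /* waiters must see I_SYNC cleared before being woken up */
                smp_mb();
                wake_up_bit(&inode->i_state, __I_SYNC);
        }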
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reported-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: <stable@vger.kernel.org>		[3.5+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  25. 09 October 2012, 1 commit
  26. 21 September 2012, 1 commit
  27. 20 September 2012, 1 commit
    • ext4: fix potential deadlock in ext4_nonda_switch() · 00d4e736
      Authored by Theodore Ts'o
      In ext4_nonda_switch(), if the file system is getting full we used to
      call writeback_inodes_sb_if_idle().  The problem is that we can be
      holding i_mutex already, and this causes a potential deadlock in
      writeback_inodes_sb_if_idle() when it tries to take s_umount.  (See
      lockdep output below).
      
      As it turns out we don't need to hold s_umount; the fact that we
      are in the middle of the write(2) system call will keep the superblock
      pinned.  Unfortunately writeback_inodes_sb() checks to make sure
      s_umount is taken, and the VFS uses a different mechanism for making
      sure the file system doesn't get unmounted out from under us.  The
      simplest way of dealing with this is to just simply grab s_umount
      using a trylock, and skip kicking the writeback flusher thread in the
      very unlikely case that we can't take a read lock on s_umount without
      blocking.
      
      Also, we now check the criteria for kicking the writeback thread
      before we decide whether to fall back to non-delayed writeback, so
      if there are any outstanding delayed allocation writes, we try to get
      them resolved as soon as possible.
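
      A sketch of the resulting kick in ext4_nonda_switch() (illustrative;
      assuming the WB_REASON_FS_FREE_SPACE reason used for such kicks):

        /* kick the flusher only if s_umount can be taken without blocking;
         * we may already hold i_mutex, so a blocking down_read() could
         * deadlock */
        if (down_read_trylock(&sb->s_umount)) {
                writeback_inodes_sb(sb, WB_REASON_FS_FREE_SPACE);
                up_read(&sb->s_umount);
        }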
      
         [ INFO: possible circular locking dependency detected ]
         3.6.0-rc1-00042-gce894ca #367 Not tainted
         -------------------------------------------------------
         dd/8298 is trying to acquire lock:
          (&type->s_umount_key#18){++++..}, at: [<c02277d4>] writeback_inodes_sb_if_idle+0x28/0x46
      
         but task is already holding lock:
          (&sb->s_type->i_mutex_key#8){+.+...}, at: [<c01ddcce>] generic_file_aio_write+0x5f/0xd3
      
         which lock already depends on the new lock.
      
         2 locks held by dd/8298:
          #0:  (sb_writers#2){.+.+.+}, at: [<c01ddcc5>] generic_file_aio_write+0x56/0xd3
          #1:  (&sb->s_type->i_mutex_key#8){+.+...}, at: [<c01ddcce>] generic_file_aio_write+0x5f/0xd3
      
         stack backtrace:
         Pid: 8298, comm: dd Not tainted 3.6.0-rc1-00042-gce894ca #367
         Call Trace:
          [<c015b79c>] ? console_unlock+0x345/0x372
          [<c06d62a1>] print_circular_bug+0x190/0x19d
          [<c019906c>] __lock_acquire+0x86d/0xb6c
          [<c01999db>] ? mark_held_locks+0x5c/0x7b
          [<c0199724>] lock_acquire+0x66/0xb9
          [<c02277d4>] ? writeback_inodes_sb_if_idle+0x28/0x46
          [<c06db935>] down_read+0x28/0x58
          [<c02277d4>] ? writeback_inodes_sb_if_idle+0x28/0x46
          [<c02277d4>] writeback_inodes_sb_if_idle+0x28/0x46
          [<c026f3b2>] ext4_nonda_switch+0xe1/0xf4
          [<c0271ece>] ext4_da_write_begin+0x27/0x193
          [<c01dcdb0>] generic_file_buffered_write+0xc8/0x1bb
          [<c01ddc47>] __generic_file_aio_write+0x1dd/0x205
          [<c01ddce7>] generic_file_aio_write+0x78/0xd3
          [<c026d336>] ext4_file_write+0x480/0x4a6
          [<c0198c1d>] ? __lock_acquire+0x41e/0xb6c
          [<c0180944>] ? sched_clock_cpu+0x11a/0x13e
          [<c01967e9>] ? trace_hardirqs_off+0xb/0xd
          [<c018099f>] ? local_clock+0x37/0x4e
          [<c0209f2c>] do_sync_write+0x67/0x9d
          [<c0209ec5>] ? wait_on_retry_sync_kiocb+0x44/0x44
          [<c020a7b9>] vfs_write+0x7b/0xe6
          [<c020a9a6>] sys_write+0x3b/0x64
          [<c06dd4bd>] syscall_call+0x7/0xb
      Signed-off-by: N"Theodore Ts'o" <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      00d4e736