1. 25 Feb 2017, 1 commit
  2. 29 Jul 2016, 2 commits
  3. 27 Jul 2016, 1 commit
  4. 09 Mar 2016, 1 commit
    • tracing, writeback: Replace cgroup path to cgroup ino · a664edb3
      Committed by Yang Shi
      commit 5634cc2a ("writeback: update writeback
      tracepoints to report cgroup") made the writeback tracepoints print out
      the cgroup path when CGROUP_WRITEBACK is enabled, but it may trigger the
      bug below on an -rt kernel, since kernfs_path and kernfs_path_len are
      called by tracepoints and acquire a spinlock that is sleepable on -rt
      kernels.
      
      BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:930
      in_atomic(): 1, irqs_disabled(): 0, pid: 625, name: kworker/u16:3
      INFO: lockdep is turned off.
      Preemption disabled at:[<ffffffc000374a5c>] wb_writeback+0xec/0x830
      
      CPU: 7 PID: 625 Comm: kworker/u16:3 Not tainted 4.4.1-rt5 #20
      Hardware name: Freescale Layerscape 2085a RDB Board (DT)
      Workqueue: writeback wb_workfn (flush-7:0)
      Call trace:
      [<ffffffc00008d708>] dump_backtrace+0x0/0x200
      [<ffffffc00008d92c>] show_stack+0x24/0x30
      [<ffffffc0007b0f40>] dump_stack+0x88/0xa8
      [<ffffffc000127d74>] ___might_sleep+0x2ec/0x300
      [<ffffffc000d5d550>] rt_spin_lock+0x38/0xb8
      [<ffffffc0003e0548>] kernfs_path_len+0x30/0x90
      [<ffffffc00036b360>] trace_event_raw_event_writeback_work_class+0xe8/0x2e8
      [<ffffffc000374f90>] wb_writeback+0x620/0x830
      [<ffffffc000376224>] wb_workfn+0x61c/0x950
      [<ffffffc000110adc>] process_one_work+0x3ac/0xb30
      [<ffffffc0001112fc>] worker_thread+0x9c/0x7a8
      [<ffffffc00011a9e8>] kthread+0x190/0x1b0
      [<ffffffc000086ca0>] ret_from_fork+0x10/0x30
      
      With unlocked kernfs_* functions, synchronize_sched() would have to be
      called in kernfs_rename, which can be reached from the syscall path, and
      that is problematic.  So print out the cgroup ino instead of the path
      name; the ino can be converted back to a path name by userland.
      
      Without CGROUP_WRITEBACK enabled, the tracepoint just prints the root
      dir.  But the root dir ino varies across filesystems, so -1U is printed
      to indicate an invalid cgroup ino.
      
      Link: http://lkml.kernel.org/r/1456996137-8354-1-git-send-email-yang.shi@linaro.org
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  5. 19 Aug 2015, 1 commit
    • writeback: update writeback tracepoints to report cgroup · 5634cc2a
      Committed by Tejun Heo
      The following tracepoints are updated to report the cgroup used during
      cgroup writeback.
      
      * writeback_write_inode[_start]
      * writeback_queue
      * writeback_exec
      * writeback_start
      * writeback_written
      * writeback_wait
      * writeback_nowork
      * writeback_wake_background
      * wbc_writepage
      * writeback_queue_io
      * bdi_dirty_ratelimit
      * balance_dirty_pages
      * writeback_sb_inodes_requeue
      * writeback_single_inode[_start]
      
      Note that writeback_bdi_register is separated out from writeback_class,
      as reporting a cgroup doesn't make sense for it.  Tracepoints which take
      a bdi are updated to take a bdi_writeback instead.
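      
      A hedged userland example of consuming these tracepoints, assuming
      tracefs is mounted at /sys/kernel/debug/tracing; the path follows the
      usual events/writeback/<event>/enable layout, and writeback_written is
      one of the tracepoints listed above:
      
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>
        
        int main(void)
        {
            /* Flip the per-event enable switch for writeback_written. */
            int fd = open("/sys/kernel/debug/tracing/events/writeback/"
                          "writeback_written/enable", O_WRONLY);
            if (fd < 0) { perror("open"); return 1; }
            if (write(fd, "1", 1) != 1) { perror("write"); close(fd); return 1; }
            close(fd);
            /* Enabled events now stream into .../tracing/trace_pipe. */
            return 0;
        }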
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Suggested-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  6. 02 Jun 2015, 2 commits
    • writeback: move global_dirty_limit into wb_domain · dcc25ae7
      Committed by Tejun Heo
      This patch is part of the series that defines wb_domain, which
      represents a domain that wb's (bdi_writeback's) belong to and are
      measured against each other in.  This will enable IO backpressure
      propagation for cgroup writeback.
      
      global_dirty_limit exists to regulate the global dirty threshold which
      is a property of the wb_domain.  This patch moves hard_dirty_limit,
      dirty_lock, and update_time into wb_domain.
      
      This is pure reorganization and doesn't introduce any behavioral
      changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • writeback: move bandwidth related fields from backing_dev_info into bdi_writeback · a88a341a
      Committed by Tejun Heo
      Currently, a bdi (backing_dev_info) embeds a single wb (bdi_writeback)
      and the role of the separation is unclear.  For cgroup support for
      writeback IOs, a bdi will be updated to host multiple wb's, where each
      wb serves writeback IOs of a different cgroup on the bdi.  To achieve
      that, a wb should carry all the state necessary for servicing writeback
      IOs for a cgroup independently.
      
      This patch moves bandwidth related fields from backing_dev_info into
      bdi_writeback.
      
      * The moved fields are: bw_time_stamp, dirtied_stamp, written_stamp,
        write_bandwidth, avg_write_bandwidth, dirty_ratelimit,
        balanced_dirty_ratelimit, completions and dirty_exceeded.
      
      * writeback_chunk_size() and over_bground_thresh() now take @wb
        instead of @bdi.
      
      * bdi_writeout_fraction(bdi, ...)      -> wb_writeout_fraction(wb, ...)
        bdi_dirty_limit(bdi, ...)            -> wb_dirty_limit(wb, ...)
        bdi_position_ratio(bdi, ...)         -> wb_position_ratio(wb, ...)
        bdi_update_write_bandwidth(bdi, ...) -> wb_update_write_bandwidth(wb, ...)
        [__]bdi_update_bandwidth(bdi, ...)   -> [__]wb_update_bandwidth(wb, ...)
        bdi_{max|min}_pause(bdi, ...)        -> wb_{max|min}_pause(wb, ...)
        bdi_dirty_limits(bdi, ...)           -> wb_dirty_limits(wb, ...)
      
      * Init/exits of the relocated fields are moved to bdi_wb_init/exit()
        respectively.  Note that explicit zeroing is dropped in the process,
        as wb's are cleared in their entirety anyway.
      
      * As there's still only one bdi_writeback per backing_dev_info, all
        uses of bdi->stat[] are mechanically replaced with bdi->wb.stat[]
        introducing no behavior changes.
      
      v2: Typo in description fixed as suggested by Jan.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Jaegeuk Kim <jaegeuk@kernel.org>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  7. 29 May 2015, 1 commit
  8. 08 Apr 2015, 1 commit
  9. 05 Feb 2015, 1 commit
    • vfs: add support for a lazytime mount option · 0ae45f63
      Committed by Theodore Ts'o
      Add a new mount option which enables a new "lazytime" mode.  This mode
      causes atime, mtime, and ctime updates to be made only to the
      in-memory version of the inode.  The on-disk times will only get
      updated when (a) the inode needs to be updated for some non-time
      related change, (b) userspace calls fsync(), syncfs() or sync(), or
      (c) just before an undeleted inode is evicted from memory.
      
      This is OK according to POSIX, because there are no guarantees after a
      crash unless userspace explicitly requests them via an fsync(2) call.
      
      For workloads which feature a large number of random writes to a
      preallocated file, the lazytime mount option significantly reduces
      writes to the inode table.  The repeated 4k writes to a single block
      will result in undesirable stress on flash devices and SMR disk
      drives.  Even on conventional HDDs, the repeated writes to the inode
      table block will trigger Adjacent Track Interference (ATI) remediation
      latencies, which very negatively impact long-tail latencies, and that
      is a very big deal for web serving tiers, for example.
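      
      A hedged example of enabling the mode from a program via mount(2); the
      MS_LAZYTIME flag is what this series wires up, while the device, mount
      point, and filesystem below are placeholders (the shell equivalent is
      mount -o lazytime):
      
        #include <stdio.h>
        #include <sys/mount.h>
        
        #ifndef MS_LAZYTIME
        #define MS_LAZYTIME (1 << 25)   /* UAPI value; older libc headers lack it */
        #endif
        
        int main(void)
        {
            if (mount("/dev/sdb1", "/mnt/data", "ext4", MS_LAZYTIME, NULL) != 0) {
                perror("mount");
                return 1;
            }
            return 0;
        }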
      
      Google-Bug-Id: 18297052
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  10. 21 Jan 2015, 2 commits
  11. 07 Mar 2014, 1 commit
  12. 22 Feb 2014, 1 commit
    • Revert "writeback: do not sync data dirtied after sync start" · 0dc83bd3
      Committed by Jan Kara
      This reverts commit c4a391b5.  Dave
      Chinner <david@fromorbit.com> has reported that the commit may cause some
      inodes to be left out of sync(2).  This is because we can call
      redirty_tail() for some inode (which sets i_dirtied_when to the current
      time) after sync(2) has started, or similarly requeue_inode() can set
      i_dirtied_when to the current time if writeback had to skip some pages.
      The real problem is in the functions clobbering i_dirtied_when, but
      fixing that isn't trivial, so a revert is the safer choice for now.
      
      CC: stable@vger.kernel.org # >= 3.13
      Signed-off-by: Jan Kara <jack@suse.cz>
  13. 06 Feb 2014, 1 commit
    • GFS2: journal data writepages update · 774016b2
      Committed by Steven Whitehouse
      GFS2 has carried what is more or less a copy of write_cache_pages()
      for some time.  It seems that this copy has slipped behind the core
      code over time.  This patch brings it back up to date and, in addition,
      adds the tracepoint which would otherwise be missing.
      
      We could go further and eliminate some or all of the code duplication
      here.  The issue is that if we do that, then the function we need to
      split out from the existing write_cache_pages(), which would look a
      lot like gfs2_jdata_write_pagevec(), would end up putting quite a lot
      of extra variables on the stack.  I know that has been a problem in
      the past in the writeback code path, which is why I've hesitated to
      do it here.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  14. 13 Nov 2013, 1 commit
    • writeback: do not sync data dirtied after sync start · c4a391b5
      Committed by Jan Kara
      When there are processes heavily creating small files while sync(2) is
      running, it can easily happen that quite a few new files are created
      between the WB_SYNC_NONE and WB_SYNC_ALL passes of sync(2).  That can
      happen especially if there are several busy filesystems (remember that
      sync traverses filesystems sequentially and, in the WB_SYNC_ALL phase,
      waits on one fs before starting on another).  Because the WB_SYNC_ALL
      pass is slow (e.g. it causes a transaction commit and cache flush for
      each inode in ext3), the resulting sync(2) times are rather large.
      
      The following script reproduces the problem:
      
        function run_writers
        {
          for (( i = 0; i < 10; i++ )); do
            mkdir $1/dir$i
            for (( j = 0; j < 40000; j++ )); do
              dd if=/dev/zero of=$1/dir$i/$j bs=4k count=4 &>/dev/null
            done &
          done
        }
      
        for dir in "$@"; do
          run_writers $dir
        done
      
        sleep 40
        time sync
      
      Fix the problem by disregarding inodes dirtied after sync(2) was called
      in the WB_SYNC_ALL pass.  To allow for this, sync_inodes_sb() now takes
      a timestamp of when sync started, which is used when setting up work
      for the flusher threads.
      
      To give some numbers: when the above script is run on two ext4
      filesystems on a simple SATA drive, the average sync time from 10 runs
      is 267.549 seconds with a standard deviation of 104.799426.  With the
      patched kernel, the average sync time from 10 runs is 2.995 seconds
      with a standard deviation of 0.096.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Fengguang Wu <fengguang.wu@intel.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 02 Apr 2013, 1 commit
    • writeback: replace custom worker pool implementation with unbound workqueue · 839a8e86
      Committed by Tejun Heo
      Writeback implements its own worker pool - each bdi can be associated
      with a worker thread which is created and destroyed dynamically.  The
      worker thread for the default bdi is always present and serves as the
      "forker" thread which forks off worker threads for other bdis.
      
      There's no reason for writeback to implement its own worker pool when
      using an unbound workqueue instead is much simpler and more efficient.
      This patch replaces the custom worker pool implementation in writeback
      with an unbound workqueue.
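      
      A minimal, self-contained module sketch of the pattern this commit
      adopts; the demo_* names are illustrative and this is not the bdi code
      itself, just the unbound-workqueue plus delayed_work idiom it switches to:
      
        #include <linux/jiffies.h>
        #include <linux/module.h>
        #include <linux/workqueue.h>
        
        static struct workqueue_struct *demo_wq;
        static struct delayed_work demo_dwork;
        
        static void demo_workfn(struct work_struct *work)
        {
            pr_info("demo: work ran\n");
            /* A real workfn would re-arm itself here if more work remains. */
        }
        
        static int __init demo_init(void)
        {
            demo_wq = alloc_workqueue("demo_wb", WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
            if (!demo_wq)
                return -ENOMEM;
            INIT_DELAYED_WORK(&demo_dwork, demo_workfn);
            /* mod_delayed_work() replaces the old explicit wakeup_timer dance. */
            mod_delayed_work(demo_wq, &demo_dwork, HZ);
            return 0;
        }
        
        static void __exit demo_exit(void)
        {
            cancel_delayed_work_sync(&demo_dwork);
            destroy_workqueue(demo_wq);
        }
        
        module_init(demo_init);
        module_exit(demo_exit);
        MODULE_LICENSE("GPL");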
      
      The conversion isn't too complicated, but the following points are worth
      mentioning.
      
      * bdi_writeback->last_active, task and wakeup_timer are removed.
        delayed_work ->dwork is added instead.  Explicit timer handling is
        no longer necessary.  Everything works by either queueing / modding
        / flushing / canceling the delayed_work item.
      
      * bdi_writeback_thread() becomes bdi_writeback_workfn() which runs off
        bdi_writeback->dwork.  On each execution, it processes
        bdi->work_list and reschedules itself if there are more things to
        do.
      
        The function also handles the low-mem condition, which used to be
        handled by the forker thread.  If the function is running off a
        rescuer thread, it only writes out a limited number of pages so that
        the rescuer can serve other bdis too.  This preserves the flusher
        creation failure behavior of the forker thread.
      
      * INIT_LIST_HEAD(&bdi->bdi_list) is used to tell
        bdi_writeback_workfn() about ongoing bdi unregistration so that it
        always drains work_list even if it's running off the rescuer.  Note
        that the original code was broken in this regard.  Under memory
        pressure, a bdi could finish unregistration with a non-empty
        work_list.
      
      * The default bdi is no longer special.  It now is treated the same as
        any other bdi and bdi_cap_flush_forker() is removed.
      
      * BDI_pending is no longer used.  Removed.
      
      * Some tracepoints become non-applicable.  The following TPs are
        removed - writeback_nothread, writeback_wake_thread,
        writeback_wake_forker_thread, writeback_thread_start,
        writeback_thread_stop.
      
      Everything, including devices coming and going away and rescuer
      operation under simulated memory pressure, seems to work fine in my
      test setup.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Jeff Moyer <jmoyer@redhat.com>
  16. 14 Jan 2013, 1 commit
    • writeback: add more tracepoints · 9fb0a7da
      Committed by Tejun Heo
      Add tracepoints for page dirtying, writeback_single_inode start, inode
      dirtying and writeback.  For the latter two inode events, a pair of
      events is defined to denote the start and end of the operations (the
      starting one has a _start suffix and the one without a suffix fires
      after the operation is complete).  These inode ops are FS-specific,
      can be non-trivial, and having enclosing tracepoints is useful for
      external tracers.
      
      This is part of the tracepoint additions to improve visibility into
      dirtying / writeback operations for the io tracer and userland.
      
      v2: writeback_dirty_inode[_start] TPs may be called for files on
          pseudo FSes w/ unregistered bdi.  Check whether bdi->dev is %NULL
          before dereferencing.
      
      v3: buffer dirtying moved to a block TP.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  17. 06 May 2012, 1 commit
  18. 16 Mar 2012, 1 commit
    • device.h: audit and cleanup users in main include dir · 313162d0
      Committed by Paul Gortmaker
      The <linux/device.h> header includes a lot of stuff, and
      it in turn gets a lot of use just for the basic "struct device"
      which appears so often.
      
      Clean up the users as follows:
      
      1) For those headers only needing "struct device" as a pointer
      in fcn args, replace the include with exactly that.
      
      2) For headers not really using anything from device.h, simply
      delete the include altogether.
      
      3) For headers relying on getting device.h implicitly before
      being included themselves, now explicitly include device.h
      
      4) For files in which doing #1 or #2 uncovers an implicit
      dependency on some other header, fix by explicitly adding
      the required header(s).
      
      Any C files that were implicitly relying on device.h to be
      present have already been dealt with in advance.
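      
      A small illustration of rule 1), with hypothetical names: a header that
      only passes struct device by pointer needs no <linux/device.h>:
      
        /* before: #include <linux/device.h> */
        struct device;                    /* forward declaration is enough */
        
        int foo_probe(struct device *dev);
        void foo_remove(struct device *dev);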
      
      Total removals from #1 and #2: 51.  Total additions coming
      from #3: 9.  Total other implicit dependencies from #4: 7.
      
      As of 3.3-rc1, there were 110, so a net removal of 42 gives
      about a 38% reduction in device.h presence in include/*
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  19. 06 Feb 2012, 1 commit
  20. 01 Feb 2012, 1 commit
  21. 18 Dec 2011, 2 commits
    • writeback: dirty ratelimit - think time compensation · 83712358
      Committed by Wu Fengguang
      Compensate the task's think time when computing the final pause time,
      so that ->dirty_ratelimit can be executed accurately.
      
              think time := time spent outside of balance_dirty_pages()
      
      In the rare case that the task slept longer than the 200ms period time
      (resulting in a negative pause time), the sleep time will be compensated
      for in the following periods, too, if it's less than 1 second.
      
      Accumulated errors are carefully avoided as long as the max pause area
      is not hit.
      
      Pseudo code:
      
              period = pages_dirtied / task_ratelimit;
              think = jiffies - dirty_paused_when;
              pause = period - think;
      
      1) normal case: period > think
      
              pause = period - think
              dirty_paused_when = jiffies + pause
              nr_dirtied = 0
      
                                   period time
                    |===============================>|
                        think time      pause time
                    |===============>|==============>|
              ------|----------------|---------------|------------------------
              dirty_paused_when   jiffies
      
      2) no pause case: period <= think
      
              don't pause; reduce future pause time by:
              dirty_paused_when += period
              nr_dirtied = 0
      
                                 period time
                    |===============================>|
                                        think time
                    |===================================================>|
              ------|--------------------------------+-------------------|----
              dirty_paused_when                                       jiffies
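      
      A hedged translation of the pseudocode into plain C; the function and
      parameter names are illustrative (the caller is assumed to reset
      nr_dirtied to 0 in both cases and task_ratelimit to be non-zero):
      
        static long compute_pause(unsigned long pages_dirtied,
                                  unsigned long task_ratelimit, /* pages/jiffy */
                                  unsigned long now,            /* jiffies */
                                  unsigned long *dirty_paused_when)
        {
            long period = pages_dirtied / task_ratelimit;
            long think  = now - *dirty_paused_when; /* time outside balance_dirty_pages() */
            long pause  = period - think;
        
            if (pause > 0) {                  /* 1) normal case: period > think */
                *dirty_paused_when = now + pause;
                return pause;                 /* caller sleeps this long */
            }
            *dirty_paused_when += period;     /* 2) no-pause case: credit the   */
            return 0;                         /*    surplus to future periods   */
        }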
      Acked-by: Jan Kara <jack@suse.cz>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
    • writeback: show writeback reason with __print_symbolic · b3bba872
      Committed by Wu Fengguang
      This makes the binary trace understandable by trace-cmd.
      
      CC: Dave Chinner <david@fromorbit.com>
      CC: Curt Wohlgemuth <curtw@google.com>
      CC: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
  22. 31 Oct 2011, 4 commits
  23. 03 Oct 2011, 1 commit
    • writeback: IO-less balance_dirty_pages() · 143dfe86
      Committed by Wu Fengguang
      As proposed by Chris, Dave and Jan, don't start foreground writeback IO
      inside balance_dirty_pages().  Instead, simply let it idle-sleep for
      some time to throttle the dirtying task.  Meanwhile, kick off the
      per-bdi flusher thread to do background writeback IO.
      
      RATIONALE
      =========
      
      - disk seeks on concurrent writeback of multiple inodes (Dave Chinner)
      
        If every thread doing writes and being throttled starts foreground
        writeback, we get N IO submitters from at least N different inodes at
        the same time, and end up with N different sets of IO being issued
        with potentially zero locality to each other, resulting in much lower
        elevator sort/merge efficiency; hence the disk seeks all over the
        place to service the different sets of IO.
        OTOH, if there is only one submission thread, it doesn't jump between
        inodes in the same way when congestion clears - it keeps writing to
        the same inode, resulting in large related chunks of sequential IOs
        being issued to the disk. This is more efficient than the above
        foreground writeback because the elevator works better and the disk
        seeks less.
      
      - lock contention and cache bouncing on concurrent IO submitters (Dave Chinner)
      
        With this patchset, the fs_mark benchmark on a 12-drive software RAID0 goes
        from CPU bound to IO bound, freeing "3-4 CPUs worth of spinlock contention".
      
        * "CPU usage has dropped by ~55%", "it certainly appears that most of
          the CPU time saving comes from the removal of contention on the
          inode_wb_list_lock" (IMHO at least 10% comes from the reduction of
          cacheline bouncing, because the new code is able to call much less
          frequently into balance_dirty_pages() and hence access the global
          page states)
      
        * the user space "App overhead" is reduced by 20%, by avoiding the
          cacheline pollution by the complex writeback code path
      
        * "for a ~5% throughput reduction", "the number of write IOs have
          dropped by ~25%", and the elapsed time reduced from 41:42.17 to
          40:53.23.
      
        * On a simple test of 100 dd, it reduces the CPU %system time from 30% to 3%,
          and improves IO throughput from 38MB/s to 42MB/s.
      
      - IO size too small for fast arrays and too large for slow USB sticks
      
        The write_chunk used by the current balance_dirty_pages() cannot be
        directly set to some large value (eg. 128MB) for better IO efficiency,
        because that could lead to user-perceivable stalls of more than 1
        second.  Even the current 4MB write size may be too large for slow
        USB sticks.  The fact that balance_dirty_pages() starts IO on its own
        couples the IO size to the wait time, which makes it hard to use a
        suitable IO size while keeping the wait time under control.
      
        Now it's possible to increase writeback chunk size proportional to the
        disk bandwidth. In a simple test of 50 dd's on XFS, 1-HDD, 3GB ram,
        the larger writeback size dramatically reduces the seek count to 1/10
        (far beyond my expectation) and improves the write throughput by 24%.
      
      - long block time in balance_dirty_pages() hurts desktop responsiveness
      
        Many of us may have had the experience: it often takes a couple of
        seconds or even longer to stop a heavily writing dd/cp/tar command
        with Ctrl-C or "kill -9".
      
      - IO pipeline broken by bumpy write() progress
      
        There is a broad class of "loop {read(buf); write(buf);}" applications
        whose read() pipeline will be under-utilized or even come to a stop if
        the write()s have long latencies _or_ don't progress at a constant
        rate.  The current threshold-based throttling inherently transfers the
        large low-level IO completion fluctuations to bumpy application
        write()s, and further deteriorates with an increasing number of
        dirtiers and/or bdi's.
      
        For example, when doing 50 dd's + 1 remote rsync to an XFS partition,
        the rsync progress is very bumpy on the legacy kernel, and throughput
        is improved by 67% by this patchset (with the larger write chunk size
        on top, it becomes a 93% speedup).
      
        The new rate based throttling can support 1000+ dd's with excellent
        smoothness, low latency and low overheads.
      
      For the above reasons, it's much better to do IO-less and low latency
      pauses in balance_dirty_pages().
      
      Jan Kara, Dave Chinner and I explored the scheme of letting
      balance_dirty_pages() wait for enough writeback IO completions to
      safeguard the dirty limit.  However, it was found to have two problems:
      
      - in large NUMA systems, the per-cpu counters may have big accounting
        errors, leading to big throttle wait times and jitter.
      
      - NFS may kill a large amount of unstable pages with one single COMMIT.
        Because the NFS server serves COMMIT with expensive fsync() IOs, it is
        desirable to delay and reduce the number of COMMITs.  So such bursty
        IO completions, and the resulting large (and tiny) stall times of
        IO-completion-based throttling, are not likely to be optimized away.
      
      So here is a pause-time-oriented approach, which tries to control the
      pause time of each balance_dirty_pages() invocation by controlling
      the number of pages dirtied before calling balance_dirty_pages(), for
      smooth and efficient dirty throttling:
      
      - avoid useless (eg. zero pause time) balance_dirty_pages() calls
      - avoid too small pause time (less than   4ms, which burns CPU power)
      - avoid too large pause time (more than 200ms, which hurts responsiveness)
      - avoid big fluctuations of pause times
      
      It can control pause times at will.  The default policy (in a followup
      patch) will be to do ~10ms pauses in the 1-dd case, and increase to
      ~100ms in the 1000-dd case.
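      
      A hedged sketch of that policy as a simple clamp; the 4ms/200ms bounds
      come from the list above and the helper name is illustrative:
      
        #define MIN_PAUSE_MS   4    /* below this, sleeping burns CPU for nothing */
        #define MAX_PAUSE_MS 200    /* above this, responsiveness suffers */
        
        static long clamp_pause_ms(long pause_ms)
        {
            if (pause_ms < MIN_PAUSE_MS)
                return 0;           /* skip useless / too-short pauses entirely */
            if (pause_ms > MAX_PAUSE_MS)
                return MAX_PAUSE_MS;
            return pause_ms;
        }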
      
      BEHAVIOR CHANGE
      ===============
      
      (1) dirty threshold
      
      Users will notice that applications get throttled once they cross the
      global (background + dirty)/2 = 15% threshold (with the default 10%
      background and 20% dirty ratios, (10 + 20)/2 = 15), and are then
      balanced around 17.5%.  Before this patch, the behavior was to simply
      throttle at 20% dirtyable memory in the 1-dd case.
      
      Since the task will be soft throttled earlier than before, it may be
      perceived by end users as a performance "slow down" if their
      application happens to dirty more than 15% of dirtyable memory.
      
      (2) smoothness/responsiveness
      
      Users will notice a more responsive system during heavy writeback.
      "killall dd" will take effect instantly.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
  24. 31 Aug 2011, 1 commit
  25. 10 Jul 2011, 2 commits
    • writeback: trace global_dirty_state · e1cbe236
      Committed by Wu Fengguang
      Add trace event balance_dirty_state for showing the global dirty page
      counts and thresholds at each global_dirty_limits() invocation.  This
      will cover the callers throttle_vm_writeout(), over_bground_thresh()
      and each balance_dirty_pages() loop.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
    • writeback: make writeback_control.nr_to_write straight · d46db3d5
      Committed by Wu Fengguang
      Pass struct wb_writeback_work all the way down to writeback_sb_inodes(),
      and initialize the struct writeback_control there.
      
      struct writeback_control is basically designed to control writeback of
      a single file, but we keep abusing it for writing multiple files in
      writeback_sb_inodes() and its callers.
      
      It immediately cleans things up, e.g. suddenly wbc.nr_to_write vs
      work->nr_pages starts to make sense, and instead of saving and restoring
      pages_skipped in writeback_sb_inodes() it can always start with a clean
      zero value.
      
      It also makes a neat IO pattern change: large dirty files are now
      written in the full 4MB writeback chunk size, rather than whatever
      quota remained in wbc->nr_to_write.
      Acked-by: Jan Kara <jack@suse.cz>
      Proposed-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
  26. 08 Jun 2011, 3 commits
  27. 14 Jan 2011, 1 commit
  28. 27 Oct 2010, 3 commits
    • writeback: do not sleep on the congestion queue if there are no congested BDIs or if significant congestion is not being encountered in the current zone · 0e093d99
      Committed by Mel Gorman
      
      If congestion_wait() is called with no BDI congested, the caller will
      sleep for the full timeout, and this may be an unnecessary sleep.  This
      patch adds a wait_iff_congested() that checks congestion and sleeps only
      if a BDI is congested; otherwise it calls cond_resched() to ensure the
      caller is not hogging the CPU longer than its quota, but it does not
      sleep.
      
      This is aimed at reducing some of the major desktop stalls reported during
      IO.  For example, while kswapd is operating, it calls congestion_wait()
      but it could just have been reclaiming clean page cache pages with no
      congestion.  Without this patch, it would sleep for a full timeout but
      after this patch, it'll just call schedule() if it has been on the CPU too
      long.  Similar logic applies to direct reclaimers that are not making
      enough progress.
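      
      A hedged userland model of that control flow (the real
      wait_iff_congested() also returns early when congestion clears; this
      sketch only mirrors the fast path described above):
      
        #define _POSIX_C_SOURCE 199309L
        #include <sched.h>
        #include <stdbool.h>
        #include <time.h>
        
        /* Returns the time slept, in milliseconds. */
        static long wait_iff_congested_model(bool bdi_congested, long timeout_ms)
        {
            if (!bdi_congested) {
                sched_yield();      /* stand-in for cond_resched(): yield, no sleep */
                return 0;
            }
            struct timespec ts = { timeout_ms / 1000,
                                   (timeout_ms % 1000) * 1000000L };
            nanosleep(&ts, NULL);   /* a BDI really is congested: sleep */
            return timeout_ms;
        }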
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • writeback: account for time spent congestion_waited · 52bb9198
      Committed by Mel Gorman
      There is strong evidence to indicate that a lot of time is being spent
      in congestion_wait(), some of it unnecessarily.  This patch adds a
      tracepoint for congestion_wait to record when congestion_wait() was
      called, how long the timeout was, and how long it actually slept.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • writeback: remove nonblocking/encountered_congestion references · 1b430bee
      Committed by Wu Fengguang
      This removes more dead code that was somehow missed by commit 0d99519e
      (writeback: remove unused nonblocking and congestion checks).  There is
      no behavior change except for the removal of two entries from the ext4
      tracing interface.
      
      The nonblocking checks in ->writepages are no longer used because the
      flusher now prefers to block on get_request_wait() rather than skip
      inodes on IO congestion.  The latter would lead to more seeky IO.
      
      The nonblocking checks in ->writepage are no longer used because they
      are redundant with the WB_SYNC_NONE check.
      
      We no longer set ->nonblocking in VM page-out and page migration, because
      a) it's effectively redundant with WB_SYNC_NONE in the current code
      b) its old semantic of "don't get stuck on request queues" is a
         mis-behavior: it would skip some dirty inodes on congestion and page
         out others, which is unfair in terms of LRU age.
      
      Inspired by Christoph Hellwig. Thanks!
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Sage Weil <sage@newdream.net>
      Cc: Steve French <sfrench@samba.org>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>