1. 29 Jul 2016 (1 commit)
    • mm, vmscan: move LRU lists to node · 599d0c95
      Authored by Mel Gorman
      This moves the LRU lists from the zone to the node and related data such
      as counters, tracing, congestion tracking and writeback tracking.
      
      Unfortunately, due to reclaim and compaction retry logic, it is
      necessary to account for the number of LRU pages at both the zone and
      the node level.  Most reclaim logic is based on the node counters, but
      the retry logic uses the zone counters, which do not distinguish
      inactive and active sizes.  It would be possible to keep the LRU
      counters on a per-zone basis, but that is a heavier calculation across
      multiple cache lines and it would be performed far more frequently than
      the retry checks.
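
      As a toy illustration of that trade-off, the sketch below (plain
      userspace C with made-up structures and numbers, not kernel code)
      contrasts summing per-zone LRU counters on every check with reading a
      single node-wide counter:

        #include <stdio.h>

        #define NR_ZONES 3   /* e.g. DMA, NORMAL, HIGHMEM on a 32-bit machine */

        struct zone_model { unsigned long nr_inactive_file; };
        struct node_model {
            struct zone_model zones[NR_ZONES];
            unsigned long nr_inactive_file;   /* single node-wide counter */
        };

        /* Per-zone basis: every check walks all zones (several cache lines). */
        static unsigned long node_lru_from_zones(const struct node_model *n)
        {
            unsigned long total = 0;
            for (int i = 0; i < NR_ZONES; i++)
                total += n->zones[i].nr_inactive_file;
            return total;
        }

        int main(void)
        {
            struct node_model node = {
                .zones = { { 1000 }, { 5000 }, { 2000 } },
                .nr_inactive_file = 8000,
            };
            printf("per-zone sum: %lu, node counter: %lu\n",
                   node_lru_from_zones(&node), node.nr_inactive_file);
            return 0;
        }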
      
      Other than the LRU counters, this is mostly a mechanical patch, but
      note that it introduces a number of anomalies.  For example, the scans
      are per-zone but use per-node counters, and a node is marked as
      congested when any of its zones is congested.  These anomalies cause
      odd problems that are fixed by later patches, but deferring those fixes
      keeps this patch easier to review.
      
      In the event that there is excessive overhead on 32-bit systems due to
      the LRU lists being per-node, there are two potential solutions:
      
      1. Long-term isolation of highmem pages when reclaim is lowmem
      
         When pages are skipped, they are immediately added back onto the LRU
         list. If lowmem reclaim persisted for long periods of time, the same
         highmem pages get continually scanned. The idea would be that lowmem
         keeps those pages on a separate list until a reclaim for highmem pages
         arrives that splices the highmem pages back onto the LRU.  It could
         potentially be implemented similarly to the UNEVICTABLE list.
      
         That would reduce the skip rate; the potential corner case is that
         highmem pages would still have to be scanned and reclaimed to free
         lowmem slab pages.
      
      2. Linear scan lowmem pages if the initial LRU shrink fails
      
         This will break LRU ordering but may be preferable and faster during
         memory pressure than skipping LRU pages.
      
      Link: http://lkml.kernel.org/r/1467970510-21195-4-git-send-email-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Rik van Riel <riel@surriel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      599d0c95
  2. 27 Jul 2016 (2 commits)
  3. 30 May 2016 (1 commit)
  4. 21 May 2016 (1 commit)
    • MM: increase safety margin provided by PF_LESS_THROTTLE · a53eaff8
      Authored by NeilBrown
      When nfsd is exporting a filesystem over NFS which is then NFS-mounted
      on the local machine there is a risk of deadlock.  This happens when
      there are lots of dirty pages in the NFS filesystem and they cause NFSD
      to be throttled, either in throttle_vm_writeout() or in
      balance_dirty_pages().
      
      To avoid this problem the PF_LESS_THROTTLE flag is set for NFSD threads
      and it provides a 25% increase to the limits that affect NFSD.  Any
      process writing to an NFS filesystem will be throttled well before the
      number of dirty NFS pages reaches the limit imposed on NFSD, so NFSD
      will not deadlock on pages that it needs to write out.  At least it
      shouldn't.
      
      All processes are allowed a small excess margin to avoid performing too
      many calculations: ratelimit_pages.
      
      ratelimit_pages is set so that if a thread on every CPU uses the entire
      margin, the total will only go 3% over the limit, and this is much less
      than the 25% bonus that PF_LESS_THROTTLE provides, so this margin
      shouldn't be a problem.  But it is.
      
      The "total memory" that these 3% and 25% are calculated against is not
      really total memory but "global_dirtyable_memory()", which doesn't
      include anonymous memory, just free memory and page-cache memory.
      
      The "ratelimit_pages" number is based on whatever the
      global_dirtyable_memory was on the last CPU hot-plug, which might not be
      what you expect, but is probably close to the total freeable memory.
      
      The throttle threshold uses the global_dirtyable_memory at the moment
      when the throttling happens, which could be much less than at the last
      CPU hotplug.  So if lots of anonymous memory has been allocated, thus
      pushing out lots of page-cache pages, then NFSD might end up being
      throttled due to dirty NFS pages because the "25%" bonus it gets is
      calculated against a rather small amount of dirtyable memory, while the
      "3%" margin that other processes are allowed to dirty without penalty is
      calculated against a much larger number.
      
      To remove this possibility of deadlock we need to make sure that the
      margin granted to PF_LESS_THROTTLE exceeds that rate-limit margin.
      Simply adding ratelimit_pages isn't enough as that should be multiplied
      by the number of cpus.
      
      So add "global_wb_domain.dirty_limit / 32" as that more accurately
      reflects the current total over-shoot margin.  This ensures that the
      number of dirty NFS pages never gets so high that nfsd will be throttled
      waiting for them to be written.
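
      To make the imbalance concrete, here is a small userspace sketch with
      purely hypothetical page counts (the 20% ratio is only an example, not
      a statement about defaults).  It shows how the writers' collective
      slack, sized against the old, large dirtyable figure, can exceed the
      25% bonus computed against the current, shrunken one, and it prints the
      extra global_wb_domain.dirty_limit / 32 margin the fix grants:

        #include <stdio.h>

        int main(void)
        {
            /* Hypothetical page counts, chosen only to reproduce the imbalance
             * described above; none of these are kernel defaults. */
            unsigned long dirtyable_at_hotplug = 1000000; /* when ratelimit_pages was sized */
            unsigned long dirtyable_now        =  100000; /* after anon pushed out page cache */
            unsigned long dirty_ratio          = 20;      /* example ratio only */

            unsigned long limit_then = dirtyable_at_hotplug * dirty_ratio / 100;
            unsigned long limit_now  = dirtyable_now * dirty_ratio / 100;

            /* total slack all ordinary writers may consume: ~3% over the (old) limit */
            unsigned long writer_slack = limit_then * 3 / 100;
            /* bonus granted to PF_LESS_THROTTLE, computed against the current limit */
            unsigned long nfsd_bonus   = limit_now * 25 / 100;

            printf("writer slack: %lu, nfsd bonus: %lu\n", writer_slack, nfsd_bonus);
            /* here the slack (6000) exceeds the bonus (5000), so nfsd can be
             * throttled on its own dirty pages; the fix also grants
             * global_wb_domain.dirty_limit / 32 of extra margin: */
            printf("extra margin added by the fix: %lu\n", limit_now / 32);
            return 0;
        }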
      
      Link: http://lkml.kernel.org/r/87futgowwv.fsf@notabene.neil.brown.name
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a53eaff8
  5. 20 May 2016 (1 commit)
  6. 06 May 2016 (1 commit)
    • writeback: Fix performance regression in wb_over_bg_thresh() · 74d36944
      Authored by Howard Cochran
      Commit 947e9762 ("writeback: update wb_over_bg_thresh() to use
      wb_domain aware operations") unintentionally changed this function's
      meaning from "are there more dirty pages than the background writeback
      threshold" to "are there more dirty pages than the writeback threshold".
      The background writeback threshold is typically half of the writeback
      threshold, so this had the effect of raising the number of dirty pages
      required to cause a writeback worker to perform background writeout.
      
      This can cause a very severe performance regression when a BDI uses
      BDI_CAP_STRICTLIMIT because balance_dirty_pages() and the writeback worker
      can now disagree on whether writeback should be initiated.
      
      For example, in a system having 1GB of RAM, a single spinning disk, and a
      "pass-through" FUSE filesystem mounted over the disk, application code
      mmapped a 128MB file on the disk and was randomly dirtying pages in that
      mapping.
      
      Because FUSE uses strictlimit and has a default max_ratio of only 1%, in
      balance_dirty_pages, thresh is ~200, bg_thresh is ~100, and the
      dirty_freerun_ceiling is the average of those, ~150. So, it pauses the
      dirtying processes when we have 151 dirty pages and wakes up a background
      writeback worker. But the worker tests the wrong threshold (200 instead of
      100), so it does not initiate writeback and just returns.
      
      Thus, balance_dirty_pages keeps looping, sleeping and then waking up the
      worker who will do nothing. It remains stuck in this state until the few
      dirty pages that we have finally expire and we write them back for that
      reason. Then the whole process repeats, resulting in near-zero throughput
      through the FUSE BDI.
      
      The fix is to call the parameterized variant of wb_calc_thresh, so that
      the worker will do writeback if bg_thresh is exceeded, which was the
      behavior before the referenced commit.
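
      The livelock is easy to see with the numbers from the example above;
      the sketch below is a simplified userspace model of the mismatched
      checks, not kernel code:

        #include <stdio.h>
        #include <stdbool.h>

        /* Numbers taken from the FUSE example above; everything else is a
         * simplified stand-in for the real code. */
        static bool worker_should_write(unsigned long dirty, unsigned long thresh,
                                        unsigned long bg_thresh, bool buggy)
        {
            /* the regression: the worker compared against thresh instead of
             * bg_thresh, so background writeback never started */
            return buggy ? dirty > thresh : dirty > bg_thresh;
        }

        int main(void)
        {
            unsigned long thresh = 200, bg_thresh = 100;
            unsigned long freerun = (thresh + bg_thresh) / 2;  /* ~150 */
            unsigned long dirty = 151;                         /* just over it */

            printf("dirtier paused (dirty > freerun): %d\n", dirty > freerun);
            printf("buggy worker writes back        : %d\n",
                   worker_should_write(dirty, thresh, bg_thresh, true));   /* 0 -> livelock */
            printf("fixed worker writes back        : %d\n",
                   worker_should_write(dirty, thresh, bg_thresh, false));  /* 1 */
            return 0;
        }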
      
      Fixes: 947e9762 ("writeback: update wb_over_bg_thresh() to use wb_domain aware operations")
      Signed-off-by: Howard Cochran <hcochran@kernelspring.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      Cc: <stable@vger.kernel.org> # v4.2+
      Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      74d36944
  7. 13 Apr 2016 (1 commit)
    • writeback: Fix performance regression in wb_over_bg_thresh() · 81b76485
      Authored by Howard Cochran
      Commit 947e9762 ("writeback: update wb_over_bg_thresh() to use
      wb_domain aware operations") unintentionally changed this function's
      meaning from "are there more dirty pages than the background writeback
      threshold" to "are there more dirty pages than the writeback threshold".
      The background writeback threshold is typically half of the writeback
      threshold, so this had the effect of raising the number of dirty pages
      required to cause a writeback worker to perform background writeout.
      
      This can cause a very severe performance regression when a BDI uses
      BDI_CAP_STRICTLIMIT because balance_dirty_pages() and the writeback worker
      can now disagree on whether writeback should be initiated.
      
      For example, in a system having 1GB of RAM, a single spinning disk, and
      a "pass-through" FUSE filesystem mounted over the disk, application code
      mmapped a 128MB file on the disk and was randomly dirtying pages in that
      mapping.
      
      Because FUSE uses strictlimit and has a default max_ratio of only 1%,
      in balance_dirty_pages, thresh is ~200, bg_thresh is ~100, and the
      dirty_freerun_ceiling is the average of those, ~150. So, it pauses the
      dirtying processes when we have 151 dirty pages and wakes up a
      background writeback worker. But the worker tests the wrong threshold
      (200 instead of 100), so it does not initiate writeback and just
      returns.
      
      Thus, balance_dirty_pages keeps looping, sleeping and then waking up the
      worker who will do nothing. It remains stuck in this state until the few
      dirty pages that we have finally expire and we write them back for that
      reason. Then the whole process repeats, resulting in near-zero
      throughput through the FUSE BDI.
      
      The fix is to call the parameterized variant of wb_calc_thresh, so that
      the worker will do writeback if bg_thresh is exceeded, which was the
      behavior before the referenced commit.
      
      Fixes: 947e9762 ("writeback: update wb_over_bg_thresh() to use wb_domain aware operations")
      Signed-off-by: Howard Cochran <hcochran@kernelspring.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      81b76485
  8. 05 Apr 2016 (1 commit)
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Authored by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to
      implement the page cache with chunks bigger than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely that it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it's a constant source of confusion whether a
      PAGE_CACHE_* or a PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straight-forward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      script below.  For some reason, coccinelle doesn't patch header files.
      I've called spatch for them manually.
      
      The only adjustment after coccinelle is reverting the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
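
      As a hypothetical illustration of why the shift patterns in the script
      collapse to plain expressions, the following stand-alone snippet (with
      stand-in macro values, not taken from any kernel header) shows that the
      page-cache constant simply mirrored the base-page one, making the shift
      a no-op:

        #include <stdio.h>

        /* Stand-in definitions: the page-cache constant mirrored the
         * base-page one, which is what made the conversion mechanical. */
        #define PAGE_SHIFT       12
        #define PAGE_CACHE_SHIFT PAGE_SHIFT

        int main(void)
        {
            unsigned long nr_pages = 32;

            /* old form: scale page-cache units to base pages (always a no-op) */
            unsigned long old_form = nr_pages << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
            /* new form after the coccinelle conversion */
            unsigned long new_form = nr_pages;

            printf("%lu == %lu\n", old_form, new_form);
            return 0;
        }
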
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
  9. 16 Mar 2016 (4 commits)
  10. 15 Jan 2016 (1 commit)
  11. 23 Nov 2015 (1 commit)
    • treewide: Remove old email address · 90eec103
      Authored by Peter Zijlstra
      There were still a number of references to my old Red Hat email
      address in the kernel source. Remove these while keeping the
      Red Hat copyright notices intact.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      90eec103
  12. 21 Nov 2015 (1 commit)
    • mm/page-writeback.c: initialize m_dirty to avoid compile warning · 50e55bf6
      Authored by Yang Shi
      When building the kernel with gcc 5.2, the warning below is raised:
      
        mm/page-writeback.c: In function 'balance_dirty_pages.isra.10':
        mm/page-writeback.c:1545:17: warning: 'm_dirty' may be used uninitialized in this function [-Wmaybe-uninitialized]
           unsigned long m_dirty, m_thresh, m_bg_thresh;
      
      m_dirty, m_thresh and m_bg_thresh are initialized inside the "if
      (mdtc)" block, so if mdtc is NULL they won't be initialized before
      being used.  Initialize m_dirty to zero, and also initialize m_thresh
      and m_bg_thresh to keep consistency.
      
      They are used later in the condition:
      
        !mdtc || m_dirty <= dirty_freerun_ceiling(m_thresh, m_bg_thresh)
      
      If mdtc is NULL, dirty_freerun_ceiling() will not be called at all, so
      the initialization will not change any behavior; it merely silences the
      compile warning.
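
      A minimal userspace model of the pattern (simplified names; the
      dirty_freerun_ceiling() helper here is just the midpoint of the two
      thresholds, as in the FUSE example earlier in this list, not the kernel
      implementation):

        #include <stdio.h>
        #include <stdbool.h>

        static unsigned long dirty_freerun_ceiling(unsigned long thresh,
                                                   unsigned long bg_thresh)
        {
            return (thresh + bg_thresh) / 2;
        }

        int main(void)
        {
            bool have_mdtc = false;   /* models "mdtc == NULL" */

            /* Initialized to 0 only to silence -Wmaybe-uninitialized; when
             * have_mdtc is false the right-hand side of || never runs. */
            unsigned long m_dirty = 0, m_thresh = 0, m_bg_thresh = 0;

            if (have_mdtc) {
                m_dirty = 120;
                m_thresh = 200;
                m_bg_thresh = 100;
            }

            if (!have_mdtc || m_dirty <= dirty_freerun_ceiling(m_thresh, m_bg_thresh))
                printf("within freerun range, no throttling\n");

            return 0;
        }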
      
      (akpm: the patch actually reduces .text size by ~20 bytes on gcc-4.x.y)
      
      [akpm@linux-foundation.org: add comment]
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      50e55bf6
  13. 13 Oct 2015 (4 commits)
    • writeback: fix incorrect calculation of available memory for memcg domains · c5edf9cd
      Authored by Tejun Heo
      For memcg domains, the amount of available memory was calculated as
      
       min(the amount currently in use + headroom according to memcg,
           total clean memory)
      
      This isn't quite correct as what should be capped by the amount of
      clean memory is the headroom, not the sum of memory in use and
      headroom.  For example, if a memcg domain has a significant amount of
      dirty memory, the above can lead to a value which is lower than the
      current amount in use which doesn't make much sense.  In most
      circumstances, the above leads to a number which is somewhat but not
      drastically lower.
      
      As the amount of memory which can be readily allocated to the memcg
      domain is capped by the amount of system-wide clean memory which is
      not already assigned to the memcg itself, the number we want is
      
       the amount currently in use +
       min(headroom according to memcg, clean memory elsewhere in the system)
      
      This patch updates mem_cgroup_wb_stats() to return the number of
      filepages and headroom instead of the calculated available pages.
      mdtc_cap_avail() is renamed to mdtc_calc_avail() and performs the
      above calculation from file, headroom, dirty and globally clean pages.
      
      v2: Dummy mem_cgroup_wb_stats() implementation wasn't updated leading
          to build failure when !CGROUP_WRITEBACK.  Fixed.
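
      With hypothetical page counts, the difference between the old and the
      corrected calculation looks like this (userspace sketch only, all
      numbers invented):

        #include <stdio.h>

        static unsigned long min_ul(unsigned long a, unsigned long b)
        {
            return a < b ? a : b;
        }

        int main(void)
        {
            /* Hypothetical page counts for one memcg domain. */
            unsigned long in_use          = 100;  /* filepages currently used by the memcg */
            unsigned long headroom        = 50;   /* how much more the memcg may take      */
            unsigned long clean_in_memcg  = 10;   /* the rest of in_use is dirty           */
            unsigned long clean_elsewhere = 20;   /* clean memory outside the memcg        */
            unsigned long clean_total     = clean_in_memcg + clean_elsewhere;

            /* old calculation: can end up below what is already in use */
            unsigned long old_avail = min_ul(in_use + headroom, clean_total);
            /* fixed calculation: only the headroom is capped, by clean memory elsewhere */
            unsigned long new_avail = in_use + min_ul(headroom, clean_elsewhere);

            printf("in use %lu, old avail %lu, new avail %lu\n",
                   in_use, old_avail, new_avail);    /* 100, 30, 120 */
            return 0;
        }
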
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Fixes: c2aa723a ("writeback: implement memcg writeback domain based throttling")
      Signed-off-by: Jens Axboe <axboe@fb.com>
      c5edf9cd
    • writeback: memcg dirty_throttle_control should be initialized with wb->memcg_completions · d60d1bdd
      Authored by Tejun Heo
      MDTC_INIT() is used to initialize dirty_throttle_control for memcg
      domains.  It used DTC_INIT_COMMON() to initialize mdtc->wb and
      ->wb_completions, which is incorrect as DTC_INIT_COMMON() sets the
      latter to wb->completions instead of wb->memcg_completions.  This can
      lead to wildly incorrect results when calculating the proportion of
      dirty memory the memcg domain should get.
      
      Remove DTC_INIT_COMMON() and update MDTC_INIT() to initialize
      mdtc->wb_completions to wb->memcg_completions.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Fixes: c2aa723a ("writeback: implement memcg writeback domain based throttling")
      Signed-off-by: Jens Axboe <axboe@fb.com>
      d60d1bdd
    • writeback: bdi_writeback iteration must not skip dying ones · b817525a
      Authored by Tejun Heo
      bdi_for_each_wb() is used in several places to wake up or issue
      writeback work items to all wb's (bdi_writeback's) on a given bdi.
      The iteration is performed by walking bdi->cgwb_tree; however, the
      tree only indexes wb's which are currently active.
      
      For example, when a memcg gets associated with a different blkcg, the
      old wb is removed from the tree so that the new one can be indexed.
      The old wb starts dying from then on but will linger till all its
      inodes are drained.  As these dying wb's may still host dirty inodes,
      writeback operations which affect all wb's must include them.
      bdi_for_each_wb() skipping dying wb's led to sync(2) missing and
      failing to sync the inodes belonging to those wb's.
      
      This patch adds an RCU-protected @bdi->wb_list which lists all wb's
      belonging to that bdi.  wb's are added on creation and removed on
      release rather than on the start of destruction.  bdi_for_each_wb()
      usages are replaced with list_for_each[_continue]_rcu() iterations
      over @bdi->wb_list, and bdi_for_each_wb() and its helpers are removed.
      
      v2: Updated as per Jan.  last_wb ref leak in bdi_split_work_to_wbs()
          fixed and unnecessary list head severing in cgwb_bdi_destroy()
          removed.
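
      A rough userspace model of the iteration described above;
      rcu_read_lock/unlock() are no-op stand-ins for the kernel primitives,
      and the list is a plain singly linked list rather than the real
      RCU-protected bdi->wb_list:

        #include <stdio.h>
        #include <stdbool.h>

        static void rcu_read_lock(void)   { }  /* no-op stand-ins for the kernel's */
        static void rcu_read_unlock(void) { }  /* RCU read-side primitives         */

        struct wb_model {
            const char *name;
            bool dying;                /* removed from cgwb_tree, inodes not drained */
            struct wb_model *next;     /* models membership on bdi->wb_list          */
        };

        static void queue_writeback_on_all(struct wb_model *wb_list)
        {
            rcu_read_lock();
            for (struct wb_model *wb = wb_list; wb; wb = wb->next) {
                /* dying wb's stay on wb_list until released, so sync still sees them */
                printf("queue work on %s%s\n", wb->name, wb->dying ? " (dying)" : "");
            }
            rcu_read_unlock();
        }

        int main(void)
        {
            struct wb_model old_wb = { "wb-old", true,  NULL };
            struct wb_model new_wb = { "wb-new", false, &old_wb };

            queue_writeback_on_all(&new_wb);
            return 0;
        }
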
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-and-tested-by: Artem Bityutskiy <dedekind1@gmail.com>
      Fixes: ebe41ab0 ("writeback: implement bdi_for_each_wb()")
      Link: http://lkml.kernel.org/g/1443012552.19983.209.camel@gmail.com
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      b817525a
    • writeback: laptop_mode_timer_fn() needs rcu_read_lock() around bdi_writeback iteration · 9ad18ab9
      Authored by Tejun Heo
      laptop_mode_timer_fn() was using bdi_for_each_wb() without the
      required RCU locking leading to the following warning.
      
       WARNING: CPU: 0 PID: 0 at include/linux/backing-dev.h:415 laptop_mode_timer_fn+0x106/0x170()
       ...
       Call Trace:
        <IRQ>  [<ffffffff81480cdc>] dump_stack+0x4e/0x82
        [<ffffffff81051912>] warn_slowpath_common+0x82/0xc0
        [<ffffffff81051a0a>] warn_slowpath_null+0x1a/0x20
        [<ffffffff8115f0e6>] laptop_mode_timer_fn+0x106/0x170
        [<ffffffff810ca8e3>] call_timer_fn+0xb3/0x2f0
        [<ffffffff810cad25>] run_timer_softirq+0x205/0x370
        [<ffffffff81056854>] __do_softirq+0xd4/0x460
        [<ffffffff81056d69>] irq_exit+0x89/0xa0
        [<ffffffff8185a892>] smp_apic_timer_interrupt+0x42/0x50
        [<ffffffff81858a44>] apic_timer_interrupt+0x84/0x90
       ...
      
      Fix it by adding rcu_read_lock() around the iteration.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Fixes: a06fd6b1 ("writeback: make laptop_mode_timer_fn() handle multiple bdi_writeback's")
      Reviewed-by: Jan Kara <jack@suse.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      9ad18ab9
  14. 19 Aug 2015 (1 commit)
    • writeback: update writeback tracepoints to report cgroup · 5634cc2a
      Authored by Tejun Heo
      The following tracepoints are updated to report the cgroup used during
      cgroup writeback.
      
      * writeback_write_inode[_start]
      * writeback_queue
      * writeback_exec
      * writeback_start
      * writeback_written
      * writeback_wait
      * writeback_nowork
      * writeback_wake_background
      * wbc_writepage
      * writeback_queue_io
      * bdi_dirty_ratelimit
      * balance_dirty_pages
      * writeback_sb_inodes_requeue
      * writeback_single_inode[_start]
      
      Note that writeback_bdi_register is separated out from writeback_class
      as reporting a cgroup doesn't make sense for it.  Tracepoints which take
      a bdi are updated to take a bdi_writeback instead.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Suggested-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      5634cc2a
  15. 07 Aug 2015 (1 commit)
  16. 02 Jun 2015 (18 commits)
    • writeback: implement unlocked_inode_to_wb transaction and use it for stat updates · 682aa8e1
      Authored by Tejun Heo
      The mechanism for detecting whether an inode should switch its wb
      (bdi_writeback) association is now in place.  This patch builds the
      framework for the actual switching.
      
      This patch adds a new inode flag I_WB_SWITCHING, which has two
      functions.  First, the easy one: it ensures that there's only one
      switch in progress for a given inode.  Second, it's used as a
      mechanism to synchronize wb stat updates.
      
      The two stats, WB_RECLAIMABLE and WB_WRITEBACK, aren't event counters
      but track the current number of dirty pages and pages under writeback
      respectively.  As such, when an inode is moved from one wb to another,
      the inode's portion of those stats has to be transferred together;
      unfortunately, this is a bit tricky as those stat updates are percpu
      operations which are performed without holding any lock in some
      places.
      
      This patch solves the problem in a similar way to memcg.  Each such
      lockless stat update is wrapped in a transaction surrounded by
      unlocked_inode_to_wb_begin/end().  During normal operation, these map
      to rcu_read_lock/unlock(); however, if I_WB_SWITCHING is asserted,
      mapping->tree_lock is grabbed across the transaction.
      
      In turn, the switching path sets I_WB_SWITCHING and waits for an RCU
      grace period to pass before actually starting to switch, which
      guarantees that all stat update paths are synchronizing against
      mapping->tree_lock.
      
      This patch still doesn't implement the actual switching.
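
      The shape of the transaction can be modelled in userspace roughly as
      follows; the real helpers are unlocked_inode_to_wb_begin/end() and the
      heavy lock is mapping->tree_lock, while the types, names and flag below
      are simplified stand-ins:

        #include <stdio.h>
        #include <stdbool.h>
        #include <pthread.h>

        struct inode_model {
            bool i_wb_switching;              /* models the I_WB_SWITCHING flag     */
            pthread_mutex_t tree_lock;        /* stands in for mapping->tree_lock   */
        };

        static void rcu_read_lock(void)   { } /* no-op stand-ins for RCU primitives */
        static void rcu_read_unlock(void) { }

        static bool wb_stat_txn_begin(struct inode_model *inode)
        {
            rcu_read_lock();
            if (inode->i_wb_switching) {
                /* a switch is in flight: serialize the stat update on the lock */
                pthread_mutex_lock(&inode->tree_lock);
                return true;
            }
            return false;                     /* common case: RCU section suffices  */
        }

        static void wb_stat_txn_end(struct inode_model *inode, bool locked)
        {
            if (locked)
                pthread_mutex_unlock(&inode->tree_lock);
            rcu_read_unlock();
        }

        int main(void)                        /* build with: cc -pthread file.c */
        {
            static struct inode_model inode = {
                .i_wb_switching = false,
                .tree_lock = PTHREAD_MUTEX_INITIALIZER,
            };
            bool locked = wb_stat_txn_begin(&inode);
            /* ... transfer/update WB_RECLAIMABLE / WB_WRITEBACK style counters ... */
            wb_stat_txn_end(&inode, locked);
            printf("stat update done (heavy lock taken: %d)\n", locked);
            return 0;
        }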
      
      v3: Updated on top of the recent cancel_dirty_page() updates.
          unlocked_inode_to_wb_begin() now nests inside
          mem_cgroup_begin_page_stat() to match the locking order.
      
      v2: The i_wb access transaction will be used for !stat accesses too.
          Function names and comments updated accordingly.
      
          s/inode_wb_stat_unlocked_{begin|end}/unlocked_inode_to_wb_{begin|end}/
          s/switch_wb/switch_wbs/
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      682aa8e1
    • writeback: implement memcg writeback domain based throttling · c2aa723a
      Authored by Tejun Heo
      While cgroup writeback support now connects memcg and blkcg so that
      writeback IOs are properly attributed and controlled, the IO back
      pressure propagation mechanism implemented in balance_dirty_pages()
      and its subroutines wasn't aware of cgroup writeback.
      
      Processes belonging to a memcg may have access to only a subset of the
      total memory available in the system, and not factoring this into dirty
      throttling rendered it completely ineffective for processes under
      memcg limits; memcg ended up building a separate ad-hoc degenerate
      mechanism directly into vmscan code to limit page dirtying.
      
      The previous patches updated balance_dirty_pages() and its subroutines
      so that they can deal with multiple wb_domain's (writeback domains)
      and defined per-memcg wb_domain.  Processes belonging to a non-root
      memcg are bound to two wb_domains, global wb_domain and memcg
      wb_domain, and should be throttled according to IO pressures from both
      domains.  This patch updates dirty throttling code so that it repeats
      similar calculations for the two domains - the differences between the
      two are few and minor - and applies the lower of the two sets of
      resulting constraints.
      
      wb_over_bg_thresh(), which controls when background writeback
      terminates, is also updated to consider both global and memcg
      wb_domains.  It returns true if dirty is over bg_thresh for either
      domain.
      
      This makes the dirty throttling mechanism operational for memcg
      domains including writeback-bandwidth-proportional dirty page
      distribution inside them but the ad-hoc memcg throttling mechanism in
      vmscan is still in place.  The next patch will rip it out.
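
      A compact model of the "over the background threshold for either
      domain" check (hypothetical numbers; a simplified stand-in for
      wb_over_bg_thresh(), not the kernel function):

        #include <stdio.h>
        #include <stdbool.h>

        struct dom { unsigned long dirty, bg_thresh; };

        /* Background writeback should run if either domain is over its
         * background threshold. */
        static bool over_bg_thresh(const struct dom *gdtc, const struct dom *mdtc)
        {
            if (gdtc->dirty > gdtc->bg_thresh)
                return true;
            if (mdtc && mdtc->dirty > mdtc->bg_thresh)
                return true;
            return false;
        }

        int main(void)
        {
            struct dom gdtc = { .dirty = 4000, .bg_thresh = 10000 };  /* global: under */
            struct dom mdtc = { .dirty =  900, .bg_thresh =   500 };  /* memcg: over   */

            printf("start background writeback: %d\n", over_bg_thresh(&gdtc, &mdtc));
            return 0;
        }
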
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      c2aa723a
    • writeback: implement memcg wb_domain · 841710aa
      Authored by Tejun Heo
      Dirtyable memory is distributed to a wb (bdi_writeback) according to
      the relative bandwidth the wb is writing out in the whole system.
      This distribution is global - each wb is measured against all other
      wb's and gets the proportionately sized portion of the memory in the
      whole system.
      
      For cgroup writeback, the amount of dirtyable memory is scoped by
      memcg and thus each wb would need to be measured and controlled in its
      memcg.  IOW, a wb will belong to two writeback domains - the global
      and memcg domains.
      
      The previous patches laid the groundwork to support the two wb_domains
      and this patch implements memcg wb_domain.  memcg->cgwb_domain is
      initialized on css online and destroyed on css release,
      wb->memcg_completions is added, and __wb_writeout_inc() is updated to
      increment completions against both global and memcg wb_domains.
      
      The following patches will update balance_dirty_pages() and its
      subroutines to actually consider memcg wb_domain for throttling.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      841710aa
    • writeback: update wb_over_bg_thresh() to use wb_domain aware operations · 947e9762
      Authored by Tejun Heo
      wb_over_bg_thresh() currently uses global_dirty_limits() and
      wb_dirty_limit() both of which are wrappers around operations which
      take dirty_throttle_control.  For cgroup writeback support, the
      function will be updated to also consider memcg wb_domains which
      requires the context information carried in dirty_throttle_control.
      
      This patch updates wb_over_bg_thresh() so that it uses the underlying
      wb_domain aware operations directly and builds the global
      dirty_throttle_control in the process.
      
      This patch doesn't introduce any behavioral changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      947e9762
    • writeback: move over_bground_thresh() to mm/page-writeback.c · aa661bbe
      Authored by Tejun Heo
      and rename it to wb_over_bg_thresh().  The function is closely tied to
      the dirty throttling mechanism implemented in page-writeback.c.  This
      relocation will allow future updates necessary for cgroup writeback
      support.
      
      While at it, add function comment.
      
      This is pure reorganization and doesn't introduce any behavioral
      changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      aa661bbe
    • writeback: separate out domain_dirty_limits() · 9fc3a43e
      Authored by Tejun Heo
      global_dirty_limits() calculates thresh and bg_thresh (confusingly
      called *pdirty and *pbackground in the function) assuming
      global_wb_domain; however, cgroup writeback support requires
      considering per-memcg wb_domain too.
      
      This patch separates out domain_dirty_limits() which takes
      dirty_throttle_control out of global_dirty_limits().  As thresh and
      bg_thresh calculation needs the amount of dirtyable memory in the
      domain, dirty_throttle_control->avail is added.  The new function
      calculates the two thresholds and stores them directly in the
      dirty_throttle_control.
      
      Also, memcg domains can't follow the vm_dirty_bytes and
      dirty_background_bytes settings directly.  If those are set and
      domain_dirty_limits() is invoked for a !global domain, the settings
      are translated to ratios by scaling them against globally available
      memory.  dirty_throttle_control->gdtc is added to enable this when
      CONFIG_CGROUP_WRITEBACK.
      
      global_dirty_limits() is now a thin wrapper around
      domain_dirty_limits() and balance_dirty_pages() is updated to use the
      new function too.
      
      This patch doesn't introduce any behavioral changes.
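
      The byte-to-ratio translation for a !global domain can be sketched like
      this (hypothetical page counts, userspace arithmetic only, not the
      kernel implementation):

        #include <stdio.h>

        int main(void)
        {
            /* All numbers are hypothetical page counts, just to show the scaling. */
            unsigned long global_avail   = 1000000;  /* globally dirtyable pages        */
            unsigned long memcg_avail    =  100000;  /* dirtyable pages in the memcg    */
            unsigned long vm_dirty_pages =  200000;  /* vm_dirty_bytes expressed in pages */

            /* For a !global domain, the byte setting is turned into a ratio
             * against globally available memory, then applied to the domain's
             * own available memory. */
            double ratio = (double)vm_dirty_pages / global_avail;      /* 0.20 */
            unsigned long memcg_thresh = (unsigned long)(ratio * memcg_avail);

            printf("memcg thresh ~ %lu pages\n", memcg_thresh);        /* ~20000 */
            return 0;
        }
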
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      9fc3a43e
    • writeback: make __wb_writeout_inc() and hard_dirty_limit() take wb_domain as a parameter · c7981433
      Authored by Tejun Heo
      Currently __wb_writeout_inc() and hard_dirty_limit() assume
      global_wb_domain; however, cgroup writeback support requires
      considering per-memcg wb_domain too.
      
      This patch separates out domain-specific part of __wb_writeout_inc()
      into wb_domain_writeout_inc() which takes wb_domain as a parameter and
      adds the parameter to hard_dirty_limit().  This will allow these two
      functions to handle per-memcg wb_domains.
      
      This patch doesn't introduce any behavioral changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      c7981433
    • writeback: add dirty_throttle_control->dom · e9f07dfd
      Authored by Tejun Heo
      Currently all dirty throttle operations use global_wb_domain; however,
      cgroup writeback support requires considering per-memcg wb_domain too.
      This patch adds dirty_throttle_control->dom and updates functions
      which are directly using global_wb_domain to use it instead.
      
      As this makes global_update_bandwidth() a misnomer, the function is
      renamed to domain_update_bandwidth().
      
      This patch doesn't introduce any behavioral changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      e9f07dfd
    • writeback: add dirty_throttle_control->wb_completions · e9770b34
      Authored by Tejun Heo
      wb->completions measures the wb's proportional write bandwidth in
      global_wb_domain and is thus naturally tied to the wb_domain.  This patch
      adds dirty_throttle_control->wb_completions which is initialized to
      wb->completions by GDTC_INIT() and updates __wb_dirty_limits() to use
      it instead of dereferencing wb->completions directly.
      
      This will allow dirty_throttle_control to represent different
      wb_domains and the matching wb completions.
      
      This patch doesn't introduce any behavioral changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      e9770b34
    • writeback: add dirty_throttle_control->pos_ratio · daddfa3c
      Authored by Tejun Heo
      wb_position_ratio() is used to calculate pos_ratio, which is used for
      two purposes.  wb_update_dirty_ratelimit() uses it to adjust
      wb->[balanced_]dirty_ratelimit gradually and balance_dirty_pages() to
      immediately adjust dirty_ratelimit right before applying it to
      determine pause duration.
      
      While wb_update_dirty_ratelimit() is separately rate limited from
      balance_dirty_pages(), on the run where the ratelimit is updated, we
      end up calculating pos_ratio twice with the same parameters.
      
      This patch adds dirty_throttle_control->pos_ratio.
      balance_dirty_pages() calculates it once per run and
      wb_update_dirty_ratelimit() uses the value stored in
      dirty_throttle_control.
      
      This removes the duplicate calculation and also will help implementing
      memcg wb_domain.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      daddfa3c
    • writeback: make __wb_calc_thresh() take dirty_throttle_control · b1cbc6d4
      Authored by Tejun Heo
      wb_calc_thresh() calculates wb_thresh by scaling thresh according to
      the wb's portion in the system-wide write bandwidth.  cgroup writeback
      support would need to calculate wb_thresh against memcg domain too.
      This patch renames wb_calc_thresh() to __wb_calc_thresh() and makes it
      take dirty_throttle_control so that the function can later be updated
      to calculate against different domains according to
      dirty_throttle_control.
      
      wb_calc_thresh() is now a thin wrapper around __wb_calc_thresh().
      
      v2: The original version was incorrectly scaling dtc->dirty instead of
          dtc->thresh.  This was due to the extremely confusing function and
          variable names.  Added a rename patch and fixed this one.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      b1cbc6d4
    • writeback: add dirty_throttle_control->wb_bg_thresh · 970fb01a
      Authored by Tejun Heo
      wb_bg_thresh is currently treated as a second-class citizen.  It's
      only used when BDI_CAP_STRICTLIMIT is set and balance_dirty_pages()
      doesn't calculate it unless the cap is set.  When the cap is set, the
      calculated value is not passed around but instead recalculated
      whenever it's used.
      
      wb_position_ratio() calculates it by scaling wb_thresh proportional to
      bg_thresh / thresh.  wb_update_dirty_ratelimit() uses wb_dirty_limit()
      on bg_thresh, which should generally lead to a similar result as the
      proportional scaling but can also be way off in the presence of
      max/min_ratio settings.
      
      Avoiding the wb_bg_thresh calculation saves us one u64 multiplication
      and division when BDI_CAP_STRICTLIMIT is not set.  Given that
      balance_dirty_pages() is already ratelimited, this doesn't justify the
      incurred extra complexity.
      
      This patch adds wb_bg_thresh to dirty_throttle_control and makes
      wb_dirty_limits() always calculate it and updates the users to use the
      pre-calculated value.
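
      The proportional scaling mentioned above amounts to a single
      multiply-and-divide; with hypothetical page counts:

        #include <stdio.h>

        int main(void)
        {
            /* Hypothetical page counts: wb_bg_thresh follows wb_thresh scaled
             * by bg_thresh / thresh, as described above. */
            unsigned long long thresh = 20000, bg_thresh = 10000, wb_thresh = 4000;
            unsigned long long wb_bg_thresh = wb_thresh * bg_thresh / thresh;

            printf("wb_bg_thresh = %llu\n", wb_bg_thresh);   /* 2000 */
            return 0;
        }
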
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      970fb01a
    • writeback: consolidate dirty throttle parameters into dirty_throttle_control · 2bc00aef
      Authored by Tejun Heo
      Dirty throttling implemented in balance_dirty_pages() and its
      subroutines makes use of a number of parameters which are passed
      around individually.  This renders these functions somewhat unwieldy
      and makes it difficult to add or change the involved parameters.  Also
      some functions use different or conflicting naming schemes for the
      same parameters making the code confusing to follow.
      
      This patch consolidates the main parameters into struct
      dirty_throttle_control so that they can be passed around easily and
      adding new parameters isn't painful.  This also unifies how a given
      parameter is named and accessed.  The drawback of using this type of
      control structure rather than explicit parameters is that it isn't
      immediately obvious which function accesses and modifies what;
      however, it's fairly clear that the benefits outweigh the cost in this
      case.
      
      GDTC_INIT() macro is provided to ease initializing
      dirty_throttle_control for the global_wb_domain and
      balance_dirty_pages() uses a separate pointer to point to its global
      dirty_throttle_control.  This is to make it uniform with memcg domain
      handling which will be added later.
      
      This patch doesn't introduce any behavioral changes.
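
      A rough outline of what the control structure ends up holding once the
      rest of this series has landed; the field names are taken from the
      commit messages in this list, not copied from a particular kernel tree,
      so treat this as a sketch rather than the exact definition:

        struct dirty_throttle_control {
        #ifdef CONFIG_CGROUP_WRITEBACK
            struct wb_domain *dom;                 /* global or memcg wb_domain     */
            struct dirty_throttle_control *gdtc;   /* the global dtc, for memcg use */
        #endif
            struct bdi_writeback *wb;
            struct fprop_local_percpu *wb_completions;

            unsigned long avail;                   /* dirtyable memory in the domain */
            unsigned long dirty;                   /* domain-wide dirty pages        */
            unsigned long thresh;
            unsigned long bg_thresh;

            unsigned long wb_dirty;                /* per-wb counterparts            */
            unsigned long wb_thresh;
            unsigned long wb_bg_thresh;

            unsigned long pos_ratio;
        };
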
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      2bc00aef
    • writeback: move global_dirty_limit into wb_domain · dcc25ae7
      Authored by Tejun Heo
      This patch is a part of the series to define wb_domain which
      represents a domain that wb's (bdi_writeback's) belong to and are
      measured against each other in.  This will enable IO backpressure
      propagation for cgroup writeback.
      
      global_dirty_limit exists to regulate the global dirty threshold which
      is a property of the wb_domain.  This patch moves hard_dirty_limit,
      dirty_lock, and update_time into wb_domain.
      
      This is pure reorganization and doesn't introduce any behavioral
      changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      dcc25ae7
    • writeback: implement wb_domain · 380c27ca
      Authored by Tejun Heo
      Dirtyable memory is distributed to a wb (bdi_writeback) according to
      the relative bandwidth the wb is writing out in the whole system.
      This distribution is global - each wb is measured against all other
      wb's and gets the proportionately sized portion of the memory in the
      whole system.
      
      For cgroup writeback, the amount of dirtyable memory is scoped by
      memcg and thus each wb would need to be measured and controlled in its
      memcg.  IOW, a wb will belong to two writeback domains - the global
      and memcg domains.
      
      Currently, what constitutes the global writeback domain are scattered
      across a number of global states.  This patch starts collecting them
      into struct wb_domain.
      
      * fprop_global which serves as the basis for proportional bandwidth
        measurement and its period timer are moved into struct wb_domain.
      
      * global_wb_domain hosts the states for the global domain.
      
      * While at it, flatten wb_writeout_fraction() into its callers.  This
        thin wrapper doesn't provide any actual benefits while getting in
        the way.
      
      This is pure reorganization and doesn't introduce any behavioral
      changes.
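
      Putting this commit together with the "move global_dirty_limit into
      wb_domain" entry above, the collected state looks roughly like the
      outline below (assembled from these commit messages, not the verbatim
      kernel definition):

        struct wb_domain {
            spinlock_t lock;                  /* the dirty lock moved in here      */

            /* proportional write-bandwidth bookkeeping */
            struct fprop_global completions;  /* was the global fprop_global       */
            struct timer_list period_timer;   /* its aging period timer            */

            /* global dirty limit state moved in from the old globals */
            unsigned long dirty_limit;
            unsigned long dirty_limit_tstamp; /* the old update_time               */
        };
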
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      380c27ca
    • writeback: reorganize [__]wb_update_bandwidth() · 8a731799
      Authored by Tejun Heo
      __wb_update_bandwidth() is called from two places -
      mm/page-writeback.c::balance_dirty_pages() and
      fs/fs-writeback.c::wb_writeback().  The latter updates only the
      write bandwidth while the former also deals with the dirty ratelimit.
      The two callsites are distinguished by whether @thresh parameter is
      zero or not, which is cryptic.  In addition, the two files define
      their own different versions of wb_update_bandwidth() on top of
      __wb_update_bandwidth(), which is confusing to say the least.  This
      patch cleans up [__]wb_update_bandwidth() in the following ways.
      
      * __wb_update_bandwidth() now takes explicit @update_ratelimit
        parameter to gate dirty ratelimit handling.
      
      * mm/page-writeback.c::wb_update_bandwidth() is flattened into its
        caller - balance_dirty_pages().
      
      * fs/fs-writeback.c::wb_update_bandwidth() is moved to
        mm/page-writeback.c and __wb_update_bandwidth() is made static.
      
      * While at it, add a lockdep assertion to __wb_update_bandwidth().
      
      Except for the lockdep addition, this is pure reorganization and
      doesn't introduce any behavioral changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      8a731799
    • writeback: clean up wb_dirty_limit() · 0d960a38
      Authored by Tejun Heo
      The function name wb_dirty_limit(), its argument @dirty and the local
      variable @wb_dirty are mortally confusing given that the function
      calculates the per-wb threshold value, not dirty pages, especially
      given that @dirty and @wb_dirty are used elsewhere for dirty pages.
      
      Let's rename the function to wb_calc_thresh() and wb_dirty to
      wb_thresh.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Greg Thelen <gthelen@google.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      0d960a38
    • writeback: make bdi_start_background_writeback() take bdi_writeback instead of backing_dev_info · 9ecf4866
      Authored by Tejun Heo
      bdi_start_background_writeback() currently takes @bdi and kicks the
      root wb (bdi_writeback).  In preparation for cgroup writeback support,
      make it take wb instead.
      
      This patch doesn't make any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      9ecf4866