1. 15 Apr 2014: 1 commit
  2. 04 Apr 2014: 31 commits
    • nilfs2: verify metadata sizes read from disk · 0ec060d1
      Ryusuke Konishi authored
      Add code to check sizes of on-disk data of metadata files such as inode
      size, segment usage size, DAT entry size, and checkpoint size.  Although
      these sizes are read from disk, the current implementation doesn't check
      them.
      
      If these sizes are not sane on disk, it can cause out-of-range access to
      metadata or memory access overrun on metadata block buffers due to
      overflow in sundry calculations.
      
      Both lower limit and upper limit of metadata sizes are verified to
      prevent these issues.
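
      A minimal sketch of the kind of bounds check this describes; the helper
      name and limits here are illustrative, not the actual nilfs2 code:

        static int nilfs_check_entry_size(size_t entry_size, size_t min_size,
                                          size_t block_size)
        {
                /* Sizes below the on-disk structure's minimum or beyond one
                 * block would overflow entries-per-block calculations. */
                if (entry_size < min_size || entry_size > block_size)
                        return -EINVAL;
                return 0;
        }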
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Cc: Andreas Rohner <andreas.rohner@gmx.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • nilfs2: implementation of NILFS_IOCTL_SET_SUINFO ioctl · 2cc88f3a
      Andreas Rohner authored
      With this ioctl the segment usage entries in the SUFILE can be updated
      from userspace.
      
      This is useful, because it allows the userspace GC to modify and update
      segment usage entries for specific segments, which enables it to avoid
      unnecessary write operations.
      
      If a segment needs to be cleaned, but there is no or very little
      reclaimable space in it, the cleaning operation basically degrades to a
      useless moving operation.  In the end the only thing that changes is the
      location of the data and a timestamp in the segment usage information.
      With this ioctl the GC can skip the cleaning and update the segment
      usage entries directly instead.
      
      This is basically a shortcut to cleaning the segment.  It is still
      necessary to read the segment summary information, but the writing of
      the live blocks can be skipped if it's not worth it.
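
      A hedged userspace sketch of how the GC might invoke the ioctl, assuming
      the usual nilfs2 convention of passing a struct nilfs_argv that
      describes an array of entries (dev_fd and the field values are
      illustrative):

        struct nilfs_suinfo_update sup = { .sup_segnum = segnum };
        struct nilfs_argv argv = {
                .v_base   = (unsigned long)&sup,
                .v_nmembs = 1,
                .v_size   = sizeof(sup),
        };

        /* Update the segment's usage entry instead of rewriting its
         * live blocks. */
        if (ioctl(dev_fd, NILFS_IOCTL_SET_SUINFO, &argv) < 0)
                perror("NILFS_IOCTL_SET_SUINFO");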
      
      [konishi.ryusuke@lab.ntt.co.jp: add description of NILFS_IOCTL_SET_SUINFO ioctl]
      Signed-off-by: Andreas Rohner <andreas.rohner@gmx.net>
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • nilfs2: add struct nilfs_suinfo_update and flags · 90ccf7dd
      Andreas Rohner authored
      Add the nilfs_suinfo_update structure, which contains the information
      needed to update one segment usage entry.  The flags specify which
      fields need to be updated.
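
      A sketch of the structure's shape as described; the field names follow
      the changelog and existing nilfs_suinfo conventions, so treat the exact
      layout as an assumption:

        struct nilfs_suinfo_update {
                __u64 sup_segnum;               /* segment to update */
                __u32 sup_flags;                /* which fields are valid */
                __u32 sup_reserved;             /* padding */
                struct nilfs_suinfo sup_sui;    /* new segment usage values */
        };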
      Signed-off-by: Andreas Rohner <andreas.rohner@gmx.net>
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • rtc: pm8xxx: add support for devicetree · 5a418558
      Josh Cartwright authored
      Add support for describing the PM8921/PM8058 RTC in device tree.
      
      Additionally:
         - drop support for describing the RTC using platform data,
           as there are no current in-tree users who do so.
         - make allow_set_time a device-specific flag, instead of mucking
           with the rtc_ops, as sketched below
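
      A hedged sketch of the flag-based approach: keep a single rtc_ops table
      and gate set_time on a per-device flag instead (structure and field
      names are illustrative):

        static int pm8xxx_rtc_set_time(struct device *dev, struct rtc_time *tm)
        {
                struct pm8xxx_rtc *rtc_dd = dev_get_drvdata(dev);

                /* Refuse writes unless the device was configured, e.g. via
                 * devicetree, to allow setting the time. */
                if (!rtc_dd->allow_set_time)
                        return -EACCES;
                /* ... program the hardware ... */
                return 0;
        }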
      Signed-off-by: Josh Cartwright <joshc@codeaurora.org>
      Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
      Acked-by: Lee Jones <lee.jones@linaro.org>
      Cc: Alessandro Zummo <a.zummo@towertech.it>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib/decompress_inflate.c: include appropriate header file · af661e83
      Rashika Kheria authored
      Include the appropriate header file, include/linux/decompress/inflate.h,
      in lib/decompress_inflate.c, because it contains the prototype
      declaration for a function defined in lib/decompress_inflate.c.
      
      Also, fix the guard around the header file
      include/linux/decompress/inflate.h to use a more unique guard symbol.
      This avoids conflict with the INFLATE_H defined by
      zlib_inflate/inflate.h.
      
      This eliminates the following warning in lib/decompress_inflate.c:
      
        lib/decompress_inflate.c:35:17: warning: no previous prototype for `gunzip' [-Wmissing-prototypes]
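
      The guard fix would look roughly like this (the exact symbol chosen may
      differ):

        /* include/linux/decompress/inflate.h */
        #ifndef LINUX_DECOMPRESS_INFLATE_H
        #define LINUX_DECOMPRESS_INFLATE_H
        /* gunzip() prototype lives here; the guard no longer collides with
         * INFLATE_H from zlib_inflate/inflate.h */
        #endif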
      Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • backlight: update bd state & fb_blank properties when necessary · a55944ca
      Liu Ying authored
      We don't have to update the state and fb_blank properties of a backlight
      device every time a blanking or unblanking event comes in, because they
      may already hold the desired values.  Another consideration is that one
      backlight device may be shared by multiple framebuffers; the backlight
      driver should treat the backlight device as a resource shared by all the
      associated framebuffers.
      
      This patch adds some logic to record each framebuffer's backlight usage
      to determine the backlight device's use count and whether the two
      properties should be updated or not.  To be more specific, only an
      unblank operation on a currently blanked framebuffer increases the
      backlight device's use count by one, and only a blank operation on a
      currently unblanked framebuffer decreases the use count by one, because
      userspace may well unblank an already-unblanked framebuffer or blank an
      already-blanked one.
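
      A hedged sketch of that bookkeeping; the per-framebuffer array and its
      bounds are illustrative:

        if (fb_blank == FB_BLANK_UNBLANK && !bd->fb_bl_on[fb]) {
                bd->fb_bl_on[fb] = true;        /* this fb now holds a ref */
                if (bd->use_count++ == 0) {
                        bd->props.state &= ~BL_CORE_FBBLANK;
                        bd->props.fb_blank = FB_BLANK_UNBLANK;
                        backlight_update_status(bd);
                }
        } else if (fb_blank != FB_BLANK_UNBLANK && bd->fb_bl_on[fb]) {
                bd->fb_bl_on[fb] = false;
                if (--bd->use_count == 0) {
                        bd->props.state |= BL_CORE_FBBLANK;
                        bd->props.fb_blank = fb_blank;
                        backlight_update_status(bd);
                }
        }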
      Signed-off-by: Liu Ying <Ying.Liu@freescale.com>
      Cc: Jingoo Han <jg1.han@samsung.com>
      Cc: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
      Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
      Cc: Jani Nikula <jani.nikula@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/printk.h: remove double asmlinkage in printk_emit · d487d575
      Simon Kågström authored
      The double asmlinkage was introduced in commit 7ff9554b ("printk:
      convert byte-buffer to variable-length record buffer").
      Signed-off-by: Simon Kagstrom <simon.kagstrom@netinsight.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • printk: remove duplicated check for log level · 0185698c
      Petr Mladek authored
      The check for the exact log level is already done in printk_get_level.
      We do not need to duplicate it in printk_skip_level.
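
      After the change, printk_skip_level() can rely on printk_get_level()
      having validated the level byte; a sketch of the resulting helper:

        static inline const char *printk_skip_level(const char *buffer)
        {
                if (printk_get_level(buffer))
                        return buffer + 2;      /* skip SOH + level char */
                return buffer;
        }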
      Signed-off-by: Petr Mladek <pmladek@suse.cz>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Kay Sievers <kay@vrfy.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • xattr: guard against simultaneous glibc header inclusion · ea1a8217
      Serge Hallyn authored
      If the glibc xattr.h header is included after the uapi header,
      compilation fails due to an enum re-using a #define from the uapi
      header.
      
      Protect against this by guarding the define and enum inclusions against
      each other.
      
      (See https://lists.debian.org/debian-glibc/2014/03/msg00029.html
      and https://sourceware.org/glibc/wiki/Synchronizing_Headers
      for more information.)
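
      The mutual guard looks roughly like this (sketch; the real patch
      coordinates with glibc through a libc-compat style macro):

        /* set to 0 when glibc's <sys/xattr.h> was included first */
        #if !defined(__KERNEL__) && defined(_SYS_XATTR_H)
        #define __UAPI_DEF_XATTR 0
        #else
        #define __UAPI_DEF_XATTR 1
        #endif

        #if __UAPI_DEF_XATTR
        #define XATTR_CREATE  0x1       /* set value, fail if attr exists */
        #define XATTR_REPLACE 0x2       /* set value, fail if attr missing */
        #endif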
      Signed-off-by: Serge Hallyn <serge.hallyn@ubuntu.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Allan McRae <allan@archlinux.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • err.h: use bool for IS_ERR and IS_ERR_OR_NULL · a5ed3cee
      Joe Perches authored
      Use the more natural return of bool for these tests.
      
      No difference observed in .o files produced by gcc for x86.
      
      Remove the dentry description of kernel pointers left over from the
      1990s and from the 2002 cleanup that moved parts of fs.h to err.h.
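
      The resulting signatures look like this (sketch, modulo annotations such
      as unlikely()):

        static inline bool __must_check IS_ERR(__force const void *ptr)
        {
                return IS_ERR_VALUE((unsigned long)ptr);
        }

        static inline bool __must_check IS_ERR_OR_NULL(__force const void *ptr)
        {
                return !ptr || IS_ERR_VALUE((unsigned long)ptr);
        }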
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kernel/groups.c: remove return value of set_groups · 8f6c5ffc
      Wang YanQing authored
      After commit 6307f8fe ("security: remove dead hook task_setgroups"),
      set_groups always returns zero, so we can simply remove its return
      value.

      This patch reduces code size and simplifies code that uses set_groups,
      because callers no longer need to check its return value.
      
      [akpm@linux-foundation.org: remove obsolete claims from set_groups() comment]
      Signed-off-by: Wang YanQing <udknight@gmail.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Serge Hallyn <serge.hallyn@canonical.com>
      Cc: Eric Paris <eparis@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/syscalls.h: add sys32_quotactl() prototype · e3a0cfdc
      Rashika Kheria authored
      This eliminates the following warning in quota/compat.c:
      
        fs/quota/compat.c:43:17: warning: no previous prototype for `sys32_quotactl' [-Wmissing-prototypes]
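
      The added declaration would look roughly like this; the signature is
      inferred from the compat quotactl implementation and should be treated
      as an assumption:

        asmlinkage long sys32_quotactl(unsigned int cmd,
                                       const char __user *special,
                                       qid_t id, void __user *addr);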
      Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kobject: don't block for each kobject_uevent · bcccff93
      Vladimir Davydov authored
      Currently kobject_uevent has somewhat unpredictable semantics.  The
      point is, since it may call a usermode helper and wait for it to execute
      (UMH_WAIT_EXEC), it is impossible to say for sure what lock dependencies
      it will introduce for the caller - strictly speaking it depends on what
      fs the binary is located on and the set of locks fork may take.  There
      are quite a few users of kobject_uevent that do not take this into
      account and call it with various mutexes held, e.g.  rtnl_mutex,
      net_mutex, which might potentially lead to a deadlock.
      
      Since there is actually no reason to wait for the usermode helper to
      execute there, let's make kobject_uevent start the helper asynchronously
      with the aid of the UMH_NO_WAIT flag.
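
      A hedged sketch of the asynchronous launch: because nothing waits for
      the exec, the argument and environment buffers must live on the heap and
      be released by a cleanup callback rather than sitting on the caller's
      stack (the mistake that sank the earlier attempt); cleanup_uevent_env is
      an illustrative name:

        struct subprocess_info *info;

        info = call_usermodehelper_setup(argv[0], argv, env->envp, GFP_KERNEL,
                                         NULL, cleanup_uevent_env, env);
        if (info)
                retval = call_usermodehelper_exec(info, UMH_NO_WAIT);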
      
      Personally, I'm interested in this, because I really want kobject_uevent
      to be called under the slab_mutex in the slub implementation as it used
      to be some time ago, because it greatly simplifies synchronization and
      automatically fixes a kmemcg-related race.  However, there was a
      deadlock detected on an attempt to call kobject_uevent under the
      slab_mutex (see https://lkml.org/lkml/2012/1/14/45), which was reported
      to be fixed by releasing the slab_mutex for kobject_uevent.
      
      Unfortunately, there was no information about who exactly blocked on the
      slab_mutex causing the usermode helper to stall, neither have I managed
      to find this out or reproduce the issue.
      
      BTW, this is not the first attempt to make kobject_uevent use
      UMH_NO_WAIT.  A previous one was made by commit f520360d ("kobject:
      don't block for each kobject_uevent"), but it was wrong (it passed
      arguments allocated on the stack to the async thread), so it was
      reverted in 05f54c13 ("Revert "kobject: don't block for each
      kobject_uevent".").  That attempt was aimed at speeding up the boot
      process, though.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Greg KH <greg@kroah.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • drop_caches: add some documentation and info message · 5509a5d2
      Dave Hansen authored
      There is plenty of anecdotal evidence and a load of blog posts
      suggesting that using "drop_caches" periodically keeps your system
      running in "tip top shape".  Perhaps adding some kernel documentation
      will increase the amount of accurate data on its use.
      
      If we are not shrinking caches effectively, then we have real bugs.
      Using drop_caches will simply mask the bugs and make them harder to
      find, but certainly does not fix them, nor is it an appropriate
      "workaround" to limit the size of the caches.  On the contrary, there
      have been bug reports on issues that turned out to be misguided use of
      cache dropping.
      
      Dropping caches is a very drastic and disruptive operation that is good
      for debugging and running tests, but if it creates bug reports from
      production use, kernel developers should be aware of its use.
      
      Add a bit more documentation about it, a syslog message to track down
      abusers, and vmstat drop counters to help analyze problem reports.
      
      [akpm@linux-foundation.org: checkpatch fixes]
      [hannes@cmpxchg.org: add runtime suppression control]
      Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove read_cache_page_async() · 67f9fd91
      Sasha Levin authored
      This patch removes read_cache_page_async() which wasn't really needed
      anywhere and simplifies the code around it a bit.
      
      read_cache_page_async() is useful when we want to read a page into the
      cache without waiting for it to complete.  This happens when the
      appropriate callback 'filler' doesn't complete its read operation and
      releases the page lock immediately, and instead queues a different
      completion routine to do that.  This never actually happened anywhere in
      the code.
      
      read_cache_page_async() had 3 different callers:
      
      - read_cache_page() which is the sync version, it would just wait for
        the requested read to complete using wait_on_page_read().
      
      - JFFS2 would call it from jffs2_gc_fetch_page(), but the filler
        function it supplied doesn't do any async reads, and would complete
        before the filler function returns - making it actually a sync read.
      
      - CRAMFS would call it using the read_mapping_page_async() wrapper, with
        a similar story to JFFS2 - the filler function doesn't do anything that
        resembles an async read and would always complete before the filler
        function returns.
      
      To sum it up, the code in mm/filemap.c never took advantage of having
      read_cache_page_async().  While there are filler callbacks that do async
      reads (such as the block one), they were always invoked via the
      synchronous read_cache_page().
      
      This patch adds a mandatory wait for read to complete when adding a new
      page to the cache, and removes read_cache_page_async() and its wrappers.
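
      A sketch of the now-unconditional wait in the synchronous path, close to
      the helper shape the patch introduces in mm/filemap.c but hedged:

        static struct page *wait_on_page_read(struct page *page)
        {
                if (!IS_ERR(page)) {
                        wait_on_page_locked(page); /* filler unlocks when done */
                        if (!PageUptodate(page)) {
                                page_cache_release(page);
                                page = ERR_PTR(-EIO);
                        }
                }
                return page;
        }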
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/mm.h: remove ifdef condition · c558784f
      Rashika Kheria authored
      The ifdef conditions in include/linux/mm.h presents three cases:
      
       - !defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) && !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID)
      
         There is no out-of-line definition of the function, but
         include/linux/mm.h has a static inline stub defined.
      
       - defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) && !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID)
      
         linux/mm.h does not define a prototype, but mm/page_alloc.c defines
         the function.
      
         Hence, compiler reports the following warning:
      
           mm/page_alloc.c:4300:15: warning: no previous prototype for `__early_pfn_to_nid' [-Wmissing-prototypes]
      
       - defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID)
      
         The architecture defines the function, and linux/mm.h has a
         prototype.
      
      Thus, join the conditions of cases 2 and 3, i.e. eliminate the ifdef
      condition on CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID, so that the prototype is
      always declared and the missing-prototype warning in mm/page_alloc.c
      goes away.
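
      The resulting shape of the declarations, roughly:

        #if defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) || \
            defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID)
        int __early_pfn_to_nid(unsigned long pfn);
        #else
        static inline int __early_pfn_to_nid(unsigned long pfn)
        {
                return 0;
        }
        #endif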
      Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: keep page cache radix tree nodes in check · 449dd698
      Johannes Weiner authored
      Previously, page cache radix tree nodes were freed after reclaim emptied
      out their page pointers.  But now reclaim stores shadow entries in their
      place, which are only reclaimed when the inodes themselves are
      reclaimed.  This is problematic for bigger files that are still in use
      after they have a significant amount of their cache reclaimed, without
      any of those pages actually refaulting.  The shadow entries will just
      sit there and waste memory.  In the worst case, the shadow entries will
      accumulate until the machine runs out of memory.
      
      To get this under control, the VM will track radix tree nodes
      exclusively containing shadow entries on a per-NUMA node list.  Per-NUMA
      rather than global because we expect the radix tree nodes themselves to
      be allocated node-locally and we want to reduce cross-node references of
      otherwise independent cache workloads.  A simple shrinker will then
      reclaim these nodes on memory pressure.
      
      A few things need to be stored in the radix tree node to implement the
      shadow node LRU and allow tree deletions coming from the list (a sketch
      follows the list):
      
      1. There is no index available that would describe the reverse path
         from the node up to the tree root, which is needed to perform a
         deletion.  To solve this, encode in each node its offset inside the
         parent.  This can be stored in the unused upper bits of the same
         member that stores the node's height at no extra space cost.
      
      2. The number of shadow entries needs to be counted in addition to the
         regular entries, to quickly detect when the node is ready to go to
         the shadow node LRU list.  The current entry count is an unsigned
         int but the maximum number of entries is 64, so a shadow counter
         can easily be stored in the unused upper bits.
      
      3. Tree modification needs tree lock and tree root, which are located
         in the address space, so store an address_space backpointer in the
         node.  The parent pointer of the node is in a union with the 2-word
         rcu_head, so the backpointer comes at no extra cost as well.
      
      4. The node needs to be linked to an LRU list, which requires a list
         head inside the node.  This does increase the size of the node, but
         it does not change the number of objects that fit into a slab page.
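
      A hedged sketch of how the four pieces of bookkeeping land in the node
      structure (layout and field names are illustrative):

        struct radix_tree_node {
                unsigned int    path;   /* height from the bottom, plus this
                                         * node's offset in its parent, packed
                                         * into spare upper bits (point 1) */
                unsigned int    count;  /* regular entries in the low bits,
                                         * shadow entries counted in spare
                                         * upper bits (point 2) */
                union {
                        struct {
                                /* used while the node is in the tree */
                                struct radix_tree_node *parent;
                                /* address_space backpointer (point 3) */
                                void *private_data;
                        };
                        /* used while the node is being freed */
                        struct rcu_head rcu_head;
                };
                /* shadow-node LRU linkage (point 4) */
                struct list_head private_list;
                void __rcu      *slots[RADIX_TREE_MAP_SIZE];
        };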
      
      [akpm@linux-foundation.org: export the right function]
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib: radix_tree: tree node interface · 139e5616
      Johannes Weiner authored
      Make struct radix_tree_node part of the public interface and provide API
      functions to create, look up, and delete whole nodes.  Refactor the
      existing insert, look up, delete functions on top of these new node
      primitives.
      
      This will allow the VM to track and garbage collect page cache radix
      tree nodes.
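
      A sketch of the node-level primitives this exposes; the names follow the
      patch, but the exact signatures should be treated as assumptions:

        int __radix_tree_create(struct radix_tree_root *root,
                                unsigned long index,
                                struct radix_tree_node **nodep, void ***slotp);
        void *__radix_tree_lookup(struct radix_tree_root *root,
                                  unsigned long index,
                                  struct radix_tree_node **nodep,
                                  void ***slotp);
        bool __radix_tree_delete_node(struct radix_tree_root *root,
                                      struct radix_tree_node *node);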
      
      [sasha.levin@oracle.com: return correct error code on insertion failure]
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: thrash detection-based file cache sizing · a528910e
      Johannes Weiner authored
      The VM maintains cached filesystem pages on two types of lists.  One
      list holds the pages recently faulted into the cache, the other list
      holds pages that have been referenced repeatedly on that first list.
      The idea is to prefer reclaiming young pages over those that have shown
      to benefit from caching in the past.  We call the recently used list
      "inactive list" and the frequently used list "active list".

      Historically, every reclaim scan of the inactive list also took a
      smaller number of pages from the tail of the active list and moved them
      to the head of the inactive list.  This model gave established working
      sets more gracetime in the face of temporary use-once streams, but
      ultimately was not significantly better than a FIFO policy and still
      thrashed cache based on eviction speed, rather than actual demand for
      cache.
      
      This patch solves one half of the problem by decoupling the ability to
      detect working set changes from the inactive list size.  By maintaining
      a history of recently evicted file pages it can detect frequently used
      pages with an arbitrarily small inactive list size, and subsequently
      apply pressure on the active list based on actual demand for cache, not
      just overall eviction speed.
      
      Every zone maintains a counter that tracks inactive list aging speed.
      When a page is evicted, a snapshot of this counter is stored in the
      now-empty page cache radix tree slot.  On refault, the minimum access
      distance of the page can be assessed, to evaluate whether the page
      should be part of the active list or not.
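
      In outline, and heavily hedged (the packing helpers are illustrative):

        /* on eviction: stash the zone's current inactive-list age */
        shadow = pack_shadow(zone, atomic_long_read(&zone->inactive_age));

        /* on refault: a page is still "in the working set" if it would have
         * stayed resident with an inactive list bigger by refault_distance */
        refault_distance = atomic_long_read(&zone->inactive_age) - eviction;
        if (refault_distance <= zone_page_state(zone, NR_ACTIVE_FILE))
                SetPageActive(page);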
      
      This fixes the VM's blindness towards working set changes in excess of
      the inactive list.  And it is the foundation for further improving the
      protection ability and for reducing the minimum inactive list size below
      the current 50%.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Reviewed-by: Bob Liu <bob.liu@oracle.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm + fs: store shadow entries in page cache · 91b0abe3
      Johannes Weiner authored
      Reclaim will be leaving shadow entries in the page cache radix tree upon
      evicting the real page.  As those pages are found from the LRU, an
      iput() can lead to the inode being freed concurrently.  At this point,
      reclaim must no longer install shadow pages because the inode freeing
      code needs to ensure the page tree is really empty.
      
      Add an address_space flag, AS_EXITING, that the inode freeing code sets
      under the tree lock before doing the final truncate.  Reclaim will check
      for this flag before installing shadow pages.
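
      A sketch of the handshake (helper names follow the changelog's intent
      and are assumptions):

        /* inode freeing side, before the final truncate: */
        static inline void mapping_set_exiting(struct address_space *mapping)
        {
                set_bit(AS_EXITING, &mapping->flags);
        }

        /* reclaim side, before installing a shadow entry: */
        if (test_bit(AS_EXITING, &mapping->flags))
                goto out;       /* tree must be left truly empty */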
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm + fs: prepare for non-page entries in page cache radix trees · 0cd6144a
      Johannes Weiner authored
      shmem mappings already contain exceptional entries where swap slot
      information is remembered.
      
      To be able to store eviction information for regular page cache, prepare
      every site dealing with the radix trees directly to handle entries other
      than pages.
      
      The common lookup functions will filter out non-page entries and return
      NULL for page cache holes, just as before.  But provide a raw version of
      the API which returns non-page entries as well, and switch shmem over to
      use it.
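
      A hedged sketch of the split between the raw and the filtering lookups:

        /* raw lookup: may return a shadow/swap (exceptional) entry */
        page = find_get_entry(mapping, index);
        if (radix_tree_exceptional_entry(page))
                handle_entry(page);     /* caller-specific, illustrative */

        /* page-only lookup: filters exceptional entries, NULL for holes */
        page = find_get_page(mapping, index);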
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: filemap: move radix tree hole searching here · e7b563bb
      Johannes Weiner authored
      The radix tree hole searching code is only used for page cache, for
      example by the readahead code trying to get a picture of the area
      surrounding a fault.

      It sufficed to rely on the radix tree definition of holes, which is
      "empty tree slot".  But this is about to change, as shadow page
      descriptors will be stored in the page cache after the actual pages get
      evicted from memory.
      
      Move the functions over to mm/filemap.c and make them native page cache
      operations, where they can later be adapted to handle the new definition
      of "page cache hole".
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib: radix-tree: add radix_tree_delete_item() · 53c59f26
      Johannes Weiner authored
      Provide a function that does not just delete an entry at a given index,
      but also allows passing in an expected item.  Delete only if that item
      is still located at the specified index.
      
      This is handy when lockless tree traversals want to delete entries as
      well, because they don't have to do a second, locked lookup to verify
      the slot has not changed under them before deleting the entry.
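
      The interface, plus a usage sketch of the delete-if-unchanged pattern:

        void *radix_tree_delete_item(struct radix_tree_root *root,
                                     unsigned long index, void *item);

        /* a lockless traversal saw `expected` at `index`; delete it only if
         * the slot still holds that exact item: */
        spin_lock_irq(&mapping->tree_lock);
        radix_tree_delete_item(&mapping->page_tree, index, expected);
        spin_unlock_irq(&mapping->tree_lock);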
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: vmstat: fix UP zone state accounting · 6a3ed212
      Johannes Weiner authored
      Summary:
      
      The VM maintains cached filesystem pages on two types of lists.  One
      list holds the pages recently faulted into the cache, the other list
      holds pages that have been referenced repeatedly on that first list.
      The idea is to prefer reclaiming young pages over those that have shown
      to benefit from caching in the past.  We call the recently used list
      "inactive list" and the frequently used list "active list".
      
      Currently, the VM aims for a 1:1 ratio between the lists, which is the
      "perfect" trade-off between the ability to *protect* frequently used
      pages and the ability to *detect* frequently used pages.  This means
      that working set changes bigger than half of cache memory go undetected
      and thrash indefinitely, whereas working sets bigger than half of cache
      memory are unprotected against used-once streams that don't even need
      caching.
      
      This happens on file servers and media streaming servers, where the
      popular files and file sections change over time.  Even though the
      individual files might be smaller than half of memory, concurrent access
      to many of them may still result in their inter-reference distance being
      greater than half of memory.  It's also been reported as a problem on
      database workloads that switch back and forth between tables that are
      bigger than half of memory.  In these cases the VM never recognizes the
      new working set and will for the remainder of the workload thrash disk
      data which could easily live in memory.
      
      Historically, every reclaim scan of the inactive list also took a
      smaller number of pages from the tail of the active list and moved them
      to the head of the inactive list.  This model gave established working
      sets more gracetime in the face of temporary use-once streams, but
      ultimately was not significantly better than a FIFO policy and still
      thrashed cache based on eviction speed, rather than actual demand for
      cache.
      
      This series solves the problem by maintaining a history of pages evicted
      from the inactive list, enabling the VM to detect frequently used pages
      regardless of inactive list size and facilitate working set transitions.
      
      Tests:
      
      The reported database workload is easily demonstrated on an 8G machine
      with two 6G filesets.  This fio workload operates on one set first,
      then switches to the other.  The VM should obviously always cache the
      set that the workload is currently using.
      
      This test is based on a problem encountered by Citus Data customers:
        http://citusdata.com/blog/72-linux-memory-manager-and-your-big-data
      
      unpatched:
        db1: READ: io=98304MB, aggrb=885559KB/s, minb=885559KB/s, maxb=885559KB/s, mint= 113672msec, maxt= 113672msec
        db2: READ: io=98304MB, aggrb= 66169KB/s, minb= 66169KB/s, maxb= 66169KB/s, mint=1521302msec, maxt=1521302msec
        sdb: ios=835750/4, merge=2/1, ticks=4659739/60016, in_queue=4719203, util=98.92%
      
        real    27m15.541s
        user    0m19.059s
        sys     0m51.459s
      
      patched:
        db1: READ: io=98304MB, aggrb=877783KB/s, minb=877783KB/s, maxb=877783KB/s, mint=114679msec, maxt=114679msec
        db2: READ: io=98304MB, aggrb=397449KB/s, minb=397449KB/s, maxb=397449KB/s, mint=253273msec, maxt=253273msec
        sdb: ios=170587/4, merge=2/1, ticks=954910/61123, in_queue=1015923, util=90.40%
      
        real    6m8.630s
        user    0m14.714s
        sys     0m31.233s
      
      As can be seen, the unpatched kernel simply never adapts to the
      workingset change and db2 is stuck indefinitely at secondary storage
      speed.  The patched kernel needs 2-3 iterations over db2 before it
      replaces db1 and reaches full memory speed.  Given the unbounded
      negative effect of the existing VM behavior, these patches should be
      considered correctness fixes rather than performance optimizations.
      
      Another test resembles a fileserver or streaming server workload, where
      data in excess of memory size is accessed at different frequencies.
      There is very hot data accessed at a high frequency.  Machines should be
      fitted so that the hot set of such a workload can be fully cached or all
      bets are off.  Then there is a very big (compared to available memory)
      set of data that is used-once or at a very low frequency; this is what
      drives the inactive list and does not really benefit from caching.
      Lastly, there is a big set of warm data in between that is accessed at
      medium frequencies and benefits from caching the pages between the first
      and last streamer of each burst.
      
      unpatched:
         hot: READ: io=128000MB, aggrb=160693KB/s, minb=160693KB/s, maxb=160693KB/s, mint=815665msec, maxt=815665msec
        warm: READ: io= 81920MB, aggrb=109853KB/s, minb= 27463KB/s, maxb= 29244KB/s, mint=717110msec, maxt=763617msec
        cold: READ: io= 30720MB, aggrb= 35245KB/s, minb= 35245KB/s, maxb= 35245KB/s, mint=892530msec, maxt=892530msec
         sdb: ios=797960/4, merge=11763/1, ticks=4307910/796, in_queue=4308380, util=100.00%
      
      patched:
         hot: READ: io=128000MB, aggrb=160678KB/s, minb=160678KB/s, maxb=160678KB/s, mint=815740msec, maxt=815740msec
        warm: READ: io= 81920MB, aggrb=147747KB/s, minb= 36936KB/s, maxb= 40960KB/s, mint=512000msec, maxt=567767msec
        cold: READ: io= 30720MB, aggrb= 40960KB/s, minb= 40960KB/s, maxb= 40960KB/s, mint=768000msec, maxt=768000msec
         sdb: ios=596514/4, merge=9341/1, ticks=2395362/997, in_queue=2396484, util=79.18%
      
      In both kernels, the hot set is propagated to the active list and then
      served from cache.
      
      In both kernels, the beginning of the warm set is propagated to the
      active list as well, but in the unpatched case the active list
      eventually takes up half of memory and no new pages from the warm set
      get activated, despite repeated access, and despite most of the active
      list soon being stale.  The patched kernel on the other hand detects the
      thrashing and manages to keep this cache window rolling through the data
      set.  This frees up enough IO bandwidth that the cold set is served at
      full speed as well and disk utilization even drops by 20%.
      
      For reference, this same test was performed with the traditional
      demotion mechanism, where deactivation is coupled to inactive list
      reclaim.  However, this had the same outcome as the unpatched kernel:
      while the warm set does indeed get activated continuously, it is forced
      out of the active list by inactive list pressure, which is dictated
      primarily by the unrelated cold set.  The warm set is evicted before
      subsequent streamers can benefit from it, even though there would be
      enough space available to cache the pages of interest.
      
      Costs:
      
      Page reclaim used to shrink the radix trees but now the tree nodes are
      reused for shadow entries, where the cost depends heavily on the page
      cache access patterns.  However, with workloads that maintain spatial or
      temporal locality, the shadow entries are either refaulted quickly or
      reclaimed along with the inode object itself.  Workloads that will
      experience a memory cost increase are those that don't really benefit
      from caching in the first place.
      
      A more predictable alternative would be a fixed-cost separate pool of
      shadow entries, but this would incur relatively higher memory cost for
      well-behaved workloads for the benefit of corner cases.  It would also
      make the shadow entry lookup more costly compared to storing them
      directly in the cache structure.
      
      Future:
      
      To simplify the merging process, this patch set is implementing thrash
      detection on a global per-zone level only for now, but the design is
      such that it can be extended to memory cgroups as well.  All we need to
      do is store the unique cgroup ID along the node and zone identifier
      inside the eviction cookie to identify the lruvec.
      
      Right now we have a fixed ratio (50:50) between inactive and active list
      but we already have complaints about working sets exceeding half of
      memory being pushed out of the cache by simple streaming in the
      background.  Ultimately, we want to adjust this ratio and allow for a
      much smaller inactive list.  These patches are an essential step in this
      direction because they decouple the VM's ability to detect working set
      changes from the inactive list size.  This would allow us to base the
      inactive list size on the combined readahead window size, for example,
      and potentially protect a much bigger working set.
      
      It's also a big step towards activating pages with a reuse distance
      larger than memory, as long as they are the most frequently used pages
      in the workload.  This will require knowing more about the access
      frequency of active pages than what we measure right now, so it's also
      deferred in this series.
      
      Another possibility of having thrashing information would be to revisit
      the idea of local reclaim in the form of zero-config memory control
      groups.  Instead of having allocating tasks go straight to global
      reclaim, they could try to reclaim the pages in the memcg they are part
      of first as long as the group is not thrashing.  This would allow a user
      to drop e.g.  a back-up job in an otherwise unconfigured memcg and it
      would only inflate (and possibly do global reclaim) until it has enough
      memory to do proper readahead.  But once it reaches that point and stops
      thrashing it would just recycle its own used-once pages without kicking
      out the cache of any other tasks in the system more than necessary.
      
      This patch (of 10):
      
      Fengguang Wu's build testing spotted problems with inc_zone_state() and
      dec_zone_state() on UP configurations in out-of-tree patches.
      
      inc_zone_state() is declared but not defined, dec_zone_state() is
      missing entirely.
      
      Just like with *_zone_page_state(), they can be defined like their
      preemption-unsafe counterparts on UP.
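
      One plausible shape of the UP definitions, mirroring the
      *_zone_page_state() pattern; this is an assumption, not the verbatim
      patch:

        /* On UP there is no preemption-safety distinction, so the regular
         * names can simply alias the __-prefixed variants: */
        #define inc_zone_state __inc_zone_state
        #define dec_zone_state __dec_zone_state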
      
      [akpm@linux-foundation.org: make it build]
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, hugetlb: fix race in region tracking · 7b24d861
      Davidlohr Bueso authored
      There is a race condition when we map the same file in different
      processes.  Region tracking is protected by mmap_sem and
      hugetlb_instantiation_mutex.  When we do mmap, we don't grab the
      hugetlb_instantiation_mutex, but only mmap_sem (exclusively).  This
      doesn't prevent other tasks from modifying the region structure, so it
      can be modified by two processes concurrently.

      To solve this, introduce a spinlock to resv_map and make the region
      manipulation functions grab it before they do the actual work.
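
      A sketch of the change; the resv_map fields match the description, the
      usage is illustrative:

        struct resv_map {
                struct kref refs;
                spinlock_t lock;        /* added: protects the regions list */
                struct list_head regions;
        };

        spin_lock(&resv->lock);
        /* ... walk and modify resv->regions ... */
        spin_unlock(&resv->lock);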
      
      [davidlohr@hp.com: updated changelog]
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Suggested-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: David Gibson <david@gibson.dropbear.id.au>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, hugetlb: unify region structure handling · 9119a41e
      Joonsoo Kim authored
      Currently, to track reserved and allocated regions, we use two
      different schemes, depending on the mapping.  For MAP_SHARED, we use the
      address_mapping's private_list, while for MAP_PRIVATE we use a resv_map.

      Now, we are preparing to change the coarse-grained lock which protects
      the region structure to a fine-grained lock, and this difference hinders
      that.  So, before changing it, unify region structure handling by
      consistently using a resv_map regardless of the kind of mapping.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: optimize put_mems_allowed() usage · d26914d1
      Mel Gorman authored
      Since put_mems_allowed() is strictly optional (it is a seqcount retry),
      we don't need to evaluate the function if the allocation was in fact
      successful, saving an smp_rmb and some loads and comparisons on some
      relatively fast paths.

      Since the naming get/put_mems_allowed() does suggest a mandatory
      pairing, rename the interface, as suggested by Mel, to resemble the
      seqcount interface.
      
      This gives us: read_mems_allowed_begin() and read_mems_allowed_retry(),
      where it is important to note that the return value of the latter call
      is inverted from its previous incarnation.
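
      Typical use of the renamed interface; note that the retry side now
      returns true when a retry is needed (sketch):

        unsigned int cpuset_mems_cookie;
        struct page *page;

        do {
                cpuset_mems_cookie = read_mems_allowed_begin();
                page = __alloc_pages_nodemask(gfp, order, zonelist, nodemask);
        } while (!page && read_mems_allowed_retry(cpuset_mems_cookie));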
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • quota: provide function to grab quota structure reference · 9f985cb6
      Jan Kara authored
      Provide dqgrab() function to get quota structure reference when we are
      sure it already has at least one active reference.  Make use of this
      function inside quota code.
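
      The helper's essence, close to the quotaops.h addition but hedged:

        static inline struct dquot *dqgrab(struct dquot *dquot)
        {
                /* Caller must already hold an active reference; we only bump
                 * the count, no lookup or initialization is done. */
                atomic_inc(&dquot->dq_count);
                return dquot;
        }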
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Mark Fasheh <mfasheh@suse.de>
      Reviewed-by: Srinivas Eeda <srinivas.eeda@oracle.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fanotify: convert access_mutex to spinlock · 9573f793
      Jan Kara authored
      access_mutex is used only to guard operations on access_list.  There's
      no need for sleeping within this lock so just make a spinlock out of it.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kmemleak: remove redundant code · 5f3bf19a
      Li Zefan authored
      Remove kmemleak_padding() and kmemleak_release().
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • bdi: avoid oops on device removal · 5acda9d1
      Jan Kara authored
      After commit 839a8e86 ("writeback: replace custom worker pool
      implementation with unbound workqueue"), when a device is removed while
      we are writing to it, we crash in bdi_writeback_workfn() ->
      set_worker_desc() because bdi->dev is NULL.
      
      This can happen because even though bdi_unregister() cancels all pending
      flushing work, nothing really prevents new ones from being queued from
      balance_dirty_pages() or other places.
      
      Fix the problem by clearing the BDI_registered bit in bdi_unregister()
      and checking it before scheduling any flushing work.
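
      A sketch of the guard; the flag and queueing details follow the
      changelog, so treat the specifics as assumptions:

        static void bdi_wakeup_thread(struct backing_dev_info *bdi)
        {
                spin_lock_bh(&bdi->wb_lock);
                /* BDI_registered is cleared in bdi_unregister(); never queue
                 * new flushing work after that point. */
                if (test_bit(BDI_registered, &bdi->state))
                        mod_delayed_work(bdi_wq, &bdi->wb.dwork, 0);
                spin_unlock_bh(&bdi->wb_lock);
        }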
      
      Fixes: 839a8e86 ("writeback: replace custom worker pool implementation
      with unbound workqueue")
      Reviewed-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Derek Basehore <dbasehore@chromium.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 02 Apr 2014: 7 commits
  4. 01 Apr 2014: 1 commit