1. 04 April 2014, 40 commits
    • J
      lib: radix_tree: tree node interface · 139e5616
      Johannes Weiner authored
      Make struct radix_tree_node part of the public interface and provide API
      functions to create, look up, and delete whole nodes.  Refactor the
      existing insert, look up, delete functions on top of these new node
      primitives.
      
      This will allow the VM to track and garbage collect page cache radix
      tree nodes.
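
      A sketch of the resulting shape (prototypes as they appear in
      3.15-era lib/radix-tree.c; the wrapper body below is simplified and
      illustrative, not the verbatim patch):

        int __radix_tree_create(struct radix_tree_root *root, unsigned long index,
                                struct radix_tree_node **nodep, void ***slotp);
        void *__radix_tree_lookup(struct radix_tree_root *root, unsigned long index,
                                  struct radix_tree_node **nodep, void ***slotp);
        bool __radix_tree_delete_node(struct radix_tree_root *root,
                                      struct radix_tree_node *node);

        /* The classic entry points become thin wrappers, e.g.: */
        int radix_tree_insert(struct radix_tree_root *root,
                              unsigned long index, void *item)
        {
                struct radix_tree_node *node;
                void **slot;
                int error;

                error = __radix_tree_create(root, index, &node, &slot);
                if (error)
                        return error;
                if (*slot != NULL)
                        return -EEXIST; /* the insertion-failure errno fix below */
                radix_tree_replace_slot(slot, item);
                if (node)
                        node->count++;
                return 0;
        }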
      
      [sasha.levin@oracle.com: return correct error code on insertion failure]
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      139e5616
    • J
      mm: thrash detection-based file cache sizing · a528910e
      Johannes Weiner authored
      The VM maintains cached filesystem pages on two types of lists.  One
      list holds the pages recently faulted into the cache, the other list
      holds pages that have been referenced repeatedly on that first list.
      The idea is to prefer reclaiming young pages over those that have shown
      to benefit from caching in the past.  We call the recently used list
      "inactive list" and the frequently used list "active list".

      Currently, the VM aims for a 1:1 ratio between the lists, which is the
      "perfect" trade-off between the ability to *protect* frequently used
      pages and the ability to *detect* frequently used pages.  This means
      that working set changes bigger than half of cache memory go undetected
      and thrash indefinitely, whereas working sets bigger than half of cache
      memory are unprotected against used-once streams that don't even need
      caching.

      Historically, every reclaim scan of the inactive list also took a
      smaller number of pages from the tail of the active list and moved them
      to the head of the inactive list.  This model gave established working
      sets more gracetime in the face of temporary use-once streams, but
      ultimately was not significantly better than a FIFO policy and still
      thrashed cache based on eviction speed, rather than actual demand for
      cache.
      
      This patch solves one half of the problem by decoupling the ability to
      detect working set changes from the inactive list size.  By maintaining
      a history of recently evicted file pages it can detect frequently used
      pages with an arbitrarily small inactive list size, and subsequently
      apply pressure on the active list based on actual demand for cache, not
      just overall eviction speed.
      
      Every zone maintains a counter that tracks inactive list aging speed.
      When a page is evicted, a snapshot of this counter is stored in the
      now-empty page cache radix tree slot.  On refault, the minimum access
      distance of the page can be assessed, to evaluate whether the page
      should be part of the active list or not.
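
      A minimal sketch of that mechanism (pack_shadow()/unpack_shadow() are
      stand-in names; the real implementation arrives with mm/workingset.c
      later in this series):

        /* Eviction: snapshot the zone's aging counter into an exceptional
         * (non-page) radix tree entry, tagged with the zone's position. */
        static void *pack_shadow(struct zone *zone, unsigned long eviction)
        {
                eviction = (eviction << NODES_SHIFT) | zone_to_nid(zone);
                eviction = (eviction << ZONES_SHIFT) | zone_idx(zone);
                return (void *)((eviction << RADIX_TREE_EXCEPTIONAL_SHIFT) |
                                RADIX_TREE_EXCEPTIONAL_ENTRY);
        }

        /* Refault: the aging the zone saw between eviction and refault
         * approximates the page's minimum access distance.  Reused within
         * one inactive list's worth of aging?  Then activate it. */
        static bool refault_was_active(struct zone *zone, void *shadow)
        {
                unsigned long distance = atomic_long_read(&zone->inactive_age) -
                                         unpack_shadow(shadow);

                return distance <= zone_page_state(zone, NR_ACTIVE_FILE);
        }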
      
      This fixes the VM's blindness towards working set changes in excess of
      the inactive list.  And it's the foundation for further improving the
      protection ability and reducing the minimum inactive list size from the
      current 50%.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Reviewed-by: Bob Liu <bob.liu@oracle.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a528910e
    • J
      mm + fs: store shadow entries in page cache · 91b0abe3
      Johannes Weiner authored
      Reclaim will be leaving shadow entries in the page cache radix tree upon
      evicting the real page.  As those pages are found from the LRU, an
      iput() can lead to the inode being freed concurrently.  At this point,
      reclaim must no longer install shadow pages because the inode freeing
      code needs to ensure the page tree is really empty.
      
      Add an address_space flag, AS_EXITING, that the inode freeing code sets
      under the tree lock before doing the final truncate.  Reclaim will check
      for this flag before installing shadow pages.
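
      Both sides of the handshake, sketched (mapping_set_exiting() and
      mapping_exiting() are the AS_EXITING accessors this patch adds;
      shadow_entry_for() is a stand-in for the eviction bookkeeping):

        /* Inode freeing side: flag the mapping under the tree lock, then
         * do the final truncate knowing no shadow entries can sneak in. */
        spin_lock_irq(&mapping->tree_lock);
        mapping_set_exiting(mapping);           /* sets AS_EXITING */
        spin_unlock_irq(&mapping->tree_lock);
        truncate_inode_pages(mapping, 0);

        /* Reclaim side: never install a shadow entry into a dying tree. */
        void *shadow = NULL;
        if (!mapping_exiting(mapping))
                shadow = shadow_entry_for(page);
        __delete_from_page_cache(page, shadow);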
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      91b0abe3
    • J
      mm + fs: prepare for non-page entries in page cache radix trees · 0cd6144a
      Johannes Weiner authored
      shmem mappings already contain exceptional entries where swap slot
      information is remembered.
      
      To be able to store eviction information for regular page cache, prepare
      every site dealing with the radix trees directly to handle entries other
      than pages.
      
      The common lookup functions will filter out non-page entries and return
      NULL for page cache holes, just as before.  But provide a raw version of
      the API which returns non-page entries as well, and switch shmem over to
      use it.
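
      The split between filtered and raw lookups, sketched (find_get_entry()
      is the raw variant; error handling elided):

        /* Filtered: NULL for holes *and* for exceptional entries, as before. */
        struct page *page = find_get_page(mapping, index);

        /* Raw: hands non-page entries to callers that understand them. */
        struct page *entry = find_get_entry(mapping, index);
        if (radix_tree_exceptional_entry(entry)) {
                /* shmem: the slot holds a swap cookie, not a page */
                swp_entry_t swap = radix_to_swp_entry(entry);
                /* ... handle the swap entry ... */
        }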
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0cd6144a
    • J
      mm: filemap: move radix tree hole searching here · e7b563bb
      Johannes Weiner authored
      The radix tree hole searching code is only used for page cache, for
      example by the readahead code trying to get a picture of the area
      surrounding a fault.
      
      It sufficed to rely on the radix tree definition of holes, which is
      "empty tree slot".  But this is about to change, as shadow page
      descriptors will be stored in the page cache after the actual pages get
      evicted from memory.
      
      Move the functions over to mm/filemap.c and make them native page cache
      operations, where they can later be adapted to handle the new definition
      of "page cache hole".
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e7b563bb
    • J
      mm: shmem: save one radix tree lookup when truncating swapped pages · 6dbaf22c
      Johannes Weiner authored
      Page cache radix tree slots are usually stabilized by the page lock, but
      shmem's swap cookies have no such thing.  Because the overall truncation
      loop is lockless, the swap entry is currently confirmed by a tree lookup
      and then deleted by another tree lookup under the same tree lock region.
      
      Use radix_tree_delete_item() instead, which does the verification and
      deletion with only one lookup.  This also allows removing the
      delete-only special case from shmem_radix_tree_replace().
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6dbaf22c
    • J
      lib: radix-tree: add radix_tree_delete_item() · 53c59f26
      Johannes Weiner authored
      Provide a function that does not just delete an entry at a given index,
      but also allows passing in an expected item.  Delete only if that item
      is still located at the specified index.
      
      This is handy when lockless tree traversals want to delete entries as
      well, because they don't have to do a second, locked lookup to verify
      the slot has not changed under them before deleting the entry.
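
      The new primitive and the pattern it enables, sketched in page cache
      terms (bookkeeping simplified):

        void *radix_tree_delete_item(struct radix_tree_root *root,
                                     unsigned long index, void *item);

        /* An earlier unlocked lookup found 'expected' at 'index'; delete
         * it only if it is still there, in a single locked lookup.  The
         * return value is the deleted item, or NULL on a mismatch. */
        spin_lock_irq(&mapping->tree_lock);
        if (radix_tree_delete_item(&mapping->page_tree, index, expected))
                mapping->nrpages--;     /* it was still ours */
        spin_unlock_irq(&mapping->tree_lock);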
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      53c59f26
    • J
      fs: cachefiles: use add_to_page_cache_lru() · 55881bc7
      Johannes Weiner authored
      This code used to have its own lru cache pagevec up until a0b8cab3 ("mm:
      remove lru parameter from __pagevec_lru_add and remove parts of pagevec
      API").  Now it's just add_to_page_cache() followed by lru_cache_add(),
      might as well use add_to_page_cache_lru() directly.
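
      The shape of the change, sketched as a diff (identifiers adapted from
      fs/cachefiles/rdwr.c, not the verbatim hunk):

        -       ret = add_to_page_cache(newpage, bmapping,
        -                               netpage->index, cachefiles_gfp);
        -       if (ret == 0)
        -               lru_cache_add_file(newpage);
        +       ret = add_to_page_cache_lru(newpage, bmapping,
        +                                   netpage->index, cachefiles_gfp);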
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      55881bc7
    • J
      mm: vmstat: fix UP zone state accounting · 6a3ed212
      Johannes Weiner authored
      Summary:
      
      The VM maintains cached filesystem pages on two types of lists.  One
      list holds the pages recently faulted into the cache, the other list
      holds pages that have been referenced repeatedly on that first list.
      The idea is to prefer reclaiming young pages over those that have shown
      to benefit from caching in the past.  We call the recently used list
      "inactive list" and the frequently used list "active list".
      
      Currently, the VM aims for a 1:1 ratio between the lists, which is the
      "perfect" trade-off between the ability to *protect* frequently used
      pages and the ability to *detect* frequently used pages.  This means
      that working set changes bigger than half of cache memory go undetected
      and thrash indefinitely, whereas working sets bigger than half of cache
      memory are unprotected against used-once streams that don't even need
      caching.
      
      This happens on file servers and media streaming servers, where the
      popular files and file sections change over time.  Even though the
      individual files might be smaller than half of memory, concurrent access
      to many of them may still result in their inter-reference distance being
      greater than half of memory.  It's also been reported as a problem on
      database workloads that switch back and forth between tables that are
      bigger than half of memory.  In these cases the VM never recognizes the
      new working set and will for the remainder of the workload thrash disk
      data which could easily live in memory.
      
      Historically, every reclaim scan of the inactive list also took a
      smaller number of pages from the tail of the active list and moved them
      to the head of the inactive list.  This model gave established working
      sets more gracetime in the face of temporary use-once streams, but
      ultimately was not significantly better than a FIFO policy and still
      thrashed cache based on eviction speed, rather than actual demand for
      cache.
      
      This series solves the problem by maintaining a history of pages evicted
      from the inactive list, enabling the VM to detect frequently used pages
      regardless of inactive list size and facilitate working set transitions.
      
      Tests:
      
      The reported database workload is easily demonstrated on an 8G machine
      with two 6G filesets.  This fio workload operates on one set first,
      then switches to the other.  The VM should obviously always cache the
      set that the workload is currently using.
      
      This test is based on a problem encountered by Citus Data customers:
        http://citusdata.com/blog/72-linux-memory-manager-and-your-big-data
      
      unpatched:
        db1: READ: io=98304MB, aggrb=885559KB/s, minb=885559KB/s, maxb=885559KB/s, mint= 113672msec, maxt= 113672msec
        db2: READ: io=98304MB, aggrb= 66169KB/s, minb= 66169KB/s, maxb= 66169KB/s, mint=1521302msec, maxt=1521302msec
        sdb: ios=835750/4, merge=2/1, ticks=4659739/60016, in_queue=4719203, util=98.92%
      
        real    27m15.541s
        user    0m19.059s
        sys     0m51.459s
      
      patched:
        db1: READ: io=98304MB, aggrb=877783KB/s, minb=877783KB/s, maxb=877783KB/s, mint=114679msec, maxt=114679msec
        db2: READ: io=98304MB, aggrb=397449KB/s, minb=397449KB/s, maxb=397449KB/s, mint=253273msec, maxt=253273msec
        sdb: ios=170587/4, merge=2/1, ticks=954910/61123, in_queue=1015923, util=90.40%
      
        real    6m8.630s
        user    0m14.714s
        sys     0m31.233s
      
      As can be seen, the unpatched kernel simply never adapts to the
      workingset change and db2 is stuck indefinitely with secondary storage
      speed.  The patched kernel needs 2-3 iterations over db2 before it
      replaces db1 and reaches full memory speed.  Given the unbounded
      negative effect of the existing VM behavior, these patches should be
      considered correctness fixes rather than performance optimizations.
      
      Another test resembles a fileserver or streaming server workload, where
      data in excess of memory size is accessed at different frequencies.
      There is very hot data accessed at a high frequency.  Machines should be
      fitted so that the hot set of such a workload can be fully cached or all
      bets are off.  Then there is a very big (compared to available memory)
      set of data that is used-once or at a very low frequency; this is what
      drives the inactive list and does not really benefit from caching.
      Lastly, there is a big set of warm data in between that is accessed at
      medium frequencies and benefits from caching the pages between the first
      and last streamer of each burst.
      
      unpatched:
         hot: READ: io=128000MB, aggrb=160693KB/s, minb=160693KB/s, maxb=160693KB/s, mint=815665msec, maxt=815665msec
        warm: READ: io= 81920MB, aggrb=109853KB/s, minb= 27463KB/s, maxb= 29244KB/s, mint=717110msec, maxt=763617msec
        cold: READ: io= 30720MB, aggrb= 35245KB/s, minb= 35245KB/s, maxb= 35245KB/s, mint=892530msec, maxt=892530msec
         sdb: ios=797960/4, merge=11763/1, ticks=4307910/796, in_queue=4308380, util=100.00%
      
      patched:
         hot: READ: io=128000MB, aggrb=160678KB/s, minb=160678KB/s, maxb=160678KB/s, mint=815740msec, maxt=815740msec
        warm: READ: io= 81920MB, aggrb=147747KB/s, minb= 36936KB/s, maxb= 40960KB/s, mint=512000msec, maxt=567767msec
        cold: READ: io= 30720MB, aggrb= 40960KB/s, minb= 40960KB/s, maxb= 40960KB/s, mint=768000msec, maxt=768000msec
         sdb: ios=596514/4, merge=9341/1, ticks=2395362/997, in_queue=2396484, util=79.18%
      
      In both kernels, the hot set is propagated to the active list and then
      served from cache.
      
      In both kernels, the beginning of the warm set is propagated to the
      active list as well, but in the unpatched case the active list
      eventually takes up half of memory and no new pages from the warm set
      get activated, despite repeated access, and despite most of the active
      list soon being stale.  The patched kernel on the other hand detects the
      thrashing and manages to keep this cache window rolling through the data
      set.  This frees up enough IO bandwidth that the cold set is served at
      full speed as well and disk utilization even drops by 20%.
      
      For reference, this same test was performed with the traditional
      demotion mechanism, where deactivation is coupled to inactive list
      reclaim.  However, this had the same outcome as the unpatched kernel:
      while the warm set does indeed get activated continuously, it is forced
      out of the active list by inactive list pressure, which is dictated
      primarily by the unrelated cold set.  The warm set is evicted before
      subsequent streamers can benefit from it, even though there would be
      enough space available to cache the pages of interest.
      
      Costs:
      
      Page reclaim used to shrink the radix trees but now the tree nodes are
      reused for shadow entries, where the cost depends heavily on the page
      cache access patterns.  However, with workloads that maintain spatial or
      temporal locality, the shadow entries are either refaulted quickly or
      reclaimed along with the inode object itself.  Workloads that will
      experience a memory cost increase are those that don't really benefit
      from caching in the first place.
      
      A more predictable alternative would be a fixed-cost separate pool of
      shadow entries, but this would incur relatively higher memory cost for
      well-behaved workloads, for the benefit of corner cases.  It would also
      make the shadow entry lookup more costly compared to storing them
      directly in the cache structure.
      
      Future:
      
      To simplify the merging process, this patch set implements thrash
      detection on a global per-zone level only for now, but the design is
      such that it can be extended to memory cgroups as well.  All we need to
      do is store the unique cgroup ID along with the node and zone
      identifier inside the eviction cookie to identify the lruvec.
      
      Right now we have a fixed ratio (50:50) between inactive and active list
      but we already have complaints about working sets exceeding half of
      memory being pushed out of the cache by simple streaming in the
      background.  Ultimately, we want to adjust this ratio and allow for a
      much smaller inactive list.  These patches are an essential step in this
      direction because they decouple the VM's ability to detect working set
      changes from the inactive list size.  This would allow us to base the
      inactive list size on the combined readahead window size for example and
      potentially protect a much bigger working set.
      
      It's also a big step towards activating pages with a reuse distance
      larger than memory, as long as they are the most frequently used pages
      in the workload.  This will require knowing more about the access
      frequency of active pages than what we measure right now, so it's also
      deferred in this series.
      
      Another possibility opened up by having thrashing information is to revisit
      the idea of local reclaim in the form of zero-config memory control
      groups.  Instead of having allocating tasks go straight to global
      reclaim, they could try to reclaim the pages in the memcg they are part
      of first as long as the group is not thrashing.  This would allow a user
      to drop e.g.  a back-up job in an otherwise unconfigured memcg and it
      would only inflate (and possibly do global reclaim) until it has enough
      memory to do proper readahead.  But once it reaches that point and stops
      thrashing it would just recycle its own used-once pages without kicking
      out the cache of any other tasks in the system more than necessary.
      
      This patch (of 10):
      
      Fengguang Wu's build testing spotted problems with inc_zone_state() and
      dec_zone_state() on UP configurations in out-of-tree patches.
      
      inc_zone_state() is declared but not defined, dec_zone_state() is
      missing entirely.
      
      Just like with *_zone_page_state(), they can be defined like their
      preemption-unsafe counterparts on UP.
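
      Presumably the fix mirrors the existing UP aliases in
      include/linux/vmstat.h (sketch; the exact hunk may differ):

        #ifndef CONFIG_SMP
        /* No preemption hazard on UP, so the "safe" variants can simply
         * alias their double-underscore counterparts, just like
         * inc/dec_zone_page_state() already do. */
        #define inc_zone_state __inc_zone_state
        #define dec_zone_state __dec_zone_state
        #endif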
      
      [akpm@linux-foundation.org: make it build]
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Metin Doslu <metin@citusdata.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Ozgun Erdogan <ozgun@citusdata.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Ryan Mallon <rmallon@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6a3ed212
    • V
      mm: vmscan: shrink_slab: rename max_pass -> freeable · d5bc5fd3
      Vladimir Davydov authored
      The name `max_pass' is misleading: this variable actually holds the
      estimated number of freeable objects, not the maximal number of objects
      we can scan in this pass, which can be twice that.  Rename it to
      reflect its actual meaning.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d5bc5fd3
    • D
      mm, hugetlb: improve page-fault scalability · 8382d914
      Davidlohr Bueso authored
      The kernel can currently only handle a single hugetlb page fault at a
      time.  This is due to a single mutex that serializes the entire path.
      This lock protects from spurious OOM errors under conditions of low
      availability of free hugepages.  This problem is specific to hugepages,
      because it is normal to want to use every single hugepage in the system
      - with normal pages we simply assume there will always be a few spare
      pages which can be used temporarily until the race is resolved.
      
      Address this problem by using a table of mutexes, allowing a better
      chance of parallelization, where each hugepage is individually
      serialized.  The hash key is selected depending on the mapping type.
      For shared ones it consists of the address space and file offset being
      faulted; while for private ones the mm and virtual address are used.
      The size of the table is selected based on a compromise of collisions
      and memory footprint of a series of database workloads.
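
      A simplified sketch of the hashing scheme (the exact key layout and
      table sizing follow the patch only loosely):

        static struct mutex *htlb_fault_mutex_table;  /* num_fault_mutexes entries */

        static u32 fault_mutex_hash(struct address_space *mapping, pgoff_t idx,
                                    struct mm_struct *mm, unsigned long address,
                                    struct hstate *h)
        {
                unsigned long key[2];

                if (mapping) {          /* shared: address space + file offset */
                        key[0] = (unsigned long)mapping;
                        key[1] = idx;
                } else {                /* private: mm + virtual address */
                        key[0] = (unsigned long)mm;
                        key[1] = address >> huge_page_shift(h);
                }
                return jhash2((u32 *)&key, sizeof(key) / sizeof(u32), 0) %
                       num_fault_mutexes;
        }

        /* fault path: mutex_lock(&htlb_fault_mutex_table[hash]); ... */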
      
      Large database workloads that make heavy use of hugepages can be
      particularly exposed to this issue, causing start-up times to be
      painfully slow.  This patch reduces the startup time of a 10 Gb Oracle
      DB (with ~5000 faults) from 37.5 secs to 25.7 secs.  Larger workloads
      will naturally benefit even more.
      
      NOTE:
      The only downside to this patch, detected by Joonsoo Kim, is that a
      small race is possible in private mappings: A child process (with its
      own mm, after cow) can instantiate a page that is already being handled
      by the parent in a cow fault.  When low on pages, this can trigger
      spurious OOMs.  I have not been able to think of an efficient way of
      handling this...  but do we really care about such a tiny window?  We
      already maintain another theoretical race with normal pages.  If not,
      one possible way is to maintain a single hash for private mappings --
      any workloads that *really* suffer from this scaling problem should
      already use shared mappings.
      
      [akpm@linux-foundation.org: remove stray + characters, go BUG if hugetlb_init() kmalloc fails]
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8382d914
    • J
      mm, hugetlb: use vma_resv_map() map types · 4e35f483
      Joonsoo Kim authored
      Until now, we have gotten a resv_map in two different ways depending on
      the mapping type.  This makes the code dirty and unreadable.  Unify it.
      
      [davidlohr@hp.com: code cleanups]
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4e35f483
    • J
      mm, hugetlb: remove resv_map_put · f031dd27
      Joonsoo Kim authored
      This is a preparation patch to unify the use of vma_resv_map()
      regardless of the map type.  This patch prepares it by removing
      resv_map_put(), which only works for HPAGE_RESV_OWNER's resv_map, not
      for all resv_maps.
      
      [davidlohr@hp.com: update changelog]
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f031dd27
    • D
      mm, hugetlb: fix race in region tracking · 7b24d861
      Davidlohr Bueso authored
      There is a race condition if we map the same file in different processes.
      Region tracking is protected by mmap_sem and hugetlb_instantiation_mutex.
      When we do mmap, we don't grab a hugetlb_instantiation_mutex, but only
      mmap_sem (exclusively).  This doesn't prevent other tasks from modifying
      the region structure, so it can be modified by two processes
      concurrently.
      
      To solve this, introduce a spinlock to resv_map and make the region
      manipulation functions grab it before they do actual work.
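
      Sketch of the structural change (do_region_add_locked() stands in for
      the existing list-walking body):

        struct resv_map {
                struct kref refs;
                spinlock_t lock;        /* new: protects the regions list */
                struct list_head regions;
        };

        static long region_add(struct resv_map *resv, long f, long t)
        {
                long ret;

                spin_lock(&resv->lock);
                ret = do_region_add_locked(resv, f, t); /* walk/merge regions */
                spin_unlock(&resv->lock);
                return ret;
        }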
      
      [davidlohr@hp.com: updated changelog]
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Suggested-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: David Gibson <david@gibson.dropbear.id.au>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7b24d861
    • J
      mm, hugetlb: improve, cleanup resv_map parameters · 1406ec9b
      Joonsoo Kim authored
      To change the protection method for region tracking to a fine-grained
      one, we pass the resv_map, instead of the list_head, to the region
      manipulation functions.
      
      This doesn't introduce any functional change; it merely prepares for
      the next step.
      
      [davidlohr@hp.com: update changelog]
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1406ec9b
    • J
      mm, hugetlb: unify region structure handling · 9119a41e
      Joonsoo Kim authored
      Currently, we use two different ways to track reserved and allocated
      regions, depending on the mapping.  For MAP_SHARED, we use the
      address_mapping's private_list, while for MAP_PRIVATE we use a
      resv_map.
      
      Now, we are preparing to change the coarse-grained lock which protects
      the region structure to a fine-grained lock, and this difference
      hinders it.  So, before changing it, unify region structure handling by
      consistently using a resv_map regardless of the kind of mapping.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9119a41e
    • M
      mm: optimize put_mems_allowed() usage · d26914d1
      Mel Gorman authored
      Since put_mems_allowed() is strictly optional (it's a seqcount retry),
      we don't need to evaluate the function if the allocation was in fact
      successful, saving an smp_rmb plus some loads and comparisons on some
      relatively fast paths.
      
      Since the naming of get/put_mems_allowed() suggests a mandatory
      pairing, rename the interface, as suggested by Mel, to resemble the
      seqcount interface.
      
      This gives us: read_mems_allowed_begin() and read_mems_allowed_retry(),
      where it is important to note that the return value of the latter call
      is inverted from its previous incarnation.
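
      A typical call site after the rename, sketched from the page
      allocator's cpuset retry loop:

        unsigned int cookie;
        struct page *page;

        do {
                cookie = read_mems_allowed_begin();
                page = __alloc_pages_nodemask(gfp, order, zonelist, nodemask);
                /* Only a failed allocation validates the cookie; a
                 * successful one never needs to retry. */
        } while (!page && read_mems_allowed_retry(cookie));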
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d26914d1
    • D
      mm, compaction: ignore pageblock skip when manually invoking compaction · 91ca9186
      David Rientjes authored
      The cached pageblock hint should be ignored when triggering compaction
      through /proc/sys/vm/compact_memory so all eligible memory is isolated.
      Manually invoking compaction is known to be expensive and is mainly
      used for debugging, so there's no need to skip pageblocks based on
      heuristics.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      91ca9186
    • V
      mm: vmscan: remove shrink_control arg from do_try_to_free_pages() · 3115cd91
      Vladimir Davydov authored
      There is no need to pass a shrink_control struct from
      try_to_free_pages() and friends down to do_try_to_free_pages() and then
      to shrink_zones(), because it is only used in shrink_zones(), and the
      only field initialized at the top level is gfp_mask, which is always
      equal to scan_control.gfp_mask.  So let's move shrink_control
      initialization to shrink_zones().
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Dave Chinner <dchinner@redhat.com>
      Cc: Glauber Costa <glommer@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3115cd91
    • V
      mm: vmscan: move call to shrink_slab() to shrink_zones() · 65ec02cb
      Vladimir Davydov authored
      This reduces the indentation level of do_try_to_free_pages() and
      removes an extra loop over all eligible zones counting the number of
      on-LRU pages.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Reviewed-by: Glauber Costa <glommer@gmail.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      65ec02cb
    • V
      mm: vmscan: respect NUMA policy mask when shrinking slab on direct reclaim · 99120b77
      Vladimir Davydov authored
      When direct reclaim is executed by a process bound to a set of NUMA
      nodes, we should scan only those nodes when possible, but currently we
      will scan kmem from all online nodes even if the kmem shrinker is NUMA
      aware.  This means binding a process to a particular NUMA node won't
      prevent it from shrinking inode/dentry caches from other nodes, which
      is not good.  Fix this.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Dave Chinner <dchinner@redhat.com>
      Cc: Glauber Costa <glommer@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      99120b77
    • B
      kernel/watchdog.c: touch_nmi_watchdog should only touch local cpu not every one · 62572e29
      Ben Zhang authored
      I ran into a scenario where while one cpu was stuck and should have
      panic'd because of the NMI watchdog, it didn't.  The reason was another
      cpu was spewing stack dumps on to the console.  Upon investigation, I
      noticed that when writing to the console and also when dumping the
      stack, the watchdog is touched.
      
      This causes all the cpus to reset their NMI watchdog flags and the
      'stuck' cpu just spins forever.
      
      This change causes the semantics of touch_nmi_watchdog to be changed
      slightly.  Previously, I accidentally changed the semantics and we
      noticed there was a codepath in which touch_nmi_watchdog could be
      called from a preemptible area.  That caused a BUG() to happen when
      CONFIG_DEBUG_PREEMPT was enabled.  I believe it was the acpi code.
      
      My attempt here re-introduces the change to have the
      touch_nmi_watchdog() code only touch the local cpu instead of all of the
      cpus.  But instead of using __get_cpu_var(), I use the
      __raw_get_cpu_var() version.
      
      This avoids the preemption problem.  However, my reasoning wasn't that
      I was trying to be lazy.  Instead, I rationalized it as: if preemption
      is enabled, then interrupts should be enabled too, and the NMI watchdog
      will have no reason to trigger.  So it won't matter if the wrong cpu is
      touched, because the percpu interrupt counters the NMI watchdog uses
      should still be incrementing.
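
      The resulting helper, sketched (kernel/watchdog.c; the comment
      paraphrases the rationale above):

        void touch_nmi_watchdog(void)
        {
                /*
                 * __raw_get_cpu_var() skips the preemption check on
                 * purpose: if we race to the "wrong" cpu, preemption --
                 * and therefore interrupts -- were enabled, so that cpu's
                 * NMI watchdog was never in danger of firing anyway.
                 */
                __raw_get_cpu_var(watchdog_nmi_touch) = true;
                touch_softlockup_watchdog();
        }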
      
      Don said:
      
      : I'm ok with this patch, though it does alter the behaviour of how
      : touch_nmi_watchdog works.  For the most part I don't think most callers
      : need to touch all of the watchdogs (on each cpu).  Perhaps a corner case
      : will pop up (the scheduler??  to mimic touch_all_softlockup_watchdogs() ).
      :
      : But this does address an issue where if a system is locked up and one cpu
      : is spewing out useful debug messages (or error messages), the hard lockup
      : will fail to go off.  We have seen this on RHEL also.
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Signed-off-by: Ben Zhang <benzh@chromium.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      62572e29
    • D
      fs/direct-io.c: remove some left over checks · 45d4f855
      Dan Carpenter authored
      We know that "ret > 0" is true here.  These tests were left over from
      commit 02afc27f ('direct-io: Handle O_(D)SYNC AIO') and aren't
      needed any more.
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      45d4f855
    • G
      fs/direct-io.c: remove redundant comparison · 2b665e27
      Gu Zheng authored
      The return value of bio_get_nr_vecs() cannot be bigger than
      BIO_MAX_PAGES, so we can remove the redundant comparison between
      nr_pages and BIO_MAX_PAGES.
      Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2b665e27
    • W
      ocfs2: pass "new" parameter to ocfs2_init_xattr_bucket · 9c339255
      Wengang Wang authored
      This patch fixes the following crash:
      
        kernel BUG at fs/ocfs2/uptodate.c:530!
        Modules linked in: ocfs2(F) ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager ocfs2_stackglue configfs bridge xen_pciback xen_netback xen_blkback xen_gntalloc xen_gntdev xen_evtchn xenfs xen_privcmd sunrpc 8021q garp stp llc bonding be2iscsi iscsi_boot_sysfs bnx2i cnic uio cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 mdio ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr ipv6 iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi iTCO_wdt iTCO_vendor_support dcdbas coretemp freq_table mperf microcode pcspkr serio_raw bnx2 lpc_ich mfd_core i5k_amb i5000_edac edac_core e1000e sg shpchp ext4(F) jbd2(F) mbcache(F) dm_round_robin(F) sr_mod(F) cdrom(F) usb_storage(F) sd_mod(F) crc_t10dif(F) pata_acpi(F) ata_generic(F) ata_piix(F) mptsas(F) mptscsih(F) mptbase(F) scsi_transport_sas(F) radeon(F)
         ttm(F) drm_kms_helper(F) drm(F) hwmon(F) i2c_algo_bit(F) i2c_core(F) dm_multipath(F) dm_mirror(F) dm_region_hash(F) dm_log(F) dm_mod(F)
        CPU 5
        Pid: 21303, comm: xattr-test Tainted: GF       W    3.8.13-30.el6uek.x86_64 #2 Dell Inc. PowerEdge 1950/0M788G
        RIP: ocfs2_set_new_buffer_uptodate+0x51/0x60 [ocfs2]
        Process xattr-test (pid: 21303, threadinfo ffff880017aca000, task ffff880016a2c480)
        Call Trace:
          ocfs2_init_xattr_bucket+0x8a/0x120 [ocfs2]
          ocfs2_cp_xattr_bucket+0xbb/0x1b0 [ocfs2]
          ocfs2_extend_xattr_bucket+0x20a/0x2f0 [ocfs2]
          ocfs2_add_new_xattr_bucket+0x23e/0x4b0 [ocfs2]
          ocfs2_xattr_set_entry_index_block+0x13c/0x3d0 [ocfs2]
          ocfs2_xattr_block_set+0xf9/0x220 [ocfs2]
          __ocfs2_xattr_set_handle+0x118/0x710 [ocfs2]
          ocfs2_xattr_set+0x691/0x880 [ocfs2]
          ocfs2_xattr_user_set+0x46/0x50 [ocfs2]
          generic_setxattr+0x96/0xa0
          __vfs_setxattr_noperm+0x7b/0x170
          vfs_setxattr+0xbc/0xc0
          setxattr+0xde/0x230
          sys_fsetxattr+0xc6/0xf0
          system_call_fastpath+0x16/0x1b
        Code: 41 80 0c 24 01 48 89 df e8 7d f0 ff ff 4c 89 e6 48 89 df e8 a2 fe ff ff 48 89 df e8 3a f0 ff ff 48 8b 1c 24 4c 8b 64 24 08 c9 c3 <0f> 0b eb fe 90 90 90 90 90 90 90 90 90 90 90 55 48 89 e5 66 66
        RIP  ocfs2_set_new_buffer_uptodate+0x51/0x60 [ocfs2]
      
      It hit the BUG_ON() in ocfs2_set_new_buffer_uptodate():
      
          void ocfs2_set_new_buffer_uptodate(struct ocfs2_caching_info *ci,
                                             struct buffer_head *bh)
          {
                /* This should definitely *not* exist in our cache */
                if (ocfs2_buffer_cached(ci, bh))
                        printk(KERN_ERR "bh->b_blocknr: %lu @ %p\n", bh->b_blocknr, bh);
                BUG_ON(ocfs2_buffer_cached(ci, bh));
      
                set_buffer_uptodate(bh);
      
                ocfs2_metadata_cache_io_lock(ci);
                ocfs2_set_buffer_uptodate(ci, bh);
                ocfs2_metadata_cache_io_unlock(ci);
          }
      
      The problem here is:
      
      We cached a block, but the buffer_head got reused.  When we pick up
      this block again, a new buffer_head is created with the UPTODATE flag
      cleared.  ocfs2_buffer_uptodate() returns false since UPTODATE is not
      set on the buffer_head.  So we add this block to the cache as a NEW
      block, and it then fails the assertion that the block is not already
      cached.
      
      The fix is to add a new parameter to ocfs2_init_xattr_bucket()
      indicating whether the bucket is newly allocated or not, so that
      ocfs2_init_xattr_bucket() only asserts the block is not cached when it
      really is new.
      Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Reviewed-by: Mark Fasheh <mfasheh@suse.de>
      Cc: Joe Jin <joe.jin@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9c339255
    • J
      ocfs2: avoid system inode ref confusion by adding mutex lock · 43b10a20
      jiangyiwen authored
      The following race can leave the same system inode with an extra reference.
      
      A thread                            B thread
      ocfs2_get_system_file_inode
      ->get_local_system_inode
      ->_ocfs2_get_system_file_inode
                                          because of *arr == NULL,
                                          ocfs2_get_system_file_inode
                                          ->get_local_system_inode
                                          ->_ocfs2_get_system_file_inode
      gets first ref thru
      _ocfs2_get_system_file_inode,
      gets second ref thru igrab and
      set *arr = inode
                                          at this moment, thread B also
                                          gets two refs, leading to one
                                          extra inode ref.
      
      So add a mutex lock to prevent multiple threads from installing the
      inode ref concurrently.
      Signed-off-by: jiangyiwen <jiangyiwen@huawei.com>
      Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      43b10a20
    • J
      ocfs2: iput inode alloc when failed locally · 7dc3e839
      jiangyiwen authored
      In ocfs2_info_handle_freeinode() and ocfs2_test_inode_bit(), after
      calling ocfs2_get_system_file_inode() to get an inode ref, if
      ocfs2_info_scan_inode_alloc() or ocfs2_inode_lock() fails, we should
      iput the inode alloc to avoid leaking the inode.
      Signed-off-by: jiangyiwen <jiangyiwen@huawei.com>
      Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
      Cc: Mark Fasheh <mfasheh@suse.de>
      Cc: Joel Becker <jlbec@evilplan.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7dc3e839
    • T
      ocfs2/o2net: o2net_listen_data_ready should do nothing if socket state is not TCP_LISTEN · da8ded40
      Tariq Saeed authored
      Orabug: 17330860
      
      When accepting an incoming connection, o2net_accept_one clones a child
      data socket from the parent listening socket.  It then proceeds to set
      up the child with the callback o2net_data_ready() and sk_user_data set
      to NULL.  If data arrives in this window, o2net_listen_data_ready will
      be called with some non-deterministic value in sk_user_data (not
      inherited).  We panic when we page fault on sk_user_data -- in the
      parent it is sock_def_readable().
      
      The fix is to recognize that this is a data socket being set up by
      looking at the socket state, and to do nothing in that case.
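
      The added guard, sketched (inside o2net_listen_data_ready() in
      fs/ocfs2/cluster/tcp.c; the surrounding callback plumbing is omitted):

        /* A socket not in TCP_LISTEN is a child data socket still being
         * set up -- leave it to its own sk_data_ready and do nothing. */
        if (sk->sk_state != TCP_LISTEN) {
                ready = sk->sk_data_ready;
                goto out;
        }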
      Signed-off-by: Tariq Saeed <tariq.x.saeed@oracle.com>
      Signed-off-by: Srinivas Eeda <srinivas.eeda@oracle.com>
      Reviewed-by: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      da8ded40
    • Y
      ocfs2: rollback alloc_dinode counts when ocfs2_block_group_set_bits() failed · db66c715
      Younger Liu authored
      After updating the alloc_dinode counts in
      ocfs2_alloc_dinode_update_counts(), if
      ocfs2_alloc_dinode_update_bitmap() fails, there is a rare case in which
      some space may be lost.

      So, roll back the alloc_dinode counts when ocfs2_block_group_set_bits()
      fails.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Younger Liu <younger.liucn@gmail.com>
      Reviewed-by: Mark Fasheh <mfasheh@suse.de>
      Cc: Joel Becker <jlbec@evilplan.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      db66c715
    • W
      ocfs2: flock: drop cross-node lock when failed locally · e228f643
      Wengang Wang authored
      ocfs2_do_flock() calls ocfs2_file_lock() to get the cross-node lock and
      then calls flock_lock_file_wait() to compete with local processes.  In
      case flock_lock_file_wait() fails, say with -ENOMEM, the cleanup work
      is not done.  This patch adds the cleanup -- dropping the cross-node
      lock which was just granted.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Reviewed-by: Mark Fasheh <mfasheh@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e228f643
    • D
      ocfs2: call ocfs2_update_inode_fsync_trans when updating any inode · 6fdb702d
      Darrick J. Wong authored
      Ensure that ocfs2_update_inode_fsync_trans() is called any time we touch
      an inode in a given transaction.  This is a follow-on to the previous
      patch to reduce lock contention and deadlocking during an fsync
      operation.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Cc: Mark Fasheh <mfasheh@suse.de>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Wengang <wen.gang.wang@oracle.com>
      Cc: Greg Marsden <greg.marsden@oracle.com>
      Cc: Srinivas Eeda <srinivas.eeda@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6fdb702d
    • T
      ocfs2: fix panic on kfree(xattr->name) · f81c2015
      Tetsuo Handa authored
      Commit 9548906b ('xattr: Constify ->name member of "struct xattr"')
      missed that ocfs2 is calling kfree(xattr->name).  As a result, a kernel
      panic occurs upon calling kfree(xattr->name), because xattr->name now
      refers to static constant names.  This patch removes kfree(xattr->name)
      from ocfs2_mknod() and ocfs2_symlink().
      Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Reported-by: Tariq Saeed <tariq.x.saeed@oracle.com>
      Tested-by: Tariq Saeed <tariq.x.saeed@oracle.com>
      Reviewed-by: Srinivas Eeda <srinivas.eeda@oracle.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: <stable@vger.kernel.org>	[3.12+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f81c2015
    • A
      ocfs2: do not put bh when buffer_uptodate failed · f7cf4f5b
      alex chen authored
      Do not put the bh when buffer_uptodate fails in ocfs2_write_block and
      ocfs2_write_super_or_backup, because the bh will be put in b_end_io.
      Otherwise it will hit the warning "VFS: brelse: Trying to free free
      buffer".
      Signed-off-by: Alex Chen <alex.chen@huawei.com>
      Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
      Reviewed-by: Srinivas Eeda <srinivas.eeda@oracle.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Acked-by: Joel Becker <jlbec@evilplan.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f7cf4f5b
    • X
      ocfs2: __ocfs2_mknod_locked should return error when ocfs2_create_new_inode_locks() failed · 466e68c4
      Xue jiufei authored
      When ocfs2_create_new_inode_locks() returns an error, the inode open
      lock may not be obtained for this inode.  Other nodes can then remove
      this file and free the dinode while the inode still remains in memory
      on this node, which is not correct and may trigger a BUG.  So
      __ocfs2_mknod_locked should return an error when
      ocfs2_create_new_inode_locks() fails.
      
                    Node_1                              Node_2
      create fileA, call ocfs2_mknod()
        -> ocfs2_get_init_inode(), allocate inodeA
        -> ocfs2_claim_new_inode(), claim dinode(dinodeA)
        -> call ocfs2_create_new_inode_locks(),
           create open lock failed, return error
        -> __ocfs2_mknod_locked return success
      
                                                      unlink fileA
                                                      try open lock succeed,
                                                      and free dinodeA
      
      create another file, call ocfs2_mknod()
        -> ocfs2_get_init_inode(), allocate inodeB
        -> ocfs2_claim_new_inode(), as Node_2 had freed dinodeA,
           so claim dinodeA and update generation for dinodeA
      
      call __ocfs2_drop_dl_inodes()->ocfs2_delete_inode()
      to free inodeA, finally triggering the
      BUG_ON(inode->i_generation != le32_to_cpu(fe->i_generation))
      in ocfs2_inode_lock_update().
      Signed-off-by: joyce.xue <xuejiufei@huawei.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      466e68c4
    • T
      ocfs2: allow for more than one data extent when creating xattr · 3ed2be71
      Tariq Saeed authored
      Orabug: 18108070
      
      ocfs2_xattr_extend_allocation() hits a panic when creating an xattr
      during the data extent alloc phase.  The problem occurs if, due to
      local alloc fragmentation, clusters are spread over multiple extents.
      In this case ocfs2_add_clusters_in_btree() finds no space to store more
      than one extent record and therefore fails, returning RESTART_META.
      This situation is anticipated for the xattr update case but not the
      xattr create case.  This fix simply ports that code to the create case.
      Signed-off-by: Tariq Saeed <tariq.x.saeed@oracle.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3ed2be71
    • Z
      ocfs2: fix deadlock risk when kmalloc failed in dlm_query_region_handler · a35ad97c
      Zhonghua Guo authored
      In dlm_query_region_handler(), if kmalloc() fails, the code unlocks
      dlm_domain_lock without having locked it first, and a deadlock
      follows.
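      
      A hedged sketch of the corrected error path (variable and label names
      illustrative): the failure branch must bypass the unlock, because the
      lock has not been taken yet.
      
        local = kmalloc(sizeof(qr->qr_regions), GFP_KERNEL);
        if (!local) {
                status = -ENOMEM;
                goto bail;              /* dlm_domain_lock not held yet */
        }
        spin_lock(&dlm_domain_lock);
        /* ... look up the domain and compare the region lists ... */
        spin_unlock(&dlm_domain_lock);
      bail:
        kfree(local);                   /* kfree(NULL) is a safe no-op */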
      Signed-off-by: Zhonghua Guo <guozhonghua@h3c.com>
      Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
      Reviewed-by: Srinivas Eeda <srinivas.eeda@oracle.com>
      Tested-by: Joseph Qi <joseph.qi@huawei.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a35ad97c
    • J
      ocfs2: llseek requires ocfs2 inode lock for the file in SEEK_END · c8d888d9
      Committed by Jensen
      llseek requires the ocfs2 inode lock when updating the file size for
      SEEK_END, because the file size may have been updated on another
      node.
      
      This bug can be reproduced with the following scenario: first, dd a
      test fileA with a file size of 10k.
      
      on NodeA:
      ---------
       1) open the test fileA, lseek to the end of the file, and print the position.
       2) close the test fileA.
      
      on NodeB:
       1) open the test fileA, append 5k of data to fileA.
       2) lseek to the end of the file, and print the position.
       3) close the file.
      
      First we run test program1 on NodeA; the result is 10k.  Then we run
      test program2 on NodeB; the result is 15k.  Finally we run test
      program1 on NodeA again; the result is still 10k.
      
      After applying this patch, the result of that third step is 15k.
      
      Test result: 1,000,000 lseek calls;
      index        lseek with inode lock (unit:us)                lseek without inode lock (unit:us)
        1                   1168162                                    555383
        2                   1168011                                    549504
        3                   1170538                                    549396
        4                   1170375                                    551685
        5                   1170444                                    556719
        6                   1174364                                    555307
        7                   1163294                                    551552
        8                   1170080                                    549350
        9                   1162464                                    553700
       10                   1165441                                    552594
       avg                  1168317                                    552519
      
      avg with lock - avg without lock = 1168317 - 552519 = 615798 us
      (avg with lock - avg without lock) / 1000000 = 0.615798 us per lseek call
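      
      For reference, a runnable user-space version of the SEEK_END probe
      used in the tests above might look like this (the mount point and
      file name are placeholders):
      
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
      
        int main(void)
        {
                int fd = open("/mnt/ocfs2/fileA", O_RDONLY);
                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                /* with this patch, the kernel takes the ocfs2 inode lock
                 * here, so the size reflects appends from other nodes */
                off_t end = lseek(fd, 0, SEEK_END);
                printf("SEEK_END position: %lld\n", (long long)end);
                close(fd);
                return 0;
        }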
      Signed-off-by: Jensen <shencanquan@huawei.com>
      Cc: Jie Liu <jeff.liu@oracle.com>
      Acked-by: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Sunil Mushran <sunil.mushran@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c8d888d9
    • J
      ocfs2: fix type conversion risk when get cluster attributes · 41b63efb
      Committed by Joseph Qi
      In o2nm_cluster, cl_idle_timeout_ms, cl_keepalive_delay_ms and
      cl_reconnect_delay_ms are all defined as unsigned int, so the helper
      functions should use unsigned int as well.
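      
      A sketch of what such a helper might look like after the change (the
      real function may perform additional checks; the name follows
      fs/ocfs2/cluster/nodemanager.c but the body is illustrative):
      
        static ssize_t o2nm_cluster_attr_write(const char *page, ssize_t count,
                                               unsigned int *val)
        {
                unsigned int tmp;
                int ret;
      
                ret = kstrtouint(page, 0, &tmp);  /* unsigned int, not long */
                if (ret)
                        return ret;
      
                *val = tmp;
                return count;
        }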
      Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      41b63efb
    • G
      ocfs2: revert iput deferring code in ocfs2_drop_dentry_lock · 8ed6b237
      Committed by Goldwyn Rodrigues
      This patch reverts the following commits because they caused a
      performance regression in remote unlink() calls.
      
        ea455f8a - ocfs2: Push out dropping of dentry lock to ocfs2_wq
        f7b1aa69 - ocfs2: Fix deadlock on umount
        5fd13189 - ocfs2: Don't oops in ocfs2_kill_sb on a failed mount
      
      Previous patches in this series removed the possible deadlocks from
      the downconvert thread, so the above patches should no longer be
      needed.
      
      The regression is caused because these patches delay the iput() in
      the case of dentry unlocks.  This also delays the unlocking of the
      open lockres.  The open lock resource is required to test whether the
      inode can be wiped from disk or not.  When the deleting node does not
      get the open lock, it marks the inode as an orphan (even though it is
      not in use by another node/process) and causes a journal checkpoint.
      This delays operations following the inode eviction.  It also moves
      the inode into the orphan directory, which causes further I/O and a
      lot of unnecessary orphans.
      
      The following script can be used to generate the load causing issues:
      
        declare -a create
        declare -a copy
        declare -a remove
        declare -a iterations=(1 2 4 8 16 32 64 128 256 512 1024 2048 4096 8192 16384)
        unique="`mktemp -u XXXXX`"
        script="/tmp/idontknow-${unique}.sh"
        cat <<EOF > "${script}"
        for n in {1..8}; do mkdir -p test/dir\${n}
          eval touch test/dir\${n}/foo{1.."\$1"}
        done
        EOF
        chmod 700 "${script}"
      
        function fcreate ()
        {
          exec 2>&1 /usr/bin/time --format=%E "${script}" "$1"
        }
      
        function fremove ()
        {
          exec 2>&1 /usr/bin/time --format=%E ssh node2 "cd `pwd`; rm -Rf test*"
        }
      
        function fcp ()
        {
          exec 2>&1 /usr/bin/time --format=%E ssh node3 "cd `pwd`; cp -R test test.new"
        }
      
        echo -------------------------------------------------
        echo "| # files | create #s | copy #s | remove #s |"
        echo -------------------------------------------------
        for ((x=0; x < ${#iterations[*]} ; x++)) do
          create[$x]="`fcreate ${iterations[$x]}`"
          copy[$x]="`fcp ${iterations[$x]}`"
          remove[$x]="`fremove`"
          printf "| %8d | %9s | %9s | %9s |\n" ${iterations[$x]} ${create[$x]} ${copy[$x]} ${remove[$x]}
        done
        rm "${script}"
        echo "------------------------"
      Signed-off-by: Srinivas Eeda <srinivas.eeda@oracle.com>
      Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Mark Fasheh <mfasheh@suse.de>
      Cc: Joel Becker <jlbec@evilplan.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8ed6b237
    • J
      ocfs2: avoid blocking in ocfs2_mark_lockres_freeing() in downconvert thread · 84d86f83
      Committed by Jan Kara
      If we are dropping the last inode reference from the downconvert
      thread, we end up calling ocfs2_mark_lockres_freeing(), which can
      block if the lock we are freeing is queued, thus creating an A-A
      deadlock.  Luckily, since we are the downconvert thread, we can
      immediately dequeue the lock ourselves and thus avoid waiting in this
      case.
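      
      A hedged sketch of the dequeue shortcut (field and flag names follow
      the ocfs2 sources, but the exact logic is illustrative):
      
        if (osb->dc_task == current) {
                spin_lock_irqsave(&lockres->l_lock, flags);
                /* we *are* the downconvert thread, so pull the lockres off
                 * the blocked list ourselves instead of sleeping on it */
                if (lockres->l_flags & OCFS2_LOCK_QUEUED)
                        list_del_init(&lockres->l_blocked_list);
                spin_unlock_irqrestore(&lockres->l_lock, flags);
        }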
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Mark Fasheh <mfasheh@suse.de>
      Reviewed-by: Srinivas Eeda <srinivas.eeda@oracle.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      84d86f83