1. 16 Apr 2015 (11 commits)
  2. 15 Apr 2015 (29 commits)
    • memtest: use phys_addr_t for physical addresses · 7f70baee
      Vladimir Murzin committed
      Since memtest might be used by other architectures, pass input parameters
      as phys_addr_t instead of long to prevent overflow.
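
      As an illustration of the overflow being avoided (editor's sketch, not part
      of the patch): on a 32-bit kernel addressing memory above 4GB, a physical
      address no longer fits in unsigned long, while a 64-bit phys_addr_t keeps
      it intact.  A minimal userspace analogue, using uint32_t to stand in for a
      32-bit unsigned long:

          #include <stdint.h>
          #include <stdio.h>

          int main(void)
          {
                  uint64_t phys = 0x100000000ULL + 0x1000; /* just above 4GB */
                  uint32_t as_32bit_long = (uint32_t)phys; /* 32-bit 'long' */

                  printf("phys_addr_t-like value: 0x%llx\n",
                         (unsigned long long)phys);
                  printf("truncated 32-bit value: 0x%x\n", as_32bit_long);
                  return 0;
          }
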
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f70baee
    • mm: move memtest under mm · 4a20799d
      Vladimir Murzin committed
      Memtest is a simple feature that fills memory with a given set of
      patterns and validates the contents; if bad memory regions are detected,
      it reserves them via the memblock API.  Since the memblock API is widely
      used by other architectures, this feature can be enabled outside of the
      x86 world.
      
      This patch set promotes memtest to live under the generic mm umbrella and
      enables the memtest feature for arm/arm64.
      
      It was reported that this patch set was useful for tracking down an issue
      with some errant DMA on an arm64 platform.
      
      This patch (of 6):
      
      There is nothing platform dependent in the core memtest code, so other
      platforms might benefit from this feature too.
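
      For readers unfamiliar with the feature: the core of memtest is a simple
      write-pattern / read-back / verify pass over a region, reserving anything
      that reads back wrong.  A rough userspace analogue of that check (editor's
      sketch; the kernel version walks memblock regions and reserves bad ranges
      instead of printing them):

          #include <stdint.h>
          #include <stdio.h>
          #include <stdlib.h>

          /* Fill a region with a pattern, read it back, report bad words. */
          static void memtest_pattern(uint64_t pattern, uint64_t *start,
                                      size_t words)
          {
                  for (size_t i = 0; i < words; i++)
                          start[i] = pattern;
                  for (size_t i = 0; i < words; i++)
                          if (start[i] != pattern)
                                  printf("bad word at %p\n", (void *)&start[i]);
          }

          int main(void)
          {
                  size_t words = 1024;
                  uint64_t *buf = malloc(words * sizeof(*buf));

                  if (!buf)
                          return 1;
                  memtest_pattern(0x5555555555555555ULL, buf, words);
                  memtest_pattern(0xaaaaaaaaaaaaaaaaULL, buf, words);
                  free(buf);
                  return 0;
          }
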
      
      [linux@roeck-us.net: MEMTEST depends on MEMBLOCK]
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Bolle <pebolle@tiscali.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4a20799d
    • mm, hugetlb: abort __get_user_pages if current has been oom killed · 02057967
      David Rientjes committed
      If __get_user_pages() is faulting a significant number of hugetlb pages,
      usually as the result of mmap(MAP_LOCKED), it can potentially allocate a
      very large amount of memory.
      
      If the process has been oom killed, this can needlessly deplete memory
      reserves.
      
      In the same way that commit 4779280d ("mm: make get_user_pages()
      interruptible") aborted for pending SIGKILLs when faulting non-hugetlb
      memory, based on the premise of commit 462e00cc ("oom: stop
      allocating user memory if TIF_MEMDIE is set"), hugetlb page faults now
      terminate when the process has been oom killed.
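
      The shape of the change is a bail-out check inside the hugetlb faulting
      loop, roughly as below (hedged sketch of the idea; the exact placement and
      surrounding variables belong to follow_hugetlb_page() in the patch):

          /* inside the loop that faults in hugetlb pages for gup */
          if (unlikely(fatal_signal_pending(current))) {
                  /* the task was SIGKILLed (e.g. oom killed): stop
                   * faulting and allocating, report what we have */
                  remainder = 0;
                  break;
          }
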
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Greg Thelen <gthelen@google.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Acked-by: Davidlohr Bueso <dave@stgolabs.net>
      Acked-by: "Kirill A. Shutemov" <kirill@shutemov.name>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      02057967
    • mm, mempool: do not allow atomic resizing · 11d83360
      David Rientjes committed
      Allocating a large number of elements in atomic context could quickly
      deplete memory reserves, so just disallow atomic resizing entirely.
      
      Nothing currently uses mempool_resize() with anything other than
      GFP_KERNEL, so convert existing callers to drop the gfp_mask.
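
      In terms of the interface, the change amounts to dropping the gfp_mask
      parameter (sketch of the before/after prototypes; see linux/mempool.h for
      the authoritative declaration):

          /* before: callers could pass an atomic gfp_mask */
          int mempool_resize(mempool_t *pool, int new_min_nr, gfp_t gfp_mask);

          /* after: resizing uses GFP_KERNEL internally and may sleep */
          int mempool_resize(mempool_t *pool, int new_min_nr);
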
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Steffen Maier <maier@linux.vnet.ibm.com>	[zfcp]
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Steve French <sfrench@samba.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      11d83360
    • memcg: print cgroup information when system panics due to panic_on_oom · 2415b9f5
      Balasubramani Vivekanandan committed
      If the kernel panics due to an OOM caused by a cgroup reaching its limit
      while 'compulsory panic_on_oom' is enabled, we only see that the OOM
      happened because "compulsory panic_on_oom is enabled", which does not
      distinguish between mempolicy and memcg.  Dumping system-wide information
      in that case is plain wrong and more confusing.  This patch prints the
      information of the cgroup whose limit triggered the panic.
      Signed-off-by: Balasubramani Vivekanandan <balasubramani_vivekanandan@mentor.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2415b9f5
    • mm: numa: remove migrate_ratelimited · 2a8e7002
      Mel Gorman committed
      This code is dead since commit 9e645ab6 ("sched/numa: Continue PTE
      scanning even if migrate rate limited") so remove it.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2a8e7002
    • mm: memcontrol: let mem_cgroup_move_account() have effect only if MMU enabled · b1b0deab
      Chen Gang committed
      With !MMU, the build reports a warning.  The related warning with
      allmodconfig under c6x:
      
          CC      mm/memcontrol.o
        mm/memcontrol.c:2802:12: warning: 'mem_cgroup_move_account' defined but not used [-Wunused-function]
         static int mem_cgroup_move_account(struct page *page,
                    ^
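
      A common way to silence this class of warning is to compile the helper
      only in configurations that can actually call it, for example by guarding
      it with the relevant config option (generic pattern shown as a sketch; the
      parameter list is abbreviated and the actual patch may structure the guard
      differently):

          #ifdef CONFIG_MMU   /* charge moving needs an MMU */
          static int mem_cgroup_move_account(struct page *page /* , ... */)
          {
                  /* ... charge-moving logic ... */
                  return 0;
          }
          #endif /* CONFIG_MMU */
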
      Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b1b0deab
    • mm: change vunmap to tear down huge KVA mappings · b9820d8f
      Toshi Kani committed
      Change vunmap_pmd_range() and vunmap_pud_range() to tear down huge KVA
      mappings when they are set.  pud_clear_huge() and pmd_clear_huge() return
      zero when no operation is performed, i.e. the huge page mapping was not used.
      
      These changes are only enabled when CONFIG_HAVE_ARCH_HUGE_VMAP is defined
      on the architecture.
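
      The resulting teardown loop looks roughly like the following sketch (based
      on the description above; the in-tree version may differ in detail):

          static void vunmap_pmd_range(pud_t *pud, unsigned long addr,
                                       unsigned long end)
          {
                  pmd_t *pmd = pmd_offset(pud, addr);
                  unsigned long next;

                  do {
                          next = pmd_addr_end(addr, end);
                          /* non-zero means a huge mapping was torn down */
                          if (pmd_clear_huge(pmd))
                                  continue;
                          if (pmd_none_or_clear_bad(pmd))
                                  continue;
                          vunmap_pte_range(pmd, addr, next);
                  } while (pmd++, addr = next, addr != end);
          }
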
      
      [akpm@linux-foundation.org: use consistent code layout]
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Robert Elliott <Elliott@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b9820d8f
    • mm: change __get_vm_area_node() to use fls_long() · 0f616be1
      Toshi Kani committed
      ioremap() and its related interfaces are used to create I/O mappings to
      memory-mapped I/O devices.  The mapping sizes of traditional I/O devices
      are relatively small.  Non-volatile memory (NVM), however, spans many
      gigabytes and will soon reach terabytes, and it is not very efficient to
      create such large I/O mappings with 4KB pages.
      
      This patchset extends the ioremap() interfaces to transparently create I/O
      mappings with huge pages whenever possible.  ioremap() continues to use
      4KB mappings when a huge page does not fit into a requested range.  There
      is no change necessary to the drivers using ioremap().  A requested
      physical address must be aligned by a huge page size (1GB or 2MB on x86)
      for using huge page mapping, though.  The kernel huge I/O mapping will
      improve performance of NVM and other devices with large memory, and reduce
      the time to create their mappings as well.
      
      On x86, MTRRs can override PAT memory types with a 4KB granularity.  When
      using a huge page, MTRRs can override the memory type of the huge page,
      which may lead to a performance penalty.  The processor can also behave in
      an undefined manner if a huge page is mapped to a memory range that MTRRs
      have mapped with multiple different memory types.  Therefore, the mapping
      code falls back to smaller page sizes toward 4KB when a mapping range is
      covered by non-WB-type MTRRs.  WB-type MTRRs have no effect on the PAT
      memory types.
      
      The patchset introduces HAVE_ARCH_HUGE_VMAP, which indicates that the arch
      supports huge KVA mappings for ioremap().  Users may specify a new kernel
      option "nohugeiomap" to disable the huge I/O mapping capability of
      ioremap() when necessary.

      Patches 1-4 change common files to support huge I/O mappings.  There is no
      change in functionality unless HAVE_ARCH_HUGE_VMAP is defined on the
      architecture of the system.

      Patches 5-6 implement the HAVE_ARCH_HUGE_VMAP funcs on x86, and set
      HAVE_ARCH_HUGE_VMAP on x86.
      
      This patch (of 6):
      
      __get_vm_area_node() takes an unsigned long size, which is a 64-bit value
      on a 64-bit kernel.  However, fls(size) simply ignores the upper 32 bits.
      Change it to use fls_long() so the size is handled properly.
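
      The difference is easy to demonstrate outside the kernel (editor's sketch
      using compiler builtins as stand-ins for the kernel's fls()/fls_long(),
      assuming a 64-bit long):

          #include <stdio.h>

          static int fls_like(int x)        /* kernel fls(): int argument */
          {
                  return x ? 32 - __builtin_clz((unsigned int)x) : 0;
          }

          static int fls_long_like(unsigned long x)  /* fls_long(): long */
          {
                  return x ? 64 - __builtin_clzl(x) : 0;
          }

          int main(void)
          {
                  unsigned long size = 1UL << 33;    /* an 8GB mapping */

                  /* prints 0: the upper 32 bits are ignored */
                  printf("fls(size)      = %d\n", fls_like((int)size));
                  /* prints 34: the full value is considered */
                  printf("fls_long(size) = %d\n", fls_long_like(size));
                  return 0;
          }
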
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Robert Elliott <Elliott@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0f616be1
    • 42ff2703
    • mm: cma: constify and use correct signness in mm/cma.c · ac173824
      Sasha Levin committed
      Constify function parameters and use correct signedness where needed.
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Acked-by: Gregory Fong <gregory.0xf0@gmail.com>
      Cc: Pintu Kumar <pintu.k@samsung.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ac173824
    • mm, thp: really limit transparent hugepage allocation to local node · 5265047a
      David Rientjes committed
      Commit 077fcf11 ("mm/thp: allocate transparent hugepages on local
      node") restructured alloc_hugepage_vma() with the intent of only
      allocating transparent hugepages locally when there was not an effective
      interleave mempolicy.
      
      alloc_pages_exact_node() does not limit the allocation to the single node,
      however, but rather prefers it.  This is because __GFP_THISNODE is not set,
      which would cause the node-local nodemask to be passed.  Without it, only
      a nodemask that prefers the local node is passed.
      
      Fix this by passing __GFP_THISNODE and falling back to small pages when
      the allocation fails.
      
      Commit 9f1b868a ("mm: thp: khugepaged: add policy for finding target
      node") suffers from a similar problem for khugepaged, which is also fixed.
      
      Fixes: 077fcf11 ("mm/thp: allocate transparent hugepages on local node")
      Fixes: 9f1b868a ("mm: thp: khugepaged: add policy for finding target node")
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Pravin Shelar <pshelar@nicira.com>
      Cc: Jarno Rajahalme <jrajahalme@nicira.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5265047a
    • mm: remove GFP_THISNODE · 4167e9b2
      David Rientjes committed
      NOTE: this is not about __GFP_THISNODE, this is only about GFP_THISNODE.
      
      GFP_THISNODE is a secret combination of gfp bits that have different
      behavior than expected.  It is a combination of __GFP_THISNODE,
      __GFP_NORETRY, and __GFP_NOWARN and is special-cased in the page
      allocator slowpath to fail without trying reclaim even though it may be
      used in combination with __GFP_WAIT.
      
      An example of the problem this creates: commit e97ca8e5 ("mm: fix
      GFP_THISNODE callers and clarify") fixed up many users of GFP_THISNODE
      that really just wanted __GFP_THISNODE.  The problem doesn't end there,
      however, because even that was a no-op for alloc_misplaced_dst_page(),
      which also sets __GFP_NORETRY and __GFP_NOWARN, and for
      migrate_misplaced_transhuge_page(), where __GFP_NORETRY and __GFP_NOWAIT
      are set in GFP_TRANSHUGE.  Converting GFP_THISNODE to __GFP_THISNODE is a
      no-op in these cases since the page allocator special-cases
      __GFP_THISNODE && __GFP_NORETRY && __GFP_NOWARN.
      
      It's time to just remove GFP_THISNODE entirely.  We leave __GFP_THISNODE
      to restrict an allocation to a local node, but remove GFP_THISNODE and
      its obscurity.  Instead, we require that a caller clear __GFP_WAIT if it
      wants to avoid reclaim.
      
      This allows the aforementioned functions to actually reclaim as they
      should.  It also enables any future callers that want to do
      __GFP_THISNODE but also __GFP_NORETRY && __GFP_NOWARN to reclaim.  The
      rule is simple: if you don't want to reclaim, then don't set __GFP_WAIT.
      
      Aside: ovs_flow_stats_update() really wants to avoid reclaim as well, so
      it is unchanged.
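
      For reference, the removed combination versus the new explicit style
      (sketch only; the alloc_local() helper below is hypothetical and exists
      purely to show the call-site idiom):

          /* removed: __GFP_THISNODE plus the two flags the page allocator
           * special-cased to skip reclaim entirely */
          #define GFP_THISNODE  (__GFP_THISNODE | __GFP_NORETRY | __GFP_NOWARN)

          /* new style: say what you mean at the call site */
          static void *alloc_local(int nid, unsigned int order, bool reclaim)
          {
                  gfp_t gfp = GFP_KERNEL | __GFP_THISNODE; /* stay on nid */
                  struct page *page;

                  if (!reclaim)
                          gfp &= ~__GFP_WAIT;     /* don't reclaim, just fail */
                  page = alloc_pages_node(nid, gfp, order);
                  return page ? page_address(page) : NULL;
          }
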
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Pravin Shelar <pshelar@nicira.com>
      Cc: Jarno Rajahalme <jrajahalme@nicira.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4167e9b2
    • mm, mempolicy: migrate_to_node should only migrate to node · b360edb4
      David Rientjes committed
      migrate_to_node() is intended to migrate a page from one source node to
      a target node.
      
      Today, migrate_to_node() could end up migrating to any node, not only
      the target node.  This is because the page migration allocator,
      new_node_page() does not pass __GFP_THISNODE to
      alloc_pages_exact_node().  This causes the target node to be preferred
      but allows fallback to any other node in order of affinity.
      
      Prevent this by allocating with __GFP_THISNODE.  If memory is not
      available, -ENOMEM will be returned as appropriate.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b360edb4
    • cleancache: remove limit on the number of cleancache enabled filesystems · 3cb29d11
      Vladimir Davydov committed
      The limit equals 32 and is imposed by the number of entries in the
      fs_poolid_map and shared_fs_poolid_map.  Nowadays it is insufficient,
      because with containers on board a Linux host can have hundreds of
      active fs mounts.
      
      These maps were introduced by commit 49a9ab81 ("mm: cleancache:
      lazy initialization to allow tmem backends to build/run as modules") in
      order to allow compiling cleancache drivers as modules.  Real pool ids
      are stored in these maps while super_block->cleancache_poolid points to
      an entry in the map, so that on cleancache registration we can walk over
      all (if there are <= 32 of them, of course) cleancache-enabled super
      blocks and assign real pool ids.
      
      Actually, there is no need for these maps at all, because we can iterate
      over all super blocks directly using iterate_supers().  This is not racy:
      cleancache_init_ops is called from mount_fs with super_block->s_umount
      held for writing, while iterate_supers takes this semaphore for reading.
      So if we call iterate_supers after setting cleancache_ops, all super
      blocks that had been created before cleancache_register_ops was called
      will be assigned pool ids by the action function of iterate_supers, while
      all newer super blocks will receive them in cleancache_init_fs.
      
      This patch therefore removes the maps and hence the artificial limit on
      the number of cleancache enabled filesystems.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Stefan Hengelein <ilendir@googlemail.com>
      Cc: Florian Schmaus <fschmaus@gmail.com>
      Cc: Andor Daam <andor.daam@googlemail.com>
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Bob Liu <lliubbo@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3cb29d11
    • cleancache: forbid overriding cleancache_ops · 53d85c98
      Vladimir Davydov committed
      Currently, cleancache_register_ops returns the previous value of
      cleancache_ops to allow chaining.  However, chaining, as it is
      implemented now, is extremely dangerous due to possible pool id
      collisions.  Suppose, a new cleancache driver is registered after the
      previous one assigned an id to a super block.  If the new driver assigns
      the same id to another super block, which is perfectly possible, we will
      have two different filesystems using the same id.  No matter if the new
      driver implements chaining or not, we are likely to get data corruption
      with such a configuration eventually.
      
      This patch therefore disables the ability to override cleancache_ops
      altogether as potentially dangerous.  If a cleancache driver is already
      registered, all further calls to cleancache_register_ops will return
      -EBUSY.  Since no user of cleancache implements chaining, we only need to
      make minor changes to the code outside the cleancache core.
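
      The registration path therefore becomes a "first backend wins" check,
      roughly as follows (hedged sketch, not the exact kernel code):

          int cleancache_register_ops(struct cleancache_ops *ops)
          {
                  if (cleancache_ops)     /* a backend is already registered */
                          return -EBUSY;  /* overriding is no longer allowed */
                  cleancache_ops = ops;
                  /* ... assign pool ids to already-mounted filesystems ... */
                  return 0;
          }
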
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Stefan Hengelein <ilendir@googlemail.com>
      Cc: Florian Schmaus <fschmaus@gmail.com>
      Cc: Andor Daam <andor.daam@googlemail.com>
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Bob Liu <lliubbo@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      53d85c98
    • cleancache: zap uuid arg of cleancache_init_shared_fs · 9de16262
      Vladimir Davydov committed
      Use super_block->s_uuid instead.  Every shared filesystem using cleancache
      must now initialize super_block->s_uuid before calling
      cleancache_init_shared_fs.  The only one in the tree, ocfs2, already meets
      this requirement.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Stefan Hengelein <ilendir@googlemail.com>
      Cc: Florian Schmaus <fschmaus@gmail.com>
      Cc: Andor Daam <andor.daam@googlemail.com>
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Bob Liu <lliubbo@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9de16262
    • mm: refactor do_wp_page handling of shared vma into a function · 93e478d4
      Shachar Raindel committed
      The do_wp_page function is extremely long.  Extract the logic for
      handling a page belonging to a shared vma into a function of its own.
      
      This helps the readability of the code, without doing any functional
      change in it.
      Signed-off-by: Shachar Raindel <raindel@mellanox.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Haggai Eran <haggaie@mellanox.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      93e478d4
    • mm: refactor do_wp_page, extract the page copy flow · 2f38ab2c
      Shachar Raindel committed
      In some cases, do_wp_page had to copy the page suffering a write fault
      to a new location.  If the function logic decided to do this, it was done
      by jumping with a "goto" operation to the relevant code block.  This made
      the code really hard to understand.  It is also against the kernel coding
      style guidelines.

      This patch extracts the page copy and page table update logic to a
      separate function.  It also cleans up the naming, from "gotten" to
      "wp_page_copy", and adds a few comments.
      Signed-off-by: Shachar Raindel <raindel@mellanox.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Haggai Eran <haggaie@mellanox.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2f38ab2c
    • mm: refactor do_wp_page - rewrite the unlock flow · 28766805
      Shachar Raindel committed
      When do_wp_page is ending, in several cases it needs to unlock the pages
      and ptls it was accessing.

      Currently, this logic is "called" by using a goto jump.  This makes
      following the control flow of the function harder.  Readability was
      further hampered by the unlock case containing a large amount of logic
      needed only in one of the 3 cases.

      Using goto for cleanup is generally allowed.  However, moving the
      trivial unlocking flows to the relevant call sites allows deeper
      refactoring in the next patch.
      Signed-off-by: Shachar Raindel <raindel@mellanox.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Haggai Eran <haggaie@mellanox.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      28766805
    • mm: refactor do_wp_page, extract the reuse case · 4e047f89
      Shachar Raindel committed
      Currently do_wp_page contains 265 code lines.  It also contains 9 goto
      statements, of which 5 are targeting labels which are not cleanup
      related.  This makes the function extremely difficult to understand.
      
      The following patches are an attempt at breaking the function to its
      basic components, and making it easier to understand.
      
      The patches are straightforward function extractions from do_wp_page.
      As we extract functions, we remove unneeded parameters and simplify the
      code as much as possible.  However, the functionality is supposed to
      remain completely unchanged.  The patches also attempt to document the
      functionality of each extracted function.  In patch 2, we split the
      unlock logic so that each use case contains only the logic relevant to
      its specific needs, instead of having a huge number of conditional
      decisions in a single unlock flow.
      
      This patch (of 4):
      
      When do_wp_page is ending, in several cases it needs to reuse the existing
      page.  This is achieved by making the page table writable, and possibly
      updating the page-cache state.
      
      Currently, this logic is "called" by using a goto jump.  This makes
      following the control flow of the function harder.  It also goes against
      the coding style guidelines on goto usage.
      
      As the code can easily be refactored into a specialized function, refactor
      it out and simplify the code flow in do_wp_page.
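
      In other words, a label that several paths jumped to becomes an ordinary
      helper call (illustrative sketch; 'reuse_existing_page' is a made-up
      condition and the real helper takes the fault context as arguments):

          /* before: success paths converge on a label */
                  if (reuse_existing_page)
                          goto reuse;
                  /* ... */
          reuse:
                  /* make the pte writable, update mmu state */
                  return VM_FAULT_WRITE;

          /* after: the label body is a named helper, called directly */
                  if (reuse_existing_page)
                          return wp_page_reuse();
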
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Haggai Eran <haggaie@mellanox.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Feiner <pfeiner@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4e047f89
    • mm: completely remove dumping per-cpu lists from show_mem() · 761b0677
      Konstantin Khlebnikov committed
      It seems nobody needs this.
      Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      761b0677
    • mm: hide per-cpu lists in output of show_mem() · d1bfcdb8
      Konstantin Khlebnikov committed
      This makes show_mem() much less verbose on huge machines.  Instead of a
      huge and almost useless dump of counters for each per-zone per-cpu list,
      this patch prints the sum of these counters for each zone (free_pcp) and
      the size of the per-cpu list for the current cpu (local_pcp).

      The filter flag SHOW_MEM_PERCPU_LISTS reverts to the old verbose mode.
      
      [akpm@linux-foundation.org: update show_free_areas comment]
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d1bfcdb8
    • page_writeback: clean up mess around cancel_dirty_page() · b9ea2515
      Konstantin Khlebnikov committed
      This patch replaces cancel_dirty_page() with a helper function
      account_page_cleaned() which only updates counters.  It's called from
      truncate_complete_page() and from try_to_free_buffers() (hack for ext3).
      The page is locked in both cases; the page lock protects against
      concurrent dirtiers: see commit 2d6d7f98 ("mm: protect set_page_dirty()
      from ongoing truncation").
      
      delete_from_page_cache() shouldn't be called for dirty pages; they must
      be handled by the caller (either written or truncated).  This patch treats
      the final dirty-accounting fixup at the end of __delete_from_page_cache()
      as a debug check and adds WARN_ON_ONCE() around it.  If something removes
      dirty pages without proper handling, that might be a bug and unwritten
      data might be lost.

      Hugetlbfs has no dirty-page accounting, so ClearPageDirty() is enough
      here.

      The cancel_dirty_page() call in nfs_wb_page_cancel() is redundant.  It is
      a helper for nfs_invalidate_page() and is called only in the case of
      complete invalidation.
      
      The mess started in v2.6.20 with commits 46d2277c ("Clean up and make
      try_to_free_buffers() not race with dirty pages") and 3e67c098
      ("truncate: clear page dirtiness before running try_to_free_buffers()").
      The first was reverted right in v2.6.20 by commit ecdfc978 ("Resurrect
      'try_to_free_buffers()' VM hackery"), the second in v2.6.25 by commit
      a2b34564 ("Fix dirty page accounting leak with ext3 data=journal").

      Custom fixes were introduced between these points: NFS in v2.6.23 with
      commit 1b3b4a1a ("NFS: Fix a write request leak in
      nfs_invalidate_page()"), and a kludge in __delete_from_page_cache() in
      v2.6.24 with commit 3a692790 ("Do dirty page accounting when removing a
      page from the page cache").  Since v2.6.25 all of them are redundant.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b9ea2515
    • mm: incorporate zero pages into transparent huge pages · ca0984ca
      Ebru Akagunduz committed
      This patch improves THP collapse rates by allowing zero pages.

      Currently THP can collapse 4kB pages into a THP when there are up to
      khugepaged_max_ptes_none pte_none ptes in a 2MB range.  This patch counts
      pte_none ptes and mapped zero pages with the same variable.
      
      The patch was tested with a program that allocates 800MB of
      memory, and performs interleaved reads and writes, in a pattern
      that causes some 2MB areas to first see read accesses, resulting
      in the zero pfn being mapped there.
      
      To simulate memory fragmentation at allocation time, I modified
      do_huge_pmd_anonymous_page to return VM_FAULT_FALLBACK for read faults.
      
      Without the patch, only 50% of the program was collapsed into THP and the
      percentage did not increase over time.

      With this patch, after 10 minutes of waiting khugepaged had collapsed 99%
      of the program's memory.
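
      The counting change is essentially the following condition in the
      khugepaged scan of a 2MB range (sketch; variable names are illustrative):

          if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
                  if (++none_or_zero <= khugepaged_max_ptes_none)
                          continue;       /* still within the allowed budget */
                  else
                          goto out;       /* too many empty/zero ptes */
          }
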
      
      [aarcange@redhat.com: fix bogus BUG()]
      Signed-off-by: Ebru Akagunduz <ebru.akagunduz@gmail.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ca0984ca
    • mm/compaction: enhance compaction finish condition · 2149cdae
      Joonsoo Kim committed
      Compaction has an anti-fragmentation algorithm: if no freepage is found
      in the requested migratetype's buddy list, there must be a freepage of
      more than pageblock order for compaction to finish.  This mitigates
      fragmentation, but it lacks any migratetype consideration and is too
      strict compared to the page allocator's anti-fragmentation algorithm.
      
      Not considering migratetype causes premature finish of compaction.  For
      example, if the allocation request is for the unmovable migratetype, a
      freepage with the CMA migratetype doesn't help that allocation, so
      compaction should not be stopped.  But the current logic regards this
      situation as meaning compaction is no longer needed, and finishes it.

      Secondly, the condition is too strict compared to the page allocator's
      logic.  The page allocator can steal a freepage from another migratetype
      and change the pageblock migratetype under more relaxed conditions.  This
      is designed to prevent fragmentation, and we can use it here.  Imposing a
      hard constraint only on compaction doesn't help much in this case, since
      the page allocator would cause fragmentation again.
      
      To solve these problems, this patch borrows anti fragmentation logic from
      page allocator.  It will reduce premature compaction finish in some cases
      and reduce excessive compaction work.
      
      stress-highalloc test in mmtests with non movable order 7 allocation shows
      considerable increase of compaction success rate.
      
      Compaction success rate (Compaction success * 100 / Compaction stalls, %)
      31.82 : 42.20
      
      I tested it with 5 non-reboot runs of the stress-highalloc benchmark and
      found no further degradation in allocation success rate.  That roughly
      means that this patch doesn't result in more fragmentation.

      Vlastimil suggested the additional idea of only testing for fallbacks
      once the migration scanner has scanned a whole pageblock.  It looked good
      for fragmentation because the chance of stealing increases as more free
      pages are created within a given pageblock.  I tested it, but it results
      in a decreased compaction success rate, roughly 38.00.  I guess the
      reason is that when the system is in a low-memory condition, the
      watermark check can fail due to a lack of order-0 free pages, so
      sometimes we can't reach the fallback check even though migrate_pfn is
      aligned to pageblock_nr_pages.  I could insert code to cope with this
      situation, but it makes the code more complicated, so I don't include
      his idea in this patch.
      
      [akpm@linux-foundation.org: fix CONFIG_CMA=n build]
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2149cdae
    • mm/page_alloc: factor out fallback freepage checking · 4eb7dce6
      Joonsoo Kim committed
      This is a preparation step for using the page allocator's
      anti-fragmentation logic in compaction.  This patch just separates the
      fallback freepage checking part from the fallback freepage management
      part, so there is no functional change.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4eb7dce6
    • mm/cma: change fallback behaviour for CMA freepage · dc67647b
      Joonsoo Kim committed
      Freepages with MIGRATE_CMA can be used only for MIGRATE_MOVABLE, and they
      should not be expanded onto other migratetype buddy lists, to protect them
      from unmovable/reclaimable allocations.  Implementing these requirements
      in __rmqueue_fallback(), that is, by finding the largest possible block of
      freepages, has the bad effect that high-order freepages with MIGRATE_CMA
      are broken up continually even though there are CMA freepages of a
      suitable order.  The reason is that they cannot be expanded onto other
      migratetype buddy lists, so the next __rmqueue_fallback() invocation tries
      to find another largest block of freepages and breaks it again.  So,
      MIGRATE_CMA fallback should be handled separately.  This patch introduces
      __rmqueue_cma_fallback(), which is just a wrapper of __rmqueue_smallest(),
      and calls it before __rmqueue_fallback() if migratetype == MIGRATE_MOVABLE.
      
      This results in an unintended behaviour change: MIGRATE_CMA freepages are
      now always used first, before other migratetypes, as the fallback for
      movable allocations.  But, as already mentioned above, MIGRATE_CMA can be
      used only for MIGRATE_MOVABLE, so it is better to use MIGRATE_CMA
      freepages first as much as possible.  Otherwise, we needlessly take up
      precious freepages of other migratetypes and increase the chance of
      fragmentation.
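
      The resulting allocation order can be sketched as below (simplified; the
      in-tree __rmqueue() has additional retry and reserve handling):

          static struct page *__rmqueue(struct zone *zone, unsigned int order,
                                        int migratetype)
          {
                  struct page *page = __rmqueue_smallest(zone, order, migratetype);

                  if (!page && migratetype == MIGRATE_MOVABLE)
                          page = __rmqueue_cma_fallback(zone, order);
                  if (!page)
                          page = __rmqueue_fallback(zone, order, migratetype);
                  return page;
          }
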
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dc67647b
    • mm, hotplug: fix concurrent memory hot-add deadlock · 30467e0b
      David Rientjes committed
      There's a deadlock when concurrently hot-adding memory through the probe
      interface and switching a memory block from offline to online.
      
      When hot-adding memory via the probe interface, add_memory() first takes
      mem_hotplug_begin() and then device_lock() is later taken when registering
      the newly initialized memory block.  This creates a lock dependency of (1)
      mem_hotplug.lock (2) dev->mutex.
      
      When switching a memory block from offline to online, dev->mutex is first
      grabbed in device_online() when the write(2) transitions an existing
      memory block from offline to online, and then online_pages() will take
      mem_hotplug_begin().
      
      This creates a lock inversion between mem_hotplug.lock and dev->mutex.
      Vitaly reports that this deadlock can happen when a kworker handling a
      probe event races with systemd-udevd switching a memory block's state.

      This patch requires the state transition to take mem_hotplug_begin()
      before dev->mutex.  Hot-adding memory via the probe interface creates a
      memory block while holding mem_hotplug_begin(), so there is no way to
      take dev->mutex first in that case.
      
      online_pages() and offline_pages() are only called when transitioning
      memory block state.  We now require that mem_hotplug_begin() is taken
      before calling them -- this requires exporting the mem_hotplug_begin() and
      mem_hotplug_done() to generic code.  In all hot-add and hot-remove cases,
      mem_hotplug_begin() is done prior to device_online().  This is all that is
      needed to avoid the deadlock.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Zhang Zhen <zhenzhang.zhang@huawei.com>
      Cc: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Wang Nan <wangnan0@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      30467e0b