1. 30 April 2013, 40 commits
    • mm: rewrite the comment over migrate_pages() more comprehensibly · c73e5c9c
      Authored by Srivatsa S. Bhat
      The comment over migrate_pages() looks quite weird, and makes it hard to
      grasp what it is trying to say.  Rewrite it more comprehensibly.
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c73e5c9c
    • THP: fix comment about memory barrier · 52f37629
      Authored by Minchan Kim
      Currently the memory barrier assumed in __do_huge_pmd_anonymous_page
      doesn't work: lru_cache_add_lru uses a pagevec, so it can easily skip
      the spinlock that the documented rule relied on, and a user might see
      inconsistent data.
      
      I was not the first person to point out the problem.  Mel and Peter
      pointed it out a few months ago, and Peter noted further that even
      spin_lock/unlock can't guarantee the ordering:
      
        http://marc.info/?t=134333512700004
      
      	In particular:
      
              	*A = a;
              	LOCK
              	UNLOCK
              	*B = b;
      
      	may occur as:
      
              	LOCK, STORE *B, STORE *A, UNLOCK
      
      Finally, Hugh pointed out that we don't even need a memory barrier
      there, because __SetPageUptodate already provides one explicitly since
      Nick's commit 0ed361de ("mm: fix PageUptodate data race").
      
      So this patch fixes the comment in the THP code and adds the same
      comment to do_anonymous_page, too, because everybody except Hugh had
      been missing this, which shows that a comment is needed there.
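      
      For readers unfamiliar with the pattern, the following standalone C11
      program is a userspace analogue (illustration only, not kernel code) of
      the publication ordering being documented: the page contents must be
      written before the page is made visible, which in the kernel is what the
      write barrier inside __SetPageUptodate() guarantees before set_pmd_at()
      publishes the page.
      
      	#include <stdatomic.h>
      	#include <stdio.h>
      
      	static int page_contents;       /* stands in for the zeroed huge page */
      	static atomic_int published;    /* stands in for the pmd being set    */
      
      	static void producer(void)
      	{
      		page_contents = 0;                          /* "clear_huge_page()" */
      		atomic_store_explicit(&published, 1,
      				      memory_order_release); /* barrier + publish */
      	}
      
      	static void consumer(void)
      	{
      		if (atomic_load_explicit(&published, memory_order_acquire))
      			printf("contents: %d\n", page_contents); /* consistent view */
      	}
      
      	int main(void)
      	{
      		producer();
      		consumer();
      		return 0;
      	}
      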
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      52f37629
    • mm: remove CONFIG_HOTPLUG ifdefs · f1cb0879
      Authored by Yijing Wang
      CONFIG_HOTPLUG is going away as an option, so clean up the
      CONFIG_HOTPLUG ifdefs in mm files.
      Signed-off-by: Yijing Wang <wangyijing@huawei.com>
      Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f1cb0879
    • mm/memcontrol.c: remove unnecessary ; · 573b400d
      Authored by Michel Lespinasse
      Just a trivial issue I stumbled on while doing something else...
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      573b400d
    • mm: reinititalise user and admin reserves if memory is added or removed · 1640879a
      Authored by Andrew Shewmaker
      Alter the admin and user reserves of the previous patches in this series
      when memory is added or removed.
      
      If memory is added and the reserves have been eliminated or increased
      above the default max, then we'll trust the admin.
      
      If memory is removed and there isn't enough free memory, then we need to
      reset the reserves.
      
      Otherwise keep the reserve set by the admin.
      
      The reserve reset code is the same as the reserve initialization code.
      
      I tested hot addition and removal by triggering it via sysfs.  The
      reserves shrunk when they were set high and memory was removed.  They
      were reset higher when memory was added again.
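      
      A minimal sketch of how such a hook can be wired up (this assumes the
      init_user_reserve()/init_admin_reserve() helpers from this series and
      omits the "trust the admin" checks described above, so treat it as an
      illustration rather than the patch itself):
      
      	#include <linux/init.h>
      	#include <linux/memory.h>
      	#include <linux/notifier.h>
      	#include <linux/printk.h>
      
      	static int reserve_mem_notifier(struct notifier_block *nb,
      					unsigned long action, void *data)
      	{
      		switch (action) {
      		case MEM_ONLINE:
      		case MEM_OFFLINE:
      			/* recompute the default reserves for the new memory size */
      			init_user_reserve();
      			init_admin_reserve();
      			break;
      		}
      		return NOTIFY_OK;
      	}
      
      	static struct notifier_block reserve_mem_nb = {
      		.notifier_call = reserve_mem_notifier,
      	};
      
      	static int __init init_reserve_notifier(void)
      	{
      		if (register_hotmemory_notifier(&reserve_mem_nb))
      			pr_err("Failed registering memory add/remove notifier\n");
      		return 0;
      	}
      	subsys_initcall(init_reserve_notifier);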
      
      [akpm@linux-foundation.org: use register_hotmemory_notifier()]
      [akpm@linux-foundation.org: init_user_reserve() and init_admin_reserve can no longer be __meminit]
      [fengguang.wu@intel.com: make init_reserve_notifier() static]
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Andrew Shewmaker <agshew@gmail.com>
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1640879a
    • mm: replace hardcoded 3% with admin_reserve_pages knob · 4eeab4f5
      Authored by Andrew Shewmaker
      Add an admin_reserve_kbytes knob to allow admins to change the hardcoded
      memory reserve to something other than 3%, which may be multiple
      gigabytes on large memory systems.  Only about 8MB is necessary to
      enable recovery in the default mode, and only a few hundred MB are
      required even when overcommit is disabled.
      
      This affects OVERCOMMIT_GUESS and OVERCOMMIT_NEVER.
      
      admin_reserve_kbytes is initialized to min(3% free pages, 8MB)
      
      I arrived at 8MB by summing the RSS of sshd or login, bash, and top.
      
      Please see first patch in this series for full background, motivation,
      testing, and full changelog.
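      
      A sketch of the default initialisation described above, consistent with
      the changelog (free/32 is the kernel's usual cheap approximation of 3%;
      treat the exact form as illustrative rather than the literal patch):
      
      	static int __init init_admin_reserve(void)
      	{
      		unsigned long free_kbytes;
      
      		free_kbytes = global_page_state(NR_FREE_PAGES) << (PAGE_SHIFT - 10);
      
      		/* min(3% of free memory, 8MB), expressed in kilobytes */
      		sysctl_admin_reserve_kbytes = min(free_kbytes / 32, 1UL << 13);
      		return 0;
      	}
      	module_init(init_admin_reserve)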
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: make init_admin_reserve() static]
      Signed-off-by: Andrew Shewmaker <agshew@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4eeab4f5
    • mm: limit growth of 3% hardcoded other user reserve · c9b1d098
      Authored by Andrew Shewmaker
      Add user_reserve_kbytes knob.
      
      Limit the growth of the memory reserved for other user processes to
      min(3% current process size, user_reserve_pages).  Only about 8MB is
      necessary to enable recovery in the default mode, and only a few hundred
      MB are required even when overcommit is disabled.
      
      user_reserve_pages defaults to min(3% free pages, 128MB)
      
      I arrived at 128MB by taking the max VSZ of sshd, login, bash, and top ...
      then adding the RSS of each.
      
      This only affects OVERCOMMIT_NEVER mode.
      
      Background
      
      1. user reserve
      
      __vm_enough_memory reserves a hardcoded 3% of the current process size for
      other applications when overcommit is disabled.  This was done so that a
      user could recover if they launched a memory hogging process.  Without the
      reserve, a user would easily run into a message such as:
      
      bash: fork: Cannot allocate memory
      
      2. admin reserve
      
      Additionally, a hardcoded 3% of free memory is reserved for root in both
      overcommit 'guess' and 'never' modes.  This was intended to prevent a
      scenario where root cannot log in and perform recovery operations.
      
      Note that this reserve shrinks, and doesn't guarantee a useful reserve.
      
      Motivation
      
      The two hardcoded memory reserves should be updated to account for current
      memory sizes.
      
      Also, the admin reserve would be more useful if it didn't shrink too much.
      
      When the current code was originally written, 1GB was considered
      "enterprise".  Now the 3% reserve can grow to multiple GB on large memory
      systems, and it only needs to be a few hundred MB at most to enable a user
      or admin to recover a system with an unwanted memory hogging process.
      
      I've found that reducing these reserves is especially beneficial for a
      specific type of application load:
      
       * single application system
       * one or few processes (e.g. one per core)
       * allocating all available memory
       * not initializing every page immediately
       * long running
      
      I've run scientific clusters with this sort of load.  A long running job
      sometimes failed many hours (weeks of CPU time) into a calculation.  They
      weren't initializing all of their memory immediately, and they weren't
      using calloc, so I put systems into overcommit 'never' mode.  These
      clusters run diskless and have no swap.
      
      However, with the current reserves, a user wishing to allocate as much
      memory as possible to one process may be prevented from using, for
      example, almost 2GB out of 32GB.
      
      The effect is less, but still significant when a user starts a job with
      one process per core.  I have repeatedly seen a set of processes
      requesting the same amount of memory fail because one of them could not
      allocate the amount of memory a user would expect to be able to allocate.
      For example, Message Passing Interface (MPI) processes, one per core.  And
      it is similar for other parallel programming frameworks.
      
      Changing this reserve code will make the overcommit never mode more useful
      by allowing applications to allocate nearly all of the available memory.
      
      Also, the new admin_reserve_kbytes will be safer than the current behavior
      since the hardcoded 3% of available memory reserve can shrink to something
      useless in the case where applications have grabbed all available memory.
      
      Risks
      
      * "bash: fork: Cannot allocate memory"
      
        The downside of the first patch-- which creates a tunable user reserve
        that is only used in overcommit 'never' mode--is that an admin can set
        it so low that a user may not be able to kill their process, even if
        they already have a shell prompt.
      
        Of course, a user can get in the same predicament with the current 3%
        reserve--they just have to launch processes until 3% becomes negligible.
      
      * root-cant-log-in problem
      
        The second patch, adding the tunable rootuser_reserve_pages, allows
        the admin to shoot themselves in the foot by setting it too small.  They
        can easily get the system into a state where root-can't-log-in.
      
        However, the new admin_reserve_kbytes will be safer than the current
        behavior since the hardcoded 3% of available memory reserve can shrink
        to something useless in the case where applications have grabbed all
        available memory.
      
      Alternatives
      
       * Memory cgroups provide a more flexible way to limit application memory.
      
         Not everyone wants to set up cgroups or deal with their overhead.
      
       * We could create a fourth overcommit mode which provides smaller reserves.
      
         The size of useful reserves may be drastically different depending
         on the whether the system is embedded or enterprise.
      
       * Force users to initialize all of their memory or use calloc.
      
         Some users don't want/expect the system to overcommit when they malloc.
         Overcommit 'never' mode is for this scenario, and it should work well.
      
      The new user and admin reserve tunables are simple to use, with low
      overhead compared to cgroups.  The patches preserve current behavior where
      3% of memory is less than 128MB, except that the admin reserve doesn't
      shrink to an unusable size under pressure.  The code allows admins to tune
      for embedded and enterprise usage.
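      
      To make the accounting concrete, here is a small self-contained model
      (userspace C, illustrative names, all values in kB) of how overcommit
      'never' mode decides the limit once the two tunables are applied:
      
      	#include <stdio.h>
      
      	static unsigned long min_ul(unsigned long a, unsigned long b)
      	{
      		return a < b ? a : b;
      	}
      
      	/* model of the OVERCOMMIT_NEVER limit with the new tunables applied */
      	static unsigned long allowed_kb(unsigned long ram_kb, unsigned long swap_kb,
      					unsigned long ratio,         /* overcommit_ratio, % */
      					unsigned long process_vm_kb, /* current process size */
      					unsigned long user_reserve_kb,
      					unsigned long admin_reserve_kb,
      					int is_admin)
      	{
      		unsigned long allowed = ram_kb * ratio / 100 + swap_kb;
      
      		if (!is_admin)
      			allowed -= admin_reserve_kb;	/* keep room for root */
      
      		/* leave min(3% of this process, user_reserve_kbytes) for others */
      		allowed -= min_ul(process_vm_kb / 32, user_reserve_kb);
      
      		return allowed;
      	}
      
      	int main(void)
      	{
      		/* 32GB RAM, no swap, ratio 100, a 24GB process, default reserves */
      		printf("%lu kB allowed\n",
      		       allowed_kb(32UL << 20, 0, 100, 24UL << 20,
      				  128UL << 10, 8UL << 10, 0));
      		return 0;
      	}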
      
      FAQ
      
       * How is the root-cant-login problem addressed?
         What happens if admin_reserve_pages is set to 0?
      
         Root is free to shoot themselves in the foot by setting
         admin_reserve_kbytes too low.
      
         On x86_64, the minimum useful reserve is:
           8MB for overcommit 'guess'
         128MB for overcommit 'never'
      
         admin_reserve_pages defaults to min(3% free memory, 8MB)
      
         So, anyone switching to 'never' mode needs to adjust
         admin_reserve_pages.
      
       * How do you calculate a minimum useful reserve?
      
         A user or the admin needs enough memory to login and perform
         recovery operations, which includes, at a minimum:
      
         sshd or login + bash (or some other shell) + top (or ps, kill, etc.)
      
         For overcommit 'guess', we can sum resident set sizes (RSS)
         because we only need enough memory to handle what the recovery
         programs will typically use. On x86_64 this is about 8MB.
      
         For overcommit 'never', we can take the max of their virtual sizes (VSZ)
         and add the sum of their RSS. We use VSZ instead of RSS because this mode
         forces us to ensure we can fulfill all of the requested memory allocations--
         even if the programs only use a fraction of what they ask for.
         On x86_64 this is about 128MB.
      
         When swap is enabled, reserves are useful even when they are as
         small as 10MB, regardless of overcommit mode.
      
         When both swap and overcommit are disabled, then the admin should
         tune the reserves higher to be absolutely safe. Over 230MB each
         was safest in my testing.
      
       * What happens if user_reserve_pages is set to 0?
      
         Note, this only affects overcommit 'never' mode.
      
         Then a user will be able to allocate all available memory minus
         admin_reserve_kbytes.
      
         However, they will easily see a message such as:
      
         "bash: fork: Cannot allocate memory"
      
         And they won't be able to recover/kill their application.
         The admin should be able to recover the system if
         admin_reserve_kbytes is set appropriately.
      
       * What's the difference between overcommit 'guess' and 'never'?
      
         "Guess" allows an allocation if there are enough free + reclaimable
         pages. It has a hardcoded 3% of free pages reserved for root.
      
         "Never" allows an allocation if there is enough swap + a configurable
         percentage (default is 50) of physical RAM. It has a hardcoded 3% of
         free pages reserved for root, like "Guess" mode. It also has a
         hardcoded 3% of the current process size reserved for additional
         applications.
      
       * Why is overcommit 'guess' not suitable even when an app eventually
         writes to every page?  It takes free pages, file pages, available
         swap pages, and reclaimable slab pages into consideration.  In other
         words, if all of these pages are available, why isn't overcommit
         'guess' suitable?
      
         Because it only looks at the present state of the system. It
         does not take into account the memory that other applications have
         malloced, but haven't initialized yet. It overcommits the system.
      
      Test Summary
      
      There was little change in behavior in the default overcommit 'guess'
      mode with swap enabled before and after the patch. This was expected.
      
      Systems run most predictably (i.e. no oom kills) in overcommit 'never'
      mode with swap enabled. This also allowed the most memory to be allocated
      to a user application.
      
      Overcommit 'guess' mode without swap is a bad idea. It is easy to
      crash the system. None of the other tested combinations crashed.
      This matches my experience on the Roadrunner supercomputer.
      
      Without the tunable user reserve, a system in overcommit 'never' mode
      and without swap does not allow the user to recover, although the
      admin can.
      
      With the new tunable reserves, a system in overcommit 'never' mode
      and without swap can be configured to:
      
      1. maximize user-allocatable memory, running close to the edge of
      recoverability
      
      2. maximize recoverability, sacrificing allocatable memory to
      ensure that a user cannot take down a system
      
      Test Description
      
      Fedora 18 VM - 4 x86_64 cores, 5725MB RAM, 4GB Swap
      
      System is booted into multiuser console mode, with unnecessary services
      turned off. Caches were dropped before each test.
      
      Hogs are user memtester processes that attempt to allocate all free memory
      as reported by /proc/meminfo
      
      In overcommit 'never' mode, memory_ratio=100
      
      Test Results
      
      3.9.0-rc1-mm1
      
      Overcommit | Swap | Hogs | MB Got/Wanted | OOMs | User Recovery | Admin Recovery
      ----------   ----   ----   -------------   ----   -------------   --------------
      guess        yes    1      5432/5432       no     yes             yes
      guess        yes    4      5444/5444       1      yes             yes
      guess        no     1      5302/5449       no     yes             yes
      guess        no     4      -               crash  no              no
      
      never        yes    1      5460/5460       1      yes             yes
      never        yes    4      5460/5460       1      yes             yes
      never        no     1      5218/5432       no     no              yes
      never        no     4      5203/5448       no     no              yes
      
      3.9.0-rc1-mm1-tunablereserves
      
      User and Admin Recovery show their respective reserves, if applicable.
      
      Overcommit | Swap | Hogs | MB Got/Wanted | OOMs | User Recovery | Admin Recovery
      ----------   ----   ----   -------------   ----   -------------   --------------
      guess        yes    1      5419/5419       no     - yes           8MB yes
      guess        yes    4      5436/5436       1      - yes           8MB yes
      guess        no     1      5440/5440       *      - yes           8MB yes
      guess        no     4      -               crash  - no            8MB no
      
      * process would successfully mlock, then the oom killer would pick it
      
      never        yes    1      5446/5446       no     10MB yes        20MB yes
      never        yes    4      5456/5456       no     10MB yes        20MB yes
      never        no     1      5387/5429       no     128MB no        8MB barely
      never        no     1      5323/5428       no     226MB barely    8MB barely
      never        no     1      5323/5428       no     226MB barely    8MB barely
      
      never        no     1      5359/5448       no     10MB no         10MB barely
      
      never        no     1      5323/5428       no     0MB no          10MB barely
      never        no     1      5332/5428       no     0MB no          50MB yes
      never        no     1      5293/5429       no     0MB no          90MB yes
      
      never        no     1      5001/5427       no     230MB yes       338MB yes
      never        no     4*     4998/5424       no     230MB yes       338MB yes
      
      * more memtesters were launched, able to allocate approximately another 100MB
      
      Future Work
      
       - Test larger memory systems.
      
       - Test an embedded image.
      
       - Test other architectures.
      
       - Time malloc microbenchmarks.
      
       - Would it be useful to be able to set overcommit policy for
         each memory cgroup?
      
       - Some lines are slightly above 80 chars.
         Perhaps define a macro to convert between pages and kb?
         Other places in the kernel do this.
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: make init_user_reserve() static]
      Signed-off-by: Andrew Shewmaker <agshew@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c9b1d098
    • mm/slub.c: use register_hotmemory_notifier() · 3ac38faa
      Authored by Andrew Morton
      Squishes a statement-with-no-effect warning, removes some ifdefs and
      shrinks .text by 2 bytes.
      
      Note that this code fails to check for blocking_notifier_chain_register()
      failures.
      
      Cc: Pekka Enberg <penberg@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3ac38faa
    • page_alloc: make setup_nr_node_ids() usable for arch init code · f9872caf
      Authored by Cody P Schafer
      powerpc and x86 were opencoding copies of setup_nr_node_ids(), which
      page_alloc provides but makes static.  Make it available to the archs in
      linux/mm.h.
      Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f9872caf
    • mm: speedup in __early_pfn_to_nid · 7c243c71
      Authored by Russ Anderson
      When booting on a large memory system, the kernel spends considerable
      time in memmap_init_zone() setting up memory zones.  Analysis shows
      significant time spent in __early_pfn_to_nid().
      
      The routine memmap_init_zone() checks each PFN to verify the nid is
      valid.  __early_pfn_to_nid() sequentially scans the list of pfn ranges
      to find the right range and returns the nid.  This does not scale well.
      On a 4 TB (single rack) system there are 308 memory ranges to scan.  The
      higher the PFN the more time spent sequentially spinning through memory
      ranges.
      
      Since memmap_init_zone() increments pfn, it will almost always be
      looking for the same range as the previous pfn, so check that range
      first.  If it is in the same range, return that nid.  If not, scan the
      list as before.
      
      A 4 TB (single rack) UV1 system takes 512 seconds to get through the
      zone code.  This performance optimization reduces the time by 189
      seconds, a 36% improvement.
      
      A 2 TB (single rack) UV2 system goes from 212.7 seconds to 99.8 seconds,
      a 112.9 second (53%) reduction.
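      
      The idea can be illustrated with a small self-contained model
      (hypothetical data structures, not the kernel implementation): cache the
      last matching range so that consecutive, increasing PFNs take the fast
      path.
      
      	#include <stddef.h>
      
      	struct pfn_range { unsigned long start, end; int nid; };
      
      	/* e.g. the 308 memory ranges of the 4 TB system mentioned above */
      	static const struct pfn_range *ranges;
      	static size_t nr_ranges;
      
      	static unsigned long last_start, last_end;
      	static int last_nid = -1;
      
      	int early_pfn_to_nid_model(unsigned long pfn)
      	{
      		size_t i;
      
      		/* fast path: almost every caller asks about the same range again */
      		if (last_nid != -1 && last_start <= pfn && pfn < last_end)
      			return last_nid;
      
      		/* slow path: the original sequential scan */
      		for (i = 0; i < nr_ranges; i++) {
      			if (ranges[i].start <= pfn && pfn < ranges[i].end) {
      				last_start = ranges[i].start;
      				last_end = ranges[i].end;
      				last_nid = ranges[i].nid;
      				return last_nid;
      			}
      		}
      		return -1;	/* pfn not covered by any range */
      	}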
      
      [akpm@linux-foundation.org: make the statics __meminitdata]
      [akpm@linux-foundation.org: fix comment formatting]
      [akpm@linux-foundation.org: fix ia64, per yinghai]
      [akpm@linux-foundation.org: add missing semicolon, per Tony]
      Signed-off-by: Russ Anderson <rja@sgi.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Tested-by: "Luck, Tony" <tony.luck@intel.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Lin Feng <linfeng@cn.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7c243c71
    • mm: page_alloc: avoid marking zones full prematurely after zone_reclaim() · fed2719e
      Authored by Mel Gorman
      The following problem was reported against a distribution kernel when
      zone_reclaim was enabled but the same problem applies to the mainline
      kernel.  The reproduction case was as follows
      
      1. Run numactl -m +0 dd if=largefile of=/dev/null
         This allocates a large number of clean pages in node 0
      
      2. numactl -N +0 memhog 0.5*Mg
         This starts a memory-using application in node 0.
      
      The expected behaviour is that the clean pages get reclaimed and the
      application uses node 0 for its memory.  The observed behaviour was that
      the memory for the memhog application was allocated off-node since
      commits cd38b115 ("mm: page allocator: initialise ZLC for first zone
      eligible for zone_reclaim") and commit 76d3fbf8 ("mm: page
      allocator: reconsider zones for allocation after direct reclaim").
      
      The assumption of those patches was that it was always preferable to
      allocate quickly than stall for long periods of time and they were meant
      to take care that the zone was only marked full when necessary but an
      important case was missed.
      
      In the allocator fast path, only the low watermarks are checked.  If the
      zones free pages are between the low and min watermark then allocations
      from the allocators slow path will succeed.  However, zone_reclaim will
      only reclaim SWAP_CLUSTER_MAX or 1<<order pages.  There is no guarantee
      that this will meet the low watermark causing the zone to be marked full
      prematurely.
      
      This patch only marks the zone full after zone_reclaim() if the min
      watermarks are being checked or if page reclaim failed to make
      sufficient progress.
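      
      The shape of the fix, roughly (a simplified sketch of the zone_reclaim()
      handling in the allocator's zonelist scan, not the literal diff):
      
      	ret = zone_reclaim(zone, gfp_mask, order);
      	switch (ret) {
      	case ZONE_RECLAIM_NOSCAN:
      	case ZONE_RECLAIM_FULL:
      		/* did not scan, or scanned and found nothing reclaimable */
      		continue;
      	default:
      		/* did we reclaim enough? */
      		if (zone_watermark_ok(zone, order, mark, classzone_idx,
      				      alloc_flags))
      			goto try_this_zone;
      		/*
      		 * Only mark the zone full if the min watermark was being
      		 * checked, or if reclaim made some progress yet still missed
      		 * the watermark; otherwise the slow path (which checks the
      		 * min watermark) could still succeed and the zone would have
      		 * been marked full prematurely.
      		 */
      		if ((alloc_flags & ALLOC_WMARK_MASK) == ALLOC_WMARK_MIN ||
      		    ret == ZONE_RECLAIM_SOME)
      			goto this_zone_full;
      		continue;
      	}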
      
      [mhocko@suse.cz: fix alloc_flags test]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reported-by: Hedi Berriche <hedi@sgi.com>
      Tested-by: Hedi Berriche <hedi@sgi.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Reviewed-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fed2719e
    • sparse-vmemmap: specify vmemmap population range in bytes · 0aad818b
      Authored by Johannes Weiner
      The sparse code, when asking the architecture to populate the vmemmap,
      specifies the section range as a starting page and a number of pages.
      
      This is an awkward interface, because none of the arch-specific code
      actually thinks of the range in terms of 'struct page' units and always
      translates it to bytes first.
      
      In addition, later patches mix huge page and regular page backing for
      the vmemmap.  For this, they need to call vmemmap_populate_basepages()
      on sub-section ranges with PAGE_SIZE and PMD_SIZE in mind.  But these
      are not necessarily multiples of the 'struct page' size and so this unit
      is too coarse.
      
      Just translate the section range into bytes once in the generic sparse
      code, then pass byte ranges down the stack.
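      
      Sketched as prototypes, the interface change amounts to the following
      (x86-64 shown as an example; the exact pre-patch signature varies
      slightly per architecture):
      
      	/* before: section range expressed in 'struct page' units */
      	int vmemmap_populate(struct page *start_page, unsigned long nr_pages, int node);
      
      	/* after: the generic sparse code converts once, arches see bytes */
      	int vmemmap_populate(unsigned long start, unsigned long end, int node);
      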
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Bernhard Schmidt <Bernhard.Schmidt@lrz.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Tested-by: David S. Miller <davem@davemloft.net>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0aad818b
    • mm: try harder to allocate vmemmap blocks · 055e4fd9
      Authored by Ben Hutchings
      Hot-adding memory on x86_64 normally requires huge page allocation.
      When this is done to a VM guest, it's usually because the system is
      already tight on memory, so the request tends to fail.  Try to avoid
      this by adding __GFP_REPEAT to the allocation flags.
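      
      The change is essentially one of allocation flags.  A sketch of the idea
      (names follow mm/sparse-vmemmap.c; treat the details as illustrative):
      
      	/* vmemmap_alloc_block(): retry harder before giving up on the
      	 * huge-page-sized backing allocation. */
      	page = alloc_pages_node(node,
      				GFP_KERNEL | __GFP_ZERO | __GFP_REPEAT,
      				get_order(size));
      	if (page)
      		return page_address(page);
      	return NULL;	/* allocation failed */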
      
      Addresses http://bugs.debian.org/699913
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reported-by: Bernhard Schmidt <Bernhard.Schmidt@lrz.de>
      Tested-by: Bernhard Schmidt <Bernhard.Schmidt@lrz.de>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      055e4fd9
    • mm, hugetlb: include hugepages in meminfo · 949f7ec5
      Authored by David Rientjes
      Particularly in oom conditions, it's troublesome that hugetlb memory is
      not displayed.  All other meminfo that is emitted will not add up to
      what is expected, and there is no artifact left in the kernel log to
      show that a potentially significant amount of memory is actually
      allocated as hugepages which are not available to be reclaimed.
      
      Booting with hugepages=8192 on the command line, this memory is now
      shown in oom conditions.  For example, with echo m >
      /proc/sysrq-trigger:
      
        Node 0 hugepages_total=2048 hugepages_free=2048 hugepages_surp=0 hugepages_size=2048kB
        Node 1 hugepages_total=2048 hugepages_free=2048 hugepages_surp=0 hugepages_size=2048kB
        Node 2 hugepages_total=2048 hugepages_free=2048 hugepages_surp=0 hugepages_size=2048kB
        Node 3 hugepages_total=2048 hugepages_free=2048 hugepages_surp=0 hugepages_size=2048kB
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      949f7ec5
    • mm: merging memory blocks resets mempolicy · 1444f92c
      Authored by Hampson, Steven T
      Using mbind to change the mempolicy to MPOL_BIND on several adjacent
      mmapped blocks may result in a reset of the mempolicy to MPOL_DEFAULT in
      vma_adjust.
      
      Test code.  Correct result is three lines containing "OK".
      
      #include <stdio.h>
      #include <unistd.h>
      #include <sys/mman.h>
      #include <numaif.h>
      #include <errno.h>
      
      /* gcc mbind_test.c -lnuma -o mbind_test -Wall */
      #define MAXNODE 4096
      
      void allocate()
      {
      	int ret;
      	int len;
      	int policy = -1;
      	unsigned char *p;
      	unsigned long mask[MAXNODE] = { 0 };
      	unsigned long retmask[MAXNODE] = { 0 };
      
      	len = getpagesize() * 0x2fc00;
      	p = mmap(NULL, len, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS,
      		 -1, 0);
      	if (p == MAP_FAILED)
      		printf("mmap err: %d\n", errno);
      
      	mask[0] = 1;
      	ret = mbind(p, len, MPOL_BIND, mask, MAXNODE, 0);
      	if (ret < 0)
      		printf("mbind err: %d %d\n", ret, errno);
      	ret = get_mempolicy(&policy, retmask, MAXNODE, p, MPOL_F_ADDR);
      	if (ret < 0)
      		printf("get_mempolicy err: %d %d\n", ret, errno);
      
      	if (policy == MPOL_BIND)
      		printf("OK\n");
      	else
      		printf("ERROR: policy is %d\n", policy);
      }
      
      int main()
      {
      	allocate();
      	allocate();
      	allocate();
      	return 0;
      }
      Signed-off-by: Steven T Hampson <steven.t.hampson@intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1444f92c
    • mm: allow arch code to control the user page table ceiling · 6ee8630e
      Authored by Hugh Dickins
      On architectures where a pgd entry may be shared between user and kernel
      (e.g.  ARM+LPAE), freeing page tables needs a ceiling other than 0.
      This patch introduces a generic USER_PGTABLES_CEILING that arch code can
      override.  It is the responsibility of the arch code setting the ceiling
      to ensure the complete freeing of the page tables (usually in
      pgd_free()).
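      
      The generic side of this is small.  A sketch of the default and of an
      exit_mmap()-style caller (the arch override is where the real policy
      lives):
      
      	/* asm-generic: default ceiling, overridable by the architecture */
      	#ifndef USER_PGTABLES_CEILING
      	#define USER_PGTABLES_CEILING	0UL
      	#endif
      
      	/* mm/mmap.c, exit_mmap()-style use */
      	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);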
      
      [catalin.marinas@arm.com: commit log; shift_arg_pages(), asm-generic/pgtables.h changes]
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: <stable@vger.kernel.org>	[3.3+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6ee8630e
    • memcg: do not check for do_swap_account in mem_cgroup_{read,write,reset} · acb6d558
      Authored by Michal Hocko
      Since commit 2d11085e ("memcg: do not create memsw files if swap
      accounting is disabled") memsw files are created only if memcg swap
      accounting is enabled so it doesn't make any sense to check for it
      explicitly in mem_cgroup_read(), mem_cgroup_write() and
      mem_cgroup_reset().
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Tejun Heo <tj@kernel.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      acb6d558
    • mmap: find_vma: remove the WARN_ON_ONCE(!mm) check · ee5df057
      Authored by Zhang Yanfei
      Remove the WARN_ON_ONCE(!mm) check as the comment suggested.  Kernel
      code calls find_vma only when it is absolutely sure that the mm_struct
      arg to it is non-NULL.
      Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: k80c <k80ck80c@gmail.com>
      Cc: Michel Lespinasse <walken@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ee5df057
    • kexec, vmalloc: export additional vmalloc layer information · 13ba3fcb
      Authored by Atsushi Kumagai
      Now, vmap_area_list is exported as VMCOREINFO for makedumpfile to get
      the start address of vmalloc region (vmalloc_start).  The address which
      contains vmalloc_start value is represented as below:
      
        vmap_area_list.next - OFFSET(vmap_area.list) + OFFSET(vmap_area.va_start)
      
      However, both OFFSET(vmap_area.va_start) and OFFSET(vmap_area.list)
      aren't exported as VMCOREINFO.
      
      So this patch exports them externally with small cleanup.
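      
      In terms of the kexec VMCOREINFO macros, the export described above
      boils down to something like the following (sketch; see kernel/kexec.c
      for the real list):
      
      	VMCOREINFO_SYMBOL(vmap_area_list);
      	VMCOREINFO_OFFSET(vmap_area, va_start);
      	VMCOREINFO_OFFSET(vmap_area, list);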
      
      [akpm@linux-foundation.org: vmalloc.h should include list.h for list_head]
      Signed-off-by: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Dave Anderson <anderson@redhat.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      13ba3fcb
    • mm, vmalloc: remove list management of vmlist after initializing vmalloc · 4341fa45
      Authored by Joonsoo Kim
      Now, there is no need to maintain vmlist after initializing vmalloc.  So
      remove related code and data structure.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Dave Anderson <anderson@redhat.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4341fa45
    • mm, vmalloc: export vmap_area_list, instead of vmlist · f1c4069e
      Authored by Joonsoo Kim
      Although our intention is to unexport the internal structure entirely,
      there is one exception for kexec: kexec dumps the address of vmlist and
      makedumpfile uses this information.
      
      We are about to remove vmlist, so makedumpfile needs another way to
      retrieve information about the vmalloc layer.  For this purpose, export
      vmap_area_list instead of vmlist.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Dave Anderson <anderson@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f1c4069e
    • mm, vmalloc: iterate vmap_area_list, instead of vmlist, in vmallocinfo() · d4033afd
      Authored by Joonsoo Kim
      This patch is a preparatory step for removing vmlist entirely.  To that
      end, change the code that iterates vmlist to iterate vmap_area_list
      instead.  It is a fairly trivial change, but one thing should be noted.
      
      Using vmap_area_list in vmallocinfo() introduces an ordering problem on
      SMP systems.  In s_show(), we retrieve some values from vm_struct, but
      vm_struct's fields are not fully set up at the moment va->vm is
      assigned.  Full setup is signalled by clearing the VM_UNLIST flag,
      without holding a lock.  So when we see that VM_UNLIST has been cleared,
      it is not guaranteed that vm_struct already has proper values from the
      point of view of other CPUs, and we need smp_[rw]mb to ensure that the
      proper values are visible once VM_UNLIST is observed as cleared.
      
      Therefore, this patch not only changes the iterated list but also adds
      the appropriate smp_[rw]mb in the right places.
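      
      A sketch of the pairing this requires (illustrative, not the literal
      patch): the setup side publishes the fully initialised vm_struct by
      clearing VM_UNLIST only after a write barrier, and s_show() issues the
      matching read barrier once it observes the flag cleared.
      
      	/* setup side: publish the fully initialised vm_struct */
      	smp_wmb();			/* order the vm_struct stores before the flag */
      	vm->flags &= ~VM_UNLIST;
      
      	/* s_show() side */
      	if (vm->flags & VM_UNLIST)
      		return 0;		/* skip: not fully initialised yet */
      	smp_rmb();			/* pairs with the smp_wmb() above */
      	/* vm->size, vm->caller, ... are now safe to read */
      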
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Dave Anderson <anderson@redhat.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d4033afd
    • mm, vmalloc: iterate vmap_area_list in get_vmalloc_info() · f98782dd
      Authored by Joonsoo Kim
      This patch is a preparatory step for removing vmlist entirely.  To that
      end, change the code that iterates vmlist to iterate vmap_area_list
      instead.  It is a fairly trivial change, but one thing should be noted.
      
      vmlist lacks information about some areas in the vmalloc address space.
      For example, vm_map_ram() allocates an area in the vmalloc address space
      but does not link it into vmlist.  Since it is better to provide full
      information about the vmalloc address space, do not go through va->vm
      and use vmap_area directly.  This makes get_vmalloc_info() more precise.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Dave Anderson <anderson@redhat.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f98782dd
    • mm, vmalloc: iterate vmap_area_list, instead of vmlist in vread/vwrite() · e81ce85f
      Authored by Joonsoo Kim
      Now that va->vm cannot be discarded while vmap_area_lock is held, we can
      safely access va->vm when iterating vmap_area_list under that lock.
      With this property, change the vmlist iteration in vread/vwrite() to
      iterate vmap_area_list instead.
      
      There is one small difference related to locking: vmlist_lock is a
      mutex, while vmap_area_lock is a spinlock, so this may introduce some
      spinning overhead while vread/vwrite() is executing.  But these are
      debug-oriented functions, so the overhead is not a real problem for the
      common case.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Dave Anderson <anderson@redhat.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e81ce85f
    • mm, vmalloc: protect va->vm by vmap_area_lock · c69480ad
      Authored by Joonsoo Kim
      Inserting and removing an entry in vmlist takes linear time, which is
      inefficient.  The following patches will remove vmlist entirely; this
      patch is a preparatory step for that.
      
      To remove vmlist, the code that iterates vmlist has to be changed to
      iterate vmap_area_list.  Before implementing that, we should make sure
      that accessing va->vm while iterating vmap_area_list cannot race.  This
      patch ensures that there is no race condition on vm_struct access when
      iterating vmap_area_list.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Dave Anderson <anderson@redhat.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c69480ad
    • mm, vmalloc: move get_vmalloc_info() to vmalloc.c · db3808c1
      Authored by Joonsoo Kim
      get_vmalloc_info() currently lives in fs/proc/mmu.c.  There is no reason
      the code must be there: its implementation needs vmlist_lock and
      iterates vmlist, which may be an internal data structure of vmalloc.
      
      For maintainability it is preferable that vmlist_lock and vmlist are
      used only in vmalloc.c, so move the code there.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Dave Anderson <anderson@redhat.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      db3808c1
    • mm: make snapshotting pages for stable writes a per-bio operation · 71368511
      Authored by Darrick J. Wong
      Walking a bio's page mappings has proved problematic, so create a new
      bio flag to indicate that a bio's data needs to be snapshotted in order
      to guarantee stable pages during writeback.  Next, for the one user
      (ext3/jbd) of snapshotting, hook all the places where writes can be
      initiated without PG_writeback set, and set BIO_SNAP_STABLE there.
      
      We must also flag journal "metadata" bios for stable writeout, since
      file data can be written through the journal.  Finally, the
      MS_SNAP_STABLE mount flag (only used by ext3) is now superfluous, so get
      rid of it.
      
      [akpm@linux-foundation.org: rename _submit_bh()'s `flags' to `bio_flags', delobotomize the _submit_bh declaration]
      [akpm@linux-foundation.org: teeny cleanup]
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Artem Bityutskiy <dedekind1@gmail.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      71368511
    • mm/hugetlb: add more arch-defined huge_pte functions · 106c992a
      Authored by Gerald Schaefer
      Commit abf09bed ("s390/mm: implement software dirty bits")
      introduced another difference in the pte layout vs.  the pmd layout on
      s390, thoroughly breaking the s390 support for hugetlbfs.  This requires
      replacing some more pte_xxx functions in mm/hugetlbfs.c with a
      huge_pte_xxx version.
      
      This patch introduces those huge_pte_xxx functions and their generic
      implementation in asm-generic/hugetlb.h, which will now be included on
      all architectures supporting hugetlbfs apart from s390.  This change
      will be a no-op for those architectures.
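      
      For the non-s390 architectures the new helpers are thin wrappers.  A
      sketch in the style of asm-generic/hugetlb.h (two representative helpers
      shown):
      
      	static inline pte_t huge_pte_wrprotect(pte_t pte)
      	{
      		return pte_wrprotect(pte);
      	}
      
      	static inline pte_t huge_pte_mkdirty(pte_t pte)
      	{
      		return pte_mkdirty(pte);
      	}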
      
      [akpm@linux-foundation.org: fix warning]
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>	[for !s390 parts]
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      106c992a
    • memcg: further simplify mem_cgroup_iter · 16248d8f
      Authored by Michal Hocko
      mem_cgroup_iter currently does two things.  It takes care of the
      housekeeping (reference counting, reclaim cookie) and it iterates
      through a hierarchy tree (using the generic cgroup tree walk).  The code
      would be much easier to follow if we move the iteration outside of the
      function (to __mem_cgroup_iter_next) so the distinction is clearer.
      This patch doesn't introduce any functional changes.
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Ying Han <yinghan@google.com>
      Cc: Tejun Heo <htejun@gmail.com>
      Cc: Glauber Costa <glommer@parallels.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      16248d8f
    • memcg: simplify mem_cgroup_iter · 19f39402
      Authored by Michal Hocko
      The current implementation of mem_cgroup_iter has to consider both css
      and memcg to find out whether no group has been found (css==NULL - aka
      the loop is completed) and that no memcg is associated with the found
      node (!memcg - aka css_tryget failed because the group is no longer
      alive).  This leads to awkward tweaks like tests for css && !memcg to
      skip the current node.
      
      It will be much easier if we get rid of the css variable altogether and
      rely only on memcg.  In order to do that, the iteration part has to skip
      dead nodes.  This sounds natural to me, and as a nice side effect we get
      a simple invariant: memcg is always alive when non-NULL, and all nodes
      have been visited otherwise.
      
      We could get rid of the surrounding while loop but keep it in for now to
      make review easier.  It will go away in the following patch.
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Ying Han <yinghan@google.com>
      Cc: Tejun Heo <htejun@gmail.com>
      Cc: Glauber Costa <glommer@parallels.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      19f39402
    • memcg: relax memcg iter caching · 5f578161
      Authored by Michal Hocko
      Now that the per-node-zone-priority iterator caches memory cgroups
      rather than their css ids, we have to be careful to remove them from the
      iterator when they are on the way out; otherwise they might live for an
      unbounded amount of time even though their group is already gone (until
      global/targeted reclaim visits that zone and priority, finds out the
      group is dead and lets it find its final rest).
      
      We can fix this issue by relaxing rules for the last_visited memcg.
      Instead of taking a reference to the css before it is stored into
      iter->last_visited we can just store its pointer and track the number of
      removed groups from each memcg's subhierarchy.
      
      This number is stored in the iterator every time a memcg is cached.  If
      the iterator's count doesn't match the current walker root's count, we
      start from the root again.  The group counter is incremented up the
      hierarchy every time a group is removed.
      
      The iter_lock can be dropped because racing iterators cannot leak the
      reference anymore as the reference count is not elevated for
      last_visited when it is cached.
      
      Locking rules got a bit complicated by this change though.  The iterator
      primarily relies on rcu read lock which makes sure that once we see a
      valid last_visited pointer then it will be valid for the whole RCU walk.
      smp_rmb makes sure that dead_count is read before last_visited and
      last_dead_count while smp_wmb makes sure that last_visited is updated
      before last_dead_count so the up-to-date last_dead_count cannot point to
      an outdated last_visited.  css_tryget then makes sure that the
      last_visited is still alive in case the iteration races with the cached
      group removal (css is invalidated before mem_cgroup_css_offline
      increments dead_count).
      
      In short:
      mem_cgroup_iter
       rcu_read_lock()
       dead_count = atomic_read(parent->dead_count)
       smp_rmb()
       if (dead_count != iter->last_dead_count)
       	last_visited POSSIBLY INVALID -> last_visited = NULL
       if (!css_tryget(iter->last_visited))
       	last_visited DEAD -> last_visited = NULL
       next = find_next(last_visited)
       css_tryget(next)
       css_put(last_visited) 	// css would be invalidated and parent->dead_count
       			// incremented if this was the last reference
       iter->last_visited = next
       smp_wmb()
       iter->last_dead_count = dead_count
       rcu_read_unlock()
      
      cgroup_rmdir
       cgroup_destroy_locked
        atomic_add(CSS_DEACT_BIAS, &css->refcnt) // subsequent css_tryget fail
         mem_cgroup_css_offline
          mem_cgroup_invalidate_reclaim_iterators
           while(parent = parent_mem_cgroup)
           	atomic_inc(parent->dead_count)
        css_put(css) // last reference held by cgroup core
      
      Spotted by Ying Han.
      
      Original idea from Johannes Weiner.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Ying Han <yinghan@google.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Tejun Heo <htejun@gmail.com>
      Cc: Glauber Costa <glommer@parallels.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5f578161
    • memcg: rework mem_cgroup_iter to use cgroup iterators · 542f85f9
      Authored by Michal Hocko
      mem_cgroup_iter curently relies on css->id when walking down a group
      hierarchy tree.  This is really awkward because the tree walk depends on
      the groups creation ordering.  The only guarantee is that a parent node is
      visited before its children.
      
      Example:
      
       1) mkdir -p a a/d a/b/c
       2) mkdir -p a/b/c a/d
      
      Will create the same trees but the tree walks will be different:
      
       1) a, d, b, c
       2) a, b, c, d
      
      Commit 574bd9f7 ("cgroup: implement generic child / descendant walk
      macros") has introduced generic cgroup tree walkers which provide either
      pre-order or post-order tree walk.  This patch converts css->id based
      iteration to pre-order tree walk to keep the semantic with the original
      iterator where parent is always visited before its subtree.
      
      cgroup_for_each_descendant_pre suggests using post_create and
      pre_destroy for proper synchronization with group addition and removal.
      This implementation doesn't use those, because a new memory cgroup is
      already initialized sufficiently for iteration in mem_cgroup_css_alloc,
      and css reference counting enforces that the group is alive for both the
      last seen cgroup and the found one, or else it signals that the group is
      dead and should be skipped.
      
      If the reclaim cookie is used we need to store the last visited group
      into the iterator so we have to be careful that it doesn't disappear in
      the mean time.  Elevated reference count on the css keeps it alive even
      though the group have been removed (parked waiting for the last dput so
      that it can be freed).
      
      Per node-zone-prio iter_lock has been introduced to ensure that
      css_tryget and iter->last_visited is set atomically.  Otherwise two
      racing walkers could both take a references and only one release it
      leading to a css leak (which pins cgroup dentry).
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Ying Han <yinghan@google.com>
      Cc: Tejun Heo <htejun@gmail.com>
      Cc: Glauber Costa <glommer@parallels.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      542f85f9
    • memcg: keep prev's css alive for the whole mem_cgroup_iter · c40046f3
      Authored by Michal Hocko
      The patchset tries to make mem_cgroup_iter saner in the way it walks
      hierarchies.  css->id based traversal is far from ideal, as it is not
      deterministic: it depends on the creation ordering.  In addition, css_id
      is considered a burden by the cgroup maintainers because it is quite
      some code and memcg is its last user.  After this series only the swap
      accounting uses css_id, and that will be dealt with in a follow-up.
      
      Diffstat (if we exclude removed/added comments) looks quite
      promising. We got rid of some code:
      
        $ git diff mmotm... | grep -v "^[+-][[:space:]]*[/ ]\*" | diffstat
         b/include/linux/cgroup.h |    3 ---
         kernel/cgroup.c          |   33 ---------------------------------
         mm/memcontrol.c          |    4 +++-
         3 files changed, 3 insertions(+), 37 deletions(-)
      
      The first patch is just preparatory: it changes when we release the
      css of the previously returned memcg.  Nothing controversial.
      
      The second patch is the core of the patchset: it replaces the css_id
      based css_get_next with the generic cgroup pre-order walk.  This
      brings some challenges for caching the last visited group during
      reclaim (mem_cgroup_per_zone::reclaim_iter).  We have to use memcg
      pointers directly now, which means we have to keep a reference to
      those groups' css to keep them alive.
      
      I also folded the iter_lock introduced by
      https://lkml.org/lkml/2013/1/3/295 in the previous version into this
      patch.  Johannes felt that the race I was describing should be mostly
      harmless, and I haven't been able to trigger it, so the lock doesn't
      deserve its own patch.  It is still needed temporarily, though,
      because the reference counting on iter->last_visited depends on it.
      It will go away with the next patch.
      
      The next patch fixes up an unbounded cgroup removal holdoff caused by
      the elevated css refcount.  The issue was observed by Ying Han.
      Johannes wasn't impressed by the previous version of the fix
      (https://lkml.org/lkml/2013/2/8/379), which cleaned up pending
      references during mem_cgroup_css_offline when a group is removed.  He
      suggested a different approach where the iterator checks whether a
      cached memcg is still valid or not.  More on that in the patch, but
      the basic idea is that every memcg tracks the number of removed
      subgroups and the iterator records this number when a group is
      cached.  These numbers are compared before iter->last_visited is
      used, and the iteration is restarted if the cached group is invalid.
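
      Roughly, the invalidation can be sketched like this (the field and
      helper names here are made up for illustration, not the final code):

        /* when caching a group, remember how many removals were seen so far */
        iter->last_visited = memcg;
        iter->last_dead_count = root->dead_count;

        /* on the next iteration, validate the cache before trusting it */
        if (iter->last_dead_count != root->dead_count ||
            !css_tryget(&iter->last_visited->css))
                iter->last_visited = NULL;      /* a subgroup went away: restart */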
      
      The fourth and fifth patches are an attempt to simplify
      mem_cgroup_iter.  The css juggling is removed and the iteration logic
      is moved to a helper so that the reference counting and the iteration
      are separated.
      
      The last patch just removes css_get_next as there is no user for it any
      longer.
      
      My testing looked as follows:
              A (use_hierarchy=1, limit_in_bytes=150M)
             /|\
            1 2 3
      
      Child groups were created so that their number never exceeded 3, and
      their limits were random values between 50 and 100M.  Each group
      hosts a kernel build (starting with tar -xf so the tree is not
      shared, followed by make -jNUM_CPUs/3) which is terminated after a
      random time (up to 5 minutes), and then the group is removed.
      
      This should exercise both leaf and hierarchical reclaim as well as
      races with cgroup removal, and debugging messages I added on top
      confirmed that it does.  100 groups were created during the test.
      
      This patch:
      
      css reference counting keeps the cgroup alive even after it has been
      removed.  mem_cgroup_iter relies on this fact and takes a reference
      to the returned group.  The reference is then released on the next
      iteration or by mem_cgroup_iter_break.  mem_cgroup_iter currently
      releases the reference right after it gets the last css_id.
      
      This is correct because neither prev's memcg nor its cgroup are
      accessed after that point.  This will change in the next patch, and
      we will need to keep the group alive a bit longer, so move the
      css_put to the end of the function.
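
      The resulting shape is roughly the following (a simplified sketch of
      the intent, not the literal diff):

        /* at the very end of mem_cgroup_iter(), instead of right after the
         * css_id lookup */
        if (prev && prev != root)
                css_put(&prev->css);
        return memcg;
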
      Signed-off-by: NMichal Hocko <mhocko@suse.cz>
      Acked-by: NKAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Ying Han <yinghan@google.com>
      Cc: Tejun Heo <htejun@gmail.com>
      Cc: Glauber Costa <glommer@parallels.com>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      c40046f3
    • J
      mm: introduce free_highmem_page() helper to free highmem pages into buddy system · cfa11e08
      Committed by Jiang Liu
      The original goal of this patchset is to fix the bug reported by
      
        https://bugzilla.kernel.org/show_bug.cgi?id=53501
      
      Now it has also been expanded to reduce common code used in memory
      initialization.
      
      This is the second part, which applies to the previous part at:
        http://marc.info/?l=linux-mm&m=136289696323825&w=2
      
      It introduces a helper function, free_highmem_page(), to free highmem
      pages into the buddy system when initializing the mm subsystem.
      Introducing free_highmem_page() is one step towards cleaning up
      accesses and modifications of totalhigh_pages, totalram_pages and
      zone->managed_pages etc.  I hope we can eventually remove all
      references to totalhigh_pages from the arch/ subdirectory.
      
      We have only tested this patchset on x86 platforms and have done
      basic compilation tests using cross-compilers from ftp.kernel.org.
      That means some code may not compile on some architectures, so any
      help testing this patchset is welcome!
      
      There are several other parts still under development:
      Part3: refine code to manage totalram_pages, totalhigh_pages and
      	zone->managed_pages
      Part4: introduce helper functions to simplify mem_init() and remove the
      	global variable num_physpages.
      
      This patch:
      
      Introduce helper function free_highmem_page(), which will be used by
      architectures with HIGHMEM enabled to free highmem pages into the buddy
      system.
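
      A typical use in an architecture's highmem initialization would look
      roughly like this (a sketch only; the pfn range symbols are
      arch-specific placeholders):

        static void __init free_highpages(void)
        {
                unsigned long pfn;

                /* hand every highmem page over to the buddy allocator */
                for (pfn = highstart_pfn; pfn < highend_pfn; pfn++)
                        free_highmem_page(pfn_to_page(pfn));
        }

      free_highmem_page() also takes care of the totalram_pages and
      totalhigh_pages accounting, so the architecture code no longer needs
      to touch those counters directly.
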
      Signed-off-by: NJiang Liu <jiang.liu@huawei.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "Suzuki K. Poulose" <suzuki@in.ibm.com>
      Cc: Alexander Graf <agraf@suse.de>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Attilio Rao <attilio.rao@citrix.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Cong Wang <amwang@redhat.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Cc: Jiang Liu <liuj97@gmail.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Cc: Linus Walleij <linus.walleij@linaro.org>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Stephen Boyd <sboyd@codeaurora.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Reviewed-by: NPekka Enberg <penberg@kernel.org>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      cfa11e08
    • J
      mm: introduce common help functions to deal with reserved/managed pages · 69afade7
      Committed by Jiang Liu
      The original goal of this patchset is to fix the bug reported by
      
        https://bugzilla.kernel.org/show_bug.cgi?id=53501
      
      Now it has also been expanded to reduce common code used in memory
      initialization.
      
      This is the first part, which applies to v3.9-rc1.
      
      It introduces the following common helper functions to simplify
      free_initmem() and free_initrd_mem() on different architectures (a
      usage sketch follows the list):
      
      adjust_managed_page_count():
      	will be used to adjust totalram_pages, totalhigh_pages,
      	zone->managed_pages when reserving/unreserving a page.
      
      __free_reserved_page():
      	free a reserved page into the buddy system without adjusting
      	page statistics info
      
      free_reserved_page():
      	free a reserved page into the buddy system and adjust page
      	statistics info
      
      mark_page_reserved():
      	mark a page as reserved and adjust page statistics info
      
      free_reserved_area():
      	free a contiguous range of pages by calling free_reserved_page()
      
      free_initmem_default():
      	default method to free __init pages.
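
      For example, releasing a no-longer-needed reserved region can then be
      written as follows (a sketch only; the wrapper function here is
      hypothetical, while free_reserved_page() is the helper introduced by
      this patch):

        static void __init release_reserved_range(unsigned long start_pfn,
                                                  unsigned long end_pfn)
        {
                unsigned long pfn;

                /* clears PG_reserved, frees the page into the buddy system
                 * and adjusts the page accounting in a single call */
                for (pfn = start_pfn; pfn < end_pfn; pfn++)
                        free_reserved_page(pfn_to_page(pfn));
        }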
      
      We have only tested this patchset on x86 platforms and have done
      basic compilation tests using cross-compilers from ftp.kernel.org.
      That means some code may not compile on some architectures, so any
      help testing this patchset is welcome!
      
      There are several other parts still under development:
      Part2: introduce free_highmem_page() to simplify freeing highmem pages
      Part3: refine code to manage totalram_pages, totalhigh_pages and
      	zone->managed_pages
      Part4: introduce helper functions to simplify mem_init() and remove the
      	global variable num_physpages.
      
      This patch:
      
      Code to deal with reserved/managed pages is duplicated across many
      architectures, so introduce common helper functions to reduce the
      duplication.  These helpers will also be used to concentrate the code
      that modifies totalram_pages and zone->managed_pages, which makes the
      code much clearer.
      Signed-off-by: NJiang Liu <jiang.liu@huawei.com>
      Acked-by: NGeert Uytterhoeven <geert@linux-m68k.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Anatolij Gustschin <agust@denx.de>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Liqin <liqin.chen@sunplusct.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Howells <dhowells@redhat.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Hirokazu Takata <takata@linux-m32r.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Cc: Jiang Liu <liuj97@gmail.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Mike Frysinger <vapier@gentoo.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      69afade7
    • H
      mm/vmscan.c: minor cleanup for kswapd · 2d42a40d
      Committed by Hillf Danton
      Local variable total_scanned is no longer used.
      Signed-off-by: NHillf Danton <dhillf@gmail.com>
      Acked-by: NRik van Riel <riel@redhat.com>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      2d42a40d
    • T
      mm: walk_memory_range(): fix typo in comment · e05c4bbf
      Committed by Toshi Kani
      Fix a typo "end_pft" in the comment of walk_memory_range().
      Signed-off-by: NToshi Kani <toshi.kani@hp.com>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      e05c4bbf
    • V
      memblock: add assertion for zero allocation alignment · 94f3d3af
      Committed by Vineet Gupta
      This came to light when calling the memblock allocator from the arc
      port (for copying the flattened DT).  If a "0" alignment is passed,
      the allocator's round_up() call incorrectly rounds the size up to 0.
      
      round_up(num, alignto) => ((num - 1) | (alignto -1)) + 1
      
      While the resulting allocation failure causes the kernel to panic, it
      is better to warn the caller so the code can be fixed.
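
      A small user-space demo of the same arithmetic (not kernel code)
      shows the wrap-around:

        #include <stdio.h>

        /* same formula as the round_up() expansion quoted above */
        #define round_up(x, y) ((((x) - 1) | ((y) - 1)) + 1)

        int main(void)
        {
                unsigned long size = 4096;

                printf("align 8: %lu\n", round_up(size, 8UL));  /* 4096 */
                printf("align 0: %lu\n", round_up(size, 0UL));  /* 0    */
                return 0;
        }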
      
      Tejun suggested that instead of BUG_ON(!align), which might be
      ineffective due to pending console init and such, it is better to
      WARN_ON and continue the boot with a reasonable default align.
      
      A caller passing a zero @size need not be handled similarly, as the
      subsequent panic will indicate that anyhow.
      Signed-off-by: NVineet Gupta <vgupta@synopsys.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Acked-by: NTejun Heo <tj@kernel.org>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      94f3d3af
    • H
      rmap: recompute pgoff for unmapping huge page · 369a713e
      Committed by Hillf Danton
      We have to recompute pgoff if the given page is huge, since a result
      based on HPAGE_SIZE is not appropriate for scanning the vma interval
      tree, as shown by commit 36e4f20a ("hugetlb: do not use
      vma_hugecache_offset() for vma_prio_tree_foreach").
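
      The fix is essentially to rescale the offset into base-page units for
      huge pages before the interval tree walk, along these lines (a sketch
      of the pattern rather than the exact hunk):

        unsigned long pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);

        /* hugetlb pages index the mapping in huge-page units; convert to
         * base-page units before scanning the vma interval tree */
        if (PageHuge(page))
                pgoff = page->index << compound_order(page);
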
      Signed-off-by: NHillf Danton <dhillf@gmail.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Michel Lespinasse <walken@google.com>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      369a713e