1. 13 Dec 2011 (2 commits)
    • socket: initial cgroup code. · e1aab161
      Glauber Costa committed
      The goal of this work is to move the TCP memory pressure controls into a
      cgroup, instead of relying only on global conditions.
      
      To avoid excessive overhead in the network fast paths, the code that
      accounts allocated memory to a cgroup is hidden inside a static_branch().
      This branch is patched out until the first non-root cgroup is created, so
      when nobody is using cgroups, even if the controller is mounted, no
      significant performance penalty should be seen.
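      
      A minimal sketch of that pattern, using the kernel's current static-key
      API rather than the static_branch() call named above (the key and helper
      names here are illustrative, not the ones from this patch):
      
        #include <linux/jump_label.h>
        #include <net/sock.h>
      
        void memcg_charge_sock(struct sock *sk, int nr_pages);  /* hypothetical slow path */
      
        /* Compiled as a no-op branch until the key is enabled. */
        static DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
      
        static inline void sock_account_pages(struct sock *sk, int nr_pages)
        {
                if (static_branch_unlikely(&memcg_sockets_enabled_key))
                        memcg_charge_sock(sk, nr_pages);
        }
      
        /* Called when the first non-root cgroup is created: the branch sites are
         * patched in at runtime, so the default fast path stays untouched. */
        void memcg_sockets_enable(void)
        {
                static_branch_enable(&memcg_sockets_enabled_key);
        }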
      
      This patch handles the generic part of the code and contains nothing
      TCP-specific.
      Signed-off-by: Glauber Costa <glommer@parallels.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      CC: Kirill A. Shutemov <kirill@shutemov.name>
      CC: David S. Miller <davem@davemloft.net>
      CC: Eric W. Biederman <ebiederm@xmission.com>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Basic kernel memory functionality for the Memory Controller · e5671dfa
      Glauber Costa committed
      This patch lays down the foundation for the kernel memory component
      of the Memory Controller.
      
      As of today, I am only laying down the following files:
      
       * memory.independent_kmem_limit
       * memory.kmem.limit_in_bytes (currently ignored)
       * memory.kmem.usage_in_bytes (always zero)
      Signed-off-by: Glauber Costa <glommer@parallels.com>
      CC: Kirill A. Shutemov <kirill@shutemov.name>
      CC: Paul Menage <paul@paulmenage.org>
      CC: Greg Thelen <gthelen@google.com>
      CC: Johannes Weiner <jweiner@redhat.com>
      CC: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 03 Nov 2011 (1 commit)
  3. 15 Sep 2011 (1 commit)
  4. 27 Jul 2011 (1 commit)
    • memcg: add memory.vmscan_stat · 82f9d486
      KAMEZAWA Hiroyuki committed
      The commit log of 0ae5e89c ("memcg: count the soft_limit reclaim
      in...") says it adds scanning stats to the memory.stat file.  But it does
      not, because we decided we first needed to reach a consensus on such new
      APIs.
      
      This patch is a trial to add memory.vmscan_stat, which shows
        - the number of scanned pages (total, anon, file)
        - the number of rotated pages (total, anon, file)
        - the number of freed pages (total, anon, file)
        - the elapsed time (including sleep/pause time)
      
        for both direct and soft reclaim.
      
      The biggest difference from Ying's original version is that this file
      can be reset by a write, as in
      
        # echo 0 > ...../memory.vmscan_stat
      
      Example output is below. This is the result after building a kernel with
      make -j 6 under a 300M limit.
      
        [kamezawa@bluextal ~]$ cat /cgroup/memory/A/memory.vmscan_stat
        scanned_pages_by_limit 9471864
        scanned_anon_pages_by_limit 6640629
        scanned_file_pages_by_limit 2831235
        rotated_pages_by_limit 4243974
        rotated_anon_pages_by_limit 3971968
        rotated_file_pages_by_limit 272006
        freed_pages_by_limit 2318492
        freed_anon_pages_by_limit 962052
        freed_file_pages_by_limit 1356440
        elapsed_ns_by_limit 351386416101
        scanned_pages_by_system 0
        scanned_anon_pages_by_system 0
        scanned_file_pages_by_system 0
        rotated_pages_by_system 0
        rotated_anon_pages_by_system 0
        rotated_file_pages_by_system 0
        freed_pages_by_system 0
        freed_anon_pages_by_system 0
        freed_file_pages_by_system 0
        elapsed_ns_by_system 0
        scanned_pages_by_limit_under_hierarchy 9471864
        scanned_anon_pages_by_limit_under_hierarchy 6640629
        scanned_file_pages_by_limit_under_hierarchy 2831235
        rotated_pages_by_limit_under_hierarchy 4243974
        rotated_anon_pages_by_limit_under_hierarchy 3971968
        rotated_file_pages_by_limit_under_hierarchy 272006
        freed_pages_by_limit_under_hierarchy 2318492
        freed_anon_pages_by_limit_under_hierarchy 962052
        freed_file_pages_by_limit_under_hierarchy 1356440
        elapsed_ns_by_limit_under_hierarchy 351386416101
        scanned_pages_by_system_under_hierarchy 0
        scanned_anon_pages_by_system_under_hierarchy 0
        scanned_file_pages_by_system_under_hierarchy 0
        rotated_pages_by_system_under_hierarchy 0
        rotated_anon_pages_by_system_under_hierarchy 0
        rotated_file_pages_by_system_under_hierarchy 0
        freed_pages_by_system_under_hierarchy 0
        freed_anon_pages_by_system_under_hierarchy 0
        freed_file_pages_by_system_under_hierarchy 0
        elapsed_ns_by_system_under_hierarchy 0
      
      The *_under_hierarchy entries are for hierarchy management.
      
      This will be useful for further memcg development, and it needs to be in
      place before we do any complicated rework of LRU/soft-limit management.
      
      This patch adds a new struct memcg_scanrecord to the scan_control struct.
      sc->nr_scanned et al. are not designed for exporting information; for
      example, nr_scanned is reset frequently and is incremented by 2 when
      scanning mapped pages.
      
      To avoid complexity, I added a new parameter to scan_control which is
      used only for exporting these scanning statistics.
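      
      A rough sketch of what such a record can look like, with field names
      inferred from the exported counters above (the actual layout in the
      patch may differ):
      
        /* Hypothetical layout; index 0/1 distinguishes anon/file, as in the stats above. */
        struct memcg_scanrecord {
                struct mem_cgroup *mem;         /* memcg under reclaim */
                struct mem_cgroup *root;        /* root of the reclaimed hierarchy */
                unsigned long nr_scanned[2];    /* scanned anon/file pages */
                unsigned long nr_rotated[2];    /* rotated (put back) anon/file pages */
                unsigned long nr_freed[2];      /* freed anon/file pages */
                unsigned long elapsed;          /* elapsed ns, including sleep/pause */
        };
      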
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Ying Han <yinghan@google.com>
      Cc: Andrew Bresticker <abrestic@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 16 Jun 2011 (3 commits)
  6. 29 Apr 2011 (1 commit)
  7. 17 Feb 2011 (1 commit)
  8. 14 Jan 2011 (2 commits)
  9. 28 May 2010 (4 commits)
  10. 23 Apr 2010 (1 commit)
  11. 25 Mar 2010 (1 commit)
  12. 19 Mar 2010 (1 commit)
  13. 13 Mar 2010 (4 commits)
    • memcg: handle panic_on_oom=always case · daaf1e68
      KAMEZAWA Hiroyuki committed
      Presently, if panic_on_oom=2, the whole system panics even if the OOM
      happened in some restricted context (cpuset, mempolicy, ...).  In other
      words, panic_on_oom=2 means panic_on_oom_always.
      
      Currently, memcg does not check the panic_on_oom flag; this patch adds
      that check, sketched below.
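      
      The added check amounts to a few lines in the memcg OOM path; a sketch of
      the idea (not the literal diff):
      
        /* In the memcg OOM handler: honor "always panic" before killing a task. */
        if (sysctl_panic_on_oom == 2)
                panic("out of memory (memcg): panic_on_oom=2 is set\n");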
      
      By the way, why is this useful?
      
      kdump + panic_on_oom=2 is the last-resort tool for investigating what
      happened in an OOMed system.  When a task is simply killed, the system
      recovers and leaves few hints about what went wrong.  In mission-critical
      systems, OOM should never happen; panic_on_oom=2 + kdump helps avoid the
      next OOM by providing precise information via a crash snapshot.
      
      TODO:
       - Since memcg is used to isolate a system's memory usage, an oom-notifier
         and freeze_at_oom (or rest_at_oom) should be implemented.  A management
         daemon could then do a similar job to kdump, or take a per-cgroup snapshot.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: implement memory thresholds · 2e72b634
      Kirill A. Shutemov committed
      This allows registering multiple memory and memsw thresholds and getting
      notifications when usage crosses them.
      
      To register a threshold, an application needs to:
      - create an eventfd;
      - open memory.usage_in_bytes or memory.memsw.usage_in_bytes;
      - write a string like "<event_fd> <fd of memory.usage_in_bytes> <threshold>"
        to cgroup.event_control.
      
      The application will then be notified through the eventfd whenever memory
      usage crosses a threshold in either direction; a user-space sketch follows.
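      
      A minimal user-space sketch of these three steps (the cgroup path and the
      200MB threshold are illustrative; error handling is omitted):
      
        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/eventfd.h>
      
        int main(void)
        {
                char buf[64];
                uint64_t hits;
                int efd = eventfd(0, 0);                 /* 1. create an eventfd */
                /* 2. open the usage file of the cgroup we want to watch */
                int ufd = open("/cgroup/memory/A/memory.usage_in_bytes", O_RDONLY);
                int cfd = open("/cgroup/memory/A/cgroup.event_control", O_WRONLY);
      
                /* 3. "<event_fd> <fd of memory.usage_in_bytes> <threshold>" */
                snprintf(buf, sizeof(buf), "%d %d %llu", efd, ufd, 200ULL << 20);
                write(cfd, buf, strlen(buf));
      
                read(efd, &hits, sizeof(hits));          /* blocks until a threshold is crossed */
                printf("threshold crossed %llu time(s)\n", (unsigned long long)hits);
                return 0;
        }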
      
      This works for both root and non-root cgroups.
      
      It uses the statistics infrastructure to track memory usage, similar to
      soft limits.  It checks whether an event needs to be sent to userspace
      every 100 pages charged or uncharged, which I think is a good compromise
      between performance and threshold accuracy.
      
      [akpm@linux-foundation.org: coding-style fixes]
      [nishimura@mxp.nes.nec.co.jp: fix documentation merge issue]
      Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Cc: Dan Malek <dan@embeddedalley.com>
      Cc: Vladislav Buzov <vbuzov@embeddedalley.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Alexander Shishkin <virtuoso@slind.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: move charges of anonymous swap · 02491447
      Daisuke Nishimura committed
      This patch is another core part of the move-charge-at-task-migration
      feature.  It enables moving the charges of anonymous swap.
      
      To move the charge of swap, we need to exchange swap_cgroup's record.
      
      In current implementation, swap_cgroup's record is protected by:
      
        - page lock: if the entry is on swap cache.
        - swap_lock: if the entry is not on swap cache.
      
      This works well in usual swap-in/out activity.
      
      But this locking scheme forces the swap-charge-moving code to check many
      conditions in order to exchange swap_cgroup's record safely.
      
      So I changed the modification of swap_cgroup's record (swap_cgroup_record())
      to use xchg, and defined a new function to cmpxchg swap_cgroup's record.
      
      This patch also enables moving the charge of swap caches that are not
      pte_present but are not yet uncharged, which can exist on the swap-out
      path, by looking up the target pages via find_get_page() as do_mincore()
      does.
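      
      The record update itself is the usual compare-and-swap pattern; a generic
      illustration in plain C11 atomics (not the kernel's actual swap_cgroup
      helpers):
      
        #include <stdatomic.h>
      
        /* Illustration only: replace a per-swap-entry owner id, but only if it
         * still holds the id we expect, so no spinlock is needed around it. */
        static int record_cmpxchg(_Atomic unsigned short *record,
                                  unsigned short old_id, unsigned short new_id)
        {
                unsigned short expected = old_id;
                return atomic_compare_exchange_strong(record, &expected, new_id);
        }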
      
      [kosaki.motohiro@jp.fujitsu.com: fix ia64 build]
      [akpm@linux-foundation.org: fix typos]
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: add interface to move charge at task migration · 7dc74be0
      Daisuke Nishimura committed
      In the current memcg, charges associated with a task are not moved to the
      new cgroup at task migration.  Some users find this behavior strange.
      This patch series adds that feature, that is, charging to the new cgroup
      and, of course, uncharging from the old cgroup at task migration.
      
      This patch adds "memory.move_charge_at_immigrate" file, which is a flag
      file to determine whether charges should be moved to the new cgroup at
      task migration or not and what type of charges should be moved.  This
      patch also adds read and write handlers of the file.
      
      This patch also adds no-op handlers for this feature, which will be
      implemented in later patches.  For now, you cannot write any value other
      than 0 to move_charge_at_immigrate.
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 24 Sep 2009 (2 commits)
  15. 19 Jun 2009 (2 commits)
  16. 14 Apr 2009 (1 commit)
  17. 30 Mar 2009 (1 commit)
  18. 16 Jan 2009 (1 commit)
  19. 09 Jan 2009 (7 commits)
    • memcg: swappiness · a7885eb8
      KOSAKI Motohiro committed
      Currently, /proc/sys/vm/swappiness changes the swappiness ratio for global
      reclaim.  However, memcg reclaim has no tuning parameter of its own.
      
      In general, the optimal swappiness depends on the workload (e.g., HPC
      workloads need lower swappiness than others).
      
      A per-cgroup swappiness setting therefore improves administrator tunability.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: show reclaim stat · 7f016ee8
      KOSAKI Motohiro committed
      Add the following fields to the memory.stat file:
      
        - inactive_ratio
        - recent_rotated_anon
        - recent_rotated_file
        - recent_scanned_anon
        - recent_scanned_file
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: memory cgroup hierarchy documentation · 52bc0d82
      Balbir Singh committed
      Documentation updates for hierarchy support
      Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
      Cc: Paul Menage <menage@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Pavel Emelianov <xemul@openvz.org>
      Cc: Dhaval Giani <dhaval@linux.vnet.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: mem+swap controller core · 8c7c6e34
      KAMEZAWA Hiroyuki committed
      This patch implements a per-cgroup limit on the usage of memory+swap.
      Because SwapCache exists, care is taken to avoid double counting a page
      as both swap cache and swap entry.
      
      The Mem+Swap controller works as follows:
        - memory usage is limited by memory.limit_in_bytes.
        - memory + swap usage is limited by memory.memsw.limit_in_bytes.
      
      This has the following benefit:
        - A user can limit the total resource usage of mem+swap.
      
          Without this, because the memory resource controller does not take
          swap usage into account, a process can exhaust all of swap (e.g. by a
          memory leak).  We can avoid that case.
      
          Also, swap is a shared resource, but it cannot be reclaimed (moved back
          to memory) until it is actually used.  This characteristic can be
          troublesome when memory is divided into parts by cpuset or memcg.
          Assume group A and group B; after some applications have run, the
          system can end up as:
      
          Group A -- very large free memory space, but occupying 99% of swap.
          Group B -- under memory shortage, but unable to use swap... it is nearly full.
      
          The ability to set an appropriate swap limit for each group is required.
      
      Some may wonder, "why mem+swap rather than just swap?"
      
        - The global LRU (kswapd) can swap out arbitrary pages.  Swap-out means
          moving the account from memory to swap... there is no change in the
          usage of mem+swap.
      
          In other words, when we want to limit swap usage without affecting the
          global LRU, a mem+swap limit is better than just limiting swap.
      
      The accounting target information is stored in swap_cgroup, which is a
      per-swap-entry record.
      
      Charging is done as follows:
        map
          - charge  page and memsw.
      
        unmap
          - uncharge page/memsw if not SwapCache.
      
        swap-out (__delete_from_swap_cache)
          - uncharge page
          - record mem_cgroup information to swap_cgroup.
      
        swap-in (do_swap_page)
          - charged as page and memsw.
            record in swap_cgroup is cleared.
            memsw accounting is decremented.
      
        swap-free (swap_free())
          - if swap entry is freed, memsw is uncharged by PAGE_SIZE.
      
      Some people work in never-swap environments and consider swap to be
      something bad.  For them, this mem+swap controller extension is just
      overhead, so the overhead can be avoided via a config or boot option
      (see Kconfig; the details are not in this patch).
      
      TODO:
       - More optimization may be possible in the swap-in path (but it is not
         very safe); for now we just do simple accounting at this stage.
      
      [nishimura@mxp.nes.nec.co.jp: make resize limit hold mutex]
      [hugh@veritas.com: memswap controller core swapcache fixes]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: handle swap caches · d13d1443
      KAMEZAWA Hiroyuki committed
      SwapCache support for the memory resource controller (memcg).
      
      Before the mem+swap controller, memcg itself should handle SwapCache in a
      proper way; this part is cut out from that work.
      
      In the current memcg, SwapCache is simply leaked and a user can create
      tons of SwapCache.  This is an accounting leak and should be handled.
      
      SwapCache accounting is done as follows:
      
        charge (anon)
      	- charged when it is mapped.
      	  (because of readahead, charging at add_to_swap_cache() is not sane)
        uncharge (anon)
      	- uncharged when it is dropped from the swapcache and fully unmapped,
      	  i.e., it is not uncharged at unmap.
      	  Note: deletion from the swap cache at swap-in is done after rmap
      	        information is established.
        charge (shmem)
      	- charged at swap-in.  This prevents charging at add_to_page_cache().
      
        uncharge (shmem)
      	- uncharged when it is dropped from the swapcache and is not on shmem's
      	  radix-tree.
      
        At migration, the check against the 'old page' is modified to handle shmem.
      
      Compared with the old version that was discussed (and caused trouble), we
      have the advantages of:
        - the PCG_USED bit.
        - simpler migration handling.
      
      So the situation is much easier than several months ago, maybe.
      
      [hugh@veritas.com: memcg: handle swap caches build fix]
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Tested-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: new force_empty to free pages under group · c1e862c1
      KAMEZAWA Hiroyuki committed
      With memcg-move-all-accounts-to-parent-at-rmdir.patch, there is no longer
      any leak of memory usage, and force_empty was removed.
      
      This patch adds "force_empty" again, in a more reasonable manner.
      
      The memory.force_empty file is triggered by
      
        # echo 0 (or any value) > memory.force_empty
      
      and behaves as follows:
      
        1. It only works when there are no tasks in this cgroup.
        2. It frees as many pages under this cgroup as possible.
        3. Pages which cannot be freed are moved up to the parent.
        4. As a result, the memcg is empty after the echo above returns.
      
      This is much better behavior than the old "force_empty", which just forgot
      all accounting.  This patch also checks signal_pending(), so the echo above
      can be interrupted with Ctrl-C.
      
      [akpm@linux-foundation.org: cleanup]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memcg: move all accounting to parent at rmdir() · f817ed48
      KAMEZAWA Hiroyuki committed
      This patch provides a function to move the accounting information of a page
      between mem_cgroups, and rewrites force_empty to make use of it.
      
      This movement of page_cgroup is done with
       - the lru_lock of the source/destination mem_cgroup held, and
       - lock_page_cgroup() held.
      
      A routine which touches pc->mem_cgroup without holding lock_page_cgroup()
      should therefore confirm that pc->mem_cgroup is still valid.  Typical code
      looks like the following:
      
      (while the page is not under lock_page())
      	mem = pc->mem_cgroup;
      	mz = page_cgroup_zoneinfo(pc);
      	spin_lock_irqsave(&mz->lru_lock, flags);
      	if (pc->mem_cgroup == mem)	/* recheck: pc may have been moved meanwhile */
      		...../* some list handling */
      	spin_unlock_irqrestore(&mz->lru_lock, flags);
      
      Of course, the better way is
      	lock_page_cgroup(pc);
      	....
      	unlock_page_cgroup(pc);
      
      but then you should check the lock nesting and avoid deadlock.
      
      If you handle a page_cgroup taken from a mem_cgroup's LRU under mz->lru_lock,
      you don't have to worry about what pc->mem_cgroup points to.
      Moved pages are added to the head of the LRU, not to the tail.
      
      Expected users of this routine are:
        - force_empty (rmdir)
        - moving tasks between cgroups (for moving accounting information)
        - hierarchy (maybe useful)
      
      force_empty (rmdir) uses this move_account to move pages to the parent.
      This "move" will not cause OOM (I added an "oom" parameter to try_charge()).
      
      If the parent is busy (not enough memory), force_empty calls
      try_to_free_page() and reduces usage.
      
      The purpose of this behavior is to:
        - fix the "forget all" behavior of force_empty and avoid leaking accounting;
        - keep pages in memory as much as possible, by "moving first, freeing if
          necessary".
      
      Adding a switch to change the behavior of force_empty to
        - free first, move if necessary, or
        - free all, returning -EBUSY if there are mlocked/busy pages,
      is under consideration.  (I'll add it if someone requests it.)
      
      This patch also removes the memory.force_empty file, a brutal debug-only
      interface.
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Tested-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Paul Menage <menage@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  20. 20 Oct 2008 (1 commit)
  21. 26 Jul 2008 (1 commit)
  22. 05 Mar 2008 (1 commit)