1. 06 March 2009, 1 commit
    • tracing, Text Edit Lock - Architecture Independent Code · 0e39ac44
      Mathieu Desnoyers authored
      This adds architecture-independent synchronization around kernel text
      modifications, through use of a global mutex.
      
      A mutex has been chosen so that kprobes, the main user of this, can sleep
      during memory allocation between the memory read of the instructions it
      must replace and the memory write of the breakpoint.
      
      Another user of this interface: immediate values.
      
      Paravirt and alternatives are always done when SMP is inactive, so there
      is no need to use locks.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      LKML-Reference: <49B142D8.7020601@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0e39ac44
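      A minimal sketch of the idea, assuming a hypothetical kprobes-style
      caller (the helper name and body below are illustrative, not this
      patch's actual code):

      	#include <linux/mutex.h>

      	/* one global mutex serializing all kernel text modifications */
      	DEFINE_MUTEX(text_mutex);

      	/* hypothetical user: may sleep while holding the mutex,
      	 * e.g. allocating memory between the insn read and insn write */
      	static void patch_text_example(void *addr, unsigned char insn)
      	{
      		mutex_lock(&text_mutex);
      		/* read the original instruction at addr */
      		/* allocate bookkeeping memory (may sleep) */
      		/* write the breakpoint/new instruction at addr */
      		mutex_unlock(&text_mutex);
      	}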
  2. 06 February 2009, 1 commit
    • do_wp_page: fix regression with execute in place · ab92661d
      Carsten Otte authored
      Fix do_wp_page for VM_MIXEDMAP mappings.
      
      In the case where pfn_valid returns 0 for a pfn at the beginning of
      do_wp_page and the mapping is not shared writable, the code branches to
      label `gotten:' with old_page == NULL.
      
      In case the vma is locked (vma->vm_flags & VM_LOCKED), lock_page,
      clear_page_mlock, and unlock_page try to access the old_page.
      
      This patch checks whether old_page is valid before it is dereferenced.
      
      The regression was introduced by "mlock: mlocked pages are unevictable"
      (commit b291f000).
      Signed-off-by: Carsten Otte <cotte@de.ibm.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: <stable@kernel.org>		[2.6.28.x]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ab92661d
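      The fix itself is a one-condition guard; a sketch of the repaired
      branch (context abbreviated from mm/memory.c):

      	gotten:
      		/* old_page may be NULL: a VM_MIXEDMAP pfn without a struct page */
      		if (old_page && (vma->vm_flags & VM_LOCKED)) {
      			lock_page(old_page);	/* clear_page_mlock() wants the page lock */
      			clear_page_mlock(old_page);
      			unlock_page(old_page);
      		}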
  3. 14 January 2009, 2 commits
  4. 12 January 2009, 1 commit
  5. 09 January 2009, 5 commits
    • memcg: fix swap accounting leak · 03f3c433
      KAMEZAWA Hiroyuki authored
      Fix swapin charge operation of memcg.
      
      Now, memcg has hooks into the swap-out operation and checks whether a
      SwapCache page is really unused.  That check depends on the contents of
      struct page: if PageAnon(page) && page_mapped(page), the page is
      recognized as still in use.

      However, reuse_swap_page() calls delete_from_swap_cache() before any
      rmap is established.  Then, in the following sequence
      
      	(Page fault with WRITE)
      	try_charge() (charge += PAGESIZE)
      	commit_charge() (Check page_cgroup is used or not..)
      	reuse_swap_page()
      		-> delete_from_swapcache()
      			-> mem_cgroup_uncharge_swapcache() (charge -= PAGESIZE)
      	......
      the new charge is uncharged soon after it is made.
      To avoid this, move commit_charge() to after page_mapcount() goes up to 1.
      By this,
      
      	try_charge()		(usage += PAGESIZE)
      	reuse_swap_page()	(may usage -= PAGESIZE if PCG_USED is set)
      	commit_charge()		(If page_cgroup is not marked as PCG_USED,
      				 add new charge.)
      Accounting will be correct.
      
      Changelog (v2) -> (v3)
        - fixed invalid charge to swp_entry==0.
        - updated documentation.
      Changelog (v1) -> (v2)
        - fixed comment.
      
      [nishimura@mxp.nes.nec.co.jp: swap accounting leak doc fix]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Tested-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      03f3c433
    • memcg: revert gfp mask fix · 2c26fdd7
      KAMEZAWA Hiroyuki authored
      My patch, memcg-fix-gfp_mask-of-callers-of-charge.patch, changed the
      gfp_mask of callers of charge to GFP_HIGHUSER_MOVABLE in order to show
      what will happen at memory reclaim.

      But in recent discussion it was NACKed because it looks ugly.

      This patch reverts it and adds some cleanups to the gfp_mask of callers
      of charge.  No behavior change, but it needs review before it generates
      conflicting hunks deeper in the queue.
      
      This patch also adds, in memcontrol.h, an explanation of the meaning of
      the gfp_mask passed to the charge functions.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2c26fdd7
    • memcg: mem+swap controller core · 8c7c6e34
      KAMEZAWA Hiroyuki authored
      This patch implements a per-cgroup limit on the usage of memory+swap.
      Because SwapCache exists, care is taken to avoid double counting a page
      as both swap-cache and swap-entry.
      
      The mem+swap controller works as follows.
        - memory usage is limited by memory.limit_in_bytes.
        - memory + swap usage is limited by memory.memsw_limit_in_bytes.
      
      This has the following benefits.
        - A user can limit the total resource usage of mem+swap.

          Without this, because the memory resource controller doesn't account
          for usage of swap, a process can exhaust all the swap (e.g. by a
          memory leak).  We can avoid that case.

          Also, swap is a shared resource, but it cannot be reclaimed (brought
          back to memory) until it's used.  This characteristic can be trouble
          when the memory is divided into parts by cpuset or memcg.
          Assume groups A and B.  After some applications have run, the system
          can end up as:

          Group A -- very large free memory space but occupying 99% of swap.
          Group B -- under memory shortage but cannot use swap... it's nearly full.

          The ability to set an appropriate swap limit for each group is required.
      
      Some may wonder, "why mem+swap rather than just swap?"
      
        - The global LRU (kswapd) can swap out arbitrary pages.  Swapping out
          means moving the accounting from memory to swap... there is no change
          in the usage of mem+swap.

          In other words, when we want to limit swap usage without affecting
          the global LRU, a mem+swap limit is better than just a swap limit.
      
      Accounting target information is stored in swap_cgroup, a per-swap-entry
      record.
      
      Charging is done as follows.
        map
          - charge  page and memsw.
      
        unmap
          - uncharge page/memsw if not SwapCache.
      
        swap-out (__delete_from_swap_cache)
          - uncharge page
          - record mem_cgroup information to swap_cgroup.
      
        swap-in (do_swap_page)
          - charged as page and memsw.
            record in swap_cgroup is cleared.
            memsw accounting is decremented.
      
        swap-free (swap_free())
          - if swap entry is freed, memsw is uncharged by PAGE_SIZE.
      
      There are people who work in never-swap environments and consider swap
      to be something bad.  For such people, this mem+swap controller extension
      is just overhead, and that overhead can be avoided by a config or boot
      option (see Kconfig; the details are not in this patch).
      
      TODO:
       - maybe more optimization can be done in the swap-in path (but it's not
         very safe); we just do simple accounting at this stage.
      
      [nishimura@mxp.nes.nec.co.jp: make resize limit hold mutex]
      [hugh@veritas.com: memswap controller core swapcache fixes]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8c7c6e34
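      A sketch of the swap-out handoff described above: the page-side charge
      is dropped while the memsw charge stays, remembered per swap entry
      (function names follow this patch series; heavily simplified):

      	/* __delete_from_swap_cache() path */
      	mem_cgroup_uncharge_swapcache(page, ent);
      	/* internally: uncharge the "mem" side only, and
      	 * swap_cgroup_record(ent, id) keeps the owning cgroup so the
      	 * memsw charge can be dropped later, at swap_free() */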
    • memcg: fix gfp_mask of callers of charge · bced0520
      KAMEZAWA Hiroyuki authored
      Fix misuse of GFP_KERNEL.

      Most callers of the mem_cgroup_charge_xxx functions currently use
      GFP_KERNEL.

      I think this stems from the fact that page_cgroup *was* dynamically
      allocated.

      But now we allocate all page_cgroup at boot.  And
      mem_cgroup_try_to_free_pages() reclaims memory with GFP_HIGHUSER_MOVABLE
      plus the specified GFP_RECLAIM_MASK.

        * This is because we just want to reduce memory usage.
          "Where should we reclaim from?" is not a question for memcg.

      This patch modifies the gfp masks to be GFP_HIGHUSER_MOVABLE where
      possible.
      
      Note: This patch is not about fixing behavior, but about showing sane
            information in the source code.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bced0520
    • memcg: introduce charge-commit-cancel style of functions · 7a81b88c
      KAMEZAWA Hiroyuki authored
      There is a small race in do_swap_page().  When the page being swapped in
      is charged, its mapcount can be greater than 0.  But at the same time,
      some process sharing it may call unmap, taking the mapcount from 1 to 0,
      and the page is uncharged.
      
            CPUA                          CPUB
             mapcount == 1.
         (1) charge if mapcount==0        zap_pte_range()
                                          (2) mapcount 1 => 0.
                                          (3) uncharge(). (success)
         (4) set page's rmap()
             mapcount 0 => 1
      
      Then, this swap page's account is leaked.
      
      To fix this, I added a new interface.
        - charge
         account to res_counter by PAGE_SIZE and try to free pages if necessary.
        - commit
         register page_cgroup and add to LRU if necessary.
        - cancel
         uncharge PAGE_SIZE because of do_swap_page failure.
      
           CPUA
        (1) charge (always)
        (2) set page's rmap (mapcount > 0)
        (3) commit the charge after set_pte(), checking whether it was really
            necessary.
      
      This protocol uses PCG_USED bit on page_cgroup for avoiding over accounting.
      Usual mem_cgroup_charge_common() does charge -> commit at a time.
      
      This patch also adds the following functions to clarify all charges.
      
        - mem_cgroup_newpage_charge() ....replacement for mem_cgroup_charge()
      	called against newly allocated anon pages.
      
        - mem_cgroup_charge_migrate_fixup()
              called only from remove_migration_ptes().
      	we'll have to rewrite this later (this patch just keeps the old
      	behavior).  This function will be removed by an additional patch
      	to make migration clearer.
      
      This is good for clarifying "what we do".
      
      We then have the following 4 charge points:
        - newpage
        - swap-in
        - add-to-cache.
        - migration.
      
      [akpm@linux-foundation.org: add missing inline directives to stubs]
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7a81b88c
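      A sketch of the three-step protocol as it is used for swap-in; the
      function names follow this series' swapin variants, and error handling
      is abbreviated:

      	struct mem_cgroup *ptr;

      	/* charge: usage += PAGE_SIZE in the res_counter; may reclaim/sleep */
      	if (mem_cgroup_try_charge_swapin(mm, page, GFP_KERNEL, &ptr))
      		return VM_FAULT_OOM;

      	set_pte_at(mm, address, page_table, pte);
      	page_add_anon_rmap(page, vma, address);	/* mapcount 0 -> 1 */

      	/* commit: mark the page_cgroup PCG_USED, add to LRU if necessary */
      	mem_cgroup_commit_charge_swapin(page, ptr);

      	/* on any failure before the commit:
      	 * mem_cgroup_cancel_charge_swapin(ptr);  usage -= PAGE_SIZE */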
  6. 07 January 2009, 17 commits
    • mm: make get_user_pages() interruptible · 4779280d
      Ying Han authored
      The initial implementation of checking TIF_MEMDIE covers the cases of OOM
      killing.  If the process has been OOM killed, TIF_MEMDIE is set and it
      returns immediately.  This patch includes:

      1.  add the case where the SIGKILL is sent by a user process.  A
         process can try to get_user_pages() unlimited memory even after a user
         process has sent it a SIGKILL (maybe a monitor found the process
         exceeding its memory limit and tried to kill it).  In the old
         implementation, the SIGKILL wasn't handled until get_user_pages()
         returned.

      2.  change the return value to ERESTARTSYS.  It makes no sense to
         return ENOMEM if get_user_pages() returned because of a SIGKILL
         signal.  The general convention for a system call interrupted by a
         signal is to return ERESTARTSYS, so the new return value is
         consistent with that.
      
      Lee:
      
      An unfortunate side effect of "make-get_user_pages-interruptible" is that
      it prevents a SIGKILL'd task from munlock-ing pages that it had mlocked,
      resulting in freeing of mlocked pages.  Freeing of mlocked pages, in
      itself, is not so bad.  We just count them now--altho' I had hoped to
      remove this stat and add PG_MLOCKED to the free pages flags check.
      
      However, consider pages in shared libraries mapped by more than one task
      that a task mlocked--e.g., via mlockall().  If the task that mlocked the
      pages exits via SIGKILL, these pages would be left mlocked and
      unevictable.
      
      Proposed fix:
      
      Add another GUP flag to ignore sigkill when calling get_user_pages from
      munlock()--similar to KOSAKI Motohiro's IGNORE_VMA_PERMISSIONS flag for
      the same purpose.  We are not actually allocating memory in this case,
      which "make-get_user_pages-interruptible" intends to avoid.  We're just
      munlocking pages that are already resident and mapped, and we're reusing
      get_user_pages() to access those pages.
      
      Open question: maybe we should combine IGNORE_VMA_PERMISSIONS and
      IGNORE_SIGKILL into a single flag: GUP_FLAGS_MUNLOCK?
      
      [Lee.Schermerhorn@hp.com: ignore sigkill in get_user_pages during munlock]
      Signed-off-by: Paul Menage <menage@google.com>
      Signed-off-by: Ying Han <yinghan@google.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Rohit Seth <rohitseth@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4779280d
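      In code terms the fault-retry loop in __get_user_pages() gains an early
      bail-out; a sketch, with Lee's ignore-sigkill flag folded in
      (surrounding code abbreviated):

      	/* inside the follow_page()/handle_mm_fault() retry loop */
      	if (!ignore_sigkill && fatal_signal_pending(current))
      		return i ? i : -ERESTARTSYS;	/* report pages pinned so far */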
    • badpage: KERN_ALERT BUG instead of KERN_EMERG · 1e9e6365
      Hugh Dickins authored
      bad_page() and rmap Eeek messages have said KERN_EMERG for a few years,
      which I've followed in print_bad_pte().  These are serious system errors,
      on a par with BUGs, but they're not quite emergencies, and we do our best
      to carry on: say KERN_ALERT "BUG: " like the x86 oops does.
      
      And remove the "Trying to fix it up, but a reboot is needed" line: it's
      not untrue, but I hope the KERN_ALERT "BUG: " conveys as much.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1e9e6365
    • badpage: ratelimit print_bad_pte and bad_page · d936cf9b
      Hugh Dickins authored
      print_bad_pte() and bad_page() might each need ratelimiting - especially
      for their dump_stacks, almost never of interest, yet not quite
      dispensable.  Correlating corruption across neighbouring entries can be
      very helpful, so allow a burst of 60 reports before keeping quiet for the
      remainder of that minute (or allow a steady drip of one report per
      second).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d936cf9b
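      The patch open-codes its own counters, but the shape of the policy can
      be sketched with the kernel's generic ratelimit helper: a burst of 60
      reports, then silence for the rest of the minute (the function name
      below is illustrative):

      	#include <linux/ratelimit.h>

      	/* 60-second window, at most 60 reports per window */
      	static DEFINE_RATELIMIT_STATE(bad_page_rs, 60 * HZ, 60);

      	static void report_bad_page_example(void)
      	{
      		if (!__ratelimit(&bad_page_rs))
      			return;		/* quiet for the rest of the minute */
      		printk(KERN_ALERT "BUG: Bad page state ...\n");
      		dump_stack();	/* rarely interesting, kept for bursts */
      	}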
    • badpage: remove vma from page_remove_rmap · edc315fd
      Hugh Dickins authored
      Remove page_remove_rmap()'s vma arg, which was only for the Eeek message.
      And remove the BUG_ON(page_mapcount(page) == 0) from CONFIG_DEBUG_VM's
      page_dup_rmap(): we're trying to be more resilient about that than BUGs.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      edc315fd
    • badpage: zap print_bad_pte on swap and file · 2509ef26
      Hugh Dickins authored
      Complete zap_pte_range()'s coverage of bad pagetable entries by calling
      print_bad_pte() on a pte_file in a linear vma and on a bad swap entry.
      That needs free_swap_and_cache() to tell it, which will also have shown
      one of those "swap_free" errors (but with much less information).
      
      Similar checks in fork's copy_one_pte()?  No, that would be more noisy
      than helpful: we'll see them when parent and child exec or exit.
      
      Where do_nonlinear_fault() calls print_bad_pte(): omit !VM_CAN_NONLINEAR
      case, that could only be a bug in sys_remap_file_pages(), not a bad pte.
      VM_FAULT_OOM rather than VM_FAULT_SIGBUS?  Well, okay, that is consistent
      with what happens if do_swap_page() operates a bad swap entry; but don't
      we have patches to be more careful about killing when VM_FAULT_OOM?
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2509ef26
    • badpage: vm_normal_page use print_bad_pte · 22b31eec
      Hugh Dickins authored
      print_bad_pte() is so far being called only when zap_pte_range() finds
      negative page_mapcount, or there's a fault on a pte_file where it does not
      belong.  That's weak coverage when we suspect pagetable corruption.
      
      Originally, it was called when vm_normal_page() found an invalid pfn: but
      pfn_valid is expensive on some architectures and configurations, so 2.6.24
      put that under CONFIG_DEBUG_VM (which doesn't help in the field), then
      2.6.26 replaced it by a VM_BUG_ON (likewise).
      
      Reinstate the print_bad_pte() in vm_normal_page(), but use a cheaper test
      than pfn_valid(): memmap_init_zone() (used at bootup and in hotplug) keeps a
      __read_mostly note of the highest_memmap_pfn, and vm_normal_page() then checks the
      pfn against that.  We could call this pfn_plausible() or pfn_sane(), but I
      doubt we'll need it elsewhere: of course it's not reliable, but gives much
      stronger pagetable validation on many boxes.
      
      Also use print_bad_pte() when the pte_special bit is found outside a
      VM_PFNMAP or VM_MIXEDMAP area, instead of VM_BUG_ON.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      22b31eec
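      A sketch of the cheaper test in vm_normal_page(); highest_memmap_pfn is
      the commit's own variable, while the wrapper name and surrounding logic
      here are illustrative:

      	/* raised by memmap_init_zone() at bootup and memory hotplug */
      	unsigned long highest_memmap_pfn __read_mostly;

      	static struct page *vm_normal_page_sketch(struct vm_area_struct *vma,
      						  unsigned long addr, pte_t pte)
      	{
      		unsigned long pfn = pte_pfn(pte);

      		/* not as reliable as pfn_valid(), but nearly free */
      		if (unlikely(pfn > highest_memmap_pfn)) {
      			print_bad_pte(vma, addr, pte, NULL);
      			return NULL;
      		}
      		return pfn_to_page(pfn);
      	}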
    • badpage: replace page_remove_rmap Eeek and BUG · 3dc14741
      Hugh Dickins authored
      Now that bad pages are kept out of circulation, there is no need for the
      infamous page_remove_rmap() BUG() - once that page is freed, its negative
      mapcount will issue a "Bad page state" message and the page won't be
      freed.  Removing the BUG() allows more info, on subsequent pages, to be
      gathered.
      
      We do have more info about the page at this point than bad_page() can know
      - notably, what the pmd is, which might pinpoint something like low 64kB
      corruption - but page_remove_rmap() isn't given the address to find that.
      
      In practice, there is only one call to page_remove_rmap() which has ever
      reported anything, that from zap_pte_range() (usually on exit, sometimes
      on munmap).  It has all the info, so remove page_remove_rmap()'s "Eeek"
      message and leave it all to zap_pte_range().
      
      mm/memory.c already has a hardly used print_bad_pte() function, showing
      some of the appropriate info: extend it to show what we want for the rmap
      case: pte info, page info (when there is a page) and vma info to compare.
      zap_pte_range() already knows the pmd, but print_bad_pte() is easier to
      use if it works that out for itself.
      
      Some of this info is also shown in bad_page()'s "Bad page state" message.
      Keep them separate, but adjust them to match each other as far as
      possible.  Say "Bad page map" in print_bad_pte(), and add a TAINT_BAD_PAGE
      there too.
      
      print_bad_pte() shows current->comm unconditionally (though it should get
      repeated in the usually irrelevant stack trace): sorry, I misled Nick
      Piggin to make it conditional on vm_mm == current->mm, but current->mm is
      already NULL in the exit case.  Usually current->comm is good, though
      exceptionally it may not be that of the mm (when "swapoff" for example).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3dc14741
    • mm: make maddr __iomem · 2bc7273b
      KOSAKI Motohiro authored
      sparse outputs the following warnings:
      
      mm/memory.c:2936:8: warning: incorrect type in assignment (different address spaces)
      mm/memory.c:2936:8:    expected void *maddr
      mm/memory.c:2936:8:    got void [noderef] <asn:2>
      
      Clean this up.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2bc7273b
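      The cleanup is a one-line annotation; a sketch of the
      generic_access_phys()-style context the warning points at (details
      abbreviated):

      	/* before: "void *maddr;"  -- sparse: different address spaces */
      	void __iomem *maddr = ioremap_prot(phys_addr, PAGE_SIZE, prot);

      	if (write)
      		memcpy_toio(maddr + offset, buf, len);
      	else
      		memcpy_fromio(buf, maddr + offset, len);
      	iounmap(maddr);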
    • mm: try_to_free_swap replaces remove_exclusive_swap_page · a2c43eed
      Hugh Dickins authored
      remove_exclusive_swap_page(): its problem is in living up to its name.
      
      It doesn't matter if someone else has a reference to the page (raised
      page_count); it doesn't matter if the page is mapped into userspace
      (raised page_mapcount - though that hints it may be worth keeping the
      swap): all that matters is that there be no more references to the swap
      (and no writeback in progress).
      
      swapoff (try_to_unuse) has been removing pages from swapcache for years,
      with no concern for page count or page mapcount, and we used to have a
      comment in lookup_swap_cache() recognizing that: if you go for a page of
      swapcache, you'll get the right page, but it could have been removed from
      swapcache by the time you get page lock.
      
      So, give up asking for exclusivity: get rid of
      remove_exclusive_swap_page(), and remove_exclusive_swap_page_ref() and
      remove_exclusive_swap_page_count() which were spawned for the recent LRU
      work: replace them by the simpler try_to_free_swap() which just checks
      page_swapcount().
      
      Similarly, remove the page_count limitation from free_swap_and_cache(),
      but assume that it's worth holding on to the swap if page is mapped and
      swap nowhere near full.  Add a vm_swap_full() test in free_swap_cache()?
      It would be consistent, but I think we probably have enough for now.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Robin Holt <holt@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a2c43eed
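      The replacement boils down to three swap-side checks; a sketch close to
      the new helper:

      	int try_to_free_swap(struct page *page)
      	{
      		VM_BUG_ON(!PageLocked(page));

      		if (!PageSwapCache(page))
      			return 0;
      		if (PageWriteback(page))	/* writeback still in flight */
      			return 0;
      		if (page_swapcount(page))	/* swap slot still referenced */
      			return 0;

      		delete_from_swap_cache(page);
      		SetPageDirty(page);	/* data now lives only in memory */
      		return 1;
      	}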
    • mm: reuse_swap_page replaces can_share_swap_page · 7b1fe597
      Hugh Dickins authored
      A good place to free up old swap is where do_wp_page(), or do_swap_page(),
      is about to redirty the page: the data on disk is then stale and won't be
      read again; and if we do decide to write the page out later, using the
      previous swap location makes an unnecessary disk seek very likely.
      
      So give can_share_swap_page() the side-effect of delete_from_swap_cache()
      when it safely can.  And can_share_swap_page() was always a misleading
      name, the more so if it has a side-effect: rename it reuse_swap_page().
      
      Irrelevant cleanup nearby: remove swap_token_default_timeout definition
      from swap.h: it's used nowhere.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Robin Holt <holt@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7b1fe597
    • mm: wp lock page before deciding cow · ab967d86
      Hugh Dickins authored
      An application may rely on get_user_pages() to give it pages writable from
      userspace and shared with a driver, GUP breaking COW if necessary.  It may
      mprotect() the pages' writability, off and on, from time to time.
      
      Normally this works fine (so long as the app does not fork); but just
      occasionally, under memory pressure, a readonly pte in a newly writable
      area is COWed unnecessarily, breaking the link with the driver: because
      do_wp_page() does trylock_page, and falls back to COW whenever that fails.
      
      For reliable behaviour in the unshared case, when the trylock_page fails,
      now unlock pagetable, lock page and relock pagetable, before deciding
      whether Copy-On-Write is really necessary.
      
      Reported-by: Zhou Yingchao
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Robin Holt <holt@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ab967d86
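      A sketch of the new fallback in do_wp_page() when trylock_page() fails:
      drop the pagetable lock, sleep on the page lock, retake the pagetable
      lock, and recheck the pte before deciding on COW (labels and variable
      names as in mm/memory.c of this era):

      	if (!trylock_page(old_page)) {
      		page_cache_get(old_page);
      		pte_unmap_unlock(page_table, ptl);	/* can't sleep under ptl */
      		lock_page(old_page);
      		page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
      		if (!pte_same(*page_table, orig_pte)) {
      			/* pte changed while we slept: back out */
      			unlock_page(old_page);
      			page_cache_release(old_page);
      			goto unlock;
      		}
      		page_cache_release(old_page);
      	}
      	reuse = reuse_swap_page(old_page);
      	unlock_page(old_page);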
    • mm: gup persist for write permission · 878b63ac
      Hugh Dickins authored
      do_wp_page()'s VM_FAULT_WRITE return value tells __get_user_pages() that
      COW has been done if necessary, though it may be leaving the pte without
      write permission - for the odd case of forced writing to a readonly vma
      for ptrace.  At present GUP then retries the follow_page() without asking
      for write permission, to escape an endless loop when forced.
      
      But an application may be relying on GUP to guarantee a writable page
      which won't be COWed again when written from userspace, whereas a race
      here might leave a readonly pte in place?  Change the VM_FAULT_WRITE
      handling to ask follow_page() for write permission again, except in that
      odd case of forced writing to a readonly vma.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Robin Holt <holt@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      878b63ac
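      The behavioural change is one added condition in __get_user_pages(); a
      sketch (surrounding retry loop abbreviated):

      	if (ret & VM_FAULT_WRITE) {
      		/* COW is done; drop the write request only for the odd
      		 * ptrace case of forced writing to a readonly vma,
      		 * otherwise keep asking until the pte is writable */
      		if (!(vma->vm_flags & VM_WRITE))
      			foll_flags &= ~FOLL_WRITE;
      	}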
    • mm: further cleanup page_add_new_anon_rmap · cbf84b7a
      Hugh Dickins authored
      Moving lru_cache_add_active_or_unevictable() into page_add_new_anon_rmap()
      was good but stupid: we can and should SetPageSwapBacked() there too; and
      we know for sure that this anonymous, swap-backed page is not file cache.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cbf84b7a
    • mm: add_active_or_unevictable into rmap · b5934c53
      Hugh Dickins authored
      lru_cache_add_active_or_unevictable() and page_add_new_anon_rmap() always
      appear together.  Save some symbol table space and some jumping around by
      removing lru_cache_add_active_or_unevictable(), folding its code into
      page_add_new_anon_rmap(): like how we add file pages to lru just after
      adding them to page cache.
      
      Remove the nearby "TODO: is this safe?" comments (yes, it is safe), and
      change page_add_new_anon_rmap()'s address BUG_ON to VM_BUG_ON as
      originally intended.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b5934c53
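      Taken together with the SetPageSwapBacked() addition from the previous
      entry, the folded function plausibly ends up like this sketch:

      	void page_add_new_anon_rmap(struct page *page,
      		struct vm_area_struct *vma, unsigned long address)
      	{
      		VM_BUG_ON(address < vma->vm_start || address >= vma->vm_end);
      		SetPageSwapBacked(page);	/* anon is swap-backed, never file cache */
      		atomic_set(&page->_mapcount, 0);	/* mapcount starts at -1 */
      		__page_set_anon_rmap(page, vma, address);
      		if (page_evictable(page, vma))
      			lru_cache_add_lru(page, LRU_ACTIVE_ANON);
      		else
      			add_page_to_unevictable_list(page);
      	}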
    • mm/apply_to_range: call pte function with lazy updates · 38e0edb1
      Jeremy Fitzhardinge authored
      Make the pte-level function in apply_to_range be called in lazy mmu mode,
      so that any pagetable modifications can be batched.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      38e0edb1
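      A sketch of the batched pte loop in apply_to_pte_range();
      arch_enter/leave_lazy_mmu_mode() are the existing paravirt batching
      hooks:

      	arch_enter_lazy_mmu_mode();	/* start queueing pagetable updates */
      	do {
      		err = fn(pte++, token, addr, data);
      		if (err)
      			break;
      	} while (addr += PAGE_SIZE, addr != end);
      	arch_leave_lazy_mmu_mode();	/* flush the whole batch at once */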
    • mm: more likely reclaim MADV_SEQUENTIAL mappings · 4917e5d0
      Johannes Weiner authored
      File pages mapped only in sequentially read mappings are perfect reclaim
      candidates.
      
      This patch makes these mappings behave like weak references, their pages
      will be reclaimed unless they have a strong reference from a normal
      mapping as well.
      
      It changes the reclaim and the unmap path where they check if the page has
      been referenced.  In both cases, accesses through sequentially read
      mappings will be ignored.
      
      Benchmark results from KOSAKI Motohiro:
      
          http://marc.info/?l=linux-mm&m=122485301925098&w=2

      Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4917e5d0
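      The check lands in page_referenced_one(); a sketch of the hint test,
      where VM_SequentialReadHint(vma) expands to vma->vm_flags & VM_SEQ_READ:

      	if (ptep_clear_flush_young_notify(vma, address, pte)) {
      		/* ignore references through sequential-read mappings:
      		 * such pages stay reclaim candidates unless some other,
      		 * normal mapping references them too */
      		if (likely(!VM_SequentialReadHint(vma)))
      			referenced++;
      	}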
    • mm: don't mark_page_accessed in fault path · bf3f3bc5
      Nick Piggin authored
      Doing a mark_page_accessed at fault-time, then doing SetPageReferenced at
      unmap-time if the pte is young has a number of problems.
      
      mark_page_accessed is supposed to be roughly the equivalent of a young pte
      for unmapped references. Unfortunately it doesn't come with any context:
      after being called, reclaim doesn't know who or why the page was touched.
      
      So calling mark_page_accessed not only adds extra lru or PG_referenced
      manipulations for pages that are already going to have pte_young ptes anyway,
      but it also adds these references which are difficult to work with from the
      context of vma specific references (eg. MADV_SEQUENTIAL pte_young may not
      wish to contribute to the page being referenced).
      
      Then, simply doing SetPageReferenced when zapping a pte and finding it is
      young, is not a really good solution either. SetPageReferenced does not
      correctly promote the page to the active list for example. So after removing
      mark_page_accessed from the fault path, several mmap()+touch+munmap() would
      have a very different result from several read(2) calls for example, which
      is not really desirable.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Johannes Weiner <hannes@saeurebad.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bf3f3bc5
  7. 06 January 2009, 1 commit
    • inode->i_op is never NULL · acfa4380
      Al Viro authored
      We used to have rather schizophrenic set of checks for NULL ->i_op even
      though it had been eliminated years ago.  You'd need to go out of your
      way to set it to NULL explicitly _and_ a bunch of code would die on
      such inodes anyway.  After killing two remaining places that still
      did that bogosity, all that crap can go away.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      acfa4380
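      The effect on callers is mechanical; a sketch of the before/after
      pattern, using ->getattr as an arbitrary example method:

      	/* before: defensive NULL check on ->i_op itself */
      	if (inode->i_op && inode->i_op->getattr)
      		error = inode->i_op->getattr(mnt, dentry, stat);

      	/* after: ->i_op always points at an inode_operations table;
      	 * only the individual method pointers may be NULL */
      	if (inode->i_op->getattr)
      		error = inode->i_op->getattr(mnt, dentry, stat);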
  8. 20 December 2008, 3 commits
  9. 19 December 2008, 3 commits
  10. 21 October 2008, 1 commit
  11. 20 October 2008, 5 commits