1. 16 Aug 2010, 1 commit
• mm: fix up some user-visible effects of the stack guard page · d7824370
Authored by Linus Torvalds
      This commit makes the stack guard page somewhat less visible to user
      space. It does this by:
      
       - not showing the guard page in /proc/<pid>/maps
      
         It looks like lvm-tools will actually read /proc/self/maps to figure
         out where all its mappings are, and effectively do a specialized
         "mlockall()" in user space.  By not showing the guard page as part of
         the mapping (by just adding PAGE_SIZE to the start for grows-up
         pages), lvm-tools ends up not being aware of it.
      
       - by also teaching the _real_ mlock() functionality not to try to lock
         the guard page.
      
         That would just expand the mapping down to create a new guard page,
         so there really is no point in trying to lock it in place.
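
A minimal sketch of the /proc/<pid>/maps side of this (illustrative
only -- the helper below is hypothetical, not the actual code of this
commit):

	/* Report the mapping as starting one page later when its first
	 * page is the implicit stack guard page, so user space never
	 * sees the guard page at all. */
	static unsigned long visible_vma_start(struct vm_area_struct *vma)
	{
		unsigned long start = vma->vm_start;

		if (vma_has_guard_page(vma))	/* hypothetical helper */
			start += PAGE_SIZE;	/* hide the guard page */
		return start;
	}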
      
      It would perhaps be nice to show the guard page specially in
      /proc/<pid>/maps (or at least mark grow-down segments some way), but
      let's not open ourselves up to more breakage by user space from programs
that depend on the exact details of the 'maps' file.
      
      Special thanks to Henrique de Moraes Holschuh for diving into lvm-tools
      source code to see what was going on with the whole new warning.
      
Reported-and-tested-by: François Valenduc <francois.valenduc@tvcablenet.be>
Reported-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2. 26 Mar 2010, 1 commit
• x86, perf, bts, mm: Delete the never used BTS-ptrace code · faa4602e
Authored by Peter Zijlstra
      Support for the PMU's BTS features has been upstreamed in
      v2.6.32, but we still have the old and disabled ptrace-BTS,
as Linus noticed not so long ago.
      
      It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without
      regard for other uses (perf) and doesn't provide the flexibility
      needed for perf either.
      
Its only users are ptrace-block-step and ptrace-bts; ptrace-bts was
never used, and ptrace-block-step can be implemented using a much
simpler approach.
      
      So axe all 3000 lines of it. That includes the *locked_memory*()
      APIs in mm/mlock.c as well.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Markus Metzger <markus.t.metzger@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <20100325135413.938004390@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
3. 07 Mar 2010, 1 commit
4. 16 Dec 2009, 3 commits
5. 22 Sep 2009, 3 commits
• mm: m(un)lock avoid ZERO_PAGE · 6e919717
Authored by Hugh Dickins
      I'm still reluctant to clutter __get_user_pages() with another flag, just
      to avoid touching ZERO_PAGE count in mlock(); though we can add that later
      if it shows up as an issue in practice.
      
      But when mlocking, we can test page->mapping slightly earlier, to avoid
      the potentially bouncy rescheduling of lock_page on ZERO_PAGE - mlock
      didn't lock_page in olden ZERO_PAGE days, so we might have regressed.
      
      And when munlocking, it turns out that FOLL_DUMP coincidentally does
      what's needed to avoid all updates to ZERO_PAGE, so use that here also.
      Plus add comment suggested by KAMEZAWA Hiroyuki.
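
Roughly, the munlock side described above amounts to this (a sketch,
not the literal patch):

	/* With FOLL_DUMP, follow_page() hands back an ERR_PTR for
	 * ZERO_PAGE instead of the page itself, so munlock never bumps
	 * ZERO_PAGE's refcount and never takes its page lock. */
	page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
	if (IS_ERR_OR_NULL(page))
		continue;	/* absent page or ZERO_PAGE: nothing to do */
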
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@suse.de>
Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mm: FOLL flags for GUP flags · 58fa879e
Authored by Hugh Dickins
      __get_user_pages() has been taking its own GUP flags, then processing
      them into FOLL flags for follow_page().  Though oddly named, the FOLL
      flags are more widely used, so pass them to __get_user_pages() now.
      Sorry, VM flags, VM_FAULT flags and FAULT_FLAGs are still distinct.
      
      (The patch to __get_user_pages() looks peculiar, with both gup_flags
      and foll_flags: the gup_flags remain constant; but as before there's
      an exceptional case, out of scope of the patch, in which foll_flags
      per page have FOLL_WRITE masked off.)
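
For orientation, the reworked entry point looks roughly like this
(signature abbreviated and approximate for that era's code -- treat it
as an assumption, not a quotation):

	/* Callers now pass FOLL_* flags straight through; no separate
	 * GUP-only flag vocabulary to translate for follow_page(). */
	int __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
			     unsigned long start, int nr_pages,
			     unsigned int gup_flags, /* FOLL_WRITE|FOLL_TOUCH|... */
			     struct page **pages, struct vm_area_struct **vmas);
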
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mm: munlock use follow_page · 408e82b7
Authored by Hugh Dickins
      Hiroaki Wakabayashi points out that when mlock() has been interrupted
      by SIGKILL, the subsequent munlock() takes unnecessarily long because
      its use of __get_user_pages() insists on faulting in all the pages
      which mlock() never reached.
      
      It's worse than slowness if mlock() is terminated by Out Of Memory kill:
      the munlock_vma_pages_all() in exit_mmap() insists on faulting in all the
      pages which mlock() could not find memory for; so innocent bystanders are
      killed too, and perhaps the system hangs.
      
      __get_user_pages() does a lot that's silly for munlock(): so remove the
      munlock option from __mlock_vma_pages_range(), and use a simple loop of
      follow_page()s in munlock_vma_pages_range() instead; ignoring absent
      pages, and not marking present pages as accessed or dirty.
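
The replacement loop is essentially this shape (a simplified sketch,
not the exact code):

	/* Walk the range page by page; skip holes that mlock() never
	 * faulted in, and avoid marking present pages accessed or dirty. */
	for (addr = start; addr < end; addr += PAGE_SIZE) {
		struct page *page = follow_page(vma, addr, FOLL_GET);

		if (!page)
			continue;	/* absent page: mlock() never got here */
		munlock_vma_page(page);	/* clear mlock state, back to LRU */
		put_page(page);
	}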
      
      (Change munlock() to only go so far as mlock() reached?  That does not
      work out, given the convention that mlock() claims complete success even
      when it has to give up early - in part so that an underlying file can be
      extended later, and those pages locked which earlier would give SIGBUS.)
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: <stable@kernel.org>
Acked-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Hiroaki Wakabayashi <primulaelatior@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
6. 17 Jun 2009, 1 commit
7. 24 Apr 2009, 1 commit
8. 08 Apr 2009, 1 commit
9. 07 Apr 2009, 1 commit
10. 11 Feb 2009, 1 commit
11. 09 Feb 2009, 1 commit
12. 02 Feb 2009, 1 commit
• Manually revert "mlock: downgrade mmap sem while populating mlocked regions" · 27421e21
Authored by Linus Torvalds
      This essentially reverts commit 8edb08ca.
      
      It downgraded our mmap semaphore to a read-lock while mlocking pages, in
      order to allow other threads (and external accesses like "ps" et al) to
      walk the vma lists and take page faults etc.  Which is a nice idea, but
      the implementation does not work.
      
      Because we cannot upgrade the lock back to a write lock without
      releasing the mmap semaphore, the code had to release the lock entirely
      and then re-take it as a writelock.  However, that meant that the caller
      possibly lost the vma chain that it was following, since now another
      thread could come in and mmap/munmap the range.
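
In miniature, the problem is that rw-semaphores have a downgrade
primitive but no upgrade, so "upgrading" has to be spelled like this
(an illustrative sketch of the reverted pattern, not the code itself):

	downgrade_write(&mm->mmap_sem);	/* write -> read: safe, atomic */
	/* ... fault in the mlocked pages under the read lock ... */
	up_read(&mm->mmap_sem);		/* no read -> write upgrade exists, */
	down_write(&mm->mmap_sem);	/* so drop the lock and re-take it.. */
	/* ...and in that window another thread may have unmapped or
	 * replaced the vma this code was walking. */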
      
      The code tried to work around that by just looking up the vma again and
      erroring out if that happened, but quite frankly, that was just a buggy
      hack that doesn't actually protect against anything (the other thread
      could just have replaced the vma with another one instead of totally
      unmapping it).
      
      The only way to downgrade to a read map _reliably_ is to do it at the
      end, which is likely the right thing to do: do all the 'vma' operations
      with the write-lock held, then downgrade to a read after completing them
      all, and then do the "populate the newly mlocked regions" while holding
      just the read lock.  And then just drop the read-lock and return to user
      space.
      
      The (perhaps somewhat simpler) alternative is to just make all the
      callers of mlock_vma_pages_range() know that the mmap lock got dropped,
      and just re-grab the mmap semaphore if it needs to mlock more than one
      vma region.
      
      So we can do this "downgrade mmap sem while populating mlocked regions"
      thing right, but the way it was done here was absolutely not correct.
      Thus the revert, in the expectation that we will do it all correctly
      some day.
      
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
13. 14 Jan 2009, 2 commits
14. 07 Jan 2009, 1 commit
• mm: make get_user_pages() interruptible · 4779280d
Authored by Ying Han
      The initial implementation of checking TIF_MEMDIE covers the cases of OOM
      killing.  If the process has been OOM killed, the TIF_MEMDIE is set and it
returns immediately.  This patch includes:
      
1.  add the case where the SIGKILL is sent by a user process.  The
   process can keep trying to get_user_pages() unlimited memory even if
   a user process has sent it a SIGKILL (perhaps a monitor found that
   the process exceeded its memory limit and tried to kill it).  In the
   old implementation, the SIGKILL wasn't handled until
   get_user_pages() returned.
      
2.  change the return value to ERESTARTSYS.  It makes no sense to
   return ENOMEM when get_user_pages() returns early because it got a
   SIGKILL signal.  Given that the general convention for a system call
   interrupted by a signal is ERESTARTNOSYS, the current return value
   is consistent with that.
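
A minimal sketch of the check this describes (the helper shown here is
the modern spelling; the original patch open-coded an equivalent test,
so treat the exact names as assumptions):

	/* Inside the __get_user_pages() fault loop: stop faulting as soon
	 * as a SIGKILL is pending, returning the pages handled so far or,
	 * if none, the signal-interruption error described above. */
	if (fatal_signal_pending(current))
		return i ? i : -ERESTARTSYS;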
      
      Lee:
      
      An unfortunate side effect of "make-get_user_pages-interruptible" is that
      it prevents a SIGKILL'd task from munlock-ing pages that it had mlocked,
      resulting in freeing of mlocked pages.  Freeing of mlocked pages, in
      itself, is not so bad.  We just count them now--altho' I had hoped to
      remove this stat and add PG_MLOCKED to the free pages flags check.
      
      However, consider pages in shared libraries mapped by more than one task
      that a task mlocked--e.g., via mlockall().  If the task that mlocked the
      pages exits via SIGKILL, these pages would be left mlocked and
      unevictable.
      
      Proposed fix:
      
      Add another GUP flag to ignore sigkill when calling get_user_pages from
munlock()--similar to Kosaki Motohiro's IGNORE_VMA_PERMISSIONS flag for
      the same purpose.  We are not actually allocating memory in this case,
      which "make-get_user_pages-interruptible" intends to avoid.  We're just
      munlocking pages that are already resident and mapped, and we're reusing
      get_user_pages() to access those pages.
      
??  Maybe we should combine IGNORE_VMA_PERMISSIONS and _IGNORE_SIGKILL
into a single flag: GUP_FLAGS_MUNLOCK ???
      
      [Lee.Schermerhorn@hp.com: ignore sigkill in get_user_pages during munlock]
Signed-off-by: Paul Menage <menage@google.com>
Signed-off-by: Ying Han <yinghan@google.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Rohit Seth <rohitseth@google.com>
      Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15. 20 Dec 2008, 1 commit
16. 17 Nov 2008, 1 commit
17. 13 Nov 2008, 1 commit
• mm: remove lru_add_drain_all() from the munlock path · 8891d6da
Authored by KOSAKI Motohiro
lockdep warns with the following message at boot time on one of my test
machines.  In short, schedule_on_each_cpu() shouldn't be called while
the task holds mmap_sem.
      
Actually, lru_add_drain_all() exists to prevent unevictable pages from
staying on the reclaimable LRU list.  But the current unevictable code
can rescue unevictable pages even while they sit on a reclaimable list.

So removing it is better.
      
In addition, this patch adds lru_add_drain_all() to sys_mlock() and
sys_mlockall().  It isn't a must, but it reduces failures to move pages
to the unevictable list.  Such failures can be rescued by vmscan later,
but reducing them up front is still better.
      
Note: if the above rescuing happens, the Mlocked and Unevictable fields
in /proc/meminfo can mismatch, but that doesn't cause any real trouble.
      
      =======================================================
      [ INFO: possible circular locking dependency detected ]
      2.6.28-rc2-mm1 #2
      -------------------------------------------------------
      lvm/1103 is trying to acquire lock:
       (&cpu_hotplug.lock){--..}, at: [<c0130789>] get_online_cpus+0x29/0x50
      
      but task is already holding lock:
       (&mm->mmap_sem){----}, at: [<c01878ae>] sys_mlockall+0x4e/0xb0
      
      which lock already depends on the new lock.
      
      the existing dependency chain (in reverse order) is:
      
      -> #3 (&mm->mmap_sem){----}:
             [<c0153da2>] check_noncircular+0x82/0x110
             [<c0185e6a>] might_fault+0x4a/0xa0
             [<c0156161>] validate_chain+0xb11/0x1070
             [<c0185e6a>] might_fault+0x4a/0xa0
             [<c0156923>] __lock_acquire+0x263/0xa10
             [<c015714c>] lock_acquire+0x7c/0xb0			(*) grab mmap_sem
             [<c0185e6a>] might_fault+0x4a/0xa0
             [<c0185e9b>] might_fault+0x7b/0xa0
             [<c0185e6a>] might_fault+0x4a/0xa0
             [<c0294dd0>] copy_to_user+0x30/0x60
             [<c01ae3ec>] filldir+0x7c/0xd0
             [<c01e3a6a>] sysfs_readdir+0x11a/0x1f0			(*) grab sysfs_mutex
             [<c01ae370>] filldir+0x0/0xd0
             [<c01ae370>] filldir+0x0/0xd0
             [<c01ae4c6>] vfs_readdir+0x86/0xa0			(*) grab i_mutex
             [<c01ae75b>] sys_getdents+0x6b/0xc0
             [<c010355a>] syscall_call+0x7/0xb
             [<ffffffff>] 0xffffffff
      
      -> #2 (sysfs_mutex){--..}:
             [<c0153da2>] check_noncircular+0x82/0x110
             [<c01e3d2c>] sysfs_addrm_start+0x2c/0xc0
             [<c0156161>] validate_chain+0xb11/0x1070
             [<c01e3d2c>] sysfs_addrm_start+0x2c/0xc0
             [<c0156923>] __lock_acquire+0x263/0xa10
             [<c015714c>] lock_acquire+0x7c/0xb0			(*) grab sysfs_mutex
             [<c01e3d2c>] sysfs_addrm_start+0x2c/0xc0
             [<c04f8b55>] mutex_lock_nested+0xa5/0x2f0
             [<c01e3d2c>] sysfs_addrm_start+0x2c/0xc0
             [<c01e3d2c>] sysfs_addrm_start+0x2c/0xc0
             [<c01e3d2c>] sysfs_addrm_start+0x2c/0xc0
             [<c01e422f>] create_dir+0x3f/0x90
             [<c01e42a9>] sysfs_create_dir+0x29/0x50
             [<c04faaf5>] _spin_unlock+0x25/0x40
             [<c028f21d>] kobject_add_internal+0xcd/0x1a0
             [<c028f37a>] kobject_set_name_vargs+0x3a/0x50
             [<c028f41d>] kobject_init_and_add+0x2d/0x40
             [<c019d4d2>] sysfs_slab_add+0xd2/0x180
             [<c019d580>] sysfs_add_func+0x0/0x70
             [<c019d5dc>] sysfs_add_func+0x5c/0x70			(*) grab slub_lock
             [<c01400f2>] run_workqueue+0x172/0x200
             [<c014008f>] run_workqueue+0x10f/0x200
             [<c0140bd0>] worker_thread+0x0/0xf0
             [<c0140c6c>] worker_thread+0x9c/0xf0
             [<c0143c80>] autoremove_wake_function+0x0/0x50
             [<c0140bd0>] worker_thread+0x0/0xf0
             [<c0143972>] kthread+0x42/0x70
             [<c0143930>] kthread+0x0/0x70
             [<c01042db>] kernel_thread_helper+0x7/0x1c
             [<ffffffff>] 0xffffffff
      
      -> #1 (slub_lock){----}:
             [<c0153d2d>] check_noncircular+0xd/0x110
             [<c04f650f>] slab_cpuup_callback+0x11f/0x1d0
             [<c0156161>] validate_chain+0xb11/0x1070
             [<c04f650f>] slab_cpuup_callback+0x11f/0x1d0
             [<c015433d>] mark_lock+0x35d/0xd00
             [<c0156923>] __lock_acquire+0x263/0xa10
             [<c015714c>] lock_acquire+0x7c/0xb0
             [<c04f650f>] slab_cpuup_callback+0x11f/0x1d0
             [<c04f93a3>] down_read+0x43/0x80
             [<c04f650f>] slab_cpuup_callback+0x11f/0x1d0		(*) grab slub_lock
             [<c04f650f>] slab_cpuup_callback+0x11f/0x1d0
             [<c04fd9ac>] notifier_call_chain+0x3c/0x70
             [<c04f5454>] _cpu_up+0x84/0x110
             [<c04f552b>] cpu_up+0x4b/0x70				(*) grab cpu_hotplug.lock
             [<c06d1530>] kernel_init+0x0/0x170
             [<c06d15e5>] kernel_init+0xb5/0x170
             [<c06d1530>] kernel_init+0x0/0x170
             [<c01042db>] kernel_thread_helper+0x7/0x1c
             [<ffffffff>] 0xffffffff
      
      -> #0 (&cpu_hotplug.lock){--..}:
             [<c0155bff>] validate_chain+0x5af/0x1070
             [<c040f7e0>] dev_status+0x0/0x50
             [<c0156923>] __lock_acquire+0x263/0xa10
             [<c015714c>] lock_acquire+0x7c/0xb0
             [<c0130789>] get_online_cpus+0x29/0x50
             [<c04f8b55>] mutex_lock_nested+0xa5/0x2f0
             [<c0130789>] get_online_cpus+0x29/0x50
             [<c0130789>] get_online_cpus+0x29/0x50
             [<c017bc30>] lru_add_drain_per_cpu+0x0/0x10
             [<c0130789>] get_online_cpus+0x29/0x50			(*) grab cpu_hotplug.lock
             [<c0140cf2>] schedule_on_each_cpu+0x32/0xe0
             [<c0187095>] __mlock_vma_pages_range+0x85/0x2c0
             [<c0156945>] __lock_acquire+0x285/0xa10
             [<c0188f09>] vma_merge+0xa9/0x1d0
             [<c0187450>] mlock_fixup+0x180/0x200
             [<c0187548>] do_mlockall+0x78/0x90			(*) grab mmap_sem
             [<c01878e1>] sys_mlockall+0x81/0xb0
             [<c010355a>] syscall_call+0x7/0xb
             [<ffffffff>] 0xffffffff
      
      other info that might help us debug this:
      
      1 lock held by lvm/1103:
       #0:  (&mm->mmap_sem){----}, at: [<c01878ae>] sys_mlockall+0x4e/0xb0
      
      stack backtrace:
      Pid: 1103, comm: lvm Not tainted 2.6.28-rc2-mm1 #2
      Call Trace:
       [<c01555fc>] print_circular_bug_tail+0x7c/0xd0
       [<c0155bff>] validate_chain+0x5af/0x1070
       [<c040f7e0>] dev_status+0x0/0x50
       [<c0156923>] __lock_acquire+0x263/0xa10
       [<c015714c>] lock_acquire+0x7c/0xb0
       [<c0130789>] get_online_cpus+0x29/0x50
       [<c04f8b55>] mutex_lock_nested+0xa5/0x2f0
       [<c0130789>] get_online_cpus+0x29/0x50
       [<c0130789>] get_online_cpus+0x29/0x50
       [<c017bc30>] lru_add_drain_per_cpu+0x0/0x10
       [<c0130789>] get_online_cpus+0x29/0x50
       [<c0140cf2>] schedule_on_each_cpu+0x32/0xe0
       [<c0187095>] __mlock_vma_pages_range+0x85/0x2c0
       [<c0156945>] __lock_acquire+0x285/0xa10
       [<c0188f09>] vma_merge+0xa9/0x1d0
       [<c0187450>] mlock_fixup+0x180/0x200
       [<c0187548>] do_mlockall+0x78/0x90
       [<c01878e1>] sys_mlockall+0x81/0xb0
       [<c010355a>] syscall_call+0x7/0xb
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Tested-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
18. 20 Oct 2008, 5 commits
• mlock: make mlock error return Posixly Correct · 9978ad58
Authored by Lee Schermerhorn
      Rework Posix error return for mlock().
      
POSIX requires error codes for the mlock*() system calls under some
conditions that differ from what low-level kernel functions, such as
get_user_pages(), return for those conditions.  For more info, see:
      
      http://marc.info/?l=linux-kernel&m=121750892930775&w=2
      
This patch provides the same translation of get_user_pages() error
codes to POSIX-specified error codes in the context of the mlock rework
for the unevictable LRU.
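
In sketch form, the translation described above looks like this (shape
assumed from the description, not quoted from the patch):

	static long mlock_posix_error(long retval)
	{
		if (retval == -EFAULT)		/* range not fully mapped */
			retval = -ENOMEM;	/* POSIX: bad address range */
		else if (retval == -ENOMEM)	/* pages couldn't be locked */
			retval = -EAGAIN;	/* POSIX: try again later */
		return retval;
	}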
      
      [akpm@linux-foundation.org: fix build]
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• vmstat: mlocked pages statistics · 5344b7e6
Authored by Nick Piggin
      Add NR_MLOCK zone page state, which provides a (conservative) count of
      mlocked pages (actually, the number of mlocked pages moved off the LRU).
      
      Reworked by lts to fit in with the modified mlock page support in the
      Reclaim Scalability series.
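
Accounting-wise this boils down to a pair of per-zone counter updates
(illustrative only; the call sites and exact helpers are assumptions):

	/* page culled off the LRU as mlocked */
	__mod_zone_page_state(page_zone(page), NR_MLOCK, 1);
	/* page rescued back to a normal LRU on munlock/truncate */
	__mod_zone_page_state(page_zone(page), NR_MLOCK, -1);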
      
      [kosaki.motohiro@jp.fujitsu.com: fix incorrect Mlocked field of /proc/meminfo]
      [lee.schermerhorn@hp.com: mlocked-pages: add event counting with statistics]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mmap: handle mlocked pages during map, remap, unmap · ba470de4
Authored by Rik van Riel
      Originally by Nick Piggin <npiggin@suse.de>
      
      Remove mlocked pages from the LRU using "unevictable infrastructure"
      during mmap(), munmap(), mremap() and truncate().  Try to move back to
      normal LRU lists on munmap() when last mlocked mapping removed.  Remove
      PageMlocked() status when page truncated from file.
      
      [akpm@linux-foundation.org: cleanup]
      [kamezawa.hiroyu@jp.fujitsu.com: fix double unlock_page()]
      [kosaki.motohiro@jp.fujitsu.com: split LRU: munlock rework]
      [lee.schermerhorn@hp.com: mlock: fix __mlock_vma_pages_range comment block]
      [akpm@linux-foundation.org: remove bogus kerneldoc token]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mlock: downgrade mmap sem while populating mlocked regions · 8edb08ca
Authored by Lee Schermerhorn
We need to hold the mmap_sem for write to initiate mlock()/munlock()
      because we may need to merge/split vmas.  However, this can lead to very
      long lock hold times attempting to fault in a large memory region to mlock
      it into memory.  This can hold off other faults against the mm
      [multithreaded tasks] and other scans of the mm, such as via /proc.  To
      alleviate this, downgrade the mmap_sem to read mode during the population
      of the region for locking.  This is especially the case if we need to
      reclaim memory to lock down the region.  We [probably?] don't need to do
      this for unlocking as all of the pages should be resident--they're already
      mlocked.
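
The intended locking sequence, sketched (the primitives are real; the
sequence is simplified from the description here):

	down_write(&mm->mmap_sem);
	/* merge/split vmas for the mlock request */
	downgrade_write(&mm->mmap_sem);	/* let faults and /proc readers in */
	/* fault in and mlock the pages under the read lock */
	up_read(&mm->mmap_sem);
	down_write(&mm->mmap_sem);	/* callers expect write mode back */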
      
Now, the callers of the mlock functions [mlock_fixup() and
      mlock_vma_pages_range()] expect the mmap_sem to be returned in write mode.
       Changing all callers appears to be way too much effort at this point.
      So, restore write mode before returning.  Note that this opens a window
      where the mmap list could change in a multithreaded process.  So, at least
      for mlock_fixup(), where we could be called in a loop over multiple vmas,
      we check that a vma still exists at the start address and that vma still
      covers the page range [start,end).  If not, we return an error, -EAGAIN,
      and let the caller deal with it.
      
      Return -EAGAIN from mlock_vma_pages_range() function and mlock_fixup() if
      the vma at 'start' disappears or changes so that the page range
      [start,end) is no longer contained in the vma.  Again, let the caller deal
      with it.  Looks like only sys_remap_file_pages() [via mmap_region()]
      should actually care.
      
      With this patch, I no longer see processes like ps(1) blocked for seconds
      or minutes at a time waiting for a large [multiple gigabyte] region to be
locked down.  However, I occasionally see delays while unlocking or
      unmapping a large mlocked region.  Should we also downgrade the mmap_sem
      for the unlock path?
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mlock: mlocked pages are unevictable · b291f000
Authored by Nick Piggin
      Make sure that mlocked pages also live on the unevictable LRU, so kswapd
      will not scan them over and over again.
      
      This is achieved through various strategies:
      
      1) add yet another page flag--PG_mlocked--to indicate that
         the page is locked for efficient testing in vmscan and,
         optionally, fault path.  This allows early culling of
         unevictable pages, preventing them from getting to
         page_referenced()/try_to_unmap().  Also allows separate
         accounting of mlock'd pages, as Nick's original patch
         did.
      
         Note:  Nick's original mlock patch used a PG_mlocked
         flag.  I had removed this in favor of the PG_unevictable
         flag + an mlock_count [new page struct member].  I
         restored the PG_mlocked flag to eliminate the new
         count field.
      
      2) add the mlock/unevictable infrastructure to mm/mlock.c,
         with internal APIs in mm/internal.h.  This is a rework
         of Nick's original patch to these files, taking into
         account that mlocked pages are now kept on unevictable
         LRU list.
      
      3) update vmscan.c:page_evictable() to check PageMlocked()
         and, if vma passed in, the vm_flags.  Note that the vma
         will only be passed in for new pages in the fault path;
         and then only if the "cull unevictable pages in fault
         path" patch is included.
      
      4) add try_to_unlock() to rmap.c to walk a page's rmap and
         ClearPageMlocked() if no other vmas have it mlocked.
         Reuses as much of try_to_unmap() as possible.  This
         effectively replaces the use of one of the lru list links
as an mlock count.  If this mechanism lets pages in mlocked
         vmas leak through w/o PG_mlocked set [I don't know that it
         does], we should catch them later in try_to_unmap().  One
         hopes this will be rare, as it will be relatively expensive.
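
Point (3) above, as a simplified sketch (argument list and naming
approximate the description, not the final code):

	static int page_evictable_sketch(struct page *page,
					 struct vm_area_struct *vma)
	{
		if (PageMlocked(page))
			return 0;		/* culled: known mlocked */
		if (vma && (vma->vm_flags & VM_LOCKED))
			return 0;		/* new page in an mlocked vma */
		return 1;
	}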
      
      Original mm/internal.h, mm/rmap.c and mm/mlock.c changes:
Signed-off-by: Nick Piggin <npiggin@suse.de>
      
      splitlru: introduce __get_user_pages():
      
  The new munlock processing needs GUP_FLAGS_IGNORE_VMA_PERMISSIONS,
  because the current get_user_pages() can't grab PROT_NONE pages, and
  therefore PROT_NONE pages couldn't be munlocked.
      
      [akpm@linux-foundation.org: fix this for pagemap-pass-mm-into-pagewalkers.patch]
      [akpm@linux-foundation.org: untangle patch interdependencies]
      [akpm@linux-foundation.org: fix things after out-of-order merging]
      [hugh@veritas.com: fix page-flags mess]
      [lee.schermerhorn@hp.com: fix munlock page table walk - now requires 'mm']
      [kosaki.motohiro@jp.fujitsu.com: build fix]
[kosaki.motohiro@jp.fujitsu.com: fix truncate race and several comments]
      [kosaki.motohiro@jp.fujitsu.com: splitlru: introduce __get_user_pages()]
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Matt Mackall <mpm@selenic.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
19. 05 Aug 2008, 1 commit
• mlock() fix return values · a477097d
Authored by KOSAKI Motohiro
      Halesh says:
      
Please find below the test case provided to test mlock.
      
      Test Case :
      ===========================
      
      #include <sys/resource.h>
      #include <stdio.h>
      #include <sys/stat.h>
      #include <sys/types.h>
      #include <unistd.h>
      #include <sys/mman.h>
      #include <fcntl.h>
      #include <errno.h>
      #include <stdlib.h>
      
      int main(void)
      {
        int fd,ret, i = 0;
        char *addr, *addr1 = NULL;
        unsigned int page_size;
        struct rlimit rlim;
      
        if (0 != geteuid())
        {
         printf("Execute this pgm as root\n");
         exit(1);
        }
      
        /* create a file */
        if ((fd = open("mmap_test.c",O_RDWR|O_CREAT,0755)) == -1)
        {
         printf("cant create test file\n");
         exit(1);
        }
      
        page_size = sysconf(_SC_PAGE_SIZE);
      
        /* set the MEMLOCK limit */
        rlim.rlim_cur = 2000;
        rlim.rlim_max = 2000;
      
        if ((ret = setrlimit(RLIMIT_MEMLOCK,&rlim)) != 0)
        {
         printf("Cant change limit values\n");
         exit(1);
        }
      
        addr = 0;
  while (1)
  {
    /* map a page into memory each time */
    if ((addr = (char *) mmap(addr, page_size, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0)) == MAP_FAILED)
    {
      printf("cant do mmap on file\n");
      exit(1);
    }

    if (0 == i)
      addr1 = addr;
    i++;
    errno = 0;
    /* lock the mapped memory pagewise */
    if ((ret = mlock((char *)addr, 1500)) == -1)
    {
      printf("errno value is %d\n", errno);
      printf("cant lock mapped region\n");
      exit(1);
    }
    addr = addr + page_size;
  }
}
      ======================================================
      
This test case results in an mlock() failure with errno 14, which is
EFAULT, but nowhere is it specified that mlock() will return EFAULT.
When I tested the same on older kernels such as 2.6.18, I got the
correct result, i.e. errno 12 (ENOMEM).
      
I think that in the mlock(2) source, setting errno to ENOMEM has been
missed in do_mlock() on mlock_fixup() failure.
      
SUSv3 requires the following behavior from mlock(2).
      
      [ENOMEM]
          Some or all of the address range specified by the addr and
          len arguments does not correspond to valid mapped pages
          in the address space of the process.
      
      [EAGAIN]
          Some or all of the memory identified by the operation could not
          be locked when the call was made.
      
This rule isn't so nice and is slightly strange, but many people think
POSIX/SUS compliance is important.
Reported-by: Halesh Sadashiv <halesh.sadashiv@ap.sony.com>
Tested-by: Halesh Sadashiv <halesh.sadashiv@ap.sony.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: <stable@kernel.org>		[2.6.25.x, 2.6.26.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
20. 17 Jul 2007, 1 commit
21. 22 May 2007, 1 commit
• Detach sched.h from mm.h · e8edc6e0
Authored by Alexey Dobriyan
First thing mm.h does is include sched.h, solely for the can_do_mlock()
inline function, which dereferences "current" inside.  By dealing with
can_do_mlock(), mm.h can be detached from sched.h, which is good.  See
below for why.
      
      This patch
      a) removes unconditional inclusion of sched.h from mm.h
b) makes can_do_mlock() a normal function in mm/mlock.c (sketched below)
      c) exports can_do_mlock() to not break compilation
      d) adds sched.h inclusions back to files that were getting it indirectly.
      e) adds less bloated headers to some files (asm/signal.h, jiffies.h) that were
         getting them indirectly
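
Steps (b) and (c) amount to moving roughly this out of the header and
into mm/mlock.c (body approximated for illustration -- the "current"
dereference is the whole reason sched.h was needed):

	int can_do_mlock(void)
	{
		if (capable(CAP_IPC_LOCK))
			return 1;
		if (current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur != 0)
			return 1;
		return 0;
	}
	EXPORT_SYMBOL(can_do_mlock);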
      
      Net result is:
      a) mm.h users would get less code to open, read, preprocess, parse, ... if
         they don't need sched.h
b) sched.h stops being a dependency for a significant number of files:
         on x86_64 allmodconfig touching sched.h results in recompile of 4083 files,
         after patch it's only 3744 (-8.3%).
      
      Cross-compile tested on
      
      	all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
      	alpha alpha-up
      	arm
      	i386 i386-up i386-defconfig i386-allnoconfig
      	ia64 ia64-up
      	m68k
      	mips
      	parisc parisc-up
      	powerpc powerpc-up
      	s390 s390-up
      	sparc sparc-up
      	sparc64 sparc64-up
      	um-x86_64
      	x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig
      
      as well as my two usual configs.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
22. 08 Dec 2006, 1 commit
23. 12 Jan 2006, 1 commit
24. 17 Apr 2005, 1 commit
• Linux-2.6.12-rc2 · 1da177e4
Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!