  1. 10 Jan 2006 (1 commit)
  2. 09 Jan 2006 (1 commit)
  3. 07 Jan 2006 (3 commits)
    • [PATCH] mm: pfault optimisation · 41e9b63b
      Committed by Nick Piggin
      This atomic operation is superfluous: the pte will be added with the
      referenced bit set, and the page will be referenced through this mapping after
      the page fault handler returns anyway.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      41e9b63b
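      A minimal sketch of the idea, reconstructed from the description above
      rather than quoted from the diff (helper names are the usual ones from
      that era's do_anonymous_page):

      /* Sketch: the new pte is built with the young/accessed bit set, so an
       * extra atomic mark_page_accessed(page) on this path buys nothing. */
      entry = mk_pte(page, vma->vm_page_prot);
      entry = pte_mkyoung(entry);
      if (write_access)
              entry = maybe_mkwrite(pte_mkdirty(entry), vma);
      set_pte_at(mm, address, page_table, entry);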
    • [PATCH] mm: rmap optimisation · 9617d95e
      Committed by Nick Piggin
      Optimise rmap functions by minimising atomic operations when we know there
      will be no concurrent modifications.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      9617d95e
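      A hedged sketch of the kind of saving described, assuming the
      page_add_new_anon_rmap() helper this commit is associated with
      (a reconstruction, not the literal diff):

      /* Sketch: a freshly allocated page cannot be concurrently mapped, so
       * _mapcount (which starts at -1) can simply be set, instead of the
       * atomic increment-and-test needed for already-visible pages. */
      void page_add_new_anon_rmap(struct page *page,
                                  struct vm_area_struct *vma,
                                  unsigned long address)
      {
              atomic_set(&page->_mapcount, 0);  /* was an atomic inc */
              __page_set_anon_rmap(page, vma, address);
      }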
    • [PATCH] madvise(MADV_REMOVE): remove pages from tmpfs shm backing store · f6b3ec23
      Committed by Badari Pulavarty
      Here is the patch to implement madvise(MADV_REMOVE), which frees up a
      given range of pages & its associated backing store.  The current
      implementation supports only shmfs/tmpfs; other filesystems return
      -ENOSYS.
      
      "Some app allocates large tmpfs files, then when some task quits and some
      client disconnect, some memory can be released.  However the only way to
      release tmpfs-swap is to MADV_REMOVE". - Andrea Arcangeli
      
      Databases want to use this feature to drop a section of their bufferpool
      (shared memory segments) - without writing back to disk/swap space.
      
      This feature is also useful for supporting hot-plug memory on UML.
      
      Concerns raised by Andrew Morton:
      
      - "We have no plan for holepunching!  If we _do_ have such a plan (or
        might in the future) then what would the API look like?  I think
        sys_holepunch(fd, start, len), so we should start out with that."
      
      - Using madvise is very weird, because people will ask "why do I need to
        mmap my file before I can stick a hole in it?"
      
      - None of the other madvise operations call into the filesystem in this
        manner.  A broad question is: is this capability an MM operation or a
        filesystem operation?  truncate, for example, is a filesystem operation
        which sometimes has MM side-effects.  madvise is an mm operation and with
        this patch, it gains FS side-effects, only they're really, really
        significant ones.
      
      Comments:
      
      - Andrea suggested the fs operation too, but then it's more efficient to
        have it as an mm operation with fs side effects, because userspace
        doesn't immediately know the fd and physical offset of the range.  It's
        possible to fix that up in userland and use the fs operation, but it's
        more expensive; the vmas are already in the kernel and we can use them.
      
      Short-term plan & future direction:
      
      - We seem to need this interface only for shmfs/tmpfs files in the short
        term.  We have to add hooks into the filesystem for correctness and
        completeness.  This is what this patch does.
      
      - In the future, the plan is to support both the fs and mmap APIs.  This
        also involves implementing (other) filesystem-specific functions.
      
      - The current patch doesn't support VM_NONLINEAR; this can be addressed
        in the future.
      Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Andrea Arcangeli <andrea@suse.de>
      Cc: Michael Kerrisk <mtk-manpages@gmx.net>
      Cc: Ulrich Drepper <drepper@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f6b3ec23
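      A userspace usage sketch (illustrative only; the path is hypothetical,
      and offset/len must be page-aligned):

      #include <sys/mman.h>
      #include <fcntl.h>
      #include <unistd.h>

      /* Punch a hole in a tmpfs-backed file: frees the pages *and* the
       * backing store; filesystems without support fail with ENOSYS. */
      int punch_tmpfs_range(const char *path, off_t offset, size_t len)
      {
              int ret = -1;
              int fd = open(path, O_RDWR);      /* e.g. a file on /dev/shm */
              if (fd < 0)
                      return -1;
              char *p = mmap(NULL, offset + len, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
              if (p != MAP_FAILED) {
                      ret = madvise(p + offset, len, MADV_REMOVE);
                      munmap(p, offset + len);
              }
              close(fd);
              return ret;
      }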
  4. 17 Dec 2005 (1 commit)
  5. 13 Dec 2005 (1 commit)
    • get_user_pages: don't try to follow PFNMAP pages · 1ff80389
      Committed by Linus Torvalds
      Nick Piggin points out that a few drivers play games with VM_IO (why?
      who knows..) and thus a pfn-remapped area may not have that bit set even
      if remap_pfn_range() set it originally.
      
      So make it explicit in get_user_pages() that we don't follow VM_PFNMAP
      pages, since pretty much by definition they do not have a "struct page"
      associated with them.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      1ff80389
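      A hedged sketch of the check described (reconstructed from the text, not
      the literal diff):

      /* Sketch: in get_user_pages()'s vma loop.  VM_PFNMAP areas have no
       * struct page behind their ptes, so refuse to follow them at all;
       * VM_IO stays rejected as before. */
      if (vma->vm_flags & (VM_IO | VM_PFNMAP))
              return i ? : -EFAULT;   /* i = pages pinned so far */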
  6. 12 Dec 2005 (3 commits)
  7. 04 Dec 2005 (1 commit)
  8. 01 Dec 2005 (1 commit)
    • VM: add "vm_insert_page()" function · a145dd41
      Committed by Linus Torvalds
      This is what a lot of drivers will actually want to use to insert
      individual pages into a user VMA.  It doesn't have the old PageReserved
      restrictions of remap_pfn_range(), and it doesn't complain about partial
      remappings.
      
      The page you insert needs to be a nice clean kernel allocation, so you
      can't insert arbitrary page mappings with this, but that's not what
      people want.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a145dd41
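      Typical driver usage would look like this sketch (the driver name and
      wiring are hypothetical; vm_insert_page() is the interface added here):

      /* Sketch: an mmap() handler mapping one kernel-allocated page into
       * the user VMA.  The page must be an ordinary kernel allocation. */
      static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
      {
              struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
              if (!page)
                      return -ENOMEM;
              return vm_insert_page(vma, vma->vm_start, page);
      }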
  9. 30 Nov 2005 (7 commits)
  10. 29 Nov 2005 (3 commits)
    • [PATCH] Fix vma argument in get_user_pages() for gate areas · fa2a455b
      Committed by Nick Piggin
      The system call gate area handling called vm_normal_page() with the
      wrong vma (which was always NULL, and caused an oops).
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fa2a455b
    • [PATCH] Workaround for gcc 2.96 (undefined references) · e0f39591
      Committed by Alan Stern
        LD      .tmp_vmlinux1
      mm/built-in.o(.text+0x100d6): In function `copy_page_range':
      : undefined reference to `__pud_alloc'
      mm/built-in.o(.text+0x1010b): In function `copy_page_range':
      : undefined reference to `__pmd_alloc'
      mm/built-in.o(.text+0x11ef4): In function `__handle_mm_fault':
      : undefined reference to `__pud_alloc'
      fs/built-in.o(.text+0xc930): In function `install_arg_page':
      : undefined reference to `__pud_alloc'
      make: *** [.tmp_vmlinux1] Error 1
      
      Those missing references in mm/memory.c arise from this code in
      include/linux/mm.h, combined with the fact that __PGTABLE_PMD_FOLDED and
      __PGTABLE_PUD_FOLDED are both set and __ARCH_HAS_4LEVEL_HACK is not:
      
      /*
       * The following ifdef needed to get the 4level-fixup.h header to work.
       * Remove it when 4level-fixup.h has been removed.
       */
      #if defined(CONFIG_MMU) && !defined(__ARCH_HAS_4LEVEL_HACK)
      static inline pud_t *pud_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address)
      {
              return (unlikely(pgd_none(*pgd)) && __pud_alloc(mm, pgd, address))?
                      NULL: pud_offset(pgd, address);
      }
      
      static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
      {
              return (unlikely(pud_none(*pud)) && __pmd_alloc(mm, pud, address))?
                      NULL: pmd_offset(pud, address);
      }
      #endif /* CONFIG_MMU && !__ARCH_HAS_4LEVEL_HACK */
      
      With my configuration the pgd_none and pud_none routines are inlines
      returning a constant 0.  Apparently the old compiler avoids generating
      calls to __pud_alloc and __pmd_alloc but still lists them as undefined
      references in the module's symbol table.
      
      I don't know which change caused this problem.  I think it was added
      somewhere between 2.6.14 and 2.6.15-rc1, because I remember building
      several 2.6.14-rc kernels without difficulty.  However I can't point to an
      individual culprit.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      e0f39591
    • mm: re-architect the VM_UNPAGED logic · 6aab341e
      Committed by Linus Torvalds
      This replaces the (in my opinion horrible) VM_UNPAGED logic with very
      explicit support for a "remapped page range" aka VM_PFNMAP.  It allows a
      VM area to contain an arbitrary range of page table entries that the VM
      never touches, and never considers to be normal pages.
      
      Any user of "remap_pfn_range()" automatically gets this new
      functionality, and doesn't even have to mark the pages reserved or
      indeed mark them any other way.  It just works.  As a side effect, doing
      mmap() on /dev/mem works for arbitrary ranges.
      
      Sparc update from David in the next commit.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      6aab341e
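      The sort of driver code that now just works, as a sketch (names are
      hypothetical; remap_pfn_range() is the real interface):

      /* Sketch: an mmap() handler remapping device memory.  The area gets
       * flagged VM_PFNMAP internally; no PageReserved games required. */
      static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
      {
              unsigned long size = vma->vm_end - vma->vm_start;

              return remap_pfn_range(vma, vma->vm_start,
                                     vma->vm_pgoff,  /* pfn chosen by caller */
                                     size, vma->vm_page_prot);
      }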
  11. 23 Nov 2005 (5 commits)
    • [PATCH] unpaged: ZERO_PAGE in VM_UNPAGED · f57e88a8
      Committed by Hugh Dickins
      It's strange enough to be looking out for anonymous pages in VM_UNPAGED
      areas; let's not insert the ZERO_PAGE there - though whether it would
      matter will depend on what we decide about ZERO_PAGE refcounting.
      
      But whereas do_anonymous_page may (exceptionally) be called on a VM_UNPAGED
      area, do_no_page should never be: just BUG_ON.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f57e88a8
    • [PATCH] unpaged: anon in VM_UNPAGED · ee498ed7
      Committed by Hugh Dickins
      copy_one_pte needs to copy the anonymous COWed pages in a VM_UNPAGED area,
      zap_pte_range needs to free them, do_wp_page needs to COW them: just like
      ordinary pages, not like the unpaged.
      
      But recognizing them is a little subtle: because PageReserved is no longer a
      condition for remap_pfn_range, we can now mmap all of /dev/mem (whether the
      distro permits, and whether it's advisable on this or that architecture, is
      another matter).  So if we can see a PageAnon, it may not be ours to mess with
      (or may be ours from elsewhere in the address space).  I suspect there's an
      entertaining insoluble self-referential problem here, but the page_is_anon
      function does a good practical job, and MAP_PRIVATE PROT_WRITE VM_UNPAGED will
      always be an odd choice.
      
      While updating the comment on page_address_in_vma, I noticed a potential
      NULL dereference, in a path we don't actually take, and fixed it.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ee498ed7
    • [PATCH] unpaged: COW on VM_UNPAGED · 920fc356
      Committed by Hugh Dickins
      Remove the BUG_ON(vma->vm_flags & VM_UNPAGED) from do_wp_page, and let it do
      Copy-On-Write without touching the VM_UNPAGED's page counts - but this is
      incomplete, because the anonymous page it inserts will itself need to be
      handled, here and in other functions - next patch.
      
      We still don't copy the page if the pfn is invalid, because the
      copy_user_highpage interface does not allow it.  But that's not been a problem
      in the past: can be added in later if the need arises.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      920fc356
    • [PATCH] unpaged: VM_UNPAGED · 0b14c179
      Committed by Hugh Dickins
      Although we tend to associate VM_RESERVED with remap_pfn_range, quite a few
      drivers set VM_RESERVED on areas which are then populated by nopage.  The
      PageReserved removal in 2.6.15-rc1 changed VM_RESERVED not to free pages in
      zap_pte_range, without changing those drivers not to set it: so their pages
      just leak away.
      
      Let's not change miscellaneous drivers now: introduce VM_UNPAGED at the core,
      to flag the special areas where the ptes may have no struct page, or if they
      have then it's not to be touched.  Replace most instances of VM_RESERVED in
      core mm by VM_UNPAGED.  Force it on in remap_pfn_range, and the sparc and
      sparc64 io_remap_pfn_range.
      
      Revert addition of VM_RESERVED to powerpc vdso, it's not needed there.  Is it
      needed anywhere?  It still governs the mm->reserved_vm statistic, and special
      vmas not to be merged, and areas not to be core dumped; but could probably be
      eliminated later (the drivers are probably specifying it because in 2.4 it
      kept swapout off the vma, but in 2.6 we work from the LRU, which these pages
      don't get on).
      
      Use the VM_SHM slot for VM_UNPAGED, and define VM_SHM to 0: it serves no
      purpose whatsoever, and should be removed from drivers when we clean up.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: William Irwin <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      0b14c179
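      In flag-definition terms the change amounts to something like this sketch
      (values reconstructed from the description; an approximation of the
      2.6.15-rc tree, not a quotation):

      /* Sketch of the include/linux/mm.h change described above. */
      #define VM_SHM      0x00000000  /* now means nothing; remove later */
      #define VM_UNPAGED  0x00000008  /* ptes may lack struct page: hands off */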
    • [PATCH] unpaged: get_user_pages VM_RESERVED · ed5297a9
      Committed by Hugh Dickins
      The PageReserved removal in 2.6.15-rc1 prohibited get_user_pages on the areas
      flagged VM_RESERVED in place of PageReserved.  That is correct in theory - we
      ought not to interfere with struct pages in such a reserved area; but in
      practice it broke BTTV for one.
      
      So revert to prohibiting only on VM_IO: if someone gets into trouble with
      get_user_pages on VM_RESERVED, it'll just be a "don't do that".
      
      You can argue that videobuf_mmap_mapper shouldn't set VM_RESERVED in the first
      place, but now's not the time for breaking drivers without notice.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ed5297a9
  12. 14 Nov 2005 (1 commit)
    • [PATCH] mm: ZAP_BLOCK causes redundant work · 51c6f666
      Committed by Robin Holt
      The address-based work estimate for unmapping (for lockbreak) is, and
      always was, horribly inefficient for sparse mappings.  The problem is most
      simply explained with an example:
      
      If we find a pgd is clear, we still have to call into unmap_page_range
      PGDIR_SIZE / ZAP_BLOCK_SIZE times, each time checking the clear pgd, in
      order to progress the working address to the next pgd.
      
      The fundamental way to solve the problem is to keep track of the end
      address we've processed and pass it back to the higher layers.
      
      From: Nick Piggin <npiggin@suse.de>
      
        Modification to completely get away from address based work estimate
        and instead use an abstract count, with a very small cost for empty
        entries as opposed to present pages.
      
        On 2.6.14-git2, ppc64, and CONFIG_PREEMPT=y, mapping and unmapping 1TB
        of virtual address space takes 1.69s; with the following patch applied,
        this operation can be done 1000 times in less than 0.01s
      
      From: Andrew Morton <akpm@osdl.org>
      
      With CONFIG_HUGETLB_PAGE=n:
      
      mm/memory.c: In function `unmap_vmas':
      mm/memory.c:779: warning: division by zero
      
      Due to
      
      			zap_work -= (end - start) /
      					(HPAGE_SIZE / PAGE_SIZE);
      
      So make the dummy HPAGE_SIZE non-zero.
      Signed-off-by: Robin Holt <holt@sgi.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      51c6f666
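      A hedged sketch of the count-based estimate (reconstructed from the
      description above):

      /* Sketch: charge a token amount per empty pte and a full page per
       * present pte, so sparse mappings no longer dominate the budget. */
      #define ZAP_BLOCK_SIZE  (8 * PAGE_SIZE)   /* lockbreak budget */

      /* in zap_pte_range(): */
      if (pte_none(ptent)) {
              (*zap_work)--;                    /* empty entries are cheap */
              continue;
      }
      (*zap_work) -= PAGE_SIZE;                 /* present pages cost more */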
  13. 30 Oct 2005 (12 commits)
    • [PATCH] .text page fault SMP scalability optimization · 1a44e149
      Committed by Andrea Arcangeli
      We had a problem on ppc64 where, with more than 4 threads, a large system
      wouldn't scale well while faulting in the .text (most of the time was spent
      in the kernel even though it was a userland compute-intensive app).  The
      reason is the useless overwrite of the same pte from all cpus.
      
      I fixed it this way (verified on an older kernel, but the forward port is
      almost identical).  This will benefit all archs, not just ppc64.
      Signed-off-by: Andrea Arcangeli <andrea@suse.de>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      1a44e149
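      The shape of the fix, sketched (a reconstruction): only write the pte,
      and only do the mmu/cache bookkeeping, when the entry actually changed.

      /* Sketch: tail of handle_pte_fault().  If another cpu already set the
       * young/dirty bits, skip the redundant pte overwrite and flush. */
      old_entry = entry;
      entry = pte_mkyoung(entry);
      if (write_access)
              entry = pte_mkdirty(entry);   /* write fault on writable pte */
      if (!pte_same(old_entry, entry)) {
              ptep_set_access_flags(vma, address, pte, entry, write_access);
              update_mmu_cache(vma, address, entry);
      }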
    • [PATCH] mm: fix rss and mmlist locking · f412ac08
      Committed by Hugh Dickins
      A couple of oddities were guarded by page_table_lock, and are no longer
      properly guarded once that lock is split.
      
      The mm_counters of file_rss and anon_rss: make those atomic_t, or
      atomic64_t where the architecture supports it.  Definitions courtesy of
      Christoph Lameter, who spent considerable effort on more scalable ways of
      counting, but found insufficient benefit in practice.
      
      And adding an mm with swap to the mmlist for swapoff: the list is well-
      guarded by its own lock, but the list_empty check now has to be repeated
      inside it.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f412ac08
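      A sketch of the counter definitions described (the atomic64_t variant is
      per-architecture; a reconstruction, not a quotation):

      /* Sketch: rss counters become atomics once page_table_lock no longer
       * covers them. */
      #ifdef ATOMIC64_INIT
      typedef atomic64_t mm_counter_t;
      #define set_mm_counter(mm, member, value) \
              atomic64_set(&(mm)->_##member, value)
      #define get_mm_counter(mm, member) \
              ((unsigned long)atomic64_read(&(mm)->_##member))
      #else
      typedef atomic_t mm_counter_t;
      #define set_mm_counter(mm, member, value) \
              atomic_set(&(mm)->_##member, value)
      #define get_mm_counter(mm, member) \
              ((unsigned long)atomic_read(&(mm)->_##member))
      #endif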
    • [PATCH] mm: split page table lock · 4c21e2f2
      Committed by Hugh Dickins
      Christoph Lameter demonstrated very poor scalability on the SGI 512-way, with
      a many-threaded application which concurrently initializes different parts of
      a large anonymous area.
      
      This patch corrects that, by using a separate spinlock per page table page, to
      guard the page table entries in that page, instead of using the mm's single
      page_table_lock.  (But even then, page_table_lock is still used to guard page
      table allocation, and anon_vma allocation.)
      
      In this implementation, the spinlock is tucked inside the struct page of the
      page table page: with a BUILD_BUG_ON in case it overflows - which it would in
      the case of 32-bit PA-RISC with spinlock debugging enabled.
      
      Splitting the lock is not quite for free: another cacheline access.  Ideally,
      I suppose we would use split ptlock only for multi-threaded processes on
      multi-cpu machines; but deciding that dynamically would have its own costs.
      So for now enable it by config, at some number of cpus - since the Kconfig
      language doesn't support inequalities, let preprocessor compare that with
      NR_CPUS.  But I don't think it's worth being user-configurable: for good
      testing of both split and unsplit configs, split now at 4 cpus, and perhaps
      change that to 8 later.
      
      There is a benefit even for singly threaded processes: kswapd can be attacking
      one part of the mm while another part is busy faulting.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4c21e2f2
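      Schematically, the lock selection reads like this sketch (macro shapes
      reconstructed from the description, not the literal diff):

      /* Sketch: use the spinlock tucked into the page table page's struct
       * page when configured for enough cpus, else the mm-wide lock. */
      #if NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS
      #define pte_lockptr(mm, pmd)  ({ (void)(mm); &pmd_page(*(pmd))->ptl; })
      #else
      #define pte_lockptr(mm, pmd)  ({ (void)(pmd); &(mm)->page_table_lock; })
      #endif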
    • [PATCH] mm: follow_page with inner ptlock · deceb6cd
      Committed by Hugh Dickins
      Final step in pushing down common core's page_table_lock.  follow_page no
      longer wants its caller to hold page_table_lock; it uses pte_offset_map_lock
      itself, and so no page_table_lock is taken in get_user_pages itself.
      
      But get_user_pages (and get_futex_key) do then need follow_page to pin the
      page for them: take Daniel's suggestion of bitflags to follow_page.
      
      Need one for WRITE, another for TOUCH (it was the accessed flag before:
      vanished along with check_user_page_readable, but surely get_numa_maps is
      wrong to mark every page it finds as accessed), another for GET.
      
      And another, ANON to dispose of untouched_anonymous_page: it seems silly for
      that to descend a second time, let follow_page observe if there was no page
      table and return ZERO_PAGE if so.  Fix minor bug in that: check VM_LOCKED -
      make_pages_present ought to make readonly anonymous present.
      
      Give get_numa_maps a cond_resched while we're there.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      deceb6cd
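      The bitflags read roughly like this sketch (comments paraphrase the log
      above; values as commonly cited for this change, treat as reconstruction):

      #define FOLL_WRITE  0x01  /* check pte is writable */
      #define FOLL_TOUCH  0x02  /* mark page accessed */
      #define FOLL_GET    0x04  /* do get_page on page */
      #define FOLL_ANON   0x08  /* give ZERO_PAGE if no pagetable */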
    • [PATCH] mm: kill check_user_page_readable · c34d1b4d
      Committed by Hugh Dickins
      check_user_page_readable is a problematic variant of follow_page.  It's used
      only by oprofile's i386 and arm backtrace code, at interrupt time, to
      establish whether a userspace stackframe is currently readable.
      
      This is problematic, because we want to push the page_table_lock down inside
      follow_page, and later split it; whereas oprofile is doing a spin_trylock on
      it (in the i386 case, forgotten in the arm case), and needs that to pin
      perhaps two pages spanned by the stackframe (which might be covered by
      different locks when we split).
      
      I think oprofile is going about this in the wrong way: it doesn't need to know
      the area is readable (neither i386 nor arm uses read protection of user
      pages), it doesn't need to pin the memory, it should simply
      __copy_from_user_inatomic, and see if that succeeds or not.  Sorry, but I've
      not got around to devising the sparse __user annotations for this.
      
      Then we can eliminate check_user_page_readable, and return to a single
      follow_page without the __follow_page variants.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      c34d1b4d
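      The suggested replacement pattern, sketched (what the oprofile backtrace
      code can do instead; frame layout per i386, details reconstructed):

      /* Sketch: instead of asking "is this user address readable?", just
       * attempt the copy at interrupt time and check the return value. */
      struct frame_head {
              struct frame_head __user *ebp;  /* saved frame pointer */
              unsigned long ret;              /* return address */
      };

      static int read_user_frame(struct frame_head *buf,
                                 struct frame_head __user *head)
      {
              /* returns bytes NOT copied; 0 means the frame was readable */
              return __copy_from_user_inatomic(buf, head, sizeof(*buf)) == 0;
      }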
    • [PATCH] mm: unmap_vmas with inner ptlock · 508034a3
      Committed by Hugh Dickins
      Remove the page_table_lock from around the calls to unmap_vmas, and replace
      the pte_offset_map in zap_pte_range by pte_offset_map_lock: all callers are
      now safe to descend without page_table_lock.
      
      Don't attempt fancy locking for hugepages, just take page_table_lock in
      unmap_hugepage_range.  Which makes zap_hugepage_range, and the hugetlb test in
      zap_page_range, redundant: unmap_vmas calls unmap_hugepage_range anyway.  Nor
      does unmap_vmas have much use for its mm arg now.
      
      The tlb_start_vma and tlb_end_vma in unmap_page_range are now called without
      page_table_lock: if they're implemented at all, they typically come down to
      flush_cache_range (usually done outside page_table_lock) and flush_tlb_range
      (which we already audited for the mprotect case).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      508034a3
    • [PATCH] mm: unlink vma before pagetables · 8f4f8c16
      Committed by Hugh Dickins
      In most places the descent from pgd to pud to pmd to pte holds mmap_sem
      (exclusively or not), which ensures that free_pgtables cannot be freeing page
      tables from any level at the same time.  But truncation and reverse mapping
      descend without mmap_sem.
      
      No problem: just make sure that a vma is unlinked from its prio_tree (or
      nonlinear list) and from its anon_vma list, after zapping the vma, but before
      freeing its page tables.  Then neither vmtruncate nor rmap can reach that vma
      whose page tables are now volatile (nor do they need to reach it, since all
      its page entries have been zapped by this stage).
      
      The i_mmap_lock and anon_vma->lock already serialize this correctly; but the
      locking hierarchy is such that we cannot take them while holding
      page_table_lock.  Well, we're trying to push that down anyway.  So in this
      patch, move anon_vma_unlink and unlink_file_vma into free_pgtables, at the
      same time as moving page_table_lock around calls to unmap_vmas.
      
      tlb_gather_mmu and tlb_finish_mmu then fall outside the page_table_lock, but
      we made them preempt_disable and preempt_enable earlier; and a long source
      audit of all the architectures has shown no problem with removing
      page_table_lock from them.  free_pgtables doesn't need page_table_lock for
      itself, nor for what it calls; tlb->mm->nr_ptes is usually protected by
      page_table_lock, but partly by non-exclusive mmap_sem - here it's decremented
      with exclusive mmap_sem, or mm_users 0.  update_hiwater_rss and
      vm_unacct_memory don't need page_table_lock either.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8f4f8c16
    • [PATCH] mm: page fault handler locking · 8f4e2101
      Committed by Hugh Dickins
      On the page fault path, the patch before last pushed acquiring the
      page_table_lock down to the head of handle_pte_fault (though it's also taken
      and dropped earlier when a new page table has to be allocated).
      
      Now delete that line, read "entry = *pte" without it, and go off to this or
      that page fault handler on the basis of this unlocked peek.  Usually the
      handler can proceed without the lock, relying on the subsequent locked
      pte_same or pte_none test to back out when necessary; though do_wp_page needs
      the lock immediately, and do_file_page doesn't check (if there's a race,
      install_page just zaps the entry and reinstalls it).
      
      But on those architectures (notably i386 with PAE) whose pte is too big to be
      read atomically, if SMP or preemption is enabled, do_swap_page and
      do_file_page might cause irretrievable damage if passed a Frankenstein entry
      stitched together from unrelated parts.  In those configs, "pte_unmap_same"
      has to take page_table_lock, validate orig_pte still the same, and drop
      page_table_lock before unmapping, before proceeding.
      
      Use pte_offset_map_lock and pte_unmap_unlock throughout the handlers; but lock
      avoidance leaves more lone maps and unmaps than elsewhere.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8f4e2101
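      The back-out pattern described above, sketched:

      /* Sketch: a handler peeks at the pte unlocked, does its slow work,
       * then retakes the lock and verifies nothing changed underneath. */
      page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
      if (unlikely(!pte_same(*page_table, orig_pte))) {
              pte_unmap_unlock(page_table, ptl);  /* raced: back out */
              goto out;
      }
      /* ... install the new pte under ptl ... */
      pte_unmap_unlock(page_table, ptl);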
    • [PATCH] mm: ptd_alloc take ptlock · c74df32c
      Committed by Hugh Dickins
      Second step in pushing down the page_table_lock.  Remove the temporary
      bridging hack from __pud_alloc, __pmd_alloc, __pte_alloc: expect callers not
      to hold page_table_lock, whether it's on init_mm or a user mm; take
      page_table_lock internally to check if a racing task already allocated.
      
      Convert their callers from common code.  But avoid coming back to change them
      again later: instead of moving the spin_lock(&mm->page_table_lock) down,
      switch over to new macros pte_alloc_map_lock and pte_unmap_unlock, which
      encapsulate the mapping+locking and unlocking+unmapping together, and in the
      end may use alternatives to the mm page_table_lock itself.
      
      These callers all hold mmap_sem (some exclusively, some not), so at no level
      can a page table be whipped away from beneath them; and pte_alloc uses the
      "atomic" pmd_present to test whether it needs to allocate.  It appears that on
      all arches we can safely descend without page_table_lock.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      c74df32c
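      Caller-side, the new pairing looks like this sketch:

      /* Sketch: allocate-if-needed, map and lock in one step; unlock and
       * unmap in one step.  ptl receives whichever lock was taken. */
      spinlock_t *ptl;
      pte_t *pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
      if (!pte)
              return -ENOMEM;
      /* ... examine or fill in ptes under ptl ... */
      pte_unmap_unlock(pte, ptl);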
    • [PATCH] mm: ptd_alloc inline and out · 1bb3630e
      Committed by Hugh Dickins
      It seems odd to me that, whereas pud_alloc and pmd_alloc test inline, only
      calling out-of-line __pud_alloc __pmd_alloc if allocation needed,
      pte_alloc_map and pte_alloc_kernel are entirely out-of-line.  Though it does
      add a little to kernel size, change them to macros testing inline, calling
      __pte_alloc or __pte_alloc_kernel to allocate out-of-line.  Mark none of them
      as fastcalls, leave that to CONFIG_REGPARM or not.
      
      It also seems more natural for the out-of-line functions to leave the offset
      calculation and map to the inline, which has to do it anyway for the common
      case.  At least mremap move wants __pte_alloc without _map.
      
      Macros rather than inline functions, certainly to avoid the header file issues
      which arise from CONFIG_HIGHPTE needing kmap_types.h, but also in case any
      architectures I haven't built would have other such problems.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      1bb3630e
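      The inline-test shape being argued for, as a sketch (a reconstruction):

      /* Sketch: test inline, call out of line only to allocate, and leave
       * the offset+map step to the macro for the common case. */
      #define pte_alloc_map(mm, pmd, address)                          \
              ((unlikely(!pmd_present(*(pmd))) &&                      \
                __pte_alloc(mm, pmd, address)) ?                       \
                      NULL : pte_offset_map(pmd, address))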
    • [PATCH] mm: init_mm without ptlock · 872fec16
      Committed by Hugh Dickins
      First step in pushing down the page_table_lock.  init_mm.page_table_lock has
      been used throughout the architectures (usually for ioremap): not to serialize
      kernel address space allocation (that's usually vmlist_lock), but because
      pud_alloc,pmd_alloc,pte_alloc_kernel expect caller holds it.
      
      Reverse that: don't lock or unlock init_mm.page_table_lock in any of the
      architectures; instead rely on pud_alloc,pmd_alloc,pte_alloc_kernel to take
      and drop it when allocating a new one, to check lest a racing task already
      did.  Similarly no page_table_lock in vmalloc's map_vm_area.
      
      Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle
      user mms, which are converted only by a later patch, for now they have to lock
      differently according to whether or not it's init_mm.
      
      If sources get muddled, there's a danger that an arch source taking
      init_mm.page_table_lock will be mixed with common source also taking it (or
      neither take it).  So break the rules and make another change, which should
      break the build for such a mismatch: remove the redundant mm arg from
      pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13).
      
      Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64
      used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to
      pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64
      map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free
      took page_table_lock for no good reason.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      872fec16
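      Arch-side kernel mappings then reduce to this sketch (no mm argument, no
      explicit locking by the caller; a reconstruction):

      /* Sketch: pte_alloc_kernel() now takes and drops the lock itself. */
      pte_t *pte = pte_alloc_kernel(pmd, addr);
      if (!pte)
              return -ENOMEM;
      /* ... set_pte_at(&init_mm, addr, pte, entry) for each page ... */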
    • [PATCH] mm: update_hiwaters just in time · 365e9c87
      Committed by Hugh Dickins
      update_mem_hiwater has attracted various criticisms, in particular from
      those concerned with mm scalability.  Originally it was called whenever
      rss or total_vm got raised.  Then many of those callsites were replaced
      by a timer tick call from account_system_time.  Now Frank van Maarseveen
      reports that to be inadequate as well.  How about this?  Works for Frank.
      
      Replace update_mem_hiwater, a poor combination of two unrelated ops, by macros
      update_hiwater_rss and update_hiwater_vm.  Don't attempt to keep
      mm->hiwater_rss up to date at timer tick, nor every time we raise rss (usually
      by 1): those are hot paths.  Do the opposite, update only when about to lower
      rss (usually by many), or just before final accounting in do_exit.  Handle
      mm->hiwater_vm in the same way, though it's much less of an issue.  Demand
      that whoever collects these hiwater statistics do the work of taking the
      maximum with rss or total_vm.
      
      And there has been no collector of these hiwater statistics in the tree.  The
      new convention needs an example, so match Frank's usage by adding a VmPeak
      line above VmSize to /proc/<pid>/status, and also a VmHWM line above VmRSS
      (High-Water-Mark or High-Water-Memory).
      
      There was a particular anomaly during mremap move, that hiwater_vm might be
      captured too high.  A fleeting such anomaly remains, but it's quickly
      corrected now, whereas before it would stick.
      
      What locking?  None: if the app is racy then these statistics will be racy,
      it's not worth any overhead to make them exact.  But whenever it suits,
      hiwater_vm is updated under exclusive mmap_sem, and hiwater_rss under
      page_table_lock (for now) or with preemption disabled (later on): without
      going to any trouble, minimize the time between reading current values and
      updating, to minimize those occasions when a racing thread bumps a count up
      and back down in between.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      365e9c87
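      The macros in question, sketched (consistent with the description above;
      a reconstruction):

      /* Sketch: sample just before rss/total_vm are about to drop, keeping
       * the running maximum; no locking, racy by design. */
      #define update_hiwater_rss(mm)  do {                    \
              unsigned long _rss = get_mm_rss(mm);            \
              if ((mm)->hiwater_rss < _rss)                   \
                      (mm)->hiwater_rss = _rss;               \
      } while (0)

      #define update_hiwater_vm(mm)   do {                    \
              if ((mm)->hiwater_vm < (mm)->total_vm)          \
                      (mm)->hiwater_vm = (mm)->total_vm;      \
      } while (0)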