1. 26 Jul 2011 · 1 commit
    • MIPS: topdown mmap support · d0be89f6
      Authored by Jian Peng
      This patch introduces top-down mmap support in the user process address
      space allocation policy.
      
      Recently, we ran some large applications that use mmap heavily and
      ran into OOM due to the inflexible mmap allocation policy on MIPS32.
      
      Since most other major architectures have supported it for years, it is
      reasonable to follow the trend and reduce the pain of porting applications.
      
      Due to cache aliasing concerns, arch_get_unmapped_area_topdown() and
      other helper functions are implemented in arch/mips/kernel/syscall.c.
      Signed-off-by: Jian Peng <jipeng2005@gmail.com>
      Cc: David Daney <ddaney@caviumnetworks.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2389/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      d0be89f6
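      As a rough illustration of the policy (a minimal userspace sketch with made-up
      constants, not the actual arch/mips/kernel/syscall.c code), a top-down allocator
      walks existing mappings from the top of the usable address space downwards and
      returns the first gap large enough for the request:

      #include <stdio.h>

      /* Hypothetical constants and a tiny fixed "mapping table"; the real kernel
       * walks the mm's VMAs and respects TASK_SIZE, the stack gap, alignment, etc. */
      #define TASK_TOP   0x80000000UL   /* top of the modelled user address space */
      #define PAGE_SZ    0x1000UL

      struct vma { unsigned long start, end; };   /* existing mappings, sorted by start */

      static unsigned long get_unmapped_area_topdown(const struct vma *vmas, int n,
                                                     unsigned long len)
      {
          unsigned long high = TASK_TOP;

          /* Scan existing mappings from the highest downwards, looking for a gap. */
          for (int i = n - 1; i >= 0; i--) {
              if (high - vmas[i].end >= len)
                  return high - len;      /* gap above this mapping is big enough */
              high = vmas[i].start;       /* otherwise continue searching below it */
          }
          return high >= len ? high - len : 0;  /* 0 models "no space left" */
      }

      int main(void)
      {
          struct vma vmas[] = {
              { 0x00400000, 0x00500000 },   /* text/data */
              { 0x7f000000, 0x7fff0000 },   /* an existing mapping near the top */
          };
          unsigned long addr = get_unmapped_area_topdown(vmas, 2, 16 * PAGE_SZ);

          printf("top-down candidate: %#lx\n", addr);
          return 0;
      }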
  2. 27 Feb 2010 · 2 commits
  3. 21 Feb 2010 · 1 commit
    • MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself · 4b3073e1
      Authored by Russell King
      On VIVT ARM, when we have multiple shared mappings of the same file
      in the same MM, we need to ensure that we have coherency across all
      copies.  We do this via make_coherent() by making the pages
      uncacheable.
      
      This used to work fine, until we allowed highmem with highpte - we
      now have a page table which is mapped as required, and is not available
      for modification via update_mmu_cache().
      
      Ralf Baechle suggested getting rid of the PTE value passed to
      update_mmu_cache():
      
        On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
        to construct a pointer to the pte again.  Passing a pte_t * is much
        more elegant.  Maybe we might even replace the pte argument with the
        pte_t?
      
      Ben Herrenschmidt would also like the pte pointer for PowerPC:
      
        Passing the ptep in there is exactly what I want.  I want that
        -instead- of the PTE value, because I have issue on some ppc cases,
        for I$/D$ coherency, where set_pte_at() may decide to mask out the
        _PAGE_EXEC.
      
      So, pass the mapped page table pointer into update_mmu_cache(), and
      remove the PTE value, updating all implementations and call sites to
      suit.
      
      Includes a fix from Stephen Rothwell:
      
        sparc: fix fallout from update_mmu_cache API change
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4b3073e1
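      A toy model of why the pointer matters (hypothetical names and bits, not the
      kernel's real implementations): if set_pte_at() rewrites the entry while
      installing it, a hook that received the PTE by value keeps looking at a stale
      copy, while one that received a pte_t * sees what actually sits in the page table:

      #include <stdio.h>

      typedef unsigned long pte_t;          /* toy PTE: just a bag of bits */
      #define _PAGE_EXEC 0x4UL              /* hypothetical flag for the example */

      /* Model of set_pte_at() deciding to mask out _PAGE_EXEC (the ppc case
       * Ben Herrenschmidt describes) while installing the entry. */
      static void set_pte_at_model(pte_t *ptep, pte_t pte)
      {
          *ptep = pte & ~_PAGE_EXEC;
      }

      /* Old-style hook: receives a (possibly stale) value. */
      static void update_mmu_cache_by_value(pte_t pte)
      {
          printf("by value: exec=%d\n", !!(pte & _PAGE_EXEC));
      }

      /* New-style hook: receives a pointer into the page table. */
      static void update_mmu_cache_by_ptr(pte_t *ptep)
      {
          printf("by ptr  : exec=%d\n", !!(*ptep & _PAGE_EXEC));
      }

      int main(void)
      {
          pte_t table_slot = 0;
          pte_t wanted = 0x1000UL | _PAGE_EXEC;

          set_pte_at_model(&table_slot, wanted);
          update_mmu_cache_by_value(wanted);     /* still thinks EXEC is set */
          update_mmu_cache_by_ptr(&table_slot);  /* sees the installed entry */
          return 0;
      }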
  4. 17 Dec 2009 · 1 commit
  5. 22 Sep 2009 · 1 commit
    • mm: ZERO_PAGE without PTE_SPECIAL · 62eede62
      Authored by Hugh Dickins
      Reinstate anonymous use of ZERO_PAGE on all architectures, not just on
      those with __HAVE_ARCH_PTE_SPECIAL, as suggested by Nick Piggin.
      
      Contrary to how I'd imagined it, there's nothing ugly about this, just a
      zero_pfn test built into one or another block of vm_normal_page().
      
      But the MIPS ZERO_PAGE-of-many-colours case demands is_zero_pfn() and
      my_zero_pfn() inlines.  Reinstate its mremap move_pte() shuffling of
      ZERO_PAGEs we did from 2.6.17 to 2.6.19?  Not unless someone shouts for
      that: it would have to take vm_flags to weed out some cases.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Rik van Riel <riel@redhat.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      62eede62
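      A minimal sketch of the idea behind those inlines (made-up constants, not the
      actual MIPS asm/pgtable.h code): with one ZERO_PAGE per cache colour the zero
      pages occupy a small contiguous pfn range, so is_zero_pfn() becomes a range
      check and my_zero_pfn() picks the colour from the virtual address:

      #include <stdio.h>

      /* Toy parameters: 4 cache colours worth of zero pages starting at this pfn.
       * On real MIPS these come from the zero_page_mask / ZERO_PAGE setup. */
      #define PAGE_SHIFT      12
      #define ZERO_PFN_BASE   0x1000UL
      #define COLOUR_MASK     0x3UL                 /* 4 colours */

      static int is_zero_pfn(unsigned long pfn)
      {
          return pfn - ZERO_PFN_BASE <= COLOUR_MASK;  /* unsigned wrap covers pfn < base */
      }

      static unsigned long my_zero_pfn(unsigned long vaddr)
      {
          /* Pick the zero page whose colour matches the faulting virtual address. */
          return ZERO_PFN_BASE + ((vaddr >> PAGE_SHIFT) & COLOUR_MASK);
      }

      int main(void)
      {
          unsigned long vaddr = 0x7f003000UL;
          unsigned long pfn = my_zero_pfn(vaddr);

          printf("zero pfn for %#lx: %#lx (is_zero_pfn=%d)\n",
                 vaddr, pfn, is_zero_pfn(pfn));
          printf("is_zero_pfn(0x2000) = %d\n", is_zero_pfn(0x2000UL));
          return 0;
      }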
  6. 17 Jun 2009 · 1 commit
  7. 11 Oct 2008 · 1 commit
  8. 06 Jun 2008 · 1 commit
  9. 29 Apr 2008 · 2 commits
  10. 28 Apr 2008 · 1 commit
    • mm: introduce pte_special pte bit · 7e675137
      Authored by Nick Piggin
      s390, for one, cannot implement VM_MIXEDMAP with pfn_valid, due to their memory
      model (which is more dynamic than most).  Instead, they had proposed to
      implement it with an additional path through vm_normal_page(), using a bit in
      the pte to determine whether or not the page should be refcounted:
      
      vm_normal_page()
      {
      	...
              if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
                      if (vma->vm_flags & VM_MIXEDMAP) {
      #ifdef s390
      			if (!mixedmap_refcount_pte(pte))
      				return NULL;
      #else
                              if (!pfn_valid(pfn))
                                      return NULL;
      #endif
                              goto out;
                      }
      	...
      }
      
      This is fine; however, if we are allowed to use a bit in the pte to determine
      refcountedness, we can use that to _completely_ replace all the vma based
      schemes.  So instead of adding more cases to the already complex vma-based
      scheme, we can have a clearly separate and simple pte-based scheme (and get
      slightly better code generation in the process):
      
      vm_normal_page()
      {
      #ifdef s390
      	if (!mixedmap_refcount_pte(pte))
      		return NULL;
      	return pte_page(pte);
      #else
      	...
      #endif
      }
      
      And finally, we would rather make this concept usable by any architecture
      than make it s390-only, so implement a new type of pte state for this.
      Unfortunately the old vma based code must stay, because some architectures may
      not be able to spare pte bits.  This makes vm_normal_page a little bit more
      ugly than we would like, but the 2 cases are clearly separate.
      
      So introduce a pte_special pte state, and use it in mm/memory.c.  It is
      currently a noop for all architectures, so this doesn't actually result in any
      compiled code changes to mm/memory.o.
      
      BTW:
      I haven't put vm_normal_page() into arch code as-per an earlier suggestion.
      The reason is that, regardless of where vm_normal_page is actually
      implemented, the *abstraction* is still exactly the same. Also, while it
      depends on whether the architecture has pte_special or not, those are the
      only two possible cases, and it really isn't an arch-specific function --
      the role of the arch code should be to provide primitive functions and
      accessors with which to build the core code; pte_special does that. We do
      not want architectures to know or care about vm_normal_page itself, and
      we definitely don't want them being able to invent something new there
      out of sight of mm/ code. If we made vm_normal_page an arch function, then
      we have to make vm_insert_mixed (next patch) an arch function too. So I
      don't think moving it to arch code fundamentally improves any abstractions,
      while it does practically make the code more difficult to follow, for both
      mm and arch developers, and easier to misuse.
      
      [akpm@linux-foundation.org: build fix]
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Carsten Otte <cotte@de.ibm.com>
      Cc: Jared Hulbert <jaredeh@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7e675137
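      A compressed userspace model of the pte-based scheme described above (toy types
      and a made-up bit, not mm/memory.c): with a spare bit in the pte, the refcount
      decision no longer needs to consult vma->vm_flags at all:

      #include <stdio.h>

      typedef unsigned long pte_t;
      #define _PAGE_SPECIAL  0x200UL    /* hypothetical spare pte bit */
      #define PAGE_SHIFT     12

      struct page { int dummy; };
      static struct page mem_map[16];                 /* toy struct-page array */

      static int pte_special(pte_t pte)        { return !!(pte & _PAGE_SPECIAL); }
      static unsigned long pte_pfn(pte_t pte)  { return pte >> PAGE_SHIFT; }

      /* pte-based vm_normal_page(): special mappings (e.g. raw pfn inserts) have
       * no struct page and are never refcounted; everything else is normal. */
      static struct page *vm_normal_page(pte_t pte)
      {
          if (pte_special(pte))
              return NULL;
          return &mem_map[pte_pfn(pte)];
      }

      int main(void)
      {
          pte_t normal  = 3UL << PAGE_SHIFT;
          pte_t special = (3UL << PAGE_SHIFT) | _PAGE_SPECIAL;

          printf("normal : %p\n", (void *)vm_normal_page(normal));
          printf("special: %p\n", (void *)vm_normal_page(special));
          return 0;
      }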
  11. 12 Oct 2007 · 1 commit
  12. 27 Aug 2007 · 2 commits
  13. 17 Jul 2007 · 1 commit
  14. 09 May 2007 · 1 commit
  15. 25 Mar 2007 · 1 commit
  16. 31 Jan 2007 · 1 commit
    • [PATCH] mm: mremap correct rmap accounting · 701dfbc1
      Authored by Hugh Dickins
      Nick Piggin points out that page accounting on MIPS's multiple ZERO_PAGEs
      is not maintained by its move_pte, and could lead to freeing a ZERO_PAGE.
      
      Instead of complicating that move_pte, just forget the minor optimization
      when mremapping, and change the one thing which needed it for correctness:
      have filemap_xip use ZERO_PAGE(0) throughout instead of according to address.
      
      [ "There is no block device driver one could use for XIP on mips
         platforms" - Carsten Otte ]
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Andrew Morton <akpm@osdl.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Carsten Otte <cotte@de.ibm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      701dfbc1
  17. 30 Nov 2006 · 1 commit
    • [MIPS] page.h: remove __pa() usages. · 99e3b942
      Authored by Franck Bui-Huu
      __pa() was used by virt_to_page() and virt_addr_valid(). The latter
      are used while the kernel is being initialised, so __pa() is not
      appropriate; we use virt_to_phys() instead.
      
      Furthermore, __pa() is going to take care of the CKSEG0/XKPHYS
      address mix for 64-bit kernels. This makes __pa() more complex
      than virt_to_phys(), and this extra work is not needed by
      virt_to_page() and virt_addr_valid().
      
      Finally, it consolidates the virt_to_phys() prototype by making
      its argument 'const'. This avoids some warnings that were due
      to virt_to_page() usages which pass a const pointer.
      Signed-off-by: Franck Bui-Huu <fbuihuu@gmail.com>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      99e3b942
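      As a rough model of the relationship this relies on (toy constants, not the MIPS
      asm/page.h definitions): virt_to_page() and virt_addr_valid() only need the simple
      linear kernel mapping, which virt_to_phys() already expresses, so they do not need
      the extra CKSEG0/XKPHYS handling that __pa() is about to grow:

      #include <stdio.h>

      /* Toy linear mapping: kernel virtual = physical + PAGE_OFFSET. */
      #define PAGE_OFFSET  0x80000000UL
      #define PAGE_SHIFT   12
      #define MAX_PFN      0x20000UL                  /* 512 MiB of modelled RAM */

      static unsigned long virt_to_phys_model(const void *addr)
      {
          return (unsigned long)addr - PAGE_OFFSET;
      }

      /* virt_to_page() reduced to its essence: virtual -> physical -> pfn. */
      static unsigned long virt_to_pfn(const void *addr)
      {
          return virt_to_phys_model(addr) >> PAGE_SHIFT;
      }

      static int virt_addr_valid_model(const void *addr)
      {
          return virt_to_pfn(addr) < MAX_PFN;
      }

      int main(void)
      {
          const void *kaddr = (const void *)0x80400000UL;

          printf("pfn=%#lx valid=%d\n", virt_to_pfn(kaddr), virt_addr_valid_model(kaddr));
          return 0;
      }

      Taking a const pointer here mirrors the prototype change the commit describes.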
  18. 26 Sep 2006 · 1 commit
    • [PATCH] Standardize pxx_page macros · 46a82b2d
      Authored by Dave McCracken
      One of the changes necessary for shared page tables is to standardize the
      pxx_page macros.  pte_page and pmd_page have always returned the struct
      page associated with their entry, while pte_page_kernel and pmd_page_kernel
      have returned the kernel virtual address.  pud_page and pgd_page, on the
      other hand, return the kernel virtual address.
      
      Shared page tables need pud_page and pgd_page to return the actual page
      structures.  There are very few actual users of these functions, so it is
      simple to standardize their usage.
      
      Since this is basic cleanup, I am submitting these changes as a standalone
      patch.  Per Hugh Dickins' comments about it, I am also changing the
      pxx_page_kernel macros to pxx_page_vaddr to clarify their meaning.
      Signed-off-by: Dave McCracken <dmccr@us.ibm.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      46a82b2d
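      A small model of the naming convention this settles on (toy types, not the real
      macros): pxx_page() yields the struct page backing the table an entry points to,
      while pxx_page_vaddr() yields that table's kernel virtual address:

      #include <stdio.h>

      #define PAGE_SHIFT   12
      #define PAGE_OFFSET  0x80000000UL

      struct page { int dummy; };
      static struct page mem_map[32];                 /* toy struct-page array */

      typedef struct { unsigned long val; } pmd_t;    /* entry holds a physical address */

      /* pmd_page(): the struct page backing the page table the pmd points to. */
      static struct page *pmd_page(pmd_t pmd)
      {
          return &mem_map[pmd.val >> PAGE_SHIFT];
      }

      /* pmd_page_vaddr() (formerly pmd_page_kernel): the same table as a kernel
       * virtual address, ready to be walked as an array of ptes. */
      static unsigned long pmd_page_vaddr(pmd_t pmd)
      {
          return pmd.val + PAGE_OFFSET;
      }

      int main(void)
      {
          pmd_t pmd = { .val = 5UL << PAGE_SHIFT };

          printf("pmd_page       -> mem_map[%ld]\n", (long)(pmd_page(pmd) - mem_map));
          printf("pmd_page_vaddr -> %#lx\n", pmd_page_vaddr(pmd));
          return 0;
      }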
  19. 20 Jun 2006 · 1 commit
    • [MIPS] Drop 0 definition for kern_addr_valid · d9b8d0da
      Authored by Ralf Baechle
      kern_addr_valid is currently only used in kmem_ptr_validate, which
      makes a vague attempt at verifying the validity of an address.
      Only IA-64, PARISC and x86-64 make an actual effort to verify
      the validity of the pointer.  Most architecture definitions of
      kern_addr_valid() just define it as 1; the Alpha and CONFIG_DISCONTIGMEM
      on i386 and MIPS even as 0; the 0-definition will result in
      kmem_ptr_validate always failing which in turn will cause d_validate to
      always fail.  d_validate's only two users are smbfs and ncpfs, so the
      0 definition ended up breaking those ...
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      d9b8d0da
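      A sketch of the behavioural difference (toy code, not the real kmem_ptr_validate()):
      with kern_addr_valid() hardwired to 0, any validation helper built on top of it
      rejects every address, which is what broke d_validate's users:

      #include <stdio.h>

      /* Most arches define kern_addr_valid(addr) as 1; a 0 definition makes any
       * check layered on top of it fail unconditionally. */
      #define kern_addr_valid_ok(addr)      (1)
      #define kern_addr_valid_broken(addr)  (0)

      static int kmem_ptr_validate_model(int (*valid)(unsigned long), unsigned long p)
      {
          return valid(p);      /* the real code also checks slab bookkeeping */
      }

      static int valid_ok(unsigned long a)     { (void)a; return kern_addr_valid_ok(a); }
      static int valid_broken(unsigned long a) { (void)a; return kern_addr_valid_broken(a); }

      int main(void)
      {
          unsigned long p = 0x80123000UL;

          printf("with the 1 definition: %d\n", kmem_ptr_validate_model(valid_ok, p));
          printf("with the 0 definition: %d\n", kmem_ptr_validate_model(valid_broken, p));
          return 0;
      }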
  20. 06 Jun 2006 · 1 commit
  21. 02 Jun 2006 · 1 commit
    • [SPARC64]: Fix D-cache corruption in mremap · 0b0968a3
      Authored by David S. Miller
      If we move a mapping from one virtual address to another,
      and this changes the virtual color of the mapping of those
      pages, we can see corrupt data due to D-cache aliasing.
      
      Check for and deal with this by overriding the move_pte()
      macro.  Set things up so that other platforms can cleanly
      override the move_pte() macro too.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0b0968a3
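      Conceptually (a hedged sketch with made-up constants; the real sparc64 move_pte()
      differs in detail), the override only has to notice when the source and destination
      virtual addresses land on different D-cache colours for a present page, and flush
      the old alias before the pte is moved:

      #include <stdio.h>

      typedef unsigned long pte_t;
      #define _PAGE_PRESENT      0x1UL
      #define DCACHE_COLOUR_BIT  0x2000UL   /* toy: a single aliasing bit */

      static int pte_present(pte_t pte) { return !!(pte & _PAGE_PRESENT); }

      static void flush_dcache_for(pte_t pte, unsigned long addr)
      {
          printf("flush D-cache for pte=%#lx at %#lx\n", pte, addr);
      }

      /* Sketch of a move_pte() override: the pte itself is unchanged, but if the
       * virtual colour of the mapping changes across the mremap, flush first. */
      static pte_t move_pte_model(pte_t pte, unsigned long old_addr, unsigned long new_addr)
      {
          if (pte_present(pte) && ((old_addr ^ new_addr) & DCACHE_COLOUR_BIT))
              flush_dcache_for(pte, old_addr);
          return pte;
      }

      int main(void)
      {
          pte_t pte = 0x12345000UL | _PAGE_PRESENT;

          move_pte_model(pte, 0x70000000UL, 0x70002000UL);  /* colour changes: flush */
          move_pte_model(pte, 0x70000000UL, 0x70004000UL);  /* same colour: no flush */
          return 0;
      }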
  22. 01 Jun 2006 · 1 commit
    • [MIPS] Fix marking buddy of pte global for MIPS32 w/36-bit physical address · 6e953891
      Authored by Sergei Shtylyov
      In the case of CONFIG_64BIT_PHYS_ADDR, the set_pte() and pte_clear() functions
      only set the _PAGE_GLOBAL bit in the pte_low field of the buddy PTEs,
      forgetting to propagate it to pte_high. Thus, both pages might not
      really be made global for the CPU (since it ANDs the G-bit of the
      odd / even PTEs together to decide whether they're global or not). Thus,
      if only a single page is allocated via vmalloc() or ioremap(), it's not
      really global for the CPU (and it must be, since this is a kernel mapping),
      and thus its ASID is compared against the current process' one -- so,
      we'll get into trouble sooner or later...  Also, pte_none() will fail
      on global pages because the _PAGE_GLOBAL bit is set in both pte_low and
      pte_high, and pte_val() will return a u64 value consisting of those fields
      concatenated.
      Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      6e953891
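      In this configuration each software pte is a pair of 32-bit halves, so a sketch
      of the fix looks like this (toy structure and bit values, not the actual MIPS
      code): when marking the buddy entry global, the bit has to land in pte_high as
      well, and pte_none() has to ignore a pte whose only contents are those bits:

      #include <stdio.h>

      #define _PAGE_GLOBAL 0x1UL   /* toy bit position; the real layout differs */

      /* CONFIG_64BIT_PHYS_ADDR: one software pte is a pair of 32-bit words. */
      typedef struct { unsigned long pte_low, pte_high; } pte_t;

      /* Sketch of the fix: so that the hardware's AND of the even/odd G bits stays
       * set, the global bit must go into pte_high as well as pte_low. */
      static void set_buddy_global(pte_t *buddy)
      {
          buddy->pte_low  |= _PAGE_GLOBAL;
          buddy->pte_high |= _PAGE_GLOBAL;    /* the half the broken code forgot */
      }

      /* pte_none() must treat an entry that carries nothing but the global bits
       * as empty, otherwise the buddy entry looks "present". */
      static int pte_none_model(pte_t pte)
      {
          return !(pte.pte_low & ~_PAGE_GLOBAL) && !(pte.pte_high & ~_PAGE_GLOBAL);
      }

      int main(void)
      {
          pte_t buddy = { 0, 0 };

          set_buddy_global(&buddy);
          printf("low=%#lx high=%#lx pte_none=%d\n",
                 buddy.pte_low, buddy.pte_high, pte_none_model(buddy));
          return 0;
      }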
  23. 26 Apr 2006 · 1 commit
  24. 07 Nov 2005 · 1 commit
  25. 31 Oct 2005 · 1 commit
    • [PATCH] vm: remove unused/broken page_pte[_prot] macros · 1426d7a8
      Authored by Tejun Heo
      This patch removes page_pte_prot and page_pte macros from all
      architectures.  Some architectures define both, some only page_pte (broken)
      and others none.  These macros are not used anywhere.
      
      page_pte_prot(page, prot) is identical to mk_pte(page, prot) and
      page_pte(page) is identical to page_pte_prot(page, __pgprot(0)).
      
      * The following architectures define both page_pte_prot and page_pte
      
        arm, arm26, ia64, sh64, sparc, sparc64
      
      * The following architectures define only page_pte (broken)
      
        frv, i386, m32r, mips, sh, x86-64
      
      * All other architectures define neither
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      1426d7a8
  26. 30 Oct 2005 · 4 commits
  27. 28 Sep 2005 · 1 commit
    • [PATCH] mm: move_pte to remap ZERO_PAGE · 8b1f3124
      Authored by Nick Piggin
      Move the ZERO_PAGE remapping complexity to the move_pte macro in
      asm-generic, and have it conditionally depend on
      __HAVE_ARCH_MULTIPLE_ZERO_PAGE, which gets defined for MIPS.
      
      For architectures without __HAVE_ARCH_MULTIPLE_ZERO_PAGE, move_pte becomes
      a noop.
      
      From: Hugh Dickins <hugh@veritas.com>
      
      Fix a nasty little bug we missed in Nick's mremap move ZERO_PAGE patch.
      The "pte" at that point may be a swap entry or a pte_file entry: we must
      check pte_present before perhaps corrupting such an entry.
      
      Patch below against 2.6.14-rc2-mm1, but the same bug is in 2.6.14-rc2's
      mm/mremap.c, and more dangerous there since it's affecting all arches: I
      think the safest course is to send Nick's patch and Yoichi's build fix and
      this fix (build tested) on to Linus - so only MIPS can be affected.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8b1f3124
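      Piecing the two halves of the message together, a hedged sketch of the resulting
      move_pte() (toy helpers and constants, not the asm-generic code): only for
      architectures with multiple zero pages, and only for ptes that are actually
      present, remap a ZERO_PAGE mapping to the zero page matching the new address:

      #include <stdio.h>

      typedef unsigned long pte_t;
      #define _PAGE_PRESENT  0x1UL
      #define PAGE_SHIFT     12
      #define ZERO_PFN_BASE  0x1000UL
      #define COLOUR_MASK    0x3UL

      static int pte_present(pte_t pte)         { return !!(pte & _PAGE_PRESENT); }
      static unsigned long pte_pfn(pte_t pte)   { return pte >> PAGE_SHIFT; }
      static int is_zero_pfn(unsigned long pfn) { return pfn - ZERO_PFN_BASE <= COLOUR_MASK; }

      static unsigned long my_zero_pfn(unsigned long vaddr)
      {
          return ZERO_PFN_BASE + ((vaddr >> PAGE_SHIFT) & COLOUR_MASK);
      }

      /* Sketch of the __HAVE_ARCH_MULTIPLE_ZERO_PAGE move_pte(): Hugh's fix is the
       * pte_present() check, so swap entries and pte_file entries are left alone. */
      static pte_t move_pte_model(pte_t pte, unsigned long new_addr)
      {
          if (pte_present(pte) && is_zero_pfn(pte_pfn(pte)))
              pte = (my_zero_pfn(new_addr) << PAGE_SHIFT) | (pte & 0xfffUL);
          return pte;
      }

      int main(void)
      {
          pte_t zero_map = (ZERO_PFN_BASE << PAGE_SHIFT) | _PAGE_PRESENT;
          pte_t moved = move_pte_model(zero_map, 0x70003000UL);

          printf("old pfn=%#lx new pfn=%#lx\n", pte_pfn(zero_map), pte_pfn(moved));
          return 0;
      }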
  28. 13 Sep 2005 · 1 commit
  29. 05 Sep 2005 · 1 commit
  30. 26 Jun 2005 · 1 commit
  31. 17 Apr 2005 · 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4