1. 16 Feb 2015, 1 commit
  2. 25 Nov 2014, 3 commits
  3. 22 Sep 2014, 1 commit
  4. 26 Aug 2014, 1 commit
    • MIPS: Remove race window in page fault handling · 4b34cdde
      Lars Persson committed
      Multicore MIPSes without I/D hardware coherency suffered from a race
      condition in the page fault handler. The page table entry was
      published before any pending lazy D-cache flush was committed, hence
      it allowed execution of stale page cache data by other VPEs in the
      system.
      
      To make the cache handling safe, the flushing must already be performed
      in the set_pte_at() function. MIPSes without coherent I-caches can see a
      small increase in flushes because the execute flag is not available in
      set_pte_at().
      
      [ralf@linux-mips.org: outlining set_pte_at() saves a good k in a test
      build, so I moved its definition from pgtable.h to cache.c.]
      Signed-off-by: Lars Persson <larper@axis.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/7511/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
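
      To make the ordering concrete, here is a minimal sketch of a set_pte_at()
      that commits any pending lazy D-cache flush before publishing the entry.
      The helpers pending_dcache_flush() and commit_dcache_flush() are
      hypothetical stand-ins for the lazy-flush bookkeeping, not the literal
      MIPS code:

      /* Sketch: flush first, publish second.  set_pte() is the primitive
       * that makes the entry visible to other VPEs. */
      static void set_pte_at(struct mm_struct *mm, unsigned long addr,
                             pte_t *ptep, pte_t pteval)
      {
              if (pte_present(pteval)) {
                      struct page *page = pte_page(pteval);

                      /* Commit deferred writeback for this page, so no
                       * other VPE can execute stale cache lines.
                       * (Both helpers below are assumed names.) */
                      if (page && pending_dcache_flush(page))
                              commit_dcache_flush(page, addr);
              }
              /* Only now publish the entry in the page table. */
              set_pte(ptep, pteval);
      }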
  5. 19 Aug 2014, 1 commit
    • MIPS: Remove race window in page fault handling · 2a4a8b1e
      Lars Persson committed
      Multicore MIPSes without I/D hardware coherency suffered from a race
      condition in the page fault handler. The page table entry was
      published before any pending lazy D-cache flush was committed, hence
      it allowed execution of stale page cache data by other VPEs in the
      system.
      
      To make the cache handling safe, the flushing must already be performed
      in the set_pte_at() function. MIPSes without coherent I-caches can see a
      small increase in flushes because the execute flag is not available in
      set_pte_at().
      
      [ralf@linux-mips.org: outlining set_pte_at() saves a good k in a test
      build, so I moved its definition from pgtable.h to cache.c.]
      Signed-off-by: Lars Persson <larper@axis.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/7511/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  6. 02 Aug 2014, 1 commit
    • MIPS: mm: Use the Hardware Page Table Walker if the core supports it · f1014d1b
      Markos Chandras committed
      The Hardware Page Table Walker aims to speed up TLB refill exceptions
      by handling them at the hardware level instead of through a software
      TLB refill handler. However, a TLB refill exception can still be
      thrown in certain cases, such as synchronous exceptions, or address
      translation or memory errors during the HTW operation. As a result,
      the HTW must not be considered a complete replacement for the software
      TLB refill handler, but rather a fast path for it.
      For the HTW to work, the PWBase register must contain the task's page
      global directory address so the HTW will kick in on TLB refill
      exceptions.
      
      Because the HTW is a separate engine embedded deep in the CPU pipeline,
      we need to restart the HTW every time a PTE changes, to avoid the HTW
      fetching an old entry from the page tables. It's also necessary to
      restart the HTW on context switches to prevent it from fetching a
      page from the previous process. Finally, since the HTW uses the
      entryhi register to write the translations to the TLB, it's necessary
      to stop the HTW whenever entryhi changes (e.g. for TLB probe
      operations) and re-enable it afterwards.
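
      A hedged sketch of that stop/restart discipline as paired macros around
      an entryhi update; cpu_has_htw and the c0 register accessors follow the
      usual MIPS naming, but the PWCtl bit layout (MIPS_PWCTL_PWEN_SHIFT) is
      an assumption here, not the patch's literal code:

      /* Sketch: pause the walker while entryhi is borrowed, then resume.
       * The PWEn bit position in CP0 PWCtl is assumed. */
      #define htw_stop()                                              \
      do {                                                            \
              if (cpu_has_htw)                                        \
                      write_c0_pwctl(read_c0_pwctl() &                \
                                     ~(1 << MIPS_PWCTL_PWEN_SHIFT));  \
      } while (0)

      #define htw_start()                                             \
      do {                                                            \
              if (cpu_has_htw)                                        \
                      write_c0_pwctl(read_c0_pwctl() |                \
                                     (1 << MIPS_PWCTL_PWEN_SHIFT));   \
      } while (0)

      /* Typical use around a TLB probe, which clobbers entryhi: */
      static inline void tlb_probe_safe(unsigned long hi)
      {
              htw_stop();        /* walker must not see a borrowed entryhi */
              write_c0_entryhi(hi);
              tlb_probe();
              htw_start();       /* resume hardware refills */
      }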
      
      == Performance ==
      
      The following trivial test was used to measure the performance of the
      HTW. Using the same root filesystem, the following command was used
      to measure the number of tlb refill handler executions with and
      without (using 'nohtw' kernel parameter) HTW support.  The kernel was
      modified to use a scratch register as a counter for the TLB refill
      exceptions.
      
      find /usr -type f -exec ls -lh {} \;
      
      HTW Enabled:
      TLB refill exceptions: 12306
      
      HTW Disabled:
      TLB refill exceptions: 17805
      Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Patchwork: https://patchwork.linux-mips.org/patch/7336/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  7. 28 May 2014, 1 commit
  8. 29 Jun 2013, 1 commit
  9. 08 Apr 2013, 1 commit
  10. 01 Feb 2013, 1 commit
  11. 13 Dec 2012, 1 commit
  12. 12 Dec 2012, 1 commit
  13. 26 Nov 2012, 1 commit
  14. 14 Sep 2012, 1 commit
  15. 26 Jul 2011, 1 commit
    • MIPS: topdown mmap support · d0be89f6
      Jian Peng committed
      This patch introduces top-down mmap support in the user process
      address-space allocation policy.

      Recently, we ran some large applications that use mmap heavily, which
      led to OOM due to the inflexible mmap allocation policy on MIPS32.

      Since most other major archs have supported it for years, it is
      reasonable to follow the trend and reduce the pain of porting
      applications.

      Due to cache aliasing concerns, arch_get_unmapped_area_topdown() and
      other helper functions are implemented in arch/mips/kernel/syscall.c.
      Signed-off-by: Jian Peng <jipeng2005@gmail.com>
      Cc: David Daney <ddaney@caviumnetworks.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2389/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
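
      As a hedged illustration of the aliasing constraint, a top-down
      allocator on a virtually indexed cache must keep shared mappings
      colour-aligned. The sketch below assumes a variable shm_align_mask
      holding the cache-colour mask; it mirrors the usual MIPS COLOUR_ALIGN
      pattern rather than quoting syscall.c:

      /* Align a candidate address to the D-cache colour implied by the
       * file offset, so two mappings of the same page share a colour.
       * shm_align_mask is assumed to hold (cache way size - 1). */
      static unsigned long colour_align(unsigned long addr,
                                        unsigned long pgoff)
      {
              unsigned long off = (pgoff << PAGE_SHIFT) & shm_align_mask;

              return ((addr + shm_align_mask) & ~shm_align_mask) + off;
      }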
  16. 27 Feb 2010, 2 commits
  17. 21 Feb 2010, 1 commit
    • MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself · 4b3073e1
      Russell King committed
      On VIVT ARM, when we have multiple shared mappings of the same file
      in the same MM, we need to ensure that we have coherency across all
      copies.  We do this via make_coherent() by making the pages
      uncacheable.
      
      This used to work fine, until we allowed highmem with highpte - we
      now have a page table which is mapped as required, and is not available
      for modification via update_mmu_cache().
      
      Ralf Baechle suggested getting rid of the PTE value passed to
      update_mmu_cache():
      
        On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
        to construct a pointer to the pte again.  Passing a pte_t * is much
        more elegant.  Maybe we might even replace the pte argument with the
        pte_t?
      
      Ben Herrenschmidt would also like the pte pointer for PowerPC:
      
        Passing the ptep in there is exactly what I want.  I want that
        -instead- of the PTE value, because I have issue on some ppc cases,
        for I$/D$ coherency, where set_pte_at() may decide to mask out the
        _PAGE_EXEC.
      
      So, pass in the mapped page table pointer into update_mmu_cache(), and
      remove the PTE value, updating all implementations and call sites to
      suit.
      
      Includes a fix from Stephen Rothwell:
      
        sparc: fix fallout from update_mmu_cache API change
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
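
      A hedged before/after of the API change; the body shown sketches the
      MIPS benefit (the pointer arrives for free instead of being re-derived
      by a page-table walk) and is not the verbatim patch:

      /* Before: only the PTE value was passed, so __update_tlb() had to
       * walk the page tables again to find the entry. */
      void update_mmu_cache(struct vm_area_struct *vma,
                            unsigned long address, pte_t pte);

      /* After: the mapped page-table pointer is passed directly (sketch). */
      void update_mmu_cache(struct vm_area_struct *vma,
                            unsigned long address, pte_t *ptep)
      {
              pte_t pte = *ptep;   /* the value is still available */

              __update_tlb(vma, address, pte);
              __update_cache(vma, address, pte);
      }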
  18. 17 Dec 2009, 1 commit
  19. 22 Sep 2009, 1 commit
    • mm: ZERO_PAGE without PTE_SPECIAL · 62eede62
      Hugh Dickins committed
      Reinstate anonymous use of ZERO_PAGE to all architectures, not just to
      those which __HAVE_ARCH_PTE_SPECIAL: as suggested by Nick Piggin.
      
      Contrary to how I'd imagined it, there's nothing ugly about this, just a
      zero_pfn test built into one or another block of vm_normal_page().
      
      But the MIPS ZERO_PAGE-of-many-colours case demands is_zero_pfn() and
      my_zero_pfn() inlines.  Reinstate its mremap move_pte() shuffling of
      ZERO_PAGEs we did from 2.6.17 to 2.6.19?  Not unless someone shouts for
      that: it would have to take vm_flags to weed out some cases.
      Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Rik van Riel <riel@redhat.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
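
      To illustrate the many-colours case, a hedged sketch of the two inlines
      the message says MIPS needs. It assumes zero_pfn marks the start of a
      contiguous block of zero pages, one per cache colour, spanned by
      zero_page_mask; this follows the spirit of the MIPS code rather than
      quoting it:

      /* Any pfn inside the coloured block counts as "the" zero page. */
      static inline int is_zero_pfn(unsigned long pfn)
      {
              extern unsigned long zero_pfn, zero_page_mask;
              unsigned long offset_from_zero_pfn = pfn - zero_pfn;

              return offset_from_zero_pfn <= (zero_page_mask >> PAGE_SHIFT);
      }

      /* Pick the zero page whose colour matches the faulting address. */
      static inline unsigned long my_zero_pfn(unsigned long addr)
      {
              extern unsigned long zero_pfn, zero_page_mask;

              return zero_pfn + ((addr & zero_page_mask) >> PAGE_SHIFT);
      }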
  20. 17 Jun 2009, 1 commit
  21. 11 Oct 2008, 1 commit
  22. 06 Jun 2008, 1 commit
  23. 29 Apr 2008, 2 commits
  24. 28 Apr 2008, 1 commit
    • mm: introduce pte_special pte bit · 7e675137
      Nick Piggin committed
      s390, for one, cannot implement VM_MIXEDMAP with pfn_valid, due to its
      memory model (which is more dynamic than most).  Instead, they had
      proposed to implement it with an additional path through
      vm_normal_page(), using a bit in the pte to determine whether or not
      the page should be refcounted:
      
      vm_normal_page()
      {
      	...
              if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
                      if (vma->vm_flags & VM_MIXEDMAP) {
      #ifdef s390
      			if (!mixedmap_refcount_pte(pte))
      				return NULL;
      #else
                              if (!pfn_valid(pfn))
                                      return NULL;
      #endif
                              goto out;
                      }
      	...
      }
      
      This is fine; however, if we are allowed to use a bit in the pte to
      determine refcountedness, we can use that to _completely_ replace all
      the vma-based schemes.  So instead of adding more cases to the already
      complex vma-based scheme, we can have a clearly separate and simple
      pte-based scheme (and get slightly better code generation in the
      process):
      
      vm_normal_page()
      {
      #ifdef s390
      	if (!mixedmap_refcount_pte(pte))
      		return NULL;
      	return pte_page(pte);
      #else
      	...
      #endif
      }
      
      And finally, we would rather make this concept usable by any
      architecture than make it s390-only, so implement a new type of pte
      state for this.  Unfortunately the old vma-based code must stay,
      because some architectures may not be able to spare pte bits.  This
      makes vm_normal_page a little bit more ugly than we would like, but
      the two cases are clearly separate.
      
      So introduce a pte_special pte state, and use it in mm/memory.c.  It is
      currently a noop for all architectures, so this doesn't actually result in any
      compiled code changes to mm/memory.o.
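
      For concreteness, a hedged sketch of how the pte-based scheme reads
      once an architecture provides the bit; this is the shape of the
      eventual vm_normal_page() fast path under __HAVE_ARCH_PTE_SPECIAL,
      not an exact hunk from this patch:

      struct page *vm_normal_page(struct vm_area_struct *vma,
                                  unsigned long addr, pte_t pte)
      {
              unsigned long pfn = pte_pfn(pte);

      #ifdef __HAVE_ARCH_PTE_SPECIAL
              /* One pte bit answers the question directly: special
               * mappings carry no struct page to refcount. */
              if (pte_special(pte))
                      return NULL;
      #else
              /* Fallback for architectures without a spare pte bit:
               * keep the old vma-based checks (simplified here). */
              if (unlikely(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)))
                      if (!pfn_valid(pfn))
                              return NULL;
      #endif
              return pfn_to_page(pfn);
      }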
      
      BTW:
      I haven't put vm_normal_page() into arch code as-per an earlier suggestion.
      The reason is that, regardless of where vm_normal_page is actually
      implemented, the *abstraction* is still exactly the same. Also, while it
      depends on whether the architecture has pte_special or not, those are the
      only two possible cases, and it really isn't an arch-specific function --
      the role of the arch code should be to provide primitive functions and
      accessors with which to build the core code; pte_special does that. We do
      not want architectures to know or care about vm_normal_page itself, and
      we definitely don't want them being able to invent something new there
      out of sight of mm/ code. If we made vm_normal_page an arch function, then
      we have to make vm_insert_mixed (next patch) an arch function too. So I
      don't think moving it to arch code fundamentally improves any abstractions,
      while it does practically make the code more difficult to follow, for both
      mm and arch developers, and easier to misuse.
      
      [akpm@linux-foundation.org: build fix]
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Carsten Otte <cotte@de.ibm.com>
      Cc: Jared Hulbert <jaredeh@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  25. 12 Oct 2007, 1 commit
  26. 27 Aug 2007, 2 commits
  27. 17 Jul 2007, 1 commit
  28. 09 May 2007, 1 commit
  29. 25 Mar 2007, 1 commit
  30. 31 Jan 2007, 1 commit
    • [PATCH] mm: mremap correct rmap accounting · 701dfbc1
      Hugh Dickins committed
      Nick Piggin points out that page accounting on MIPS's multiple
      ZERO_PAGEs is not maintained by its move_pte, which could lead to
      freeing a ZERO_PAGE.

      Instead of complicating that move_pte, just forget the minor
      optimization when mremapping, and change the one thing which needed it
      for correctness: have filemap_xip use ZERO_PAGE(0) throughout instead
      of according to address.
      
      [ "There is no block device driver one could use for XIP on mips
         platforms" - Carsten Otte ]
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Andrew Morton <akpm@osdl.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Carsten Otte <cotte@de.ibm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  31. 30 Nov 2006, 1 commit
    • [MIPS] page.h: remove __pa() usages. · 99e3b942
      Franck Bui-Huu committed
      __pa() was used by virt_to_page() and virt_addr_valid(). The latter
      are used while the kernel is being initialised, so __pa() is not
      appropriate; we use virt_to_phys() instead.

      Furthermore, __pa() is going to take care of the CKSEG0/XKPHYS
      address mix for 64-bit kernels. This makes __pa() more complex than
      virt_to_phys(), and this extra work is not needed by virt_to_page()
      and virt_addr_valid().

      Finally, it consolidates the virt_to_phys() prototype by making its
      argument 'const'. This avoids some warnings that were due to
      virt_to_page() usages passing a const pointer.
      Signed-off-by: Franck Bui-Huu <fbuihuu@gmail.com>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
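
      A hedged sketch of the resulting definitions; the bodies mirror the
      common MIPS pattern (a plain linear translation) and should be read as
      illustrative rather than as the patch's exact hunk:

      /* Sketch: cheap enough for early init; the const argument avoids
       * warnings from callers that pass const pointers. */
      static inline unsigned long virt_to_phys(const volatile void *address)
      {
              return (unsigned long)address - PAGE_OFFSET;
      }

      /* Built on virt_to_phys() rather than the heavier __pa(), which
       * must also sort out the CKSEG0/XKPHYS mix on 64-bit kernels. */
      #define virt_to_page(kaddr) \
              pfn_to_page(virt_to_phys((void *)(kaddr)) >> PAGE_SHIFT)
      #define virt_addr_valid(kaddr) \
              pfn_valid(virt_to_phys((void *)(kaddr)) >> PAGE_SHIFT)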
  32. 26 Sep 2006, 1 commit
    • [PATCH] Standardize pxx_page macros · 46a82b2d
      Dave McCracken committed
      One of the changes necessary for shared page tables is to standardize the
      pxx_page macros.  pte_page and pmd_page have always returned the struct
      page associated with their entry, while pte_page_kernel and pmd_page_kernel
      have returned the kernel virtual address.  pud_page and pgd_page, on the
      other hand, return the kernel virtual address.
      
      Shared page tables need pud_page and pgd_page to return the actual page
      structures.  There are very few actual users of these functions, so it is
      simple to standardize their usage.
      
      Since this is basic cleanup, I am submitting these changes as a standalone
      patch.  Per Hugh Dickins' comments about it, I am also changing the
      pxx_page_kernel macros to pxx_page_vaddr to clarify their meaning.
      Signed-off-by: Dave McCracken <dmccr@us.ibm.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
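
      A hedged sketch of the convention after the cleanup, using generic
      definitions; the exact macro bodies vary per architecture, so treat
      these as the shape of the rename rather than literal kernel code:

      /* pxx_page(): always the struct page backing the entry (sketch). */
      #define pmd_page(pmd)  pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT)
      #define pud_page(pud)  pfn_to_page(pud_val(pud) >> PAGE_SHIFT)

      /* pxx_page_vaddr(): the kernel virtual address of the table the
       * entry points to (previously misnamed pxx_page_kernel()). */
      #define pmd_page_vaddr(pmd)  ((unsigned long)__va(pmd_val(pmd) & PAGE_MASK))
      #define pud_page_vaddr(pud)  ((unsigned long)__va(pud_val(pud) & PAGE_MASK))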
  33. 20 Jun 2006, 1 commit
    • [MIPS] Drop 0 definition for kern_addr_valid · d9b8d0da
      Ralf Baechle committed
      kern_addr_valid is currently only being used in kmem_ptr_validate,
      which makes some vague attempt at verifying the validity of an
      address. Only IA-64, PARISC and x86-64 actually make some effort to
      verify the validity of the pointer. Most architecture definitions of
      kern_addr_valid() just define it as 1; the Alpha, and
      CONFIG_DISCONTIGMEM on i386 and MIPS, even as 0. The 0 definition
      results in kmem_ptr_validate always failing, which in turn causes
      d_validate to always fail. d_validate's only two users are smbfs and
      ncpfs, so the 0 definition ended up breaking those ...
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
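
      The fix is a one-liner in spirit; a hedged sketch of the before and
      after definitions for the affected MIPS configuration:

      /* Before: DISCONTIGMEM configs claimed every kernel address was
       * invalid, so kmem_ptr_validate() and d_validate() always failed. */
      #define kern_addr_valid(addr)  (0)

      /* After: match the common architecture default. */
      #define kern_addr_valid(addr)  (1)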
  34. 06 Jun 2006, 1 commit
  35. 02 Jun 2006, 1 commit
    • [SPARC64]: Fix D-cache corruption in mremap · 0b0968a3
      David S. Miller committed
      If we move a mapping from one virtual address to another, and this
      changes the virtual color of the mapping for those pages, we can see
      corrupt data due to D-cache aliasing.
      
      Check for and deal with this by overriding the move_pte()
      macro.  Set things up so that other platforms can cleanly
      override the move_pte() macro too.
      Signed-off-by: David S. Miller <davem@davemloft.net>
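
      A hedged sketch of the override hook; DCACHE_COLOUR_MASK and
      flush_dcache_page_all() are stand-ins here, since the point is the
      shape of the hook (flush when the old and new addresses land on
      different colors), not sparc64's literal code:

      /* Sketch: called from mremap's PTE copy loop.  If the page is
       * present and the old and new virtual addresses differ in the
       * colour bits (mask assumed), flush it before it is installed
       * at the new address. */
      #define move_pte(pte, prot, old_addr, new_addr)                   \
      ({                                                                \
              pte_t newpte = (pte);                                     \
              if (pte_present(pte) && pfn_valid(pte_pfn(pte)) &&        \
                  (((old_addr) ^ (new_addr)) & DCACHE_COLOUR_MASK))     \
                      flush_dcache_page_all(current->mm,                \
                                            pfn_to_page(pte_pfn(pte))); \
              newpte;                                                   \
      })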