1. 27 Sep 2006 (10 commits)
    • sh: page table alloc cleanups and page fault optimizations. · 26ff6c11
      Paul Mundt authored
      Cleanup of page table allocators, using generic folded PMD and PUD
      helpers. TLB flushing operations are moved to a more sensible spot.
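
      As a reference for the folded helpers named above, this is
      roughly what <asm-generic/pgtable-nopmd.h>-style folding does
      on a two-level architecture such as sh (a paraphrase of the
      generic headers, not code from this patch):

          /* The PUD and PMD levels are folded away, so the "middle"
           * lookups are compile-time no-ops that just re-cast the
           * entry from the level above. */
          static inline pud_t *pud_offset(pgd_t *pgd, unsigned long address)
          {
                  return (pud_t *)pgd;    /* PUD folded into PGD */
          }

          static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
          {
                  return (pmd_t *)pud;    /* PMD folded into PUD */
          }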
      
      The page fault handler is also optimized slightly: we no longer
      waste cycles disabling IRQs to flush the page from the ITLB,
      since we're already under CLI protection from the initial
      exception handler.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: SH-4A Privileged Space Mapping Buffer (PMB) support. · 0c7b1df6
      Paul Mundt authored
      Add support for 32-bit physical addressing through the SH-4A
      Privileged Space Mapping Buffer (PMB).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: Add control register barriers. · 29847622
      Paul Mundt authored
      Currently, when making changes to control registers, we
      typically need some time for the changes to take effect (8
      nops, generally). However, for sh4a we simply need to do an
      icbi.
      
      This is a simple patch implementing a general-purpose
      ctrl_barrier() which functions as a control register write
      barrier. There's some additional documentation in the patch
      itself, but it's pretty self-explanatory.
      
      There were also some places where we were not doing the
      barrier, which didn't seem to have any adverse effects on
      legacy parts, but certainly did on sh4a. It's safer to have
      the barrier in place for legacy parts as well in these cases,
      though this does make flush_tlb_all() more expensive (by an
      order of 8 nops).  We can ifdef around the flush_tlb_all()
      case for now if it's clear that all legacy parts won't have
      a problem with this.
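
      As a sketch, the barrier ends up with roughly this shape (the
      icbi helper is illustrative; see the patch itself for the real
      definitions):

          #ifdef CONFIG_CPU_SH4A
          /* On sh4a an icbi serializes control register changes
           * immediately, so one instruction suffices. */
          #define ctrl_barrier()  __icbi()
          #else
          /* Legacy parts: give the write time to take effect. */
          #define ctrl_barrier()  __asm__ __volatile__ \
                                  ("nop;nop;nop;nop;nop;nop;nop;nop")
          #endif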
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: Add flag for MMU PTEA capability. · 749cf486
      Paul Mundt authored
      Add CPU_HAS_PTEA, refactor some of the cpu flag settings.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: Fix fatal oops in copy_user_page() on sh4a (SH7780). · 8b395265
      Paul Mundt authored
      We had a pretty interesting oops happening, where copy_user_page()
      was down()'ing p3map_sem[] with a bogus offset (particularly, an
      offset that hadn't been initialized with sema_init(), due to the
      mismatch between cpu_data->dcache.n_aliases and what was assumed
      based on the old CACHE_ALIAS value).
      
      Luckily, spinlock debugging caught this for us, and so we drop
      the old hardcoded CACHE_ALIAS for sh4 completely and rely on the
      run-time probed cpu_data->dcache.alias_mask. This in turn gets
      the p3map_sem[] index right, and everything works again.
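
      Illustratively, the semaphore selection becomes (names as in
      the text above; the exact expression is a sketch, not the
      patch's code):

          /* Index by cache colour, using the run-time probed alias
           * mask rather than a compile-time CACHE_ALIAS constant. */
          down(&p3map_sem[(address & cpu_data->dcache.alias_mask)
                          >> PAGE_SHIFT]);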
      
      While we're at it, also convert to 4-level page tables.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: Support for SH7770/SH7780 CPU subtypes. · 5b19c908
      Paul Mundt authored
      Merge support for SH7770 and SH7780 SH-4A subtypes.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: Optimized cache handling for SH-4/SH-4A caches. · b638d0b9
      Richard Curnow authored
      This reworks some of the SH-4 cache handling code to more easily
      accommodate newer-style caches (particularly for the
      greater-than-direct-mapped case), as well as optimizing some of
      the old code.
      Signed-off-by: Richard Curnow <richard.curnow@st.com>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: Support for SH-4A memory barriers. · fdfc74f9
      Paul Mundt authored
      SH-4A supports 'synco' as a barrier; sprinkle it around the
      cache ops as necessary.
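
      A minimal sketch of what the SH-4A barrier definitions look
      like (the config guard is assumed here for illustration):

          #ifdef CONFIG_CPU_SH4A
          #define mb()    __asm__ __volatile__ ("synco" : : : "memory")
          #define rmb()   mb()
          #define wmb()   __asm__ __volatile__ ("synco" : : : "memory")
          #endif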
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: hugetlb updates. · 3f787fe2
      Paul Mundt authored
      For some of the larger sizes we permitted spanning pages across
      several PTEs, but this turned out not to be generally useful.
      This reverts the sh hugetlbpage interface to something more
      sensible, using huge pages at single PTE granularity.
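
      At single PTE granularity the allocator reduces to an ordinary
      page table walk; a simplified sketch (error handling trimmed):

          pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr)
          {
                  pgd_t *pgd = pgd_offset(mm, addr);
                  pud_t *pud = pud_alloc(mm, pgd, addr);
                  pmd_t *pmd = pud ? pmd_alloc(mm, pud, addr) : NULL;

                  /* One PTE maps the whole huge page; no spanning. */
                  return pmd ? pte_alloc_map(mm, pmd, addr) : NULL;
          }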
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: flush_cache_range() cleanup and optimizations. · a252710f
      Paul Mundt authored
      flush_cache_range() wasn't page-aligning the end of the range;
      we can't assume that it will always be page aligned, and we
      ended up getting unaligned faults in some rare call paths.
      
      Additionally, we add a small optimization to just purge the
      dcache entirely if the range is large enough that the page
      table walking will take longer. We use an arbitrary value of
      64 pages for the large range size, as per sh64.
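
      The shape of the check, as a sketch (the 64-page threshold is
      the commit's; flush_dcache_all() as the purge primitive is an
      assumption here):

          #define MAX_DCACHE_PAGES        64      /* as per sh64 */

          if (((end - start) >> PAGE_SHIFT) >= MAX_DCACHE_PAGES) {
                  flush_dcache_all();     /* cheaper than the walk */
                  return;
          }

          start &= PAGE_MASK;
          end = PAGE_ALIGN(end);          /* the missing alignment */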
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  2. 26 Sep 2006 (1 commit)
    • [PATCH] Standardize pxx_page macros · 46a82b2d
      Dave McCracken authored
      One of the changes necessary for shared page tables is to standardize the
      pxx_page macros.  pte_page and pmd_page have always returned the struct
      page associated with their entry, while pte_page_kernel and pmd_page_kernel
      have returned the kernel virtual address.  pud_page and pgd_page, on the
      other hand, return the kernel virtual address.
      
      Shared page tables needs pud_page and pgd_page to return the actual page
      structures.  There are very few actual users of these functions, so it is
      simple to standardize their usage.
      
      Since this is basic cleanup, I am submitting these changes as a standalone
      patch.  Per Hugh Dickins' comments about it, I am also changing the
      pxx_page_kernel macros to pxx_page_vaddr to clarify their meaning.
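
      After the change the convention reads, roughly (generic forms;
      the per-arch definitions vary):

          /* Kernel virtual address of the table this entry points to. */
          #define pmd_page_vaddr(pmd) \
                  ((unsigned long)__va(pmd_val(pmd) & PAGE_MASK))

          /* The struct page, now consistently at every level. */
          #define pmd_page(pmd) \
                  virt_to_page((void *)pmd_page_vaddr(pmd))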
      Signed-off-by: Dave McCracken <dmccr@us.ibm.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  3. 01 Jul 2006 (1 commit)
  4. 22 Mar 2006 (3 commits)
    • [PATCH] hugepage: is_aligned_hugepage_range() cleanup · 42b88bef
      David Gibson authored
      Quite a long time back, prepare_hugepage_range() replaced
      is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code to
      verify if an address range is suitable for a hugepage mapping.
      is_aligned_hugepage_range() stuck around, but only to implement
      prepare_hugepage_range() on archs which didn't implement their own.
      
      Most archs (everything except ia64 and powerpc) used the same
      implementation of is_aligned_hugepage_range().  On powerpc, which
      implements its own prepare_hugepage_range(), the custom version was never
      used.
      
      In addition, "is_aligned_hugepage_range()" was a bad name, because it
      suggests it returns true iff the given range is a good hugepage range,
      whereas in fact it returns 0-or-error (so the sense is reversed).
      
      This patch cleans up by abolishing is_aligned_hugepage_range().  Instead
      prepare_hugepage_range() is defined directly.  Most archs use the default
      version, which simply checks the given region is aligned to the size of a
      hugepage.  ia64 and powerpc define custom versions.  The ia64 one simply
      checks that the range is in the correct address space region in addition to
      being suitably aligned.  The powerpc version (just as previously) checks
      for suitable addresses, and if necessary performs low-level MMU frobbing to
      set up new areas for use by hugepages.
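
      The default version described above amounts to (a sketch
      matching the description, with the 0-or-error return
      convention):

          static inline int prepare_hugepage_range(unsigned long addr,
                                                   unsigned long len)
          {
                  if (len & ~HPAGE_MASK)
                          return -EINVAL;
                  if (addr & ~HPAGE_MASK)
                          return -EINVAL;
                  return 0;
          }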
      
      No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR).
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] remove set_page_count() outside mm/ · 7835e98b
      Nick Piggin authored
      set_page_count usage outside mm/ is limited to setting the refcount to 1.
      Remove set_page_count from outside mm/, and replace those users with
      init_page_count() and set_page_refcounted().
      
      This allows more debug checking, and tighter control on how code is allowed
      to play around with page->_count.
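
      Roughly what the two replacements look like (sketched from the
      description above; set_page_refcounted is mm-internal):

          /* Establish the refcount on a fresh page. */
          static inline void init_page_count(struct page *page)
          {
                  atomic_set(&page->_count, 1);
          }

          /* Hand a page out of the allocator with one reference. */
          static inline void set_page_refcounted(struct page *page)
          {
                  set_page_count(page, 1);
          }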
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: split highorder pages · 8dfcc9ba
      Nick Piggin authored
      Have an explicit mm call to split higher-order pages into
      individual pages. Should help to avoid bugs and be more explicit
      about the code's intention.
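
      A sketch of the new call (close in spirit to the mm/page_alloc.c
      version; the debug checks are illustrative):

          void split_page(struct page *page, unsigned int order)
          {
                  int i;

                  BUG_ON(PageCompound(page));
                  BUG_ON(!page_count(page));

                  /* The head already holds a reference; give one to
                   * each of the (1 << order) - 1 tail pages. */
                  for (i = 1; i < (1 << order); i++)
                          set_page_refcounted(page + i);
          }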
      Signed-off-by: NNick Piggin <npiggin@suse.de>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Zankel <chris@zankel.net>
      Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  5. 17 Jan 2006 (2 commits)
  6. 07 Nov 2005 (2 commits)
  7. 30 Oct 2005 (3 commits)
    • [PATCH] mm: i386 sh sh64 ready for split ptlock · 60ec5585
      Hugh Dickins authored
      Use pte_offset_map_lock, instead of pte_offset_map (or inappropriate
      pte_offset_kernel) and mm-wide page_table_lock, in sundry arch places.
      
      The i386 vm86 mark_screen_rdonly: yes, there was and is an assumption that the
      screen fits inside the one page table, as indeed it does.
      
      The sh __do_page_fault: which handles both kernel faults (without lock) and
      user mm faults (locked - though it set_pte without locking before).
      
      The sh64 flush_cache_range and helpers: which wrongly thought callers held
      page_table_lock before (only its tlb_start_vma did, and no longer does so);
      moved the flush loop down, and adjusted the large versus small range decision
      to consider a range which spans page tables as large.
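
      The calling pattern being adopted, for reference (a minimal
      sketch of the API named above):

          spinlock_t *ptl;
          pte_t *pte;

          /* Maps the PTE page and takes its (possibly split) lock. */
          pte = pte_offset_map_lock(mm, pmd, address, &ptl);
          /* ... examine or modify *pte ... */
          pte_unmap_unlock(pte, ptl);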
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: init_mm without ptlock · 872fec16
      Hugh Dickins authored
      First step in pushing down the page_table_lock.  init_mm.page_table_lock has
      been used throughout the architectures (usually for ioremap): not to serialize
      kernel address space allocation (that's usually vmlist_lock), but because
      pud_alloc,pmd_alloc,pte_alloc_kernel expect caller holds it.
      
      Reverse that: don't lock or unlock init_mm.page_table_lock in any of the
      architectures; instead rely on pud_alloc,pmd_alloc,pte_alloc_kernel to take
      and drop it when allocating a new one, to check lest a racing task already
      did.  Similarly no page_table_lock in vmalloc's map_vm_area.
      
      Some temporary ugliness in __pud_alloc and __pmd_alloc: since they also handle
      user mms, which are converted only by a later patch, for now they have to lock
      differently according to whether or not it's init_mm.
      
      If sources get muddled, there's a danger that an arch source taking
      init_mm.page_table_lock will be mixed with common source also taking it (or
      neither take it).  So break the rules and make another change, which should
      break the build for such a mismatch: remove the redundant mm arg from
      pte_alloc_kernel (ppc64 scrapped its distinct ioremap_mm in 2.6.13).
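
      Concretely, an ioremap-style caller changes like so (sketch):

          /* Before: caller held init_mm.page_table_lock and passed
           * the mm explicitly. */
          pte = pte_alloc_kernel(&init_mm, pmd, addr);

          /* After: no mm argument; the allocator takes and drops
           * the lock itself when a new page table is needed. */
          pte = pte_alloc_kernel(pmd, addr);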
      
      Exceptions: arm26 used pte_alloc_kernel on user mm, now pte_alloc_map; ia64
      used pte_alloc_map on init_mm, now pte_alloc_kernel; parisc had bad args to
      pmd_alloc and pte_alloc_kernel in unused USE_HPPA_IOREMAP code; ppc64
      map_io_page forgot to unlock on failure; ppc mmu_mapin_ram and ppc64 im_free
      took page_table_lock for no good reason.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: sh64 hugetlbpage.c · 147efea8
      Hugh Dickins authored
      The sh64 hugetlbpage.c seems to be erroneous, left over from a
      bygone age, clashing with the common hugetlb.c.  Replace it by a
      copy of the sh hugetlbpage.c.  Except, delete the mk_pte_huge
      macro, which neither uses.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  8. 28 Oct 2005 (1 commit)
  9. 22 Jun 2005 (1 commit)
    • [PATCH] Hugepage consolidation · 63551ae0
      David Gibson authored
      A lot of the code in arch/*/mm/hugetlbpage.c is quite similar.
      This patch attempts to consolidate a lot of the code across the
      archs, putting the combined version in mm/hugetlb.c.  There are
      a couple of uglyish hacks in order to convert all the hugepage
      archs, but the result is a very large reduction in the total
      amount of code.  It also means things like hugepage lazy
      allocation could be implemented in one place, instead of six.
      
      Tested, at least a little, on ppc64, i386 and x86_64.
      
      Notes:
      	- this patch changes the meaning of set_huge_pte() to be more
      	  analogous to set_pte()
      	- does SH4 need a special huge_ptep_get_and_clear()??
      Acked-by: William Lee Irwin <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  10. 17 Apr 2005 (1 commit)
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!