1. 16 Feb 2010, 1 commit
    • sh: Prevent fixed slot PMB remapping from clobbering boot entries. · 55cef91a
      Authored by Paul Mundt
      The PMB initialization code walks the entries and synchronizes the
      software PMB state with the hardware mappings, preserving the slot index.
      Unfortunately pmb_alloc() only tested the bit position in the entry map
      and failed to set it, so subsequent remaps could be dynamically assigned
      a slot that trampled an existing boot mapping, with general badness
      ensuing.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      55cef91a
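The fix can be pictured with a toy slot bitmap: reserving a fixed boot slot must both test and set its bit, otherwise dynamic allocation can hand the same slot out again. The names and sizes below are an illustrative userspace model, not the kernel's PMB code.

```c
#include <assert.h>

#define NR_PMB_ENTRIES 16

/* Hypothetical model of the PMB slot bitmap. */
static unsigned long pmb_map;

/* Reserve a fixed slot: testing the bit alone is not enough -- the
 * fix amounts to also *setting* it so the dynamic path skips the slot. */
static int pmb_reserve_fixed(int slot)
{
    if (pmb_map & (1UL << slot))
        return -1;              /* already taken */
    pmb_map |= 1UL << slot;     /* the set that the bug omitted */
    return slot;
}

/* Dynamic allocation: hand out the first free slot. */
static int pmb_alloc_dynamic(void)
{
    for (int i = 0; i < NR_PMB_ENTRIES; i++) {
        if (!(pmb_map & (1UL << i))) {
            pmb_map |= 1UL << i;
            return i;
        }
    }
    return -1;
}
```

With the set in place, a dynamic remap after a fixed reservation lands in the next free slot instead of clobbering the boot entry.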
  2. 12 Feb 2010, 1 commit
    • sh: Isolate uncached mapping support. · b0f3ae03
      Authored by Paul Mundt
      This splits out the uncached mapping support under its own config option,
      presently only used by 29-bit mode and 32-bit + PMB. This will make it
      possible to optionally add an uncached mapping on sh64 as well as booting
      without an uncached mapping for 32-bit.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      b0f3ae03
  3. 29 Jan 2010, 1 commit
  4. 26 Jan 2010, 1 commit
    • sh: Mass ctrl_in/outX to __raw_read/writeX conversion. · 9d56dd3b
      Authored by Paul Mundt
      The old ctrl in/out routines are non-portable and unsuitable for
      cross-platform use. While drivers/sh has already been sanitized, there
      is still quite a lot of code that is not. This converts the arch/sh/ bits
      over, which permits us to flag the routines as deprecated whilst still
      building with -Werror for the architecture code, and to ensure that
      future users are not added.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      9d56dd3b
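The shape of the conversion is mechanical: `ctrl_inl(addr)` becomes `__raw_readl(addr)` and so on, with a bare `unsigned long` address replaced by a pointer. The stand-ins below model that type change in plain C; `model_raw_*` and `fake_reg` are illustrative, not the kernel accessors.

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-ins for __raw_readl/__raw_writel: unswapped loads
 * and stores through a volatile pointer, where the legacy ctrl_inl/
 * ctrl_outl routines took a plain unsigned long address instead. */
static inline uint32_t model_raw_readl(const volatile void *addr)
{
    return *(const volatile uint32_t *)addr;
}

static inline void model_raw_writel(uint32_t val, volatile void *addr)
{
    *(volatile uint32_t *)addr = val;
}

/* A fake device register to exercise the pair. */
static uint32_t fake_reg;

static uint32_t roundtrip(uint32_t val)
{
    model_raw_writel(val, &fake_reg);
    return model_raw_readl(&fake_reg);
}
```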
  5. 21 Jan 2010, 2 commits
  6. 20 Jan 2010, 3 commits
    • sh: pretty print virtual memory map on boot. · 35f99c0d
      Authored by Paul Mundt
      This cribs the pretty printing from arch/x86/mm/init_32.c to dump the
      virtual memory layout on boot. This is primarily intended as a debugging
      aid, given that the newer CPUs have full control over their address space
      and as such have little to nothing in common with the legacy layout.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      35f99c0d
    • sh: Correct iounmap fixmap teardown. · 920efaab
      Authored by Paul Mundt
      iounmap_fixed() had a couple of bugs in it that caused it to effectively
      fail at life. It factored the mapping offset into the total number of
      pages to unmap and rounded up to the next page boundary, which doesn't
      match ioremap_fixed()'s behaviour.
      
      When ioremap_fixed() pegs a slot, the address in the mapping data already
      contains the offset displacement, and the size is recorded verbatim given
      that we're only interested in total number of pages required. As such, we
      need to calculate the total number from the original size in the unmap
      path as well.
      
      At the same time, there was also an off-by-1 problem in the fixmap index
      calculation which has also been corrected.
      
      Previously subsequent remaps of an identical fixmap index would trigger
      the pte_ERROR() in set_pte_phys():
      
      	arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
      	arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
      	arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
      	arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
      	arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
      	arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
      
      With this patch in place, the iounmap-driven fixmap teardown actually
      does what it's supposed to do.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      920efaab
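The page-count mismatch can be shown with a small model: the map side records the size verbatim, so the unmap side must derive the page count from that same size, not from offset plus size. Constants and names below are illustrative, not the kernel's.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE_MODEL 4096UL

/* Pages needed to cover `size` bytes -- matching the ioremap_fixed()
 * side, which records the size verbatim. */
static size_t nr_pages(size_t size)
{
    return (size + PAGE_SIZE_MODEL - 1) / PAGE_SIZE_MODEL;
}

/* The buggy unmap path additionally folded in the in-page offset,
 * overshooting by a page whenever offset + size crossed a boundary. */
static size_t nr_pages_buggy(size_t offset, size_t size)
{
    return (offset + size + PAGE_SIZE_MODEL - 1) / PAGE_SIZE_MODEL;
}
```

A one-page mapping with a small in-page offset illustrates the overshoot: the buggy calculation tears down two pages where one was mapped.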
    • sh: Make 29/32-bit mode check helper generally available. · 2efa53b2
      Authored by Paul Mundt
      Presently __in_29bit_mode() is only defined for the PMB case, but
      it's also easily derived from the CONFIG_29BIT and CONFIG_32BIT &&
      CONFIG_PMB=n cases.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      2efa53b2
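The derivation described above can be sketched with preprocessor logic: the helper is trivially 1 under CONFIG_29BIT, trivially 0 under CONFIG_32BIT without PMB, and only needs runtime state in the PMB case. This is a hypothetical userspace model (toggle the defines to explore); it is not the kernel's actual header.

```c
#include <assert.h>

/* Illustrative config selection; the real kernel symbols are Kconfig
 * options, not plain defines. */
#define CONFIG_32BIT 1
#define CONFIG_PMB   1

#if defined(CONFIG_29BIT)
# define in_29bit_mode() 1                 /* 29-bit: always true */
#elif defined(CONFIG_32BIT) && !defined(CONFIG_PMB)
# define in_29bit_mode() 0                 /* 32-bit, no PMB: always false */
#else
static int pmb_29bit_flag;                 /* runtime state under PMB */
# define in_29bit_mode() (pmb_29bit_flag)
#endif
```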
  7. 19 Jan 2010, 5 commits
    • sh64: Fix up PC casting in unaligned fixup notifier with 32bit ABI. · 88ea1a44
      Authored by Paul Mundt
      Presently the build bails with the following:
      
        CC      arch/sh/mm/alignment.o
      cc1: warnings being treated as errors
      arch/sh/mm/alignment.c: In function 'unaligned_fixups_notify':
      arch/sh/mm/alignment.c:69: warning: cast to pointer from integer of different size
      arch/sh/mm/alignment.c:74: warning: cast to pointer from integer of different size
      make[2]: *** [arch/sh/mm/alignment.o] Error 1
      
      This is due to the fact that regs->pc is always 64-bit, while the pointer size
      depends on the ABI. Wrapping through instruction_pointer() takes care of the
      appropriate casting for both configurations.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      88ea1a44
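The warning and its fix can be modelled in a few lines: casting a 64-bit `pc` straight to a pointer warns under a 32-bit ABI, while going through an `unsigned long` accessor (which is what `instruction_pointer()` amounts to) narrows cleanly to the ABI word size. The struct and names below are illustrative, not the kernel's.

```c
#include <assert.h>
#include <stdint.h>

/* Model: pc is always 64-bit on sh64, pointer width follows the ABI. */
struct model_pt_regs { uint64_t pc; };

static unsigned long model_instruction_pointer(const struct model_pt_regs *regs)
{
    return (unsigned long)regs->pc;   /* narrows to the ABI word size */
}

static unsigned long demo_pc(void)
{
    struct model_pt_regs regs = { .pc = 0x8c001000ULL };
    return model_instruction_pointer(&regs);
}
```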
    • sh: Kill off now bogus fixmap/page wiring documentation. · cb6d0446
      Authored by Paul Mundt
      The plans for _PAGE_WIRED were detailed in a comment with the fixmap
      code, but as it's now all taken care of, we no longer have any reason for
      keeping it around, particularly since it's no longer accurate. Kill it
      off.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      cb6d0446
    • sh: Split out MMUCR.URB based entry wiring in to shared helper. · bb29c677
      Authored by Paul Mundt
      Presently this is duplicated between tlb-sh4 and tlb-pteaex. Split the
      helpers out in to a generic tlb-urb that can be used by any parts
      equipped with MMUCR.URB.
      
      At the same time, move the SH-5 code out-of-line, as we require single
      global state for DTLB entry wiring.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      bb29c677
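A toy model of the shared helper's job: MMUCR.URB-style wiring marks a number of TLB slots as excluded from random replacement, so a common wire/unwire pair just moves that boundary. Field layout and names here are illustrative, not the SH-4 register format.

```c
#include <assert.h>

#define NR_TLB_SLOTS 64

/* Number of wired (replacement-protected) entries in the model. */
static unsigned int urb;

/* Wire one more entry; returns the slot index now protected, or -1
 * when every slot is already wired. */
static int tlb_wire_entry(void)
{
    if (urb >= NR_TLB_SLOTS)
        return -1;
    return (int)urb++;
}

/* Release the most recently wired entry. */
static void tlb_unwire_entry(void)
{
    if (urb)
        urb--;
}
```

Putting this in one place is what lets tlb-sh4 and tlb-pteaex share the logic instead of duplicating it.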
    • sh: Kill off duplicate address alignment in ioremap_fixed(). · acf2c968
      Authored by Paul Mundt
      This is already taken care of in the top-level ioremap, and now that
      no one should be calling ioremap_fixed() directly we can simply throw the
      mapping displacement in as an additional argument.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      acf2c968
    • sh: Prevent 64-bit pgprot clobbering across ioremap implementations. · d57d6408
      Authored by Paul Mundt
      Presently a 32-bit 'flags' value gets passed around between the various
      ioremap helpers and implementations. In the X2TLB case we use 64-bit
      pgprots, so the upper 32 bits (which handily include our read/write/exec
      permissions) get chopped off.
      
      As such, we convert everything internally to using pgprot_t directly and
      simply convert over with pgprot_val() where needed. With this in place,
      transparent fixmap utilization for early ioremap works as expected.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      d57d6408
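The clobber is easy to demonstrate: route a 64-bit protection value through a 32-bit parameter and the upper half vanishes; route it through a pgprot-style wrapper and it survives. This model places the permission bits in the upper word purely for illustration; the real X2TLB bit layout differs.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative 64-bit pgprot wrapper, mirroring the pgprot_t idiom. */
typedef struct { uint64_t val; } model_pgprot_t;

#define MODEL_PROT_RWX (0x7ULL << 32)   /* perms in the upper word (model only) */

static uint64_t remap_via_u32_flags(uint32_t flags)
{
    return flags;                        /* upper 32 bits already gone */
}

static uint64_t remap_via_pgprot(model_pgprot_t prot)
{
    return prot.val;                     /* full 64-bit value survives */
}
```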
  8. 18 Jan 2010, 6 commits
  9. 17 Jan 2010, 1 commit
  10. 16 Jan 2010, 5 commits
  11. 15 Jan 2010, 1 commit
  12. 13 Jan 2010, 2 commits
    • sh: default to extended TLB support. · 782bb5a5
      Authored by Paul Mundt
      All SH-X2 and SH-X3 parts support an extended TLB mode, which has been
      left as experimental since support was originally merged. Now that it's
      had some time to stabilize and get some exposure to various platforms,
      we can drop it as an option and default enable it across the board.
      
      This is also good future proofing for newer parts that will drop support
      for the legacy TLB mode completely.
      
      This will also force 3-level page tables for all newer parts, which is
      necessary both for the varying page sizes and larger memories.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      782bb5a5
    • sh: fixed PMB mode refactoring. · a0ab3668
      Authored by Paul Mundt
      This introduces some much overdue chainsawing of the fixed PMB support.
      Fixed PMB was initially introduced to work around the fact that dynamic
      PMB mode was relatively broken, though they were never intended to
      converge. The main areas where there are differences are whether the
      system is booted in 29-bit mode or 32-bit mode, and whether legacy
      mappings are to be preserved. Any system booting in true 32-bit mode will
      not care about legacy mappings, so these are roughly decoupled.
      
      Regardless of the entry point, PMB and 32BIT are directly related as far
      as the kernel is concerned, so we also switch back to having one select
      the other.
      
      With legacy mappings iterated through and applied in the initialization
      path it's now possible to finally merge the two implementations and
      permit dynamic remapping overtop of remaining entries regardless of
      whether boot mappings are crafted by hand or inherited from the boot
      loader.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      a0ab3668
  13. 12 Jan 2010, 2 commits
  14. 06 Jan 2010, 1 commit
  15. 04 Jan 2010, 2 commits
  16. 02 Jan 2010, 3 commits
    • sh: Move page table allocation out of line · 2a5eacca
      Authored by Matt Fleming
      We also switched away from quicklists and instead moved to slab
      caches. After benchmarking both implementations the difference is
      negligible. The slab caches suit us better though because the size of a
      pgd table is just 4 entries when we're using a 3-level page table layout
      and quicklists always deal with pages.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      2a5eacca
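The sizing argument for slab over quicklists is simple arithmetic: a 3-level PGD here is only 4 entries of 8 bytes, so carving it from a slab wastes far less than dedicating a whole page per PGD, which is what page-based quicklists do. The sizes below restate that arithmetic; they are illustrative constants, not kernel macros.

```c
#include <assert.h>

#define PGD_ENTRIES 4UL      /* PGD size with the 3-level layout */
#define ENTRY_BYTES 8UL      /* 64-bit entries */
#define PAGE_BYTES  4096UL   /* 4K pages */

static unsigned long pgd_bytes(void)
{
    return PGD_ENTRIES * ENTRY_BYTES;   /* 32 bytes of actual data */
}

static unsigned long page_waste(void)
{
    /* Bytes wasted if each PGD consumes an entire page. */
    return PAGE_BYTES - pgd_bytes();
}
```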
    • sh: Optimise flush_dcache_page() on SH4 · b4c89276
      Authored by Matt Fleming
      If the page is not mapped into any process's address space then aliases
      cannot exist in the cache. So reduce the amount of flushing we perform.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      b4c89276
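The optimisation reduces to one early-out: on an aliasing VIPT dcache, a page mapped into no process's address space can have no user-space aliases, so the flush can be skipped. In this sketch `page_mapped()` is modelled by a mapcount argument and the return value says whether a flush would occur; neither is the kernel API.

```c
#include <assert.h>
#include <stdbool.h>

static bool model_page_mapped(int mapcount)
{
    return mapcount > 0;
}

/* Returns true when a flush would actually be performed. */
static bool model_flush_dcache_page(int mapcount)
{
    if (!model_page_mapped(mapcount))
        return false;          /* unmapped: no aliases, skip the flush */
    return true;               /* mapped: do the writeback/invalidate */
}
```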
    • sh: Correct the PTRS_PER_PMD and PMD_SHIFT values · 3f5ab768
      Authored by Matt Fleming
      The previous expressions were wrong, which made free_pmd_range() explode
      when using anything other than 4KB pages (which is why 8KB and 64KB
      pages were disabled with the 3-level page table layout).
      
      The problem was that pmd_offset() was returning an index of non-zero
      when it should have been returning 0. This non-zero offset was used to
      calculate the address of the pmd table to free in free_pmd_range(),
      which ended up trying to free an object that was not aligned on a page
      boundary.
      
      Now 3-level page tables should work with 4KB, 8KB and 64KB pages.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      3f5ab768
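The consistent derivation for the 4K-page, 64-bit-PTE case looks like this: a 4K table holds 512 eight-byte entries, i.e. 9 index bits per level, so PMD_SHIFT is PAGE_SHIFT plus 9 and pmd_offset() of any address below 2MB must index slot 0. The `_M` suffix marks these as model definitions, not the kernel's headers.

```c
#include <assert.h>

#define PAGE_SHIFT_M   12                              /* 4K pages */
#define PTE_INDEX_BITS (PAGE_SHIFT_M - 3)              /* 9: 4096 / 8-byte PTEs */
#define PMD_SHIFT_M    (PAGE_SHIFT_M + PTE_INDEX_BITS) /* 21 */
#define PTRS_PER_PMD_M (1 << PTE_INDEX_BITS)           /* 512 */

/* Index into the PMD table for a virtual address. */
static unsigned long pmd_index_m(unsigned long addr)
{
    return (addr >> PMD_SHIFT_M) & (PTRS_PER_PMD_M - 1);
}
```

A bad PMD_SHIFT makes this index non-zero for small addresses, which is exactly the mis-aligned free the commit describes.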
  17. 24 Dec 2009, 1 commit
  18. 17 Dec 2009, 1 commit
    • sh: Definitions for 3-level page table layout · 5d9b4b19
      Authored by Matt Fleming
      If using 64-bit PTEs and 4K pages then each page table has 512 entries
      (as opposed to 1024 entries with 32-bit PTEs). Unlike MIPS, SH follows
      the convention that all structures in the page table (pgd_t, pmd_t,
      pgprot_t, etc) must be the same size. Therefore, 64-bit PTEs require
      64-bit PGD entries, etc. Using 2 levels of page tables and 64-bit PTEs
      it is only possible to map 1GB of virtual address space.
      
      In order to map all 4GB of virtual address space we need to adopt a
      3-level page table layout. This actually works out better for
      CONFIG_SUPERH32 because we only waste 2 PGD entries on the P1 and P2
      areas (which are untranslated) instead of 256.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      5d9b4b19
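The coverage arithmetic in the message checks out as follows: with 4K pages and 64-bit PTEs, each 4K table holds 512 entries, so two levels span 512 * 512 * 4K = 1GB, and a small PGD of 4 entries on top reaches the full 4GB of 32-bit virtual space. Constants are restated from the commit, not taken from kernel headers.

```c
#include <assert.h>

#define PAGE_SZ   4096UL
#define PER_TABLE 512UL   /* 4096 bytes / 8-byte PTE */

/* Address space covered by two levels of 512-entry tables. */
static unsigned long long two_level_span(void)
{
    return (unsigned long long)PER_TABLE * PER_TABLE * PAGE_SZ;
}

/* Address space covered with a PGD of `pgd_entries` slots on top. */
static unsigned long long three_level_span(unsigned long pgd_entries)
{
    return pgd_entries * two_level_span();
}
```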
  19. 14 Dec 2009, 1 commit