1. 14 Dec 2009, 1 commit
  2. 30 Oct 2009, 1 commit
  3. 03 Oct 2009, 1 commit
  4. 02 Dec 2008, 1 commit
  5. 25 Oct 2008, 1 commit
  6. 01 Oct 2008, 6 commits
  7. 07 Aug 2008, 2 commits
  8. 29 Jul 2008, 1 commit
    • [ARM] pxa: add support for L2 outer cache on XScale3 (attempt 2) · 905a09d5
      Committed by Eric Miao
      (20072fd0 lost most of its changes somehow; it came from an mbox
      archive applied with git-am.  No idea what happened.  This puts
      back the missing bits.  --rmk)
      
      The initial patch came from Lothar, Lennert reworked it into a cleaner
      one, and Eric Miao modified and tested it on PXA320.
      
      This patch moves the L2 cache operations out of proc-xsc3.S into
      dedicated outer cache support code.
      
      CACHE_XSC3L2 can be deselected so that no L2-cache-specific code is
      linked in and the L2 enable bit is not set.  This applies to the
      following cases:
      
          a. _only_ PXA300/PXA310 support is included and no L2 cache is wanted
          b. PXA320 support is included, but the L2 cache should be disabled
      
      So the enabling of L2 depends on two things:
      
          - CACHE_XSC3L2 is selected
          - and L2 cache is present
      
      The latter is only a safeguard (previous testing shows it works OK
      even when this bit is turned on).
      
      The IXP series of XScale3-based processors cannot disable the L2 cache
      for the moment, since they depend on it for coherent memory, so IXP
      may always select CACHE_XSC3L2.
      
      Other L2-relevant bits are always turned on (i.e. those in the code
      originally enclosed by #if L2_CACHE_ENABLED .. #endif), as they showed
      no side effects.  Specifically, these bits are:
      
         - OC bits in TTBASE register (table walk outer cache attributes)
         - LLR Outer Cache Attributes (OC) in Auxiliary Control Register
      Signed-off-by: Lothar Waßmann <LW@KARO-electronics.de>
      Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
      Signed-off-by: Eric Miao <eric.miao@marvell.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      905a09d5
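
      A minimal C sketch of what the "dedicated outer cache support code"
      looks like when it hooks into ARM's outer_cache interface.  The struct
      members mirror the kernel's outer_cache_fns, but the stubbed bodies and
      the initcall wiring are illustrative, not a copy of
      arch/arm/mm/cache-xsc3l2.c:

          #include <linux/init.h>
          #include <asm/cacheflush.h>   /* struct outer_cache_fns, outer_cache */

          /* Stubs: the real routines walk the range and issue CP15 c7
           * maintenance operations per L2 cache line. */
          static void xsc3_l2_inv_range(unsigned long start, unsigned long end)   { }
          static void xsc3_l2_clean_range(unsigned long start, unsigned long end) { }
          static void xsc3_l2_flush_range(unsigned long start, unsigned long end) { }

          static int __init xsc3_l2_init(void)
          {
                  /* the real code first checks that an L2 is actually present */

                  /* register the L2 maintenance ops with the outer-cache layer */
                  outer_cache.inv_range   = xsc3_l2_inv_range;
                  outer_cache.clean_range = xsc3_l2_clean_range;
                  outer_cache.flush_range = xsc3_l2_flush_range;
                  return 0;
          }
          core_initcall(xsc3_l2_init);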
  9. 24 Apr 2008, 1 commit
  10. 08 Feb 2007, 1 commit
    • [ARM] 4123/1: xsc3: general cleanup · 850b4293
      Committed by Lennert Buytenhek
      This patch cleans up proc-xsc3:
      - Correct a number of typos.
      - Fix up indentation in a number of places.
      - Change references to the various caches to be more clear about
        whether we're talking about the L1 D, the L1 I or the unified L2
        cache.
      - Rename "drain write buffer" to "data write barrier", the official
        name used in the Manzano manual.
      - Change the xsc3 cpu name from "XScale-Core3" to "XScale-V3 based
        processor".
      
      Also, since a previously merged patch implements proper support for
      using a MAC or iWMMXt coprocessor on xsc3 platforms, we no longer
      need to enable access to CP0 on boot.
      Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      850b4293
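
      For reference, the "data write barrier" mentioned above is a CP15 c7
      operation issued from proc-xsc3.S; a hedged C/inline-asm rendering of
      that single instruction (the helper name is made up for illustration):

          static inline void data_write_barrier(void)
          {
                  unsigned long zero = 0;
                  /* CP15 c7, c10, 4: drain write buffer / data write barrier */
                  asm volatile("mcr p15, 0, %0, c7, c10, 4" : : "r" (zero) : "memory");
          }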
  11. 20 Dec 2006, 1 commit
  12. 13 Dec 2006, 1 commit
    • [ARM] Unuse another Linux PTE bit · ad1ae2fe
      Committed by Russell King
      L_PTE_ASID is not really required to be stored in every PTE, since we
      can identify it via the address passed to set_pte_at().  So, create
      set_pte_ext() which takes the address of the PTE to set, the Linux
      PTE value, and the additional CPU PTE bits which aren't encoded in
      the Linux PTE value.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ad1ae2fe
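
      A hedged sketch of the relationship described above: set_pte_at() can
      derive the extra CPU-only bits from the address it is given, so they
      no longer need to live in the Linux PTE.  PTE_EXT_NG and TASK_SIZE
      follow the kernel's naming, but the macro body below is a
      simplification, not a quote of the original patch:

          /* User mappings (below TASK_SIZE) get the CPU's "not global"
           * extension bit; kernel mappings stay global. */
          #define set_pte_at(mm, addr, ptep, pteval)                      \
                  set_pte_ext(ptep, pteval,                               \
                              (addr) >= TASK_SIZE ? 0 : PTE_EXT_NG)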
  13. 02 Dec 2006, 2 commits
  14. 30 Nov 2006, 1 commit
  15. 30 Jun 2006, 1 commit
    • [ARM] Set bit 4 on section mappings correctly depending on CPU · 8799ee9f
      Committed by Russell King
      On some CPUs, bit 4 of section mappings means "update the
      cache when written to".  On others, this bit is required to
      be one, and on still others it is required to be zero.  Finally,
      on ARMv6 and above, setting it turns on "no execute" and prevents
      speculative prefetches.
      
      With all these combinations, no one value fits all CPUs, so we
      have to pick a value depending on the CPU type, and the area
      we're mapping.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      8799ee9f
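
      An illustrative C sketch of "pick a value depending on the CPU type".
      PMD_BIT4, cpu_architecture() and CPU_ARCH_ARMv6 are real kernel names,
      but the helper below is hypothetical and simplified, not the actual
      mm code:

          #include <asm/system.h>           /* cpu_architecture(), CPU_ARCH_ARMv6 */
          #include <asm/pgtable-hwdef.h>    /* PMD_BIT4 */

          /* Hypothetical helper: pick the bit-4 value to OR into section
           * descriptors for a normal RAM mapping on this CPU. */
          static unsigned int section_bit4(int bit4_must_be_one)
          {
                  if (cpu_architecture() >= CPU_ARCH_ARMv6)
                          return 0;         /* v6+: bit 4 is XN, keep it clear for RAM */
                  return bit4_must_be_one ? PMD_BIT4 : 0;
          }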
  16. 29 Jun 2006, 1 commit
    • [ARM] nommu: provide a way for correct control register value selection · 22b19086
      Committed by Russell King
      Most MMU-based CPUs have a restriction on the setting of the data cache
      enable and mmu enable bits in the control register, whereby if the data
      cache is enabled, the MMU must also be enabled.  Enabling the data
      cache without the MMU is an invalid combination.
      
      However, there are CPUs where the data cache can be enabled without the
      MMU.
      
      In order to allow these CPUs to take advantage of that, provide a
      method whereby each proc-*.S file defines the control register value
      for use with nommu (with the MMU disabled.)  Later on, when we add
      support for enabling the MMU on these devices, we can adjust the
      "crval" macro to also enable the data cache for nommu.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      22b19086
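
      A hedged C illustration of the restriction described above.  CR_M and
      CR_C are the kernel's control-register bit names from <asm/system.h>;
      the helper itself is made up for illustration:

          #include <asm/system.h>           /* CR_M (MMU enable), CR_C (D-cache enable) */

          /* On most MMU-based CPUs, enabling the data cache without also
           * enabling the MMU is an invalid combination. */
          static int cr_value_valid(unsigned long cr)
          {
                  if ((cr & CR_C) && !(cr & CR_M))
                          return 0;
                  return 1;
          }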
  17. 31 May 2006, 1 commit
  18. 02 Apr 2006, 1 commit
    • [ARM] 3439/2: xsc3: add I/O coherency support · 23759dc6
      Committed by Lennert Buytenhek
      Patch from Lennert Buytenhek
      
      This patch adds support for the I/O coherent cache available on the
      xsc3.  The approach is to provide a simple API to determine whether the
      chipset supports coherency by calling arch_is_coherent() and then
      setting the appropriate system memory PTE and PMD bits.  In addition,
      we call this API on dma_alloc_coherent() and dma_map_single() calls.
      A generic version exists that will compile out all the coherency-related
      code that is not needed on the majority of ARM systems.
      
      Note that we do not check for coherency in the dma_alloc_writecombine()
      function as that still requires a special PTE setting.  We also don't
      touch dma_mmap_coherent() as that is a special ARM-only API that is by
      definition only used on non-coherent systems.
      Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
      Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      23759dc6
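
      A minimal sketch of the arch_is_coherent() idea described above: skip
      cache maintenance on the map path when the platform is I/O coherent.
      The shape follows the ARM dma-mapping code of that era, but the
      consistent_sync() and virt_to_dma() usage here is a simplification,
      not a quote of the patch:

          #include <linux/dma-mapping.h>

          static inline dma_addr_t
          dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                         enum dma_data_direction dir)
          {
                  /* coherent chipsets need no explicit cache maintenance */
                  if (!arch_is_coherent())
                          consistent_sync(cpu_addr, size, dir);

                  return virt_to_dma(dev, cpu_addr);
          }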
  19. 30 Mar 2006, 1 commit
  20. 29 Mar 2006, 1 commit
    • [ARM] 3377/2: add support for intel xsc3 core · 23bdf86a
      Committed by Lennert Buytenhek
      Patch from Lennert Buytenhek
      
      This patch adds support for the new XScale v3 core.  This is an
      ARMv5 ISA core with the following additions:
      
      - L2 cache
      - I/O coherency support (on select chipsets)
      - Low-Locality Reference cache attributes (replaces mini-cache)
      - Supersections (v6 compatible)
      - 36-bit addressing (v6 compatible)
      - Single instruction cache line clean/invalidate
      - LRU cache replacement (vs round-robin)
      
      I attempted to merge the XSC3 support into proc-xscale.S, but XSC3
      cores have separate errata and have to handle things like L2, so it
      is simpler to keep it separate.
      
      L2 cache support is currently a build option because the L2 enable
      bit must be set before we enable the MMU and there is no easy way to
      capture command line parameters at this point.
      
      There are still optimizations that can be done such as using LLR for
      copypage (in theory using the existing mini-cache code) but those
      can be addressed down the road.
      Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
      Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      23bdf86a
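
      As a small illustration of the "single instruction cache line
      clean/invalidate" listed above, the CP15 c7 "clean and invalidate
      D-cache line by MVA" operation can be rendered in C with inline asm
      (the helper name is made up; on xsc3 the real users are the assembly
      routines in proc-xsc3.S):

          static inline void clean_inv_dcache_line(unsigned long mva)
          {
                  /* CP15 c7, c14, 1: clean and invalidate D-cache line by MVA */
                  asm volatile("mcr p15, 0, %0, c7, c14, 1" : : "r" (mva) : "memory");
          }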