1. 28 Oct, 2010 1 commit
    • ARM: 6466/1: implement flush_icache_all for the rest of the CPUs · c8c90860
      Mika Westerberg authored
      Commit 81d11955 ("ARM: 6405/1: Handle __flush_icache_all for
      CONFIG_SMP_ON_UP") added a new function to struct cpu_cache_fns:
      flush_icache_all(). It also implemented this for v6 and v7, but not
      for v5 and older. Without the function pointer in place, we
      will be calling the wrong cache functions.
      
      For example, with ep93xx we get the following:
      
          Unable to handle kernel paging request at virtual address ee070f38
          pgd = c0004000
          [ee070f38] *pgd=00000000
          Internal error: Oops: 80000005 [#1] PREEMPT
          last sysfs file:
          Modules linked in:
          CPU: 0    Not tainted  (2.6.36+ #1)
          PC is at 0xee070f38
          LR is at __dma_alloc+0x11c/0x2d0
          pc : [<ee070f38>]    lr : [<c0032c8c>]    psr: 60000013
          sp : c581bde0  ip : 00000000  fp : c0472000
          r10: c0472000  r9 : 000000d0  r8 : 00020000
          r7 : 0001ffff  r6 : 00000000  r5 : c0472400  r4 : c5980000
          r3 : c03ab7e0  r2 : 00000000  r1 : c59a0000  r0 : c5980000
          Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
          Control: c000717f  Table: c0004000  DAC: 00000017
          Process swapper (pid: 1, stack limit = 0xc581a270)
          [<c0032c8c>] (__dma_alloc+0x11c/0x2d0)
          [<c0032e5c>] (dma_alloc_writecombine+0x1c/0x24)
          [<c0204148>] (ep93xx_pcm_preallocate_dma_buffer+0x44/0x60)
          [<c02041c0>] (ep93xx_pcm_new+0x5c/0x88)
          [<c01ff188>] (snd_soc_instantiate_cards+0x8a8/0xbc0)
          [<c01ff59c>] (soc_probe+0xfc/0x134)
          [<c01adafc>] (platform_drv_probe+0x18/0x1c)
          [<c01acca4>] (driver_probe_device+0xb0/0x16c)
          [<c01ac284>] (bus_for_each_drv+0x48/0x84)
          [<c01ace90>] (device_attach+0x50/0x68)
          [<c01ac0f8>] (bus_probe_device+0x24/0x44)
          [<c01aad7c>] (device_add+0x2fc/0x44c)
          [<c01adfa8>] (platform_device_add+0x104/0x15c)
          [<c0015eb8>] (simone_init+0x60/0x94)
          [<c0021410>] (do_one_initcall+0xd0/0x1a4)
      
      __dma_alloc() calls the (inlined) __dma_alloc_buffer(), which ends up
      calling dmac_flush_range(). Now since the entries in
      arm920_cache_fns are shifted by one, we jump to address 0xee070f38,
      which is actually the next instruction after the arm920_cache_fns
      structure.
      
      So implement flush_icache_all() for the rest of the supported CPUs
      using a generic 'invalidate I cache' instruction.
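      
      A minimal C analogue of the mismatch described above (hypothetical
      names and layouts, not the kernel's actual definitions): a new
      function pointer is added ahead of the existing members of the ops
      structure, but a table that was never updated keeps the old layout,
      so every entry is read one slot off and the last member fetches
      whatever word follows the table.
      
          /* illustration only; builds with any C compiler */
          #include <stdio.h>
          
          /* Old layout: what the un-updated table was written against. */
          struct cache_fns_v1 {
              void (*flush_kern_all)(void);
              void (*dma_flush_range)(const void *start, const void *end);
          };
          
          /* New layout: flush_icache_all() added in front, as described
           * above for struct cpu_cache_fns. */
          struct cache_fns_v2 {
              void (*flush_icache_all)(void);
              void (*flush_kern_all)(void);
              void (*dma_flush_range)(const void *start, const void *end);
          };
          
          static void old_flush_kern_all(void) { puts("flush_kern_all"); }
          static void old_dma_flush_range(const void *s, const void *e)
          { (void)s; (void)e; puts("dma_flush_range"); }
          
          /* Table still built for the old layout (one entry short). */
          static const struct cache_fns_v1 arm920_like_fns = {
              .flush_kern_all  = old_flush_kern_all,
              .dma_flush_range = old_dma_flush_range,
          };
          
          int main(void)
          {
              /* Reading the table through the new layout shifts every
               * member by one slot; ->dma_flush_range now points one word
               * past the table, which is how the oops above ends up
               * branching to 0xee070f38. */
              const struct cache_fns_v2 *fns = (const void *)&arm920_like_fns;
              fns->flush_icache_all();   /* actually calls old_flush_kern_all() */
              /* fns->dma_flush_range(0, 0);  would jump out of bounds */
              return 0;
          }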
      Signed-off-by: Mika Westerberg <mika.westerberg@iki.fi>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      c8c90860
  2. 08 Oct, 2010 1 commit
  3. 27 Jul, 2010 1 commit
    • ARM: Factor out common code from cpu_proc_fin() · 9ca03a21
      Russell King authored
      All implementations of cpu_proc_fin() start by disabling interrupts
      and then flushing the caches.  Rather than have every processor's
      proc_fin() implementation do this, move it out into generic code - and
      move the cache flush past setup_mm_for_reboot() (so it can benefit
      from having the caches still enabled).
      
      This allows cpu_proc_fin() to become independent of the L1/L2 cache
      types, and eventually allows the L2 cache flushing to move into the
      L2 support code.
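      
      A rough sketch of the resulting shape (stub functions for
      illustration; the names follow the kernel routines mentioned here,
      but this is not the actual generic code): the interrupt disable and
      cache flush happen once in the common path, with the flush ordered
      after setup_mm_for_reboot() so the caches are still on.
      
          #include <stdio.h>
          
          /* Stand-ins for the kernel primitives involved. */
          static void local_irq_disable(void)     { puts("IRQs off"); }
          static void setup_mm_for_reboot(char m) { (void)m; puts("reboot mapping set up"); }
          static void flush_cache_all(void)       { puts("caches flushed"); }
          static void cpu_proc_fin(void)          { puts("CPU-specific shutdown only"); }
          
          static void generic_restart(char mode)
          {
              local_irq_disable();      /* formerly step 1 of every proc_fin() */
              setup_mm_for_reboot(mode);
              flush_cache_all();        /* formerly duplicated per CPU; now runs
                                         * after setup_mm_for_reboot(), caches on */
              cpu_proc_fin();           /* left with no L1/L2 cache-type knowledge */
              /* ...jump to the reset code... */
          }
          
          int main(void) { generic_restart('h'); return 0; }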
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      9ca03a21
  4. 15 Feb, 2010 2 commits
  5. 01 Jan, 2010 1 commit
  6. 14 Dec, 2009 1 commit
  7. 30 Oct, 2009 1 commit
  8. 03 Oct, 2009 1 commit
  9. 02 Dec, 2008 1 commit
  10. 25 Oct, 2008 1 commit
  11. 01 Oct, 2008 6 commits
  12. 07 Aug, 2008 2 commits
  13. 29 Jul, 2008 1 commit
    • [ARM] pxa: add support for L2 outer cache on XScale3 (attempt 2) · 905a09d5
      Eric Miao authored
      (20072fd0 lost most of its changes
      somehow; it came from an mbox archive applied with git-am.  No idea
      what happened.  This puts back the missing bits.  --rmk)
      
      The initial patch is from Lothar, Lennert made it into a cleaner
      one, and it was modified and tested on PXA320 by Eric Miao.
      
      This patch moves the L2 cache operations out of proc-xsc3.S into
      dedicated outer cache support code.
      
      CACHE_XSC3L2 can be deselected so that no L2-cache-specific code is
      linked in and the L2 enable bit is not set; this applies to
      the following cases:
      
          a. _only_ PXA300/PXA310 support is included and no L2 cache is wanted
          b. PXA320 support is included, but the L2 cache should be disabled
      
      So the enabling of L2 depends on two things:
      
          - CACHE_XSC3L2 is selected
          - and L2 cache is present
      
      The latter is only a safeguard (previous testing shows it works
      OK even when this bit is turned on).
      
      The IXP series of processors with XScale3 cannot disable the L2 cache
      for the moment, since they depend on the L2 cache for their coherent
      memory, so IXP may always select CACHE_XSC3L2.
      
      Other L2-relevant bits are always turned on (i.e. the code originally
      enclosed by #if L2_CACHE_ENABLED .. #endif), as they showed no side
      effects. Specifically, these bits are:
      
         - OC bits in TTBASE register (table walk outer cache attributes)
         - LLR Outer Cache Attributes (OC) in Auxiliary Control Register
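      
      A hedged sketch of what "dedicated outer cache support code" looks
      like (stubbed and simplified; the xsc3_l2_* names and the ops table
      here are illustrative, not the actual cache-xsc3l2.c): the L2
      maintenance routines live in their own file and are hooked into a
      table of outer cache operations at init time, and only when the L2
      is actually present.
      
          #include <stdio.h>
          #include <stdbool.h>
          
          /* Table of outer (L2) cache operations; left empty, no L2 code runs. */
          struct outer_cache_ops {
              void (*inv_range)(unsigned long start, unsigned long end);
              void (*clean_range)(unsigned long start, unsigned long end);
              void (*flush_range)(unsigned long start, unsigned long end);
          };
          
          static struct outer_cache_ops outer_cache;
          
          /* Hypothetical XSC3 L2 routines, linked in only when CACHE_XSC3L2
           * is selected. */
          static void xsc3_l2_inv_range(unsigned long s, unsigned long e)
          { printf("L2 inv   %#lx-%#lx\n", s, e); }
          static void xsc3_l2_clean_range(unsigned long s, unsigned long e)
          { printf("L2 clean %#lx-%#lx\n", s, e); }
          static void xsc3_l2_flush_range(unsigned long s, unsigned long e)
          { printf("L2 flush %#lx-%#lx\n", s, e); }
          
          /* Stand-in for probing the cache type register. */
          static bool xsc3_l2_present(void) { return true; }
          
          static void xsc3_l2_init(void)
          {
              /* Both conditions from the commit message must hold:
               * CACHE_XSC3L2 selected (this code built) AND L2 present. */
              if (!xsc3_l2_present())
                  return;
              outer_cache.inv_range   = xsc3_l2_inv_range;
              outer_cache.clean_range = xsc3_l2_clean_range;
              outer_cache.flush_range = xsc3_l2_flush_range;
          }
          
          int main(void)
          {
              xsc3_l2_init();
              if (outer_cache.flush_range)
                  outer_cache.flush_range(0xc0000000UL, 0xc0001000UL);
              return 0;
          }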
      Signed-off-by: Lothar Waßmann <LW@KARO-electronics.de>
      Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
      Signed-off-by: Eric Miao <eric.miao@marvell.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      905a09d5
  14. 24 Apr, 2008 1 commit
  15. 08 Feb, 2007 1 commit
    • [ARM] 4123/1: xsc3: general cleanup · 850b4293
      Lennert Buytenhek authored
      This patch cleans up proc-xsc3:
      - Correct a number of typos.
      - Fix up indentation in a number of places.
      - Change references to the various caches to be clearer about
        whether we're talking about the L1 D, the L1 I or the unified L2
        cache.
      - Rename "drain write buffer" to "data write barrier", the official
        name used in the Manzano manual.
      - Change the xsc3 cpu name from "XScale-Core3" to "XScale-V3 based
        processor".
      
      Also, since a previously merged patch implements proper support for
      using a MAC or iWMMXt coprocessor on xsc3 platforms, we no longer
      need to enable access to CP0 on boot.
      Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      850b4293
  16. 20 Dec, 2006 1 commit
  17. 13 Dec, 2006 1 commit
    • [ARM] Unuse another Linux PTE bit · ad1ae2fe
      Russell King authored
      L_PTE_ASID is not really required to be stored in every PTE, since we
      can identify it via the address passed to set_pte_at().  So, create
      set_pte_ext() which takes the address of the PTE to set, the Linux
      PTE value, and the additional CPU PTE bits which aren't encoded in
      the Linux PTE value.
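      
      A small sketch of the interface described above (the signature is
      approximated from this commit message, not copied from the kernel
      headers, and the values are made up): the hardware-only bits travel
      as an explicit third argument instead of being stored in the Linux
      PTE.
      
          #include <stdio.h>
          
          typedef unsigned long pte_t;   /* simplified stand-in */
          
          /* In the spirit of cpu_*_set_pte_ext(): the address of the PTE to
           * set, the Linux PTE value, and the extra CPU-specific bits (e.g.
           * the ASID) that are no longer encoded in the Linux PTE itself. */
          static void set_pte_ext(pte_t *ptep, pte_t pte, unsigned int ext)
          {
              *ptep = pte;   /* a real implementation also builds the HW PTE */
              printf("linux pte=%#lx ext=%#x\n", (unsigned long)pte, ext);
          }
          
          int main(void)
          {
              pte_t slot = 0;
              /* The caller supplies the extra bits; in the kernel they can be
               * derived from the address passed to set_pte_at(). */
              set_pte_ext(&slot, 0x12345003UL, 0x01 /* hypothetical ext bits */);
              return 0;
          }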
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ad1ae2fe
  18. 02 Dec, 2006 2 commits
  19. 30 Nov, 2006 1 commit
  20. 30 Jun, 2006 1 commit
    • [ARM] Set bit 4 on section mappings correctly depending on CPU · 8799ee9f
      Russell King authored
      On some CPUs, bit 4 of section mappings means "update the
      cache when written to".  On others, this bit is required to
      be one, and on others it's required to be zero.  Finally, on
      ARMv6 and above, setting it turns on "no execute" and prevents
      speculative prefetches.
      
      With all these combinations, no one value fits all CPUs, so we
      have to pick a value depending on the CPU type, and the area
      we're mapping.
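      
      A toy illustration of that per-CPU selection (the constants and the
      checks are invented for the example, not the kernel's mm code):
      
          #include <stdio.h>
          
          #define SECT_BIT4  (1u << 4)   /* bit 4 of a section descriptor */
          
          enum cpu_arch { ARCH_PRE_V6, ARCH_V6_OR_LATER };
          
          /* Some cores need bit 4 set, some need it clear, and on ARMv6+ it
           * becomes execute-never, so it is set only for non-executable areas. */
          static unsigned int section_flags(enum cpu_arch arch,
                                            int bit4_required, int executable)
          {
              unsigned int flags = 0x2;   /* "section" descriptor type */
          
              if (arch == ARCH_V6_OR_LATER) {
                  if (!executable)
                      flags |= SECT_BIT4;   /* XN: also stops speculative prefetch */
              } else if (bit4_required) {
                  flags |= SECT_BIT4;       /* CPUs requiring bit 4 to be one */
              }
              return flags;
          }
          
          int main(void)
          {
              printf("v6+, device mapping:   %#x\n",
                     section_flags(ARCH_V6_OR_LATER, 0, 0));
              printf("pre-v6, bit4 required: %#x\n",
                     section_flags(ARCH_PRE_V6, 1, 1));
              return 0;
          }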
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      8799ee9f
  21. 29 Jun, 2006 1 commit
    • [ARM] nommu: provide a way for correct control register value selection · 22b19086
      Russell King authored
      Most MMU-based CPUs have a restriction on the setting of the data cache
      enable and mmu enable bits in the control register, whereby if the data
      cache is enabled, the MMU must also be enabled.  Enabling the data
      cache without the MMU is an invalid combination.
      
      However, there are CPUs where the data cache can be enabled without the
      MMU.
      
      In order to allow these CPUs to take advantage of that, provide a
      method whereby each proc-*.S file defines the control register value
      for use with nommu (with the MMU disabled).  Later on, when we add
      support for enabling the MMU on these devices, we can adjust the
      "crval" macro to also enable the data cache for nommu.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      22b19086
  22. 31 May, 2006 1 commit
  23. 02 Apr, 2006 1 commit
    • [ARM] 3439/2: xsc3: add I/O coherency support · 23759dc6
      Lennert Buytenhek authored
      Patch from Lennert Buytenhek
      
      This patch adds support for the I/O coherent cache available on the
      xsc3.  The approach is to provide a simple API to determine whether the
      chipset supports coherency by calling arch_is_coherent() and then
      setting the appropriate system memory PTE and PMD bits.  In addition,
      we call this API in dma_alloc_coherent() and dma_map_single().
      A generic version exists that will compile out all the coherency-related
      code that is not needed on the majority of ARM systems.
      
      Note that we do not check for coherency in the dma_alloc_writecombine()
      function as that still requires a special PTE setting.  We also don't
      touch dma_mmap_coherent() as that is a special ARM-only API that is by
      definition only used on non-coherent systems.
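      
      A hedged sketch of how such a check can gate the DMA paths (stubbed
      and simplified; this is not the kernel's dma-mapping code, and the
      helper name is invented): a coherent chipset can hand back normally
      cached memory, everything else keeps the uncached remapping.
      
          #include <stdio.h>
          #include <stdlib.h>
          
          /* On a generic build this is effectively a constant 0, so the
           * compiler drops the coherency branches entirely. */
          static int arch_is_coherent(void) { return 0; }
          
          /* Simplified stand-in for dma_alloc_coherent(). */
          static void *dma_alloc_coherent_sketch(size_t size)
          {
              void *buf = malloc(size);
          
              if (buf && !arch_is_coherent())
                  puts("remapping with uncached/bufferable PTE bits");
              return buf;   /* coherent case: the cached mapping is DMA-safe */
          }
          
          int main(void)
          {
              void *p = dma_alloc_coherent_sketch(4096);
              free(p);
              return 0;
          }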
      Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
      Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      23759dc6
  24. 30 Mar, 2006 1 commit
  25. 29 Mar, 2006 1 commit
    • [ARM] 3377/2: add support for intel xsc3 core · 23bdf86a
      Lennert Buytenhek authored
      Patch from Lennert Buytenhek
      
      This patch adds support for the new XScale v3 core.  This is an
      ARMv5 ISA core with the following additions:
      
      - L2 cache
      - I/O coherency support (on select chipsets)
      - Low-Locality Reference cache attributes (replaces mini-cache)
      - Supersections (v6 compatible)
      - 36-bit addressing (v6 compatible)
      - Single instruction cache line clean/invalidate
      - LRU cache replacement (vs round-robin)
      
      I attempted to merge the XSC3 support into proc-xscale.S, but XSC3
      cores have separate errata and have to handle things like L2, so it
      is simpler to keep it separate.
      
      L2 cache support is currently a build option because the L2 enable
      bit must be set before we enable the MMU and there is no easy way to
      capture command line parameters at this point.
      
      There are still optimizations that can be done such as using LLR for
      copypage (in theory using the existing mini-cache code) but those
      can be addressed down the road.
      Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
      Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      23bdf86a