1. 14 January 2011 (2 commits)
  2. 13 January 2011 (1 commit)
  3. 07 January 2011 (2 commits)
  4. 03 January 2011 (1 commit)
  5. 24 December 2010 (1 commit)
  6. 22 December 2010 (6 commits)
  7. 20 December 2010 (3 commits)
    • ARM: fix cache-feroceon-l2 after stack based kmap_atomic() · 6d3e6d36
      Authored by Nicolas Pitre
      Since commit 3e4d3af5 "mm: stack based kmap_atomic()", it is actively
      wrong to rely on fixed kmap type indices (namely KM_L2_CACHE) as
      kmap_atomic() totally ignores them and a concurrent instance of it may
      happily reuse any slot for any purpose.  Because kmap_atomic() is now
      able to deal with reentrancy, we can get rid of the ad hoc mapping here.
      
      While the code is made much simpler, there is a needless cache flush
      introduced by the usage of __kunmap_atomic().  It is not clear whether
      removing it would be worth the cost in code maintenance (I don't think
      there are that many highmem users on that platform anyway) but that
      should be reconsidered when/if someone cares enough to do some
      measurements.
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    • ARM: fix cache-xsc3l2 after stack based kmap_atomic() · 25cbe454
      Authored by Nicolas Pitre
      Since commit 3e4d3af5 "mm: stack based kmap_atomic()", it is actively
      wrong to rely on fixed kmap type indices (namely KM_L2_CACHE) as
      kmap_atomic() totally ignores them and a concurrent instance of it may
      happily reuse any slot for any purpose.  Because kmap_atomic() is now
      able to deal with reentrancy, we can get rid of the ad hoc mapping here,
      and we no longer even need to disable IRQs (highmem case).
      
      While the code is made much simpler, there is a needless cache flush
      introduced by the usage of __kunmap_atomic().  It is not clear whether
      removing it would be worth the cost in code maintenance (I doubt there
      are many, if any, highmem users on that platform anyway).
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    • ARM: get rid of kmap_high_l1_vipt() · 39af22a7
      Authored by Nicolas Pitre
      Since commit 3e4d3af5 "mm: stack based kmap_atomic()", it is no longer
      necessary to carry an ad hoc version of kmap_atomic() added in commit
      7e5a69e8 "ARM: 6007/1: fix highmem with VIPT cache and DMA" to cope
      with reentrancy.
      
      In fact, it is now actively wrong to rely on fixed kmap type indices
      (namely KM_L1_CACHE) as kmap_atomic() totally ignores them and a
      concurrent instance of it may reuse any slot for any purpose.
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
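      The three commits above all follow the same pattern. Below is a minimal
      sketch of the API change they rely on (illustrative only, not the actual
      cache-feroceon-l2.c / cache-xsc3l2.c code; the helper name is made up):

          #include <linux/highmem.h>

          /*
           * Illustrative sketch, not actual kernel code.
           * Before "mm: stack based kmap_atomic()", callers passed a fixed
           * slot index such as KM_L2_CACHE and had to guarantee nothing else
           * touched that slot.  Now the slot is chosen per CPU in stack
           * fashion, so nesting and reentrancy are handled by the core code.
           */
          static void l2_maint_highmem_page(struct page *page)
          {
                  void *vaddr;

                  /* old style (now wrong): kmap_atomic(page, KM_L2_CACHE) */
                  vaddr = kmap_atomic(page);      /* slot picked automatically */

                  /* ... perform the L2 cache maintenance on vaddr ... */

                  kunmap_atomic(vaddr);           /* pops the per-CPU kmap stack */
          }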
  8. 18 December 2010 (2 commits)
  9. 15 December 2010 (1 commit)
  10. 13 December 2010 (2 commits)
  11. 30 November 2010 (1 commit)
    • ARM: 6501/1: Thumb-2: Correct data alignment for CONFIG_THUMB2_KERNEL in mm/proc-v7.S · 6323875d
      Authored by Dave Martin
      Directives such as .long and .word do not magically cause the
      assembler location counter to become aligned in gas.  As a result,
      using these directives in code sections can result in misaligned
      data words when building a Thumb-2 kernel (CONFIG_THUMB2_KERNEL).
      
      This is a Bad Thing, since the ABI permits the compiler to assume
      that fundamental types of word size or above are word-aligned when
      accessing them from C.  If the data is not really word-aligned,
      this can cause impaired performance and stray alignment faults in
      some circumstances.
      
      In general, the following rules should be applied when using data
      word declaration directives inside code sections:
      
          * .quad and .double:
               .align 3
      
          * .long, .word, .single, .float:
               .align (or .align 2)
      
          * .short:
              No explicit alignment required, since Thumb-2
              instructions are always 2 or 4 bytes in size, so the
              location counter is always 2-byte aligned immediately
              after an instruction.
      
      In this specific case, we can achieve the desired alignment by
      forcing a 32-bit branch instruction using the W() macro, since the
      assembler location counter is already 32-bit aligned in this case.
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
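      As a hedged illustration of the alignment rule described above (not the
      actual mm/proc-v7.S change, which forces a 32-bit branch with the W()
      macro rather than adding an explicit .align), a stand-alone snippet
      using GCC file-scope asm might look like this; the symbol names are
      made up:

          /* Illustrative only.  Build for Thumb-2, e.g.:
           *   arm-linux-gnueabi-gcc -mthumb -march=armv7-a -c align_demo.c
           */
          __asm__(
          "       .syntax unified\n"
          "       .thumb\n"
          "       .text\n"
          "demo_ret:\n"
          "       bx      lr\n"          /* single 16-bit instruction: counter is only 2-byte aligned here */
          "       .align  2\n"           /* .long does NOT realign by itself; pad to a 4-byte boundary */
          "demo_word:\n"
          "       .long   0x12345678\n"  /* now guaranteed word-aligned, as the ABI expects */
          );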
  12. 27 November 2010 (4 commits)
  13. 24 November 2010 (1 commit)
  14. 18 November 2010 (1 commit)
    • ARM: mach-shmobile: Initial AG5 and AG5EVM support · 6d9598e2
      Authored by Magnus Damm
      This patch adds initial support for Renesas SH-Mobile AG5.
      
      At this point the AG5 CPU support is limited to the ARM
      core, SCIF serial and a CMT timer together with L2 cache
      and the GIC. The AG5EVM board also supports Ethernet.
      
      Future patches will add support for GPIO, INTCS, CPGA
      and platform data / driver updates for devices such as
      IIC, LCDC, FSI, KEYSC, CEU and SDHI among others.
      
      The code in entry-macro.S will be cleaned up when the
      ARM IRQ demux code improvements have been merged.
      
      Depends on the AG5EVM mach-type recently registered but
      not yet present in arch/arm/tools/mach-types.
      
      As the AG5EVM board comes with 512 MiB of memory, it is
      recommended to enable HIGHMEM.
      
      Many thanks to Yoshii-san for initial bring up.
      Signed-off-by: Magnus Damm <damm@opensource.se>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  15. 15 November 2010 (1 commit)
  16. 08 November 2010 (1 commit)
  17. 04 November 2010 (2 commits)
    • ARM: 6396/1: Add SWP/SWPB emulation for ARMv7 processors · 64d2dc38
      Authored by Leif Lindholm
      The SWP instruction was deprecated in the ARMv6 architecture,
      superseded by the LDREX/STREX family of instructions for
      load-linked/store-conditional operations. The ARMv7 multiprocessing
      extensions mandate that SWP/SWPB instructions are treated as undefined
      from reset, with the ability to enable them through the System Control
      Register SW bit.
      
      This patch adds an alternative solution that emulates the SWP and SWPB
      instructions using LDREX/STREX sequences and logs statistics to
      /proc/cpu/swp_emulation. To correctly deal with copy-on-write, it also
      modifies cpu_v7_set_pte_ext to change the mappings to privileged read-only
      when they are user read-only.
      Signed-off-by: Leif Lindholm <leif.lindholm@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
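      For illustration, the LDREX/STREX sequence that replaces a SWP looks
      roughly like the sketch below (a hedged, simplified version; the kernel's
      real handler additionally traps the undefined instruction, decodes its
      register operands, checks the user address, and updates the
      /proc/cpu/swp_emulation statistics):

          /* Illustrative sketch only.
           * Emulate "SWP old, new, [ptr]": atomically store new, return the old value.
           * The byte variant (SWPB) would use ldrexb/strexb instead. */
          static inline unsigned int swp_emulate_word(unsigned int new, unsigned int *ptr)
          {
                  unsigned int old, failed;

                  do {
                          __asm__ __volatile__(
                          "       ldrex   %0, [%3]\n"     /* load-exclusive the current value */
                          "       strex   %1, %2, [%3]\n" /* try to store the new value */
                          : "=&r" (old), "=&r" (failed)
                          : "r" (new), "r" (ptr)
                          : "cc", "memory");
                  } while (failed);               /* STREX writes non-zero on failure; retry */

                  return old;
          }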
    • ARM: 6384/1: Remove the domain switching on ARMv6k/v7 CPUs · 247055aa
      Authored by Catalin Marinas
      This patch removes the domain switching functionality via the set_fs and
      __switch_to functions on cores that have a TLS register.
      
      Currently, the ioremap and vmalloc areas share the same level 1 page
      tables and therefore have the same domain (DOMAIN_KERNEL). When the
      kernel domain is modified from Client to Manager (via the __set_fs or in
      the __switch_to function), the XN (eXecute Never) bit is overridden and
      newer CPUs can speculatively prefetch the ioremap'ed memory.
      
      Linux performs the kernel domain switching to allow user-specific
      functions (copy_to/from_user, get/put_user etc.) to access kernel
      memory. In order for these functions to work with the kernel domain set
      to Client, the patch modifies the LDRT/STRT and related instructions to
      the LDR/STR ones.
      
      The access rights of user pages are also changed so that the kernel has
      read-only rather than read/write access, which keeps the copy-on-write
      mechanism working. CPU_USE_DOMAINS is disabled only if the hardware has a
      TLS register (CPU_32v6K is defined), since otherwise the TLS value would
      have to be written to the high vectors page, which isn't possible.
      
      The user addresses passed to the kernel are checked by the access_ok()
      function so that they do not point to the kernel space.
      Tested-by: Anton Vorontsov <cbouatmailru@gmail.com>
      Cc: Tony Lindgren <tony@atomide.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
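      A hedged sketch of the uaccess change being described (illustrative only;
      the macro and helper names below are made up and do not match the
      kernel's actual headers): with domain switching, user accesses from
      kernel code must use the unprivileged LDRT/STRT forms, while without it
      plain LDR/STR is used and protection comes from access_ok() plus the
      read-only kernel view of user pages:

          /* Illustrative sketch only; real kernel uaccess code also adds
           * exception table entries so that faulting accesses are fixed up. */
          #ifdef CONFIG_CPU_USE_DOMAINS
          #define USER_LDR        "ldrt"          /* unprivileged load while the kernel domain is Manager */
          #else
          #define USER_LDR        "ldr"           /* ordinary load; page permissions do the work instead */
          #endif

          /* get_user()-style helper: load one word from an already access_ok()-checked user pointer. */
          static inline unsigned int load_user_word(const unsigned int *uptr)
          {
                  unsigned int val;

                  __asm__ __volatile__(USER_LDR " %0, [%1]" : "=r" (val) : "r" (uptr));
                  return val;
          }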
  18. 28 October 2010 (8 commits)