1. 07 Jan 2011: 1 commit
  2. 03 Jan 2011: 1 commit
  3. 24 Dec 2010: 1 commit
  4. 04 Nov 2010: 2 commits
    • ARM: 6396/1: Add SWP/SWPB emulation for ARMv7 processors · 64d2dc38
      Committed by Leif Lindholm
      The SWP instruction was deprecated in the ARMv6 architecture,
      superseded by the LDREX/STREX family of instructions for
      load-linked/store-conditional operations. The ARMv7 multiprocessing
      extensions mandate that SWP/SWPB instructions are treated as undefined
      from reset, with the ability to enable them through the System Control
      Register SW bit.
      
      This patch adds the alternative solution: emulating the SWP and SWPB
      instructions with LDREX/STREX sequences, and logging statistics to
      /proc/cpu/swp_emulation. To deal correctly with copy-on-write, it also
      modifies cpu_v7_set_pte_ext to make mappings privileged read-only when
      they are user read-only.
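      
      As a minimal sketch of the emulation idea (a hypothetical helper, not
      the patch's actual handler), an atomic swap built from an LDREX/STREX
      retry loop looks like this:
      
      	static inline unsigned int emulated_swp(volatile unsigned int *addr,
      						unsigned int newval)
      	{
      		unsigned int oldval, tmp;
      
      		__asm__ __volatile__(
      		"1:	ldrex	%0, [%3]\n"	/* load old value, mark exclusive */
      		"	strex	%1, %2, [%3]\n"	/* try to store newval            */
      		"	teq	%1, #0\n"	/* %1 != 0 means the store failed */
      		"	bne	1b\n"		/* retry until exclusivity holds  */
      		: "=&r" (oldval), "=&r" (tmp)
      		: "r" (newval), "r" (addr)
      		: "cc", "memory");
      
      		return oldval;
      	}
      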
      Signed-off-by: Leif Lindholm <leif.lindholm@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 6384/1: Remove the domain switching on ARMv6k/v7 CPUs · 247055aa
      Committed by Catalin Marinas
      This patch removes the domain switching functionality via the set_fs and
      __switch_to functions on cores that have a TLS register.
      
      Currently, the ioremap and vmalloc areas share the same level 1 page
      tables and therefore have the same domain (DOMAIN_KERNEL). When the
      kernel domain is modified from Client to Manager (via __set_fs or in
      the __switch_to function), the XN (eXecute Never) bit is overridden and
      newer CPUs can speculatively prefetch the ioremap'ed memory.
      
      Linux performs the kernel domain switching to allow user-specific
      functions (copy_to/from_user, get/put_user etc.) to access kernel
      memory. In order for these functions to work with the kernel domain set
      to Client, the patch modifies the LDRT/STRT and related instructions to
      the LDR/STR ones.
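      
      The flavor of that change can be sketched as a hypothetical macro
      (USER_INSN is made up for illustration; CONFIG_CPU_USE_DOMAINS is the
      real config symbol):
      
      	/*
      	 * With domain switching, user accesses from the kernel must use
      	 * the unprivileged LDRT/STRT forms; without it, the page tables
      	 * themselves enforce protection and plain LDR/STR suffice.
      	 */
      	#ifdef CONFIG_CPU_USE_DOMAINS
      	#define USER_INSN(insn)	insn "t"	/* "ldr" -> "ldrt" */
      	#else
      	#define USER_INSN(insn)	insn		/* plain "ldr"     */
      	#endif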
      
      The access rights of user pages are also modified for kernel read-only
      access rather than read/write so that the copy-on-write mechanism still
      works. CPU_USE_DOMAINS is disabled only if the hardware has a TLS register
      (CPU_32v6K is defined), since without one the TLS value must be written
      to the high vectors page, which isn't possible with domain switching off.
      
      The user addresses passed to the kernel are checked by the access_ok()
      function so that they do not point to the kernel space.
      Tested-by: Anton Vorontsov <cbouatmailru@gmail.com>
      Cc: Tony Lindgren <tony@atomide.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  5. 28 Oct 2010: 13 commits
  6. 27 Oct 2010: 2 commits
    • mm: remove pte_*map_nested() · ece0e2b6
      Committed by Peter Zijlstra
      Since we no longer need to provide KM_type, the whole pte_*map_nested()
      API is now redundant; remove it.
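      
      Roughly, callers change like this (pmd and addr stand for the usual
      page-table walk arguments):
      
      	/* before: the nested variant existed only to pick a second kmap slot */
      	pte = pte_offset_map_nested(pmd, addr);
      	/* ... access the page table entry ... */
      	pte_unmap_nested(pte);
      
      	/* after: stack-based kmap_atomic() lets one variant cover both cases */
      	pte = pte_offset_map(pmd, addr);
      	/* ... access the page table entry ... */
      	pte_unmap(pte);
      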
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: stack based kmap_atomic() · 3e4d3af5
      Committed by Peter Zijlstra
      Keep the current interface but ignore the KM_type argument and use a
      stack-based approach.
      
      The advantage is that we get rid of crappy code like:
      
      	#define __KM_PTE			\
      		(in_nmi() ? KM_NMI_PTE : 	\
      		 in_irq() ? KM_IRQ_PTE :	\
      		 KM_PTE0)
      
      and in general can stop worrying about what context we're in and what kmap
      slots might be appropriate for that.
      
      The downside is that FRV kmap_atomic() gets more expensive.
      
      For now we use a CPP trick suggested by Andrew:
      
        #define kmap_atomic(page, args...) __kmap_atomic(page)
      
      to avoid having to touch all kmap_atomic() users in a single patch.
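      
      The bookkeeping behind the stack-based scheme is a small per-CPU
      counter of occupied fixmap slots, roughly like this (a sketch; the
      helper and variable names here are illustrative):
      
      	static DEFINE_PER_CPU(int, kmap_idx);	/* depth of this CPU's kmap stack */
      
      	static inline int kmap_atomic_idx_push(void)
      	{
      		/* claim the next free slot; preemption is already disabled */
      		return __this_cpu_inc_return(kmap_idx) - 1;
      	}
      
      	static inline int kmap_atomic_idx_pop(void)
      	{
      		/* release the most recently claimed slot (LIFO order) */
      		return __this_cpu_dec_return(kmap_idx);
      	}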
      
      [ not compiled on:
        - mn10300: the arch doesn't actually build with highmem to begin with ]
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix up drivers/gpu/drm/i915/intel_overlay.c]
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Dave Airlie <airlied@linux.ie>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 26 Oct 2010: 4 commits
  8. 13 Oct 2010: 2 commits
  9. 08 Oct 2010: 3 commits
  10. 05 Oct 2010: 5 commits
  11. 02 Oct 2010: 1 commit
  12. 25 Sep 2010: 1 commit
  13. 23 Sep 2010: 1 commit
    • ARM: 6401/1: plug a race in the alignment trap handler · 2f27bf83
      Committed by Nicolas Pitre
      When the policy is to ignore misaligned accesses from user space, the
      processor performs a documented rotation on the accessed data.  This is
      the result of the access being trapped, and of the kernel disabling the
      alignment trap before returning to user space again.
      
      In kernel space we always want misaligned accesses to be fixed up.  This
      is enforced by always re-enabling the alignment trap on every entry into
      kernel space from user space.  No such re-enabling is performed when an
      exception occurs while already in kernel space as the alignment trap is
      always supposed to be enabled in that case.
      
      There is however a small race window when a misaligned access in user
      space is trapped and the alignment trap disabled, but the CPU didn't
      return to user space just yet.  Any exception would be entered from kernel
      space at that point and the kernel would then execute with the alignment
      trap disabled.
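      
      On ARM the trap is controlled by the A bit of the System Control
      Register (CP15 c1), so re-enabling it boils down to a read-modify-write
      of that register.  A hedged C sketch of the access involved (the kernel
      does this in assembly on the exception entry path; the function name is
      made up):
      
      	static inline void alignment_trap_enable(void)
      	{
      		unsigned int sctlr;
      
      		/* read the System Control Register (CP15 c1, c0, 0) */
      		asm volatile("mrc p15, 0, %0, c1, c0, 0" : "=r" (sctlr));
      		sctlr |= 1 << 1;	/* A bit: alignment fault checking */
      		asm volatile("mcr p15, 0, %0, c1, c0, 0" : : "r" (sctlr));
      	}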
      
      Thanks to Maxime Bizon <mbizon@freebox.fr> for providing a test module
      that made this issue reproducible.
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  14. 19 Sep 2010: 3 commits