1. 27 Dec 2016, 1 commit
  2. 25 Dec 2016, 1 commit
  3. 22 Nov 2016, 1 commit
  4. 16 Sep 2016, 1 commit
  5. 12 Sep 2016, 1 commit
    • arm64: use alternative auto-nop · 6ba3b554
      Mark Rutland authored
      Make use of the new alternative_if and alternative_else_nop_endif and
      get rid of our homebrew NOP sleds, making the code simpler to read.
      
      Note that for cpu_do_switch_mm the ret has been moved out of the
      alternative sequence, and in the default case there will be three
      additional NOPs executed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
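      To illustrate the auto-nop idea at the C inline-asm level (a minimal
      sketch under the assumption of a PAN-style patched instruction; the
      commit itself converts assembly files to alternative_if /
      alternative_else_nop_endif, and the function name is made up): the
      alternatives framework pads the shorter side with NOPs by itself, so
      callers no longer hand-write a NOP sled.

      #include <asm/alternative.h>
      #include <asm/cpufeature.h>
      #include <asm/sysreg.h>

      static inline void sketch_clear_pan(void)
      {
      	/*
      	 * Patched to the PAN-clearing instruction only on CPUs with
      	 * ARM64_HAS_PAN; elsewhere the framework emits NOP padding of
      	 * the right size automatically.
      	 */
      	asm volatile(ALTERNATIVE("nop", SET_PSTATE_PAN(0),
      				 ARM64_HAS_PAN, CONFIG_ARM64_PAN));
      }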
  6. 27 Jul 2016, 1 commit
  7. 21 Jun 2016, 1 commit
  8. 05 Mar 2016, 1 commit
  9. 27 Feb 2016, 1 commit
    • arm64: lse: deal with clobbered IP registers after branch via PLT · 5be8b70a
      Ard Biesheuvel authored
      The LSE atomics implementation uses runtime patching to patch in calls
      to out-of-line non-LSE atomics implementations on cores that lack
      hardware support for LSE. To avoid paying the overhead of a function
      call even if no call ends up being made, the bl instruction is kept
      invisible to the compiler, and the out-of-line implementations preserve
      all registers, not just the ones they are required to preserve by the
      AAPCS64.
      
      However, commit fd045f6c ("arm64: add support for module PLTs") added
      support for routing branch instructions via veneers if the branch target
      offset exceeds the range of the ordinary relative branch instructions.
      Since this deals with jump and call instructions that are exposed to ELF
      relocations, the PLT code uses x16 to hold the address of the branch target
      when it performs an indirect branch-to-register, something which is
      explicitly allowed by the AAPCS64 (and ordinary compiler-generated code
      does not expect registers x16 or x17 to retain their values across a bl
      instruction).
      
      Since the LSE runtime-patched bl instructions don't adhere to the
      AAPCS64, they don't deal with this clobbering of registers x16 and x17.
      So add them to the clobber list of the asm() statements that perform the
      call instructions, and drop x16 and x17 from the list of registers that
      are callee-saved in the out-of-line non-LSE implementations.
      
      In addition, since we have given these functions two scratch registers,
      they no longer need to stack/unstack temp registers.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: factored clobber list into #define, updated Makefile comment]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
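      As a hedged illustration of the fix described above (a simplified
      sketch, not the kernel's exact ATOMIC_OP macros; the function name is
      made up): the inline asm that may be runtime-patched into a bl to the
      out-of-line LL/SC fallback now names x16, x17 and x30 as clobbers, so a
      detour through a module PLT veneer cannot corrupt live values.

      #include <linux/types.h>

      static inline void sketch_atomic_add(int i, atomic_t *v)
      {
      	register int w0 asm ("w0") = i;
      	register atomic_t *x1 asm ("x1") = v;

      	asm volatile(
      	/* Either kept as STADD (LSE present) or patched at runtime into
      	 * a bl to the out-of-line LL/SC implementation. */
      	"	stadd	%w[i], %[v]\n"
      	: [i] "+r" (w0), [v] "+Q" (v->counter)
      	: "r" (x1)
      	: "x16", "x17", "x30");	/* the bl may go via a PLT veneer */
      }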
  10. 19 Feb 2016, 2 commits
    • arm64: kernel: Don't toggle PAN on systems with UAO · 70544196
      James Morse authored
      If a CPU supports both Privileged Access Never (PAN) and User Access
      Override (UAO), we don't need to disable/re-enable PAN around all
      copy_to_user()-like calls.
      
      UAO alternatives cause these calls to use the 'unprivileged' load/store
      instructions, which are overridden to be the privileged kind when
      fs==KERNEL_DS.
      
      This patch changes the copy_to_user() calls to have their PAN toggling
      depend on a new composite 'feature' ARM64_ALT_PAN_NOT_UAO.
      
      If both features are detected, PAN will be enabled, but the copy_to_user()
      alternatives will not be applied. This means PAN will be enabled all the
      time for these functions. If only PAN is detected, the toggling will be
      enabled as normal.
      
      This will save the time taken to disable/re-enable PAN, and allow us to
      catch copy_to_user() accesses that occur with fs==KERNEL_DS.
      
      Futex and swp-emulation code continue to hang their PAN toggling code on
      ARM64_HAS_PAN.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
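      A minimal sketch of how the composite capability from the commit above
      can be evaluated (the helper name and exact hook signature are
      assumptions; only the boolean logic comes from the commit message): the
      cap is true only when PAN is implemented and UAO is not, and the
      copy_to_user() alternatives key off it instead of ARM64_HAS_PAN.

      #include <asm/cpufeature.h>

      /* Sketch only: "PAN present but UAO absent" composite check. */
      static bool sketch_pan_not_uao(void)
      {
      	return cpus_have_cap(ARM64_HAS_PAN) && !cpus_have_cap(ARM64_HAS_UAO);
      }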
    • arm64: kernel: Add support for User Access Override · 57f4959b
      James Morse authored
      'User Access Override' is a new ARMv8.2 feature which allows the
      unprivileged load and store instructions to be overridden to behave in
      the normal way.
      
      This patch converts {get,put}_user() and friends to use ldtr*/sttr*
      instructions - so that they can only access EL0 memory, then enables
      UAO when fs==KERNEL_DS so that these functions can access kernel memory.
      
      This allows user space's read/write permissions to be checked against the
      page tables, instead of testing addr<USER_DS, then using the kernel's
      read/write permissions.
      Signed-off-by: James Morse <james.morse@arm.com>
      [catalin.marinas@arm.com: move uao_thread_switch() above dsb()]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
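      A hedged sketch of the resulting access pattern (simplified; the real
      __get_user_asm macro also carries error handling and exception-table
      entries, and the function name here is illustrative): the normal load is
      patched to its unprivileged ldtr form on UAO-capable CPUs, and setting
      fs to KERNEL_DS then flips PSTATE.UAO so the very same instruction acts
      as a privileged load.

      #include <asm/alternative.h>
      #include <asm/cpufeature.h>

      /* Simplified: load one word from a user pointer, no fault handling. */
      static inline unsigned long sketch_user_load(const unsigned long *ptr)
      {
      	unsigned long val;

      	asm volatile(
      	ALTERNATIVE("ldr	%0, [%1]",	/* pre-UAO behaviour	*/
      		    "ldtr	%0, [%1]",	/* unprivileged load	*/
      		    ARM64_HAS_UAO)
      	: "=r" (val) : "r" (ptr));

      	return val;
      }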
  11. 16 Feb 2016, 3 commits
  12. 13 Oct 2015, 1 commit
    • arm64: add KASAN support · 39d114dd
      Andrey Ryabinin authored
      This patch adds the arch-specific code for the kernel address sanitizer
      (see Documentation/kasan.txt).
      
      1/8 of the kernel address space is reserved for shadow memory. There was
      no hole big enough for this, so the virtual addresses for the shadow
      were stolen from the vmalloc area.

      At the early boot stage, the whole shadow region is populated with just
      one physical page (kasan_zero_page). Later, this page is reused as a
      read-only zero shadow for memory that KASan does not currently track
      (vmalloc).
      After mapping the physical memory, pages for the shadow memory are
      allocated and mapped.
      
      Functions like memset/memmove/memcpy perform a lot of memory accesses.
      If a bad pointer is passed to one of these functions, it is important to
      catch that. The compiler's instrumentation cannot do this, since these
      functions are written in assembly.
      KASan replaces these memory functions with manually instrumented
      variants. The original functions are declared as weak symbols so that
      the strong definitions in mm/kasan/kasan.c can replace them. The
      original functions also have aliases with a '__' prefix in their names,
      so the non-instrumented variants can be called when needed.
      Some files are built without KASan instrumentation (e.g. mm/slub.c); in
      those files the original mem* functions are replaced (via #define) with
      the prefixed variants to disable the memory access checks.
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
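      The weak-symbol/alias arrangement described above can be sketched in
      plain C as follows (names are illustrative, not the kernel's; the real
      implementations live in assembly under arch/arm64/lib/):

      #include <stddef.h>

      /* Non-instrumented implementation, always reachable via the '__' name. */
      void *__sketch_memcpy(void *dst, const void *src, size_t n)
      {
      	char *d = dst;
      	const char *s = src;

      	while (n--)
      		*d++ = *s++;
      	return dst;
      }

      /*
       * Weak definition under the plain name: a strong, instrumented
       * definition elsewhere (mm/kasan/kasan.c in the kernel) overrides it,
       * while files built without instrumentation can #define the plain name
       * back to the '__' variant to skip the checks.
       */
      void *sketch_memcpy(void *dst, const void *src, size_t n)
      	__attribute__((weak, alias("__sketch_memcpy")));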
  13. 12 Oct 2015, 1 commit
  14. 07 Oct 2015, 2 commits
  15. 27 Jul 2015, 5 commits
  16. 13 Nov 2014, 1 commit
    • arm64: __clear_user: handle exceptions on strb · 97fc1543
      Kyle McMartin authored
      ARM64 currently doesn't fix up faults on the single-byte (strb) case of
      __clear_user, which means that an ordinary user can cause a nasty kernel
      panic with any read from /dev/zero whose size is one byte more than a
      multiple of PAGE_SIZE.
      e.g.: dd if=/dev/zero of=foo ibs=1 count=1 (or ibs=65537, etc.)
      
      This is a pretty obscure bug in the general case, since we'll only hit
      __do_kernel_fault (there being no extable entry for the pc) if the
      mmap_sem is contended. However, with CONFIG_DEBUG_VM enabled, we'll
      always fault:
      
      if (!down_read_trylock(&mm->mmap_sem)) {
      	if (!user_mode(regs) && !search_exception_tables(regs->pc))
      		goto no_context;
      retry:
      	down_read(&mm->mmap_sem);
      } else {
      	/*
      	 * The above down_read_trylock() might have succeeded in
      	 * which case, we'll have missed the might_sleep() from
      	 * down_read().
      	 */
      	might_sleep();
      	if (!user_mode(regs) && !search_exception_tables(regs->pc))
      		goto no_context;
      }
      
      Fix that by adding an extable entry for the strb instruction, since it
      touches user memory, similar to the other stores in __clear_user.
      Signed-off-by: Kyle McMartin <kyle@redhat.com>
      Reported-by: Miloš Prchlík <mprchlik@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
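      The fixup mechanism the patch relies on can be sketched in inline asm as
      below (illustrative only: __clear_user itself is an assembly file using
      the USER() macro, the helper name is made up, and the absolute .quad
      exception-table format shown is the one arm64 used at the time):

      #include <linux/errno.h>

      /* Clear a single byte of user memory, returning -EFAULT on a fault
       * instead of letting the kernel take an unhandled exception. */
      static inline int sketch_clear_user_byte(void *uaddr)
      {
      	int err = 0;

      	asm volatile(
      	"1:	strb	wzr, [%1]\n"		/* may fault on user memory */
      	"2:\n"
      	"	.section .fixup,\"ax\"\n"
      	"	.align	2\n"
      	"3:	mov	%w0, %2\n"		/* err = -EFAULT            */
      	"	b	2b\n"
      	"	.previous\n"
      	"	.section __ex_table,\"a\"\n"
      	"	.align	3\n"
      	"	.quad	1b, 3b\n"		/* fault at 1b -> fixup 3b  */
      	"	.previous"
      	: "+r" (err)
      	: "r" (uaddr), "i" (-EFAULT)
      	: "memory");

      	return err;
      }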
  17. 23 May 2014, 6 commits
  18. 08 Feb 2014, 1 commit
    • arm64: atomics: fix use of acquire + release for full barrier semantics · 8e86f0b4
      Will Deacon authored
      Linux requires a number of atomic operations to provide full barrier
      semantics, that is, no memory accesses after the operation can be
      observed before any accesses up to and including the operation in
      program order.
      
      On arm64, these operations have been incorrectly implemented as follows:
      
      	// A, B, C are independent memory locations
      
      	<Access [A]>
      
      	// atomic_op (B)
      1:	ldaxr	x0, [B]		// Exclusive load with acquire
      	<op(B)>
      	stlxr	w1, x0, [B]	// Exclusive store with release
      	cbnz	w1, 1b
      
      	<Access [C]>
      
      The assumption here being that two half barriers are equivalent to a
      full barrier, so the only permitted ordering would be A -> B -> C
      (where B is the atomic operation involving both a load and a store).
      
      Unfortunately, this is not the case by the letter of the architecture
      and, in fact, the accesses to A and C are permitted to pass their
      nearest half barrier resulting in orderings such as Bl -> A -> C -> Bs
      or Bl -> C -> A -> Bs (where Bl is the load-acquire on B and Bs is the
      store-release on B). This is a clear violation of the full barrier
      requirement.
      
      The simple way to fix this is to implement the same algorithm as ARMv7
      using explicit barriers:
      
      	<Access [A]>
      
      	// atomic_op (B)
      	dmb	ish		// Full barrier
      1:	ldxr	x0, [B]		// Exclusive load
      	<op(B)>
      	stxr	w1, x0, [B]	// Exclusive store
      	cbnz	w1, 1b
      	dmb	ish		// Full barrier
      
      	<Access [C]>
      
      but this has the undesirable effect of introducing *two* full barrier
      instructions. A better approach is actually the following, non-intuitive
      sequence:
      
      	<Access [A]>
      
      	// atomic_op (B)
      1:	ldxr	x0, [B]		// Exclusive load
      	<op(B)>
      	stlxr	w1, x0, [B]	// Exclusive store with release
      	cbnz	w1, 1b
      	dmb	ish		// Full barrier
      
      	<Access [C]>
      
      The simple observations here are:
      
        - The dmb ensures that no subsequent accesses (e.g. the access to C)
          can enter or pass the atomic sequence.
      
        - The dmb also ensures that no prior accesses (e.g. the access to A)
          can pass the atomic sequence.
      
        - Therefore, no prior access can pass a subsequent access, or
          vice-versa (i.e. A is strictly ordered before C).
      
        - The stlxr ensures that no prior access can pass the store component
          of the atomic operation.
      
      The only tricky part remaining is the ordering between the ldxr and the
      access to A, since the absence of the first dmb means that we're now
      permitting re-ordering between the ldxr and any prior accesses.
      
      From an (arbitrary) observer's point of view, there are two scenarios:
      
        1. We have observed the ldxr. This means that if we perform a store to
           [B], the ldxr will still return older data. If we can observe the
           ldxr, then we can potentially observe the permitted re-ordering
           with the access to A, which is clearly an issue when compared to
           the dmb variant of the code. Thankfully, the exclusive monitor will
           save us here since it will be cleared as a result of the store and
           the ldxr will retry. Notice that any use of a later memory
           observation to imply observation of the ldxr will also imply
           observation of the access to A, since the stlxr/dmb ensure strict
           ordering.
      
        2. We have not observed the ldxr. This means we can perform a store
           and influence the later ldxr. However, that doesn't actually tell
           us anything about the access to [A], so we've not lost anything
           here either when compared to the dmb variant.
      
      This patch implements this solution for our barriered atomic operations,
      ensuring that we satisfy the full barrier requirements where they are
      needed.
      
      Cc: <stable@vger.kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
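      A hedged sketch of what the resulting C helper looks like (simplified
      and renamed; it follows the final sequence in the message rather than
      the exact diff): the leading load loses its acquire, the store keeps
      release semantics, and a single dmb ish after the successful store
      provides the full barrier.

      /* Sketch of a value-returning atomic with full-barrier semantics. */
      static inline int sketch_atomic_add_return(int i, int *counter)
      {
      	unsigned long tmp;
      	int result;

      	asm volatile(
      	"1:	ldxr	%w0, %2\n"		/* exclusive load (no acquire)  */
      	"	add	%w0, %w0, %w3\n"
      	"	stlxr	%w1, %w0, %2\n"	/* exclusive store with release */
      	"	cbnz	%w1, 1b\n"
      	"	dmb	ish"			/* full barrier after success   */
      	: "=&r" (result), "=&r" (tmp), "+Q" (*counter)
      	: "Ir" (i)
      	: "memory");

      	return result;
      }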
  19. 20 Dec 2013, 1 commit
  20. 08 May 2013, 1 commit
  21. 30 Apr 2013, 2 commits
  22. 22 Mar 2013, 3 commits
  23. 17 Sep 2012, 2 commits