1. 26 Feb 2014, 12 commits
  2. 14 Feb 2014, 1 commit
  3. 08 Feb 2014, 3 commits
    • arm64: defconfig: Expand default enabled features · 55834a77
      Committed by Mark Rutland
      FPGA implementations of the Cortex-A57 and Cortex-A53 are now available
      in the form of the SMM-A57 and SMM-A53 Soft Macrocell Models (SMMs) for
      Versatile Express. As these attach to a Motherboard Express V2M-P1 it
      would be useful to have support for some V2M-P1 peripherals enabled by
      default.
      
      Additionally, a couple of features have been introduced since the last
      defconfig update (CMA, jump labels) that would be good to have enabled
      by default to ensure they are built and boot tested.
      
      This patch updates the arm64 defconfig to enable support for these
      devices and features. The arm64 Kconfig is modified to select
      HAVE_PATA_PLATFORM, which is required to enable support for the
      CompactFlash controller on the V2M-P1.
      
      A few options which don't need to appear in defconfig are trimmed:
      
      * BLK_DEV - selected by default
      * EXPERIMENTAL - otherwise gone from the kernel
      * MII - selected by drivers which require it
      * USB_SUPPORT - selected by default
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
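      For illustration, the Kconfig side of this change amounts to one extra
      select under the top-level ARM64 entry; roughly (a sketch of the
      relevant fragment of arch/arm64/Kconfig, with the surrounding selects
      elided):

      	config ARM64
      		def_bool y
      		# HAVE_PATA_PLATFORM makes the platform PATA driver available,
      		# needed for the CompactFlash controller on the V2M-P1
      		select HAVE_PATA_PLATFORM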
    • arm64: asm: remove redundant "cc" clobbers · 95c41896
      Committed by Will Deacon
      cbnz/tbnz don't update the condition flags, so remove the "cc" clobbers
      from inline asm blocks that only use these instructions to implement
      conditional branches.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
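      As a concrete illustration, an atomic-add loop of the kind this commit
      touches needs no "cc" clobber once the retry branch is a cbnz, because
      cbnz tests a register directly rather than the condition flags. The
      sketch below shows the shape of such a sequence; it is illustrative
      only, not the exact kernel source (the function name and operand layout
      are assumptions):

      	static inline void my_atomic_add(int i, int *counter)
      	{
      		unsigned long tmp;
      		int result;

      		/* load-exclusive / modify / store-exclusive retry loop */
      		asm volatile(
      		"1:	ldxr	%w0, %2\n"
      		"	add	%w0, %w0, %w3\n"
      		"	stxr	%w1, %w0, %2\n"
      		"	cbnz	%w1, 1b"	/* retries on register %w1, not on flags */
      		: "=&r" (result), "=&r" (tmp), "+Q" (*counter)
      		: "Ir" (i));		/* no "cc" clobber needed */
      	}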
    • arm64: atomics: fix use of acquire + release for full barrier semantics · 8e86f0b4
      Committed by Will Deacon
      Linux requires a number of atomic operations to provide full barrier
      semantics; that is, no memory accesses after the operation can be
      observed before any accesses up to and including the operation in
      program order.
      
      On arm64, these operations have been incorrectly implemented as follows:
      
      	// A, B, C are independent memory locations
      
      	<Access [A]>
      
      	// atomic_op (B)
      1:	ldaxr	x0, [B]		// Exclusive load with acquire
      	<op(B)>
      	stlxr	w1, x0, [B]	// Exclusive store with release
      	cbnz	w1, 1b
      
      	<Access [C]>
      
      The assumption here is that two half barriers are equivalent to a
      full barrier, so the only permitted ordering would be A -> B -> C
      (where B is the atomic operation involving both a load and a store).
      
      Unfortunately, this is not the case by the letter of the architecture
      and, in fact, the accesses to A and C are permitted to pass their
      nearest half barrier resulting in orderings such as Bl -> A -> C -> Bs
      or Bl -> C -> A -> Bs (where Bl is the load-acquire on B and Bs is the
      store-release on B). This is a clear violation of the full barrier
      requirement.
      
      The simple way to fix this is to implement the same algorithm as ARMv7
      using explicit barriers:
      
      	<Access [A]>
      
      	// atomic_op (B)
      	dmb	ish		// Full barrier
      1:	ldxr	x0, [B]		// Exclusive load
      	<op(B)>
      	stxr	w1, x0, [B]	// Exclusive store
      	cbnz	w1, 1b
      	dmb	ish		// Full barrier
      
      	<Access [C]>
      
      but this has the undesirable effect of introducing *two* full barrier
      instructions. A better approach is actually the following, non-intuitive
      sequence:
      
      	<Access [A]>
      
      	// atomic_op (B)
      1:	ldxr	x0, [B]		// Exclusive load
      	<op(B)>
      	stlxr	w1, x0, [B]	// Exclusive store with release
      	cbnz	w1, 1b
      	dmb	ish		// Full barrier
      
      	<Access [C]>
      
      The simple observations here are:
      
        - The dmb ensures that no subsequent accesses (e.g. the access to C)
          can enter or pass the atomic sequence.
      
        - The dmb also ensures that no prior accesses (e.g. the access to A)
          can pass the atomic sequence.
      
        - Therefore, no prior access can pass a subsequent access, or
          vice-versa (i.e. A is strictly ordered before C).
      
        - The stlxr ensures that no prior access can pass the store component
          of the atomic operation.
      
      The only tricky part remaining is the ordering between the ldxr and the
      access to A, since the absence of the first dmb means that we're now
      permitting re-ordering between the ldxr and any prior accesses.
      
      From an (arbitrary) observer's point of view, there are two scenarios:
      
        1. We have observed the ldxr. This means that if we perform a store to
           [B], the ldxr will still return older data. If we can observe the
           ldxr, then we can potentially observe the permitted re-ordering
           with the access to A, which is clearly an issue when compared to
           the dmb variant of the code. Thankfully, the exclusive monitor will
           save us here since it will be cleared as a result of the store and
           the ldxr will retry. Notice that any use of a later memory
           observation to imply observation of the ldxr will also imply
           observation of the access to A, since the stlxr/dmb ensure strict
           ordering.
      
        2. We have not observed the ldxr. This means we can perform a store
           and influence the later ldxr. However, that doesn't actually tell
           us anything about the access to [A], so we've not lost anything
           here either when compared to the dmb variant.
      
      This patch implements this solution for our barriered atomic operations,
      ensuring that we satisfy the full barrier requirements where they are
      needed.
      
      Cc: <stable@vger.kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
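      In C terms, a barriered arm64 atomic following the fixed sequence
      (exclusive load, store-release, then dmb ish) looks roughly like the
      sketch below; this is illustrative, not the exact kernel source (the
      function name is an assumption):

      	static inline int my_atomic_add_return(int i, int *counter)
      	{
      		unsigned long tmp;
      		int result;

      		asm volatile(
      		"1:	ldxr	%w0, %2\n"	/* exclusive load, no acquire */
      		"	add	%w0, %w0, %w3\n"
      		"	stlxr	%w1, %w0, %2\n"	/* exclusive store with release */
      		"	cbnz	%w1, 1b\n"	/* retry if the exclusive store failed */
      		"	dmb	ish"		/* full barrier after the sequence */
      		: "=&r" (result), "=&r" (tmp), "+Q" (*counter)
      		: "Ir" (i)
      		: "memory");

      		return result;
      	}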
  4. 06 Feb 2014, 1 commit
  5. 05 Feb 2014, 9 commits
  6. 31 Jan 2014, 2 commits
  7. 30 Jan 2014, 1 commit
  8. 27 Jan 2014, 2 commits
  9. 24 Jan 2014, 1 commit
    • arm64: kernel: fix per-cpu offset restore on resume · fb4a9602
      Committed by Lorenzo Pieralisi
      The introduction of percpu offset optimisation through tpidr_el1 in:
      
      commit 71586276
      "arm64: percpu: implement optimised pcpu access using tpidr_el1"
      
      requires cpu_{suspend/resume} to restore the tpidr_el1 register upon resume
      so that percpu variables can be addressed correctly when a CPU comes out
      of reset from warm-boot.
      
      This patch fixes cpu_{suspend/resume} tpidr_el1 restoration on resume by
      calling the set_my_cpu_offset C API, as is done for primary and secondary
      CPUs on cold boot, so that even if the register used to store the percpu
      offset changes, the save and restore of general purpose registers does not
      have to be updated.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
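      The shape of the fix, in C, is simply to go through the generic percpu
      accessor on the resume path instead of restoring the register by hand in
      assembly; a minimal sketch (the wrapper function name is an assumption,
      while set_my_cpu_offset and per_cpu_offset are existing kernel APIs):

      	#include <linux/percpu.h>
      	#include <linux/smp.h>

      	/* Called on the warm-boot/resume path once the MMU is back up. */
      	static void my_restore_percpu_offset(void)
      	{
      		/* set_my_cpu_offset() writes the offset into whichever register
      		 * the percpu implementation uses (tpidr_el1 on arm64), so the
      		 * suspend/resume code need not know about that register. */
      		set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
      	}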
  10. 23 Jan 2014, 2 commits
  11. 17 Jan 2014, 1 commit
  12. 13 Jan 2014, 1 commit
  13. 12 Jan 2014, 1 commit
    • arch: Introduce smp_load_acquire(), smp_store_release() · 47933ad4
      Committed by Peter Zijlstra
      A number of situations currently require the heavyweight smp_mb(),
      even though there is no need to order prior stores against later
      loads.  Many architectures have much cheaper ways to handle these
      situations, but the Linux kernel currently has no portable way
      to make use of them.
      
      This commit therefore supplies smp_load_acquire() and
      smp_store_release() to remedy this situation.  The new
      smp_load_acquire() primitive orders the specified load against
      any subsequent reads or writes, while the new smp_store_release()
      primitive orders the specified store against any prior reads or
      writes.  These primitives allow array-based circular FIFOs to be
      implemented without an smp_mb(), and also allow a theoretical
      hole in rcu_assign_pointer() to be closed at no additional
      expense on most architectures.
      
      In addition, the RCU experience of transitioning from explicit
      smp_read_barrier_depends() and smp_wmb() to rcu_dereference()
      and rcu_assign_pointer(), respectively, resulted in substantial
      improvements in readability.  It therefore seems likely that
      replacing other explicit barriers with smp_load_acquire() and
      smp_store_release() will provide similar benefits.  It appears
      that roughly half of the explicit barriers in core kernel code
      might be so replaced.
      
      [Changelog by PaulMck]
      Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Victor Kaplansky <VICTORK@il.ibm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Link: http://lkml.kernel.org/r/20131213150640.908486364@infradead.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
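      The intended usage pattern can be seen in a single-producer,
      single-consumer ring like the one sketched below (a hand-written example
      under assumed names, not code from the patch): the producer publishes a
      slot with smp_store_release(), and the consumer observes it with
      smp_load_acquire(), so no smp_mb() is required on either side.

      	#include <linux/types.h>
      	#include <asm/barrier.h>

      	#define MY_RING_SIZE 256		/* power of two */

      	struct my_ring {
      		unsigned int head;		/* written by the producer */
      		unsigned int tail;		/* written by the consumer */
      		int buf[MY_RING_SIZE];
      	};

      	/* Producer: fill the slot, then release-store the new head. */
      	static bool my_ring_put(struct my_ring *r, int val)
      	{
      		unsigned int head = r->head;
      		unsigned int tail = smp_load_acquire(&r->tail);

      		if (head - tail >= MY_RING_SIZE)
      			return false;			/* full */
      		r->buf[head & (MY_RING_SIZE - 1)] = val;
      		smp_store_release(&r->head, head + 1);	/* publish the slot */
      		return true;
      	}

      	/* Consumer: acquire-load the head, then read the slot. */
      	static bool my_ring_get(struct my_ring *r, int *val)
      	{
      		unsigned int tail = r->tail;
      		unsigned int head = smp_load_acquire(&r->head);

      		if (tail == head)
      			return false;			/* empty */
      		*val = r->buf[tail & (MY_RING_SIZE - 1)];
      		smp_store_release(&r->tail, tail + 1);	/* free the slot */
      		return true;
      	}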
  14. 11 Jan 2014, 1 commit
    • arm64: kernel: restore HW breakpoint registers in cpu_suspend · 65c021bb
      Committed by Lorenzo Pieralisi
      When a CPU resumes from low power, it restores HW breakpoint and
      watchpoint slots through a CPU PM notifier. Since we want to enable
      debugging as early as possible in the resume path, the mdscr content
      is restored along with the general purpose registers in the cpu_suspend
      API and debug exceptions are re-enabled when cpu_suspend returns. Since
      the CPU PM notifier runs after a CPU has been resumed, we cannot expect
      the HW breakpoint registers to contain sane values until the notifier is
      run, as their content is unknown at reset; this means the CPU might run
      with debug exceptions enabled and mdscr restored, but with HW breakpoint
      registers containing junk values that can trigger spurious debug
      exceptions.
      
      This patch fixes the current HW breakpoint restore by moving HW breakpoint
      register restoration into the cpu_suspend API, before debug exceptions are
      enabled. This way, as soon as cpu_suspend returns, the kernel can resume
      debugging with sane values in the HW breakpoint registers.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
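      The essence of the fix is an ordering constraint on the resume path,
      which the sketch below makes explicit (all helper names here are
      assumptions for illustration; only the relative order matters):

      	/* Resume from low power: breakpoint/watchpoint registers must hold
      	 * sane values before debug exceptions are unmasked, otherwise junk
      	 * reset values can raise spurious debug exceptions. */
      	static void my_cpu_suspend_finish(void)
      	{
      		my_restore_hw_breakpoint_regs();	/* DBGBVRn_EL1/DBGBCRn_EL1, DBGWVRn_EL1/DBGWCRn_EL1 */
      		my_restore_mdscr_el1();			/* monitor debug system control */
      		my_enable_debug_exceptions();		/* only now unmask debug exceptions */
      	}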
  15. 08 Jan 2014, 2 commits