1. 10 February 2014, 1 commit
    • locking/mcs: Order the header files in Kbuild of each architecture in alphabetical order · b119fa61
      Authored by Tim Chen
      We perform a cleanup of the Kbuild files in each architecture.
      We order the header files listed in each Kbuild alphabetically
      by running the script below.
      
      for i in arch/*/include/asm/Kbuild
      do
              # Collect every header named on "generic-y" lines (following
              # backslash continuations), pass all other lines through
              # unchanged, then emit the collected headers sorted.
              gawk '/^generic-y/ {
                      i = 3;
                      do {
                              for (; i <= NF; i++) {
                                      if ($i == "\\") {
                                              getline;
                                              # restart at $1 of the continuation
                                              # line (the for loop increments i)
                                              i = 0;
                                              continue;
                                      }
                                      if ($i != "")
                                              hdr[$i] = $i;
                              }
                              break;
                      } while (1);
                      next;
              }
              # print every non-generic-y line as-is
              // {
                      print $0;
              }
              END {
                      n = asort(hdr);
                      for (i = 1; i <= n; i++)
                              print "generic-y += " hdr[i];
              }' "$i" > "${i}.sorted";
              mv "${i}.sorted" "$i";
      done
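      For illustration, a hypothetical Kbuild fragment (not taken from any
      particular architecture) before and after the script:

      # before: mixed one-per-line and backslash-continued entries
      generic-y += termios.h sizes.h \
                   trace_clock.h
      generic-y += clkdev.h

      # after: one sorted "generic-y +=" entry per header
      generic-y += clkdev.h
      generic-y += sizes.h
      generic-y += termios.h
      generic-y += trace_clock.h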
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Matthew R Wilcox <matthew.r.wilcox@intel.com>
      Cc: Aswin Chandramouleeswaran <aswin@hp.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: "Figo.zhang" <figo1802@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Waiman Long <waiman.long@hp.com>
      Cc: Peter Hurley <peter@hurleysoftware.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Alex Shi <alex.shi@linaro.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: George Spelvin <linux@horizon.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      [ Fixed build bug. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 31 January 2014, 2 commits
  3. 30 January 2014, 1 commit
  4. 27 January 2014, 2 commits
  5. 24 January 2014, 1 commit
    • arm64: kernel: fix per-cpu offset restore on resume · fb4a9602
      Authored by Lorenzo Pieralisi
      The introduction of percpu offset optimisation through tpidr_el1 in:
      
      commit 71586276 ("arm64: percpu: implement optimised pcpu access using tpidr_el1")
      
      requires cpu_{suspend,resume} to restore the tpidr_el1 register upon resume
      so that percpu variables can be addressed correctly when a CPU comes out
      of reset from warm-boot.
      
      This patch fixes tpidr_el1 restoration on resume by calling the
      set_my_cpu_offset C API, as is done on primary and secondary CPUs on
      cold boot, so that even if the register used to store the percpu offset
      changes, the save and restore of general purpose registers does not
      have to be updated.
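      A minimal sketch of the approach (set_my_cpu_offset() and
      per_cpu_offset() are the generic kernel helpers; the hook name below
      is illustrative, not taken from the patch):

      #include <linux/percpu.h>
      #include <linux/smp.h>

      /*
       * Illustrative resume-side fixup: recompute and install this
       * CPU's percpu offset before any per-cpu variable is accessed.
       * On arm64, set_my_cpu_offset() writes tpidr_el1, so the save
       * and restore of general purpose registers never needs to know
       * which system register holds the offset.
       */
      static void notrace resume_set_percpu_offset(void)
      {
              set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
      }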
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  6. 23 January 2014, 2 commits
  7. 17 January 2014, 1 commit
  8. 13 January 2014, 1 commit
  9. 12 January 2014, 1 commit
    • arch: Introduce smp_load_acquire(), smp_store_release() · 47933ad4
      Authored by Peter Zijlstra
      A number of situations currently require the heavyweight smp_mb(),
      even though there is no need to order prior stores against later
      loads.  Many architectures have much cheaper ways to handle these
      situations, but the Linux kernel currently has no portable way
      to make use of them.
      
      This commit therefore supplies smp_load_acquire() and
      smp_store_release() to remedy this situation.  The new
      smp_load_acquire() primitive orders the specified load against
      any subsequent reads or writes, while the new smp_store_release()
      primitive orders the specified store against any prior reads or
      writes.  These primitives allow array-based circular FIFOs to be
      implemented without an smp_mb(), and also allow a theoretical
      hole in rcu_assign_pointer() to be closed at no additional
      expense on most architectures.
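      
      A hedged sketch of the circular-FIFO case (the ring structure, size,
      and function names here are illustrative, not from the patch):

      #include <asm/barrier.h>      /* smp_load_acquire(), smp_store_release() */
      #include <linux/types.h>

      #define RING_SIZE 128         /* illustrative; must be a power of two */

      /* Single-producer/single-consumer ring: no smp_mb() required. */
      struct ring {
              unsigned int head;    /* written only by the producer */
              unsigned int tail;    /* written only by the consumer */
              int slots[RING_SIZE];
      };

      static bool ring_put(struct ring *r, int v)
      {
              unsigned int head = r->head;

              /* acquire: don't overwrite a slot before its read completes */
              if (head - smp_load_acquire(&r->tail) == RING_SIZE)
                      return false;                   /* full */
              r->slots[head & (RING_SIZE - 1)] = v;
              /* release: publish the slot write before the new head */
              smp_store_release(&r->head, head + 1);
              return true;
      }

      static bool ring_get(struct ring *r, int *v)
      {
              unsigned int tail = r->tail;

              /* acquire: don't read the slot before it is published */
              if (smp_load_acquire(&r->head) == tail)
                      return false;                   /* empty */
              *v = r->slots[tail & (RING_SIZE - 1)];
              /* release: finish the slot read before freeing it */
              smp_store_release(&r->tail, tail + 1);
              return true;
      }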
      
      In addition, the RCU experience transitioning from explicit
      smp_read_barrier_depends() and smp_wmb() to rcu_dereference()
      and rcu_assign_pointer(), respectively, resulted in substantial
      improvements in readability.  It therefore seems likely that
      replacing other explicit barriers with smp_load_acquire() and
      smp_store_release() will provide similar benefits.  It appears
      that roughly half of the explicit barriers in core kernel code
      might be so replaced.
      
      [Changelog by PaulMck]
      Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Victor Kaplansky <VICTORK@il.ibm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Link: http://lkml.kernel.org/r/20131213150640.908486364@infradead.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  10. 11 January 2014, 1 commit
    • arm64: kernel: restore HW breakpoint registers in cpu_suspend · 65c021bb
      Authored by Lorenzo Pieralisi
      When a CPU resumes from low power, it restores the HW breakpoint and
      watchpoint slots through a CPU PM notifier. Since we want to enable
      debugging as early as possible in the resume path, the mdscr content
      is restored along with the general purpose registers in the cpu_suspend
      API, and debug exceptions are re-enabled when cpu_suspend returns. Since
      the CPU PM notifier runs after a CPU has been resumed, we cannot expect
      the HW breakpoint registers to contain sane values until the notifier
      runs; their contents are unknown at reset. This means the CPU might run
      with debug exceptions enabled and mdscr restored, but with HW breakpoint
      registers containing junk values that can trigger spurious debug
      exceptions.
      
      This patch fixes the HW breakpoint restore path by moving the
      restoration of the HW breakpoint registers into the cpu_suspend API,
      before debug exceptions are enabled. This way, as soon as cpu_suspend
      returns, the kernel can resume debugging with sane values in the HW
      breakpoint registers.
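      A rough sketch of the intended ordering (hedged: the hook below is
      illustrative; the real wiring lives in the arm64 cpu_suspend/cpu_resume
      path):

      /*
       * Illustrative resume-side ordering: repopulate the breakpoint
       * and watchpoint registers while debug exceptions are still
       * masked, and only then unmask them, so the junk left in those
       * registers at reset can never raise a spurious exception.
       */
      static void (*hw_breakpoint_restore)(void *); /* registered by hw_breakpoint code */

      static void cpu_resume_debug(void)
      {
              if (hw_breakpoint_restore)
                      hw_breakpoint_restore(NULL);  /* sane slot values first */
              local_dbg_enable();                   /* then take debug exceptions */
      }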
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  11. 08 January 2014, 5 commits
  12. 28 December 2013, 3 commits
  13. 22 December 2013, 1 commit
  14. 21 December 2013, 1 commit
  15. 20 December 2013, 17 commits