1. 16 Feb, 2016 (1 commit)
  2. 21 Oct, 2015 (1 commit)
    • arm64: Delay cpu feature capability checks · dbb4e152
      Suzuki K. Poulose committed
      At the moment we run through the arm64_features capability list for
      each CPU and set the capability if any one of the CPUs supports it.
      This could be problematic in a heterogeneous system with differing
      capabilities. Delay the CPU feature checks until all the enabled
      CPUs are up (i.e., smp_cpus_done()), so that we can make better
      decisions based on the overall system capability. Once we decide on
      and advertise the capabilities, the alternatives can be applied.
      From this state, we cannot roll a feature back to disabled based on
      the values from a newly hotplugged CPU, due to the runtime patching
      and other reasons. So, for all new CPUs, we need to make sure that
      they have the established system capabilities; failing that, we
      bring the CPU down, preventing it from coming online. Once the
      capabilities are decided, any new CPU booting up goes through
      verification to ensure that it has all the enabled capabilities,
      and it also invokes the respective enable() method on the CPU.
      
      The CPU errata checks are not delayed and are still executed per
      CPU to detect the respective capabilities. If we ever come across a
      non-errata capability that needs to be checked on each CPU, we
      could introduce it via a new capability table (or a flag) which can
      be processed per CPU.
      
      The next patch will make the feature checks use the system wide
      safe value of a feature register.
      
      NOTE: The enable() methods associated with the capabilities are
      scheduled on all the CPUs (which is the only use case at the
      moment). If we need a different type of enable() which only needs
      to run once on any CPU, we should be able to handle that when
      needed. A sketch of the overall flow follows this entry.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Tested-by: Dave Martin <Dave.Martin@arm.com>
      [catalin.marinas@arm.com: static variable and coding style fixes]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      dbb4e152
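
      Below is a minimal, self-contained user-space C sketch of the flow
      described above, assuming one feature bitmask per CPU. The names
      (struct capability, setup_system_capabilities, verify_late_cpu) are
      invented for illustration and are not the kernel's cpufeature API.

      /*
       * Sketch: decide system-wide capabilities once all boot CPUs are up,
       * then verify every late (hotplugged) CPU against that decision.
       */
      #include <stdbool.h>
      #include <stdio.h>

      #define NCAPS 2

      struct cpu { unsigned long features; };    /* per-CPU feature bits */

      struct capability {
          const char *desc;
          unsigned long mask;                    /* feature bit required */
          void (*enable)(void);                  /* run on each CPU */
      };

      static void enable_cap0(void) { puts("  enable() invoked for cap A"); }

      static struct capability caps[NCAPS] = {
          { "example feature A", 1UL << 0, enable_cap0 },
          { "example feature B", 1UL << 1, NULL },
      };

      static bool system_caps[NCAPS];

      /* Called once, when all boot CPUs are up (cf. smp_cpus_done()):
       * decide and advertise the capabilities, then run enable() hooks.
       * The sketch uses a conservative all-CPUs-must-have-it policy. */
      static void setup_system_capabilities(const struct cpu *cpus, int n)
      {
          for (int c = 0; c < NCAPS; c++) {
              bool present = true;
              for (int i = 0; i < n; i++)
                  present = present && (cpus[i].features & caps[c].mask);
              system_caps[c] = present;
              if (present && caps[c].enable)
                  caps[c].enable();   /* point of no return: patched in */
          }
      }

      /* Verification for a CPU hotplugged after the decision: it must have
       * every advertised capability, or it is not brought online. */
      static bool verify_late_cpu(const struct cpu *cpu)
      {
          for (int c = 0; c < NCAPS; c++) {
              if (!system_caps[c])
                  continue;
              if (!(cpu->features & caps[c].mask)) {
                  printf("late CPU lacks '%s': keep it offline\n",
                         caps[c].desc);
                  return false;
              }
              if (caps[c].enable)
                  caps[c].enable();   /* new CPU runs the same enable() */
          }
          return true;
      }

      int main(void)
      {
          struct cpu boot[2] = { { .features = 0x3 }, { .features = 0x1 } };
          setup_system_capabilities(boot, 2); /* only A is system-wide */

          struct cpu late = { .features = 0x2 };  /* has B, lacks A */
          if (!verify_late_cpu(&late))
              puts("CPU rejected");
          return 0;
      }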
  3. 27 Jul, 2015 (1 commit)
  4. 01 Jun, 2015 (1 commit)
  5. 17 Mar, 2015 (1 commit)
  6. 28 Feb, 2015 (1 commit)
  7. 07 Jan, 2015 (2 commits)
  8. 29 Aug, 2014 (1 commit)
  9. 17 Jul, 2014 (1 commit)
    • arch, locking: Ciao arch_mutex_cpu_relax() · 3a6bfbc9
      Davidlohr Bueso committed
      The arch_mutex_cpu_relax() function, introduced by 34b133f8, is
      hacky and ugly. It was added a few years ago to address the fact
      that common cpu_relax() calls include yielding on s390, and thus
      impact the optimistic spinning functionality of mutexes. Nowadays
      we use this function well beyond mutexes: rwsem, qrwlock, mcs and
      lockref. Since the macro that defines the call lives in the mutex
      header, any users must include mutex.h, and the naming is
      misleading as well.

      This patch (i) renames the call to cpu_relax_lowlatency ("relax,
      but only if you can do it with very low latency") and (ii) defines
      it in each arch's asm/processor.h local header, just like for
      regular cpu_relax functions. On all archs, except s390,
      cpu_relax_lowlatency is simply cpu_relax, and thus we can take it
      out of mutex.h. While this can seem redundant, I believe it is a
      good choice as it allows us to move arch-specific logic out of the
      generic locking primitives and enables future(?) archs to
      transparently define it, similarly to System Z. A sketch of the
      per-arch definitions follows this entry.
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Bharat Bhushan <r65777@freescale.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Howells <dhowells@redhat.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
      Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Hirokazu Takata <takata@linux-m32r.org>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James E.J. Bottomley <jejb@parisc-linux.org>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jason Wang <jasowang@redhat.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Joseph Myers <joseph@codesourcery.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Qais Yousef <qais.yousef@imgtec.com>
      Cc: Qiaowei Ren <qiaowei.ren@intel.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Stratos Karafotis <stratosk@semaphore.gr>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vasily Kulikov <segoon@openwall.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Vineet Gupta <Vineet.Gupta1@synopsys.com>
      Cc: Waiman Long <Waiman.Long@hp.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Wolfram Sang <wsa@the-dreams.de>
      Cc: adi-buildroot-devel@lists.sourceforge.net
      Cc: linux390@de.ibm.com
      Cc: linux-alpha@vger.kernel.org
      Cc: linux-am33-list@redhat.com
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-c6x-dev@linux-c6x.org
      Cc: linux-cris-kernel@axis.com
      Cc: linux-hexagon@vger.kernel.org
      Cc: linux-ia64@vger.kernel.org
      Cc: linux@lists.openrisc.net
      Cc: linux-m32r-ja@ml.linux-m32r.org
      Cc: linux-m32r@ml.linux-m32r.org
      Cc: linux-m68k@lists.linux-m68k.org
      Cc: linux-metag@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: linux-parisc@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: linux-s390@vger.kernel.org
      Cc: linux-sh@vger.kernel.org
      Cc: linux-xtensa@linux-xtensa.org
      Cc: sparclinux@vger.kernel.org
      Link: http://lkml.kernel.org/r/1404079773.2619.4.camel@buesod1.americas.hpqcorp.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3a6bfbc9
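
      As a rough sketch of the shape this gives each architecture,
      assuming the simplified fragment below (the real definitions live
      in each arch's asm/processor.h, and the s390 body is illustrative):

      /* Illustrative header fragment, not the verbatim kernel code. */
      #ifdef CONFIG_S390
      /* s390's cpu_relax() can yield the CPU, which is far too costly
       * inside an optimistic-spinning loop, so only emit a compiler
       * barrier while spinning. */
      # define cpu_relax_lowlatency() barrier()
      #else
      /* Everywhere else relaxing is already cheap: reuse cpu_relax(). */
      # define cpu_relax_lowlatency() cpu_relax()
      #endif

      /* Typical call site, a spin-wait loop in a lock slow path:
       *
       *     while (!READ_ONCE(node->locked))
       *         cpu_relax_lowlatency();
       */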
  10. 10 Jul, 2014 (1 commit)
  11. 09 May, 2014 (1 commit)
  12. 25 Oct, 2013 (1 commit)
  13. 09 Nov, 2012 (1 commit)
  14. 19 Oct, 2012 (1 commit)
    • arm64: No need to set the x0-x2 registers in start_thread() · 16dd46bb
      Catalin Marinas committed
      For historical reasons, ARM used to set r0-r2 in start_thread() to
      the first values on the user stack when starting a new user
      application. The same logic was inherited in AArch64. The x0
      register is overridden by the sys_execve() return value, so it is
      always zero on success. The x1 and x2 registers are ignored by
      AArch64 and EABI AArch32 applications, so we can safely remove the
      register setting for both native and compat user space.

      This also fixes a potential fault caused by the kernel accessing
      the user space stack directly. A sketch of the simplified
      start_thread() follows this entry.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Al Viro <viro@zeniv.linux.org.uk>
      16dd46bb
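
      A simplified sketch of the resulting start_thread(), assuming the
      arm64 pt_regs field names; the exact code in
      arch/arm64/include/asm/processor.h may be structured differently:

      /* Sketch: initial user register state, never touching user memory. */
      static inline void start_thread(struct pt_regs *regs,
                                      unsigned long pc, unsigned long sp)
      {
          memset(regs, 0, sizeof(*regs));
          regs->pc = pc;
          regs->pstate = PSR_MODE_EL0t;   /* EL0, AArch64 state */
          regs->sp = sp;
          /* The removed code loaded the first user-stack words into x0-x2
           * here. x0 is clobbered by the sys_execve() return value, and
           * x1/x2 are ignored by AArch64 and EABI AArch32 user space, so
           * those user-space reads (a potential fault source) are gone. */
      }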
  15. 17 Oct, 2012 (1 commit)
  16. 17 Sep, 2012 (1 commit)