1. 21 Jun 2016, 1 commit
    • powerpc: Improve FSCR init and context switching · b57bd2de
      Committed by Michael Neuling
      This fixes a few issues with FSCR init and switching.
      
      In commit 152d523e ("powerpc: Create context switch helpers
      save_sprs() and restore_sprs()") we moved the setting of the FSCR
      register from inside a CPU_FTR_ARCH_207S section to inside just a
      CPU_FTR_ARCH_DSCR section. Hence we are setting FSCR on POWER6/7 where
      the FSCR doesn't exist. This is harmless but we shouldn't do it.
      
      Also, we can simplify the FSCR context switch. We don't need to go
      through the calculation involving dscr_inherit. We can just restore
      what we saved last time.
      
      We also set an initial value in INIT_THREAD, so that pid 1, which is
      cloned from it, gets a sane value.
      
      Based on patch by Jack Miller.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      b57bd2de
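      A rough user-space model of the resulting switch logic. The names hw_fscr,
      cpu_has_arch_207s and thread_sprs are invented for this sketch; the real code
      uses cpu_has_feature(CPU_FTR_ARCH_207S) and mfspr/mtspr on SPRN_FSCR inside
      save_sprs()/restore_sprs() in process.c.

      #include <stdint.h>
      #include <stdbool.h>

      /* Illustrative sketch only, not the arch/powerpc implementation. */
      static uint64_t hw_fscr;                 /* stands in for the FSCR SPR */
      static bool cpu_has_arch_207s = true;    /* stands in for the CPU feature test */

      struct thread_sprs {
      	uint64_t fscr;   /* given a value at INIT_THREAD time, so pid 1 inherits it */
      };

      /* Switch-out: remember whatever FSCR the thread last ran with. */
      static void save_sprs(struct thread_sprs *t)
      {
      	if (cpu_has_arch_207s)          /* FSCR only exists on ARCH 2.07 CPUs */
      		t->fscr = hw_fscr;
      }

      /* Switch-in: restore the saved value directly; no dscr_inherit recalculation. */
      static void restore_sprs(struct thread_sprs *prev, struct thread_sprs *next)
      {
      	if (cpu_has_arch_207s && prev->fscr != next->fscr)
      		hw_fscr = next->fscr;
      }

      The point of the shape above is simply that both the save and the restore sit
      behind the same ARCH 2.07 feature test, so POWER6/7 never touch the register.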
  2. 14 Jun 2016, 1 commit
  3. 29 Mar 2016, 1 commit
  4. 02 Mar 2016, 1 commit
    • powerpc: Restore FPU/VEC/VSX if previously used · 70fe3d98
      Committed by Cyril Bur
      Currently the FPU, VEC and VSX facilities are lazily loaded. This is not
      a problem unless a process is using these facilities.
      
      Modern versions of GCC are very good at automatically vectorising code,
      new and modernised workloads make use of floating point and vector
      facilities, even the kernel makes use of vectorised memcpy.
      
      All this combined greatly increases the cost of a syscall, since the
      kernel sometimes uses these facilities even in the syscall fast path, making
      it increasingly common for a thread to take an *_unavailable exception soon
      after a syscall, not to mention potentially taking all three.
      
      The obvious overcompensation to this problem is to simply always load
      all the facilities on every exit to userspace. Loading up all FPU, VEC
      and VSX registers every time can be expensive and if a workload does
      avoid using them, it should not be forced to incur this penalty.
      
      An 8-bit counter is used to detect whether the registers have been used in
      the past, and the registers keep being loaded until the value wraps back
      to zero (the scheme is sketched after this entry).
      
      Several versions of the assembly in entry_64.S were tested:
      
        1. Always calling C.
        2. Performing a common case check and then calling C.
        3. A complex check in asm.
      
      After some benchmarking it was determined that avoiding C in the common
      case is a performance benefit (option 2). The full check in asm (option
      3) greatly complicated that codepath for a negligible performance gain
      and the trade-off was deemed not worth it.
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      [mpe: Move load_vec in the struct to fill an existing hole, reword change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      
      70fe3d98
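      A self-contained model of the counter scheme described above. The field and
      function names here are made up for illustration; the real counters are the
      8-bit load_fp/load_vec members added to thread_struct.

      #include <stdint.h>
      #include <stdbool.h>
      #include <stdio.h>

      /* Illustrative model only; names do not match arch/powerpc exactly. */
      struct thread_model {
      	uint8_t load_fp;        /* 8-bit "FP was used recently" counter */
      	bool    fp_regs_live;   /* models MSR_FP being set for the thread */
      };

      /* Lazy path: the thread touched FP while it was disabled (fp_unavailable). */
      static void fp_unavailable_exception(struct thread_model *t)
      {
      	t->fp_regs_live = true;
      	t->load_fp++;                    /* remember that FP was wanted */
      }

      /* Eager path: on every exit to userspace, reload FP while the counter is non-zero. */
      static void restore_math(struct thread_model *t)
      {
      	if (!t->load_fp)
      		return;                  /* counter wrapped: back to pure lazy loading */
      	if (!t->fp_regs_live) {
      		t->fp_regs_live = true;  /* load_fp_state() in the real code */
      		t->load_fp++;            /* eventually wraps, ending the eager period */
      	}
      }

      int main(void)
      {
      	struct thread_model t = { 0 };
      	fp_unavailable_exception(&t);
      	for (int i = 0; i < 300; i++) {  /* run well past the 8-bit wrap point */
      		t.fp_regs_live = false;  /* pretend a context switch dropped the state */
      		restore_math(&t);
      	}
      	printf("load_fp=%u live=%d\n", (unsigned)t.load_fp, (int)t.fp_regs_live);
      	return 0;
      }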
  5. 01 Dec 2015, 3 commits
  6. 16 Jul 2015, 1 commit
  7. 07 Jun 2015, 1 commit
  8. 15 Dec 2014, 2 commits
    • powernv/powerpc: Add winkle support for offline cpus · 77b54e9f
      Committed by Shreyas B. Prabhu
      Winkle is a deep idle state supported in power8 chips. A core enters
      winkle when all the threads of the core enter winkle. In this state
      the power supply to the entire chiplet, i.e. the core, private L2 and private L3,
      is turned off. As a result it gives higher power savings compared to
      sleep.
      
      But entering winkle results in a total hypervisor state loss. Hence the
      hypervisor context has to be preserved before entering winkle and
      restored upon wake up.
      
      Power-on Reset Engine (PORE) is a dedicated engine which is responsible
      for powering on the chiplet during wake up. It can be programmed to
      restore the register contents of a few specific registers. This patch
      uses PORE to restore register state wherever possible and uses the stack to
      save and restore the rest of the necessary registers.
      
      With hypervisor state restore, things fall under three categories:
      per-core state, per-subcore state and per-thread state. To manage this,
      we extend the infrastructure introduced for sleep. Mainly we add a paca
      variable subcore_sibling_mask. Using this and the core_idle_state we can
      distinguish the first thread in the core and in the subcore.
      Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      77b54e9f
    • powernv/cpuidle: Redesign idle states management · 7cba160a
      Committed by Shreyas B. Prabhu
      Deep idle states like sleep and winkle are per core idle states. A core
      enters these states only when all the threads enter either the
      particular idle state or a deeper one. There are tasks, like the fastsleep
      hardware bug workaround and hypervisor core state save, which have to be
      done only by the last thread of the core entering a deep idle state, and
      similarly tasks, like timebase resync and hypervisor core register restore,
      that have to be done only by the first thread waking up from these
      states.
      
      The current idle state management does not have a way to distinguish the
      first/last thread of the core waking/entering idle states. Tasks like
      timebase resync are done by all the threads. This is not only
      suboptimal, but can cause functionality issues when subcores and KVM are
      involved.
      
      This patch adds the necessary infrastructure to track idle states of
      threads in a per-core structure. It uses this info to perform tasks like
      fastsleep workaround and timebase resync only once per core.
      Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
      Originally-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: linux-pm@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      7cba160a
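      A model of the per-core bookkeeping idea in plain C11. This is only an
      illustration of "last thread in / first thread out" tracking; the kernel does
      the equivalent work in the idle assembly on a per-core state word reachable
      from the paca, not with the structure shown here.

      #include <stdatomic.h>
      #include <stdint.h>
      #include <stdio.h>

      #define THREADS_PER_CORE 8

      struct core_idle {
      	/* one bit per sibling thread; bit set = thread is awake/running */
      	_Atomic uint32_t thread_awake_mask;
      };

      /* Returns 1 if the caller is the last thread of the core to go idle. */
      static int note_enter_idle(struct core_idle *core, int thread)
      {
      	uint32_t prev = atomic_fetch_and(&core->thread_awake_mask, ~(1u << thread));
      	return (prev & ~(1u << thread)) == 0;   /* no other thread still awake */
      }

      /* Returns 1 if the caller is the first thread of the core to wake up. */
      static int note_exit_idle(struct core_idle *core, int thread)
      {
      	uint32_t prev = atomic_fetch_or(&core->thread_awake_mask, 1u << thread);
      	return prev == 0;                       /* everyone was idle until now */
      }

      int main(void)
      {
      	struct core_idle core = { .thread_awake_mask = 0xff };
      	for (int t = 0; t < THREADS_PER_CORE; t++)
      		if (note_enter_idle(&core, t))
      			printf("thread %d is last in: apply fastsleep workaround\n", t);
      	if (note_exit_idle(&core, 3))
      		printf("thread 3 is first out: resync timebase\n");
      	return 0;
      }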
  9. 08 Dec 2014, 1 commit
    • powerpc/powernv: Return to cpu offline loop when finished in KVM guest · 56548fc0
      Committed by Paul Mackerras
      When a secondary hardware thread has finished running a KVM guest, we
      currently put that thread into nap mode using a nap instruction in
      the KVM code.  This changes the code so that instead of doing a nap
      instruction directly, we instead cause the call to power7_nap() that
      put the thread into nap mode to return.  The reason for doing this is
      to avoid the KVM code having to know what low-power mode to
      put the thread into.
      
      In the case of a secondary thread used to run a KVM guest, the thread
      will be offline from the point of view of the host kernel, and the
      relevant power7_nap() call is the one in pnv_smp_cpu_disable().
      In this case we don't want to clear pending IPIs in the offline loop
      in that function, since that might cause us to miss the wakeup for
      the next time the thread needs to run a guest.  To tell whether or
      not to clear the interrupt, we use the SRR1 value returned from
      power7_nap(), and check if it indicates an external interrupt.  We
      arrange that the return from power7_nap() when we have finished running
      a guest returns 0, so pending interrupts don't get flushed in that
      case.
      
      Note that it is important that a secondary thread that has finished
      executing in the guest, or that didn't have a guest to run, should
      not return to power7_nap's caller while the kvm_hstate.hwthread_req
      flag in the PACA is non-zero, because the return from power7_nap
      will reenable the MMU, and the MMU might still be in guest context.
      In this situation we spin at low priority in real mode waiting for
      hwthread_req to become zero.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      56548fc0
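      A toy sketch of the resulting shape of the offline loop. The WAKE_* values and
      power7_nap_model() are made up for this illustration; the real code inspects
      the SRR1 value returned by power7_nap() and only clears the pending IPI when
      the wake reason was an external interrupt.

      #include <stdio.h>

      #define WAKE_NONE      0UL   /* made-up: nap returned because a guest run finished */
      #define WAKE_EXTERNAL  1UL   /* made-up: woken by an external interrupt (an IPI) */

      static unsigned long power7_nap_model(int count)
      {
      	/* The real power7_nap() returns the SRR1 value seen at wakeup, or 0 when
      	 * it returns because KVM has finished running a guest on this thread. */
      	return (count % 2) ? WAKE_EXTERNAL : WAKE_NONE;
      }

      static void clear_pending_ipi(void)
      {
      	puts("external wakeup: clearing the pending IPI");
      }

      int main(void)
      {
      	/* Stand-in for the host's cpu offline loop. */
      	for (int i = 0; i < 4; i++) {
      		unsigned long wake = power7_nap_model(i);
      		/* Only flush the interrupt when the wakeup really was external;
      		 * a 0 return (guest run finished) must not eat the next wakeup. */
      		if (wake == WAKE_EXTERNAL)
      			clear_pending_ipi();
      	}
      	return 0;
      }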
  10. 17 Jul 2014, 1 commit
    • arch, locking: Ciao arch_mutex_cpu_relax() · 3a6bfbc9
      Committed by Davidlohr Bueso
      The arch_mutex_cpu_relax() function, introduced by 34b133f8, is
      hacky and ugly. It was added a few years ago to address the fact
      that common cpu_relax() calls include yielding on s390, and thus
      impact the optimistic spinning functionality of mutexes. Nowadays
      we use this function well beyond mutexes: rwsem, qrwlock, mcs and
      lockref. Since the macro that defines the call is in the mutex header,
      any users must include mutex.h and the naming is misleading as well.
      
      This patch (i) renames the call to cpu_relax_lowlatency ("relax, but
      only if you can do it with very low latency") and (ii) defines it in
      each arch's asm/processor.h local header, just like for regular cpu_relax
      functions. On all archs except s390, cpu_relax_lowlatency is simply cpu_relax,
      and thus we can take it out of mutex.h. While this can seem redundant,
      I believe it is a good choice as it allows us to move arch-specific
      logic out of generic locking primitives and enables future(?) archs to
      transparently define it, similarly to System Z.
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Bharat Bhushan <r65777@freescale.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Howells <dhowells@redhat.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
      Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Hirokazu Takata <takata@linux-m32r.org>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James E.J. Bottomley <jejb@parisc-linux.org>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jason Wang <jasowang@redhat.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Joseph Myers <joseph@codesourcery.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Qais Yousef <qais.yousef@imgtec.com>
      Cc: Qiaowei Ren <qiaowei.ren@intel.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Stratos Karafotis <stratosk@semaphore.gr>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vasily Kulikov <segoon@openwall.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Vineet Gupta <Vineet.Gupta1@synopsys.com>
      Cc: Waiman Long <Waiman.Long@hp.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Wolfram Sang <wsa@the-dreams.de>
      Cc: adi-buildroot-devel@lists.sourceforge.net
      Cc: linux390@de.ibm.com
      Cc: linux-alpha@vger.kernel.org
      Cc: linux-am33-list@redhat.com
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-c6x-dev@linux-c6x.org
      Cc: linux-cris-kernel@axis.com
      Cc: linux-hexagon@vger.kernel.org
      Cc: linux-ia64@vger.kernel.org
      Cc: linux@lists.openrisc.net
      Cc: linux-m32r-ja@ml.linux-m32r.org
      Cc: linux-m32r@ml.linux-m32r.org
      Cc: linux-m68k@lists.linux-m68k.org
      Cc: linux-metag@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: linux-parisc@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: linux-s390@vger.kernel.org
      Cc: linux-sh@vger.kernel.org
      Cc: linux-xtensa@linux-xtensa.org
      Cc: sparclinux@vger.kernel.org
      Link: http://lkml.kernel.org/r/1404079773.2619.4.camel@buesod1.americas.hpqcorp.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3a6bfbc9
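      The per-arch definition this describes is trivial; a sketch of the pattern
      (not the verbatim header text) looks roughly like the following, where
      barrier() and cpu_relax() are the usual kernel primitives.

      /* Sketch of the pattern added to each architecture's asm/processor.h. */
      #ifdef __s390__
      /* s390: cpu_relax() may yield the CPU to the hypervisor - too heavy for spin loops */
      #define cpu_relax_lowlatency()	barrier()
      #else
      /* everyone else, powerpc included: relaxing is already cheap */
      #define cpu_relax_lowlatency()	cpu_relax()
      #endif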
  11. 28 May 2014, 1 commit
  12. 05 Mar 2014, 1 commit
  13. 29 Jan 2014, 2 commits
    • powerpc/pseries/cpuidle: smt-snooze-delay cleanup. · 3fa8cad8
      Committed by Deepthi Dharwar
      smt-snooze-delay was designed to disable NAP state or delay the entry
      to the NAP state prior to the adoption of the cpuidle framework. This
      is a per-cpu variable. With the coming of the cpuidle framework,
      states can be disabled on a per-cpu basis using the cpuidle/enable
      sysfs entry.
      
      Also, with the coming of the cpuidle driver, each state's target residency
      is per-driver, unlike earlier when it was per-device. Therefore,
      the per-cpu sysfs smt-snooze-delay, which decides the target residency
      of the idle state on a particular cpu, causes more confusion to the user,
      as we cannot have different smt-snooze-delay (target residency)
      values for each cpu.
      
      In the current code, the smt-snooze-delay functionality is completely broken.
      It makes sense to remove smt-snooze-delay from the idle driver with the
      coming of the cpuidle framework.
      However, the sysfs files are retained as ppc64_util currently
      utilises them. Once we fix ppc64_util, we propose to clean
      up the kernel code.
      Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      3fa8cad8
    • powerpc/pseries/cpuidle: Move processor_idle.c to drivers/cpuidle. · 962e7bd4
      Committed by Deepthi Dharwar
      Move the file from the arch-specific pseries/processor_idle.c
      to drivers/cpuidle/cpuidle-pseries.c, and
      make the relevant Makefile and Kconfig changes.
      Also, introduce Kconfig.powerpc in drivers/cpuidle
      for all powerpc cpuidle drivers.
      Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      962e7bd4
  14. 15 Jan 2014, 1 commit
    • powerpc: Don't corrupt transactional state when using FP/VMX in kernel · d31626f7
      Committed by Paul Mackerras
      Currently, when we have a process using the transactional memory
      facilities on POWER8 (that is, the processor is in transactional
      or suspended state), and the process enters the kernel and the
      kernel then uses the floating-point or vector (VMX/Altivec) facility,
      we end up corrupting the user-visible FP/VMX/VSX state.  This
      happens, for example, if a page fault causes a copy-on-write
      operation, because the copy_page function will use VMX to do the
      copy on POWER8.  The test program below demonstrates the bug.
      
      The bug happens because when FP/VMX state for a transactional process
      is stored in the thread_struct, we store the checkpointed state in
      .fp_state/.vr_state and the transactional (current) state in
      .transact_fp/.transact_vr.  However, when the kernel wants to use
      FP/VMX, it calls enable_kernel_fp() or enable_kernel_altivec(),
      which saves the current state in .fp_state/.vr_state.  Furthermore,
      when we return to the user process we return with FP/VMX/VSX
      disabled.  The next time the process uses FP/VMX/VSX, we don't know
      which set of state (the current register values, .fp_state/.vr_state,
      or .transact_fp/.transact_vr) we should be using, since we have no
      way to tell if we are still in the same transaction, and if not,
      whether the previous transaction succeeded or failed.
      
      Thus it is necessary to strictly adhere to the rule that if FP has
      been enabled at any point in a transaction, we must keep FP enabled
      for the user process with the current transactional state in the
      FP registers, until we detect that it is no longer in a transaction.
      Similarly for VMX; once enabled it must stay enabled until the
      process is no longer transactional.
      
      In order to keep this rule, we add a new thread_info flag which we
      test when returning from the kernel to userspace, called TIF_RESTORE_TM.
      This flag indicates that there is FP/VMX/VSX state to be restored
      before entering userspace, and when it is set the .tm_orig_msr field
      in the thread_struct indicates what state needs to be restored.
      The restoration is done by restore_tm_state().  The TIF_RESTORE_TM
      bit is set by new giveup_fpu/altivec_maybe_transactional helpers,
      which are called from enable_kernel_fp/altivec, giveup_vsx, and
      flush_fp/altivec_to_thread instead of giveup_fpu/altivec.
      
      The other thing to be done is to get the transactional FP/VMX/VSX
      state from .fp_state/.vr_state when doing reclaim, if that state
      has been saved there by giveup_fpu/altivec_maybe_transactional.
      Having done this, we set the FP/VMX bit in the thread's MSR after
      reclaim to indicate that that part of the state is now valid
      (having been reclaimed from the processor's checkpointed state).
      
      Finally, in the signal handling code, we move the clearing of the
      transactional state bits in the thread's MSR a bit earlier, before
      calling flush_fp_to_thread(), so that we don't unnecessarily set
      the TIF_RESTORE_TM bit.
      
      This is the test program:
      
      /* Michael Neuling 4/12/2013
       *
       * See if the altivec state is leaked out of an aborted transaction due to
       * kernel vmx copy loops.
       *
       *   gcc -m64 htm_vmxcopy.c -o htm_vmxcopy
       *
       */
      
       /* We don't use all of these, but for reference: */
       #include <assert.h>
       #include <stdint.h>
       #include <stdio.h>
       #include <stdlib.h>
       #include <string.h>
       #include <sys/mman.h>
       #include <unistd.h>

       /* Raw opcodes for the TM instructions used below
        * (tbegin. / tend. / tsuspend. / tresume. / tabort.) */
       #define TBEGIN   ".long 0x7C00051D ;"
       #define TEND     ".long 0x7C00055D ;"
       #define TSUSPEND ".long 0x7C0005DD ;"
       #define TRESUME  ".long 0x7C2005DD ;"
       #define TABORT   ".long 0x7C00071D ;"

      int main(int argc, char *argv[])
      {
      	long double vecin = 1.3;
      	long double vecout;
      	unsigned long pgsize = getpagesize();
      	int i;
      	int fd;
      	int size = pgsize*16;
      	char tmpfile[] = "/tmp/page_faultXXXXXX";
      	char buf[pgsize];
      	char *a;
      	uint64_t aborted = 0;
      
      	fd = mkstemp(tmpfile);
      	assert(fd >= 0);
      
      	memset(buf, 0, pgsize);
      	for (i = 0; i < size; i += pgsize)
      		assert(write(fd, buf, pgsize) == pgsize);
      
      	unlink(tmpfile);
      
      	a = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
      	assert(a != MAP_FAILED);
      
      	asm __volatile__(
      		"lxvd2x 40,0,%[vecinptr] ; " // set 40 to initial value
      		TBEGIN
      		"beq	3f ;"
      		TSUSPEND
      		"xxlxor 40,40,40 ; " // set 40 to 0
      		"std	5, 0(%[map]) ;" // cause kernel vmx copy page
      		TABORT
      		TRESUME
      		TEND
      		"li	%[res], 0 ;"
      		"b	5f ;"
      		"3: ;" // Abort handler
      		"li	%[res], 1 ;"
      		"5: ;"
      		"stxvd2x 40,0,%[vecoutptr] ; "
      		: [res]"=r"(aborted)
      		: [vecinptr]"r"(&vecin),
      		  [vecoutptr]"r"(&vecout),
      		  [map]"r"(a)
      		: "memory", "r0", "r3", "r4", "r5", "r6", "r7");
      
      	if (aborted && (vecin != vecout)){
      		printf("FAILED: vector state leaked on abort %f != %f\n",
      		       (double)vecin, (double)vecout);
      		exit(1);
      	}
      
      	munmap(a, size);
      
      	close(fd);
      
      	printf("PASSED!\n");
      	return 0;
      }
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      d31626f7
  15. 08 Jan 2014, 1 commit
    • powerpc: fix exception clearing in e500 SPE float emulation · 640e9225
      Committed by Joseph Myers
      The e500 SPE floating-point emulation code clears existing exceptions
      (__FPU_FPSCR &= ~FP_EX_MASK;) before ORing in the exceptions from the
      emulated operation.  However, these exception bits are the "sticky",
      cumulative exception bits, and should only be cleared by the user
      program setting SPEFSCR, not implicitly by any floating-point
      instruction (whether executed purely by the hardware or emulated).
      The spurious clearing of these bits shows up as missing exceptions in
      glibc testing.
      
      Fixing this, however, is not as simple as just not clearing the bits,
      because while the bits may be from previous floating-point operations
      (in which case they should not be cleared), the processor can also set
      the sticky bits itself before the interrupt for an exception occurs,
      and this can happen in cases when IEEE 754 semantics are that the
      sticky bit should not be set.  Specifically, the "invalid" sticky bit
      is set in various cases with non-finite operands, where IEEE 754
      semantics do not involve raising such an exception, and the
      "underflow" sticky bit is set in cases of exact underflow, whereas
      IEEE 754 semantics are that this flag is set only for inexact
      underflow.  Thus, for correct emulation the kernel needs to know the
      setting of these two sticky bits before the instruction being
      emulated.
      
      When a floating-point operation raises an exception, the kernel can
      note the state of the sticky bits immediately afterwards.  Some
      <fenv.h> functions that affect the state of these bits, such as
      fesetenv and feholdexcept, need to use prctl with PR_GET_FPEXC and
      PR_SET_FPEXC anyway, and so it is natural to record the state of those
      bits during that call into the kernel and so avoid any need for a
      separate call into the kernel to inform it of a change to those bits.
      Thus, the interface I chose to use (in this patch and the glibc port)
      is that one of those prctl calls must be made after any userspace
      change to those sticky bits, other than through a floating-point
      operation that traps into the kernel anyway.  feclearexcept and
      fesetexceptflag duly make those calls, which would not be required
      were it not for this issue.
      
      The previous EGLIBC port, and the uClibc code copied from it, is
      fundamentally broken as regards any use of prctl for floating-point
      exceptions because it didn't use the PR_FP_EXC_SW_ENABLE bit in its
      prctl calls (and did various worse things, such as passing a pointer
      when prctl expected an integer).  If you avoid anything where prctl is
      used, the clearing of sticky bits still means it will never give
      anything approximating correct exception semantics with existing
      kernels.  I don't believe the patch makes things any worse for
      existing code that doesn't try to inform the kernel of changes to
      sticky bits - such code may get incorrect exceptions in some cases,
      but it would have done so anyway in other cases.
      Signed-off-by: Joseph Myers <joseph@codesourcery.com>
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      640e9225
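      From the userspace side, the contract described above means an <fenv.h>-style
      function that alters the sticky bits without trapping must follow up with one
      of those prctl calls. A rough, simplified sketch (this is not glibc's actual
      feclearexcept(), the SPEFSCR manipulation itself is elided, and it is only
      meaningful on e500/SPE hardware):

      #include <stdio.h>
      #include <sys/prctl.h>   /* prctl(), PR_SET_FPEXC, PR_FP_EXC_SW_ENABLE */

      static void spe_clear_sticky_bits_and_tell_kernel(int current_mode)
      {
      	/* ... clear the SPEFSCR sticky exception bits here ... */

      	/* Re-issue the current software floating-point exception mode; the side
      	 * effect the kernel relies on is that it resamples the sticky-bit state
      	 * during this call. */
      	if (prctl(PR_SET_FPEXC, PR_FP_EXC_SW_ENABLE | current_mode) != 0)
      		perror("prctl(PR_SET_FPEXC)");
      }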
  16. 19 Oct 2013, 1 commit
  17. 17 Oct 2013, 1 commit
  18. 11 Oct 2013, 3 commits
    • powerpc: Provide for giveup_fpu/altivec to save state in alternate location · 18461960
      Committed by Paul Mackerras
      This provides a facility which is intended for use by KVM, where the
      contents of the FP/VSX and VMX (Altivec) registers can be saved away
      to somewhere other than the thread_struct when kernel code wants to
      use floating point or VMX instructions.  This is done by providing a
      pointer in the thread_struct to indicate where the state should be
      saved to.  The giveup_fpu() and giveup_altivec() functions test these
      pointers and save state to the indicated location if they are non-NULL.
      Note that the MSR_FP/VEC bits in task->thread.regs->msr are still used
      to indicate whether the CPU register state is live, even when an
      alternate save location is being used.
      
      This also provides load_fp_state() and load_vr_state() functions, which
      load up FP/VSX and VMX state from memory into the CPU registers, and
      corresponding store_fp_state() and store_vr_state() functions, which
      store FP/VSX and VMX state into memory from the CPU registers.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      18461960
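      The idea in C terms: thread_struct gains a pointer that, when non-NULL,
      redirects where giveup_fpu() saves. The structure and function names below are
      simplified stand-ins for illustration, not the exact arch/powerpc definitions.

      #include <stddef.h>
      #include <stdint.h>

      struct fp_state_model {
      	uint64_t fpr[32];
      	uint64_t fpscr;
      };

      struct thread_model {
      	struct fp_state_model  fp_state;        /* normal home of the FP state */
      	struct fp_state_model *fp_save_area;    /* KVM can point this elsewhere */
      };

      static void hw_store_fp_regs(struct fp_state_model *dst) { (void)dst; /* stores live FPRs */ }

      static void giveup_fpu_model(struct thread_model *t)
      {
      	/* Save to the alternate location if one is set, else to the thread_struct. */
      	struct fp_state_model *dst = t->fp_save_area ? t->fp_save_area : &t->fp_state;
      	hw_store_fp_regs(dst);
      	/* Whether the CPU registers are "live" is still tracked separately
      	 * (MSR_FP/MSR_VEC in the thread's saved MSR), regardless of where we saved. */
      }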
    • powerpc: Put FP/VSX and VR state into structures · de79f7b9
      Committed by Paul Mackerras
      This creates new 'thread_fp_state' and 'thread_vr_state' structures
      to store FP/VSX state (including FPSCR) and Altivec/VSX state
      (including VSCR), and uses them in the thread_struct.  In the
      thread_fp_state, the FPRs and VSRs are represented as u64 rather
      than double, since we rarely perform floating-point computations
      on the values, and this will enable the structures to be used
      in KVM code as well.  Similarly FPSCR is now a u64 rather than
      a structure of two 32-bit values.
      
      This takes the offsets out of the macros such as SAVE_32FPRS,
      REST_32FPRS, etc.  This enables the same macros to be used for normal
      and transactional state, enabling us to delete the transactional
      versions of the macros.   This also removes the unused do_load_up_fpu
      and do_load_up_altivec, which were in fact buggy since they didn't
      create large enough stack frames to account for the fact that
      load_up_fpu and load_up_altivec are not designed to be called from C
      and assume that their caller's stack frame is an interrupt frame.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      de79f7b9
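      The rough shape of the new containers, paraphrased below; the real definitions
      live in arch/powerpc/include/asm/processor.h, use the kernel's u64/vector128
      types, and only have the second fpr dimension (TS_FPRWIDTH) when VSX is
      configured.

      #include <stdint.h>

      typedef struct { uint32_t u[4]; } vector128;   /* stand-in for the kernel type */

      struct thread_fp_state {
      	uint64_t fpr[32][2] __attribute__((aligned(16)));  /* each FPR overlays a VSR half */
      	uint64_t fpscr;                                    /* now a single 64-bit word */
      };

      struct thread_vr_state {
      	vector128 vr[32] __attribute__((aligned(16)));
      	vector128 vscr;
      };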
    • powerpc: Fix offset of FPRs in VSX registers in little endian builds · e156bd8a
      Committed by Anton Blanchard
      The FPRs overlap the high doublewords of the first 32 VSX registers.
      Fix TS_FPROFFSET and TS_VSRLOWOFFSET so we access the correct fields
      in little endian mode.
      
      If VSX is disabled the FPRs are only one doubleword in length so
      TS_FPROFFSET needs adjusting in little endian.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      e156bd8a
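      The fix boils down to making the doubleword index endian-dependent. A sketch of
      the VSX case (the real macros sit in asm/processor.h and also cover the
      non-VSX layout): each FPR is the high doubleword of a two-doubleword VSR, so
      the array index of that doubleword flips with endianness.

      #ifdef __BIG_ENDIAN__
      #define TS_FPROFFSET		0	/* high doubleword is stored first */
      #define TS_VSRLOWOFFSET	1
      #else
      #define TS_FPROFFSET		1	/* little endian: it is stored second */
      #define TS_VSRLOWOFFSET	0
      #endif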
  19. 25 Sep 2013, 1 commit
    • powerpc: Remove ksp_limit on ppc64 · cbc9565e
      Committed by Benjamin Herrenschmidt
      We've been keeping that field in thread_struct for a while, it contains
      the "limit" of the current stack pointer and is meant to be used for
      detecting stack overflows.
      
      It has a few problems however:
      
       - First, it was never actually *used* on 64-bit. Set and updated but
      not actually exploited
      
       - When switching stacks to/from irq and softirq stacks, its update
      is racy unless we hard-disable interrupts, which is costly. This
      is fine on 32-bit as we don't soft-disable there, but not on 64-bit.
      
      Thus rather than fixing 2 in order to implement 1 in some hypothetical
      future, let's remove the code completely from 64-bit. In order to avoid
      a clutter of ifdef's, we remove the updates from C code completely
      during interrupt stack switching, and instead maintain it from the
      asm helper that is used to do the stack switching in the first place.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      cbc9565e
  20. 09 Aug 2013, 1 commit
    • powerpc/tm: Fix context switching TAR, PPR and DSCR SPRs · 28e61cc4
      Committed by Michael Neuling
      If a transaction is rolled back, the Target Address Register (TAR), Processor
      Priority Register (PPR) and Data Stream Control Register (DSCR) should be
      restored to the checkpointed values before the transaction began.  Any changes
      to these SPRs inside the transaction should not be visible in the abort
      handler.
      
      Currently Linux doesn't save or restore the checkpointed TAR, PPR or DSCR.  If
      we preempt a process inside a transaction which has modified any of these, then on
      process restore that same transaction may be aborted, but we won't see the
      checkpointed versions of these SPRs.
      
      This adds checkpointed versions of these SPRs to the thread_struct and adds the
      save/restore of these three SPRs to the treclaim/trechkpt code.
      
      Without this if any of these SPRs are modified during a transaction, users may
      incorrectly see a speculated SPR value even if the transaction is aborted.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Cc: <stable@vger.kernel.org> [v3.10]
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      28e61cc4
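      In thread_struct terms this adds three more checkpointed fields that the
      treclaim path saves and the trechkpt path restores. A minimal sketch with
      illustrative names close to, but not necessarily identical to, the real ones:

      #include <stdint.h>

      struct tm_checkpointed_sprs {
      	uint64_t tm_tar;    /* TAR  as it was when the transaction started */
      	uint64_t tm_ppr;    /* PPR  likewise */
      	uint64_t tm_dscr;   /* DSCR likewise */
      };

      /* treclaim path: capture the checkpointed values alongside the GPR/FPR state. */
      static void tm_reclaim_sprs(struct tm_checkpointed_sprs *ck,
      			    uint64_t tar, uint64_t ppr, uint64_t dscr)
      {
      	ck->tm_tar = tar;
      	ck->tm_ppr = ppr;
      	ck->tm_dscr = dscr;
      }

      /* trechkpt path: put them back so an abort handler sees pre-transaction values. */
      static void tm_recheckpoint_sprs(const struct tm_checkpointed_sprs *ck,
      				 uint64_t *tar, uint64_t *ppr, uint64_t *dscr)
      {
      	*tar = ck->tm_tar;
      	*ppr = ck->tm_ppr;
      	*dscr = ck->tm_dscr;
      }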
  21. 01 Jul 2013, 2 commits
  22. 20 Jun 2013, 3 commits
    • powerpc: Align thread->fpr to 16 bytes · 475e68cf
      Committed by Anton Blanchard
      On newer CPUs we use VSX loads and stores to the thread->fpr array.
      For best performance we need to ensure 16 byte alignment.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      475e68cf
    • B
    • powerpc/mm: Make mmap_64.c compile on 32bit powerpc · d5d8ec89
      Committed by Daniel Walker
      There appears to be no good reason to keep this as 64bit only. It works
      on 32bit also, and has checks so that it can work correctly with 32bit
      binaries on 64bit hardware, which is why I think this works.
      
      I tested this on qemu using the virtex-ml507 machine type.
      
      Before,
      
      /bin2 # ./test & cat /proc/${!}/maps
      00100000-00103000 r-xp 00000000 00:00 0          [vdso]
      10000000-10007000 r-xp 00000000 00:01 454        /bin2/test
      10017000-10018000 rw-p 00007000 00:01 454        /bin2/test
      48000000-48020000 r-xp 00000000 00:01 224        /lib/ld-2.11.3.so
      48021000-48023000 rw-p 00021000 00:01 224        /lib/ld-2.11.3.so
      bfd03000-bfd24000 rw-p 00000000 00:00 0          [stack]
      /bin2 # ./test & cat /proc/${!}/maps
      00100000-00103000 r-xp 00000000 00:00 0          [vdso]
      0fe6e000-0ffd8000 r-xp 00000000 00:01 214        /lib/libc-2.11.3.so
      0ffd8000-0ffe8000 ---p 0016a000 00:01 214        /lib/libc-2.11.3.so
      0ffe8000-0ffed000 rw-p 0016a000 00:01 214        /lib/libc-2.11.3.so
      0ffed000-0fff0000 rw-p 00000000 00:00 0
      10000000-10007000 r-xp 00000000 00:01 454        /bin2/test
      10017000-10018000 rw-p 00007000 00:01 454        /bin2/test
      48000000-48020000 r-xp 00000000 00:01 224        /lib/ld-2.11.3.so
      48020000-48021000 rw-p 00000000 00:00 0
      48021000-48023000 rw-p 00021000 00:01 224        /lib/ld-2.11.3.so
      bf98a000-bf9ab000 rw-p 00000000 00:00 0          [stack]
      /bin2 # ./test & cat /proc/${!}/maps
      00100000-00103000 r-xp 00000000 00:00 0          [vdso]
      0fe6e000-0ffd8000 r-xp 00000000 00:01 214        /lib/libc-2.11.3.so
      0ffd8000-0ffe8000 ---p 0016a000 00:01 214        /lib/libc-2.11.3.so
      0ffe8000-0ffed000 rw-p 0016a000 00:01 214        /lib/libc-2.11.3.so
      0ffed000-0fff0000 rw-p 00000000 00:00 0
      10000000-10007000 r-xp 00000000 00:01 454        /bin2/test
      10017000-10018000 rw-p 00007000 00:01 454        /bin2/test
      48000000-48020000 r-xp 00000000 00:01 224        /lib/ld-2.11.3.so
      48020000-48021000 rw-p 00000000 00:00 0
      48021000-48023000 rw-p 00021000 00:01 224        /lib/ld-2.11.3.so
      bfa54000-bfa75000 rw-p 00000000 00:00 0          [stack]
      
      After,
      
      bash-4.1# ./test & cat /proc/${!}/maps
      [7] 803
      00100000-00103000 r-xp 00000000 00:00 0          [vdso]
      10000000-10007000 r-xp 00000000 00:01 454        /bin2/test
      10017000-10018000 rw-p 00007000 00:01 454        /bin2/test
      b7eb0000-b7ed0000 r-xp 00000000 00:01 224        /lib/ld-2.11.3.so
      b7ed1000-b7ed3000 rw-p 00021000 00:01 224        /lib/ld-2.11.3.so
      bfbc0000-bfbe1000 rw-p 00000000 00:00 0          [stack]
      bash-4.1# ./test & cat /proc/${!}/maps
      [8] 805
      00100000-00103000 r-xp 00000000 00:00 0          [vdso]
      10000000-10007000 r-xp 00000000 00:01 454        /bin2/test
      10017000-10018000 rw-p 00007000 00:01 454        /bin2/test
      b7b03000-b7b23000 r-xp 00000000 00:01 224        /lib/ld-2.11.3.so
      b7b24000-b7b26000 rw-p 00021000 00:01 224        /lib/ld-2.11.3.so
      bfc27000-bfc48000 rw-p 00000000 00:00 0          [stack]
      bash-4.1# ./test & cat /proc/${!}/maps
      [9] 807
      00100000-00103000 r-xp 00000000 00:00 0          [vdso]
      10000000-10007000 r-xp 00000000 00:01 454        /bin2/test
      10017000-10018000 rw-p 00007000 00:01 454        /bin2/test
      b7f37000-b7f57000 r-xp 00000000 00:01 224        /lib/ld-2.11.3.so
      b7f58000-b7f5a000 rw-p 00021000 00:01 224        /lib/ld-2.11.3.so
      bff96000-bffb7000 rw-p 00000000 00:00 0          [stack]
      Signed-off-by: Daniel Walker <dwalker@fifo90.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      d5d8ec89
  23. 01 Jun 2013, 1 commit
    • powerpc/tm: Fix userspace stack corruption on signal delivery for active transactions · 2b3f8e87
      Committed by Michael Neuling
      When in an active transaction that takes a signal, we need to be careful with
      the stack.  It's possible that the stack has moved back up after the tbegin.
      The obvious case here is when the tbegin is called inside a function that
      returns before a tend.  In this case, the stack is part of the checkpointed
      transactional memory state.  If we write over this non-transactionally or in
      suspend, we are in trouble because if we get a tm abort, the program counter
      and stack pointer will be back at the tbegin but our in-memory stack won't be
      valid anymore.
      
      To avoid this, when taking a signal in an active transaction, we need to use
      the stack pointer from the checkpointed state, rather than the speculated
      state.  This ensures that the signal context (written while tm suspended) will be
      written below the stack required for the rollback.  The transaction is aborted
      because of the treclaim, so any memory written between the tbegin and the
      signal will be rolled back anyway.
      
      For signals taken in non-TM or suspended mode, we use the
      normal/non-checkpointed stack pointer.
      
      Tested with 64 and 32 bit signals
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Cc: <stable@vger.kernel.org> # v3.9
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      2b3f8e87
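      The core of the fix, modelled in plain C: pick the checkpointed GPR1 as the
      base for the signal frame when the interrupted context was in an active
      transaction, otherwise the ordinary one. The structures and names below are
      approximations for illustration, not the actual 32/64-bit signal handler code.

      #include <stdint.h>
      #include <stdbool.h>

      struct regs_model {
      	uint64_t gpr1;          /* stack pointer in this register set */
      	bool     tm_active;     /* models MSR_TM_ACTIVE(regs->msr) */
      };

      struct thread_model {
      	struct regs_model regs;       /* current (speculative) register state */
      	struct regs_model ckpt_regs;  /* checkpointed state from the tbegin */
      };

      /* Choose where to build the signal frame. */
      static uint64_t signal_frame_base(const struct thread_model *t)
      {
      	/* In an active transaction the speculative stack may already have been
      	 * unwound past the tbegin; the checkpointed r1 is the only safe base. */
      	if (t->regs.tm_active)
      		return t->ckpt_regs.gpr1;
      	return t->regs.gpr1;          /* non-TM or suspended: use the normal sp */
      }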
  24. 24 May 2013, 1 commit
  25. 02 May 2013, 1 commit
  26. 18 Apr 2013, 1 commit
  27. 15 Feb 2013, 2 commits
  28. 08 Feb 2013, 1 commit
  29. 10 Jan 2013, 2 commits