1. 22 Feb 2012, 1 commit
    • powerpc: Fix various issues with return to userspace · 18b246fa
      Committed by Benjamin Herrenschmidt
      We have a few problems when returning to userspace. This is a
      quick set of fixes for 3.3; I'll look into a more comprehensive
      rework for 3.4. This fixes:
      
       - We kept interrupts soft-disabled when calling schedule() or
      do_signal() on the way back to userspace as a result of a hardware
      interrupt.
      
       - Rename do_signal to do_notify_resume like all other archs (and
      do_signal_pending back to do_signal, which it was before Roland
      changed it).
      
       - Add the missing call to key_replace_session_keyring() to
      do_notify_resume().
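      
      A minimal sketch of the shape this gives do_notify_resume(); the
      tracehook call and the replacement_session_keyring test are
      assumptions based on what other archs did at the time, not quoted
      from this patch:
      
        void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags)
        {
                if (thread_info_flags & _TIF_SIGPENDING)
                        do_signal(regs);        /* was do_signal_pending() */
      
                if (thread_info_flags & _TIF_NOTIFY_RESUME) {
                        clear_thread_flag(TIF_NOTIFY_RESUME);
                        tracehook_notify_resume(regs);
                        /* the previously missing session keyring fixup */
                        if (current->replacement_session_keyring)
                                key_replace_session_keyring();
                }
        }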
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ---
      18b246fa
  2. 16 Nov 2011, 1 commit
    • powerpc/trace: Add a dummy stack frame for trace_hardirqs_off · 2cd76629
      Committed by Kevin Hao
      trace_hardirqs_off() uses CALLER_ADDR0 and CALLER_ADDR1. If an
      exception occurs in user mode, there is only one stack frame on
      the stack, and accessing CALLER_ADDR1 causes the call trace below.
      So we create a dummy stack frame to make trace_hardirqs_off happy.
      
      WARNING: at kernel/smp.c:459
      Modules linked in:
      NIP: c0093280 LR: c00930a0 CTR: c0010780
      REGS: edb87ae0 TRAP: 0700   Not tainted  (3.1.0)
      MSR: 00021002 <ME,CE>  CR: 28002888  XER: 00000000
      TASK = edce2ac0[17658] 'mthread-lock-on' THREAD: edb86000 CPU: 5
      GPR00: 00000001 edb87b90 edce2ac0 00000005 c0019594 edb87bd8 00000001 00000fe3
      GPR08: 00041000 c084138c 4e20120d edb87b90 48002888 1001aa7c 00000000 00000000
      GPR16: 48830000 10012a8c 00000000 10000af4 00000001 c0810000 00000000 00000000
      GPR24: ee9aa920 c0816a18 00000000 00000005 c0019594 edb87bd8 ee20178c edb87b90
      NIP [c0093280] smp_call_function_many+0x214/0x2b4
      LR [c00930a0] smp_call_function_many+0x34/0x2b4
      Call Trace:
      [edb87b90] [c00930a0] smp_call_function_many+0x34/0x2b4 (unreliable)
      [edb87bd0] [c00194ec] __flush_tlb_page+0xac/0x100
      [edb87c00] [c001957c] flush_tlb_page+0x3c/0x54
      [edb87c10] [c00180ac] ptep_set_access_flags+0x74/0x12c
      [edb87c40] [c0128068] handle_pte_fault+0x2f0/0x9ac
      [edb87cb0] [c0128c3c] handle_mm_fault+0x104/0x1dc
      [edb87ce0] [c05f40f4] do_page_fault+0x2dc/0x630
      [edb87e50] [c001078c] handle_page_fault+0xc/0x80
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      2cd76629
  3. 21 Jan 2011, 1 commit
  4. 29 Nov 2010, 1 commit
  5. 05 May 2010, 1 commit
  6. 20 Aug 2009, 1 commit
    • powerpc: Use names rather than numbers for SPRGs (v2) · ee43eb78
      Committed by Benjamin Herrenschmidt
      The kernel uses SPRG registers for various purposes, typically in
      low level assembly code as scratch registers or to hold per-CPU
      globals such as the PACA or the current thread_info pointer.
      
      We want to be able to easily shuffle the usage of those registers,
      since some implementations have specific constraints related to
      some of them (for example, some have userspace-readable aliases),
      and the current choice isn't always the best.
      
      This patch should not change any code generation. It replaces every
      use of SPRN_SPRGn in the kernel with a named alias and adds
      documentation next to the definitions of the names describing what
      each is used for on each processor family.
      
      The only parts that still use the original numbers are bits of KVM
      or suspend/resume code that just blindly needs to save/restore all
      the SPRGs.
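      
      Illustratively, the renaming boils down to aliases of this kind in
      reg.h; the exact names and SPRG assignments below are examples only,
      since the real mapping is chosen (and documented) per processor
      family:
      
        /* hypothetical examples of the named aliases */
        #define SPRN_SPRG_SCRATCH0      SPRN_SPRG0      /* low level scratch */
        #define SPRN_SPRG_SCRATCH1      SPRN_SPRG1      /* low level scratch */
        #define SPRN_SPRG_THREAD        SPRN_SPRG3      /* current thread_info */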
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ee43eb78
  7. 26 Jun 2009, 1 commit
    • powerpc: Add irqtrace support for 32-bit powerpc · 5d38902c
      Committed by Benjamin Herrenschmidt
      Based on initial work from: Dale Farnsworth <dale@farnsworth.org>
      
      Add the low level irq tracing hooks for 32-bit powerpc needed
      to enable full lockdep functionality.
      
      The approach taken to deal with the code in entry_32.S is that we
      don't trace every transition of MSR:EE when we merely turn it off
      to peek at TI_FLAGS without races; we only trace when calling into
      C code or returning from an exception with a state that has changed
      from what lockdep thinks it is.
      
      There's a little bugger though: if we take an exception that keeps
      interrupts enabled (such as an alignment exception) while interrupts
      are enabled, we will spuriously call trace_hardirqs_on() on the way
      back. Not a big deal, but getting rid of it would require remembering
      in pt_regs that the exception was one of the types that keep
      interrupts enabled, which we don't know at this stage. (Well, we
      could test all cases of regs->trap, but that sucks too much.)
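      
      In C terms, the assembly glue added here ends up doing the moral
      equivalent of the following around the places where the soft state
      changes (a sketch, not the actual entry_32.S code):
      
        if (msr & MSR_EE)               /* interrupts (about to be) enabled */
                trace_hardirqs_on();
        else                            /* interrupts (about to be) disabled */
                trace_hardirqs_off();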
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Tested-by: Kumar Gala <galak@kernel.crashing.org>
      5d38902c
  8. 23 Feb 2009, 4 commits
  9. 13 Feb 2009, 1 commit
  10. 28 Nov 2008, 1 commit
    • powerpc: ftrace, do nothing in mcount call for dyn ftrace · c7b0d173
      Committed by Steven Rostedt
      Impact: quicken mcount calls that are not replaced by dyn ftrace
      
      Dynamic ftrace no longer does on-the-fly recording of mcount
      locations; they are now found at compile time. The mcount function
      no longer needs to store registers and call a stub function, so it
      can now simply return.
      
      Since there are some functions that do not get converted to a nop
      (.init sections and other code that may disappear), this patch should
      help speed up that code.
      
      Also, the stub for mcount on PowerPC 32 cannot simply branch to the
      link register as it can on PowerPC 64. According to the ABI
      specification:
      
      "The _mcount routine is required to restore the link register from
       the stack so that the profiling code can be inserted transparently,
       whether or not the profiled function saves the link register itself."
      
      This means that we must restore the link register that was used
      to make the call to mcount.  The minimal mcount function for PPC32
      ends up being:
      
       mcount:
              mflr    r0          /* r0 = LR, the address of the mcount call site */
              mtctr   r0          /* keep it in CTR for the final branch */
              lwz     r0, 4(r1)   /* reload the LR value the profiled function saved on the stack */
              mtlr    r0          /* restore LR, as the ABI requires */
              bctr                /* return to the profiled function, just after the mcount call */
      
      We move the link register used to call mcount into the ctr
      register, then restore the link register from the stack, and
      finally use the ctr register to jump back to the mcount caller.
      The r0 register is free for us to use.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c7b0d173
  11. 21 Oct 2008, 1 commit
  12. 28 Jul 2008, 3 commits
  13. 26 Jul 2008, 1 commit
  14. 25 Jul 2008, 1 commit
  15. 26 Jun 2008, 1 commit
  16. 24 Jun 2008, 1 commit
  17. 03 Jun 2008, 2 commits
    • [POWERPC] 40x/Book-E: Save/restore volatile exception registers · fca622c5
      Committed by Kumar Gala
      On machines with more than one exception level, any system register
      that might be modified by the "normal" exception level needs to be
      saved and restored when taking a higher-level exception.  We already
      save and restore ESR and DEAR.
      
      For critical level add SRR0/1.
      For debug level add CSRR0/1 and SRR0/1.
      For machine check level add DSRR0/1, CSRR0/1, and SRR0/1.
      
      On FSL Book-E parts we always save/restore the MAS registers for critical,
      debug, and machine check level exceptions.  On 44x we always save/restore
      the MMUCR.
      
      Additionally, we save and restore the ksp_limit since we have to adjust it
      for each exception level.
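      
      Conceptually, each higher exception level now has to preserve state
      like the following across its handler; this is only a sketch, since
      the real save slots live in the assembly-defined exception frame and
      the field names here are illustrative:
      
        struct exc_level_save {
                unsigned long srr0, srr1;       /* critical, debug and machine check */
                unsigned long csrr0, csrr1;     /* debug and machine check only */
                unsigned long dsrr0, dsrr1;     /* machine check only */
                unsigned long ksp_limit;        /* adjusted per exception level */
        };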
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Acked-by: Paul Mackerras <paulus@samba.org>
      fca622c5
    • [POWERPC] Rework EXC_LEVEL_EXCEPTION_PROLOG code · 369e757b
      Committed by Kumar Gala
      * Clean up the code a bit by allocating an INT_FRAME on our exception
        stack, thereby making references go from GPR11-INT_FRAME_SIZE(r8)
        to just GPR11(r8)
      * Simplify the {lvl}_transfer_to_handler code by moving the copying
        of the temp registers we use when we come from user space into the
        PROLOG
      * If the exception came from kernel mode, copy the thread_info flags,
        preempt count, and task pointer from the process thread_info.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Acked-by: Paul Mackerras <paulus@samba.org>
      369e757b
  18. 27 May 2008, 1 commit
    • ftrace: powerpc clean ups · ccbfac29
      Committed by Steven Rostedt
      This patch cleans up the ftrace code in PowerPC based on the comments from
      Michael Ellerman.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Cc: proski@gnu.org
      Cc: a.p.zijlstra@chello.nl
      Cc: Pekka Paalanen <pq@iki.fi>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: linuxppc-dev@ozlabs.org
      Cc: Soeren Sandmann Pedersen <sandmann@redhat.com>
      Cc: paulus@samba.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      ccbfac29
  19. 24 May 2008, 1 commit
  20. 16 May 2008, 1 commit
    • [POWERPC] Defer processing of interrupts when the CPU wakes from sleep mode · a560643e
      Committed by Paul Mackerras
      This provides a way to defer processing of an interrupt that wakes the
      processor out of sleep mode.  On 32-bit platforms that use an
      interrupt to wake the processor, we have to have interrupts enabled in
      hardware at the point where we go to sleep, otherwise the processor
      will never wake up.  However, because interrupts are logically
      disabled at this point, we don't want to process the interrupt
      straight away.
      
      This is handled by setting the _TLF_SLEEPING flag.  When we get an
      interrupt and _TLF_SLEEPING is set, we first clear the MSR_EE
      (external interrupt enable) bit in the saved MSR value, and then
      return to the address in the link register, as we do for
      _TLF_NAPPING, but without actually handling the interrupt.
      
      Note that this is handled somewhat differently on powerbooks, so this
      new code will only be used on non-Apple machines.
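      
      In C terms the handling described above amounts to roughly this
      (a sketch; the real test is done in the assembly exception entry
      path):
      
        if (current_thread_info()->local_flags & _TLF_SLEEPING) {
                current_thread_info()->local_flags &= ~_TLF_SLEEPING;
                regs->msr &= ~MSR_EE;   /* keep the interrupt logically disabled */
                regs->nip = regs->link; /* resume at the saved link register */
                return;                 /* defer the actual interrupt handling */
        }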
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      a560643e
  21. 14 May 2008, 1 commit
  22. 29 Apr 2008, 1 commit
    • [POWERPC] Add IRQSTACKS support on ppc32 · 85218827
      Committed by Kumar Gala
      This makes it possible to use separate stacks for hard and soft IRQs
      on 32-bit powerpc as well as on 64-bit.  The code for 32-bit is just
      the 32-bit analog of the 64-bit code.
      
      * Added allocation and initialization of the irq stacks.  We limit the
        stacks to be in lowmem for ppc32.
      * Implemented ppc32 versions of call_do_softirq() and call_handle_irq()
        to switch the stack pointers
      * Reworked how we do stack overflow detection.  We now keep the
        stack limit in the thread_struct and compare against it to see
        whether we've overflowed (sketched below).  We can now use this
        on ppc64 if desired.
      
      [ paulus@samba.org: Fixed bug on 6xx where we need to reload r9 with the
        thread_info pointer. ]
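      
      A sketch of the reworked overflow check mentioned in the last bullet
      above (the 2KB slack and the helper name are illustrative, not taken
      from the patch):
      
        static void check_stack_overflow(unsigned long sp)
        {
                /* warn when the stack pointer gets within 2KB of the
                 * limit recorded in the thread_struct */
                if (unlikely(sp < current->thread.ksp_limit + 2048)) {
                        printk(KERN_WARNING "stack overflow: sp=%lx limit=%lx\n",
                               sp, current->thread.ksp_limit);
                        dump_stack();
                }
        }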
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      85218827
  23. 17 Apr 2008, 1 commit
    • [POWERPC] Make Book-E debug handling SMP safe · 4eaddb4d
      Committed by Kumar Gala
      global_dbcr0 needs to be a per-CPU set of save areas instead of a
      single global shared by all processors.
      
      Also, we switch to using DBCR0_IDM to determine whether the userspace
      app is being debugged, as that is more consistent.  In the future we
      should support features like hardware breakpoints and watchpoints,
      which will have DBCR0_IDM set but not necessarily DBCR0_IC (single
      step).
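      
      In C terms the per-CPU change is essentially the following; the real
      save area is declared in entry_32.S with a .space directive, so this
      is only a conceptual sketch:
      
        /* before: one save area shared by every CPU */
        /* static unsigned long global_dbcr0[2]; */
      
        /* after: one save area per CPU */
        static unsigned long global_dbcr0[NR_CPUS][2];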
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      4eaddb4d
  24. 13 Nov 2007, 1 commit
    • [POWERPC] Avoid unpaired stwcx. on some processors · b64f87c1
      Committed by Becky Bruce
      The context switch code in the kernel issues a dummy stwcx. to clear
      the reservation, as recommended by the architecture.  However, some
      processors can have issues if this stwcx. to address A occurs while
      a reservation is already held on a different address B.  To avoid
      this problem, the dummy stwcx. needs to be paired with a dummy lwarx
      to the same address.
      
      This adds the dummy lwarx, and creates a CPU feature bit to indicate
      which CPUs are affected.  Tested on mpc8641_hpcn_defconfig in
      arch/powerpc; build tested in arch/ppc.
      Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      b64f87c1
  25. 01 Nov 2007, 1 commit
    • [POWERPC] 4xx: Deal with 44x virtually tagged icache · b98ac05d
      Committed by Benjamin Herrenschmidt
      The 44x family has an interesting "feature": a virtually tagged
      instruction cache (yuck!).  So far we haven't dealt with it properly,
      which means we've been mostly lucky, or people didn't report the
      problems, unless they have been running custom patches in their
      distro...
      
      This is an attempt at fixing it properly.  I chose to do it by
      setting a global flag whenever we change a PTE that was previously
      marked executable, and flushing the entire instruction cache upon
      return to user space when that flag is set.
      
      This is a bit heavy-handed, but it's hard to do more fine-grained
      flushes: the icbi instruction on those processors, for some very
      strange reason (since the cache is virtually mapped), still requires
      a valid TLB entry for reading in the target address space, which
      isn't something I want to deal with.
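      
      Schematically, the PTE-side trigger looks like this; the flag,
      helper and exec-bit names are illustrative rather than what the
      patch actually uses:
      
        /* set whenever a PTE that was marked executable gets changed */
        int icache_44x_need_flush;
      
        static inline void note_exec_pte_change(pte_t old_pte)
        {
                if (pte_val(old_pte) & _PAGE_EXEC)
                        icache_44x_need_flush = 1;
        }
      
        /* the exception return path then flushes the whole icache and
         * clears the flag before dropping back to userspace */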
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
      b98ac05d
  26. 14 Sep 2007, 1 commit
  27. 17 May 2007, 1 commit
    • [POWERPC] Fix COMMON symbol warnings · 991eb43a
      Committed by Kumar Gala
      We get the following warnings in various ARCH=powerpc builds:
      
      WARNING: "ee_restarts" [arch/powerpc/kernel/built-in] is COMMON symbol
      WARNING: "fee_restarts" [arch/powerpc/kernel/built-in] is COMMON symbol
      WARNING: "htab_hash_searches" [arch/powerpc/mm/built-in] is COMMON symbol
      WARNING: "next_slot" [arch/powerpc/mm/built-in] is COMMON symbol
      WARNING: "mmu_hash_lock" [arch/powerpc/mm/built-in] is COMMON symbol
      WARNING: "primary_pteg_full" [arch/powerpc/mm/built-in] is COMMON symbol
      WARNING: "global_dbcr0" [arch/powerpc/kernel/built-in] is COMMON symbol
      
      Switch these to local symbols (except mmu_hash_lock, which stays
      global) defined with the .space directive instead.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      991eb43a
  28. 22 Mar 2007, 1 commit
  29. 01 Jul 2006, 1 commit
  30. 18 Apr 2006, 1 commit
    • powerpc: Use correct sequence for putting CPU into nap mode · f39224a8
      Committed by Paul Mackerras
      We weren't using the recommended sequence for putting the CPU into
      nap mode.  When I changed the idle loop, for some reason 7447A cpus
      started hanging when we put them into nap mode.  Changing to the
      recommended sequence fixes that.
      
      The complexity here is that the recommended sequence is a loop that
      keeps putting the cpu back into nap mode.  Clearly we need some way
      to break out of the loop when an interrupt (external interrupt,
      decrementer, performance monitor) occurs.  Here we use a bit in
      the thread_info struct to indicate that we need this, and the exception
      entry code notices this and arranges for the exception to return
      to the value in the link register, thus breaking out of the loop.
      We use a new `local_flags' field in the thread_info which we can
      alter without needing to use an atomic update sequence.
      
      The PPC970 has the same recommended sequence, so we do the same thing
      there too.
      
      This also fixes a bug in the kernel stack overflow handling code on
      32-bit, since it was causing a value that we needed in a register to
      get trashed.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      f39224a8
  31. 27 Mar 2006, 1 commit
    • powerpc: Unify the 32 and 64 bit idle loops · a0652fc9
      Committed by Paul Mackerras
      This unifies the 32-bit (ARCH=ppc and ARCH=powerpc) and 64-bit idle
      loops.  It brings over the concept of having a ppc_md.power_save
      function from 32-bit to ARCH=powerpc, which lets us get rid of
      native_idle().  With this we will also be able to simplify the idle
      handling for pSeries and cell.
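      
      Roughly, the unified loop has this shape (a sketch; details such as
      preemption and runlatch handling are omitted):
      
        while (1) {
                while (!need_resched()) {
                        if (ppc_md.power_save)
                                ppc_md.power_save();    /* platform nap/sleep hook */
                        else
                                cpu_relax();            /* default: just spin */
                }
                schedule();
        }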
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      a0652fc9
  32. 08 Mar 2006, 1 commit
    • powerpc: Fix various syscall/signal/swapcontext bugs · 1bd79336
      Committed by Paul Mackerras
      A careful reading of the recent changes to the system call entry/exit
      paths revealed several problems, plus some things that could be
      simplified and improved:
      
      * 32-bit wasn't testing the _TIF_NOERROR bit in the syscall fast exit
        path, so it was only doing anything with it once it saw some other
        bit being set.  In other words, the noerror behaviour would apply to
        the next system call where we had to reschedule or deliver a signal,
        which is not necessarily the current system call.
      
      * 32-bit wasn't doing the call to ptrace_notify in the syscall exit
        path when the _TIF_SINGLESTEP bit was set.
      
      * _TIF_RESTOREALL was in both _TIF_USER_WORK_MASK and
        _TIF_PERSYSCALL_MASK, which is odd since _TIF_RESTOREALL is only set
        by system calls.  I took it out of _TIF_USER_WORK_MASK (see the
        sketch after this list).
      
      * On 64-bit, _TIF_RESTOREALL wasn't causing the non-volatile registers
        to be restored (unless perhaps a signal was delivered or the syscall
        was traced or single-stepped).  Thus the non-volatile registers
        weren't restored on exit from a signal handler.  We probably got
        away with it mostly because signal handlers written in C wouldn't
        alter the non-volatile registers.
      
      * On 32-bit I simplified the code and made it more like 64-bit by
        making the syscall exit path jump to ret_from_except to handle
        preemption and signal delivery.
      
      * 32-bit was calling do_signal unnecessarily when _TIF_RESTOREALL was
        set - but I think because of that 32-bit was actually restoring the
        non-volatile registers on exit from a signal handler.
      
      * I changed the order of enabling interrupts and saving the
        non-volatile registers before calling do_syscall_trace_leave; now we
        enable interrupts first.
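      
      For the _TIF_RESTOREALL point above, the mask change amounts to
      something like this (the exact flag sets are illustrative; only the
      placement of _TIF_RESTOREALL reflects the change):
      
        /* _TIF_RESTOREALL no longer lives in the user-work mask ... */
        #define _TIF_USER_WORK_MASK     (_TIF_SIGPENDING | _TIF_NEED_RESCHED)
        /* ... it is only handled via the per-syscall mask */
        #define _TIF_PERSYSCALL_MASK    (_TIF_RESTOREALL | _TIF_NOERROR)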
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      1bd79336
  33. 19 Jan 2006, 1 commit
  34. 13 Jan 2006, 1 commit
    • [PATCH] powerpc: Cleanup LOADADDR etc. asm macros · e58c3495
      Committed by David Gibson
      This patch consolidates the variety of macros used for loading 32 or
      64-bit constants in assembler (LOADADDR, LOADBASE, SET_REG_TO_*).  The
      idea is to make the set of macros consistent across 32 and 64 bit and
      to make it more obvious which is the appropriate one to use in a given
      situation.  The new macros and their semantics are described in the
      comments in ppc_asm.h.
      
      In the process, we change several places that were unnecessarily
      using immediate loads on ppc64 to use the GOT/TOC.  Likewise we
      clean up a couple of places where we were clumsily subtracting
      PAGE_OFFSET with asm instructions, using assemble-time arithmetic
      or the toreal() macro instead.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      e58c3495