1. 11 Jul 2012, 1 commit
  2. 03 Jul 2012, 1 commit
  3. 29 Jun 2012, 1 commit
    • ppc64: fix missing to check all bits of _TIF_USER_WORK_MASK in preempt · c58ce2b1
      Committed by Tiejun Chen
      In entry_64.S version of ret_from_except_lite, you'll notice that
      in the !preempt case, after we've checked MSR_PR we test for any
      TIF flag in _TIF_USER_WORK_MASK to decide whether to go to do_work
      or not. However, in the preempt case, we do a convoluted trick to
      test SIGPENDING only if PR was set and always test NEED_RESCHED ...
      but we forget to test any other bit of _TIF_USER_WORK_MASK !!! So
      that means that with preempt, we completely fail to test for things
      like single step, syscall tracing, etc...
      
      This is fixed as follows:
      
       - Test PR. If not set, branch to resume_kernel; otherwise continue.
      
       - In resume_kernel, do the original do_work.
      
       - Otherwise, always test _TIF_USER_WORK_MASK to decide whether to do
      the original user_work, or restore directly.
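      A minimal C sketch of the corrected flow (hedged: the real code is
      ppc64 assembly in entry_64.S, and every name below is an illustrative
      stand-in, including the _TIF_USER_WORK_MASK value):
      
        /* Illustrative C rendering of the fixed ret_from_except_lite flow. */
        #define MSR_PR              0x4000UL  /* "problem state" = user mode */
        #define _TIF_USER_WORK_MASK 0x10fUL   /* value is illustrative only  */
        
        static void resume_kernel(void) { /* original kernel-side do_work  */ }
        static void do_user_work(void)  { /* signals, single step, tracing */ }
        static void restore(void)       { /* restore registers and return  */ }
        
        static void ret_from_except_lite(unsigned long msr, unsigned long ti_flags)
        {
                if (!(msr & MSR_PR))
                        resume_kernel();          /* returning to the kernel   */
                else if (ti_flags & _TIF_USER_WORK_MASK)
                        do_user_work();           /* any user work bit is set  */
                else
                        restore();                /* fast path: nothing to do  */
        }
      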
      Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  4. 12 May 2012, 1 commit
    • powerpc/irq: Fix another case of lazy IRQ state getting out of sync · 7c0482e3
      Committed by Benjamin Herrenschmidt
      So we have another case of paca->irq_happened getting out of
      sync with the HW irq state. This can happen when a perfmon
      interrupt occurs while soft disabled, as it will return to a
      soft disabled but hard enabled context while leaving a stale
      PACA_IRQ_HARD_DIS flag set.
      
      This patch fixes it, and also adds a test for the condition
      of those flags being out of sync in arch_local_irq_restore()
      when CONFIG_TRACE_IRQFLAGS is enabled.
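      
      A minimal C sketch of that consistency test, assuming the flag and
      helper names used by the powerpc code of that era (treat the details
      as illustrative rather than the verbatim patch):
      
        /* In arch_local_irq_restore(): if the PACA claims we are hard
         * disabled, the MSR must agree; warn and re-disable if not. */
        #ifdef CONFIG_TRACE_IRQFLAGS
        if (happened & PACA_IRQ_HARD_DIS) {
                if (WARN_ON(mfmsr() & MSR_EE))  /* flags out of sync with HW */
                        __hard_irq_disable();   /* bring them back in sync   */
        }
        #endif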
      
      This helps catch those gremlins faster (and so far I can't
      seem to see any more, so that's good news).
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  5. 09 May 2012, 1 commit
  6. 30 Apr 2012, 4 commits
  7. 09 Mar 2012, 4 commits
    • powerpc: Rework lazy-interrupt handling · 7230c564
      Committed by Benjamin Herrenschmidt
      The current implementation of lazy interrupt handling has some
      issues that this patch tries to address.
      
      We don't do the various workarounds we need to do when re-enabling
      interrupts in some cases, such as when returning from an interrupt,
      and thus we may still lose or get delayed decrementer or doorbell
      interrupts.
      
      The current scheme also makes it much harder to handle the external
      "edge" interrupts provided by some BookE processors when using the
      EPR facility (External Proxy) and the Freescale Hypervisor.
      
      Additionally, we tend to keep interrupts hard disabled in a number
      of cases, such as decrementer interrupts, external interrupts, or
      when a masked decrementer interrupt is pending. This is sub-optimal.
      
      This is an attempt at fixing it all in one go by reworking the way
      we do the lazy interrupt disabling from the ground up.
      
      The base idea is to replace the "hard_enabled" field with an
      "irq_happened" field in which we store a bit mask of which interrupts
      occurred while soft-disabled.
      
      When re-enabling, either via arch_local_irq_restore() or when returning
      from an interrupt, we can now decide what to do by testing bits in that
      field.
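      
      A simplified C sketch of that test-and-replay decision (field and
      flag names follow the description above; the replay helpers are
      hypothetical, and the real routine also manages the hard-enable
      state):
      
        /* arch_local_irq_restore(): see what fired while soft-disabled. */
        unsigned char happened = local_paca->irq_happened;
        
        local_paca->soft_enabled = 1;            /* soft-enabled again     */
        if (happened & PACA_IRQ_DEC)
                replay_decrementer_interrupt();  /* hypothetical helper    */
        if (happened & PACA_IRQ_EE)
                replay_external_interrupt();     /* hypothetical helper    */
        local_paca->irq_happened = 0;            /* everything replayed    */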
      
      We then implement replaying of the missed interrupts either by
      re-using the existing exception frame (in exception exit case) or via
      the creation of a new one from an assembly trampoline (in the
      arch_local_irq_enable case).
      
      This removes the need to play with the decrementer to try to create
      fake interrupts, among others.
      
      In addition, this adds a few refinements:
      
       - We no longer hard disable decrementer interrupts that occur
      while soft-disabled. We now simply bump the decrementer back to max
      (on BookS) or leave it stopped (on BookE) and continue with hard interrupts
      enabled, which means that we'll potentially get better sample quality from
      performance monitor interrupts (see the C sketch after this list).
      
       - Timer, decrementer and doorbell interrupts now hard-enable
      shortly after removing the source of the interrupt, which means
      they no longer run entirely hard disabled. Again, this will improve
      perf sample quality.
      
       - On Book3E 64-bit, we now make the performance monitor interrupt
      act as an NMI like Book3S (the necessary C code for that to work
      appears to already be present in the FSL perf code, notably calling
      nmi_enter instead of irq_enter). (This also fixes a bug where BookE
      perfmon interrupts could clobber r14 ... oops)
      
       - We could make "masked" decrementer interrupts act as NMIs when doing
      timer-based perf sampling to improve the sample quality.
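      
      For the decrementer refinement above, a hedged C sketch of the idea
      (set_dec() is the existing powerpc helper for writing the decrementer;
      the max value shown is illustrative for BookS):
      
        /* A decrementer fired while soft-disabled: remember it for replay
         * and push the hardware timer far out instead of hard-disabling,
         * so performance monitor interrupts keep flowing. */
        local_paca->irq_happened |= PACA_IRQ_DEC;
        set_dec(0x7fffffff);    /* "bump back to max" on BookS */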
      
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ---
      
      v2:
      
      - Add hard-enable to decrementer, timer and doorbells
      - Fix CR clobber in masked irq handling on BookE
      - Make embedded perf interrupt act as an NMI
      - Add a PACA_HAPPENED_EE_EDGE for use by FSL if they want
        to retrigger an interrupt without preventing hard-enable
      
      v3:
      
       - Fix or vs. ori bug on Book3E
       - Fix enabling of interrupts for some exceptions on Book3E
      
      v4:
      
       - Fix resend of doorbells on return from interrupt on Book3E
      
      v5:
      
       - Rebased on top of my latest series, which involves some significant
      rework of some aspects of the patch.
      
      v6:
       - 32-bit compile fix
       - more compile fixes with various .config combos
       - factor out the asm code to soft-disable interrupts
       - remove the C wrapper around preempt_schedule_irq
      
      v7:
       - Fix a bug with hard irq state tracking on native power7
    • powerpc: Replace mfmsr instructions with load from PACA kernel_msr field · d9ada91a
      Committed by Benjamin Herrenschmidt
      On 64-bit, the mfmsr instruction can be quite slow, slower
      than loading a field from the cache-hot PACA, which happens
      to already contain the value we want in most cases.
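      
      A rough C illustration of the substitution (kernel_msr is the PACA
      field named in the subject line; the inline asm form here is a
      sketch, and the timing claim is qualitative):
      
        unsigned long msr;
        
        /* Before: ask the hardware, which can be quite slow on 64-bit. */
        asm volatile("mfmsr %0" : "=r" (msr));
        
        /* After: load the already-known kernel MSR value from the
         * cache-hot PACA instead. */
        msr = local_paca->kernel_msr;
      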
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Improve 64-bit syscall entry/exit · 1421ae0b
      Committed by Benjamin Herrenschmidt
      We unconditionally hard enable interrupts. This is unnecessary as
      syscalls are expected to always be called with interrupts enabled.
      
      While at it, we add a WARN_ON if that is not the case and
      CONFIG_TRACE_IRQFLAGS is enabled (we don't want to add overhead
      to the fast path when this is not set though).
      
      Thus let's remove the enabling (and associated irq tracing) from
      the syscall entry path. Also on Book3S, replace a few mfmsr
      instructions with loads of PACAMSR from the PACA, which should be
      faster & schedule better.
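      
      A hedged C sketch of that debug-only check (the real test sits in
      the assembly syscall entry path; MSR_EE is the external interrupt
      enable bit):
      
        #ifdef CONFIG_TRACE_IRQFLAGS
                /* Syscalls must be made with interrupts enabled; warn if a
                 * caller somehow violated that, but add no cost otherwise. */
                WARN_ON(!(regs->msr & MSR_EE));
        #endif
      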
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Remove legacy iSeries bits from assembly files · 4f8cf36f
      Committed by Benjamin Herrenschmidt
      This removes the various bits of assembly in the kernel entry,
      exception handling and SLB management code that were specific
      to running under the legacy iSeries hypervisor which is no
      longer supported.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  8. 22 Feb 2012, 1 commit
    • powerpc: Fix various issues with return to userspace · 18b246fa
      Committed by Benjamin Herrenschmidt
      We have a few problems when returning to userspace. This is a
      quick set of fixes for 3.3, I'll look into a more comprehensive
      rework for 3.4. This fixes:
      
       - We kept interrupts soft-disabled when schedule'ing or calling
      do_signal when returning to userspace as a result of a hardware
      interrupt.
      
       - Rename do_signal to do_notify_resume like all other archs (and
      do_signal_pending back to do_signal, which it was before Roland
      changed it).
      
       - Add the missing call to key_replace_session_keyring() to
      do_notify_resume().
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  9. 27 Apr 2011, 2 commits
  10. 20 Apr 2011, 1 commit
  11. 02 Sep 2010, 2 commits
    • powerpc: Account time using timebase rather than PURR · cf9efce0
      Committed by Paul Mackerras
      Currently, when CONFIG_VIRT_CPU_ACCOUNTING is enabled, we use the
      PURR register for measuring the user and system time used by
      processes, as well as other related times such as hardirq and
      softirq times.  This turns out to be quite confusing for users
      because it means that a program will often be measured as taking
      less time when run on a multi-threaded processor (SMT2 or SMT4 mode)
      than it does when run on a single-threaded processor (ST mode), even
      though the program takes longer to finish.  The discrepancy is
      accounted for as stolen time, which is also confusing, particularly
      when there are no other partitions running.
      
      This changes the accounting to use the timebase instead, meaning that
      the reported user and system times are the actual number of real-time
      seconds that the program was executing on the processor thread,
      regardless of which SMT mode the processor is in.  Thus a program will
      generally show greater user and system times when run on a
      multi-threaded processor than on a single-threaded processor.
      
      On pSeries systems on POWER5 or later processors, we measure the
      stolen time (time when this partition wasn't running) using the
      hypervisor dispatch trace log.  We check for new entries in the
      log on every entry from user mode and on every transition from
      kernel process context to soft or hard IRQ context (i.e. when
      account_system_vtime() gets called).  So that we can correctly
      distinguish time stolen from user time and time stolen from system
      time, without having to check the log on every exit to user mode,
      we store separate timestamps for exit to user mode and entry from
      user mode.
      
      On systems that have a SPURR (POWER6 and POWER7), we read the SPURR
      in account_system_vtime() (as before), and then apportion the SPURR
      ticks since the last time we read it between scaled user time and
      scaled system time according to the relative proportions of user
      time and system time over the same interval.  This avoids having to
      read the SPURR on every kernel entry and exit.  On systems that have
      PURR but not SPURR (i.e., POWER5), we do the same using the PURR
      rather than the SPURR.
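      
      In C, that apportioning might look like the following sketch (the
      delta variables are illustrative accumulations since the last SPURR
      read):
      
        /* Split the SPURR ticks between scaled user and scaled system time
         * in proportion to the user/system split over the same interval. */
        u64 total       = utime_delta + stime_delta;
        u64 user_scaled = total ? spurr_delta * utime_delta / total : 0;
        u64 sys_scaled  = spurr_delta - user_scaled;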
      
      This disables the DTL user interface in /sys/kernel/debug/powerpc/dtl
      for now since it conflicts with the use of the dispatch trace log
      by the time accounting code.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Feature nop out reservation clear when stcx checks address · f89451fb
      Committed by Anton Blanchard
      The POWER architecture does not require stcx to check that it is operating
      on the same address as the larx. This means it is possible for an
      exception handler to execute a larx, get a reservation, decide
      not to do the stcx and then return with an active reservation. If the
      interrupted code was in the middle of a larx/stcx sequence the stcx could
      incorrectly succeed.
      
      All recent POWER CPUs check the address before letting the stcx succeed
      so we can create a CPU feature and nop it out. As Ben suggested, we can
      only do this in our syscall path because there is a remote possibility
      some kernel code gets interrupted by an exception that ends up operating
      on the same cacheline.
      
      Thanks to Paul Mackerras and Derek Williams for the idea.
      
      To test this I used a very simple null syscall (actually getppid) testcase
      at http://ozlabs.org/~anton/junkcode/null_syscall.c
      
      I tested against 2.6.35-git10 with the following changes against the
      pseries_defconfig:
      
      CONFIG_VIRT_CPU_ACCOUNTING=n
      CONFIG_AUDIT=n
      CONFIG_PPC_4K_PAGES=n
      CONFIG_PPC_64K_PAGES=y
      CONFIG_FORCE_MAX_ZONEORDER=9
      CONFIG_PPC_SUBPAGE_PROT=n
      CONFIG_FUNCTION_TRACER=n
      CONFIG_FUNCTION_GRAPH_TRACER=n
      CONFIG_IRQSOFF_TRACER=n
      CONFIG_STACK_TRACER=n
      
      to remove the overhead of virtual CPU accounting, syscall auditing and
      the ftrace mcount tracers. 64kB pages were enabled to minimise TLB misses.
      
      POWER6: +8.2%
      POWER7: +7.0%
      
      Another suggestion was to use a larx to something in the L1 instead of a stcx.
      This was almost as fast as removing the larx on POWER6, but only 3.5% faster
      on POWER7. We can use this to speed up the reservation clear in our
      exception exit code.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  12. 12 May 2010, 1 commit
    • powerpc/perf_event: Fix oops due to perf_event_do_pending call · 0fe1ac48
      Committed by Paul Mackerras
      Anton Blanchard found that large POWER systems would occasionally
      crash in the exception exit path when profiling with perf_events.
      The symptom was that an interrupt would occur late in the exit path
      when the MSR[RI] (recoverable interrupt) bit was clear.  Interrupts
      should be hard-disabled at this point but they were enabled.  Because
      the interrupt was not recoverable the system panicked.
      
      The reason is that the exception exit path was calling
      perf_event_do_pending after hard-disabling interrupts, and
      perf_event_do_pending will re-enable interrupts.
      
      The simplest and cleanest fix for this is to use the same mechanism
      that 32-bit powerpc does, namely to cause a self-IPI by setting the
      decrementer to 1.  This means we can remove the tests in the exception
      exit path and raw_local_irq_restore.
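      
      The mechanism itself is tiny; a hedged C sketch of the idea
      (set_dec() is the existing powerpc helper for writing the
      decrementer):
      
        /* "Self-IPI": make the decrementer fire within one timebase tick,
         * so the pending perf work runs from timer_interrupt(), safely
         * inside irq_enter()/irq_exit(). */
        static inline void trigger_pending_perf_work(void)
        {
                set_dec(1);
        }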
      
      This also makes sure that the call to perf_event_do_pending from
      timer_interrupt() happens within irq_enter/irq_exit.  (Note that
      calling perf_event_do_pending from timer_interrupt does not mean that
      there is a possible 1/HZ latency; setting the decrementer to 1 ensures
      that the timer interrupt will happen immediately, i.e. within one
      timebase tick, which is a few nanoseconds or 10s of nanoseconds.)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: stable@kernel.org
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  13. 09 Feb 2010, 1 commit
  14. 28 Oct 2009, 1 commit
  15. 27 Oct 2009, 1 commit
    • powerpc/ppc64: Use preempt_schedule_irq instead of preempt_schedule · 4f917ba3
      Committed by Benjamin Herrenschmidt
      Based on an original patch by Valentine Barshak <vbarshak@ru.mvista.com>
      
      Use preempt_schedule_irq to prevent infinite irq-entry and
      eventual stack overflow problems with fast-paced IRQ sources.
      
      This kind of problem has been observed on the PASemi Electra IDE
      controller. We have to make sure we are soft-disabled before calling
      preempt_schedule_irq and hard disable interrupts after that
      to avoid unrecoverable exceptions.
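      
      The shape of the resulting logic, sketched in C (the real code is
      assembly; preempt_schedule_irq() must be entered with interrupts
      disabled and re-enables them internally while scheduling):
      
        /* Reschedule on return from interrupt without recursing into the
         * exception path on every new IRQ. */
        do {
                preempt_schedule_irq();   /* returns with IRQs disabled */
        } while (need_resched());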
      
      This patch also moves the "clrrdi r9,r1,THREAD_SHIFT" out of
      the #ifdef CONFIG_PPC_BOOK3E scope, since r9 is clobbered
      and has to be restored in both cases.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  16. 14 Oct 2009, 1 commit
  17. 21 Sep 2009, 1 commit
    • perf: Do the big rename: Performance Counters -> Performance Events · cdd6c482
      Committed by Ingo Molnar
      Bye-bye Performance Counters, welcome Performance Events!
      
      In the past few months the perfcounters subsystem has grown out its
      initial role of counting hardware events, and has become (and is
      becoming) a much broader generic event enumeration, reporting, logging,
      monitoring, analysis facility.
      
      Naming its core object 'perf_counter' and naming the subsystem
      'perfcounters' has become more and more of a misnomer. With pending
      code like hw-breakpoints support the 'counter' name is less and
      less appropriate.
      
      All in one, we've decided to rename the subsystem to 'performance
      events' and to propagate this rename through all fields, variables
      and API names. (in an ABI compatible fashion)
      
      The word 'event' is also a bit shorter than 'counter' - which makes
      it slightly more convenient to write/handle as well.
      
      Thanks goes to Stephane Eranian who first observed this misnomer and
      suggested a rename.
      
      User-space tooling and ABI compatibility is not affected - this patch
      should be function-invariant. (Also, defconfigs were not touched to
      keep the size down.)
      
      This patch has been generated via the following script:
      
        FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')
      
        sed -i \
          -e 's/PERF_EVENT_/PERF_RECORD_/g' \
          -e 's/PERF_COUNTER/PERF_EVENT/g' \
          -e 's/perf_counter/perf_event/g' \
          -e 's/nb_counters/nb_events/g' \
          -e 's/swcounter/swevent/g' \
          -e 's/tpcounter_event/tp_event/g' \
          $FILES
      
        for N in $(find . -name perf_counter.[ch]); do
          M=$(echo $N | sed 's/perf_counter/perf_event/g')
          mv $N $M
        done
      
        FILES=$(find . -name perf_event.*)
      
        sed -i \
          -e 's/COUNTER_MASK/REG_MASK/g' \
          -e 's/COUNTER/EVENT/g' \
          -e 's/\<event\>/event_id/g' \
          -e 's/counter/event/g' \
          -e 's/Counter/Event/g' \
          $FILES
      
      ... to keep it as correct as possible. This script can also be
      used by anyone who has pending perfcounters patches - it converts
      a Linux kernel tree over to the new naming. We tried to time this
      change to the point in time where the amount of pending patches
      is the smallest: the end of the merge window.
      
      Namespace clashes were fixed up in a preparatory patch - and some
      stylistic fallout will be fixed up in a subsequent patch.
      
      ( NOTE: 'counters' are still the proper terminology when we deal
        with hardware registers - and these sed scripts are a bit
        over-eager in renaming them. I've undone some of that, but
        in case there's something left where 'counter' would be
        better than 'event' we can undo that on an individual basis
        instead of touching an otherwise nicely automated patch. )
      Suggested-by: Stephane Eranian <eranian@google.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: <linux-arch@vger.kernel.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  18. 20 Aug 2009, 3 commits
    • powerpc: Remaining 64-bit Book3E support · 2d27cfd3
      Committed by Benjamin Herrenschmidt
      This contains all the bits that didn't fit in previous patches :-) This
      includes the actual exception handlers assembly, the changes to the
      kernel entry, other misc bits and wiring it all up in Kconfig.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/of: Remove useless register save/restore when calling OF back · 6c171994
      Committed by Benjamin Herrenschmidt
      enter_prom() used to save and restore registers such as CTR, XER etc..
      which are volatile, or SRR0,1... which we don't care about. This
      removes a bunch of useless code and while at it turns an mtmsrd into
      an MTMSRD macro which will be useful to Book3E.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Use names rather than numbers for SPRGs (v2) · ee43eb78
      Committed by Benjamin Herrenschmidt
      The kernel uses SPRG registers for various purposes, typically in
      low level assembly code as scratch registers or to hold per-cpu
      global infos such as the PACA or the current thread_info pointer.
      
      We want to be able to easily shuffle the usage of those registers
      as some implementations have specific constraints related to some
      of them; for example, some have userspace readable aliases, etc.,
      and the current choice isn't always the best.
      
      This patch should not change any code generation, and replaces the
      usage of SPRN_SPRGn everywhere in the kernel with a named replacement
      and adds documentation next to the definition of the names as to
      what those are used for on each processor family.
      
      The only parts that still use the original numbers are bits of KVM
      or suspend/resume code that just blindly needs to save/restore all
      the SPRGs.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  19. 23 Feb 2009, 3 commits
  20. 09 Jan 2009, 1 commit
    • powerpc: Provide a way to defer perf counter work until interrupts are enabled · 93a6d3ce
      Committed by Paul Mackerras
      Because 64-bit powerpc uses lazy (soft) interrupt disabling, it is
      possible for a performance monitor exception to come in when the
      kernel thinks interrupts are disabled (i.e. when they are
      soft-disabled but hard-enabled).  In such a situation the performance
      monitor exception handler might have some processing to do (such as
      process wakeups) which can't be done in what is effectively an NMI
      handler.
      
      This provides a way to defer that work until interrupts get enabled,
      either in raw_local_irq_restore() or by returning from an interrupt
      handler to code that had interrupts enabled.  We have a per-processor
      flag that indicates that there is work pending to do when interrupts
      subsequently get re-enabled.  This flag is checked in the interrupt
      return path and in raw_local_irq_restore(), and if it is set,
      perf_counter_do_pending() is called to do the pending work.
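      
      A condensed C sketch of that check on the re-enable path, using the
      helper names this description implies (treat the details as
      illustrative):
      
        /* Interrupts are really being re-enabled: if a perfmon exception
         * left work behind while we were soft-disabled, run it now. */
        if (test_perf_counter_pending()) {
                clear_perf_counter_pending();
                perf_counter_do_pending();
        }
      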
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  21. 01 Dec 2008, 1 commit
    • powerpc: Fix system calls on Cell entered with XER.SO=1 · ab598b66
      Committed by Paul Mackerras
      It turns out that on Cell, on a kernel with CONFIG_VIRT_CPU_ACCOUNTING
      = y, if a program sets the SO (summary overflow) bit in the XER and
      then does a system call, the SO bit in CR0 will be set on return
      regardless of whether the system call detected an error.  Since CR0.SO
      is used as the error indication from the system call, this means that
      all system calls appear to fail.
      
      The reason is that the workaround for the timebase bug on Cell uses a
      compare instruction.  With CONFIG_VIRT_CPU_ACCOUNTING = y, the
      ACCOUNT_CPU_USER_ENTRY macro reads the timebase, so we end up doing a
      compare instruction, which copies XER.SO to CR0.SO.  Since we were
      doing this in the system call entry path after clearing CR0.SO but
      before saving the CR, this meant that the saved CR image had CR0.SO
      set if XER.SO was set on entry.
      
      This fixes it by moving the clearing of CR0.SO to after the
      ACCOUNT_CPU_USER_ENTRY call in the system call entry path.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  22. 28 Nov 2008, 1 commit
    • powerpc: ftrace, do nothing in mcount call for dyn ftrace · c7b0d173
      Committed by Steven Rostedt
      Impact: quicken mcount calls that are not replaced by dyn ftrace
      
      Dynamic ftrace no longer does on the fly recording of mcount locations.
      The mcount locations are now found at compile time. The mcount
      function no longer needs to store registers and call a stub function.
      It can now just simply return.
      
      Since there are some functions that do not get converted to a nop
      (.init sections and other code that may disappear), this patch should
      help speed up that code.
      
      Also, the stub for mcount on PowerPC 32 cannot be a simple branch to
      the link register (blr) like it is on PowerPC 64. According to the ABI
      specification:
      
      "The _mcount routine is required to restore the link register from
       the stack so that the profiling code can be inserted transparently,
       whether or not the profiled function saves the link register itself."
      
      This means that we must restore the link register that was used
      to make the call to mcount.  The minimal mcount function for PPC32
      ends up being:
      
       mcount:
              mflr    r0
              mtctr   r0
              lwz     r0, 4(r1)
              mtlr    r0
              bctr
      
      Where we move the link register used to call mcount into the
      ctr register, and then restore the link register from the stack.
      Then we use the ctr register to jump back to the mcount caller.
      The r0 register is free for us to use.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  23. 21 Oct 2008, 1 commit
  24. 16 Sep 2008, 1 commit
    • powerpc: Use LOAD_REG_IMMEDIATE only for constants on 64-bit · e31aa453
      Committed by Paul Mackerras
      Using LOAD_REG_IMMEDIATE to get the address of kernel symbols
      generates 5 instructions where LOAD_REG_ADDR can do it in one,
      and will generate R_PPC64_ADDR16_* relocations in the output when
      we get to making the kernel as a position-independent executable,
      which we'd rather not have to handle.  This changes various bits
      of assembly code to use LOAD_REG_ADDR when we need to get the
      address of a symbol, or to use suitable position-independent code
      for cases where we can't access the TOC for various reasons, or
      if we're not running at the address we were linked at.
      
      It also cleans up a few minor things; there's no reason to save and
      restore SRR0/1 around RTAS calls, __mmu_off can get the return
      address from LR more conveniently than the caller can supply it in
      R4 (and we already assume elsewhere that EA == RA if the MMU is on
      in early boot), and enable_64b_mode was using 5 instructions where
      2 would do.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  25. 20 Aug 2008, 1 commit
  26. 28 Jul 2008, 2 commits
  27. 01 Jul 2008, 1 commit
    • powerpc: Add VSX context save/restore, ptrace and signal support · ce48b210
      Committed by Michael Neuling
      This patch extends the floating point save and restore code to use the
      VSX load/stores when VSX is available.  This will make FP context
      save/restore marginally slower on FP-only code, when VSX is available,
      as it has to load/store 128 bits rather than just 64 bits.
      
      Mixing FP, VMX and VSX code will get constant architected state.
      
      The signals interface is extended to enable access to VSR 0-31
      doubleword 1 after discussions with tool chain maintainers.  Backward
      compatibility is maintained.
      
      The ptrace interface is also extended to allow access to VSR 0-31 full
      registers.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>