1. 20 Aug 2009, 3 commits
    • powerpc: Change PACA from SPRG3 to SPRG1 · 063517be
      Benjamin Herrenschmidt authored
      This changes the SPRG used to store the PACA on ppc64 from
      SPRG3 to SPRG1.  SPRG3 is user-readable on most processors
      and we want to use it for other things.  We change the scratch
      SPRG used by exception vectors from SPRG1 to SPRG2.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Remove use of a second scratch SPRG in STAB code · c5a8c0c9
      Benjamin Herrenschmidt authored
      The STAB code used on Power3 and RS/64 uses a second scratch SPRG to
      save a GPR in order to decide whether to go to do_stab_bolted_* or
      to handle a normal data access exception.
      
      This gets in the way of our scheme to free up SPRG3, which is
      user-visible, for userspace uses, since we cannot use SPRG0,
      which on RS/64 appears to be read-only in supervisor mode
      (as on POWER4).
      
      This reworks the STAB exception entry to use the PACA as temporary
      storage instead.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Use names rather than numbers for SPRGs (v2) · ee43eb78
      Benjamin Herrenschmidt authored
      The kernel uses SPRG registers for various purposes, typically in
      low-level assembly code as scratch registers or to hold per-cpu
      global information such as the PACA or the current thread_info
      pointer.

      We want to be able to easily shuffle the usage of those registers,
      as some implementations have specific constraints related to some
      of them (for example, some have userspace-readable aliases), and
      the current choice isn't always the best.
      
      This patch should not change any code generation, and replaces the
      usage of SPRN_SPRGn everywhere in the kernel with a named replacement
      and adds documentation next to the definition of the names as to
      what those are used for on each processor family.
      
      The only parts that still use the original numbers are bits of KVM
      or suspend/resume code that just blindly needs to save/restore all
      the SPRGs.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
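      The renaming scheme this commit describes can be sketched as follows.  This is a minimal illustration, not the kernel's actual header: the SPRN numbers and the purpose-named aliases shown here are assumptions based on the commit description.

      ```c
      #include <assert.h>

      /* Hypothetical sketch: raw SPRN_SPRGn numbers get purpose-named
       * aliases per processor family, so shuffling which physical SPRG
       * holds what only touches these definitions, not every use site. */
      #define SPRN_SPRG0 0x110
      #define SPRN_SPRG1 0x111

      /* 64-bit server family: SPRG1 now holds the PACA pointer. */
      #define SPRN_SPRG_PACA    SPRN_SPRG1
      #define SPRN_SPRG_SCRATCH SPRN_SPRG0

      int main(void)
      {
          /* Kernel code refers only to the named alias, never the raw
           * number, so this commit changes no generated code. */
          assert(SPRN_SPRG_PACA == SPRN_SPRG1);
          return 0;
      }
      ```

      Documentation next to the real definitions then records what each SPRG is used for on each family.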
  2. 16 Jun 2009, 1 commit
    • powerpc: Add memory clobber to mtspr() · 2fae0a52
      Benjamin Herrenschmidt authored
      Without this clobber, mtspr can be reordered by gcc vs. surrounding
      memory accesses.  While this might be OK in some cases, it's not in
      others, and I'm not confident that all callers get it right (in
      fact I'm sure some of them don't).
      
      So for now, let's make mtspr() itself contain a memory clobber until
      we can audit and fix everything, at which point we can remove it
      if we think it's worth doing so.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
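      The effect of the "memory" clobber can be sketched with a generic inline-asm example.  This is not the kernel's actual mtspr() macro; the empty asm stands in for the real mtspr instruction so the sketch compiles on any GCC/Clang target, and the macro name is hypothetical.

      ```c
      #include <assert.h>

      /* Hypothetical stand-in for mtspr(): the "memory" clobber tells
       * the compiler the asm may read or write any memory, so loads and
       * stores cannot be reordered across it. */
      #define fake_mtspr(val) \
          __asm__ __volatile__("" : : "r"(val) : "memory")

      int flag;

      int main(void)
      {
          flag = 1;          /* cannot be sunk below the barrier ...   */
          fake_mtspr(0x110); /* ... because of the "memory" clobber    */
          assert(flag == 1);
          return 0;
      }
      ```

      Without the clobber, the compiler is free to move the store to `flag` past the asm, which is exactly the hazard the commit closes off pending an audit of the callers.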
  3. 15 Jun 2009, 1 commit
    • powerpc: Add compiler memory barrier to mtmsr macro · 4c75f84f
      Paul Mackerras authored
      On 32-bit non-Book E, local_irq_restore() turns into just mtmsr(),
      which doesn't currently have a compiler memory barrier.  This means
      that accesses to memory inside a local_irq_save/restore section,
      or a spin_lock_irqsave/spin_unlock_irqrestore section on UP, can
      be reordered by the compiler to occur outside that section.
      
      To fix this, this adds a compiler memory barrier to mtmsr for both
      32-bit and 64-bit.  Having a compiler memory barrier in mtmsr makes
      sense because it will almost always be changing something about the
      context in which memory accesses are done, so in general we don't want
      memory accesses getting moved from one side of an mtmsr to the other.
      
      With the barrier in mtmsr(), some of the explicit barriers in
      hw_irq.h are now redundant, so this removes them.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
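      The fix can be sketched in portable C.  The names below are hypothetical stand-ins (the real mtmsr is a single machine instruction, not a function); the point is that the barrier lives inside the wrapper itself, so every caller gets it.

      ```c
      #include <assert.h>

      static unsigned long msr_shadow;  /* stands in for the real MSR */
      int protected_data;

      /* Hypothetical mtmsr() wrapper: the empty asm with a "memory"
       * clobber is a compiler-only barrier, keeping memory accesses
       * from migrating across the MSR write. */
      static void fake_mtmsr(unsigned long val)
      {
          msr_shadow = val;
          __asm__ __volatile__("" : : : "memory");  /* the added barrier */
      }

      int main(void)
      {
          fake_mtmsr(0);        /* "interrupts off"                        */
          protected_data = 42;  /* stays inside the critical section       */
          fake_mtmsr(1);        /* "interrupts on"                         */
          assert(protected_data == 42 && msr_shadow == 1);
          return 0;
      }
      ```

      With the barrier inside the wrapper, a local_irq_save/restore section built on mtmsr() no longer needs its own explicit barriers, which is why the redundant ones in hw_irq.h could be removed.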
  4. 15 May 2009, 1 commit
    • perf_counter: powerpc: supply more precise information on counter overflow events · 0bbd0d4b
      Paul Mackerras authored
      This uses values from the MMCRA, SIAR and SDAR registers on
      powerpc to supply more precise information for overflow events,
      including a data address when PERF_RECORD_ADDR is specified.
      
      Since POWER6 uses different bit positions in MMCRA from earlier
      processors, this converts the struct power_pmu limited_pmc5_6
      field, which only had 0/1 values, into a flags field and
      defines bit values for its previous use (PPMU_LIMITED_PMC5_6)
      and a new flag (PPMU_ALT_SIPR) to indicate that the processor
      uses the POWER6 bit positions rather than the earlier
      positions.  It also adds definitions in reg.h for the new and
      old positions of the bit that indicates that the SIAR and SDAR
      values come from the same instruction.
      
      For the data address, the SDAR value is supplied if we are not
      doing instruction sampling.  In that case there is no guarantee
      that the address given in the PERF_RECORD_ADDR subrecord will
      correspond to the instruction whose address is given in the
      PERF_RECORD_IP subrecord.
      
      If instruction sampling is enabled (e.g. because this counter
      is counting a marked instruction event), then we only supply
      the SDAR value for the PERF_RECORD_ADDR subrecord if it
      corresponds to the instruction whose address is in the
      PERF_RECORD_IP subrecord.  Otherwise we supply 0.
      
      [ Impact: support more PMU hardware features on PowerPC ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <18955.37028.48861.555309@drongo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
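      The field-to-flags conversion described above can be sketched briefly.  The flag names come from the commit message; the bit values and the struct layout shown here are illustrative assumptions, not the kernel's actual definitions.

      ```c
      #include <assert.h>

      /* Sketch: the 0/1 limited_pmc5_6 field becomes one bit in a flags
       * word, making room for further capability bits like PPMU_ALT_SIPR. */
      #define PPMU_LIMITED_PMC5_6 0x1  /* former limited_pmc5_6 == 1      */
      #define PPMU_ALT_SIPR       0x2  /* POWER6 MMCRA bit positions      */

      struct power_pmu { unsigned long flags; };

      int main(void)
      {
          /* A POWER6-style PMU description would set both bits. */
          struct power_pmu pmu = { PPMU_LIMITED_PMC5_6 | PPMU_ALT_SIPR };
          assert(pmu.flags & PPMU_ALT_SIPR);
          assert(pmu.flags & PPMU_LIMITED_PMC5_6);
          return 0;
      }
      ```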
  5. 02 Apr 2009, 1 commit
  6. 11 Mar 2009, 1 commit
  7. 23 Dec 2008, 1 commit
  8. 04 Aug 2008, 1 commit
  9. 17 Jul 2008, 1 commit
  10. 01 Jul 2008, 2 commits
    • powerpc: Add VSX context save/restore, ptrace and signal support · ce48b210
      Michael Neuling authored
      This patch extends the floating point save and restore code to use
      VSX load/stores when VSX is available.  This will make FP context
      save/restore marginally slower on FP-only code when VSX is
      available, as it has to load/store 128 bits rather than just 64 bits.
      
      Mixing FP, VMX and VSX code will get constant architected state.
      
      The signals interface is extended to enable access to VSR 0-31
      doubleword 1, after discussions with toolchain maintainers.
      Backward compatibility is maintained.
      
      The ptrace interface is also extended to allow access to VSR 0-31 full
      registers.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Introduce infrastructure for feature sections with alternatives · fac23fe4
      Michael Ellerman authored
      The current feature section logic only supports nop'ing out code.
      This means that if you want to choose at runtime between instruction
      sequences, one or both cases will have to execute the nop'ed-out
      contents of the other section, e.g.:
      
      BEGIN_FTR_SECTION
      	or	1,1,1
      END_FTR_SECTION_IFSET(FOO)
      BEGIN_FTR_SECTION
      	or	2,2,2
      END_FTR_SECTION_IFCLR(FOO)
      
      and the resulting code will be either,
      
      	or	1,1,1
      	nop
      
      or,
      	nop
      	or	2,2,2
      
      For small code segments this is fine, but for larger code blocks and
      in performance-critical code segments, it would be nice to avoid the
      nops.  This commit starts to implement logic to allow the following:
      
      BEGIN_FTR_SECTION
      	or	1,1,1
      FTR_SECTION_ELSE
      	or	2,2,2
      ALT_FTR_SECTION_END_IFSET(FOO)
      
      and the resulting code will be:
      
      	or	1,1,1
      or,
      	or	2,2,2
      
      We achieve this by extending the existing FTR macros. The current feature
      section semantic just becomes a special case, ie. if the else case is empty
      we nop out the default case.
      
      The key limitation is that the size of the else case must be less than or
      equal to the size of the default case. If the else case is smaller the
      remainder of the section is nop'ed.
      
      We let the linker put the else-case code in with the rest of the
      text, so that relative branches from the else case are more likely
      to link; this has the disadvantage that we can't free the unused
      else cases.
      
      This commit introduces the required macro and linker script changes, but
      does not enable the patching of the alternative sections.
      
      We also need to update two hand-made section entries in reg.h and
      timex.h.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
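      The patching semantic described above — copy the else case over the default case and nop-pad the remainder, with the else case required to be no larger — can be sketched as follows.  The helper name and the use of host arrays in place of kernel text are assumptions for illustration; only the NOP encoding and the size rule come from the commit.

      ```c
      #include <assert.h>

      #define PPC_NOP 0x60000000u  /* "ori 0,0,0", the ppc nop encoding */

      /* Hypothetical sketch of alternative-section patching: overwrite
       * the default section with the else-case instructions, then fill
       * the rest with nops.  Requires alt_len <= dflt_len. */
      static void patch_alt_section(unsigned int *dflt, int dflt_len,
                                    const unsigned int *alt, int alt_len)
      {
          int i;
          for (i = 0; i < alt_len; i++)
              dflt[i] = alt[i];      /* else case replaces default case */
          for (; i < dflt_len; i++)
              dflt[i] = PPC_NOP;     /* nop-pad the remainder           */
      }

      int main(void)
      {
          unsigned int text[3] = { 1, 2, 3 };  /* fake default section */
          unsigned int alt[1]  = { 9 };        /* smaller else case    */
          patch_alt_section(text, 3, alt, 1);
          assert(text[0] == 9 && text[1] == PPC_NOP && text[2] == PPC_NOP);
          return 0;
      }
      ```

      The old nop-only behaviour then falls out as the special case where the else section is empty.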
  11. 26 Jun 2008, 1 commit
  12. 03 Mar 2008, 1 commit
  13. 06 Feb 2008, 1 commit
  14. 25 Jan 2008, 1 commit
  15. 11 Dec 2007, 1 commit
  16. 17 Sep 2007, 1 commit
  17. 13 Sep 2007, 1 commit
  18. 10 Jul 2007, 1 commit
  19. 24 Apr 2007, 1 commit
  20. 07 Feb 2007, 3 commits
  21. 09 Dec 2006, 2 commits
  22. 25 Oct 2006, 1 commit
  23. 23 Oct 2006, 1 commit
  24. 06 Oct 2006, 1 commit
  25. 13 Sep 2006, 1 commit
  26. 21 Jun 2006, 1 commit
    • [POWERPC] cell: add RAS support · acf7d768
      Benjamin Herrenschmidt authored
      This is a first version of support for the Cell BE "Reliability,
      Availability and Serviceability" features.
      
      It doesn't yet handle some of the RAS interrupts (the ones described
      in iic_is/iic_irr); I'm still working on a proper way to expose
      these.  They are essentially a cascaded controller by themselves
      (sic!), though I may just handle them locally in the iic driver.  I
      also need to sync with David Erb on the way he hooked in the
      performance monitor interrupt.
      
      So that's all for 2.6.17 and I'll do more work on that with my rework of
      the powerpc interrupt layer that I'm hacking on at the moment.
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  27. 15 Jun 2006, 2 commits
  28. 09 Jun 2006, 1 commit
  29. 19 May 2006, 1 commit
  30. 27 Mar 2006, 1 commit
    • powerpc: Unify the 32 and 64 bit idle loops · a0652fc9
      Paul Mackerras authored
      This unifies the 32-bit (ARCH=ppc and ARCH=powerpc) and 64-bit idle
      loops.  It brings over the concept of having a ppc_md.power_save
      function from 32-bit to ARCH=powerpc, which lets us get rid of
      native_idle().  With this we will also be able to simplify the idle
      handling for pSeries and cell.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  31. 24 Feb 2006, 1 commit
    • [PATCH] powerpc: Fix runlatch performance issues · cb2c9b27
      Anton Blanchard authored
      The runlatch SPR can take a lot of time to write. My original runlatch
      code would set it on every exception entry even though most of the time
      this was not required. It would also continually set it in the idle
      loop, which is an issue on an SMT capable processor.
      
      Now we cache the runlatch value in a thread_info bit, and only check
      for it in decrementer and hardware interrupt exceptions as well as
      the idle loop.  Boot tested on POWER3, POWER5 and iSeries, and
      compile tested on pmac32.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
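      The caching idea can be sketched as follows.  All names here are hypothetical stand-ins (the real code writes an SPR and tests a thread_info flag); the sketch only shows the technique: pay for the slow register write once, then consult a cheap cached bit.

      ```c
      #include <assert.h>

      static unsigned long fake_runlatch_spr;  /* stands in for the slow SPR */
      static unsigned int  ti_flags;           /* cached copy, thread_info    */
      static int           spr_writes;         /* counts the expensive writes */
      #define TIF_RUNLATCH 0x1

      /* Only perform the slow SPR write when the cached bit is clear. */
      static void runlatch_on(void)
      {
          if (!(ti_flags & TIF_RUNLATCH)) {
              fake_runlatch_spr = 1;   /* the expensive mtspr */
              spr_writes++;
              ti_flags |= TIF_RUNLATCH;
          }
      }

      int main(void)
      {
          runlatch_on();
          runlatch_on();   /* second call hits the cache, skips the SPR */
          assert(spr_writes == 1);
          assert(ti_flags & TIF_RUNLATCH);
          return 0;
      }
      ```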
  32. 09 Jan 2006, 2 commits
    • [PATCH] cell: enable pause(0) in cpu_idle · c902be71
      Arnd Bergmann authored
      This patch enables support for the pause(0) power management state
      for the Cell Broadband Processor, which is important for
      power-efficient operation.  The pervasive infrastructure will in the
      future enable us to introduce more functionality specific to the
      Cell's pervasive unit.
      
      From: Maximino Aguilar <maguilar@us.ibm.com>
      Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc: G4+ oprofile support · 555d97ac
      Andy Fleming authored
      This patch adds oprofile support for the 7450 and all its multitudinous
      derivatives.
      
      * Added 7450 (and derivatives) support for oprofile
      * Changed e500 cputable to have oprofile model and cpu_type fields
      * Added support for classic 32-bit performance monitor interrupt
      * Cleaned up common powerpc oprofile code to be as common as possible
      * Cleaned up oprofile_impl.h to reflect 32 bit classic code
      * Added 32-bit MMCRx bitfield definitions and SPR numbers
      Signed-off-by: Andy Fleming <afleming@freescale.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>