1. 05 May 2010 (2 commits)
  2. 01 March 2010 (1 commit)
  3. 28 October 2009 (1 commit)
  4. 20 August 2009 (5 commits)
  5. 16 June 2009 (1 commit)
    • powerpc: Add memory clobber to mtspr() · 2fae0a52
      Committed by Benjamin Herrenschmidt
      Without this clobber, mtspr can be reordered by gcc relative to
      surrounding memory accesses. While this might be OK in some cases,
      it isn't in others, and I'm not confident that all callers get it
      right (in fact I'm sure some of them don't).
      
      So for now, let's make mtspr() itself contain a memory clobber until
      we can audit and fix everything, at which point we can remove it
      if we think it's worth doing so.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
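
      A minimal sketch of the idea, assuming the usual __stringify-based macro
      form; the exact in-tree definition in reg.h may differ in detail:

      	#include <linux/stringify.h>

      	/* The "memory" clobber tells gcc this asm may touch memory, so it
      	 * cannot move surrounding loads/stores across the SPR write. */
      	#define mtspr(rn, v)	asm volatile("mtspr " __stringify(rn) ",%0" \
      				     : : "r" ((unsigned long)(v))           \
      				     : "memory")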
  6. 15 June 2009 (1 commit)
    • powerpc: Add compiler memory barrier to mtmsr macro · 4c75f84f
      Committed by Paul Mackerras
      On 32-bit non-Book E, local_irq_restore() turns into just mtmsr(),
      which doesn't currently have a compiler memory barrier.  This means
      that accesses to memory inside a local_irq_save/restore section,
      or a spin_lock_irqsave/spin_unlock_irqrestore section on UP, can
      be reordered by the compiler to occur outside that section.
      
      To fix this, this patch adds a compiler memory barrier to mtmsr for both
      32-bit and 64-bit.  Having a compiler memory barrier in mtmsr makes
      sense because it will almost always be changing something about the
      context in which memory accesses are done, so in general we don't want
      memory accesses getting moved from one side of an mtmsr to the other.
      
      With the barrier in mtmsr(), some of the explicit barriers in
      hw_irq.h are now redundant, so this removes them.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
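
      A sketch of the barrier form described above and of how local_irq_restore()
      effectively uses it on 32-bit non-Book E; simplified and assumed, not the
      literal hw_irq.h code:

      	/* The "memory" clobber doubles as a compiler barrier, so stores
      	 * inside an irq-save/restore region cannot be sunk past the MSR
      	 * write. */
      	#define mtmsr(v)	asm volatile("mtmsr %0"                   \
      				     : : "r" ((unsigned long)(v)) : "memory")

      	static inline void example_irq_restore(unsigned long flags)
      	{
      		mtmsr(flags);	/* no explicit barrier() needed any more */
      	}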
  7. 15 May 2009 (1 commit)
    • perf_counter: powerpc: supply more precise information on counter overflow events · 0bbd0d4b
      Committed by Paul Mackerras
      This uses values from the MMCRA, SIAR and SDAR registers on
      powerpc to supply more precise information for overflow events,
      including a data address when PERF_RECORD_ADDR is specified.
      
      Since POWER6 uses different bit positions in MMCRA from earlier
      processors, this converts the struct power_pmu limited_pmc5_6
      field, which only had 0/1 values, into a flags field and
      defines bit values for its previous use (PPMU_LIMITED_PMC5_6)
      and a new flag (PPMU_ALT_SIPR) to indicate that the processor
      uses the POWER6 bit positions rather than the earlier
      positions.  It also adds definitions in reg.h for the new and
      old positions of the bit that indicates that the SIAR and SDAR
      values come from the same instruction.
      
      For the data address, the SDAR value is supplied if we are not
      doing instruction sampling.  In that case there is no guarantee
      that the address given in the PERF_RECORD_ADDR subrecord will
      correspond to the instruction whose address is given in the
      PERF_RECORD_IP subrecord.
      
      If instruction sampling is enabled (e.g. because this counter
      is counting a marked instruction event), then we only supply
      the SDAR value for the PERF_RECORD_ADDR subrecord if it
      corresponds to the instruction whose address is in the
      PERF_RECORD_IP subrecord.  Otherwise we supply 0.
      
      [ Impact: support more PMU hardware features on PowerPC ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <18955.37028.48861.555309@drongo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
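
      An illustrative sketch of the SDAR reporting rule spelled out above; the
      helper name and MMCRA bit mask are hypothetical stand-ins, since the real
      bit position differs between POWER6 (PPMU_ALT_SIPR) and earlier CPUs:

      	#include <linux/types.h>

      	/* Assumed mask for "SIAR and SDAR refer to the same instruction";
      	 * the real, CPU-dependent value lives in reg.h. */
      	#define MMCRA_SDAR_SYNC_EXAMPLE	0x0000000000000400ULL

      	static u64 perf_record_addr(u64 mmcra, u64 sdar, bool insn_sampling)
      	{
      		if (!insn_sampling)
      			return sdar;	/* may not match the sampled IP */
      		/* Only report SDAR when it belongs to the instruction in
      		 * PERF_RECORD_IP; otherwise report 0. */
      		return (mmcra & MMCRA_SDAR_SYNC_EXAMPLE) ? sdar : 0;
      	}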
  8. 02 April 2009 (1 commit)
  9. 11 March 2009 (1 commit)
  10. 23 December 2008 (1 commit)
  11. 04 August 2008 (1 commit)
  12. 17 July 2008 (1 commit)
  13. 01 July 2008 (2 commits)
    • powerpc: Add VSX context save/restore, ptrace and signal support · ce48b210
      Committed by Michael Neuling
      This patch extends the floating point save and restore code to use the
      VSX load/stores when VSX is available.  This will make FP context
      save/restore marginally slower for FP-only code when VSX is available,
      as it has to load/store 128 bits rather than just 64 bits.
      
      Code that mixes FP, VMX and VSX will see consistent architected state.
      
      The signals interface is extended to enable access to VSR 0-31
      doubleword 1, after discussions with toolchain maintainers.  Backward
      compatibility is maintained.
      
      The ptrace interface is also extended to allow access to VSR 0-31 full
      registers.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
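
      A conceptual sketch of the register layout behind this change; the union
      below is illustrative only, not the kernel's thread_struct definition:

      	#include <linux/types.h>

      	/* Each 128-bit VSR 0-31 overlaps the matching 64-bit FPR in
      	 * doubleword 0; the new signal/ptrace paths expose doubleword 1
      	 * (and, for ptrace, the full 128-bit register). */
      	union vsx_reg_example {
      		struct {
      			u64 fpr;	/* doubleword 0: classic FPR */
      			u64 vsr_dw1;	/* doubleword 1: extra VSX state */
      		};
      		u8 bytes[16];
      	};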
    • powerpc: Introduce infrastructure for feature sections with alternatives · fac23fe4
      Committed by Michael Ellerman
      The current feature section logic only supports nop'ing out code.  This
      means that if you want to choose between instruction sequences at runtime,
      one or both cases will have to execute the nop'ed-out contents of the
      other section, e.g.:
      
      BEGIN_FTR_SECTION
      	or	1,1,1
      END_FTR_SECTION_IFSET(FOO)
      BEGIN_FTR_SECTION
      	or	2,2,2
      END_FTR_SECTION_IFCLR(FOO)
      
      and the resulting code will be either,
      
      	or	1,1,1
      	nop
      
      or,
      	nop
      	or	2,2,2
      
      For small code segments this is fine, but for larger code blocks and in
      performance critical code segments, it would be nice to avoid the nops.
      This commit starts to implement logic to allow the following:
      
      BEGIN_FTR_SECTION
      	or	1,1,1
      FTR_SECTION_ELSE
      	or	2,2,2
      ALT_FTR_SECTION_END_IFSET(FOO)
      
      and the resulting code will be:
      
      	or	1,1,1
      or,
      	or	2,2,2
      
      We achieve this by extending the existing FTR macros.  The current feature
      section semantics just become a special case, i.e. if the else case is
      empty we nop out the default case.
      
      The key limitation is that the size of the else case must be less than or
      equal to the size of the default case. If the else case is smaller the
      remainder of the section is nop'ed.
      
      We let the linker put the else-case code in with the rest of the text,
      so that relative branches from the else case are more likely to link.
      This has the disadvantage that we can't free the unused else cases.
      
      This commit introduces the required macro and linker script changes, but
      does not enable the patching of the alternative sections.
      
      We also need to update two hand-written section entries in reg.h and timex.h.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
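
      A conceptual sketch of the patching rule described above (the else case
      must be no larger than the default case, with the remainder nop-filled);
      this is not the in-tree fixup code, which also has to fix up relative
      branches:

      	#include <stdint.h>
      	#include <string.h>

      	#define PPC_NOP	0x60000000u	/* "ori 0,0,0", used as a nop */

      	static void apply_alt_section(uint32_t *start, uint32_t *end,
      				      const uint32_t *alt_start,
      				      const uint32_t *alt_end)
      	{
      		size_t size = end - start;
      		size_t alt_size = alt_end - alt_start;	/* must be <= size */
      		size_t i;

      		/* Feature clear: overwrite the default case with the else
      		 * case, then nop out whatever is left of the section. */
      		memcpy(start, alt_start, alt_size * sizeof(uint32_t));
      		for (i = alt_size; i < size; i++)
      			start[i] = PPC_NOP;
      	}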
  14. 26 June 2008 (1 commit)
  15. 03 March 2008 (1 commit)
  16. 06 February 2008 (1 commit)
  17. 25 January 2008 (1 commit)
  18. 11 December 2007 (1 commit)
  19. 17 September 2007 (1 commit)
  20. 13 September 2007 (1 commit)
  21. 10 July 2007 (1 commit)
  22. 24 April 2007 (1 commit)
  23. 07 February 2007 (3 commits)
  24. 09 December 2006 (2 commits)
  25. 25 October 2006 (1 commit)
  26. 23 October 2006 (1 commit)
  27. 06 October 2006 (1 commit)
  28. 13 September 2006 (1 commit)
  29. 21 June 2006 (1 commit)
    • [POWERPC] cell: add RAS support · acf7d768
      Committed by Benjamin Herrenschmidt
      This is a first version of support for the Cell BE "Reliability,
      Availability and Serviceability" features.
      
      It doesn't yet handle some of the RAS interrupts (the ones described in
      iic_is/iic_irr); I'm still working on a proper way to expose these.  They
      are essentially a cascaded controller by themselves (sic!), though I may
      just handle them locally in the iic driver.  I also need to sync with
      David Erb on the way he hooked in the performance monitor interrupt.
      
      So that's all for 2.6.17 and I'll do more work on that with my rework of
      the powerpc interrupt layer that I'm hacking on at the moment.
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  30. 15 June 2006 (2 commits)