1. 28 Apr 2014, 1 commit
  2. 11 Oct 2013, 1 commit
  3. 24 Jul 2013, 2 commits
  4. 21 Jun 2013, 2 commits
  5. 01 Jun 2013, 1 commit
  6. 30 Apr 2013, 4 commits
  7. 27 Sep 2012, 1 commit
  8. 17 Sep 2012, 2 commits
  9. 05 Sep 2012, 1 commit
  10. 12 Jul 2011, 2 commits
    • KVM: PPC: book3s_hv: Add support for PPC970-family processors · 9e368f29
      Committed by Paul Mackerras
      This adds support for running KVM guests in supervisor mode on those
      PPC970 processors that have a usable hypervisor mode.  Unfortunately,
      Apple G5 machines have supervisor mode disabled (MSR[HV] is forced to
      1), but the YDL PowerStation does have a usable hypervisor mode.
      
      There are several differences between the PPC970 and POWER7 in how
      guests are managed.  These differences are accommodated using the
      CPU_FTR_ARCH_201 (PPC970) and CPU_FTR_ARCH_206 (POWER7) CPU feature
      bits.  Notably, on PPC970:
      
      * The LPCR, LPID or RMOR registers don't exist, and the functions of
        those registers are provided by bits in HID4 and one bit in HID0.
      
      * External interrupts can be directed to the hypervisor, but unlike
        POWER7 they are masked by MSR[EE] in non-hypervisor modes and use
        SRR0/1 not HSRR0/1.
      
      * There is no virtual RMA (VRMA) mode; the guest must use an RMO
        (real mode offset) area.
      
      * The TLB entries are not tagged with the LPID, so it is necessary to
        flush the whole TLB on partition switch.  Furthermore, when switching
        partitions we have to ensure that no other CPU is executing the tlbie
        or tlbsync instructions in either the old or the new partition,
        otherwise undefined behaviour can occur.
      
      * The PMU has 8 counters (PMC registers) rather than 6.
      
      * The DSCR, PURR, SPURR, AMR, AMOR, UAMOR registers don't exist.
      
      * The SLB has 64 entries rather than 32.
      
      * There is no mediated external interrupt facility, so if we switch to
        a guest that has a virtual external interrupt pending but the guest
        has MSR[EE] = 0, we have to arrange to have an interrupt pending for
        it so that we can get control back once it re-enables interrupts.  We
        do that by sending ourselves an IPI with smp_send_reschedule after
        hard-disabling interrupts.
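      
      A minimal sketch of that trick (hedged: hard_irq_disable(),
      smp_send_reschedule() and smp_processor_id() are the real kernel
      helpers named above; the surrounding context is illustrative):
      
          /* PPC970: no mediated external interrupts, so leave an IPI
           * pending before entering the guest.  The moment the guest
           * sets MSR[EE] = 1, the pending interrupt traps back to us. */
          hard_irq_disable();                       /* hard-disable first */
          smp_send_reschedule(smp_processor_id());  /* IPI to ourselves */
          /* ... enter the guest; control returns via the pending IPI
           * as soon as the guest re-enables external interrupts ... */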
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • powerpc, KVM: Split HVMODE_206 cpu feature bit into separate HV and architecture bits · 969391c5
      Committed by Paul Mackerras
      This replaces the single CPU_FTR_HVMODE_206 bit with two bits, one to
      indicate that we have a usable hypervisor mode, and another to indicate
      that the processor conforms to PowerISA version 2.06.  We also add
      another bit to indicate that the processor conforms to ISA version 2.01
      and set that for PPC970 and derivatives.
      
      Some PPC970 chips (specifically those in Apple machines) have a
      hypervisor mode in that MSR[HV] is always 1, but the hypervisor mode
      is not useful in the sense that there is no way to run any code in
      supervisor mode (HV=0 PR=0).  On these processors, the LPES0 and LPES1
      bits in HID4 are always 0, and we use that as a way of detecting that
      hypervisor mode is not useful.
      
      Where we have a feature section in assembly code around code that
      only applies on POWER7 in hypervisor mode, we use a construct like
      
      END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
      
      The definition of END_FTR_SECTION_IFSET is such that the code will
      be enabled (not overwritten with nops) only if all bits in the
      provided mask are set.
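      
      For example, a complete feature section around hypothetical
      POWER7-hypervisor-only code would look like this (the macros are
      the real powerpc feature-fixup macros; the body is illustrative):
      
          BEGIN_FTR_SECTION
                  /* only runs with a usable HV mode on an ISA 2.06 CPU;
                   * nop'ed out everywhere else, e.g. on PPC970 */
                  mfspr   r5, SPRN_LPCR
          END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)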
      
      Note that the CPU feature check in __tlbie() only needs to check the
      ARCH_206 bit, not the HVMODE bit, because __tlbie() can only get called
      if we are running bare-metal, i.e. in hypervisor mode.
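      
      In C terms the check reduces to something like this sketch (the
      branch bodies are illustrative; cpu_has_feature() is the real
      helper):
      
          /* __tlbie() only runs bare-metal, so MSR[HV] is known to be 1
           * and testing CPU_FTR_HVMODE would be redundant here. */
          if (cpu_has_feature(CPU_FTR_ARCH_206)) {
                  /* ISA 2.06 form of the tlbie instruction */
          } else {
                  /* pre-2.06 form */
          }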
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  11. 04 May 2011, 1 commit
  12. 27 Apr 2011, 1 commit
  13. 19 Feb 2010, 1 commit
  14. 17 Feb 2010, 1 commit
  15. 21 May 2009, 1 commit
  16. 12 Oct 2007, 1 commit
    • [POWERPC] Use 1TB segments · 1189be65
      Committed by Paul Mackerras
      This makes the kernel use 1TB segments for all kernel mappings and for
      user addresses of 1TB and above, on machines which support them
      (currently POWER5+, POWER6 and PA6T).
      
      We detect that the machine supports 1TB segments by looking at the
      ibm,processor-segment-sizes property in the device tree.
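      
      A hedged sketch of such a probe (of_scan_flat_dt() and
      of_get_flat_dt_prop() are the real flattened-device-tree helpers;
      the callback body, and 0x28 as the 1TB segment-size encoding, are
      stated here as assumptions):
      
          static int __init scan_seg_sizes(unsigned long node, const char *uname,
                                           int depth, void *data)
          {
                  unsigned long size = 0;
                  const u32 *prop = of_get_flat_dt_prop(node,
                                  "ibm,processor-segment-sizes", &size);
      
                  if (prop == NULL)
                          return 0;       /* keep scanning other nodes */
                  for (; size >= 4; size -= 4, ++prop)
                          if (*prop == 0x28)      /* segment shift 40 => 1TB */
                                  return 1;       /* found: stop the scan */
                  return 0;
          }
      
          /* early boot: of_scan_flat_dt(scan_seg_sizes, NULL); a real
           * probe would also restrict itself to device_type == "cpu" */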
      
      We don't currently use 1TB segments for user addresses < 1T, since
      that would effectively prevent 32-bit processes from using huge pages
      unless we also had a way to revert to using 256MB segments.  That
      would be possible but would involve extra complications (such as
      keeping track of which segment size was used when HPTEs were inserted)
      and is not addressed here.
      
      Parts of this patch were originally written by Ben Herrenschmidt.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  17. 10 Jul 2007, 1 commit
  18. 14 Jun 2007, 2 commits
  19. 12 May 2007, 2 commits
  20. 10 May 2007, 1 commit
  21. 07 May 2007, 1 commit
  22. 13 Apr 2007, 1 commit
  23. 16 Oct 2006, 1 commit
  24. 28 Jun 2006, 1 commit
  25. 18 Jun 2006, 1 commit
  26. 09 Jun 2006, 1 commit
    • [PATCH] powerpc: Fix buglet with MMU hash management · c5cf0e30
      Committed by Benjamin Herrenschmidt
      Our MMU hash management code would not set the "C" bit (changed bit)
      in the hardware PTE when updating a RO PTE into a RW PTE. That would
      cause the hardware to possibly do a write back to the hash table to
      set it on the first store access, which, in addition to being a
      performance issue, might also hit a bug when running with native
      hash management (non-HV), as our code is specifically optimized for
      the case where no write back happens.
      
      Thus there is a very small theoretical window where a hash PTE can
      become corrupted if that HPTE has just been upgraded to read-write,
      a store access happens on it, and that races with another processor
      evicting that same slot. Since eviction (caused by an almost full
      hash) is extremely rare, fortunately the bug is very unlikely to
      happen.
      
      This fixes it by allowing the protection-bit update in the native
      hash handling to also set (but not clear) the "C" bit and, in order
      to also improve performance in the general case, by always setting
      that bit on newly inserted hash PTEs so that write-back really
      never happens.
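      
      A hedged sketch of both halves of the fix (HPTE_R_C, HPTE_R_PP and
      HPTE_R_N are the real masks in the second doubleword of a hash PTE;
      the surrounding variables are illustrative):
      
          /* insertion path: pre-dirty the entry so the hardware never
           * needs to write the hash PTE back on the first store */
          hpte_r |= HPTE_R_C;
      
          /* protection-update path: take PP/N from the new value and
           * allow C to be set, but never cleared, in the same update */
          hpte_r = (hpte_r & ~(HPTE_R_PP | HPTE_R_N)) |
                   (newpp & (HPTE_R_PP | HPTE_R_N | HPTE_R_C));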
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  27. 24 Feb 2006, 1 commit
  28. 07 Nov 2005, 1 commit
    • [PATCH] ppc64: support 64k pages · 3c726f8d
      Committed by Benjamin Herrenschmidt
      Adds a new CONFIG_PPC_64K_PAGES which, when enabled, changes the kernel
      base page size to 64K.  The resulting kernel still boots on any
      hardware.  On current machines with 4K pages support only, the kernel
      will maintain 16 "subpages" for each 64K page transparently.
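      
      The subpage arithmetic is straightforward (a sketch; the macro names
      are illustrative, not the patch's own):
      
          #define PAGE_SHIFT        16    /* 64K base pages */
          #define HW_PAGE_SHIFT     12    /* 4K hash-PTE granularity */
          #define SUBPAGES_PER_PAGE (1u << (PAGE_SHIFT - HW_PAGE_SHIFT)) /* 16 */
      
          /* index of the 4K subpage that an address falls into */
          #define SUBPAGE_IDX(addr) \
                  (((addr) >> HW_PAGE_SHIFT) & (SUBPAGES_PER_PAGE - 1))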
      
      Note that while real 64K-capable HW has been tested, the current
      patch will not enable it yet, as such hardware is not released yet,
      and I'm still verifying with the firmware architects the proper way
      to get the information from the newer hypervisors.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  29. 10 Oct 2005, 1 commit
  30. 28 Sep 2005, 1 commit
  31. 24 Sep 2005, 1 commit
    • [PATCH] ppc64: Fix huge pages MMU mapping bug · 67b10813
      Committed by Benjamin Herrenschmidt
      The current kernel has a couple of sneaky bugs in the ppc64 hugetlb
      code that cause huge pages to be potentially left stale in the hash
      table and TLBs (improperly invalidated), with all the nasty
      consequences that can have.
      
      One is that we forgot to set the "secondary" bit in the hash PTEs when
      hashing a huge page in the secondary bucket (fortunately very rare).
      
      The other is that on non-LPAR machines (like Apple G5s),
      flush_hash_range(), which is used to flush a batch of PTEs, simply
      did not work for huge pages. Historically, our huge page code didn't
      batch, but this was changed without fixing this routine. This patch
      fixes both.
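      
      For the first bug, the missing piece amounts to something like this
      sketch (HPTE_V_SECONDARY is the real flag in the first doubleword
      of a hash PTE; the surrounding code is illustrative):
      
          /* remember which bucket the entry landed in, so a later
           * invalidate recomputes the matching hash and finds it */
          if (in_secondary_bucket)
                  hpte_v |= HPTE_V_SECONDARY;   /* the bit the old code forgot */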
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>