1. 12 Mar 2016 (3 commits)
    • powerpc32: remove ioremap_base · e974cd4b
      Authored by Christophe Leroy
      ioremap_base is never initialised and is used nowhere, so remove it.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Scott Wood <oss@buserror.net>
    • powerpc32: refactor x_mapped_by_bats() and x_mapped_by_tlbcam() together · 3084cdb7
      Authored by Christophe Leroy
      x_mapped_by_bats() and x_mapped_by_tlbcam() serve the same kind of
      purpose and are never defined at the same time, so rename them
      x_block_mapped() and define them in the relevant places.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Scott Wood <oss@buserror.net>
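      The resulting shape can be sketched as follows; this is an
      illustrative user-space model, not the kernel source (the struct
      fields and array size are made up):

          #include <stdint.h>

          typedef uint64_t phys_addr_t;

          /* One array models the BATs on 6xx, the other the tlbcam entries
           * on FSL Book-E; only one of the two exists in any given build. */
          struct block_map { unsigned long va; phys_addr_t pa; unsigned long size; };
          static struct block_map maps[8];
          static int n_maps;

          /* Formerly v_mapped_by_bats() / v_mapped_by_tlbcam(): return the
           * physical address backing va, or 0 if no block mapping covers it. */
          phys_addr_t v_block_mapped(unsigned long va)
          {
              for (int i = 0; i < n_maps; i++)
                  if (va >= maps[i].va && va - maps[i].va < maps[i].size)
                      return maps[i].pa + (va - maps[i].va);
              return 0;
          }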
    • powerpc/8xx: Map linear kernel RAM with 8M pages · a372acfa
      Authored by Christophe Leroy
      On a live running system (a VoIP gateway for Air Traffic Control),
      over a 10 minute period (with 277s idle), we get 87 million DTLB
      misses, and approximately 35 seconds are spent in the DTLB handler.
      This represents 5.8% of the overall time and even 10.8% of the
      non-idle time.
      Of those 87 million DTLB misses, 15% are on user addresses and
      85% are on kernel addresses. Within the kernel addresses, 93%
      are in the linear address space and only 7% are in the virtual
      address space.
      
      The MPC8xx has no BATs, but it has an 8M page size. This patch maps
      kernel RAM with 8M pages, on the same model as what is done on
      the 40x.
      
      In 4k pages mode, each PGD entry maps a 4M area: we map every two
      entries to the same 8M physical page. In each second entry, we add
      4M to the page physical address to ease the life of the FixupDAR
      routine. This is simply ignored by the hardware.
      
      In 16k pages mode, each PGD entry maps a 64M area and points to the
      first page of that area; the DTLB handler adds the 3 bits from the
      EPN to select the correct page.
      
      With this patch applied, we now get only 13 million TLB misses
      during the 10 minute period. The idle time has increased to 313s,
      and the overall time spent in the DTLB miss handler is 6.3s, which
      represents 1% of the overall time and 2.2% of the non-idle time.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Scott Wood <oss@buserror.net>
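      The 4k-mode pairing described above can be modelled in a few lines;
      the following is a stand-alone sketch with invented names, not the
      actual head_8xx.S/page-table code (only the physical base of each
      PGD slot is modelled, flags are omitted):

          #include <stdint.h>

          #define SZ_4M  0x00400000UL
          #define SZ_8M  0x00800000UL
          #define N_PGD  1024                  /* 4GB / 4MB per PGD slot */

          static uint32_t pgd[N_PGD];

          /* Enter each 8M page twice: the second slot's address is bumped
           * by 4M so that FixupDAR can recompute the mapping easily; the
           * hardware ignores it. */
          static void map_linear_8m(uint32_t va, uint32_t phys, uint32_t ram_size)
          {
              for (uint32_t off = 0; off < ram_size; off += SZ_8M) {
                  uint32_t idx = (va + off) / SZ_4M;
                  pgd[idx]     = phys + off;
                  pgd[idx + 1] = phys + off + SZ_4M;
              }
          }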
  2. 29 Feb 2016 (1 commit)
  3. 28 Oct 2015 (1 commit)
    • powerpc/fsl-booke-64: Don't limit ppc64_rma_size to one TLB entry · eba5de8d
      Authored by Scott Wood
      This is required for kdump to work when loaded at an address that
      does not fall within the first TLB entry, which can easily happen:
      the lower limit is enforced via reserved memory, which doesn't
      affect how much is mapped, while the upper limit is enforced via a
      different mechanism that does. Thus more TLB entries are needed
      than would normally be used, as the total memory to be mapped might
      not be a power of two (mapping 768M, for example, takes a 512M
      entry plus a 256M entry rather than a single one).
      Signed-off-by: Scott Wood <scottwood@freescale.com>
  4. 23 Oct 2015 (1 commit)
    • powerpc/85xx: Load all early TLB entries at once · d9e1831a
      Authored by Scott Wood
      Use an AS=1 trampoline TLB entry to allow all normal TLB1 entries
      to be loaded at once. This avoids the need to keep the translation
      that the code is executing from in the same TLB entry in the final
      TLB configuration as during early boot, which in turn is helpful
      for relocatable kernels (e.g. kdump), where the kernel is not
      running from what would be the first TLB entry.
      
      On e6500, we limit map_mem_in_cams() to the primary hwthread of a
      core (the boot cpu is always considered primary, as a kdump kernel
      can be entered on any cpu). Each TLB only needs to be set up once,
      and when we do, we don't want another thread to be running while we
      create a temporary trampoline TLB1 entry.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
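      The trampoline sequence can be pictured as follows. The real work
      is done in assembly; every helper name below is a hypothetical
      C-level stand-in, and the sketch only shows the ordering of steps:

          /* Hypothetical stand-ins for the real assembly helpers. */
          extern void write_tlb1_entry(int slot, int as, unsigned long va,
                                       unsigned long pa);
          extern void invalidate_tlb1_entry(int slot);
          extern void switch_to_as1(void);   /* rfi with MSR[IS]/MSR[DS] set */
          extern void restore_to_as0(void);  /* rfi back to address space 0  */
          extern unsigned long cam_va(int i), cam_pa(int i);
          extern unsigned long text_va(void), text_pa(void);

          void load_all_cams(int first, int num, int tmp_slot)
          {
              /* Trampoline: map the code we are running from, but in AS1. */
              write_tlb1_entry(tmp_slot, 1, text_va(), text_pa());
              switch_to_as1();
              /* Running in AS1 now, so every AS0 TLB1 entry (including the
               * one we booted from) can be rewritten in a single pass. */
              for (int i = 0; i < num; i++)
                  write_tlb1_entry(first + i, 0, cam_va(i), cam_pa(i));
              restore_to_as0();
              invalidate_tlb1_entry(tmp_slot);
          }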
  5. 07 Apr 2015 (1 commit)
  6. 29 Oct 2014 (1 commit)
  7. 10 Jan 2014 (3 commits)
    • powerpc/fsl_booke: smp support for booting a relocatable kernel above 64M · 0be7d969
      Authored by Kevin Hao
      When booting a secondary cpu above 64M, we face the same issue as
      on the boot cpu: PAGE_OFFSET maps to two different physical
      addresses in the init tlb and in the final map. So we have to use
      switch_to_as1()/restore_to_as0() to convert between these two maps.
      When restoring to AS0 on a secondary cpu, we only need to return to
      the caller, so add a new parameter to restore_to_as0() for this
      purpose.
      
      Use LOAD_REG_ADDR_PIC to get the address of variables which may be
      used before we set the final map in the cams for the secondary cpu,
      and move the setting of the cams a bit earlier to avoid unnecessary
      uses of LOAD_REG_ADDR_PIC.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Signed-off-by: Scott Wood <scottwood@freescale.com>
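      From memory, the resulting interface in mm/mmu_decl.h looks roughly
      like this; treat the exact signature as approximate:

          int  switch_to_as1(void);
          void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
          /* bootcpu != 0: fall through into the normal early boot path;
           * bootcpu == 0: a secondary cpu, simply return to the caller. */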
    • powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel · 7d2471f9
      Authored by Kevin Hao
      This is always true for a non-relocatable kernel; otherwise the
      kernel would get stuck. For a relocatable kernel it is a little
      more complicated: when booting a relocatable kernel, we just align
      the kernel start address down to 64M and map PAGE_OFFSET from
      there. The relocation is based on this virtual address. But if
      this address is not the same as memstart_addr, we have to change
      the mapping of PAGE_OFFSET to the real memstart_addr and perform
      another relocation.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      [scottwood@freescale.com: make offset long and non-negative in simple case]
      Signed-off-by: Scott Wood <scottwood@freescale.com>
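      A simplified sketch of that decision, modelled loosely on
      relocate_init(); relocate_again() is a hypothetical helper standing
      in for the second relocation pass:

          #define SZ_64M 0x04000000UL

          extern void relocate_again(long offset);   /* hypothetical */

          void relocate_init_sketch(unsigned long start, unsigned long memstart_addr)
          {
              /* The first relocation assumed RAM starts at the kernel's
               * load address rounded down to a 64M boundary. */
              unsigned long assumed = start & ~(SZ_64M - 1);

              if (assumed != memstart_addr) {
                  /* PAGE_OFFSET currently maps 'assumed': remap it to the
                   * real memstart_addr and relocate the kernel again. */
                  relocate_again((long)(assumed - memstart_addr));
              }
          }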
    • powerpc/fsl_booke: set the tlb entry for the kernel address in AS1 · 78a235ef
      Authored by Kevin Hao
      We use tlb1 entries to map low memory into the kernel address
      space. The current code assumes that the first tlb entry covers
      the kernel image, but this is not true in some special cases, such
      as when we run a relocatable kernel above 64M or set
      CONFIG_KERNEL_START above 64M. So we choose to switch to address
      space 1 before setting these tlb entries.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Signed-off-by: Scott Wood <scottwood@freescale.com>
  8. 12 Oct 2011 (1 commit)
    • powerpc/fsl-booke: Fix setup_initial_memory_limit to not blindly map · 1dc91c3e
      Authored by Kumar Gala
      On FSL Book-E devices we support multiple large TLB sizes, so we
      can get into situations in which the initial 1G TLB size is too big
      and we're asked for a size that is not mappable by a single entry
      (like 512M). The single entry is important because when we bring
      up secondary cores they need to ensure any data structure they need
      to access (e.g. the PACA or the stack) is always mapped.
      
      So we really need to determine what size will actually be mapped by
      the first TLB entry, to ensure we limit early memory references to
      that region. We refactor the map_mem_in_cams() code to provide a
      helper function that we can use to determine the size of the first
      TLB entry while taking size and alignment constraints into account.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
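      The helper's logic amounts to picking the largest power-of-4 size
      allowed by the remaining RAM, the alignment of both addresses, and
      the hardware maximum. A stand-alone approximation (the real helper
      is calc_cam_sz(); this user-space version is a sketch, and assumes
      virt | phys is non-zero):

          #include <stdint.h>

          unsigned long cam_size_sketch(unsigned long ram, unsigned long virt,
                                        uint64_t phys, unsigned int max_log2)
          {
              /* Largest power-of-4 (i.e. even power-of-2) not exceeding ram. */
              unsigned int sz = (unsigned int)
                  (8 * sizeof(long) - 1 - __builtin_clzl(ram)) & ~1U;
              /* Both addresses must be aligned to the entry size. */
              unsigned int align = (unsigned int)__builtin_ctzll(virt | phys) & ~1U;

              if (sz > align)    sz = align;
              if (sz > max_log2) sz = max_log2;  /* hw limit, e.g. 28 = 256M */
              return 1UL << sz;
          }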
  9. 14 Oct 2010 (1 commit)
    • powerpc/fsl-booke64: Use TLB CAMs to cover linear mapping on FSL 64-bit chips · 55fd766b
      Authored by Kumar Gala
      Freescale parts typically have a TLB array for large mappings into
      which we can bolt the linear mapping. We reuse the code that
      already exists on PPC32 to set up the linear mapping on the 64-bit
      side so that it is covered by bolted TLB entries, and we use a
      quarter of the variable-size TLB array for this purpose.
      
      Additionally, we limit the amount of memory to what we can cover
      via bolted entries, so we don't get secondary faults in the TLB
      miss handlers. We should fix this limitation in the future.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
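      The bolting loop then looks roughly as below, reusing the size
      helper sketched under the calc_cam_sz entry above. Again this is an
      illustrative model of map_mem_in_cams(), not the source;
      settlbcam_sketch() is a hypothetical stand-in for writing one
      bolted entry:

          #include <stdint.h>

          extern void settlbcam_sketch(int idx, unsigned long virt,
                                       uint64_t phys, unsigned long size);
          extern unsigned long cam_size_sketch(unsigned long ram,
                                               unsigned long virt,
                                               uint64_t phys,
                                               unsigned int max_log2);

          /* Returns how much of the linear mapping the bolted entries
           * cover; memory beyond that amount is simply left unused. */
          unsigned long map_mem_in_cams_sketch(unsigned long ram,
                                               unsigned long virt,
                                               uint64_t phys, int max_entries)
          {
              unsigned long mapped = 0;

              for (int i = 0; i < max_entries && ram; i++) {
                  unsigned long sz = cam_size_sketch(ram, virt, phys, 28);
                  settlbcam_sketch(i, virt, phys, sz);
                  virt += sz; phys += sz; ram -= sz; mapped += sz;
              }
              return mapped;
          }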
  10. 17 May 2010 (1 commit)
  11. 14 May 2010 (1 commit)
  12. 05 May 2010 (1 commit)
  13. 15 Dec 2009 (1 commit)
  14. 13 Dec 2009 (2 commits)
  15. 21 Nov 2009 (1 commit)
  16. 20 Aug 2009 (3 commits)
  17. 08 Jan 2009 (3 commits)
  18. 21 Dec 2008 (1 commit)
    • powerpc/mm: Split low level tlb invalidate for nohash processors · 2a4aca11
      Authored by Benjamin Herrenschmidt
      Currently, the various forms of low-level TLB invalidation are all
      implemented in misc_32.S for 32-bit processors, in a fairly scary
      mess of #ifdefs, with interesting duplication such as a whole
      bunch of code for the FSL _tlbie and _tlbia that is no longer used.
      
      This moves things around so that _tlbie is now defined in
      hash_low_32.S and is only used by the 32-bit hash code, and all
      nohash CPUs use the various _tlbil_* forms, which are moved to
      a new file, tlb_nohash_low.S.
      
      I moved all the definitions for that stuff out of
      include/asm/tlbflush.h, as they are really internal mm stuff, into
      mm/mmu_decl.h.
      
      The code should have no functional changes. I kept some variants
      inline for trivial forms on things like the 40x and 8xx.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
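      The nohash entry points that moved into mm/mmu_decl.h look like
      this (reconstructed from the description; treat the exact
      signatures as approximate):

          /* Low-level invalidation primitives used by all nohash CPUs,
           * implemented in the new tlb_nohash_low.S (or inline for the
           * trivial 40x/8xx variants): */
          extern void _tlbil_all(void);                /* flush everything */
          extern void _tlbil_pid(unsigned int pid);    /* flush one PID    */
          extern void _tlbil_va(unsigned long address,
                                unsigned int pid);     /* flush one page   */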
  19. 16 Dec 2008 (1 commit)
  20. 10 Jul 2008 (1 commit)
  21. 30 Jun 2008 (1 commit)
  22. 17 Apr 2008 (3 commits)
  23. 20 Nov 2007 (1 commit)
  24. 01 Nov 2007 (1 commit)
  25. 14 Jun 2007 (3 commits)
    • [POWERPC] Kill typedef-ed structs for hash PTEs and BATs · 8e561e7e
      Authored by David Gibson
      Using typedefs to rename structure types is frowned upon by
      CodingStyle. However, we do so for the hash PTE structure on both
      ppc32 (where it's called "PTE") and ppc64 (where it's called
      "hpte_t"). On ppc32 we also have such a typedef for the BATs
      ("BAT").
      
      This removes this unhelpful use of typedefs, in the process
      bringing ppc32 and ppc64 closer together by using the name
      "struct hash_pte" in both cases.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
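      The shape of the change, in miniature (the field names here are
      illustrative, not the full structure):

          /* before, on ppc32: */
          typedef struct { unsigned long v; unsigned long rpn; } PTE;

          /* after, on both ppc32 and ppc64: */
          struct hash_pte { unsigned long v; unsigned long rpn; };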
    • [POWERPC] Remove the dregs of APUS support from arch/powerpc · f21f49ea
      Authored by David Gibson
      APUS (the Amiga Power-Up System) is not supported under
      arch/powerpc, and it's unlikely it ever will be. Therefore, this
      patch removes the fragments of APUS support code from arch/powerpc
      which had been copied from arch/ppc.
      
      A few APUS references are left in asm-powerpc in .h files which are
      still used from arch/ppc.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [POWERPC] Rewrite IO allocation & mapping on powerpc64 · 3d5134ee
      Authored by Benjamin Herrenschmidt
      This rewrites pretty much from scratch the handling of MMIO and PIO
      space allocations on powerpc64. The main goals are:
      
       - Get rid of imalloc and use more common code where possible
       - Simplify the current mess so that PIO space is allocated and
         mapped in a single place for PCI bridges
       - Handle allocation constraints of PIO for all bridges, including
         hot-plugged ones, within the 2GB space reserved for IO ports,
         so that devices on hot-plugged buses will now work with drivers
         that assume IO ports fit in an int
       - Clean up and separate tracking of the ISA space in the reserved
         low 64K of IO space. No ISA -> nothing mapped there.
      
      I booted a cell blade with IDE on PIO and MMIO and a dual G5 so
      far, that's it :-)
      
      With this patch, all allocations are done using the code in
      mm/vmalloc.c, though we use the low-level __get_vm_area() with
      explicit start/stop constraints in order to manage separate
      areas for vmalloc/vmap, ioremap, and PCI IO.
      
      This greatly simplifies a lot of things, as you can see in the
      diffstat of the patch :-)
      
      A new pair of functions, pcibios_map/unmap_io_space(), now replaces
      all of the previous code that used to manipulate PCI IO space.
      The allocation is done at mapping time, which is now called from
      scan_phb's, just before the devices are probed (instead of after,
      which is by itself a bug fix). The only other caller is the PCI
      hotplug code for hot-adding PCI-PCI bridges (slots).
      
      imalloc is gone, as is the "sub-allocation" thing, but I do believe
      that hotplug should still work, in the sense that the space
      allocation is always done by the PHB; but if you unmap a child bus
      of this PHB (which seems to be possible), then the code should
      properly tear down all the HPTE mappings for that area of the
      PHB-allocated IO space.
      
      I now always reserve the first 64K of IO space for the bridge with
      the ISA bus on it. I have moved the code for tracking ISA into a
      separate file, which should also make it smarter if we are ever
      capable of hot-unplugging or re-plugging an ISA bridge.
      
      This should have a side effect on platforms like powermac, where
      VGA IOs will no longer work. This is done on purpose, though, as
      they would have worked semi-randomly before. The idea at this point
      is to isolate drivers that might need to access those and fix them
      by providing a proper function to obtain an offset to the legacy
      IOs of a given bus.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
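      The central step can be sketched as below, modelled on what
      pcibios_map_io_space() does. Error handling and the actual HPTE
      mapping are elided, and the exact signature of __get_vm_area() at
      the time should be treated as approximate:

          static int map_phb_io_space_sketch(struct pci_controller *hose)
          {
              struct vm_struct *area;

              /* Reserve virtual space for this bridge's IO window with
               * explicit bounds, so it lands in the region set aside for
               * PCI IO rather than in general vmalloc space. */
              area = __get_vm_area(hose->pci_io_size, 0,
                                   PHB_IO_BASE, PHB_IO_END);
              if (area == NULL)
                  return -ENOMEM;
              hose->io_base_alloc = area->addr;

              /* ...then create the HPTE mappings from the bridge's
               * physical IO window to area->addr... */
              return 0;
          }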
  26. 02 May 2007 (1 commit)
  27. 24 Apr 2007 (1 commit)
    • [POWERPC] Cleanup and fix breakage in tlbflush.h · 62102307
      Authored by David Gibson
      BenH's commit a741e679 in powerpc.git,
      although (AFAICT) only intended to affect ppc64, also has
      side effects which break 44x. I think 40x, 8xx and Freescale Book E
      are also affected, though I haven't tested them.
      
      The problem lies in unconditionally removing flush_tlb_pending()
      from the versions of flush_tlb_mm(), flush_tlb_range() and
      flush_tlb_kernel_range() used on ppc64, which are also used on the
      embedded platforms mentioned above.
      
      The patch below cleans up the convoluted #ifdef logic in
      tlbflush.h, in the process restoring the necessary flushes for the
      software TLB platforms. There are three sets of definitions for the
      flushing hooks: the software TLB versions (revised to avoid using
      names which appear to be related to TLB batching), the 32-bit
      hash-based versions (external functions), and the 64-bit hash-based
      versions (which implement batching).
      
      It also moves the declaration of update_mmu_cache() so that it is
      always in tlbflush.h (previously it was there except for PPC64,
      where it was in pgtable.h).
      
      Booted on Ebony (440GP) and compiled for 64-bit and 32-bit
      multiplatform.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
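      In outline, the cleaned-up tlbflush.h ends up with the three-way
      split below. This is illustrative, not the verbatim header; only
      flush_tlb_mm() is shown, and the softTLB body in particular is an
      assumption:

          #if defined(CONFIG_4xx) || defined(CONFIG_8xx) || defined(CONFIG_FSL_BOOKE)
          /* Software-loaded TLBs: the hooks must really invalidate entries. */
          static inline void flush_tlb_mm(struct mm_struct *mm)
          {
              _tlbia();
          }
          #elif defined(CONFIG_PPC32)
          /* 32-bit hash MMU: out-of-line external functions. */
          extern void flush_tlb_mm(struct mm_struct *mm);
          #else
          /* 64-bit hash MMU: flushes are gathered and batched. */
          extern void flush_tlb_mm(struct mm_struct *mm);
          #endif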