1. 12 Oct, 2011 (1 commit)
    • powerpc/fsl-booke: Fix setup_initial_memory_limit to not blindly map · 1dc91c3e
      Committed by Kumar Gala
      On FSL Book-E devices we support multiple large TLB sizes and so we can
      get into situations in which the initial 1G TLB size is too big and
      we're asked for a size that is not mappable by a single entry (like
      512M).  The single entry is important because when we bring up secondary
      cores they need to ensure that any data structure they need to access (e.g.
      PACA or stack) is always mapped.
      
      So we really need to determine what size will actually be mapped by the
      first TLB entry to ensure we limit early memory references to that
      region.  We refactor the map_mem_in_cams() code to provide a helper
      function that determines the size of the first TLB entry while taking
      size and alignment constraints into account (a rough sketch of such a
      helper follows this entry).
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
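
      An illustrative sketch of the kind of helper described above (the
      function name, the 256M cap, and the field handling are assumptions for
      the example, not the literal patch):

          /*
           * Sketch: choose the largest Book-E CAM/TLB1 entry size that can map
           * the start of RAM.  Entry sizes are powers of 4 KB, and an entry
           * must be naturally aligned to both its virtual and physical base.
           */
          static unsigned long first_cam_size(unsigned long ram, unsigned long virt,
                                              phys_addr_t phys)
          {
                  unsigned int shift = ilog2(ram) & ~1U;                    /* power-of-4 size <= ram   */
                  unsigned int align = __ffs(virt | (unsigned long)phys) & ~1U;  /* natural alignment   */
                  unsigned int max_cam = 28;                                /* assumed 256M max entry   */

                  if (shift > align)
                          shift = align;
                  if (shift > max_cam)
                          shift = max_cam;

                  return 1UL << shift;
          }
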
  2. 20 Sep, 2011 (1 commit)
    • powerpc: Hugetlb for BookE · 41151e77
      Committed by Becky Bruce
      Enable hugepages on Freescale BookE processors.  This allows the kernel to
      use huge TLB entries to map pages, which can greatly reduce the number of
      TLB misses and the amount of TLB thrashing experienced by applications with
      large memory footprints.  Care should be taken when using this on FSL
      processors, as the number of large TLB entries supported by the core is low
      (16-64) on current processors.
      
      The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g.
      Page sizes larger than the max zone size are called "gigantic" pages and
      must be allocated on the command line (and cannot be deallocated); an
      example boot line follows this entry.
      
      This is currently only fully implemented for Freescale 32-bit BookE
      processors, but there is some infrastructure in the code for
      64-bit BookE.
      Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
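
      Not from the commit text itself, but as an illustration of the
      command-line reservation mentioned above, a boot line reserving huge and
      gigantic pages might look like this (sizes and counts are example
      values; gigantic sizes reserved this way cannot be freed later):

          hugepagesz=256m hugepages=8 hugepagesz=1g hugepages=1
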
  3. 12 Jul, 2011 (1 commit)
  4. 08 Jul, 2011 (1 commit)
  5. 29 Jun, 2011 (1 commit)
    • powerpc/book3e-64: use a separate TLB handler when linear map is bolted · f67f4ef5
      Committed by Scott Wood
      On MMUs such as FSL where we can guarantee the entire linear mapping is
      bolted, we don't need to worry about linear TLB misses.  If on top of
      that we do a full table walk, we get rid of all recursive TLB faults, and
      can dispense with some state saving.  This gains a few percent on
      TLB-miss-heavy workloads, and around 50% on a benchmark that had a high
      rate of virtual page table faults under the normal handler.  (A
      simplified sketch of such a walk follows this entry.)
      
      While touching the EX_TLB layout, remove EX_TLB_MMUCR0, EX_TLB_SRR0, and
      EX_TLB_SRR1 as they're not used.
      
      [BenH: Fixed build with 64K pages (wsp config)]
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
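
      A simplified, self-contained sketch of the idea (the real handler is
      assembly; the page-table layout and constants below are illustrative
      assumptions, not the actual book3e format):

          #define ENTRIES         512UL
          #define PRESENT         0x1UL

          /*
           * "Full table walk": resolve a faulting virtual address by reading
           * the page tables directly through the bolted linear mapping, so the
           * walk itself can never take a recursive TLB miss.
           */
          static unsigned long walk_pte(unsigned long *pgd, unsigned long va)
          {
                  unsigned long pgd_e, pmd_e;
                  unsigned long *pmd, *pte;

                  pgd_e = pgd[(va >> 30) & (ENTRIES - 1)];   /* illustrative 3-level split */
                  if (!(pgd_e & PRESENT))
                          return 0;                          /* no translation: real fault */

                  pmd = (unsigned long *)(pgd_e & ~0xfffUL); /* table address via linear map */
                  pmd_e = pmd[(va >> 21) & (ENTRIES - 1)];
                  if (!(pmd_e & PRESENT))
                          return 0;

                  pte = (unsigned long *)(pmd_e & ~0xfffUL);
                  return pte[(va >> 12) & (ENTRIES - 1)];    /* PTE to install in the TLB */
          }
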
  6. 17 Jun, 2011 (1 commit)
  7. 25 May, 2011 (2 commits)
    • mm, powerpc: move the RCU page-table freeing into generic code · 26723911
      Committed by Peter Zijlstra
      In case other architectures require RCU-freed page tables to implement
      gup_fast(), software-filled hash tables, and similar things, provide the
      means to do so by moving the logic into generic code (a sketch of the
      arch-side hooks follows this entry).
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Requested-by: David Miller <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
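
      Not part of the commit text; a rough sketch of how an architecture might
      hook into the RCU-deferred freeing moved into generic code here (the
      arch-side wrapper shown, and passing a pgtable_t as a struct page, are
      assumptions for the example):

          /* arch callback the generic code invokes once a grace period has passed */
          void __tlb_remove_table(void *table)
          {
                  free_page((unsigned long)table);
          }

          /* arch hook: queue a just-unlinked PTE page for RCU-deferred freeing so
           * that lockless walkers such as gup_fast() still traversing it stay safe */
          static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte_page,
                                            unsigned long address)
          {
                  tlb_remove_table(tlb, page_address(pte_page));
          }
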
    • powerpc: mmu_gather rework · d6bf29b4
      Committed by Peter Zijlstra
      Fix up powerpc to the new mmu_gather stuff.
      
      PPC has an extra batching queue to RCU-free the actual page-table
      allocations; use the ARCH extensions for that for now.
      
      For the ppc64_tlb_batch, which tracks the vaddrs to unhash from the
      hardware hash-table, keep using per-cpu arrays but flush on context switch
      and use a TLF bit to track the lazy_mmu state.  (A sketch of the generic
      mmu_gather flow follows this entry.)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
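
      A sketch of the generic mmu_gather flow an unmap path follows after this
      rework (argument lists are roughly those of this era and have changed
      since; unmap_one() is an illustrative stand-in, not a real function):

          static void unmap_one(struct mm_struct *mm, struct page *page,
                                unsigned long start, unsigned long end)
          {
                  struct mmu_gather tlb;

                  tlb_gather_mmu(&tlb, mm, 0);            /* 0: not a full-mm teardown  */
                  /* ... clear the PTE(s) for the range, then batch the page(s): ... */
                  tlb_remove_page(&tlb, page);            /* may trigger an early flush */
                  tlb_finish_mmu(&tlb, start, end);       /* flush TLBs, free the batch */
          }
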
  8. 18 Nov, 2010 (1 commit)
  9. 14 Oct, 2010 (2 commits)
    • powerpc/fsl-booke64: Use TLB CAMs to cover linear mapping on FSL 64-bit chips · 55fd766b
      Committed by Kumar Gala
      Freescale parts typically have a TLB array for large mappings that we can
      bolt the linear mapping into.  We reuse the code that already exists
      for PPC32 on the 64-bit side to set up the linear mapping so that it is
      covered by bolted TLB entries.  We use a quarter of the variable-size
      TLB array for this purpose.
      
      Additionally, we limit the amount of memory to what we can cover via
      bolted entries so we don't get secondary faults in the TLB miss
      handlers.  We should fix this limitation in the future.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
    • powerpc/fsl-booke: Add support for FSL Arch v1.0 MMU in setup_page_sizes · 988cf86d
      Committed by Kumar Gala
      Update setup_page_sizes() to support an FSL-style MMU v1.0
      implementation.  On such a processor, we don't have the TLB0PS or EPTCFG
      registers (and accessing them may cause exceptions).  We need to parse
      the older TLBnCFG format for page-size support (a sketch of such a
      decode follows this entry).  Additionally, since we are an FSL
      implementation, assume we have two TLB arrays and that the second array
      contains the variable-size pages.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
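
      An illustrative sketch of the kind of decode described above (the field
      positions and helper name are assumptions for the example, not quoted
      from the manual; Book-E page sizes are encoded as 4^n KB):

          static void decode_tlbncfg(u32 tlbncfg)
          {
                  unsigned int min = (tlbncfg >> 20) & 0xf;   /* assumed MINSIZE field */
                  unsigned int max = (tlbncfg >> 16) & 0xf;   /* assumed MAXSIZE field */
                  unsigned int min_shift = 10 + 2 * min;      /* log2(bytes) of smallest page */
                  unsigned int max_shift = 10 + 2 * max;      /* log2(bytes) of largest page  */

                  pr_info("TLB array page sizes: %luKB .. %luKB\n",
                          1UL << (min_shift - 10), 1UL << (max_shift - 10));
          }
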
  10. 05 Aug, 2010 (2 commits)
    • memblock: Remove rmo_size, burry it in arch/powerpc where it belongs · cd3db0c4
      Committed by Benjamin Herrenschmidt
      The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact
      server ppc64 though I hijack it on embedded ppc64 for similar purposes)
      and represents the area of memory that can be accessed in real mode
      (i.e. with the MMU off) or, on embedded, from the exception vectors
      (which are bolted in the TLB), which pretty much boils down to the same
      thing.
      
      We take that out of the generic MEMBLOCK data structure and move it into
      arch/powerpc where it belongs, renaming it to "RMA" while at it.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • memblock: Introduce default allocation limit and use it to replace explicit ones · e63075a3
      Committed by Benjamin Herrenschmidt
      This introduces memblock.current_limit, which is used to limit allocations
      from memblock_alloc() or memblock_alloc_base(..., MEMBLOCK_ALLOC_ACCESSIBLE).
      
      The old MEMBLOCK_ALLOC_ANYWHERE changes value from 0 to ~(u64)0 and can still
      be used with memblock_alloc_base() to allocate really anywhere.
      
      It is -no-longer- cropped to MEMBLOCK_REAL_LIMIT which disappears.
      
      Note to archs: I'm leaving the default limit at MEMBLOCK_ALLOC_ANYWHERE.  I
      strongly recommend that you set an appropriate limit during boot in
      order to guarantee that a memblock_alloc() at any time results in
      something that is accessible with a simple __va() (a sketch of such a
      boot-time step follows this entry).
      
      The reason is that a subsequent patch will introduce the ability for
      the array to resize itself by reallocating itself. The MEMBLOCK core will
      honor the current limit when performing those allocations.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
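
      A sketch of the arch-side boot step this note asks for (the function
      name is hypothetical, the 256M value is just an example, and the setter
      is assumed to be the accessor for memblock.current_limit):

          void __init arch_setup_memblock_limit(void)
          {
                  /* only the first 256M is guaranteed accessible this early */
                  memblock_set_current_limit(0x10000000);

                  /* from here on, a plain memblock_alloc() stays below that
                   * boundary, so the result can be used via a simple __va() */
          }
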
  11. 14 Jul, 2010 (3 commits)
  12. 19 Feb, 2010 (1 commit)
  13. 20 Aug, 2009 (5 commits)
  14. 08 Apr, 2009 (1 commit)
  15. 24 Mar, 2009 (1 commit)
  16. 08 Jan, 2009 (1 commit)
  17. 21 Dec, 2008 (1 commit)