1. 17 March 2013, 1 commit
    • powerpc: Update kernel VSID range · c60ac569
      Authored by Aneesh Kumar K.V
      This patch changes the kernel VSID range so that VSID_BITS is limited to 37.
      This enables us to support 64TB with a 65-bit VA (37 + 28). Without this patch
      we see boot hangs on platforms that only support a 65-bit VA.
      
      With this patch, the proto-VSID is now generated as follows:
      
      We first generate a 37-bit "proto-VSID". Proto-VSIDs are generated
      from the MMU context id and the effective segment id of the address.
      
      For user processes the max context id is limited to ((1ul << 19) - 5);
      for kernel space, we use the top 4 context ids to map addresses as below
      (a sketch of the computation follows this entry):
      0x7fffc -  [ 0xc000000000000000 - 0xc0003fffffffffff ]
      0x7fffd -  [ 0xd000000000000000 - 0xd0003fffffffffff ]
      0x7fffe -  [ 0xe000000000000000 - 0xe0003fffffffffff ]
      0x7ffff -  [ 0xf000000000000000 - 0xf0003fffffffffff ]
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Tested-by: Geoff Levand <geoff@infradead.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: <stable@vger.kernel.org> [v3.8]
      c60ac569
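
      A minimal sketch of the proto-VSID computation described above, assuming
      the constants implied by the commit text (19 context bits + 18 ESID bits
      = 37 bits, 256MB segments); names and values are illustrative, not the
      kernel's headers:

          #include <stdint.h>
          #include <stdio.h>

          #define CONTEXT_BITS      19
          #define ESID_BITS         18   /* 19 + 18 = 37-bit proto-VSID */
          #define SID_SHIFT         28   /* 256MB segments; 37 + 28 = 65-bit VA */
          #define MAX_USER_CONTEXT  ((1ul << CONTEXT_BITS) - 5)

          /* Kernel regions 0xc... through 0xf... take the top four context ids,
           * reproducing the 0x7fffc..0x7ffff table above. */
          static uint64_t kernel_context(uint64_t ea)
          {
                  return ((ea >> 60) - 0xc) + MAX_USER_CONTEXT + 1;
          }

          static uint64_t proto_vsid(uint64_t context, uint64_t ea)
          {
                  uint64_t esid = (ea & 0x0fffffffffffffffUL) >> SID_SHIFT;
                  return (context << ESID_BITS) | esid;
          }

          int main(void)
          {
                  uint64_t ea = 0xc000000000000000UL;
                  printf("context 0x%llx, proto-VSID 0x%llx\n",
                         (unsigned long long)kernel_context(ea),
                         (unsigned long long)proto_vsid(kernel_context(ea), ea));
                  return 0;
          }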
  2. 13 March 2013, 1 commit
  3. 15 February 2013, 1 commit
  4. 17 September 2012, 2 commits
  5. 05 September 2012, 1 commit
  6. 29 March 2012, 1 commit
  7. 21 March 2012, 1 commit
  8. 23 February 2012, 1 commit
    • fadump: Register for firmware assisted dump. · 3ccc00a7
      Authored by Mahesh Salgaonkar
      On 2012-02-20 11:02:51 Mon, Paul Mackerras wrote:
      > On Thu, Feb 16, 2012 at 04:44:30PM +0530, Mahesh J Salgaonkar wrote:
      >
      > If I have read the code correctly, we are going to get this printk on
      > non-pSeries machines or on older pSeries machines, even if the user
      > has not put the fadump=on option on the kernel command line.  The
      > printk will be annoying since there is no actual error condition.  It
      > seems to me that the condition for the printk should include
      > fw_dump.fadump_enabled.  In other words you should probably add
      >
      > 	if (!fw_dump.fadump_enabled)
      > 		return 0;
      >
      > at the beginning of the function.
      
      Hi Paul,
      
      Thanks for pointing it out. Please find the updated patch below.
      
      The existing patches above this one (4/10 through 10/10) apply cleanly
      on top of this update.
      
      Thanks,
      -Mahesh.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      3ccc00a7
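
      A hedged, stand-alone sketch of where the suggested guard sits; the
      fw_dump.fadump_enabled field comes from the quoted review, while the
      surrounding struct and logic are illustrative stand-ins:

          #include <stdio.h>

          /* Hypothetical stand-in for the kernel's fadump state. */
          struct fw_dump_state {
                  int fadump_enabled;   /* set by fadump=on on the command line */
                  int fadump_supported; /* set when firmware advertises fadump */
          };

          static struct fw_dump_state fw_dump;

          static int setup_fadump(void)
          {
                  /* The suggested guard: without fadump=on, return quietly so
                   * non-pSeries or older machines see no spurious message. */
                  if (!fw_dump.fadump_enabled)
                          return 0;

                  if (!fw_dump.fadump_supported) {
                          printf("Firmware-assisted dump is not supported on this hardware\n");
                          return 0;
                  }

                  /* ... register the dump regions with firmware here ... */
                  return 1;
          }

          int main(void)
          {
                  printf("setup_fadump() -> %d (silent without fadump=on)\n",
                         setup_fadump());
                  return 0;
          }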
  9. 01 November 2011, 1 commit
  10. 20 September 2011, 2 commits
    • powerpc: Fix oops when echoing bad values to /sys/devices/system/memory/probe · a1194097
      Authored by Anton Blanchard
      If we echo an address the hypervisor doesn't like into
      /sys/devices/system/memory/probe, we oops the box:
      
      # echo 0x10000000000 > /sys/devices/system/memory/probe
      
      kernel BUG at arch/powerpc/mm/hash_utils_64.c:541!
      
      The backtrace is:
      
      create_section_mapping
      arch_add_memory
      add_memory
      memory_probe_store
      sysdev_class_store
      sysfs_write_file
      vfs_write
      SyS_write
      
      In create_section_mapping() we BUG if htab_bolt_mapping() returns
      an error. A better approach is to return an error that can
      propagate back to userspace (see the sketch after this entry).
      
      Rerunning the test with this patch applied:
      
      # echo 0x10000000000 > /sys/devices/system/memory/probe
      -bash: echo: write error: Invalid argument
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Cc: stable@kernel.org
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      a1194097
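
      A minimal sketch of the pattern this fix applies, replacing a BUG with
      a returned error; the function names mirror the commit text, the
      bodies and error code are assumptions:

          #include <stdio.h>
          #include <errno.h>

          /* Stand-in that fails the way a hypervisor rejection would. */
          static int htab_bolt_mapping(unsigned long start, unsigned long end)
          {
                  return -1;
          }

          static int create_section_mapping(unsigned long start, unsigned long end)
          {
                  /* Before the fix this was a BUG() on failure, oopsing the box.
                   * Handing the error back lets it reach the sysfs write and,
                   * ultimately, the shell as a write error. */
                  if (htab_bolt_mapping(start, end))
                          return -ENOMEM;
                  return 0;
          }

          int main(void)
          {
                  int rc = create_section_mapping(0x10000000000UL, 0x10010000000UL);
                  if (rc)
                          fprintf(stderr, "write error %d propagates to userspace\n", rc);
                  return 0;
          }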
    • powerpc: Hugetlb for BookE · 41151e77
      Authored by Becky Bruce
      Enable hugepages on Freescale BookE processors.  This allows the kernel to
      use huge TLB entries to map pages, which can greatly reduce the number of
      TLB misses and the amount of TLB thrashing experienced by applications with
      large memory footprints.  Care should be taken when using this on FSL
      processors, as the number of large TLB entries supported by the core is low
      (16-64) on current processors.
      
      The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g.
      Page sizes larger than the max zone size are called "gigantic" pages and
      must be allocated on the command line (and cannot be deallocated); see
      the example after this entry.
      
      This is currently only fully implemented for Freescale 32-bit BookE
      processors, but there is some infrastructure in the code for
      64-bit BookE.
      Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      41151e77
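
      A hedged example of reserving gigantic pages at boot via the standard
      hugetlb kernel parameters; the specific values are illustrative, not
      taken from this commit:

          default_hugepagesz=4m hugepagesz=1g hugepages=2

      With this on the command line, two 1g pages are set aside at early boot
      and cannot be returned later, matching the constraint described above.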
  11. 27 April 2011, 1 commit
  12. 20 April 2011, 1 commit
  13. 31 March 2011, 1 commit
  14. 29 November 2010, 1 commit
  15. 18 November 2010, 1 commit
    • powerpc: Fix call to subpage_protection() · 1c2c25c7
      Authored by Michael Neuling
      In:
        powerpc/mm: Fix pgtable cache cleanup with CONFIG_PPC_SUBPAGE_PROT
        commit d28513bc
        Author: David Gibson <david@gibson.dropbear.id.au>
      
      subpage_protection() was changed to take an mm rather than a pgdir, but
      the calling site in hash_preload() wasn't updated.  The change wasn't
      noticed at compile time since hash_preload() passed a void * as the
      parameter to subpage_protection().
      
      This is obviously wrong and can trigger the following crash when
      CONFIG_SLAB, CONFIG_DEBUG_SLAB, CONFIG_PPC_64K_PAGES and
      CONFIG_PPC_SUBPAGE_PROT are enabled.
      
      Freeing unused kernel memory: 704k freed
      Unable to handle kernel paging request for data at address 0x6b6b6b6b6b6c49b7
      Faulting instruction address: 0xc0000000000410f4
      cpu 0x2: Vector: 300 (Data Access) at [c00000004233f590]
          pc: c0000000000410f4: .hash_preload+0x258/0x338
          lr: c000000000041054: .hash_preload+0x1b8/0x338
          sp: c00000004233f810
         msr: 8000000000009032
         dar: 6b6b6b6b6b6c49b7
       dsisr: 40000000
        current = 0xc00000007e2c0070
        paca    = 0xc000000007fe0500
          pid   = 1, comm = init
      enter ? for help
      [c00000004233f810] c000000000041020 .hash_preload+0x184/0x338 (unreliable)
      [c00000004233f8f0] c00000000003ed98 .update_mmu_cache+0xb0/0xd0
      [c00000004233f990] c000000000157754 .__do_fault+0x48c/0x5dc
      [c00000004233faa0] c000000000158fd0 .handle_mm_fault+0x508/0xa8c
      [c00000004233fb90] c0000000006acdd4 .do_page_fault+0x428/0x6ac
      [c00000004233fe30] c000000000005260 handle_page_fault+0x20/0x74
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      1c2c25c7
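
      An illustrative sketch (not the kernel code) of why the mismatch
      compiled cleanly: a void * parameter accepts any pointer type, so only
      a typed parameter lets the compiler catch the bad call:

          /* Stand-in types; only the pointer-type behaviour matters here. */
          struct mm_struct { int dummy; };
          struct pgd_t     { int dummy; };

          static unsigned int subpage_prot_untyped(void *mm) { (void)mm; return 0; }
          static unsigned int subpage_prot_typed(struct mm_struct *mm) { (void)mm; return 0; }

          int main(void)
          {
                  struct pgd_t pgdir;
                  subpage_prot_untyped(&pgdir);  /* compiles silently: the bug */
                  /* subpage_prot_typed(&pgdir);    would warn: incompatible
                   * pointer type, which is the protection the later fix
                   * (a1128f8f below) adds by typing the parameter. */
                  return 0;
          }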
  16. 05 August 2010, 3 commits
  17. 04 August 2010, 2 commits
  18. 23 July 2010, 2 commits
  19. 14 July 2010, 1 commit
  20. 18 December 2009, 2 commits
    • powerpc/mm: Fix stupid bug in subpage protection handling · a1128f8f
      Authored by David Gibson
      Commit d28513bc ("Fix bug in pagetable
      cache cleanup with CONFIG_PPC_SUBPAGE_PROT"), itself a fix for
      breakage caused by an earlier cleanup patch of mine, contains a
      stupid bug.  I changed the parameters of the subpage_protection()
      function, but failed to update one of the callers.
      
      This patch fixes it, and replaces a void * with a typed pointer so
      that the compiler will warn on such an error in future.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      a1128f8f
    • powerpc/mm: Fix hash_utils_64.c compile errors with DEBUG enabled. · 5c339919
      Authored by Sachin P. Sant
      This time without the funny characters.
      
      Fix the following build errors generated with DEBUG=1:
      
      cc1: warnings being treated as errors
      arch/powerpc/mm/hash_utils_64.c: In function 'htab_dt_scan_page_sizes':
      arch/powerpc/mm/hash_utils_64.c:343: error: format '%04x' expects type 'unsigned int', but argument 4 has type 'long unsigned int'
      arch/powerpc/mm/hash_utils_64.c:343: error: format '%08x' expects type 'unsigned int', but argument 5 has type 'long unsigned int'
      arch/powerpc/mm/hash_utils_64.c: In function 'htab_initialize':
      arch/powerpc/mm/hash_utils_64.c:666: error: format '%x' expects type 'unsigned int', but argument 4 has type 'long unsigned int'
      ... SNIP ...
      Signed-off-by: Sachin Sant <sachinp@in.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      5c339919
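
      A small sketch of the class of warning being fixed: printing an
      unsigned long through an int-sized format specifier. The fix is
      presumably a matching length modifier or a cast, as below:

          #include <stdio.h>

          int main(void)
          {
                  unsigned long shift = 12;
                  /* printf("%04x", shift);  warns: '%04x' expects unsigned int,
                   * argument has type long unsigned int (an error under -Werror) */
                  printf("shift = %04lx\n", shift);               /* 'l' modifier */
                  printf("shift = %04x\n", (unsigned int)shift);  /* or a cast */
                  return 0;
          }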
  21. 08 December 2009, 1 commit
  22. 02 December 2009, 1 commit
  23. 27 November 2009, 1 commit
  24. 05 November 2009, 1 commit
  25. 30 October 2009, 3 commits
    • powerpc/mm: Bring hugepage PTE accessor functions back into sync with normal accessors · 0895ecda
      Authored by David Gibson
      The hugepage arch code provides a number of hook functions/macros
      which mirror the functionality of various normal page pte access
      functions.  Various changes in the normal page accessors (in
      particular BenH's recent changes to the handling of lazy icache
      flushing and PAGE_EXEC) have caused the hugepage versions to get out
      of sync with the originals.  In some cases, this is a bug, at least on
      some MMU types.
      
      One reason some hooks were not identical to the normal page versions
      is that the fact that we're dealing with a hugepage needed to be
      passed down to use the correct dcache-icache flush function.
      This patch makes the main flush_dcache_icache_page() function hugepage
      aware (by checking for the PageCompound flag).  That in turn means we
      can make set_huge_pte_at() just a call to set_pte_at(), bringing it
      back into sync.  As a bonus, this lets us remove the
      hash_huge_page_do_lazy_icache() function, replacing it with a call to
      the hash_page_do_lazy_icache() function it was based on.
      
      Some other hugepage pte access hooks - huge_ptep_get_and_clear() and
      huge_ptep_clear_flush() - are not so easily unified, but this patch at
      least brings them back into sync with the current versions of the
      corresponding normal page functions.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      0895ecda
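
      A hedged sketch of the PageCompound-based dispatch described above;
      the function names follow the commit text, the types and bodies are
      stand-ins:

          #include <stdio.h>

          struct page { int compound; };

          static int PageCompound(struct page *page) { return page->compound; }

          static void flush_dcache_icache_hugepage(struct page *page)
          {
                  /* ... flush every subpage of the compound page ... */
                  printf("hugepage flush\n");
          }

          static void flush_dcache_icache_page(struct page *page)
          {
                  /* The one hugepage-aware check that lets set_huge_pte_at()
                   * collapse into a plain set_pte_at() call. */
                  if (PageCompound(page)) {
                          flush_dcache_icache_hugepage(page);
                          return;
                  }
                  printf("normal single-page flush\n");
          }

          int main(void)
          {
                  struct page normal = { 0 }, huge = { 1 };
                  flush_dcache_icache_page(&normal);
                  flush_dcache_icache_page(&huge);
                  return 0;
          }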
    • powerpc/mm: Cleanup initialization of hugepages on powerpc · d1837cba
      Authored by David Gibson
      This patch simplifies the logic used to initialize hugepages on
      powerpc.  The somewhat oddly named set_huge_psize() is renamed to
      add_huge_page_size() and now does all the necessary verification of
      whether it's given a valid hugepage size (instead of only some of it)
      and instantiates the generic hstate structure (but no more).
      
      hugetlbpage_init() now steps through the available page sizes, checks
      if they're valid for hugepages by calling add_huge_page_size(), and
      initializes the kmem_caches for the hugepage pagetables.  This means
      we can now eliminate the mmu_huge_psizes array, since we no longer
      need to pass the sizing information for the pagetable caches from
      set_huge_psize() into hugetlbpage_init().
      
      Determination of the default huge page size is also moved from the
      hash code into the general hugepage code.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      d1837cba
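
      An illustrative sketch of the initialization flow described above;
      only the shape of the loop follows the commit, while the _stub names
      and the size table are made up:

          #include <stdio.h>
          #include <stdbool.h>

          #define MMU_PAGE_COUNT 4
          static const unsigned int mmu_psize_shift[MMU_PAGE_COUNT] = { 12, 16, 24, 34 };

          /* Validate a candidate size and set up the generic hstate. */
          static bool add_huge_page_size_stub(unsigned long size)
          {
                  return size > 4096;   /* stand-in validity check */
          }

          static void hugetlbpage_init_stub(void)
          {
                  for (int psize = 0; psize < MMU_PAGE_COUNT; psize++) {
                          unsigned long size = 1UL << mmu_psize_shift[psize];
                          if (!add_huge_page_size_stub(size))
                                  continue;
                          /* ... create the kmem_cache for this size's
                           * hugepage pagetables here ... */
                          printf("registered hugepage size 0x%lx\n", size);
                  }
          }

          int main(void)
          {
                  hugetlbpage_init_stub();
                  return 0;
          }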
    • powerpc/mm: Allow more flexible layouts for hugepage pagetables · a4fe3ce7
      Authored by David Gibson
      Currently each available hugepage size uses a slightly different
      pagetable layout: that is, the bottom level table of pointers to
      hugepages is a different size, and may branch off from the normal page
      tables at a different level.  Every hugepage-aware path that needs to
      walk the pagetables must therefore look up the hugepage size from the
      slice info first, and work out the correct way to walk the pagetables
      accordingly.  Future hardware is likely to add more possible hugepage
      sizes, more layout options and more mess.
      
      This patch therefore reworks the handling of hugepage pagetables to
      reduce this complexity.  In the new scheme, instead of having to
      consult the slice mask, pagetable walking code can check a flag in the
      PGD/PUD/PMD entries to see where to branch off to hugepage pagetables,
      and the entry also contains the information (essentially the hugepage
      shift) necessary to then interpret that table without recourse to the
      slice mask.  This scheme can be extended neatly to handle multiple
      levels of self-describing "special" hugepage pagetables, although for
      now we assume only one level exists.
      
      This approach means that only the pagetable allocation path needs to
      know how the pagetables should be set out.  All other (hugepage)
      pagetable walking paths can just interpret the structure as they go.
      
      There already was a flag bit in PGD/PUD/PMD entries for hugepage
      directory pointers, but it was only used for debug.  We alter that
      flag bit to instead be a 0 in the MSB to indicate a hugepage pagetable
      pointer (normally it would be 1 since the pointer lies in the linear
      mapping).  This means that asm pagetable walking can test for (and
      punt on) hugepage pointers with the same test that checks for
      unpopulated page directory entries (beq becomes bge), since hugepage
      pointers will always be positive, and normal pointers always negative.
      
      While we're at it, we get rid of the confusing (and grep defeating)
      #defining of hugepte_shift to be the same thing as mmu_huge_psizes.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      a4fe3ce7
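
      A minimal sketch of the sign-bit test described above, assuming 64-bit
      entries where linear-map pointers have the MSB set; the values and
      names are illustrative:

          #include <stdint.h>
          #include <stdio.h>

          static const char *classify_pgd_entry(int64_t entry)
          {
                  if (entry == 0)
                          return "unpopulated (old beq path)";
                  if (entry > 0)   /* MSB clear: positive as signed 64-bit */
                          return "hugepage pagetable pointer (bge now punts here too)";
                  return "normal pagetable pointer (linear mapping, MSB set)";
          }

          int main(void)
          {
                  printf("%s\n", classify_pgd_entry(0));
                  printf("%s\n", classify_pgd_entry(0x00000000f1000000LL));
                  printf("%s\n", classify_pgd_entry((int64_t)0xc000000012345678ULL));
                  return 0;
          }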
  26. 22 April 2009, 1 commit
  27. 24 March 2009, 2 commits
  28. 23 February 2009, 1 commit
  29. 22 October 2008, 1 commit
  30. 14 October 2008, 1 commit