1. 17 Sep 2012, 1 commit
  2. 05 Sep 2012, 1 commit
  3. 29 Mar 2012, 1 commit
  4. 21 Mar 2012, 1 commit
  5. 23 Feb 2012, 1 commit
    • fadump: Register for firmware assisted dump. · 3ccc00a7
      Mahesh Salgaonkar authored
      On 2012-02-20 11:02:51 Mon, Paul Mackerras wrote:
      > On Thu, Feb 16, 2012 at 04:44:30PM +0530, Mahesh J Salgaonkar wrote:
      >
      > If I have read the code correctly, we are going to get this printk on
      > non-pSeries machines or on older pSeries machines, even if the user
      > has not put the fadump=on option on the kernel command line.  The
      > printk will be annoying since there is no actual error condition.  It
      > seems to me that the condition for the printk should include
      > fw_dump.fadump_enabled.  In other words you should probably add
      >
      > 	if (!fw_dump.fadump_enabled)
      > 		return 0;
      >
      > at the beginning of the function.
      
      Hi Paul,
      
      Thanks for pointing it out. Please find the updated patch below.
      
      The existing patches above this one (4/10 through 10/10) apply
      cleanly on top of this update.
      
      Thanks,
      -Mahesh.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
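      A minimal sketch of where the suggested guard lands. Only the
      fw_dump.fadump_enabled flag and the early return come from the
      discussion above; the function name and the rest of the body are
      illustrative stand-ins:

      static struct fw_dump {
              unsigned long fadump_enabled;   /* set by fadump=on */
      } fw_dump;

      static int fadump_reserve_mem(void)
      {
              /* Bail out quietly when fadump was not requested, so
               * non-pSeries or older pSeries machines never see a
               * spurious error printk. */
              if (!fw_dump.fadump_enabled)
                      return 0;

              /* ... probe firmware support and reserve the dump area ... */
              return 1;
      }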
  6. 01 Nov 2011, 1 commit
  7. 20 Sep 2011, 2 commits
    • powerpc: Fix oops when echoing bad values to /sys/devices/system/memory/probe · a1194097
      Anton Blanchard authored
      If we echo an address the hypervisor doesn't like to
      /sys/devices/system/memory/probe, we oops the box:
      
      # echo 0x10000000000 > /sys/devices/system/memory/probe
      
      kernel BUG at arch/powerpc/mm/hash_utils_64.c:541!
      
      The backtrace is:
      
      create_section_mapping
      arch_add_memory
      add_memory
      memory_probe_store
      sysdev_class_store
      sysfs_write_file
      vfs_write
      SyS_write
      
      In create_section_mapping() we BUG if htab_bolt_mapping() returns
      an error.  A better approach is to return the error so it
      propagates back to userspace, as in the sketch at the end of this
      entry.
      
      Rerunning the test with this patch applied:
      
      # echo 0x10000000000 > /sys/devices/system/memory/probe
      -bash: echo: write error: Invalid argument
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Cc: stable@kernel.org
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
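      A sketch of the fix described above, with deliberately simplified
      signatures (the real htab_bolt_mapping() takes several more
      arguments):

      extern int htab_bolt_mapping(unsigned long vstart, unsigned long vend);

      int create_section_mapping(unsigned long start, unsigned long end)
      {
              /* htab_bolt_mapping() fails when the hypervisor rejects
               * the mapping; return that instead of BUG()ing so the
               * error travels up through arch_add_memory() and the
               * sysfs write returns -EINVAL to the shell. */
              return htab_bolt_mapping(start, end);
      }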
    • powerpc: Hugetlb for BookE · 41151e77
      Becky Bruce authored
      Enable hugepages on Freescale BookE processors.  This allows the kernel to
      use huge TLB entries to map pages, which can greatly reduce the number of
      TLB misses and the amount of TLB thrashing experienced by applications with
      large memory footprints.  Care should be taken when using this on FSL
      processors, as the number of large TLB entries supported by the core is low
      (16-64) on current processors.
      
      The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g.
      Page sizes larger than the max zone size are called "gigantic" pages and
      must be allocated on the command line (and cannot be deallocated).
      
      This is currently only fully implemented for Freescale 32-bit
      BookE processors, but there is some infrastructure in the code for
      64-bit BookE.
      Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
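      For example, a boot-time reservation might look like this
      (illustrative values; hugepagesz= and hugepages= are the standard
      kernel command-line parameters, and gigantic pages in particular
      can only be reserved this way):

      hugepagesz=256m hugepages=8 hugepagesz=1g hugepages=2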
  8. 27 Apr 2011, 1 commit
  9. 20 Apr 2011, 1 commit
  10. 31 Mar 2011, 1 commit
  11. 29 Nov 2010, 1 commit
  12. 18 Nov 2010, 1 commit
    • powerpc: Fix call to subpage_protection() · 1c2c25c7
      Michael Neuling authored
      In:
        powerpc/mm: Fix pgtable cache cleanup with CONFIG_PPC_SUBPAGE_PROT
        commit d28513bc
        Author: David Gibson <david@gibson.dropbear.id.au>
      
      subpage_protection() was changed to take an mm rather than a
      pgdir, but the calling site in hash_preload() wasn't updated.  The
      mistake wasn't noticed at compile time since hash_preload() used a
      void * as the parameter it passed to subpage_protection().
      
      This is obviously wrong and can trigger the following crash when
      CONFIG_SLAB, CONFIG_DEBUG_SLAB, CONFIG_PPC_64K_PAGES, and
      CONFIG_PPC_SUBPAGE_PROT are enabled.
      
      Freeing unused kernel memory: 704k freed
      Unable to handle kernel paging request for data at address 0x6b6b6b6b6b6c49b7
      Faulting instruction address: 0xc0000000000410f4
      cpu 0x2: Vector: 300 (Data Access) at [c00000004233f590]
          pc: c0000000000410f4: .hash_preload+0x258/0x338
          lr: c000000000041054: .hash_preload+0x1b8/0x338
          sp: c00000004233f810
         msr: 8000000000009032
         dar: 6b6b6b6b6b6c49b7
       dsisr: 40000000
        current = 0xc00000007e2c0070
        paca    = 0xc000000007fe0500
          pid   = 1, comm = init
      enter ? for help
      [c00000004233f810] c000000000041020 .hash_preload+0x184/0x338 (unreliable)
      [c00000004233f8f0] c00000000003ed98 .update_mmu_cache+0xb0/0xd0
      [c00000004233f990] c000000000157754 .__do_fault+0x48c/0x5dc
      [c00000004233faa0] c000000000158fd0 .handle_mm_fault+0x508/0xa8c
      [c00000004233fb90] c0000000006acdd4 .do_page_fault+0x428/0x6ac
      [c00000004233fe30] c000000000005260 handle_page_fault+0x20/0x74
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
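      The failure mode in miniature, as a standalone illustration (not
      the kernel's actual declarations): with a void * parameter the
      compiler accepts a pointer of the wrong type, so the stale call
      site kept compiling.

      struct mm_struct { int dummy; };
      struct pgdir_t   { int dummy; };

      /* the parameter is effectively void *, as in hash_preload() */
      static unsigned long subpage_protection(void *mm, unsigned long ea)
      {
              return 0;       /* body irrelevant to the point */
      }

      static unsigned long caller(struct pgdir_t *pgdir, unsigned long ea)
      {
              /* wrong argument type -- compiles without a warning */
              return subpage_protection(pgdir, ea);
      }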
  13. 05 Aug 2010, 3 commits
  14. 04 Aug 2010, 2 commits
  15. 23 Jul 2010, 2 commits
  16. 14 Jul 2010, 1 commit
  17. 18 Dec 2009, 2 commits
    • powerpc/mm: Fix stupid bug in subpage protection handling · a1128f8f
      David Gibson authored
      Commit d28513bc ("Fix bug in pagetable
      cache cleanup with CONFIG_PPC_SUBPAGE_PROT"), itself a fix for
      breakage caused by an earlier clean up patch of mine, contains a
      stupid bug.  I changed the parameters of the subpage_protection()
      function, but failed to update one of the callers.
      
      This patch fixes it, and replaces a void * with a typed pointer so
      that the compiler will warn on such an error in future.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
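      The compile-time half of the fix in miniature (again a standalone
      illustration, not the kernel source): with a typed parameter the
      same mistake is caught immediately.

      struct mm_struct { int dummy; };
      struct pgdir_t   { int dummy; };

      /* typed parameter instead of void * */
      static unsigned long subpage_protection(struct mm_struct *mm,
                                              unsigned long ea)
      {
              return 0;
      }

      static unsigned long caller(struct mm_struct *mm, unsigned long ea)
      {
              /* passing a struct pgdir_t * here would now draw an
               * incompatible-pointer-type warning from the compiler */
              return subpage_protection(mm, ea);
      }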
    • powerpc/mm: Fix hash_utils_64.c compile errors with DEBUG enabled. · 5c339919
      Sachin P. Sant authored
      This time without the funny characters.
      
      Fix the following build errors generated with DEBUG=1:
      
      cc1: warnings being treated as errors
      arch/powerpc/mm/hash_utils_64.c: In function 'htab_dt_scan_page_sizes':
      arch/powerpc/mm/hash_utils_64.c:343: error: format '%04x' expects type 'unsigned int', but argument 4 has type 'long unsigned int'
      arch/powerpc/mm/hash_utils_64.c:343: error: format '%08x' expects type 'unsigned int', but argument 5 has type 'long unsigned int'
      arch/powerpc/mm/hash_utils_64.c: In function 'htab_initialize':
      arch/powerpc/mm/hash_utils_64.c:666: error: format '%x' expects type 'unsigned int', but argument 4 has type 'long unsigned int'
      ... SNIP ...
      Signed-off-by: Sachin Sant <sachinp@in.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
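      Each warning is fixed the same way: add the 'l' length modifier so
      the format matches the unsigned long argument (illustrative line,
      not the exact source):

      /* was: printk("base=%08x size=%04x\n", base, size);
       * %x expects unsigned int; base and size are unsigned long: */
      printk("base=%08lx size=%04lx\n", base, size);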
  18. 08 Dec 2009, 1 commit
  19. 02 Dec 2009, 1 commit
  20. 27 Nov 2009, 1 commit
  21. 05 Nov 2009, 1 commit
  22. 30 Oct 2009, 3 commits
    • powerpc/mm: Bring hugepage PTE accessor functions back into sync with normal accessors · 0895ecda
      David Gibson authored
      The hugepage arch code provides a number of hook functions/macros
      which mirror the functionality of various normal page pte access
      functions.  Various changes in the normal page accessors (in
      particular BenH's recent changes to the handling of lazy icache
      flushing and PAGE_EXEC) have caused the hugepage versions to get out
      of sync with the originals.  In some cases, this is a bug, at least on
      some MMU types.
      
      One of the reasons some hooks were not identical to the normal
      page versions is that the fact we're dealing with a hugepage
      needed to be passed down to select the correct dcache-icache flush
      function.
      This patch makes the main flush_dcache_icache_page() function hugepage
      aware (by checking for the PageCompound flag).  That in turn means we
      can make set_huge_pte_at() just a call to set_pte_at() bringing it
      back into sync.  As a bonus, this lets us remove the
      hash_huge_page_do_lazy_icache() function, replacing it with a call to
      the hash_page_do_lazy_icache() function it was based on.
      
      Some other hugepage pte access hooks - huge_ptep_get_and_clear() and
      huge_ptep_clear_flush() - are not so easily unified, but this patch at
      least brings them back into sync with the current versions of the
      corresponding normal page functions.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
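      A sketch of the hugepage-aware flush this describes (shapes follow
      the text above; the single-page path is elided):

      void flush_dcache_icache_page(struct page *page)
      {
              /* hugepages are compound pages: flush every subpage */
              if (PageCompound(page)) {
                      flush_dcache_icache_hugepage(page);
                      return;
              }
              /* ... existing single-page lazy-flush path ... */
      }

      /* which is what lets the hugepage setter collapse into a call
       * to the normal one */
      void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
                           pte_t *ptep, pte_t pte)
      {
              set_pte_at(mm, addr, ptep, pte);
      }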
    • powerpc/mm: Cleanup initialization of hugepages on powerpc · d1837cba
      David Gibson authored
      This patch simplifies the logic used to initialize hugepages on
      powerpc.  The somewhat oddly named set_huge_psize() is renamed to
      add_huge_page_size(); it now does all the necessary verification
      of whether it's given a valid hugepage size (instead of just some
      of it) and instantiates the generic hstate structure (but no
      more).
      
      hugetlbpage_init() now steps through the available page sizes,
      checks whether they're valid for hugepages by calling
      add_huge_page_size(), and initializes the kmem_caches for the
      hugepage pagetables.  This means we can now eliminate the
      mmu_huge_psizes array, since we no longer need to pass the sizing
      information for the pagetable caches from set_huge_psize() into
      hugetlbpage_init().
      
      Determination of the default huge page size is also moved from the
      hash code into the general hugepage code.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
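      A rough shape of the reworked initialization (illustrative: the
      kmem_cache setup and error handling are elided into comments):

      static int __init hugetlbpage_init(void)
      {
              int psize;

              /* step through every page size the MMU reports ... */
              for (psize = 0; psize < MMU_PAGE_COUNT; psize++) {
                      unsigned int shift = mmu_psize_defs[psize].shift;

                      /* ... keeping only those valid for hugepages */
                      if (!shift || add_huge_page_size(1ULL << shift) < 0)
                              continue;

                      /* create the kmem_cache for this size's
                       * hugepage pagetables here */
              }
              return 0;
      }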
    • powerpc/mm: Allow more flexible layouts for hugepage pagetables · a4fe3ce7
      David Gibson authored
      Currently each available hugepage size uses a slightly different
      pagetable layout: that is, the bottom level table of pointers to
      hugepages is a different size, and may branch off from the normal page
      tables at a different level.  Every hugepage aware path that needs to
      walk the pagetables must therefore look up the hugepage size from the
      slice info first, and work out the correct way to walk the pagetables
      accordingly.  Future hardware is likely to add more possible hugepage
      sizes, more layout options and more mess.
      
      This patch therefore reworks the handling of hugepage pagetables
      to reduce this complexity.  In the new scheme, instead of having
      to consult the slice mask, pagetable walking code can check a flag
      in the PGD/PUD/PMD entries to see where to branch off to hugepage
      pagetables, and the entry also contains the information
      (essentially the hugepage shift) necessary to then interpret that
      table without recourse to the slice mask.  This scheme can be
      extended neatly to handle multiple levels of self-describing
      "special" hugepage pagetables, although for now we assume only one
      level exists.
      
      This approach means that only the pagetable allocation path needs to
      know how the pagetables should be set out.  All other (hugepage)
      pagetable walking paths can just interpret the structure as they go.
      
      There already was a flag bit in PGD/PUD/PMD entries for hugepage
      directory pointers, but it was only used for debug.  We alter that
      flag bit to instead be a 0 in the MSB to indicate a hugepage pagetable
      pointer (normally it would be 1 since the pointer lies in the linear
      mapping).  This means that asm pagetable walking can test for (and
      punt on) hugepage pointers with the same test that checks for
      unpopulated page directory entries (beq becomes bge), since hugepage
      pointers will always be positive, and normal pointers always negative.
      
      While we're at it, we get rid of the confusing (and grep defeating)
      #defining of hugepte_shift to be the same thing as mmu_huge_psizes.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
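      The MSB convention above, written out as a C predicate (the
      in-tree test operates on the raw PGD/PUD/PMD word; the name here
      is illustrative):

      /* Kernel linear-mapping pointers have the top bit set, i.e. are
       * "negative", so a non-empty entry with a clear MSB marks a
       * hugepage pagetable.  Treating the word as signed is what lets
       * the asm walker turn its beq (empty) into bge (empty or huge). */
      static inline int is_hugepd_entry(unsigned long pdval)
      {
              return pdval != 0 && (long)pdval >= 0;
      }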
  23. 22 Apr 2009, 1 commit
  24. 24 Mar 2009, 2 commits
  25. 23 Feb 2009, 1 commit
  26. 22 Oct 2008, 1 commit
  27. 14 Oct 2008, 1 commit
  28. 16 Sep 2008, 1 commit
    • powerpc: Make the 64-bit kernel a position-independent executable · 549e8152
      Paul Mackerras authored
      This implements CONFIG_RELOCATABLE for 64-bit by building the
      kernel as a position-independent executable (PIE) when that option
      is set.  This involves
      processing the dynamic relocations in the image in the early stages of
      booting, even if the kernel is being run at the address it is linked at,
      since the linker does not necessarily fill in words in the image for
      which there are dynamic relocations.  (In fact the linker does fill in
      such words for 64-bit executables, though not for 32-bit executables,
      so in principle we could avoid calling relocate() entirely when we're
      running a 64-bit kernel at the linked address.)
      
      The dynamic relocations are processed by a new function relocate(addr),
      where the addr parameter is the virtual address where the image will be
      run.  In fact we call it twice; once before calling prom_init, and again
      when starting the main kernel.  This means that reloc_offset() returns
      0 in prom_init (since it has been relocated to the address it is running
      at), which necessitated a few adjustments.
      
      This also changes __va and __pa to use an equivalent definition that is
      simpler.  With the relocatable kernel, PAGE_OFFSET and MEMORY_START are
      constants (for 64-bit) whereas PHYSICAL_START is a variable (and
      KERNELBASE ideally should be too, but isn't yet).
      
      With this, relocatable kernels still copy themselves down to physical
      address 0 and run there.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
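      A schematic C rendering of relocate(addr) as described (the real
      implementation must run before any relocation has happened, so it
      is written far more carefully; __rela_start/__rela_end stand in
      for the linker-provided bounds of the dynamic relocation section,
      and 0xc000000000000000 for the 64-bit linked base):

      #include <elf.h>

      extern Elf64_Rela __rela_start[], __rela_end[];

      void relocate(unsigned long addr)
      {
              /* offset between where we run and where we were linked */
              unsigned long delta = addr - 0xc000000000000000UL;
              Elf64_Rela *r;

              for (r = __rela_start; r < __rela_end; r++) {
                      /* a PIE kernel is expected to contain only
                       * R_PPC64_RELATIVE entries */
                      if (ELF64_R_TYPE(r->r_info) != R_PPC64_RELATIVE)
                              continue;
                      /* final value of the patched word: base + addend */
                      *(unsigned long *)(r->r_offset + delta) =
                              r->r_addend + delta;
              }
      }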
  29. 03 Sep 2008, 1 commit
    • powerpc: Only make kernel text pages of linear mapping executable · 9e88ba4e
      Paul Mackerras authored
      Commit bc033b63 ("powerpc/mm: Fix
      attribute confusion with htab_bolt_mapping()") moved the check for
      whether we should make pages of the linear mapping executable from
      htab_bolt_mapping into its callers, including htab_initialize.
      A side-effect of this is that the decision is now made once for
      each contiguous section in the LMB array rather than for each page
      individually.  This can often mean that the whole of the linear
      mapping ends up being executable.
      
      This reverts to the previous behaviour, where individual pages are
      checked for being part of the kernel text or not, by moving the check
      back down into htab_bolt_mapping.
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
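      Schematically, the per-page decision moves back inside
      htab_bolt_mapping()'s loop, something like the following (helper
      and flag names follow the hash MMU code of that era; the HPTE
      insertion itself is elided):

      static void bolt_pages(unsigned long vstart, unsigned long vend,
                             unsigned long prot, unsigned long step)
      {
              unsigned long vaddr;

              for (vaddr = vstart; vaddr < vend; vaddr += step) {
                      unsigned long tprot = prot;

                      /* only pages that are part of the kernel text
                       * keep execute permission; HPTE_R_N is the
                       * hash-PTE no-execute bit */
                      if (in_kernel_text(vaddr))
                              tprot &= ~HPTE_R_N;

                      /* ... insert the bolted HPTE using tprot ... */
              }
      }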
  30. 20 Aug 2008, 1 commit
  31. 11 Aug 2008, 1 commit
    • powerpc/mm: Fix attribute confusion with htab_bolt_mapping() · bc033b63
      Benjamin Herrenschmidt authored
      The function htab_bolt_mapping() is used to create permanent
      mappings in the MMU hash table, for example, in order to create
      the linear mapping of vmemmap.  It's also used by early boot
      ioremap (before mem_init_done).
      
      However, the way ioremap uses it is incorrect as it passes it the
      protection flags in the "linux PTE" form while htab_bolt_mapping()
      expects them in the hash table format.  This is made more confusing by
      the fact that some of those flags are actually in the same position in
      both cases.
      
      This fixes it all by making htab_bolt_mapping() take normal linux
      protection flags instead, and use a little helper to convert them to
      htab flags. Callers can now use the usual PAGE_* definitions safely.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/include/asm/mmu-hash64.h |    2 -
       arch/powerpc/mm/hash_utils_64.c       |   65 ++++++++++++++++++++--------------
       arch/powerpc/mm/init_64.c             |    9 +---
       3 files changed, 44 insertions(+), 32 deletions(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
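      The shape of the helper this introduces: callers hand in ordinary
      Linux _PAGE_* protection bits and htab_bolt_mapping() converts
      them once (the bit mapping shown is partial and illustrative):

      static unsigned long htab_convert_pte_flags(unsigned long pteflags)
      {
              unsigned long rflags = 0;

              /* no execute permission requested -> set the hash-PTE
               * no-execute bit */
              if (!(pteflags & _PAGE_EXEC))
                      rflags |= HPTE_R_N;

              /* ... _PAGE_RW maps onto the PP (page protection) bits,
               * and cacheability/coherence onto HPTE_R_{W,I,M,G} ... */
              return rflags;
      }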