1. 30 April 2013, 3 commits
  2. 08 May 2012, 1 commit
    • powerpc: fix compile fail in hugetlb cmdline parsing · 89528127
      Committed by Paul Gortmaker
      Commit 9fb48c74
      
          "params: add 3rd arg to option handler callback signature"
      
      added an extra arg to the function, but didn't catch all the use
      cases needing it, causing this compile fail in mpc85xx_defconfig:
      
       arch/powerpc/mm/hugetlbpage.c:316:4: error: passing argument 7 of
       'parse_args' from incompatible pointer type [-Werror]
      
       include/linux/moduleparam.h:317:12: note: expected
      	 'int (*)(char *, char *, const char *)' but argument is of type
      	 'int (*)(char *, char *)'
      
      This function has no need to printk out the "doing" value, so
      just add the arg as an "unused".
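
      A minimal sketch of the shape of the fix (the handler below is the
      early-param callback the commit touches in hugetlbpage.c; its body is
      elided and illustrative):

        /* Before: static int __init do_gpage_early_setup(char *param,
         *                                                char *val);
         * After: parse_args() passes a third "doing" argument.  This
         * handler never prints it, so it is accepted and ignored. */
        static int __init do_gpage_early_setup(char *param, char *val,
                                               const char *unused)
        {
                /* ... parse hugepage command-line options as before ... */
                return 0;
        }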
      
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jim Cromie <jim.cromie@gmail.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Becky Bruce <beckyb@kernel.crashing.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      89528127
  3. 26 March 2012, 1 commit
    • params: <level>_initcall-like kernel parameters · 026cee00
      Committed by Pawel Moll
      This patch adds a set of macros that can be used to declare
      kernel parameters to be parsed _before_ initcalls at a chosen
      level are executed.  We rename the now-unused "flags" field of
      struct kernel_param as the level.  It's signed, so that it can also
      be used for early params in future.
      
      The linker macro collating init calls had to be modified in order
      to add additional symbols between levels, which the init code later
      uses to split the calls into blocks.
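
      A hedged usage sketch of the resulting API (the parameter name, ops
      structure, and variable are illustrative): a parameter declared with
      device_param_cb() is parsed just before the level-6 (device) initcalls
      run.

        #include <linux/moduleparam.h>

        static int my_threshold = 42;

        /* Standard int set/get ops; parsed before device initcalls. */
        static const struct kernel_param_ops threshold_ops = {
                .set = param_set_int,
                .get = param_get_int,
        };
        device_param_cb(threshold, &threshold_ops, &my_threshold, 0644);
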
      Signed-off-by: Pawel Moll <pawel.moll@arm.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      026cee00
  4. 20 March 2012, 1 commit
  5. 05 March 2012, 1 commit
    • KVM: PPC: Implement MMU notifiers for Book3S HV guests · 342d3db7
      Committed by Paul Mackerras
      This adds the infrastructure to enable us to page out pages underneath
      a Book3S HV guest, on processors that support virtualized partition
      memory, that is, POWER7.  Instead of pinning all the guest's pages,
      we now look in the host userspace Linux page tables to find the
      mapping for a given guest page.  Then, if the userspace Linux PTE
      gets invalidated, kvm_unmap_hva() gets called for that address, and
      we replace all the guest HPTEs that refer to that page with absent
      HPTEs, i.e. ones with the valid bit clear and the HPTE_V_ABSENT bit
      set, which will cause an HDSI when the guest tries to access them.
      Finally, the page fault handler is extended to reinstantiate the
      guest HPTE when the guest tries to access a page which has been paged
      out.
      
      Since we can't intercept the guest DSI and ISI interrupts on PPC970,
      we still have to pin all the guest pages on PPC970.  We have a new flag,
      kvm->arch.using_mmu_notifiers, that indicates whether we can page
      guest pages out.  If it is not set, the MMU notifier callbacks do
      nothing and everything operates as before.
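
      A hedged sketch of the guard described above (the flag name follows
      the commit text; the handler body is illustrative, not the real
      implementation):

        static int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
        {
                if (!kvm->arch.using_mmu_notifiers)
                        return 0; /* PPC970: pages stay pinned, nothing to do */

                /* For each guest HPTE mapping hva: clear HPTE_V_VALID and
                 * set HPTE_V_ABSENT, so a guest access takes an HDSI and
                 * the page fault handler can reinstantiate the HPTE. */
                return 0;
        }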
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      342d3db7
  6. 07 December 2011, 4 commits
  7. 25 November 2011, 1 commit
  8. 03 November 2011, 5 commits
    • thp: share get_huge_page_tail() · b35a35b5
      Committed by Andrea Arcangeli
      This avoids duplicating the function in every arch gup_fast.
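
      Roughly what the shared helper amounts to (a sketch of the era's THP
      tail refcounting; treat the details as approximate): tail pages take
      their gup reference on _mapcount rather than _count.

        static inline void get_huge_page_tail(struct page *page)
        {
                /* __split_huge_page_refcount() cannot run from under us */
                VM_BUG_ON(page_mapcount(page) < 0);
                VM_BUG_ON(atomic_read(&page->_count) != 0);
                atomic_inc(&page->_mapcount);
        }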
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b35a35b5
    • powerpc: gup_huge_pmd() return 0 if pte changes · cf592bf7
      Committed by Andrea Arcangeli
      powerpc didn't return 0 in that case; if it is rolling back the *nr
      pointer it should also return zero, to avoid adding pages to the array
      at the wrong offset.
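
      A hedged sketch of the rollback pattern (the helper name is
      hypothetical; the body mirrors what the commit describes): roll back
      *nr, drop the speculative pins taken on the head page, and return 0
      so the caller falls back to the slow gup path.

        static int gup_recheck_pte(pte_t pte, pte_t *ptep,
                                   struct page *head, int refs, int *nr)
        {
                if (unlikely(pte_val(pte) != pte_val(*ptep))) {
                        *nr -= refs;            /* undo the array advance */
                        while (refs--)
                                put_page(head); /* pins live on the head page */
                        return 0;               /* caller takes the slow path */
                }
                return 1;
        }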
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: David Gibson <david@gibson.dropbear.id.au>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cf592bf7
    • powerpc: gup_hugepte() support THP based tail recounting · 3526741f
      Committed by Andrea Arcangeli
      Up to this point the code assumed old refcounting for hugepages (pre-THP).
      This updates the code directly to the THP mapcount-based tail page
      refcounting.
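
      A hedged sketch of the fill loop after the change (context
      abbreviated): each tail page now takes a THP-style mapcount reference
      via the shared get_huge_page_tail() helper, while the real pin stays
      on the head page.

        do {
                VM_BUG_ON(compound_head(page) != head);
                pages[*nr] = page;
                if (PageTail(page))
                        get_huge_page_tail(page); /* mapcount-based tail ref */
                (*nr)++;
                page++;
                refs++;
        } while (addr += PAGE_SIZE, addr != end);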
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3526741f
    • powerpc: gup_hugepte() avoid freeing the head page too many times · 85964684
      Committed by Andrea Arcangeli
      We only take "refs" pins on the head page, not "*nr" pins.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      85964684
    • powerpc: get_hugepte() don't put_page() the wrong page · 405e44f2
      Committed by Andrea Arcangeli
      "page" may have changed to point to the next hugepage after the loop
      completed, The references have been taken on the head page, so the
      put_page must happen there too.
      
      This is a longstanding issue pre-thp inclusion.
      
      It's totally unclear why these page_cache_add_speculative and
      pte_val(pte) != pte_val(*ptep) checks are necessary across all the
      powerpc gup_fast code, when x86 doesn't need any of that: there's no way
      the page can be freed with irqs disabled, so we're guaranteed the
      atomic_inc will happen on a page with page_count > 0 (so the
      speculative check isn't needed).
      
      The pte check is also meaningless on x86: no need to rollback on x86 if
      the pte changed, because the pte can still change a CPU tick after the
      check succeeded and it won't be rolled back in that case.  The important
      thing is we got a reference on a valid page that was mapped there a CPU
      tick ago.  So not knowing the soft tlb refill code of ppc64 in great
      detail I'm not removing the "speculative" page_count increase and the
      pte checks across all the code, but unless there's a strong reason for
      it they should be later cleaned up too.
      
      If a pte can change from huge to non-huge (like it could happen with
      THP), passing a pte_t *ptep to gup_hugepte() would also require
      repeating the is_hugepd() check in gup_hugepte(), but that shouldn't
      happen with hugetlbfs only, so I'm not altering that.
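
      A hedged sketch of the fix (rollback loop only, as described above):
      after the fill loop "page" points past the range, so the rollback must
      release the references on the head page, not on "page".

        if (unlikely(pte_val(pte) != pte_val(*ptep))) {
                /* Could be optimized better */
                while (*nr) {
                        put_page(head); /* was: put_page(page) */
                        (*nr)--;
                }
        }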
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      405e44f2
  9. 23 September 2011, 1 commit
    • powerpc: Fix hugetlb with CONFIG_PPC_MM_SLICES=y · 25c29f9e
      Committed by Paul Mackerras
      Commit 41151e77 ("powerpc: Hugetlb for BookE") added some
      #ifdef CONFIG_MM_SLICES conditionals to hugetlb_get_unmapped_area()
      and vma_mmu_pagesize().  Unfortunately this is not the correct config
      symbol; it should be CONFIG_PPC_MM_SLICES.  The result is that
      attempting to use hugetlbfs on 64-bit Power server processors results
      in an infinite stack recursion between get_unmapped_area() and
      hugetlb_get_unmapped_area().
      
      This fixes it by changing the #ifdef to use CONFIG_PPC_MM_SLICES
      in those functions and also in book3e_hugetlb_preload().
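
      A hedged sketch of the guard in question (bodies elided; only the
      config symbol matters here):

        #ifdef CONFIG_PPC_MM_SLICES /* was: CONFIG_MM_SLICES, which no
                                     * Kconfig option defines, so the
                                     * slice-aware body was compiled out
                                     * and the generic path recursed */
                /* slice-aware unmapped-area search / page size lookup */
        #endif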
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      25c29f9e
  10. 20 September 2011, 1 commit
    • powerpc: Hugetlb for BookE · 41151e77
      Committed by Becky Bruce
      Enable hugepages on Freescale BookE processors.  This allows the kernel to
      use huge TLB entries to map pages, which can greatly reduce the number of
      TLB misses and the amount of TLB thrashing experienced by applications with
      large memory footprints.  Care should be taken when using this on FSL
      processors, as the number of large TLB entries supported by the core is low
      (16-64) on current processors.
      
      The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g.
      Page sizes larger than the max zone size are called "gigantic" pages and
      must be allocated on the command line (and cannot be deallocated).
      
      This is currently only fully implemented for Freescale 32-bit BookE
      processors, but there is some infrastructure in the code for
      64-bit BookE.
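
      A hedged usage example (sizes and counts illustrative): gigantic pages
      are reserved at boot via the standard hugetlb parameters on the kernel
      command line, e.g.:

        hugepagesz=1g hugepages=2 default_hugepagesz=1g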
      Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      41151e77
  11. 27 April 2011, 1 commit
  12. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Committed by Tejun Heo
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script is used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following (a small before/after illustration
      appears after this list):
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include so that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
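
      A hedged illustration of the kind of edit involved (the file and its
      include set are hypothetical):

        /* before: kmalloc() compiled only because <linux/sched.h>
         * pulled in percpu.h -> slab.h implicitly */
        #include <linux/sched.h>

        /* after: the slab user states its dependency directly */
        #include <linux/sched.h>
        #include <linux/slab.h>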
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others it was more appropriate
         to add the include to an implementation .h or embedding .c file.
         This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of the
      specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  13. 27 November 2009, 1 commit
    • powerpc/mm: Fix bug in gup_hugepd() · 39adfa54
      Committed by David Gibson
      Commit a4fe3ce7 introduced a new
      get_user_pages() path for hugepages on powerpc.  Unfortunately, there
      is a bug in its loop logic, which can cause it to overrun the end of
      the intended region.  This came about by copying the logic from the
      normal page path, which assumes the address and end parameters have
      been pagesize aligned at the top-level.  Since they're not *hugepage*
      size aligned, the simplistic logic could step over the end of the gup
      region without triggering the loop end condition.
      
      This patch fixes the bug by using the technique that the normal page
      path uses in levels above the lowest to truncate the ending address to
      something we know we'll match with.
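
      A hedged sketch of the technique (it mirrors the pmd_addr_end()-style
      macros used by the normal page walkers; the macro name follows that
      convention but treat it as illustrative):

        #define hugepte_addr_end(addr, end, sz)                         \
        ({      unsigned long __boundary = ((addr) + (sz)) & ~((sz)-1); \
                (__boundary - 1 < (end) - 1) ? __boundary : (end);      \
        })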
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      39adfa54
  14. 30 October 2009, 6 commits
    • powerpc/mm: Bring hugepage PTE accessor functions back into sync with normal accessors · 0895ecda
      Committed by David Gibson
      The hugepage arch code provides a number of hook functions/macros
      which mirror the functionality of various normal page pte access
      functions.  Various changes in the normal page accessors (in
      particular BenH's recent changes to the handling of lazy icache
      flushing and PAGE_EXEC) have caused the hugepage versions to get out
      of sync with the originals.  In some cases, this is a bug, at least on
      some MMU types.
      
      One of the reasons that some hooks were not identical to the normal
      page versions is that the fact we're dealing with a hugepage needed
      to be passed down to select the correct dcache-icache flush function.
      This patch makes the main flush_dcache_icache_page() function hugepage
      aware (by checking for the PageCompound flag).  That in turn means we
      can make set_huge_pte_at() just a call to set_pte_at() bringing it
      back into sync.  As a bonus, this lets us remove the
      hash_huge_page_do_lazy_icache() function, replacing it with a call to
      the hash_page_do_lazy_icache() function it was based on.
      
      Some other hugepage pte access hooks - huge_ptep_get_and_clear() and
      huge_ptep_clear_flush() - are not so easily unified, but this patch at
      least brings them back into sync with the current versions of the
      corresponding normal page functions.
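
      A hedged sketch of the PageCompound dispatch described above
      (surrounding code elided; the hugepage flush helper is per the
      commit's description):

        void flush_dcache_icache_page(struct page *page)
        {
        #ifdef CONFIG_HUGETLB_PAGE
                if (PageCompound(page)) {
                        flush_dcache_icache_hugepage(page);
                        return;
                }
        #endif
                /* ... existing single-page dcache/icache flush ... */
        }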
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      0895ecda
    • powerpc/mm: Split hash MMU specific hugepage code into a new file · 883a3e52
      Committed by David Gibson
      This patch separates the parts of hugetlbpage.c which are inherently
      specific to the hash MMU into a new hugetlbpage-hash64.c file.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      883a3e52
    • powerpc/mm: Cleanup initialization of hugepages on powerpc · d1837cba
      Committed by David Gibson
      This patch simplifies the logic used to initialize hugepages on
      powerpc.  The somewhat oddly named set_huge_psize() is renamed to
      add_huge_page_size() and now does all necessary verification of
      whether it's given a valid hugepage size (instead of just some) and
      instantiates the generic hstate structure (but no more).
      
      hugetlbpage_init() now steps through the available pagesizes, checks
      if they're valid for hugepages by calling add_huge_page_size() and
      initializes the kmem_caches for the hugepage pagetables.  This means
      we can now eliminate the mmu_huge_psizes array, since we no longer
      need to pass the sizing information for the pagetable caches from
      set_huge_psize() into hugetlbpage_init().
      
      Determination of the default huge page size is also moved from the
      hash code into the general hugepage code.
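
      A hedged sketch of the resulting init flow (illustrative; details
      approximate):

        static int __init hugetlbpage_init(void)
        {
                int psize;

                for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
                        unsigned int shift = mmu_psize_defs[psize].shift;

                        if (!shift)
                                continue;
                        if (add_huge_page_size(1ULL << shift) < 0)
                                continue; /* not a valid hugepage size */
                        /* set up the kmem_cache for this size's hugepage
                         * pagetables */
                }
                return 0;
        }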
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      d1837cba
    • powerpc/mm: Allow more flexible layouts for hugepage pagetables · a4fe3ce7
      Committed by David Gibson
      Currently each available hugepage size uses a slightly different
      pagetable layout: that is, the bottom level table of pointers to
      hugepages is a different size, and may branch off from the normal page
      tables at a different level.  Every hugepage aware path that needs to
      walk the pagetables must therefore look up the hugepage size from the
      slice info first, and work out the correct way to walk the pagetables
      accordingly.  Future hardware is likely to add more possible hugepage
      sizes, more layout options and more mess.
      
      This patch therefore reworks the handling of hugepage pagetables to
      reduce this complexity.  In the new scheme, instead of having to
      consult the slice mask, pagetable walking code can check a flag in the
      PGD/PUD/PMD entries to see where to branch off to hugepage pagetables,
      and the entry also contains the information (essentially the hugepage
      shift) necessary to then interpret that table without recourse to the
      slice mask.  This scheme can be extended neatly to handle multiple
      levels of self-describing "special" hugepage pagetables, although for
      now we assume only one level exists.
      
      This approach means that only the pagetable allocation path needs to
      know how the pagetables should be set out.  All other (hugepage)
      pagetable walking paths can just interpret the structure as they go.
      
      There already was a flag bit in PGD/PUD/PMD entries for hugepage
      directory pointers, but it was only used for debug.  We alter that
      flag bit to instead be a 0 in the MSB to indicate a hugepage pagetable
      pointer (normally it would be 1 since the pointer lies in the linear
      mapping).  This means that asm pagetable walking can test for (and
      punt on) hugepage pointers with the same test that checks for
      unpopulated page directory entries (beq becomes bge), since hugepage
      pointers will always be positive, and normal pointers always negative.
      
      While we're at it, we get rid of the confusing (and grep defeating)
      #defining of hugepte_shift to be the same thing as mmu_huge_psizes.
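
      A hedged sketch of the encoding (the helper name is hypothetical):
      pointers into the kernel linear mapping have the sign bit set, so a
      positive non-zero entry marks a hugepage pagetable pointer, while zero
      still means empty, which is what lets the asm test change from beq to
      bge.

        static inline int entry_is_hugepd(unsigned long pd)
        {
                return (long)pd > 0; /* 0 = empty, < 0 = normal pointer */
        }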
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      a4fe3ce7
    • powerpc/mm: Cleanup management of kmem_caches for pagetables · a0668cdc
      Committed by David Gibson
      Currently we have a fair bit of rather fiddly code to manage the
      various kmem_caches used to store page tables of various levels.  We
      generally have two caches holding some combination of PGD, PUD and PMD
      tables, plus several more for the special hugepage pagetables.
      
      This patch cleans this all up by taking a different approach.  Rather
      than the caches being designated as for PUDs or for hugeptes for 16M
      pages, the caches are simply allocated to be a specific size.  Thus
      sharing of caches between different types/levels of pagetables happens
      naturally.  The pagetable size, where needed, is passed around encoded
      in the same way as {PGD,PUD,PMD}_INDEX_SIZE; that is n where the
      pagetable contains 2^n pointers.
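
      A hedged sketch of the size-keyed scheme (names follow the commit's
      approach; treat the details as approximate): caches are indexed by the
      table's index size n, so tables of 2^n pointers share one cache
      regardless of level or use.

        static struct kmem_cache *pgtable_cache[MAX_PGTABLE_INDEX_SIZE];

        static inline struct kmem_cache *PGT_CACHE(unsigned int index_size)
        {
                return pgtable_cache[index_size - 1];
        }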
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      a0668cdc
    • powerpc/mm: Make hpte_need_flush() correctly mask for multiple page sizes · f71dc176
      Committed by David Gibson
      Currently, hpte_need_flush() only correctly flushes the given address
      for normal pages.  Callers for hugepages are required to mask the
      address themselves.
      
      But hpte_need_flush() already looks up the page sizes for its own
      reasons, so this is a rather silly imposition on the callers.  This
      patch alters it to mask based on the pagesize it has looked up itself,
      and removes the awkward masking code in the hugepage caller.
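
      A hedged sketch of the masking change (illustrative): hpte_need_flush()
      already derives the page-size index "psize", so it can mask the address
      itself rather than requiring pre-masked input.

        /* mask with the actual page size, not the base PAGE_MASK */
        addr &= ~((1UL << mmu_psize_defs[psize].shift) - 1);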
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      f71dc176
  15. 20 August 2009, 1 commit
  16. 28 July 2009, 1 commit
    • mm: Pass virtual address to [__]p{te,ud,md}_free_tlb() · 9e1b32ca
      Committed by Benjamin Herrenschmidt
      mm: Pass virtual address to [__]p{te,ud,md}_free_tlb()
      
      Upcoming patches to support the new 64-bit "BookE" powerpc architecture
      will need to have the virtual address corresponding to PTE page when
      freeing it, due to the way the HW table walker works.
      
      Basically, the TLB can be loaded with "large" pages that cover the whole
      virtual space (well, sort-of, half of it actually) represented by a PTE
      page, and which contain an "indirect" bit indicating that this TLB entry
      RPN points to an array of PTEs from which the TLB can then create direct
      entries. Thus, in order to invalidate those when PTE pages are deleted,
      we need the virtual address to pass to tlbilx or tlbivax instructions.
      
      The old trick of sticking it somewhere in the PTE page's struct page
      sucks too much; the address is almost readily available in all call
      sites, and almost everybody implements these as macros, so we may as
      well add the argument everywhere.  I added it to the pmd and pud
      variants for consistency.
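
      A hedged sketch of the new shape for one arch's macro (illustrative;
      the generic version simply gains, and may ignore, the address):

        #define __pte_free_tlb(tlb, ptepage, address)   \
        do {                                            \
                pgtable_page_dtor(ptepage);             \
                tlb_remove_page((tlb), (ptepage));      \
        } while (0)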
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV]
      Acked-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9e1b32ca
  17. 07 January 2009, 1 commit
  18. 16 December 2008, 1 commit
    • powerpc: Check for valid hugepage size in hugetlb_get_unmapped_area · 48f797de
      Committed by Brian King
      It looks like most of the hugetlb code is doing the correct thing if
      hugepages are not supported, but the mmap code is not.  If we get into
      the mmap code when hugepages are not supported, such as in an LPAR
      which is running Active Memory Sharing, we can oops the kernel.  This
      fixes the oops being seen in this path.
      
      oops: Kernel access of bad area, sig: 11 [#1]
      SMP NR_CPUS=1024 NUMA pSeries
      Modules linked in: nfs(N) lockd(N) nfs_acl(N) sunrpc(N) ipv6(N) fuse(N) loop(N)
      dm_mod(N) sg(N) ibmveth(N) sd_mod(N) crc_t10dif(N) ibmvscsic(N)
      scsi_transport_srp(N) scsi_tgt(N) scsi_mod(N)
      Supported: No
      NIP: c000000000038d60 LR: c00000000003945c CTR: c0000000000393f0
      REGS: c000000077e7b830 TRAP: 0300   Tainted: G
      (2.6.27.5-bz50170-2-ppc64)
      MSR: 8000000000009032 <EE,ME,IR,DR>  CR: 44000448  XER: 20000001
      DAR: c000002000af90a8, DSISR: 0000000040000000
      TASK = c00000007c1b8600[4019] 'hugemmap01' THREAD: c000000077e78000 CPU: 6
      GPR00: 0000001fffffffe0 c000000077e7bab0 c0000000009a4e78 0000000000000000
      GPR04: 0000000000010000 0000000000000001 00000000ffffffff 0000000000000001
      GPR08: 0000000000000000 c000000000af90c8 0000000000000001 0000000000000000
      GPR12: 000000000000003f c000000000a73880 0000000000000000 0000000000000000
      GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000010000
      GPR20: 0000000000000000 0000000000000003 0000000000010000 0000000000000001
      GPR24: 0000000000000003 0000000000000000 0000000000000001 ffffffffffffffb5
      GPR28: c000000077ca2e80 0000000000000000 c00000000092af78 0000000000010000
      NIP [c000000000038d60] .slice_get_unmapped_area+0x6c/0x4e0
      LR [c00000000003945c] .hugetlb_get_unmapped_area+0x6c/0x80
      Call Trace:
      [c000000077e7bbc0] [c00000000003945c] .hugetlb_get_unmapped_area+0x6c/0x80
      [c000000077e7bc30] [c000000000107e30] .get_unmapped_area+0x64/0xd8
      [c000000077e7bcb0] [c00000000010b140] .do_mmap_pgoff+0x140/0x420
      [c000000077e7bd80] [c00000000000bf5c] .sys_mmap+0xc4/0x140
      [c000000077e7be30] [c0000000000086b4] syscall_exit+0x0/0x40
      Instruction dump:
      fac1ffb0 fae1ffb8 fb01ffc0 fb21ffc8 fb41ffd0 fb61ffd8 fb81ffe0 fbc1fff0
      fbe1fff8 f821fef1 f8c10158 f8e10160 <7d49002e> f9010168 e92d01b0 eb4902b0
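
      A hedged sketch of the guard (per the fix's intent; exact placement
      illustrative): hugetlb_get_unmapped_area() now bails out early when
      the requested hugepage size isn't actually supported.

        if (!mmu_huge_psizes[mmu_psize])
                return -EINVAL;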
      Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      48f797de
  19. 01 December 2008, 1 commit
  20. 06 November 2008, 1 commit
  21. 16 September 2008, 1 commit
    • powerpc: Clean up hugepage pagetable allocation for powerpc with 16G pages · 0b26425c
      Committed by David Gibson
      There is a small bug in the handling of 16G hugepages recently added
      to the kernel.  This doesn't cause a crash or other user-visible
      problems, but it does mean that more levels of pagetable are allocated
      than makes sense for 16G pages.  The hugepage pagetables for the 16G
      pages are allocated much lower in the pagetable tree than they should
      be, with the intervening levels allocated with full pmd and pud pages
      which will only ever have one entry filled in.
      
      This corrects the problem, at the same time cleaning up the handling
      of the level at which 64k and 16M hugepage pagetables are allocated.
      The new way of formatting the tests should be more robust against
      changes in pagetable structure, or any newly added hugepage sizes.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      0b26425c
  22. 28 July 2008, 1 commit
  23. 27 July 2008, 1 commit
  24. 25 July 2008, 3 commits