1. 06 May, 2010 1 commit
  2. 07 April, 2010 1 commit
  3. 30 March, 2010 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Committed by Tejun Heo
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming their availability.  As
      this conversion needs to touch a large number of source files, the
      following script is used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following (a sketch of the resulting edit
      appears after this list):
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. gfp.h if only gfp is
        used, slab.h if slab is used.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include so that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
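
      As a rough illustration (the file and symbols below are hypothetical,
      not taken from the patch), a conversion turns an implicit dependency
      into direct includes:

        /* Before: this file got slab.h for free via module.h -> percpu.h
         * -> slab.h and never included it directly. */
        #include <linux/module.h>

        /* After the sweep: the facilities actually used are explicit. */
        #include <linux/gfp.h>   /* GFP_KERNEL */
        #include <linux/slab.h>  /* kmalloc(), kfree() */

        static void *example_buf;  /* hypothetical */

        static int example_setup(void)
        {
        	example_buf = kmalloc(128, GFP_KERNEL);
        	return example_buf ? 0 : -ENOMEM;
        }

        static void example_teardown(void)
        {
        	kfree(example_buf);
        }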
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs, requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given that I had only a couple of failures from the build tests in
      step 7, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers, which should be easily discoverable on most builds
      of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  4. 18 December, 2009 1 commit
  5. 13 December, 2009 2 commits
  6. 27 August, 2009 1 commit
    • powerpc/mm: Cleanup handling of execute permission · ea3cc330
      Committed by Benjamin Herrenschmidt
      This is an attempt at cleaning up a bit the way we handle execute
      permission on powerpc. _PAGE_HWEXEC is gone, _PAGE_EXEC is now only
      defined by CPUs that can do something with it, and the myriad of
      #ifdef's in the I$/D$ coherency code is reduced to 2 cases that
      hopefully should cover everything.
      
      The logic on BookE is a little different from what it was, though
      not by much. Since _PAGE_EXEC will now be set by the generic code
      for executable pages, we need to filter it out if the page is
      unclean and recover it later. However, I don't expect the code to be
      more bloated than it already was in that area due to that change.
      
      I could boast that this brings proper enforcement of per-page execute
      permissions to all BookE and 40x, but in fact we've had that for
      some time as a side effect of my previous rework in that area (and
      I didn't even know it :-). We would only enable execute permission if
      the page was cache clean, and we would only cache clean it if we took
      an exec fault. Since we now enforce that the latter only works if
      VM_EXEC is part of the VMA flags, we de facto already enforce per-page
      execute permissions... unless I missed something.
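
      A hedged sketch of that filtering idea (names and structure are
      simplified, not the actual patch):

        /* Only let the hardware see _PAGE_EXEC once the page's icache is
         * known to be clean; clean it when an exec fault tells us the page
         * really is being executed.  PG_arch_1 tracks "icache clean" in
         * this sketch. */
        static pte_t filter_exec_pte(pte_t pte, struct page *pg, int exec_fault)
        {
        	if (!(pte_val(pte) & _PAGE_EXEC))
        		return pte;                     /* nothing to filter */

        	if (!test_bit(PG_arch_1, &pg->flags)) {
        		if (!exec_fault)
        			/* unclean: strip exec, recover on a later exec fault */
        			return __pte(pte_val(pte) & ~_PAGE_EXEC);
        		flush_dcache_icache_page(pg);   /* make I$/D$ coherent */
        		set_bit(PG_arch_1, &pg->flags);
        	}
        	return pte;
        }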
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  7. 27 May, 2009 1 commit
  8. 24 March, 2009 1 commit
    • powerpc/mm: Tweak PTE bit combination definitions · 8d1cf34e
      Committed by Benjamin Herrenschmidt
      This patch tweaks the way some PTE bit combinations are defined, in
      such a way that the 32- and 64-bit variants become almost identical,
      which will make it easier to bring in a new common pte-* file for the
      new variant of the Book3-E support.
      
      The combinations of bits defining access to kernel pages are now
      clearly separated from the combinations used by userspace and the
      core VM. The resulting generated code should remain identical unless
      I made a mistake.
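
      For flavor, a hedged sketch of the kind of separation meant here
      (macro names are approximate, not the exact ones in the patch):

        /* Kernel-page permission combinations defined on their own, shared
         * by 32- and 64-bit, distinct from the user/VM combinations. */
        #define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED)
        #define _PAGE_KERNEL_RO	(_PAGE_BASE)
        #define _PAGE_KERNEL_RW	(_PAGE_BASE | _PAGE_RW | _PAGE_DIRTY)
        #define PAGE_KERNEL	__pgprot(_PAGE_KERNEL_RW)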
      
      Note: While at it, I removed a nonsensical statement related to
      CONFIG_KGDB in ppc_mmu_32.c which could cause kernel mappings to be
      user accessible when that option is enabled. Probably something that
      bitrotted.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  9. 11 March, 2009 1 commit
  10. 10 February, 2009 1 commit
  11. 08 January, 2009 1 commit
  12. 29 December, 2008 1 commit
  13. 23 December, 2008 2 commits
  14. 16 December, 2008 1 commit
  15. 03 December, 2008 2 commits
  16. 25 September, 2008 1 commit
    • POWERPC: Allow 32-bit hashed pgtable code to support 36-bit physical · 4ee7084e
      Committed by Becky Bruce
      This rearranges a bit of code, and adds support for
      36-bit physical addressing for configs that use a
      hashed page table.  The 36b physical support is not
      enabled by default on any config - it must be
      explicitly enabled via the config system.
      
      This patch *only* expands the page table code to accommodate
      large physical addresses on 32-bit systems and enables the
      PHYS_64BIT config option for 86xx.  It does *not*
      allow you to boot a board with more than about 3.5GB of
      RAM - for that, SWIOTLB support is also required (and
      coming soon).
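
      A hedged sketch of what 36-bit support implies for the types involved
      (simplified; not the patch itself):

        /* With PHYS_64BIT, physical addresses outgrow unsigned long. */
        #ifdef CONFIG_PHYS_64BIT
        typedef unsigned long long phys_addr_t;  /* holds a 36-bit address */
        #else
        typedef unsigned long phys_addr_t;
        #endif

        /* Page-table code then derives page frame numbers from the wide
         * type instead of a 32-bit address (hypothetical helper): */
        static inline unsigned long example_phys_to_pfn(phys_addr_t pa)
        {
        	return (unsigned long)(pa >> PAGE_SHIFT);  /* pfn still fits */
        }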
      Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
  17. 25 July, 2008 1 commit
    • powerpc ioremap_prot · a1f242ff
      Committed by Benjamin Herrenschmidt
      This adds ioremap_prot() and pte_pgprot() so that one can extract
      protection bits from a PTE and pass them to ioremap_prot() (in order
      to support ptrace of VM_IO | VM_PFNMAP as per Rik's patch).
      
      This moves a couple of flag checks around in the ioremap
      implementations of arch/powerpc.  As a side effect, it allows
      non-cacheable and non-guarded mappings on ppc32, which previously
      would always have _PAGE_GUARDED set whenever _PAGE_NO_CACHE was.
      
      (standard ioremap will still set _PAGE_GUARDED, but ioremap_prot will
      be capable of setting such a non-guarded mapping).
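
      A hedged usage sketch (the driver context and variable names are
      hypothetical; the call shape matches the interface added here):

        /* Reuse the protection bits of an existing PTE for a new kernel
         * mapping of the same physical page, e.g. on behalf of ptrace. */
        static int example_peek_io_page(phys_addr_t pa, pte_t pte)
        {
        	pgprot_t prot = pte_pgprot(pte);  /* bits from the PTE */
        	void __iomem *va = ioremap_prot(pa, PAGE_SIZE, pgprot_val(prot));

        	if (!va)
        		return -ENOMEM;
        	/* ... access the mapping through va, then ... */
        	iounmap(va);
        	return 0;
        }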
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Dave Airlie <airlied@linux.ie>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 30 June, 2008 1 commit
  19. 23 May, 2008 1 commit
  20. 24 April, 2008 1 commit
    • [POWERPC] Port fixmap from x86 and use for kmap_atomic · 2c419bde
      Committed by Kumar Gala
      The fixmap code from x86 gives us compile-time virtual addresses
      whose physical addresses can be changed at run time.
      
      This is useful for things like kmap_atomic, PCI config space accessed
      via a direct memory map, and kexec/kdump.
      
      We got rid of CONFIG_HIGHMEM_START, as we can now determine a more
      optimal location for PKMAP_BASE based on where the fixmap addresses
      start and work back from there.
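
      A hedged sketch of the fixmap mechanism being ported (constants
      simplified from the x86 original):

        /* Fixmap slots are compile-time constants counted down from the
         * top of the address space; only the backing physical page is
         * chosen at run time. */
        enum fixed_addresses {
        	FIX_KMAP_BEGIN,
        	FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1,
        	__end_of_fixed_addresses
        };

        #define FIXADDR_TOP	((unsigned long)(-PAGE_SIZE))
        #define __fix_to_virt(x)	(FIXADDR_TOP - ((x) << PAGE_SHIFT))

        /* At run time, point a slot at a physical page:
         *   set_fixmap(FIX_KMAP_BEGIN + idx, phys_addr);          */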
      
      Additionally, the kmap code in asm-powerpc/highmem.h always had
      debugging enabled.  It now uses CONFIG_DEBUG_HIGHMEM to determine
      whether to do the extra debug checking.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  21. 17 April, 2008 1 commit
    • [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr · 99c62dd7
      Committed by Kumar Gala
      A number of users of PPC_MEMSTART (40x, ppc_mmu_32) can just always
      use 0 as we don't support booting these kernels at non-zero physical
      addresses since their exception vectors must be at 0 (or 0xfffx_xxxx).
      
      For the sub-arches that support relocatable interrupt vectors
      (book-e), it's reasonable to have memory start at a non-zero physical
      address.  For those cases use the variable memstart_addr instead of
      the #define PPC_MEMSTART since the only uses of PPC_MEMSTART are for
      initialization and in the future we can set memstart_addr at runtime
      to have a relocatable kernel.
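
      In code terms the change amounts to something like this (a hedged
      sketch, not the full diff):

        /* Before: the physical start of memory was a build-time constant.
         *   #define PPC_MEMSTART	0
         * After: a variable, set during early init today and settable at
         * boot time once the kernel becomes relocatable. */
        phys_addr_t memstart_addr = 0;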
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  22. 09 February, 2008 1 commit
    • CONFIG_HIGHPTE vs. sub-page page tables. · 2f569afd
      Committed by Martin Schwidefsky
      Background: I've implemented 1K/2K page tables for s390.  These sub-page
      page tables are required to properly support the s390 virtualization
      instruction with KVM.  The SIE instruction requires that the page tables
      have 256 page table entries (pte) followed by 256 page status table entries
      (pgste).  The pgstes are only required if the process is using the SIE
      instruction.  The pgstes are updated by the hardware and by the hypervisor
      for a number of reasons, one of them is dirty and reference bit tracking.
      To avoid wasting memory the standard pte table allocation should return
      1K/2K (31/64 bit) and 2K/4K if the process is using SIE.
      
      Problem: Page size on s390 is 4K, page table size is 1K or 2K.  That means
      the s390 version of pte_alloc_one cannot return a pointer to a struct
      page.  Trouble is that with the CONFIG_HIGHPTE feature on x86,
      pte_alloc_one cannot return a pointer to a pte either, since that
      would require more than 32 bits for the return value of pte_alloc_one
      (and the pte * would not be accessible since it's not kmapped).
      
      Solution: The only solution I found to this dilemma is a new typedef: a
      pgtable_t.  For s390 pgtable_t will be a (pte *) - to be introduced with a
      later patch.  For everybody else it will be a (struct page *).  The
      additional problem with the initialization of the ptl lock and the
      NR_PAGETABLE accounting is solved with a constructor pgtable_page_ctor and
      a destructor pgtable_page_dtor.  The page table allocation and free
      functions need to call these two whenever a page table page is allocated or
      freed.  pmd_populate will get a pgtable_t instead of a struct page pointer.
       To get the pgtable_t back from a pmd entry that has been installed with
      pmd_populate a new function pmd_pgtable is added.  It replaces the pmd_page
      call in free_pte_range and apply_to_pte_range.
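
      A hedged sketch of the generic (struct page *) flavor of the new
      contract (the per-arch versions differ in detail):

        typedef struct page *pgtable_t;  /* becomes (pte_t *) on s390 */

        static pgtable_t example_pte_alloc_one(struct mm_struct *mm,
        				       unsigned long addr)
        {
        	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

        	if (!page)
        		return NULL;
        	pgtable_page_ctor(page);  /* ptl init + NR_PAGETABLE accounting */
        	return page;
        }

        static void example_pte_free(struct mm_struct *mm, pgtable_t page)
        {
        	pgtable_page_dtor(page);  /* undo the ctor before freeing */
        	__free_page(page);
        }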
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: <linux-arch@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  23. 06 February, 2008 1 commit
  24. 14 June, 2007 3 commits
  25. 23 May, 2007 1 commit
    • [POWERPC] Fix modpost warning · f1aed924
      Committed by Kumar Gala
      Mark pte_alloc_one_kernel as __init_refok to fix the following warning:
      
      WARNING: arch/powerpc/mm/built-in.o(.text+0x1068): Section mismatch: reference to .init.text:early_get_page (between 'pte_alloc_one_kernel' and 'steal_context')
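
      Roughly, the fix is just the annotation (a condensed sketch of the
      function in question, not the verbatim source):

        /* __init_refok tells modpost that this reference into .init.text
         * is intentional: early_get_page() is only reached before
         * mem_init_done flips, i.e. while .init.text is still mapped. */
        pte_t * __init_refok pte_alloc_one_kernel(struct mm_struct *mm,
        					  unsigned long addr)
        {
        	if (mem_init_done)
        		return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
        	return (pte_t *)early_get_page();  /* boot-time allocator */
        }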
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
  26. 08 May, 2007 1 commit
  27. 24 April, 2007 1 commit
    • [POWERPC] Abolish PHYS_FMT macro from arch/powerpc · 37f01d64
      Committed by David Gibson
      32-bit powerpc systems define a macro, PHYS_FMT, giving a printf
      format string fragment for displaying physical addresses, since most
      32-bit powerpc platforms use 32-bit physical addresses but a few use
      64-bit physical addresses.
      
      This macro is used in exactly one place, a rare error message, where
      we can solve the problem more simply by just unconditionally casting
      the address up to a 64-bit quantity before formatting it.
      
      This patch does so, meaning that as we bring MMU definitions from
      asm-ppc over to asm-powerpc, cleaning them up in the process, we don't
      need to implement this ugly macro (which additionally has a very bad
      name for something global).
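
      Sketched in code (the surrounding error message is paraphrased, not
      quoted from the source):

        /* Before: a per-platform format macro.
         *   printk(KERN_ERR "bad page at " PHYS_FMT "\n", addr);
         * After: one format string, with the value widened explicitly. */
        static void example_report_bad_page(phys_addr_t addr)
        {
        	printk(KERN_ERR "bad page at 0x%llx\n",
        	       (unsigned long long)addr);
        }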
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  28. 13 April, 2007 3 commits
    • [POWERPC] DEBUG_PAGEALLOC for 32-bit · 88df6e90
      Committed by Benjamin Herrenschmidt
      Here's an implementation of DEBUG_PAGEALLOC for ppc32. It disables
      BAT mapping and has only been tested on hash-table-based processors,
      though it shouldn't be too hard to adapt to others.
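
      A hedged sketch of the hook such an implementation provides (details
      simplified; a change_page_attr helper is assumed here):

        /* DEBUG_PAGEALLOC unmaps freed pages from the kernel linear
         * mapping so stray accesses fault immediately instead of silently
         * corrupting memory. */
        void kernel_map_pages(struct page *page, int numpages, int enable)
        {
        	if (PageHighMem(page))
        		return;  /* highmem isn't in the linear mapping */

        	/* flip the linear-mapping PTEs for these pages */
        	change_page_attr(page, numpages,
        			 enable ? PAGE_KERNEL : __pgprot(0));
        }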
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/Kconfig.debug       |    9 ++++++
       arch/powerpc/mm/init_32.c        |    4 +++
       arch/powerpc/mm/pgtable_32.c     |   52 +++++++++++++++++++++++++++++++++++++++
       arch/powerpc/mm/ppc_mmu_32.c     |    4 ++-
       include/asm-powerpc/cacheflush.h |    6 ++++
       5 files changed, 74 insertions(+), 1 deletion(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [POWERPC] Fix 32-bit mm operations when not using BATs · ee4f2ea4
      Committed by Benjamin Herrenschmidt
      On hash-table-based 32-bit powerpcs, the hash management code runs
      with a big spinlock. It's thus important that it never causes a hash
      fault itself. That code is generally safe (it does memory accesses in
      real mode among other things) with the exception of the access to the
      code itself. That is, the kernel text needs to be accessible without
      taking hash miss exceptions.
      
      This is currently guaranteed by having a BAT register mapping part of the
      linear mapping permanently, which includes the kernel text. But this is
      not true if using the "nobats" kernel command line option (which can be
      useful for debugging) and will not be true when using DEBUG_PAGEALLOC
      implemented in a subsequent patch.
      
      This patch fixes this by pre-faulting in the hash table pages that hit
      the kernel text, and making sure we never evict such a page under hash
      pressure.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/mm/hash_low_32.S |   22 ++++++++++++++++++++--
       arch/powerpc/mm/mem.c         |    3 ---
       arch/powerpc/mm/mmu_decl.h    |    4 ++++
       arch/powerpc/mm/pgtable_32.c  |   11 +++++++----
       4 files changed, 31 insertions(+), 9 deletions(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [POWERPC] Cleanup 32-bit map_page · 3be4e699
      Committed by Benjamin Herrenschmidt
      The 32-bit map_page() function is used internally by the mm code
      for early mmu mappings and for ioremap. It should never be called
      for an address that already has a valid PTE or hash entry, so we
      add a BUG_ON for that and remove the now-useless flush_HPTE call.
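
      A hedged sketch of the resulting function (condensed; reconstructed
      from the description, not quoted from the diff):

        /* map_page() now asserts that the slot is empty rather than
         * flushing a hash entry that should never have existed. */
        int map_page(unsigned long va, unsigned long pa, int flags)
        {
        	pmd_t *pd = pmd_offset(pgd_offset_k(va), va);
        	pte_t *pg = pte_alloc_kernel(pd, va);

        	if (!pg)
        		return -ENOMEM;
        	/* early mappings and ioremap only populate fresh slots */
        	BUG_ON(pte_val(*pg) & _PAGE_PRESENT);
        	set_pte_at(&init_mm, va, pg,
        		   pfn_pte(pa >> PAGE_SHIFT, __pgprot(flags)));
        	return 0;
        }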
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/mm/pgtable_32.c |    9 ++++++---
       1 file changed, 6 insertions(+), 3 deletions(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  29. 09 February, 2007 1 commit
    • [POWERPC] Fix is_power_of_4(x) compile error · 8dabba5d
      Committed by Kumar Gala
      When building an 85xx kernel we get:
      
        CC      arch/powerpc/mm/pgtable_32.o
      arch/powerpc/mm/pgtable_32.c: In function 'io_block_mapping':
      arch/powerpc/mm/pgtable_32.c:330: error: expected identifier before '(' token
      arch/powerpc/mm/pgtable_32.c:330: error: expected statement before ')' token
      
      The is_power_of_2(x) fixup patch left an extra ')' on the
      is_power_of_4 macro.  There is a similar issue on the arch/ppc side.
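
      For reference, a hedged sketch of the intended macro once the stray
      ')' is dropped (the mask value here illustrates the idea):

        /* A power of 4 is a power of 2 whose single set bit is in an even
         * position, i.e. the value survives masking with 0x55555555. */
        #define is_power_of_4(x) \
        	(is_power_of_2(x) && (((x) & 0x55555555) == (x)))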
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
  30. 07 February, 2007 1 commit
  31. 04 December, 2006 2 commits
  32. 01 July, 2006 1 commit