1. 13 Dec 2009, 2 commits
  2. 27 Aug 2009, 1 commit
    • powerpc/mm: Cleanup handling of execute permission · ea3cc330
      Benjamin Herrenschmidt authored
      This is an attempt to clean up the way we handle execute
      permission on powerpc. _PAGE_HWEXEC is gone, _PAGE_EXEC is now only
      defined by CPUs that can do something with it, and the myriad of
      #ifdefs in the I$/D$ coherency code is reduced to 2 cases that
      hopefully cover everything.
      
      The logic on BookE is a little different from what it was, though
      not by much. Since _PAGE_EXEC will now be set by the generic code
      for executable pages, we need to filter it out while a page is still
      unclean and recover it afterwards. However, I don't expect the code
      in that area to be any more bloated than it already was due to this change.
      
      I could boast that this brings proper enforcement of per-page execute
      permissions to all BookE and 40x, but in fact we've had that for
      some time as a side effect of my previous rework in that area (and
      I didn't even know it :-) We would only enable execute permission if
      the page was cache clean, and we would only cache clean it if we took
      an exec fault. Since we now enforce that the latter only works if
      VM_EXEC is part of the VMA flags, we de facto already enforce per-page
      execute permissions... unless I missed something.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
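
      A minimal C sketch of the filtering described above, assuming made-up
      flag values and a simple clean-page flag (the real definitions live in
      the per-CPU pte-*.h headers and differ per family):

        #include <stdio.h>

        #define _PAGE_EXEC 0x200u  /* hypothetical bit position */

        /* The generic code hands us a PTE with _PAGE_EXEC set; hide the
         * bit from the hardware until the page is I$/D$ coherent, and
         * restore it once an exec fault has cache-cleaned the page. */
        static unsigned int filter_exec(unsigned int pte, int page_is_clean)
        {
                if (!page_is_clean)
                        return pte & ~_PAGE_EXEC;
                return pte;
        }

        int main(void)
        {
                unsigned int pte = _PAGE_EXEC | 0x001u; /* exec + present */
                printf("unclean: %#x\n", filter_exec(pte, 0));
                printf("clean:   %#x\n", filter_exec(pte, 1));
                return 0;
        }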
  3. 27 May 2009, 1 commit
  4. 24 Mar 2009, 1 commit
    • powerpc/mm: Tweak PTE bit combination definitions · 8d1cf34e
      Benjamin Herrenschmidt authored
      This patch tweaks the way some PTE bit combinations are defined, in such a
      way that the 32- and 64-bit variants become almost identical. That will
      make it easier to bring in a new common pte-* file for the new variant
      of the Book3-E support.
      
      The combinations of bits defining access to kernel pages are now clearly
      separated from those used by userspace and the core VM. The
      resulting generated code should remain identical unless I made a mistake.
      
      Note: while at it, I removed a nonsensical statement related to CONFIG_KGDB
      in ppc_mmu_32.c which could cause kernel mappings to be user accessible when
      that option is enabled. Probably something that bitrotted.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
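
      A rough illustration of that separation, with invented bit values (the
      real combinations vary per CPU family and live in the pte-* headers):

        #include <stdio.h>

        /* illustrative bit values, not the real powerpc definitions */
        #define _PAGE_PRESENT  0x001
        #define _PAGE_RW       0x002
        #define _PAGE_USER     0x004
        #define _PAGE_EXEC     0x008

        /* combinations used by userspace and the core VM */
        #define PAGE_NONE      (_PAGE_PRESENT)
        #define PAGE_COPY      (_PAGE_PRESENT | _PAGE_USER)
        #define PAGE_SHARED    (_PAGE_PRESENT | _PAGE_USER | _PAGE_RW)

        /* kernel-page combinations, defined separately */
        #define PAGE_KERNEL    (_PAGE_PRESENT | _PAGE_RW)
        #define PAGE_KERNEL_X  (PAGE_KERNEL | _PAGE_EXEC)

        int main(void)
        {
                printf("PAGE_SHARED=%#x PAGE_KERNEL_X=%#x\n",
                       PAGE_SHARED, PAGE_KERNEL_X);
                return 0;
        }

      Keeping the kernel combinations apart from the user ones is what lets
      the 32- and 64-bit headers converge on a single layout.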
  5. 11 Mar 2009, 1 commit
  6. 10 Feb 2009, 1 commit
  7. 08 Jan 2009, 1 commit
  8. 29 Dec 2008, 1 commit
  9. 23 Dec 2008, 2 commits
  10. 16 Dec 2008, 1 commit
  11. 03 Dec 2008, 2 commits
  12. 25 Sep 2008, 1 commit
    • POWERPC: Allow 32-bit hashed pgtable code to support 36-bit physical · 4ee7084e
      Becky Bruce authored
      This rearranges a bit of code, and adds support for
      36-bit physical addressing for configs that use a
      hashed page table.  The 36b physical support is not
      enabled by default on any config - it must be
      explicitly enabled via the config system.
      
      This patch *only* expands the page table code to accommodate
      large physical addresses on 32-bit systems and enables the
      PHYS_64BIT config option for 86xx.  It does *not*
      allow you to boot a board with more than about 3.5GB of
      RAM - for that, SWIOTLB support is also required (and
      coming soon).
      Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
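
      A sketch of the underlying idea, using a hypothetical CONFIG_PHYS_64BIT
      compile switch (type names follow the spirit of the commit, not the
      actual Freescale headers):

        #include <stdio.h>
        #include <stdint.h>

        /* With 36-bit physical support, PTEs and physical addresses are
         * carried in 64-bit types even though the kernel itself is 32-bit. */
        #ifdef CONFIG_PHYS_64BIT
        typedef uint64_t pte_basic_t;
        typedef uint64_t phys_addr_t;
        #else
        typedef uint32_t pte_basic_t;
        typedef uint32_t phys_addr_t;
        #endif

        int main(void)
        {
                printf("pte: %zu bytes, phys_addr_t: %zu bytes\n",
                       sizeof(pte_basic_t), sizeof(phys_addr_t));
                return 0;
        }

      Building with -DCONFIG_PHYS_64BIT models explicitly enabling the option
      via the config system, as the commit requires.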
  13. 25 Jul 2008, 1 commit
    • powerpc ioremap_prot · a1f242ff
      Benjamin Herrenschmidt authored
      This adds ioremap_prot() and pte_pgprot() so that one can extract the
      protection bits from a PTE and pass them to ioremap_prot() (in order to
      support ptrace of VM_IO | VM_PFNMAP, as per Rik's patch).
      
      This moves a couple of flag checks around in the ioremap implementations
      of arch/powerpc.  A side effect is that ppc32 now allows non-cacheable,
      non-guarded mappings, which previously would always have _PAGE_GUARDED
      set whenever _PAGE_NO_CACHE was.
      
      (Standard ioremap will still set _PAGE_GUARDED, but ioremap_prot will be
      capable of setting such a non-guarded mapping.)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Dave Airlie <airlied@linux.ie>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
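
      A standalone sketch of what a pte_pgprot()-style helper does, assuming
      illustrative bit positions (the real mask and flag values are per-CPU):

        #include <stdio.h>

        typedef unsigned long pte_v;
        #define PAGE_SHIFT      12
        #define PTE_RPN_MASK    (~0UL << PAGE_SHIFT)  /* physical page number */
        #define _PAGE_NO_CACHE  0x020UL               /* illustrative bits */

        /* Drop the PFN, keep only the protection/attribute bits, which can
         * then be fed back into an ioremap_prot()-style call. */
        static pte_v pte_pgprot(pte_v pte)
        {
                return pte & ~PTE_RPN_MASK;
        }

        int main(void)
        {
                pte_v pte = (0x1234UL << PAGE_SHIFT) | _PAGE_NO_CACHE;
                printf("prot bits: %#lx\n", pte_pgprot(pte));
                return 0;
        }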
  14. 30 Jun 2008, 1 commit
  15. 23 May 2008, 1 commit
  16. 24 Apr 2008, 1 commit
    • [POWERPC] Port fixmap from x86 and use for kmap_atomic · 2c419bde
      Kumar Gala authored
      The fixmap code from x86 allows us to have compile-time virtual addresses
      whose physical addresses can be changed at run time.
      
      This is useful for things like kmap_atomic, PCI config access done
      via direct memory map, and kexec/kdump.
      
      We got rid of CONFIG_HIGHMEM_START, as we can now determine a more optimal
      location for PKMAP_BASE based on where the fixmap addresses start and
      work back from there.
      
      Additionally, the kmap code in asm-powerpc/highmem.h always had debug
      enabled.  It now uses CONFIG_DEBUG_HIGHMEM to determine whether to do
      the extra debug checking.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
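
      A small model of the fixmap scheme being ported, with an invented
      FIXADDR_TOP value (the real constant and slot list are per-platform):

        #include <stdio.h>

        #define PAGE_SHIFT   12
        #define FIXADDR_TOP  0xfffff000UL   /* illustrative, not the real value */

        /* Each index is a compile-time constant, one page below the previous
         * slot, so the virtual address of a slot is fixed at build time while
         * the physical page mapped behind it can change at run time. */
        enum fixed_addresses {
                FIX_HOLE,
                FIX_KMAP_BEGIN,
                FIX_KMAP_END = FIX_KMAP_BEGIN + 31,  /* kmap_atomic slots */
                __end_of_fixed_addresses
        };

        static unsigned long fix_to_virt(enum fixed_addresses idx)
        {
                return FIXADDR_TOP - ((unsigned long)idx << PAGE_SHIFT);
        }

        int main(void)
        {
                printf("FIX_KMAP_BEGIN -> %#lx\n", fix_to_virt(FIX_KMAP_BEGIN));
                return 0;
        }

      PKMAP_BASE can then be computed by working back from the lowest fixmap
      slot, which is what made CONFIG_HIGHMEM_START unnecessary.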
  17. 17 Apr 2008, 1 commit
    • [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr · 99c62dd7
      Kumar Gala authored
      A number of users of PPC_MEMSTART (40x, ppc_mmu_32) can just always
      use 0 as we don't support booting these kernels at non-zero physical
      addresses since their exception vectors must be at 0 (or 0xfffx_xxxx).
      
      For the sub-arches that support relocatable interrupt vectors
      (book-e), it's reasonable to have memory start at a non-zero physical
      address.  For those cases, use the variable memstart_addr instead of
      the #define PPC_MEMSTART, since the only uses of PPC_MEMSTART are for
      initialization; in the future we can set memstart_addr at runtime
      to have a relocatable kernel.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
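
      The replacement pattern, reduced to a sketch (the boot-time value here
      is invented; real kernels derive it from the platform):

        #include <stdio.h>

        /* before: #define PPC_MEMSTART 0x00000000 */
        unsigned long memstart_addr = 0;  /* can now be set at runtime */

        int main(void)
        {
                /* A relocatable (book-e) kernel would fill this in at boot;
                 * the value here is invented for the example. */
                memstart_addr = 0x20000000UL;
                printf("memory starts at %#lx\n", memstart_addr);
                return 0;
        }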
  18. 09 Feb 2008, 1 commit
    • CONFIG_HIGHPTE vs. sub-page page tables. · 2f569afd
      Martin Schwidefsky authored
      Background: I've implemented 1K/2K page tables for s390.  These sub-page
      page tables are required to properly support the s390 virtualization
      instruction with KVM.  The SIE instruction requires that the page tables
      have 256 page table entries (pte) followed by 256 page status table entries
      (pgste).  The pgstes are only required if the process is using the SIE
      instruction.  The pgstes are updated by the hardware and by the hypervisor
      for a number of reasons, one of them is dirty and reference bit tracking.
      To avoid wasting memory the standard pte table allocation should return
      1K/2K (31/64 bit) and 2K/4K if the process is using SIE.
      
      Problem: Page size on s390 is 4K, page table size is 1K or 2K.  That means
      the s390 version of pte_alloc_one cannot return a pointer to a struct
      page.  Trouble is that with the CONFIG_HIGHPTE feature on x86, pte_alloc_one
      cannot return a pointer to a pte either, since that would require more than
      32 bits for the return value of pte_alloc_one (and the pte * would not be
      accessible anyway since it's not kmapped).
      
      Solution: The only solution I found to this dilemma is a new typedef: a
      pgtable_t.  For s390 pgtable_t will be a (pte *) - to be introduced with a
      later patch.  For everybody else it will be a (struct page *).  The
      additional problem with the initialization of the ptl lock and the
      NR_PAGETABLE accounting is solved with a constructor pgtable_page_ctor and
      a destructor pgtable_page_dtor.  The page table allocation and free
      functions need to call these two whenever a page table page is allocated or
      freed.  pmd_populate will get a pgtable_t instead of a struct page pointer.
      To get the pgtable_t back from a pmd entry that has been installed with
      pmd_populate, a new function pmd_pgtable is added.  It replaces the pmd_page
      call in free_pte_range and apply_to_pte_range.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: <linux-arch@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
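
      A compile-and-run model of the interface this patch introduces; the
      struct page and pmd_t stand-ins below are simplified placeholders, not
      kernel definitions:

        #include <stdio.h>

        struct page { int ptl_initialized; };
        /* (pte_t *) on s390, (struct page *) everywhere else */
        typedef struct page *pgtable_t;
        typedef struct { struct page *table; } pmd_t;

        /* The ctor/dtor pair hides ptl-lock setup and NR_PAGETABLE
         * accounting from the per-arch page table allocators. */
        static int pgtable_page_ctor(struct page *page)
        {
                page->ptl_initialized = 1;  /* models spin_lock_init(&ptl) */
                return 1;
        }

        static void pgtable_page_dtor(struct page *page)
        {
                page->ptl_initialized = 0;
        }

        /* pmd_populate now takes a pgtable_t; pmd_pgtable gets it back. */
        static void pmd_populate(pmd_t *pmd, pgtable_t pte_page)
        {
                pmd->table = pte_page;
        }

        static pgtable_t pmd_pgtable(pmd_t pmd)
        {
                return pmd.table;
        }

        int main(void)
        {
                struct page pg = {0};
                pmd_t pmd;

                pgtable_page_ctor(&pg);
                pmd_populate(&pmd, &pg);
                printf("ctor ran: %d\n", pmd_pgtable(pmd)->ptl_initialized);
                pgtable_page_dtor(&pg);
                return 0;
        }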
  19. 06 Feb 2008, 1 commit
  20. 14 Jun 2007, 3 commits
  21. 23 May 2007, 1 commit
    • [POWERPC] Fix modpost warning · f1aed924
      Kumar Gala authored
      Mark pte_alloc_one_kernel as __init_refok to fix the following warning:
      
      WARNING: arch/powerpc/mm/built-in.o(.text+0x1068): Section mismatch: reference to .init.text:early_get_page (between 'pte_alloc_one_kernel' and 'steal_context')
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
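
      What the fix looks like in miniature, with the kernel annotations
      stubbed out so the sketch builds on its own (the allocator body is a
      stand-in):

        #include <stdio.h>

        /* Stub annotations so this compiles outside the kernel tree. */
        #define __init        /* the real one places code in .init.text      */
        #define __init_refok  /* the real one tells modpost the ref is okay  */

        static void * __init early_get_page(void)
        {
                static char page[4096];  /* stand-in for the early allocator */
                return page;
        }

        /* The fix: annotate the caller so modpost stops warning about a
         * .text -> .init.text section-mismatch reference. */
        static void * __init_refok pte_alloc_one_kernel(void)
        {
                return early_get_page();
        }

        int main(void)
        {
                printf("pte page at %p\n", pte_alloc_one_kernel());
                return 0;
        }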
  22. 08 May 2007, 1 commit
  23. 24 Apr 2007, 1 commit
    • [POWERPC] Abolish PHYS_FMT macro from arch/powerpc · 37f01d64
      David Gibson authored
      32-bit powerpc systems define a macro, PHYS_FMT, giving a printf
      format string fragment for displaying physical addresses, since most
      32-bit powerpc platforms use 32-bit physical addresses but a few use
      64-bit physical addresses.
      
      This macro is used in exactly one place, a rare error message, where
      we can solve the problem more simply by just unconditionally casting
      the address up to a 64-bit quantity before formatting it.
      
      This patch does so, meaning that as we bring MMU definitions from
      asm-ppc over to asm-powerpc, cleaning them up in the process, we don't
      need to implement this ugly macro (which additionally has a very bad
      name for something global).
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
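
      The replacement idiom is simply an unconditional cast before printing,
      sketched here with a stand-in phys_addr_t:

        #include <stdio.h>
        #include <stdint.h>

        typedef uint64_t phys_addr_t;  /* stand-in; 32- or 64-bit on ppc32 */

        int main(void)
        {
                phys_addr_t pa = 0x1f0000000ULL;  /* invented example address */
                /* One format everywhere, instead of a per-platform PHYS_FMT: */
                printf("physical address: 0x%llx\n", (unsigned long long)pa);
                return 0;
        }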
  24. 13 Apr 2007, 3 commits
    • [POWERPC] DEBUG_PAGEALLOC for 32-bit · 88df6e90
      Benjamin Herrenschmidt authored
      Here's an implementation of DEBUG_PAGEALLOC for ppc32. It disables BAT
      mapping and has only been tested with hash-table-based processors, though it
      shouldn't be too hard to adapt to others.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/Kconfig.debug       |    9 ++++++
       arch/powerpc/mm/init_32.c        |    4 +++
       arch/powerpc/mm/pgtable_32.c     |   52 +++++++++++++++++++++++++++++++++++++++
       arch/powerpc/mm/ppc_mmu_32.c     |    4 ++-
       include/asm-powerpc/cacheflush.h |    6 ++++
       5 files changed, 74 insertions(+), 1 deletion(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [POWERPC] Fix 32-bit mm operations when not using BATs · ee4f2ea4
      Benjamin Herrenschmidt authored
      On hash-table-based 32-bit powerpcs, the hash management code runs with
      a big spinlock. It's thus important that it never causes itself a hash
      fault. That code is generally safe (it does memory accesses in real mode,
      among other things) with the exception of the actual access to the code
      itself. That is, the kernel text needs to be accessible without taking
      hash miss exceptions.
      
      This is currently guaranteed by having a BAT register mapping part of the
      linear mapping permanently, which includes the kernel text. But this is
      not true if using the "nobats" kernel command line option (which can be
      useful for debugging) and will not be true when using DEBUG_PAGEALLOC
      implemented in a subsequent patch.
      
      This patch fixes this by pre-faulting in the hash table pages that hit
      the kernel text, and making sure we never evict such a page under hash
      pressure.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/mm/hash_low_32.S |   22 ++++++++++++++++++++--
       arch/powerpc/mm/mem.c         |    3 ---
       arch/powerpc/mm/mmu_decl.h    |    4 ++++
       arch/powerpc/mm/pgtable_32.c  |   11 +++++++----
       4 files changed, 31 insertions(+), 9 deletions(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [POWERPC] Cleanup 32-bit map_page · 3be4e699
      Benjamin Herrenschmidt authored
      The 32-bit map_page() function is used internally by the mm code
      for early mmu mappings and for ioremap. It should never be called
      for an address that already has a valid PTE or hash entry, so we
      add a BUG_ON for that and remove the now-useless flush_HPTE call.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/mm/pgtable_32.c |    9 ++++++---
       1 file changed, 6 insertions(+), 3 deletions(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
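
      A toy model of the added check, with assert() standing in for BUG_ON()
      and a flat array standing in for the real page tables:

        #include <assert.h>
        #include <stdio.h>

        #define _PAGE_PRESENT 0x001UL
        #define NPTE 16

        static unsigned long pte_table[NPTE];

        /* Model of the cleaned-up map_page(): refuse (BUG_ON in the kernel,
         * assert here) to map a slot that already holds a valid PTE; the
         * old flush_HPTE call is simply gone. */
        static int map_page(unsigned long idx, unsigned long pa,
                            unsigned long flags)
        {
                assert(idx < NPTE);
                assert(!(pte_table[idx] & _PAGE_PRESENT));
                pte_table[idx] = pa | flags | _PAGE_PRESENT;
                return 0;
        }

        int main(void)
        {
                map_page(3, 0x4000UL, 0x2UL);
                printf("pte[3] = %#lx\n", pte_table[3]);
                return 0;
        }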
  25. 09 Feb 2007, 1 commit
    • [POWERPC] Fix is_power_of_4(x) compile error · 8dabba5d
      Kumar Gala authored
      When building an 85xx kernel we get:
      
        CC      arch/powerpc/mm/pgtable_32.o
      arch/powerpc/mm/pgtable_32.c: In function 'io_block_mapping':
      arch/powerpc/mm/pgtable_32.c:330: error: expected identifier before '(' token
      arch/powerpc/mm/pgtable_32.c:330: error: expected statement before ')' token
      
      The is_power_of_2(x) fixup patch left an extra ')' on the is_power_of_4 macro.
      There is a similar issue on the arch/ppc side.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
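
      A plausible reconstruction of the corrected macro (the exact original
      may differ; the buggy version simply ended in a stray ')'):

        #include <assert.h>
        #include <strings.h>  /* ffs() */

        #define is_power_of_2(x)  ((x) != 0 && (((x) & ((x) - 1)) == 0))
        /* A power of 4 is a power of 2 whose single set bit sits at an even
         * bit position, i.e. ffs() returns an odd value (bit 0 -> 1,
         * bit 2 -> 3, ...). */
        #define is_power_of_4(x)  (is_power_of_2(x) && (ffs(x) & 1))

        int main(void)
        {
                assert(is_power_of_4(4) && is_power_of_4(64));
                assert(!is_power_of_4(8) && !is_power_of_4(3));
                return 0;
        }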
  26. 07 Feb 2007, 1 commit
  27. 04 Dec 2006, 2 commits
  28. 01 Jul 2006, 1 commit
  29. 29 Mar 2006, 1 commit
  30. 16 Mar 2006, 1 commit
    • [PATCH] powerpc: remove duplicate EXPORT_SYMBOLS · 920573bd
      Olaf Hering authored
      Remove warnings when building a 64-bit kernel.
      The smp_call_function warning also triggers with a 32-bit kernel.
      
      WARNING: vmlinux: duplicate symbol 'smp_call_function' previous definition was in vmlinux
      arch/powerpc/kernel/ppc_ksyms.c:164:EXPORT_SYMBOL(smp_call_function);
      arch/powerpc/kernel/smp.c:300:EXPORT_SYMBOL(smp_call_function);
      
      WARNING: vmlinux: duplicate symbol 'ioremap' previous definition was in vmlinux
      arch/powerpc/kernel/ppc_ksyms.c:113:EXPORT_SYMBOL(ioremap);
      arch/powerpc/mm/pgtable_64.c:321:EXPORT_SYMBOL(ioremap);
      
      WARNING: vmlinux: duplicate symbol '__ioremap' previous definition was in vmlinux
      arch/powerpc/kernel/ppc_ksyms.c:117:EXPORT_SYMBOL(__ioremap);
      arch/powerpc/mm/pgtable_64.c:322:EXPORT_SYMBOL(__ioremap);
      
      WARNING: vmlinux: duplicate symbol 'iounmap' previous definition was in vmlinux
      arch/powerpc/kernel/ppc_ksyms.c:118:EXPORT_SYMBOL(iounmap);
      arch/powerpc/mm/pgtable_64.c:323:EXPORT_SYMBOL(iounmap);
      Signed-off-by: Olaf Hering <olh@suse.de>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  31. 31 Oct 2005, 1 commit
  32. 29 Oct 2005, 1 commit