1. 10 Oct, 2014 1 commit
    • mm: remove misleading ARCH_USES_NUMA_PROT_NONE · 6a33979d
      Mel Gorman committed
      ARCH_USES_NUMA_PROT_NONE was defined for architectures that implemented
      _PAGE_NUMA using _PROT_NONE.  This saved using an additional PTE bit and
      relied on the fact that PROT_NONE vmas were skipped by the NUMA hinting
      fault scanner.  This was found to be conceptually confusing with a lot of
      implicit assumptions and it was asked that an alternative be found.
      
      Commit c46a7c81 ("x86: define _PAGE_NUMA by reusing software bits on
      the PMD and PTE levels") redefined _PAGE_NUMA on x86 to be one of the
      swap PTE bits and shrank the maximum possible swap size, but it did
      not go far enough.  There are no architectures that reuse _PROT_NONE
      as _PROT_NUMA, but the relics still exist.
      
      This patch removes ARCH_USES_NUMA_PROT_NONE and removes some unnecessary
      duplication in powerpc vs the generic implementation by defining the types
      the core NUMA helpers expected to exist from x86 with their ppc64
      equivalent.  This necessitated that a PTE bit mask be created that
      identified the bits that distinguish present from NUMA pte entries but it
      is expected this will only differ between arches based on _PAGE_PROTNONE.
      The naming for the generic helpers was taken from x86 originally but ppc64
      has types that are equivalent for the purposes of the helper so they are
      mapped instead of duplicating code.
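      
      The mask-based distinction works roughly as follows; this is a sketch
      assuming an x86-style pte_flags() accessor, not the exact code that
      landed in asm-generic/pgtable.h:
      
              /*
               * _PAGE_NUMA_MASK collects every bit that can distinguish a
               * present PTE from a NUMA-hinting one; per the description
               * above, it is only expected to differ between arches based
               * on _PAGE_PROTNONE.  Sketch only, not the kernel's code.
               */
              static inline int pte_numa(pte_t pte)
              {
                      return (pte_flags(pte) & _PAGE_NUMA_MASK) == _PAGE_NUMA;
              }
      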
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 18 Apr, 2011 1 commit
  3. 31 Mar, 2011 1 commit
  4. 14 Oct, 2010 1 commit
    • powerpc: Fix invalid page flags in create TLB CAM path for PTE_64BIT · 92437d41
      Paul Gortmaker committed
      There exists a four-line chunk of code which, when the kernel is
      configured for a 64-bit address space, can incorrectly set certain
      page flags during TLB creation.  It turns out this code isn't
      currently used, but it might still serve a purpose.  Since it isn't
      obvious why it exists or why it causes problems, the description
      below covers both in detail.
      
      For powerpc bootstrap, the physical memory (at most 768M) is mapped
      into kernel space via the following path:
      
      MMU_init()
          |
          + adjust_total_lowmem()
                  |
                  + map_mem_in_cams()
                          |
                          + settlbcam(i, virt, phys, cam_sz, PAGE_KERNEL_X, 0);
      
      In settlbcam(), the kernel creates TLB entries according to the
      flags, here PAGE_KERNEL_X.
      
      settlbcam()
      {
              ...
              /* These entries cannot be invalidated by the kernel,
               * since MAS1_IPROT is set in the TLB entry. */
              TLBCAM[index].MAS1 = MAS1_VALID
                              | MAS1_IPROT | MAS1_TSIZE(tsize) | MAS1_TID(pid);
              ...
              if (flags & _PAGE_USER) {
                      TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
                      TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
              }
      
      For classic BookE, (flags & _PAGE_USER) is zero, so this is fine.
      But on boards like the Freescale P4080, we want to support 36-bit
      physical addresses, so the following options may be set:
      
      CONFIG_FSL_BOOKE=y
      CONFIG_PTE_64BIT=y
      CONFIG_PHYS_64BIT=y
      
      As a result, boards like the P4080 use the Book3E PTE format, as per
      arch/powerpc/include/asm/pgtable-ppc32.h:
      
        * #elif defined(CONFIG_FSL_BOOKE) && defined(CONFIG_PTE_64BIT)
        * #include <asm/pte-book3e.h>
      
      So PAGE_KERNEL_X is __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX) and the
      book3E version of _PAGE_KERNEL_RWX is defined with:
      
        (_PAGE_BAP_SW | _PAGE_BAP_SR | _PAGE_DIRTY | _PAGE_BAP_SX)
      
      Note the _PAGE_BAP_SR, which is also defined in the book3E _PAGE_USER:
      
        #define _PAGE_USER        (_PAGE_BAP_UR | _PAGE_BAP_SR) /* Can be read */
      
      So the possibility exists to wrongly assign the user MAS3_U<RWX> bits
      to kernel (PAGE_KERNEL_X) address space via the following code fragment:
      
              if (flags & _PAGE_USER) {
                 TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
                 TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
              }
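      
      To see concretely why this test fires for kernel mappings, take some
      hypothetical bit values (illustration only; the real definitions live
      in asm/pte-book3e.h):
      
              _PAGE_BAP_SR = 0x4,  _PAGE_BAP_UR = 0x8
              _PAGE_USER   = (_PAGE_BAP_UR | _PAGE_BAP_SR) = 0xc
      
              /* kernel flags include _PAGE_BAP_SR but not _PAGE_BAP_UR: */
              flags & _PAGE_USER = 0x4    /* non-zero, so the branch is taken */
      
      Even though the user-read bit _PAGE_BAP_UR is clear, the any-bits
      test fires and the user MAS3_U<RWX> bits get set on kernel mappings.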
      
      Here is a dump of the TLB info from Simics with the above code present:
      ------
      L2 TLB1
                                                  GT                   SSS UUU V I
       Row  Logical           Physical            SS TLPID  TID  WIMGE XWR XWR F P   V
      ----- ----------------- ------------------- -- ----- ----- ----- --- --- - -   -
        0   c0000000-cfffffff 000000000-00fffffff 00     0     0   M   XWR XWR 0 1   1
        1   d0000000-dfffffff 010000000-01fffffff 00     0     0   M   XWR XWR 0 1   1
        2   e0000000-efffffff 020000000-02fffffff 00     0     0   M   XWR XWR 0 1   1
      
      Actually this conditional code was used for two legacy purposes:
      
        1: supporting KGDB breakpoints.
           KGDB has since dropped this; it now uses its core write
           mechanism to set breakpoints.
      
        2: io_block_mapping(), which created TLB entries of segment size
           (not PAGE_SIZE) for device IO space.
           This use case has also been removed from the latest PowerPC
           kernel.
      
      However, there may still be a use case for it in the future, like
      large user pages, so we can't remove it entirely.  As an alternative,
      we match on all bits of _PAGE_USER instead of just any bits, so the
      case where just _PAGE_BAP_SR is set can't sneak through.
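      
      Concretely, matching on all bits amounts to the following (a sketch
      based on the description above; the actual patch may express the test
      via a helper):
      
              if ((flags & _PAGE_USER) == _PAGE_USER) {
                      TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
                      TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
              }
      
      With the hypothetical values above, a kernel mapping gives 0x4 == 0xc,
      which is false, so the user bits are no longer leaked.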
      
      With this done, the TLB entries appear without the user XWR bits
      set, as below:
      
      -------
      L2 TLB1
                                                  GT                   SSS UUU V I
       Row  Logical           Physical            SS TLPID  TID  WIMGE XWR XWR F P   V
      ----- ----------------- ------------------- -- ----- ----- ----- --- --- - -   -
        0   c0000000-cfffffff 000000000-00fffffff 00     0     0   M   XWR     0 1   1
        1   d0000000-dfffffff 010000000-01fffffff 00     0     0   M   XWR     0 1   1
        2   e0000000-efffffff 020000000-02fffffff 00     0     0   M   XWR     0 1   1
      Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
  5. 24 Sep, 2009 1 commit
  6. 27 Aug, 2009 1 commit
    • powerpc/mm: Cleanup handling of execute permission · ea3cc330
      Benjamin Herrenschmidt committed
      This is an attempt at cleaning up a bit the way we handle execute
      permission on powerpc. _PAGE_HWEXEC is gone, _PAGE_EXEC is now only
      defined by CPUs that can do something with it, and the myriad of
      #ifdef's in the I$/D$ coherency code is reduced to 2 cases that
      hopefully should cover everything.
      
      The logic on BookE is a little different from what it was, though
      not by much.  Since _PAGE_EXEC will now be set by the generic code
      for executable pages, we need to filter it out while a page's caches
      are unclean and recover it once they have been cleaned.  However, I
      don't expect the code in that area to be more bloated than it
      already was due to this change.
      
      I could boast that this brings proper enforcement of per-page
      execute permissions to all BookE and 40x, but in fact we've had that
      for some time as a side effect of my previous rework in that area
      (and I didn't even know it :-).  We would only enable execute
      permission if the page was cache-clean, and we would only cache-clean
      it if we took an exec fault.  Since we now enforce that the latter
      only works if VM_EXEC is part of the VMA flags, we de facto already
      enforce per-page execute permissions... unless I missed something.
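      
      The filter-and-recover dance described above looks roughly like the
      following; a simplified sketch, not the actual kernel code, assuming
      PG_arch_1 is used (as on powerpc) to track I-cache cleanliness:
      
              /* Sketch only: when a PTE is set, strip exec permission from
               * pages whose icache is not yet known to be clean. */
              static pte_t set_pte_filter(pte_t pte, struct page *pg)
              {
                      if (!test_bit(PG_arch_1, &pg->flags))
                              pte = __pte(pte_val(pte) & ~_PAGE_EXEC);
                      return pte;
              }
      
              /* On an exec fault in a VM_EXEC vma: flush the caches, set
               * PG_arch_1, and let _PAGE_EXEC through this time around. */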
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  7. 20 Aug, 2009 1 commit
  8. 07 Apr, 2009 1 commit
    • powerpc: Fix oops when loading modules · 11b55da7
      Paul Mackerras committed
      This fixes a problem reported by Sean MacLennan where loading any
      module would cause an oops.  We weren't marking the pages containing
      the module text as having hardware execute permission, due to a bug
      introduced in commit 8d1cf34e ("powerpc/mm: Tweak PTE bit combination
      definitions"), hence trying to execute the module text caused an
      exception on processors that support hardware execute permission.
      
      This adds _PAGE_HWEXEC to the definitions of PAGE_KERNEL_X and
      PAGE_KERNEL_ROX to fix this problem.
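      
      The change amounts to something like the following in the protection
      definitions (a sketch; the exact base flag combinations in the real
      pte-common.h definitions may differ):
      
              #define PAGE_KERNEL_X    __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX | _PAGE_HWEXEC)
              #define PAGE_KERNEL_ROX  __pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX | _PAGE_HWEXEC)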
      Reported-by: Sean MacLennan <smaclennan@pikatech.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  9. 24 Mar, 2009 1 commit
    • powerpc/mm: Merge various PTE bits and accessors definitions · 71087002
      Benjamin Herrenschmidt committed
      Now that they are almost identical, we can merge some of the definitions
      related to the PTE format into common files.
      
      This creates a new pte-common.h which is included by both 32 and 64-bit
      right after the CPU specific pte-*.h file, and which defines some
      bits to "default" values if they haven't been defined already, and
      then provides a generic definition of most of the bit combinations
      based on these and exposed to the rest of the kernel.
      
      I also moved most of the "small" accessors to the PTE bits and the
      modification helpers (pte_mk*) into the common pgtable.h.  The actual
      PTE accessors remain in their separate files.
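      
      The "default value" scheme is the usual #ifndef pattern; a sketch
      with illustrative bits, not the exact contents of pte-common.h:
      
              /* In pte-common.h, right after the CPU-specific pte-*.h: */
              #ifndef _PAGE_HWEXEC
              #define _PAGE_HWEXEC    0   /* no hardware-exec bit on this CPU */
              #endif
              #ifndef _PAGE_COHERENT
              #define _PAGE_COHERENT  0
              #endif
      
              /* Generic bit combinations can then be built unconditionally
               * and exposed to the rest of the kernel (illustrative): */
              #define _PAGE_KERNEL_RW (_PAGE_DIRTY | _PAGE_RW | _PAGE_HWWRITE)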
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>