1. 21 Nov 2017, 1 commit
  2. 10 Mar 2017, 1 commit
  3. 29 Jun 2016, 3 commits
  4. 10 Jun 2016, 1 commit
    • ARM: 8578/1: mm: ensure pmd_present only checks the valid bit · 62453188
      Will Deacon authored
      In a subsequent patch, pmd_mknotpresent will clear the valid bit of the
      pmd entry, resulting in a not-present entry from the hardware's
      perspective. Unfortunately, pmd_present simply checks for a non-zero pmd
      value and will therefore continue to return true even after a
      pmd_mknotpresent operation. Since pmd_mknotpresent is only used for
      managing huge entries, this is only an issue for the 3-level case.
      
      This patch fixes the 3-level pmd_present implementation to take into
      account the valid bit. For bisectability, the change is made before the
      fix to pmd_mknotpresent.
      
      [catalin.marinas@arm.com: comment update regarding pmd_mknotpresent patch]
      
      Fixes: 8d962507 ("ARM: mm: Transparent huge page support for LPAE systems.")
      Cc: <stable@vger.kernel.org> # 3.11+
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Steve Capper <Steve.Capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  5. 22 Sep 2015, 1 commit
    • ARM: 8432/1: move VMALLOC_END from 0xff000000 to 0xff800000 · 6ff09660
      Nicolas Pitre authored
      There is a 12MB unused region in our memory map between the vmalloc and
      fixmap areas. This became unused with commit e9da6e99, confirmed
      with commit 64d3b6a3.
      
      We also have an 8MB guard area before the vmalloc area.  With the default
      240MB vmalloc area size and the current VMALLOC_END definition, that
      means the end of low memory ends up at 0xef800000, which is unfortunate
      for 768MB machines where 8MB of RAM is lost to highmem.
      
      Let's move VMALLOC_END to 0xff800000 so the guard area won't chop the
      top of the 768MB low memory area while keeping the default vmalloc area
      size unchanged and still preserving a gap between the vmalloc and fixmap
      areas.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  6. 11 Feb 2015, 1 commit
  7. 04 Dec 2014, 1 commit
  8. 10 Oct 2014, 2 commits
  9. 24 Jul 2014, 1 commit
    • ARM: 8108/1: mm: Introduce {pte,pmd}_isset and {pte,pmd}_isclear · f2950706
      Steven Capper authored
      Long descriptors on ARM are 64 bits, and some pte functions such as
      pte_dirty return a bitwise-and of a flag with the pte value. If the
      flag to be tested resides in the upper 32 bits of the pte, then we run
      into the danger of the result being dropped if downcast.
      
      For example:
      	gather_stats(page, md, pte_dirty(*pte), 1);
      where pte_dirty(*pte) is downcast to an int.
      
      This patch introduces a new macro pte_isset which performs the bitwise
      and, then performs a double logical invert (where needed) to ensure
      predictable downcasting. The logical inverse pte_isclear is also
      introduced.
      
      Equivalent pmd functions for Transparent HugePages have also been
      added.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  10. 25 Feb 2014, 1 commit
    • ARM: 7985/1: mm: implement pte_accessible for faulting mappings · 1971188a
      Will Deacon authored
      The pte_accessible macro can be used to identify page table entries
      capable of being cached by a TLB. In principle, this differs from
      pte_present, since PROT_NONE mappings are mapped using invalid entries
      identified as present and ptes designated as `old' can use either
      invalid entries or those with the access flag cleared (guaranteed not to
      be in the TLB). However, there is a race to take care of, as described
      in 20841405 ("mm: fix TLB flush race between migration, and
      change_protection_range"), between a page being migrated and mprotected
      at the same time. In this case, we can check whether a TLB invalidation
      is pending for the mm and if so, temporarily consider PROT_NONE mappings
      as valid.
      
      This patch implements a quick pte_accessible macro for ARM by simply
      checking if the pte is valid/present depending on the mm. For classic
      MMU, these checks are identical and will generate some false positives
      for PROT_NONE mappings, but this is better than the current asm-generic
      definition of ((void)(pte),1).
      
      Finally, pte_present_user is moved to use pte_valid (and renamed
      appropriately) since we don't care about cache flushing for faulting
      mappings.
      Acked-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  11. 11 Dec 2013, 1 commit
  12. 30 Nov 2013, 1 commit
    • ARM: fix booting low-vectors machines · d8aa712c
      Russell King authored
      Commit f6f91b0d (ARM: allow kuser helpers to be removed from the
      vector page) required two pages for the vectors code.  Although the
      code setting up the initial page tables was updated, the code which
      allocates page tables for new processes wasn't, and neither was the code
      which tears down the mappings.  Fix this.
      
      Fixes: f6f91b0d ("ARM: allow kuser helpers to be removed from the vector page")
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: <stable@vger.kernel.org>
  13. 14 Aug 2013, 1 commit
  14. 29 Jun 2013, 1 commit
  15. 04 Jun 2013, 1 commit
  16. 30 Apr 2013, 1 commit
    • arm: set the page table freeing ceiling to TASK_SIZE · 104ad3b3
      Catalin Marinas authored
      ARM processors with LPAE enabled use 3 levels of page tables, with an
      entry in the top level (pgd) covering 1GB of virtual space.  Because of
      the branch relocation limitations on ARM, the loadable modules are
      mapped 16MB below PAGE_OFFSET, making the corresponding 1GB pgd shared
      between kernel modules and user space.
      
      If free_pgtables() is called with the default ceiling 0,
      free_pgd_range() (and subsequently called functions) also frees the page
      table shared between user space and kernel modules (which is normally
      handled by the ARM-specific pgd_free() function).  This patch defines
      the ARM USER_PGTABLES_CEILING as TASK_SIZE when CONFIG_ARM_LPAE
      is enabled.
      
      Note that the pgd_free() function already checks the presence of the
      shared pmd page allocated by pgd_alloc() and frees it, though with
      ceiling 0 this wasn't necessary.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: <stable@vger.kernel.org>	[3.3+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 25 Apr 2013, 1 commit
    • ARM: 7702/1: Set the page table freeing ceiling to TASK_SIZE · 6aaa189f
      Catalin Marinas authored
      ARM processors with LPAE enabled use 3 levels of page tables, with an
      entry in the top level (pgd) covering 1GB of virtual space. Because of
      the branch relocation limitations on ARM, the loadable modules are
      mapped 16MB below PAGE_OFFSET, making the corresponding 1GB pgd shared
      between kernel modules and user space.
      
      If free_pgtables() is called with the default ceiling 0,
      free_pgd_range() (and subsequently called functions) also frees the page
      table shared between user space and kernel modules (which is normally
      handled by the ARM-specific pgd_free() function).  This patch defines
      the ARM USER_PGTABLES_CEILING as TASK_SIZE when CONFIG_ARM_LPAE
      is enabled.
      
      Note that the pgd_free() function already checks the presence of the
      shared pmd page allocated by pgd_alloc() and frees it, though with
      ceiling 0 this wasn't necessary.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: <stable@vger.kernel.org> # 3.3+
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
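      The change itself boils down to a conditional define along these lines (a sketch of the header hunk; the exact file and comment wording are not shown in the log):

```c
#ifdef CONFIG_ARM_LPAE
/*
 * With LPAE the first pgd entry covers both the modules area and user
 * space, so free_pgtables() must stop at TASK_SIZE rather than use the
 * default ceiling of 0 and tear down the shared page table.
 */
#define USER_PGTABLES_CEILING	TASK_SIZE
#endif
```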
  18. 21 Feb 2013, 1 commit
  19. 24 Jan 2013, 1 commit
  20. 09 Nov 2012, 2 commits
    • ARM: mm: introduce present, faulting entries for PAGE_NONE · 26ffd0d4
      Will Deacon authored
      PROT_NONE mappings apply the page protection attributes defined by _P000
      which translate to PAGE_NONE for ARM. These attributes specify an XN,
      RDONLY pte that is inaccessible to userspace. However, on kernels
      configured without support for domains, such a pte *is* accessible to
      the kernel and can be read via get_user, allowing tasks to read
      PROT_NONE pages via syscalls such as read/write over a pipe.
      
      This patch introduces a new software pte flag, L_PTE_NONE, that is set
      to identify faulting, present entries.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • ARM: mm: introduce L_PTE_VALID for page table entries · dbf62d50
      Will Deacon authored
      For long-descriptor translation table formats, the ARMv7 architecture
      defines the last two bits of the second- and third-level descriptors to
      be:
      
      	x0b	- Invalid
      	01b	- Block (second-level), Reserved (third-level)
      	11b	- Table (second-level), Page (third-level)
      
      This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to
      create ptes directly. However, when determining whether a given pte
      value is present in the low-level page table accessors, we only need to
      check the least significant bit of the descriptor, allowing us to write
      faulting, present entries which are required for PROT_NONE mappings.
      
      This patch introduces L_PTE_VALID, which can be used to test whether a
      pte should fault, and updates the low-level page table accessors
      accordingly.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  21. 03 Oct 2012, 1 commit
  22. 11 Aug 2012, 2 commits
  23. 03 Jan 2012, 1 commit
  24. 08 Dec 2011, 3 commits
  25. 06 Dec 2011, 2 commits
  26. 27 Nov 2011, 2 commits
    • ARM: move VMALLOC_END down temporarily for shmobile · 0af362f8
      Nicolas Pitre authored
      THIS IS A TEMPORARY HACK.  The purpose of this is _only_ to avoid a
      regression on an existing machine while a better fix is implemented.
      
      On shmobile the consistent DMA memory area was set to 158MB in commit
      28f0721a with no explanation.  The documented size for this area should
      vary between 2MB and 14MB, and none of the other ARM targets exceed that.
      
      The included #warning is therefore deliberately noisy, to get the
      shmobile maintainers' attention; this commit should be reverted once the
      consistent DMA size conflict is resolved.
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Cc: Magnus Damm <damm@opensource.se>
      Cc: Paul Mundt <lethal@linux-sh.org>
    • ARM: move iotable mappings within the vmalloc region · 0536bdf3
      Nicolas Pitre authored
      In order to remove the build time variation between different SOCs with
      regards to VMALLOC_END, the iotable mappings are now allocated inside
      the vmalloc region.  This allows for VMALLOC_END to be identical across
      all machines.
      
      The value for VMALLOC_END is now set to 0xff000000 which is right where
      the consistent DMA area starts.
      
      To accommodate all static mappings on machines with possible highmem usage,
      the default vmalloc area size is changed to 240 MB so that VMALLOC_START
      is no higher than 0xf0000000 by default.
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Tested-by: Stephen Warren <swarren@nvidia.com>
      Tested-by: Kevin Hilman <khilman@ti.com>
      Tested-by: Jamie Iles <jamie@jamieiles.com>
  27. 06 Oct 2011, 2 commits
  28. 23 Sep 2011, 1 commit
  29. 22 Feb 2011, 1 commit
  30. 15 Feb 2011, 1 commit