1. 08 Nov 2016, 1 commit
    • arm64: mm: BUG on unsupported manipulations of live kernel mappings · e98216b5
      Committed by Ard Biesheuvel
      Now that we take care not to manipulate the live kernel page tables in a
      way that may lead to TLB conflicts, the case where a table mapping is
      replaced by a block mapping can no longer occur. So remove the handling
      of this at the PUD and PMD levels, and instead, BUG() on any occurrence
      of live kernel page table manipulations that modify anything other than
      the permission bits.
      
      Since mark_rodata_ro() is the only caller where the kernel mappings that
      are being manipulated are actually live, drop the various conditional
      flush_tlb_all() invocations, and add a single flush_tlb_all() call to
      mark_rodata_ro() instead.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
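      A minimal sketch of the kind of check this change describes, assuming an illustrative helper name (the real check lives in arch/arm64/mm/mmu.c and also covers the descriptor type):

      ```c
      /*
       * Sketch only: on a live kernel mapping, the new descriptor may differ
       * from the old one in its permission/attribute bits, but never in its
       * output address or descriptor type (only the address is checked here
       * for brevity). The helper name is hypothetical; pte_valid(), pte_val(),
       * PHYS_MASK and PAGE_MASK are real arm64 definitions.
       */
      static void check_safe_live_update(pte_t old_pte, pte_t new_pte)
      {
              if (pte_valid(old_pte) &&
                  ((pte_val(old_pte) ^ pte_val(new_pte)) & PHYS_MASK & (u64)PAGE_MASK))
                      BUG();  /* unsupported manipulation of a live kernel mapping */
      }
      ```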
  2. 07 Sep 2016, 1 commit
  3. 22 Aug 2016, 1 commit
  4. 01 Aug 2016, 1 commit
  5. 26 Jul 2016, 2 commits
  6. 01 Jul 2016, 4 commits
  7. 28 Jun 2016, 1 commit
  8. 16 Apr 2016, 1 commit
  9. 15 Apr 2016, 2 commits
  10. 25 Mar 2016, 1 commit
    • arm64: consistently use p?d_set_huge · c661cb1c
      Committed by Mark Rutland
      Commit 324420bf ("arm64: add support for ioremap() block
      mappings") added new p?d_set_huge functions which do the hard work to
      generate and set a correct block entry.
      
      These differ from open-coded huge page creation in the early page table
      code by explicitly setting the P?D_TYPE_SECT bits (which are implicitly
      retained by mk_sect_prot() for any valid prot), but are otherwise
      identical (and cannot fail on arm64).
      
      For simplicity and consistency, make use of these in the initial page
      table creation code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
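      A sketch of the substitution described above, assuming a simplified wrapper around the early pmd handling (not the exact kernel diff):

      ```c
      /*
       * Sketch: replace the open-coded section entry with pmd_set_huge().
       * The wrapper name is hypothetical; set_pmd(), mk_sect_prot() and
       * pmd_set_huge() are real arm64 primitives.
       */
      static void init_pmd_block(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
      {
              /* Old open-coded form, relying on mk_sect_prot() keeping the SECT type:
               *   set_pmd(pmdp, __pmd(phys | pgprot_val(mk_sect_prot(prot))));
               */

              /* New form: sets PMD_TYPE_SECT explicitly and cannot fail on arm64. */
              pmd_set_huge(pmdp, phys, prot);
      }
      ```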
  11. 26 Feb 2016, 1 commit
  12. 24 Feb 2016, 1 commit
    • arm64: add support for kernel ASLR · f80fb3a3
      Committed by Ard Biesheuvel
      This adds support for KASLR, based on entropy provided by
      the bootloader in the /chosen/kaslr-seed DT property. Depending on the size
      of the address space (VA_BITS) and the page size, the entropy in the
      virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
      4 levels), with the sidenote that displacements that result in the kernel
      image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
      granule kernels, respectively) are not allowed, and will be rounded up to
      an acceptable value.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
      randomized independently from the core kernel. This makes it less likely
      that the location of core kernel data structures can be determined by an
      adversary, but causes all function calls from modules into the core kernel
      to be resolved via entries in the module PLTs.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
      randomized by choosing a page aligned 128 MB region inside the interval
      [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
      entropy (depending on page size), independently of the kernel randomization,
      but still guarantees that modules are within the range of relative branch
      and jump instructions (with the caveat that, since the module region is
      shared with other uses of the vmalloc area, modules may need to be loaded
      further away if the module region is exhausted).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
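      As a rough illustration of the non-FULL case, the module region base could be derived along these lines (the function name and seed handling are assumptions of this sketch; _stext, _etext, SZ_128M and PAGE_MASK are real kernel symbols):

      ```c
      /*
       * Sketch: choose a page-aligned 128 MB module window inside
       * [_etext - 128 MB, _stext + 128 MB), keeping all of the kernel text
       * within relative-branch range of any module placed in the window.
       */
      static u64 pick_module_region_base(u64 seed)
      {
              u64 kernel_span = (u64)_etext - (u64)_stext;    /* must stay reachable  */
              u64 freedom = SZ_128M - kernel_span;            /* slack for the window */
              u64 offset = (seed % freedom) & PAGE_MASK;      /* page-aligned offset  */

              return (u64)_etext - SZ_128M + offset;          /* window base          */
      }
      ```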
  13. 19 Feb 2016, 4 commits
    • arm64: allow kernel Image to be loaded anywhere in physical memory · a7f8de16
      Committed by Ard Biesheuvel
      This relaxes the kernel Image placement requirements, so that it
      may be placed at any 2 MB aligned offset in physical memory.
      
      This is accomplished by ignoring PHYS_OFFSET when installing
      memblocks, and accounting for the apparent virtual offset of
      the kernel Image. As a result, virtual address references
      below PAGE_OFFSET are correctly mapped onto physical references
      into the kernel Image regardless of where it sits in memory.
      
      Special care needs to be taken when dealing with memory limits passed
      via mem=, since the generic implementation clips memory top down, which
      may clip the kernel image itself if it is loaded high up in memory. To
      deal with this case, we simply add back the memory covering the kernel
      image, which may result in more memory being retained than was requested
      via the mem= parameter.
      
      Since mem= should not be considered a production feature, a panic notifier
      handler is installed that dumps the memory limit at panic time if one was
      set.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
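      A simplified sketch of the mem= handling described above; memblock_enforce_memory_limit() and memblock_add() are the real memblock APIs, while the wrapper name and the ULLONG_MAX "no limit" sentinel are assumptions of this sketch:

      ```c
      /*
       * Sketch (simplified from arm64_memblock_init(); not the exact kernel
       * code): clip memory to the requested limit, then add back the range
       * covering the kernel image so the image itself is never clipped away.
       */
      static void apply_mem_limit(phys_addr_t memory_limit)
      {
              if (memory_limit != (phys_addr_t)ULLONG_MAX) {
                      memblock_enforce_memory_limit(memory_limit);    /* clips top down */
                      /* Re-add the kernel image; this may retain more memory than
                       * mem= asked for. */
                      memblock_add(__pa(_text), (u64)(_end - _text));
              }
      }
      ```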
    • arm64: move kernel image to base of vmalloc area · f9040773
      Committed by Ard Biesheuvel
      This moves the module area to right before the vmalloc area, and moves
      the kernel image to the base of the vmalloc area. This is an intermediate
      step towards implementing KASLR, which allows the kernel image to be
      located anywhere in the vmalloc area.
      
      Since other subsystems such as hibernate may still need to refer to the
      kernel text or data segments via their linear addresses, both are mapped
      in the linear region as well. The linear alias of the text region is
      mapped read-only/non-executable to prevent inadvertent modification or
      execution.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
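      A sketch of the read-only/non-executable linear alias described above, assuming a create_mapping_late()-style helper from arch/arm64/mm/mmu.c (simplified; not the exact kernel code):

      ```c
      /* Sketch: map the linear alias of the kernel text read-only and non-executable. */
      static void map_text_linear_alias(void)
      {
              unsigned long alias = (unsigned long)__va(__pa(_text)); /* linear alias */
              unsigned long size  = (unsigned long)(_etext - _text);  /* text size    */

              create_mapping_late(__pa(_text), alias, size, PAGE_KERNEL_RO);
      }
      ```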
    • arm64: decouple early fixmap init from linear mapping · 157962f5
      Committed by Ard Biesheuvel
      Since the early fixmap page tables are populated using pages that are
      part of the static footprint of the kernel, they are covered by the
      initial kernel mapping, and we can refer to them without using __va/__pa
      translations, which are tied to the linear mapping.
      
      Since the fixmap page tables are disjoint from the kernel mapping up
      to the top level pgd entry, we can refer to bm_pte[] directly, and there
      is no need to walk the page tables and perform __pa()/__va() translations
      at each step.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
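      A sketch of the idea: bm_pte[] and pmd_populate_kernel() are real, but the wiring function below is illustrative only, not the full early_fixmap_init():

      ```c
      /* The early fixmap PTE table is statically allocated in the kernel image. */
      static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;

      /*
       * Sketch: since bm_pte[] lives in the kernel image it is already mapped,
       * so it can be wired into the pmd directly, with no __pa()/__va() walk.
       */
      static void wire_up_fixmap_pte(pmd_t *pmdp)
      {
              pmd_populate_kernel(&init_mm, pmdp, bm_pte);
      }
      ```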
    • arm64: add support for ioremap() block mappings · 324420bf
      Committed by Ard Biesheuvel
      This wires up the existing generic huge-vmap feature, which allows
      ioremap() to use PMD or PUD sized block mappings. It also adds support
      to the unmap path for dealing with block mappings, which will allow us
      to unmap the __init region using unmap_kernel_range() in a subsequent
      patch.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
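      A sketch of the pmd-level helper this refers to (close to, but not necessarily identical with, the arm64 implementation):

      ```c
      /* Build and install a PMD-sized block (section) entry, e.g. for ioremap(). */
      int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
      {
              set_pmd(pmdp, __pmd(phys | PMD_TYPE_SECT | pgprot_val(mk_sect_prot(prot))));
              return 1;       /* block mappings cannot fail on arm64 */
      }
      ```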
  14. 16 Feb 2016, 11 commits
  15. 12 Dec 2015, 1 commit
    • arm64: mm: ensure that the zero page is visible to the page table walker · 32d63978
      Committed by Will Deacon
      In paging_init, we allocate the zero page, memset it to zero and then
      point TTBR0 to it in order to avoid speculative fetches through the
      identity mapping.
      
      In order to guarantee that the freshly zeroed page is indeed visible to
      the page table walker, we need to execute a dsb instruction prior to
      writing the TTBR.
      
      Cc: <stable@vger.kernel.org> # v3.14+; older kernels need to drop the 'ishst' option
      Signed-off-by: Will Deacon <will.deacon@arm.com>
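      A sketch of the ordering requirement (simplified from paging_init(); the wrapper and its argument are illustrative, while dsb() and cpu_set_reserved_ttbr0() are real arm64 primitives):

      ```c
      /* Zero the page, make it visible to the walker, then let TTBR0 reference it. */
      static void install_zero_page(void *zero_page)
      {
              memset(zero_page, 0, PAGE_SIZE);

              /* Ensure the freshly zeroed page is visible to the page table
               * walker before it is referenced via TTBR0. */
              dsb(ishst);

              cpu_set_reserved_ttbr0();
      }
      ```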
  16. 11 Dec 2015, 1 commit
  17. 10 Dec 2015, 1 commit
  18. 01 Dec 2015, 2 commits
  19. 26 Nov 2015, 1 commit
    • Revert "arm64: Mark kernel page ranges contiguous" · 667c2759
      Committed by Catalin Marinas
      This reverts commit 348a65cd.
      
      Incorrect page table manipulation that does not respect the ARM ARM
      recommended break-before-make sequence may lead to TLB conflicts. The
      contiguous PTE patch makes the system even more susceptible to such
      errors by changing the mapping from a single page to a contiguous range
      of pages. An additional TLB invalidation would reduce the risk window;
      however, the correct fix is to switch to a temporary swapper_pg_dir.
      Once the correct workaround is done, the reverted commit will be
      re-applied.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Jeremy Linton <jeremy.linton@arm.com>
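      For reference, a sketch of the break-before-make sequence the ARM ARM recommends (illustrative only; the eventual fix mentioned above switches to a temporary swapper_pg_dir instead):

      ```c
      /* Break-before-make: never change a live entry without an intervening
       * invalidation, otherwise the TLB may hold conflicting translations. */
      static void change_live_kernel_pte(pte_t *ptep, unsigned long addr, pte_t new_pte)
      {
              pte_clear(&init_mm, addr, ptep);                /* 1. break              */
              flush_tlb_kernel_range(addr, addr + PAGE_SIZE); /* 2. invalidate old TLB */
              set_pte(ptep, new_pte);                         /* 3. make               */
      }
      ```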
  20. 25 Nov 2015, 1 commit
  21. 18 Nov 2015, 1 commit