1. 28 Jul 2017, 1 commit
  2. 30 May 2017, 1 commit
  3. 06 Apr 2017, 1 commit
    • arm64: kdump: protect crash dump kernel memory · 98d2e153
      Committed by Takahiro Akashi
      arch_kexec_protect_crashkres() and arch_kexec_unprotect_crashkres()
      are meant to be called by kexec_load() in order to protect the memory
      allocated for the crash dump kernel once the image is loaded.
      
      The protection is implemented by unmapping the relevant segments of
      crash dump kernel memory, rather than making them read-only as other
      architectures do, to prevent coherency issues due to potential cache
      aliasing (with mismatched attributes).
      
      Page-level mappings are used consistently here so that the attributes
      of segments can be changed at page granularity, and so that the region
      can also be shrunk at page granularity via /sys/kernel/kexec_crash_size,
      returning the freed memory to the buddy allocator. (A sketch of the
      unmapping approach follows this entry.)
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
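      A minimal sketch of the unmapping approach described above, assuming
      the arm64 set_memory_valid() helper and the kexec core's
      kexec_crash_image bookkeeping; treat kexec_segment_flush() as an
      assumed helper that cleans the loaded segments to the point of
      coherency first:
      
          /* Sketch: protect the crash kernel by unmapping its segments. */
          void arch_kexec_protect_crashkres(void)
          {
                  int i;
          
                  /* Clean the segments to PoC before they become unmapped. */
                  kexec_segment_flush(kexec_crash_image);
          
                  for (i = 0; i < kexec_crash_image->nr_segments; i++)
                          set_memory_valid(
                                  __phys_to_virt(kexec_crash_image->segment[i].mem),
                                  kexec_crash_image->segment[i].memsz >> PAGE_SHIFT, 0);
          }
          
          /* Sketch: restore the mappings before the memory is reused. */
          void arch_kexec_unprotect_crashkres(void)
          {
                  int i;
          
                  for (i = 0; i < kexec_crash_image->nr_segments; i++)
                          set_memory_valid(
                                  __phys_to_virt(kexec_crash_image->segment[i].mem),
                                  kexec_crash_image->segment[i].memsz >> PAGE_SHIFT, 1);
          }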
  4. 23 Mar 2017, 10 commits
  5. 24 Feb 2017, 1 commit
    • Revert "arm64: mm: set the contiguous bit for kernel mappings where appropriate" · d81bbe6d
      Committed by Mark Rutland
      This reverts commit 0bfc445d.
      
      When we change the permissions of regions mapped using contiguous
      entries, the architecture requires us to follow a Break-Before-Make
      strategy, breaking *all* associated entries before we can change any
      of the following properties of the entries:
      
       - presence of the contiguous bit
       - output address
       - attributes
       - permissions
      
      Failure to do so can result in a number of problems (e.g. TLB conflict
      aborts and/or erroneous results from TLB lookups).
      
      See ARM DDI 0487A.k_iss10775, "Misprogramming of the Contiguous bit",
      page D4-1762.
      
      We do not take this into account when altering the permissions of kernel
      segments in mark_rodata_ro(), where we change the permissions of live
      contiguous entries one-by-one, leaving them transiently inconsistent.
      This has been observed to result in failures on some fast model
      configurations.
      
      Unfortunately, we cannot follow Break-Before-Make here, as we would
      have to unmap the kernel text and data used to perform the sequence
      (see the sketch of the required sequence after this entry).
      
      For the time being, revert commit 0bfc445d so as to avoid issues
      resulting from this misuse of the contiguous bit.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <Will.Deacon@arm.com>
      Cc: stable@vger.kernel.org # v4.10
      Signed-off-by: Will Deacon <will.deacon@arm.com>
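      For reference, a sketch of the Break-Before-Make sequence the
      architecture requires when changing a group of contiguous PTEs;
      bbm_change_contig() and its fixed pfn buffer are illustrative, not
      kernel API:
      
          /* Sketch: change attributes of a contiguous PTE group with BBM. */
          static void bbm_change_contig(pte_t *ptep, unsigned long addr,
                                        int nr_entries, pgprot_t new_prot)
          {
                  unsigned long pfn[16];          /* illustrative upper bound */
                  int i;
          
                  /* Record the output addresses before breaking anything. */
                  for (i = 0; i < nr_entries; i++)
                          pfn[i] = pte_pfn(ptep[i]);
          
                  /* 1. Break: invalidate *all* entries of the group. */
                  for (i = 0; i < nr_entries; i++)
                          pte_clear(&init_mm, addr + i * PAGE_SIZE, &ptep[i]);
          
                  /* 2. Flush stale TLB entries for the whole range. */
                  flush_tlb_kernel_range(addr, addr + nr_entries * PAGE_SIZE);
          
                  /* 3. Make: install the entries with the new attributes. */
                  for (i = 0; i < nr_entries; i++)
                          set_pte(&ptep[i], pfn_pte(pfn[i], new_prot));
          }
      
      As the message notes, this sequence cannot be applied to live kernel
      text or data: step 1 would unmap the very code performing it, which
      is why the commit is reverted instead.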
  6. 15 Feb 2017, 1 commit
  7. 13 Jan 2017, 1 commit
  8. 12 Jan 2017, 1 commit
  9. 08 Nov 2016, 4 commits
  10. 07 Sep 2016, 1 commit
  11. 22 Aug 2016, 1 commit
  12. 01 Aug 2016, 1 commit
  13. 26 Jul 2016, 2 commits
  14. 01 Jul 2016, 4 commits
  15. 28 Jun 2016, 1 commit
  16. 16 Apr 2016, 1 commit
  17. 15 Apr 2016, 2 commits
  18. 25 Mar 2016, 1 commit
    • arm64: consistently use p?d_set_huge · c661cb1c
      Committed by Mark Rutland
      Commit 324420bf ("arm64: add support for ioremap() block
      mappings") added new p?d_set_huge functions which do the hard work to
      generate and set a correct block entry.
      
      These differ from open-coded huge page creation in the early page table
      code by explicitly setting the P?D_TYPE_SECT bits (which are implicitly
      retained by mk_sect_prot() for any valid prot), but are otherwise
      identical (and cannot fail on arm64).
      
      For simplicity and consistency, make use of these in the initial page
      table creation code (a sketch of the helpers follows this entry).
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
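      The helpers referenced above might look roughly as follows (a
      simplified sketch of the arm64 versions from commit 324420bf; the
      PMD-level variant is shown, and pud_set_huge() is analogous):
      
          /* Sketch: install a section (block) entry at the PMD level. */
          int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
          {
                  /*
                   * mk_sect_prot() turns a page-level prot into a section
                   * prot; PMD_TYPE_SECT is set explicitly for clarity.
                   */
                  set_pmd(pmdp, __pmd(phys | PMD_TYPE_SECT |
                                      pgprot_val(mk_sect_prot(prot))));
                  return 1;       /* cannot fail on arm64 */
          }
      
      The early page-table code can then call pmd_set_huge()/pud_set_huge()
      instead of open-coding the same entry construction.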
  19. 26 Feb 2016, 1 commit
  20. 24 Feb 2016, 1 commit
    • arm64: add support for kernel ASLR · f80fb3a3
      Committed by Ard Biesheuvel
      This adds support for KASLR, based on entropy provided by the
      bootloader in the /chosen/kaslr-seed DT property. Depending on the size
      of the address space (VA_BITS) and the page size, the entropy in the
      virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits
      (all 4 levels), with the sidenote that displacements that result in the
      kernel image straddling a 1GB/32MB/512MB alignment boundary (for
      4KB/16KB/64KB granule kernels, respectively) are not allowed, and will
      be rounded up to an acceptable value.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
      randomized independently from the core kernel. This makes it less likely
      that the location of core kernel data structures can be determined by an
      adversary, but causes all function calls from modules into the core kernel
      to be resolved via entries in the module PLTs.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region
      is randomized by choosing a page-aligned 128 MB region inside the
      interval [_etext - 128 MB, _stext + 128 MB). This gives between 10 and
      14 bits of entropy (depending on page size), independently of the kernel
      randomization, but still guarantees that modules are within the range of
      relative branch and jump instructions (with the caveat that, since the
      module region is shared with other uses of the vmalloc area, modules may
      need to be loaded further away if the module region is exhausted). A
      sketch of the module-base derivation follows this entry.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
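      A sketch of how the module region described above can be derived from
      the seed; the names follow the arm64 kaslr code, with "offset" standing
      for the kernel's own randomized displacement, but treat the details as
      illustrative:
      
          u64 module_range, module_alloc_base;
          
          if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) {
                  /* Randomize over the whole vmalloc area, independently
                   * of the core kernel. */
                  module_range = VMALLOC_END - VMALLOC_START - MODULES_VSIZE;
                  module_alloc_base = VMALLOC_START;
          } else {
                  /* Keep [_stext, _etext] within relative-branch range of
                   * the whole 128 MB (MODULES_VSIZE) module region. */
                  module_range = MODULES_VSIZE - (u64)(_etext - _stext);
                  module_alloc_base = (u64)_etext + offset - MODULES_VSIZE;
          }
          
          /* Use the lower 21 bits of the seed to place the region. */
          module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
          module_alloc_base &= PAGE_MASK;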
  21. 19 Feb 2016, 3 commits
    • arm64: allow kernel Image to be loaded anywhere in physical memory · a7f8de16
      Committed by Ard Biesheuvel
      This relaxes the kernel Image placement requirements, so that it
      may be placed at any 2 MB aligned offset in physical memory.
      
      This is accomplished by ignoring PHYS_OFFSET when installing
      memblocks, and accounting for the apparent virtual offset of
      the kernel Image. As a result, virtual address references
      below PAGE_OFFSET are correctly mapped onto physical references
      into the kernel Image regardless of where it sits in memory.
      
      Special care needs to be taken with memory limits passed via mem=,
      since the generic implementation clips memory from the top down, which
      may clip the kernel image itself if it is loaded high in memory. To
      deal with this case, we simply add back the memory covering the kernel
      image, which may result in more memory being retained than the mem=
      parameter requested (see the sketch after this entry).
      
      Since mem= should not be considered a production feature, a panic
      notifier handler is installed that dumps the memory limit at panic
      time if one was set.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
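      The mem= handling described above amounts to something like the
      following in the arm64 memblock setup (a sketch; parsing of the
      memory_limit value is omitted):
      
          /* Apply the mem= limit, then re-add the kernel image in case
           * the top-down clipping removed (part of) it. */
          if (memory_limit != (phys_addr_t)ULLONG_MAX) {
                  memblock_enforce_memory_limit(memory_limit);
                  memblock_add(__pa(_text), (u64)(_end - _text));
          }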
    • arm64: move kernel image to base of vmalloc area · f9040773
      Committed by Ard Biesheuvel
      This moves the module area to right before the vmalloc area, and moves
      the kernel image to the base of the vmalloc area. This is an intermediate
      step towards implementing KASLR, which allows the kernel image to be
      located anywhere in the vmalloc area.
      
      Since other subsystems such as hibernate may still need to refer to the
      kernel text or data segments via their linear addresses, both are mapped
      in the linear region as well. The linear alias of the text region is
      mapped read-only/non-executable to prevent inadvertent modification or
      execution (a sketch of this alias mapping follows this entry).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
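      The read-only linear alias described above can be created along these
      lines (a sketch modeled on the arm64 map_mem() path; the helper name
      map_kernel_alias() is illustrative):
      
          /* Sketch: map the linear alias of the kernel text read-only and
           * non-executable; the executable mapping lives in the vmalloc
           * region instead. */
          static void __init map_kernel_alias(pgd_t *pgd)
          {
                  phys_addr_t start = __pa(_stext);
                  phys_addr_t end = __pa(_etext);
          
                  __create_pgd_mapping(pgd, start, __phys_to_virt(start),
                                       end - start, PAGE_KERNEL_RO,
                                       early_pgtable_alloc);
          }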
    • arm64: decouple early fixmap init from linear mapping · 157962f5
      Committed by Ard Biesheuvel
      Since the early fixmap page tables are populated using pages that are
      part of the static footprint of the kernel, they are covered by the
      initial kernel mapping, and we can refer to them without using __va/__pa
      translations, which are tied to the linear mapping.
      
      Since the fixmap page tables are disjoint from the kernel mapping up
      to the top level pgd entry, we can refer to bm_pte[] directly, and there
      is no need to walk the page tables and perform __pa()/__va() translations
      at each step (see the sketch after this entry).
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
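      A condensed sketch of the decoupled early fixmap setup described
      above, simplified to assume all four translation levels are in use
      (the real code also handles folded levels):
      
          /* Statically allocated, hence covered by the kernel image
           * mapping and usable before the linear map is up. */
          static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
          static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss;
          static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss;
          
          void __init early_fixmap_init(void)
          {
                  unsigned long addr = FIXADDR_START;
                  pgd_t *pgd = pgd_offset_k(addr);
                  pud_t *pud;
                  pmd_t *pmd;
          
                  /* The fixmap tables are disjoint from the kernel mapping
                   * below the pgd, so bm_pud/bm_pmd/bm_pte are referenced
                   * directly, with no __pa()/__va() round trips. */
                  pgd_populate(&init_mm, pgd, bm_pud);
                  pud = pud_offset_kimg(pgd, addr);
                  pud_populate(&init_mm, pud, bm_pmd);
                  pmd = pmd_offset_kimg(pud, addr);
                  pmd_populate_kernel(&init_mm, pmd, bm_pte);
          }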