1. 26 Feb, 2016 2 commits
  2. 25 Feb, 2016 3 commits
  3. 24 Feb, 2016 3 commits
    • arm64: kaslr: randomize the linear region · c031a421
      Committed by Ard Biesheuvel
      When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been
      provided by the bootloader, randomize the placement of RAM inside the
      linear region if sufficient space is available. For instance, on a 4KB
      granule/3 levels kernel, the linear region is 256 GB in size, and we can
      choose any 1 GB aligned offset that is far enough from the top of the
      address space to fit the distance between the start of the lowest memblock
      and the top of the highest memblock.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
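
      The offset selection described above is simple arithmetic on the seed. A
      minimal user-space sketch of that calculation follows; the region size,
      memblock span and seed are made-up example values, and the real kernel's
      arithmetic differs in detail.

        #include <stdint.h>
        #include <stdio.h>

        #define SZ_1G  (1ULL << 30)

        int main(void)
        {
            /* illustrative values: 4 KB granule / 3 levels linear region */
            uint64_t linear_region_size = 256ULL << 30;
            uint64_t memblock_span = 8ULL << 30;   /* lowest..highest memblock */
            uint64_t seed = 0x1234567890abcdefULL; /* from /chosen/kaslr-seed */

            /* number of 1 GB aligned offsets that still fit the memblocks */
            uint64_t slots = (linear_region_size - memblock_span) / SZ_1G + 1;
            uint64_t offset = (seed % slots) * SZ_1G;

            printf("linear region offset: %#llx\n", (unsigned long long)offset);
            return 0;
        }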
    • arm64: add support for kernel ASLR · f80fb3a3
      Committed by Ard Biesheuvel
      This adds support for KASLR, based on entropy provided by the bootloader
      in the /chosen/kaslr-seed DT property. Depending on the size
      of the address space (VA_BITS) and the page size, the entropy in the
      virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
      4 levels), with the sidenote that displacements that result in the kernel
      image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
      granule kernels, respectively) are not allowed, and will be rounded up to
      an acceptable value.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
      randomized independently from the core kernel. This makes it less likely
      that the location of core kernel data structures can be determined by an
      adversary, but causes all function calls from modules into the core kernel
      to be resolved via entries in the module PLTs.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
      randomized by choosing a page aligned 128 MB region inside the interval
      [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
      entropy (depending on page size), independently of the kernel randomization,
      but still guarantees that modules are within the range of relative branch
      and jump instructions (with the caveat that, since the module region is
      shared with other uses of the vmalloc area, modules may need to be loaded
      further away if the module region is exhausted).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
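
      A rough user-space model of the virtual displacement calculation
      described above: mask the seed down to the available entropy, align it,
      and nudge it when the image would straddle a swapper block boundary. The
      constants assume a 4 KB granule / 4-level configuration, and the seed and
      image size are invented for the example.

        #include <stdint.h>
        #include <stdio.h>

        #define SZ_2M  (2ULL << 20)
        #define SZ_1G  (1ULL << 30)   /* swapper block size for 4 KB granule */

        int main(void)
        {
            uint64_t seed = 0xdeadbeefcafef00dULL; /* /chosen/kaslr-seed */
            uint64_t image_size = 24ULL << 20;     /* made-up Image size */

            /* ~25 bits of entropy: a 2 MB aligned offset in a 2^46 byte window */
            uint64_t offset = (seed & ((1ULL << 46) - 1)) & ~(SZ_2M - 1);

            /* round up if the image would straddle a 1 GB boundary */
            if ((offset ^ (offset + image_size - 1)) & ~(SZ_1G - 1))
                offset = (offset + image_size) & ~(SZ_1G - 1);

            printf("virtual displacement: %#llx\n", (unsigned long long)offset);
            return 0;
        }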
    • arm64: switch to relative exception tables · 6c94f27a
      Committed by Ard Biesheuvel
      Instead of using absolute addresses for both the exception location
      and the fixup, use offsets relative to the exception table entry values.
      Not only does this cut the size of the exception table in half, it is
      also a prerequisite for KASLR, since absolute exception table entries
      are subject to dynamic relocation, which is incompatible with the sorting
      of the exception table that occurs at build time.
      
      This patch also introduces the _ASM_EXTABLE preprocessor macro (which
      exists on x86 as well) and its _asm_extable assembly counterpart, as
      shorthands to emit exception table entries.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
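
      The relative layout described above can be shown in a self-contained
      example: each field stores a signed offset from the field's own address,
      so entries stay valid wherever the kernel is relocated. The struct name
      mirrors the kernel's, but the helpers and sample offsets are illustrative.

        #include <stdint.h>
        #include <stdio.h>

        struct exception_table_entry {
            int32_t insn;   /* offset to the faulting instruction */
            int32_t fixup;  /* offset to the fixup code */
        };

        static uintptr_t ex_insn_addr(const struct exception_table_entry *e)
        {
            return (uintptr_t)&e->insn + e->insn;
        }

        static uintptr_t ex_fixup_addr(const struct exception_table_entry *e)
        {
            return (uintptr_t)&e->fixup + e->fixup;
        }

        int main(void)
        {
            /* pretend the code lives 0x1000/0x2000 bytes after each field */
            struct exception_table_entry e = { .insn = 0x1000, .fixup = 0x2000 };

            printf("insn  at %#lx\n", (unsigned long)ex_insn_addr(&e));
            printf("fixup at %#lx\n", (unsigned long)ex_fixup_addr(&e));
            return 0;
        }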
  4. 19 Feb, 2016 9 commits
    • arm64: Use die() instead of panic() in do_page_fault() · 70c8abc2
      Committed by Catalin Marinas
      The former gives better error reporting on unhandled permission faults
      (introduced by the UAO patches).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: allow kernel Image to be loaded anywhere in physical memory · a7f8de16
      Committed by Ard Biesheuvel
      This relaxes the kernel Image placement requirements, so that it
      may be placed at any 2 MB aligned offset in physical memory.
      
      This is accomplished by ignoring PHYS_OFFSET when installing
      memblocks, and accounting for the apparent virtual offset of
      the kernel Image. As a result, virtual address references
      below PAGE_OFFSET are correctly mapped onto physical references
      into the kernel Image regardless of where it sits in memory.
      
      Special care needs to be taken for dealing with memory limits passed
      via mem=, since the generic implementation clips memory top down, which
      may clip the kernel image itself if it is loaded high up in memory. To
      deal with this case, we simply add back the memory covering the kernel
      image, which may result in more memory being retained than was passed
      via the mem= parameter.
      
      Since mem= should not be considered a production feature, a panic notifier
      handler is installed that dumps the memory limit at panic time if one was
      set.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
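
      The mem= handling described above can be modelled in a few lines: clip
      RAM top-down to the limit, then add the kernel image's range back if the
      clipping removed it. Every address and size below is invented for
      illustration; this is a toy model, not the kernel's memblock code.

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint64_t ram_base  = 0x80000000ULL;   /* RAM starts at 2 GB */
            uint64_t ram_size  = 4ULL << 30;      /* 4 GB of RAM */
            uint64_t mem_limit = 1ULL << 30;      /* mem=1G */

            /* kernel Image loaded 64 MB below the top of RAM */
            uint64_t kimage_base = ram_base + ram_size - (64ULL << 20);
            uint64_t kimage_size = 32ULL << 20;

            /* generic clipping keeps only the bottom mem_limit bytes */
            if (ram_size > mem_limit)
                ram_size = mem_limit;

            printf("RAM after clipping: %#llx..%#llx\n",
                   (unsigned long long)ram_base,
                   (unsigned long long)(ram_base + ram_size));

            /* the Image now lies outside the clipped range: add it back */
            if (kimage_base >= ram_base + ram_size)
                printf("re-adding kernel image: %#llx..%#llx\n",
                       (unsigned long long)kimage_base,
                       (unsigned long long)(kimage_base + kimage_size));
            return 0;
        }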
    • arm64: defer __va translation of initrd_start and initrd_end · a89dea58
      Committed by Ard Biesheuvel
      A subsequent patch will defer the assignment of memstart_addr to the
      moment where all memory has been discovered and possibly clipped, based
      on the size of the linear region and the presence of a mem= command line
      parameter. To prepare for that, we need to ensure that memstart_addr is
      not used to perform __va translations before it is assigned.
      
      One such use is in the generic early DT discovery of the initrd location,
      which is recorded as a virtual address in the globals initrd_start and
      initrd_end. So wire up the generic support to declare the initrd
      addresses, implement it without __va() translations, and perform the
      translation only after memstart_addr has been assigned.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
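
      The deferral pattern described above, reduced to a sketch: record the
      physical addresses early and translate them only once the
      physical-to-virtual offset is known. The variable names and values here
      are hypothetical, not the kernel's.

        #include <stdint.h>
        #include <stdio.h>

        static uint64_t initrd_start_phys, initrd_end_phys; /* recorded early */
        static uint64_t phys_to_virt_offset;                /* known later    */
        static uint64_t initrd_start, initrd_end;           /* virtual, late  */

        static void early_record_initrd(uint64_t start, uint64_t end)
        {
            initrd_start_phys = start;  /* no __va()-style translation yet */
            initrd_end_phys   = end;
        }

        static void late_fixup_initrd(void)
        {
            initrd_start = initrd_start_phys + phys_to_virt_offset;
            initrd_end   = initrd_end_phys   + phys_to_virt_offset;
        }

        int main(void)
        {
            early_record_initrd(0x88000000ULL, 0x88800000ULL);
            phys_to_virt_offset = 0xffffffc000000000ULL - 0x80000000ULL;
            late_fixup_initrd();
            printf("initrd: %#llx..%#llx\n",
                   (unsigned long long)initrd_start,
                   (unsigned long long)initrd_end);
            return 0;
        }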
    • arm64: move kernel image to base of vmalloc area · f9040773
      Committed by Ard Biesheuvel
      This moves the module area to right before the vmalloc area, and moves
      the kernel image to the base of the vmalloc area. This is an intermediate
      step towards implementing KASLR, which allows the kernel image to be
      located anywhere in the vmalloc area.
      
      Since other subsystems such as hibernate may still need to refer to the
      kernel text or data segments via their linear addresses, both are mapped
      in the linear region as well. The linear alias of the text region is
      mapped read-only/non-executable to prevent inadvertent modification or
      execution.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: decouple early fixmap init from linear mapping · 157962f5
      Committed by Ard Biesheuvel
      Since the early fixmap page tables are populated using pages that are
      part of the static footprint of the kernel, they are covered by the
      initial kernel mapping, and we can refer to them without using __va/__pa
      translations, which are tied to the linear mapping.
      
      Since the fixmap page tables are disjoint from the kernel mapping up
      to the top level pgd entry, we can refer to bm_pte[] directly, and there
      is no need to walk the page tables and perform __pa()/__va() translations
      at each step.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: add support for ioremap() block mappings · 324420bf
      Committed by Ard Biesheuvel
      This wires up the existing generic huge-vmap feature, which allows
      ioremap() to use PMD or PUD sized block mappings. It also adds support
      to the unmap path for dealing with block mappings, which will allow us
      to unmap the __init region using unmap_kernel_range() in a subsequent
      patch.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
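
      The core of the huge-vmap support is a size/alignment check before each
      level of the mapping is built. A standalone sketch of that decision
      follows; the constants assume a 4 KB granule and the helper is
      illustrative, not the kernel's.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define PMD_SIZE  (1ULL << 21)
        #define PUD_SIZE  (1ULL << 30)

        /* can [addr, addr+size) backed by phys use one block entry of blk bytes? */
        static bool can_use_block(uint64_t addr, uint64_t phys, uint64_t size,
                                  uint64_t blk)
        {
            return size >= blk && ((addr | phys) & (blk - 1)) == 0;
        }

        int main(void)
        {
            uint64_t addr = 0xffff000010000000ULL; /* virtual mapping address */
            uint64_t phys = 0x40000000ULL;         /* device/physical address */
            uint64_t size = 4ULL << 20;            /* 4 MB ioremap() request  */

            printf("PUD block: %s\n", can_use_block(addr, phys, size, PUD_SIZE) ? "yes" : "no");
            printf("PMD block: %s\n", can_use_block(addr, phys, size, PMD_SIZE) ? "yes" : "no");
            return 0;
        }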
    • arm64: Remove the get_thread_info() function · e950631e
      Committed by Catalin Marinas
      This function was introduced by previous commits implementing UAO.
      However, it can be replaced with task_thread_info() in
      uao_thread_switch() or get_fs() in do_page_fault() (the latter is only
      called on the current context, so there is no need to use the saved
      pt_regs).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: kernel: Don't toggle PAN on systems with UAO · 70544196
      Committed by James Morse
      If a CPU supports both Privileged Access Never (PAN) and User Access
      Override (UAO), we don't need to disable/re-enable PAN around all
      copy_to_user()-like calls.
      
      UAO alternatives cause these calls to use the 'unprivileged' load/store
      instructions, which are overridden to be the privileged kind when
      fs==KERNEL_DS.
      
      This patch changes the copy_to_user() calls to have their PAN toggling
      depend on a new composite 'feature' ARM64_ALT_PAN_NOT_UAO.
      
      If both features are detected, PAN will be enabled, but the copy_to_user()
      alternatives will not be applied. This means PAN will be enabled all the
      time for these functions. If only PAN is detected, the toggling will be
      enabled as normal.
      
      This will save the time taken to disable/re-enable PAN, and allow us to
      catch copy_to_user() accesses that occur with fs==KERNEL_DS.
      
      Futex and swp-emulation code continue to hang their PAN toggling code on
      ARM64_HAS_PAN.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
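
      The new composite 'feature' is effectively "PAN present and UAO absent".
      A standalone sketch of that selection logic; the flag names echo the
      commit, but the code itself is only illustrative.

        #include <stdbool.h>
        #include <stdio.h>

        static bool has_pan = true;  /* hypothetical boot-time detection */
        static bool has_uao = true;

        /* mirrors the ARM64_ALT_PAN_NOT_UAO idea: toggle PAN only without UAO */
        static bool uaccess_needs_pan_toggle(void)
        {
            return has_pan && !has_uao;
        }

        int main(void)
        {
            if (uaccess_needs_pan_toggle())
                printf("copy_to_user(): disable PAN, copy, re-enable PAN\n");
            else
                printf("copy_to_user(): leave PAN enabled (UAO covers it)\n");
            return 0;
        }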
    • arm64: kernel: Add support for User Access Override · 57f4959b
      Committed by James Morse
      'User Access Override' is a new ARMv8.2 feature which allows the
      unprivileged load and store instructions to be overridden to behave in
      the normal way.
      
      This patch converts {get,put}_user() and friends to use ldtr*/sttr*
      instructions - so that they can only access EL0 memory, then enables
      UAO when fs==KERNEL_DS so that these functions can access kernel memory.
      
      This allows user space's read/write permissions to be checked against the
      page tables, instead of testing addr<USER_DS, then using the kernel's
      read/write permissions.
      Signed-off-by: James Morse <james.morse@arm.com>
      [catalin.marinas@arm.com: move uao_thread_switch() above dsb()]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  5. 18 Feb, 2016 1 commit
  6. 16 Feb, 2016 14 commits
  7. 02 Feb, 2016 1 commit
  8. 26 Jan, 2016 1 commit
    • arm64: mm: avoid calling apply_to_page_range on empty range · 57adec86
      Committed by Mika Penttilä
      Calling apply_to_page_range with an empty range results in a BUG_ON
      from the core code. This can be triggered by trying to load the st_drv
      module with CONFIG_DEBUG_SET_MODULE_RONX enabled:
      
        kernel BUG at mm/memory.c:1874!
        Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
        Modules linked in:
        CPU: 3 PID: 1764 Comm: insmod Not tainted 4.5.0-rc1+ #2
        Hardware name: ARM Juno development board (r0) (DT)
        task: ffffffc9763b8000 ti: ffffffc975af8000 task.ti: ffffffc975af8000
        PC is at apply_to_page_range+0x2cc/0x2d0
        LR is at change_memory_common+0x80/0x108
      
      This patch fixes the issue by making change_memory_common (called by the
      set_memory_* functions) a NOP when numpages == 0, therefore avoiding the
      erroneous call to apply_to_page_range and bringing us into line with x86
      and s390.
      
      Cc: <stable@vger.kernel.org>
      Reviewed-by: Laura Abbott <labbott@redhat.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Mika Penttilä <mika.penttila@nextfour.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
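
      The fix boils down to an early return before the per-page walk. A
      simplified, self-contained sketch of that guard (not the exact kernel
      function):

        #include <stdio.h>

        /* stand-in for the per-page attribute update; the real helper BUG()s
         * on an empty range, hence the caller-side guard */
        static int apply_to_pages(unsigned long addr, unsigned long numpages)
        {
            printf("updating %lu page(s) at %#lx\n", numpages, addr);
            return 0;
        }

        static int change_memory_common(unsigned long addr, int numpages)
        {
            if (numpages == 0)   /* nothing to do: skip the empty range */
                return 0;
            return apply_to_pages(addr, (unsigned long)numpages);
        }

        int main(void)
        {
            change_memory_common(0xffff000000200000UL, 0); /* silent no-op */
            change_memory_common(0xffff000000200000UL, 4);
            return 0;
        }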
  9. 25 Jan, 2016 3 commits
  10. 16 Jan, 2016 1 commit
    • arm64, thp: remove infrastructure for handling splitting PMDs · b7ed934a
      Committed by Kirill A. Shutemov
      With the new refcounting we don't need to mark PMDs as splitting.  Let's
      drop the code that handles this.
      
      pmdp_splitting_flush() is not needed either: on splitting a PMD we will do
      pmdp_clear_flush() + set_pte_at().  pmdp_clear_flush() will do IPI as
      needed for fast_gup.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 15 Jan, 2016 1 commit
    • arm64: mm: support ARCH_MMAP_RND_BITS · 8f0d3aa9
      Committed by Daniel Cashman
      arm64: arch_mmap_rnd() uses STACK_RND_MASK to generate the random offset
      for the mmap base address.  This value represents a compromise between
      increased ASLR effectiveness and avoiding address-space fragmentation.
      Replace it with a Kconfig option, which is sensibly bounded, so that
      platform developers may choose where to place this compromise.  Keep
      default values as new minimums.
      Signed-off-by: Daniel Cashman <dcashman@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mark Salyzyn <salyzyn@android.com>
      Cc: Jeff Vander Stoep <jeffv@google.com>
      Cc: Nick Kralevich <nnk@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Hector Marco-Gisbert <hecmargi@upv.es>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
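
      The Kconfig knob controls how many random bits feed the mmap base
      offset. A user-space sketch of the calculation it bounds; the bit counts
      and the RNG below are stand-ins rather than the kernel's values.

        #include <stdio.h>
        #include <stdlib.h>

        #define PAGE_SHIFT 12

        /* assumed bounds in the spirit of ARCH_MMAP_RND_BITS_MIN/MAX */
        #define MMAP_RND_BITS_MIN 18
        #define MMAP_RND_BITS_MAX 24

        static unsigned long arch_mmap_rnd(unsigned int mmap_rnd_bits)
        {
            unsigned long rnd = (unsigned long)random() & ((1UL << mmap_rnd_bits) - 1);
            return rnd << PAGE_SHIFT;  /* page-aligned offset for the mmap base */
        }

        int main(void)
        {
            srandom(42);
            for (unsigned int bits = MMAP_RND_BITS_MIN; bits <= MMAP_RND_BITS_MAX; bits += 6)
                printf("%u bits of entropy -> offset %#lx\n", bits, arch_mmap_rnd(bits));
            return 0;
        }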
  12. 05 Jan, 2016 1 commit
    • arm64: mm: move pgd_cache initialisation to pgtable_cache_init · 39b5be9b
      Committed by Will Deacon
      Initialising the support for EFI runtime services requires us to
      allocate a pgd off the back of an early_initcall. On systems where the
      PGD_SIZE is smaller than PAGE_SIZE (e.g. 64k pages and 48-bit VA), the
      pgd_cache isn't initialised at this stage, and we panic with a NULL
      dereference during boot:
      
        Unable to handle kernel NULL pointer dereference at virtual address 00000000
      
        __create_mapping.isra.5+0x84/0x350
        create_pgd_mapping+0x20/0x28
        efi_create_mapping+0x5c/0x6c
        arm_enable_runtime_services+0x154/0x1e4
        do_one_initcall+0x8c/0x190
        kernel_init_freeable+0x84/0x1ec
        kernel_init+0x10/0xe0
        ret_from_fork+0x10/0x50
      
      This patch fixes the problem by initialising the pgd_cache earlier, in
      the pgtable_cache_init callback, which sounds suspiciously like what it
      was intended for.
      Reported-by: Dennis Chen <dennis.chen@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
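
      A hedged sketch of the shape of the fix: create the pgd slab cache from
      the pgtable_cache_init() hook rather than an early_initcall, and only
      when a pgd is smaller than a page. This approximates the change
      described above; it is not a verbatim copy of the kernel source.

        /* kernel-style sketch, simplified */
        #include <linux/slab.h>
        #include <linux/mm.h>

        #define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t))

        static struct kmem_cache *pgd_cache;

        void __init pgtable_cache_init(void)
        {
            if (PGD_SIZE == PAGE_SIZE)
                return;  /* full pages are allocated directly instead */

            /* naturally aligned pgds; SLAB_PANIC as there is no fallback */
            pgd_cache = kmem_cache_create("pgd_cache", PGD_SIZE, PGD_SIZE,
                                          SLAB_PANIC, NULL);
        }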