1. 15 April 2016, 1 commit
    • arm64: move early boot code to the .init segment · 546c8c44
      Committed by Ard Biesheuvel
      Apart from the arm64/linux and EFI header data structures, there is nothing
      in the .head.text section that must reside at the beginning of the Image.
      So let's move it to the .init section where it belongs.
      
      Note that this involves some minor tweaking of the EFI header, primarily
      because the address of 'stext' no longer coincides with the start of the
      .text section. It also requires a couple of relocated symbol references
      to be slightly rewritten or their definition moved to the linker script.
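
      As background (an illustrative sketch, not part of the patch), the
      kernel's __init annotation is what places code in .init.text, whose
      memory is reclaimed by free_initmem() once boot completes; the
      function name below is hypothetical:

        /* simplified version of the kernel's __init annotation */
        #define __init __attribute__((__section__(".init.text")))

        static int __init early_boot_setup(void)   /* hypothetical */
        {
                /* runs once during boot; its memory is freed afterwards */
                return 0;
        }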
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  2. 19 February 2016, 1 commit
    • arm64: allow kernel Image to be loaded anywhere in physical memory · a7f8de16
      Committed by Ard Biesheuvel
      This relaxes the kernel Image placement requirements, so that it
      may be placed at any 2 MB aligned offset in physical memory.
      
      This is accomplished by ignoring PHYS_OFFSET when installing
      memblocks, and accounting for the apparent virtual offset of
      the kernel Image. As a result, virtual address references
      below PAGE_OFFSET are correctly mapped onto physical references
      into the kernel Image regardless of where it sits in memory.
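
      To illustrate the idea (a simplified sketch, not the actual macro in
      arch/arm64/include/asm/memory.h), virtual-to-physical translation now
      has to distinguish the linear mapping from the kernel-image mapping:

        /* sketch: 'kimage_voffset' stands for the image's virtual-to-
         * physical offset, however the port ends up tracking it */
        static inline unsigned long sketch_virt_to_phys(unsigned long x)
        {
                if (x >= PAGE_OFFSET)   /* linear map */
                        return (x - PAGE_OFFSET) + PHYS_OFFSET;
                /* below PAGE_OFFSET: a reference into the kernel image */
                return x - kimage_voffset;
        }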
      
      Special care needs to be taken for dealing with memory limits passed
      via mem=, since the generic implementation clips memory top down, which
      may clip the kernel image itself if it is loaded high up in memory. To
      deal with this case, we simply add back the memory covering the kernel
      image, which may result in more memory being retained than was passed
      as a mem= parameter.
      
      Since mem= should not be considered a production feature, a panic notifier
      handler is installed that dumps the memory limit at panic time if one was
      set.
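
      Such a handler hangs off the standard panic_notifier_list; a sketch of
      what it might look like (modeled on the patch, but simplified; exact
      names may differ):

        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/notifier.h>

        static phys_addr_t memory_limit = (phys_addr_t)ULLONG_MAX;

        static int dump_mem_limit(struct notifier_block *self,
                                  unsigned long v, void *p)
        {
                if (memory_limit != (phys_addr_t)ULLONG_MAX)
                        pr_emerg("Memory limit: %lluMB\n", memory_limit >> 20);
                else
                        pr_emerg("Memory limit: none\n");
                return 0;
        }

        static struct notifier_block mem_limit_notifier = {
                .notifier_call = dump_mem_limit,
        };

        static int __init register_mem_limit_dumper(void)
        {
                atomic_notifier_chain_register(&panic_notifier_list,
                                               &mem_limit_notifier);
                return 0;
        }
        __initcall(register_mem_limit_dumper);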
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 25 January 2016, 1 commit
    • arm64: hide __efistub_ aliases from kallsyms · 75feee3d
      Committed by Ard Biesheuvel
      Commit e8f3010f ("arm64/efi: isolate EFI stub from the kernel
      proper") isolated the EFI stub code from the kernel proper by prefixing
      all of its symbols with __efistub_, and selectively allowing access to
      core kernel symbols from the stub by emitting __efistub_ aliases for
      functions and variables that the stub can access legally.
      
      As an unintended side effect, these aliases are emitted into the
      kallsyms symbol table, which means they may turn up in backtraces,
      e.g.,
      
        ...
        PC is at __efistub_memset+0x108/0x200
        LR is at fixup_init+0x3c/0x48
        ...
        [<ffffff8008328608>] __efistub_memset+0x108/0x200
        [<ffffff8008094dcc>] free_initmem+0x2c/0x40
        [<ffffff8008645198>] kernel_init+0x20/0xe0
        [<ffffff8008085cd0>] ret_from_fork+0x10/0x40
      
      The backtrace in question has nothing to do with the EFI stub, but
      simply returns one of the several aliases of memset() that have been
      recorded in the kallsyms table. This is undesirable, since it may
      suggest to people who are not aware of this that the issue they are
      seeing is somehow EFI related.
      
      So hide the __efistub_ aliases from kallsyms by emitting them
      explicitly as absolute linker symbols. The distinction between
      absolute and section-relative symbols is completely irrelevant to
      these definitions, and to the final link performed when they are
      taken into account (it only matters for symbols defined inside a
      section definition during a partial link), so the resulting values
      are identical to the original ones. And since kallsyms ignores
      absolute symbols, they are omitted from its symbol table.
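
      To see why this works: scripts/kallsyms parses nm output and skips
      symbols of type 'A' (absolute). A stripped-down illustration of that
      filter (not the actual scripts/kallsyms.c code):

        #include <stdio.h>

        int main(void)
        {
                unsigned long long addr;
                char type, name[128];

                /* each line of nm output: "<address> <type> <name>" */
                while (scanf("%llx %c %127s", &addr, &type, name) == 3) {
                        if (type == 'A' || type == 'a')
                                continue;       /* absolute: ignored */
                        printf("%016llx %c %s\n", addr, type, name);
                }
                return 0;
        }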
      
      After this patch, the backtrace generated from the same address looks
      like this:
        ...
        PC is at __memset+0x108/0x200
        LR is at fixup_init+0x3c/0x48
        ...
        [<ffffff8008328608>] __memset+0x108/0x200
        [<ffffff8008094dcc>] free_initmem+0x2c/0x40
        [<ffffff8008645198>] kernel_init+0x20/0xe0
        [<ffffff8008085cd0>] ret_from_fork+0x10/0x40
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. 13 October 2015, 1 commit
    • arm64: add KASAN support · 39d114dd
      Committed by Andrey Ryabinin
      This patch adds the arch-specific code for the kernel address
      sanitizer (see Documentation/kasan.txt).
      
      1/8 of the kernel address space is reserved for shadow memory. There
      was no hole big enough for this, so the virtual addresses for the
      shadow were stolen from the vmalloc area.
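
      The 1/8 ratio follows from the generic KASan scheme, in which one
      shadow byte covers eight bytes of address space (simplified from
      include/linux/kasan.h):

        /* one shadow byte covers 8 bytes of address space (scale shift 3) */
        static inline void *kasan_mem_to_shadow(const void *addr)
        {
                return (void *)((unsigned long)addr >> 3)
                        + KASAN_SHADOW_OFFSET;
        }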
      
      At the early boot stage the whole shadow region is populated with
      just one physical page (kasan_zero_page). Later, this page is reused
      as a read-only zero shadow for some memory that KASan does not
      currently track (vmalloc).
      After the physical memory has been mapped, pages for the shadow
      memory are allocated and mapped.
      
      Functions like memset/memmove/memcpy perform a lot of memory
      accesses; if a bad pointer is passed to one of them, it is important
      to catch it. Compiler instrumentation cannot do this, since these
      functions are written in assembly, so KASan replaces them with
      manually instrumented variants. The original functions are declared
      as weak symbols so that the strong definitions in mm/kasan/kasan.c
      can replace them; the originals also keep aliases with a '__' prefix,
      so the non-instrumented variants can still be called when needed.
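
      A sketch of such an instrumented wrapper (modeled on mm/kasan/kasan.c;
      the exact signature may differ):

        /* validate the access, then call the raw assembly version */
        void *memset(void *addr, int c, size_t len)
        {
                check_memory_region((unsigned long)addr, len, true);
                return __memset(addr, c, len);
        }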
      Some files are built without KASan instrumentation (e.g. mm/slub.c);
      in those files, the original mem* functions are replaced (via
      #define) with the prefixed variants so that their memory accesses
      are not checked.
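
      Schematically, the redirection looks like this (cf.
      arch/arm64/include/asm/string.h):

        /* for files built without ASAN, route mem* to the raw variants */
        #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
        #define memcpy(dst, src, len)  __memcpy(dst, src, len)
        #define memmove(dst, src, len) __memmove(dst, src, len)
        #define memset(s, c, n)        __memset(s, c, n)
        #endif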
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  5. 12 October 2015, 1 commit
    • arm64/efi: isolate EFI stub from the kernel proper · e8f3010f
      Committed by Ard Biesheuvel
      Since arm64 does not use a builtin decompressor, the EFI stub is built
      into the kernel proper. So far, this has been working fine; but since
      the stub is in fact a PE/COFF relocatable binary that is executed at
      an unknown offset in the 1:1 mapping provided by the UEFI firmware,
      we should not be seamlessly sharing code with the kernel proper,
      which is a position-dependent executable linked at a high virtual
      offset.
      
      So instead, separate the contents of libstub and its dependencies
      into their own namespace by prefixing all of their symbols with
      __efistub_. This way, we have tight control over which parts of the
      kernel proper are referenced by the stub.
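
      As a self-contained illustration of the aliasing idea (a hypothetical
      example; the kernel performs the prefixing on object files rather
      than with C attributes):

        #include <stdio.h>

        static unsigned long stub_strlen(const char *s)
        {
                unsigned long n = 0;

                while (*s++)
                        n++;
                return n;
        }

        /* GCC/clang extension: __efistub_strlen aliases stub_strlen */
        unsigned long __efistub_strlen(const char *s)
                __attribute__((alias("stub_strlen")));

        int main(void)
        {
                printf("%lu\n", __efistub_strlen("efi"));
                return 0;
        }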
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Matt Fleming <matt.fleming@intel.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  6. 10 July 2014, 1 commit
    • arm64: Update the Image header · a2c1d73b
      Committed by Mark Rutland
      Currently the kernel Image is stripped of everything past the initial
      stack, and at runtime the memory is initialised and used by the kernel.
      This makes the effective minimum memory footprint of the kernel larger
      than the size of the loaded binary, though bootloaders have no mechanism
      to identify how large this minimum memory footprint is. This makes it
      difficult to choose safe locations to place both the kernel and other
      binaries required at boot (DTB, initrd, etc), such that the kernel won't
      clobber said binaries or other reserved memory during initialisation.
      
      Additionally when big endian support was added the image load offset was
      overlooked, and is currently of an arbitrary endianness, which makes it
      difficult for bootloaders to make use of it. It seems that bootloaders
      aren't respecting the image load offset at present anyway, and are
      assuming that offset 0x80000 will always be correct.
      
      This patch adds an effective image size to the kernel header which
      describes the amount of memory from the start of the kernel Image binary
      which the kernel expects to use before detecting memory and handling any
      memory reservations. This can be used by bootloaders to choose suitable
      locations to load the kernel and/or other binaries such that the kernel
      will not clobber any memory unexpectedly. As before, memory reservations
      are required to prevent the kernel from clobbering these locations
      later.
      
      Both the image load offset and the effective image size are forced to be
      little-endian regardless of the native endianness of the kernel to
      enable bootloaders to load a kernel of arbitrary endianness. Bootloaders
      which wish to make use of the load offset can inspect the effective
      image size field for a non-zero value to determine if the offset is of a
      known endianness. To enable software to determine the endianness of
      the kernel, as may be required for certain use cases, a new flags
      field (also little-endian) is added to the kernel header to export
      this information.
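
      The resulting 64-byte header, as documented in
      Documentation/arm64/booting.txt, can be sketched as a C struct:

        #include <stdint.h>

        struct arm64_image_header {
                uint32_t code0;       /* executable code */
                uint32_t code1;       /* executable code */
                uint64_t text_offset; /* image load offset, little endian */
                uint64_t image_size;  /* effective image size, little endian */
                uint64_t flags;       /* little endian; bit 0: 1 = big-endian kernel */
                uint64_t res2;        /* reserved */
                uint64_t res3;        /* reserved */
                uint64_t res4;        /* reserved */
                uint32_t magic;       /* 0x644d5241, "ARM\x64", little endian */
                uint32_t res5;        /* reserved */
        };

        /* a bootloader should trust text_offset only if image_size != 0,
         * e.g.: offset = h->image_size ? le64toh(h->text_offset) : 0x80000; */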
      
      The documentation is updated to clarify these details. To discourage
      future assumptions regarding the value of text_offset, the value at this
      point in time is removed from the main flow of the documentation (though
      kept as a compatibility note). Some minor formatting issues in the
      documentation are also corrected.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Tom Rini <trini@ti.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Kevin Hilman <kevin.hilman@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>