1. 14 April 2016, 1 commit
    • arm64/mm: ensure memstart_addr remains sufficiently aligned · 2958987f
      Committed by Ard Biesheuvel
      After choosing memstart_addr to be the highest multiple of
      ARM64_MEMSTART_ALIGN less than or equal to the first usable physical memory
      address, we clip the memblocks to the maximum size of the linear region.
      Since the kernel may be high up in memory, we take care not to clip the
      kernel itself, which means we have to clip some memory from the bottom if
      this occurs, to ensure that the distance between the first and the last
      usable physical memory address can be covered by the linear region.
      
      However, we fail to update memstart_addr if this clipping from the bottom
      occurs, which means that we may still end up with virtual addresses that
      wrap into the userland range. So increment memstart_addr as appropriate to
      prevent this from happening.
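
      A minimal sketch of the resulting logic in arm64_memblock_init() (the
      expressions are illustrative, not the literal patch; linear_region_size
      names the span the linear map can cover):

         /* Place memstart_addr at the highest suitably aligned address. */
         memstart_addr = round_down(memblock_start_of_DRAM(),
                                    ARM64_MEMSTART_ALIGN);

         if (memblock_end_of_DRAM() - memstart_addr > linear_region_size) {
                 /*
                  * The kernel sits high up in memory, so clip from the bottom
                  * until the first-to-last usable PA fits the linear region.
                  */
                 u64 new_start = round_up(memblock_end_of_DRAM() - linear_region_size,
                                          ARM64_MEMSTART_ALIGN);

                 memblock_remove(memstart_addr, new_start - memstart_addr);
                 /*
                  * The fix: move memstart_addr up along with the clipped
                  * bottom, so __va() cannot produce addresses that wrap
                  * below PAGE_OFFSET into the userland range.
                  */
                 memstart_addr = new_start;
         }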
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  2. 21 March 2016, 1 commit
  3. 02 March 2016, 1 commit
  4. 01 March 2016, 2 commits
  5. 27 February 2016, 1 commit
    • arm64: vmemmap: use virtual projection of linear region · dfd55ad8
      Committed by Ard Biesheuvel
      Commit dd006da2 ("arm64: mm: increase VA range of identity map") made
      some changes to the memory mapping code to allow physical memory to reside
      at an offset that exceeds the size of the virtual mapping.
      
      However, since the size of the vmemmap area is proportional to the size of
      the VA area, but it is populated relative to the physical space, we may
      end up with the struct page array being mapped outside of the vmemmap
      region. For instance, on my Seattle A0 box, I can see the following output
      in the dmesg log.
      
         vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
                   0xffffffbfc0000000 - 0xffffffbfd0000000   (   256 MB actual)
      
      We can fix this by deciding that the vmemmap region is not a projection of
      the physical space, but of the virtual space above PAGE_OFFSET, i.e., the
      linear region. This way, we are guaranteed that the vmemmap region is of
      sufficient size, and we can even reduce the size by half.
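
      The idea, roughly: index the struct page array by a linear-map address's
      offset above PAGE_OFFSET rather than by its PFN. A hypothetical helper
      (illustrative only, not the actual definitions from the patch, with
      VMEMMAP_START naming the base of the vmemmap region):

         /* struct page for a linear-map address, indexed by its offset */
         /* above PAGE_OFFSET rather than by PFN.                       */
         #define lm_virt_to_page(vaddr) \
                 ((struct page *)VMEMMAP_START + \
                  (((u64)(vaddr) - PAGE_OFFSET) >> PAGE_SHIFT))

      Sized this way, the array only ever has to cover the linear region's
      worth of pages, i.e. half the VA space, which is where the halving of the
      vmemmap region comes from.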
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  6. 26 February 2016, 2 commits
  7. 24 February 2016, 1 commit
    • arm64: kaslr: randomize the linear region · c031a421
      Committed by Ard Biesheuvel
      When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been
      provided by the bootloader, randomize the placement of RAM inside the
      linear region if sufficient space is available. For instance, with a
      4 KB granule and 3 levels of translation, the linear region is 256 GB in
      size, and we can choose any 1 GB aligned offset that leaves enough room
      below the top of the address space to fit the span from the start of the
      lowest memblock to the end of the highest memblock.
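
      A sketch of the offset selection (assuming a 16-bit seed value extracted
      from the bootloader-provided entropy; names are illustrative):

         u64 range = linear_region_size -
                     (memblock_end_of_DRAM() - memblock_start_of_DRAM());

         if (seed && range >= ARM64_MEMSTART_ALIGN) {
                 /* Pick one of the ARM64_MEMSTART_ALIGN-ed slots that still fit. */
                 range = range / ARM64_MEMSTART_ALIGN + 1;
                 memstart_addr -= ARM64_MEMSTART_ALIGN * ((range * seed) >> 16);
         }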
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  8. 19 February 2016, 3 commits
    • arm64: allow kernel Image to be loaded anywhere in physical memory · a7f8de16
      Committed by Ard Biesheuvel
      This relaxes the kernel Image placement requirements, so that it
      may be placed at any 2 MB aligned offset in physical memory.
      
      This is accomplished by ignoring PHYS_OFFSET when installing
      memblocks, and accounting for the apparent virtual offset of
      the kernel Image. As a result, virtual address references
      below PAGE_OFFSET are correctly mapped onto physical references
      into the kernel Image regardless of where it sits in memory.
      
      Special care needs to be taken for dealing with memory limits passed
      via mem=, since the generic implementation clips memory top down, which
      may clip the kernel image itself if it is loaded high up in memory. To
      deal with this case, we simply add back the memory covering the kernel
      image, which may result in more memory being retained than was requested
      via the mem= parameter.
      
      Since mem= should not be considered a production feature, a panic notifier
      handler is installed that dumps the memory limit at panic time if one was
      set.
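
      The mem= handling described above amounts to something like the following
      in arm64_memblock_init() (a sketch, assuming a memory_limit variable
      filled in by the early parameter):

         if (memory_limit != (phys_addr_t)ULLONG_MAX) {
                 /* Generic clipping removes memory top-down ... */
                 memblock_enforce_memory_limit(memory_limit);
                 /* ... so add the kernel image back in case it was clipped away. */
                 memblock_add(__pa(_text), (u64)(_end - _text));
         }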
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: defer __va translation of initrd_start and initrd_end · a89dea58
      Committed by Ard Biesheuvel
      A subsequent patch will defer the assignment of memstart_addr until all
      memory has been discovered and possibly clipped, based on the size of the
      linear region and the presence of a mem= command line parameter. Before
      that, we need to ensure that memstart_addr is not used to perform __va
      translations before it has been assigned.
      
      One such use is in the generic early DT discovery of the initrd location,
      which is recorded as a virtual address in the globals initrd_start and
      initrd_end. So wire up the generic support for declaring the initrd
      addresses, implement it without __va() translations, and perform the
      translation only after memstart_addr has been assigned.
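
      A sketch of the deferred translation in arm64_memblock_init(), run once
      memstart_addr is known (illustrative; it assumes initrd_start/initrd_end
      still hold physical addresses at this point):

         if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_start) {
                 memblock_reserve(initrd_start, initrd_end - initrd_start);
                 /* The generic initrd code expects virtual addresses. */
                 initrd_start = __phys_to_virt(initrd_start);
                 initrd_end = __phys_to_virt(initrd_end);
         }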
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: move kernel image to base of vmalloc area · f9040773
      Committed by Ard Biesheuvel
      This moves the module area to right before the vmalloc area, and moves
      the kernel image to the base of the vmalloc area. This is an intermediate
      step towards implementing KASLR, which allows the kernel image to be
      located anywhere in the vmalloc area.
      
      Since other subsystems such as hibernate may still need to refer to the
      kernel text or data segments via their linear addresses, both are mapped
      in the linear region as well. The linear alias of the text region is
      mapped read-only/non-executable to prevent inadvertent modification or
      execution.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  9. 11 December 2015, 1 commit
    • arm64: mm: fold alternatives into .init · 9aa4ec15
      Committed by Mark Rutland
      Currently we treat the alternatives separately from other data that's
      only used during initialisation, using separate .altinstructions and
      .altinstr_replacement linker sections. These are freed for general
      allocation separately from .init*. This is problematic as:
      
      * We do not remove execute permissions, as we do for .init, leaving the
        memory executable.
      
      * We pad between them, making the kernel Image binary up to PAGE_SIZE
        bytes larger than necessary.
      
      This patch moves the two sections into the contiguous region used for
      .init*. This saves some memory, ensures that we remove execute
      permissions, and allows us to remove some code made redundant by this
      reorganisation.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  10. 10 December 2015, 1 commit
  11. 02 December 2015, 1 commit
  12. 30 October 2015, 1 commit
  13. 13 October 2015, 1 commit
    • ARM64: kasan: print memory assignment · ee7f881b
      Committed by Linus Walleij
      This prints out the virtual memory assigned to KASan in the
      boot crawl along with other memory assignments, if and only
      if KASan is activated.
      
      Example dmesg from the Juno Development board:
      
      Memory: 1691156K/2080768K available (5465K kernel code, 444K rwdata,
      2160K rodata, 340K init, 217K bss, 373228K reserved, 16384K cma-reserved)
      Virtual kernel memory layout:
          kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
          vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)
          vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
                    0xffffffbdc2000000 - 0xffffffbdc3fc0000   (    31 MB actual)
          fixed   : 0xffffffbffabfd000 - 0xffffffbffac00000   (    12 KB)
          PCI I/O : 0xffffffbffae00000 - 0xffffffbffbe00000   (    16 MB)
          modules : 0xffffffbffc000000 - 0xffffffc000000000   (    64 MB)
          memory  : 0xffffffc000000000 - 0xffffffc07f000000   (  2032 MB)
            .init : 0xffffffc0007f5000 - 0xffffffc00084a000   (   340 KB)
            .text : 0xffffffc000080000 - 0xffffffc0007f45b4   (  7634 KB)
            .data : 0xffffffc000850000 - 0xffffffc0008bf200   (   445 KB)
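
      The change boils down to one extra, KASan-conditional line in the
      mem_init() layout banner; roughly (a sketch, not the literal diff):

         #ifdef CONFIG_KASAN
                 pr_notice("    kasan   : 0x%16lx - 0x%16lx   (%6ld GB)\n",
                           KASAN_SHADOW_START, KASAN_SHADOW_END,
                           (KASAN_SHADOW_END - KASAN_SHADOW_START) >> 30);
         #endif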
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  14. 28 July 2015, 1 commit
  15. 17 June 2015, 1 commit
    • arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP · b9bcc919
      Committed by Dave P Martin
      The memmap freeing code in free_unused_memmap() computes the end of
      each memblock by adding the memblock size onto the base.  However,
      if SPARSEMEM is enabled then the value (start) used for the base
      may already have been rounded downwards to work out which memmap
      entries to free after the previous memblock.
      
      This may cause memmap entries that are in use to get freed.
      
      In general, you're not likely to hit this problem unless there
      are at least 2 memblocks and one of them is not aligned to a
      sparsemem section boundary.  Note that carve-outs can increase
      the number of memblocks by splitting the regions listed in the
      device tree.
      
      This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
      vmemmap code deals with freeing the unused regions of the memmap
      instead of requiring the arch code to do it.
      
      This patch gets the memblock base out of the memblock directly when
      computing the block end address to ensure the correct value is used.
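
      The key change, roughly (a sketch of the free_unused_memmap() loop; the
      body is abridged and the names follow arm64's mm/init.c):

         for_each_memblock(memory, reg) {
                 start = __phys_to_pfn(reg->base);
                 /*
                  * ... with SPARSEMEM, 'start' may get rounded down to a
                  * section boundary here, and the gap up from the previous
                  * block's end is freed ...
                  */

                 /* Derive this block's end from the memblock itself, not from
                  * the possibly rounded-down 'start'. */
                 prev_end = __phys_to_pfn(reg->base + reg->size);
         }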
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  16. 02 June 2015, 2 commits
  17. 15 April 2015, 1 commit
  18. 28 February 2015, 1 commit
  19. 23 January 2015, 1 commit
    • arm64: Fix overlapping VA allocations · aa03c428
      Committed by Mark Rutland
      PCI IO space was intended to be 16MiB, at 32MiB below MODULES_VADDR, but
      commit d1e6dc91 ("arm64: Add architectural support for PCI")
      extended this to cover the full 32MiB. The final 8KiB of this 32MiB is
      also allocated for the fixmap, allowing for potential clashes between
      the two.
      
      This change was masked by assumptions in mem_init and the page table
      dumping code, which assumed the I/O space to be 16MiB long through
      separate hard-coded definitions.
      
      This patch changes the definition of the PCI I/O space allocation to
      live in asm/memory.h, along with the other VA space allocations. As the
      fixmap allocation depends on the number of fixmap entries, this is moved
      below the PCI I/O space allocation. Both the fixmap and PCI I/O space
      are guarded with 2MB of padding. Sites assuming the I/O space was 16MiB
      are moved over to use the new PCI_IO_{START,END} definitions, which stay
      in sync with the size of the I/O space (now restored to 16MiB).
      
      As a useful side effect, the use of the new PCI_IO_{START,END}
      definitions prevents a build issue in the dumping code due to a (now
      redundant) missing include of io.h for PCI_IOBASE.
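
      The resulting layout in asm/memory.h looks roughly like this (illustrative
      of the description above, not a verbatim excerpt):

         #define PCI_IO_SIZE     SZ_16M
         #define PCI_IO_END      (MODULES_VADDR - SZ_2M)    /* 2MB guard below modules */
         #define PCI_IO_START    (PCI_IO_END - PCI_IO_SIZE)
         #define FIXADDR_TOP     (PCI_IO_START - SZ_2M)     /* fixmap sits below PCI I/O */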
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Liviu Dudau <liviu.dudau@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: reorder FIXADDR and PCI_IO address_markers_idx enum]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  20. 22 January 2015, 1 commit
  21. 17 January 2015, 1 commit
    • arm64: respect mem= for EFI · 6083fe74
      Committed by Mark Rutland
      When booting with EFI, we acquire the EFI memory map after parsing the
      early params. This unfortunately renders the option useless as we call
      memblock_enforce_memory_limit (which uses memblock_remove_range behind
      the scenes) before we've added any memblocks. We end up removing
      nothing, then adding all of memory later when efi_init calls
      reserve_regions.
      
      Instead, we can log the limit and apply this later when we do the rest
      of the memblock work in memblock_init, which should work regardless of
      the presence of EFI. At the same time we may as well move the early
      parameter into arm64's mm/init.c, close to arm64_memblock_init.
      
      Any memory which must be mapped (e.g. for use by EFI runtime services)
      must be mapped explicitly rather than relying on the linear mapping,
      which may be truncated as a result of a mem= option passed on the kernel
      command line.
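
      In outline, the early parameter now just records the limit, and the limit
      is enforced together with the rest of the memblock setup (a sketch,
      assuming the variable and function names used in arm64's mm/init.c):

         static phys_addr_t memory_limit = (phys_addr_t)ULLONG_MAX;

         static int __init early_mem(char *p)
         {
                 if (!p)
                         return 1;

                 memory_limit = memparse(p, &p) & PAGE_MASK;
                 return 0;
         }
         early_param("mem", early_mem);

         void __init arm64_memblock_init(void)
         {
                 /* Applied after the memblocks (DT or EFI) have been added. */
                 memblock_enforce_memory_limit(memory_limit);
                 /* ... */
         }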
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  22. 16 January 2015, 1 commit
  23. 25 November 2014, 1 commit
  24. 03 October 2014, 1 commit
  25. 18 September 2014, 1 commit
  26. 09 September 2014, 1 commit
    • efi/arm64: Fix fdt-related memory reservation · 0ceac9e0
      Committed by Mark Salter
      Commit 86c8b27a ("arm64: ignore DT memreserve entries when booting in
      UEFI mode") prevents early_init_fdt_scan_reserved_mem() from being called
      for arm64 kernels booting via UEFI. This was done because the kernel
      will use the UEFI memory map to determine reserved memory regions.
      That approach has problems in that early_init_fdt_scan_reserved_mem()
      also reserves the FDT itself and any node-specific reserved memory.
      With some kernel configurations, the FDT may by chance be overwritten
      before it can be unflattened, and the kernel will then fail to boot. More
      subtle problems result if the FDT has node-specific reserved memory which
      is not really reserved.
      
      This patch has the UEFI stub remove the memory reserve map entries
      from the FDT as it does with the memory nodes. This allows
      early_init_fdt_scan_reserved_mem() to be called unconditionally
      so that the other needed reservations are made.
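
      On the stub side, this boils down to walking the FDT's memory reserve map
      and deleting each entry via libfdt (a sketch of the idea, not the exact
      hunk):

         /* UEFI's memory map is authoritative; drop the /memreserve/ entries. */
         int num_rsv = fdt_num_mem_rsv(fdt);

         while (num_rsv-- > 0)
                 fdt_del_mem_rsv(fdt, num_rsv);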
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  27. 20 August 2014, 1 commit
  28. 23 July 2014, 2 commits
  29. 10 July 2014, 1 commit
    • arm64: place initial page tables above the kernel · bd00cd5f
      Committed by Mark Rutland
      Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
      image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
      bootloaders may use portions of this memory below the kernel and we do
      not parse the memory reservation list until after the MMU has been
      enabled. As such we may clobber some memory a bootloader wishes to have
      preserved.
      
      To enable the use of all of this memory by bootloaders (when the
      required memory reservations are communicated to the kernel) it is
      necessary to move our initial page tables elsewhere. As we currently
      have an effectively unbound requirement for memory at the end of the
      kernel image for .bss, we can place the page tables here.
      
      This patch moves the initial page tables to the end of the kernel image,
      after the BSS. As they do not consist of any initialised data, they will
      be stripped from the kernel Image as with the BSS. The BSS clearing
      routine is updated to stop at __bss_stop rather than _end so as to not
      clobber the page tables, and memory reservations made redundant by the
      new organisation are removed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  30. 18 June 2014, 1 commit
  31. 30 April 2014, 1 commit
  32. 13 March 2014, 1 commit
  33. 27 February 2014, 2 commits