1. 28 June 2016, 1 commit
• arm64: mm: simplify memblock numa node extraction · ea2cbee3
  Authored by Mark Rutland
      We currently open-code extracting the NUMA node of a memblock region,
      which requires an ifdef to cater for !CONFIG_NUMA builds where the
      memblock_region::nid field does not exist.
      
      The generic memblock_get_region_node helper is intended to cater for
this. For CONFIG_HAVE_MEMBLOCK_NODE_MAP builds this returns reg->nid,
and for !CONFIG_HAVE_MEMBLOCK_NODE_MAP builds this is a static
      inline that returns 0. Note that for arm64,
      CONFIG_HAVE_MEMBLOCK_NODE_MAP is selected iff CONFIG_NUMA is.
      
      This patch makes use of memblock_get_region_node to simplify the arm64
      code. At the same time, we can move the nid variable definition into the
      loop, as this is the only place it is used.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
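For illustration, a minimal sketch of the before/after shape this describes
(not the verbatim diff; the loop body is elided):

    /* Before: open-coded nid extraction, ifdef'd for !CONFIG_NUMA */
    struct memblock_region *reg;

    for_each_memblock(memory, reg) {
    #ifdef CONFIG_NUMA
            int nid = reg->nid;
    #else
            int nid = 0;
    #endif
            /* ... use nid ... */
    }

    /* After: the generic helper hides the config dependency */
    for_each_memblock(memory, reg) {
            int nid = memblock_get_region_node(reg);
            /* ... use nid ... */
    }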
2. 21 June 2016, 1 commit
3. 20 April 2016, 2 commits
4. 16 April 2016, 1 commit
5. 14 April 2016, 4 commits
• arm64: mm: move vmemmap region right below the linear region · 3e1907d5
  Authored by Ard Biesheuvel
      This moves the vmemmap region right below PAGE_OFFSET, aka the start
      of the linear region, and redefines its size to be a power of two.
      Due to the placement of PAGE_OFFSET in the middle of the address space,
      whose size is a power of two as well, this guarantees that virt to
      page conversions and vice versa can be implemented efficiently, by
      masking and shifting rather than ordinary arithmetic.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
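To illustrate the payoff, a hedged sketch of the resulting conversion
(illustrative only, not the exact arm64 macros):

    /* With a power-of-two vmemmap region right below PAGE_OFFSET,
     * virt-to-page needs no reference to the physical start of RAM:
     * mask off the linear-region bits, scale by sizeof(struct page). */
    #define __virt_to_pgoff(va) \
            (((u64)(va) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
    #define virt_to_page(va) \
            ((struct page *)(VMEMMAP_START + __virt_to_pgoff(va)))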
• arm64: mm: free __init memory via the linear mapping · d386825c
  Authored by Ard Biesheuvel
      The implementation of free_initmem_default() expects __init_begin
      and __init_end to be covered by the linear mapping, which is no
      longer the case. So open code it instead, using addresses that are
      explicitly translated from kernel virtual to linear virtual.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
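Condensed, the open-coded replacement looks roughly like this (a sketch
following the commit description; the exact call site may differ):

    void free_initmem(void)
    {
            /* Free .init through its linear alias: translate the kernel
             * VAs to physical, then back into the linear mapping. */
            free_reserved_area(__va(__pa(__init_begin)),
                               __va(__pa(__init_end)),
                               0, "unused kernel");
    }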
• arm64: add the initrd region to the linear mapping explicitly · 177e15f0
  Authored by Ard Biesheuvel
      Instead of going out of our way to relocate the initrd if it turns out
      to occupy memory that is not covered by the linear mapping, just add the
      initrd to the linear mapping. This puts the burden on the bootloader to
      pass initrd= and mem= options that are mutually consistent.
      
      Note that, since the placement of the linear region in the PA space is
      also dependent on the placement of the kernel Image, which may reside
      anywhere in memory, we may still end up with a situation where the initrd
      and the kernel Image are simply too far apart to be covered by the linear
      region.
      
      Since we now leave it up to the bootloader to pass the initrd in memory
      that is guaranteed to be accessible by the kernel, add a mention of this to
      the arm64 boot protocol specification as well.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
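A condensed sketch of the change (assuming, per the companion patches, that
initrd_start/initrd_end still hold physical addresses at this point in boot):

    if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_start) {
            /* Make sure the initrd is covered by the linear mapping,
             * then keep it out of the free page pool. */
            memblock_add(initrd_start, initrd_end - initrd_start);
            memblock_reserve(initrd_start, initrd_end - initrd_start);
    }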
• arm64/mm: ensure memstart_addr remains sufficiently aligned · 2958987f
  Authored by Ard Biesheuvel
      After choosing memstart_addr to be the highest multiple of
      ARM64_MEMSTART_ALIGN less than or equal to the first usable physical memory
      address, we clip the memblocks to the maximum size of the linear region.
      Since the kernel may be high up in memory, we take care not to clip the
      kernel itself, which means we have to clip some memory from the bottom if
      this occurs, to ensure that the distance between the first and the last
      usable physical memory address can be covered by the linear region.
      
      However, we fail to update memstart_addr if this clipping from the bottom
      occurs, which means that we may still end up with virtual addresses that
      wrap into the userland range. So increment memstart_addr as appropriate to
      prevent this from happening.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
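In rough code form (a sketch based on the commit description, not the
literal diff):

    if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
            /* We are clipping from the bottom: raise memstart_addr by
             * the same amount, keeping it aligned, so that __va()
             * results cannot wrap below PAGE_OFFSET into userland. */
            memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
                                     ARM64_MEMSTART_ALIGN);
            memblock_remove(0, memstart_addr);
    }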
6. 21 March 2016, 1 commit
7. 02 March 2016, 1 commit
8. 01 March 2016, 2 commits
9. 27 February 2016, 1 commit
• arm64: vmemmap: use virtual projection of linear region · dfd55ad8
  Authored by Ard Biesheuvel
      Commit dd006da2 ("arm64: mm: increase VA range of identity map") made
      some changes to the memory mapping code to allow physical memory to reside
      at an offset that exceeds the size of the virtual mapping.
      
      However, since the size of the vmemmap area is proportional to the size of
      the VA area, but it is populated relative to the physical space, we may
      end up with the struct page array being mapped outside of the vmemmap
      region. For instance, on my Seattle A0 box, I can see the following output
      in the dmesg log.
      
         vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
                   0xffffffbfc0000000 - 0xffffffbfd0000000   (   256 MB actual)
      
      We can fix this by deciding that the vmemmap region is not a projection of
      the physical space, but of the virtual space above PAGE_OFFSET, i.e., the
      linear region. This way, we are guaranteed that the vmemmap region is of
      sufficient size, and we can even reduce the size by half.
      
      Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
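In sketch form (illustrative, not the literal header change):

    /* Size the region for the linear VA span (half the VA space), and
     * bias the vmemmap pointer so linear addresses index it directly. */
    #define VMEMMAP_SIZE \
            ((UL(1) << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page))
    #define vmemmap \
            ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))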
10. 26 February 2016, 2 commits
11. 24 February 2016, 1 commit
• arm64: kaslr: randomize the linear region · c031a421
  Authored by Ard Biesheuvel
      When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been
      provided by the bootloader, randomize the placement of RAM inside the
      linear region if sufficient space is available. For instance, on a 4KB
      granule/3 levels kernel, the linear region is 256 GB in size, and we can
      choose any 1 GB aligned offset that is far enough from the top of the
      address space to fit the distance between the start of the lowest memblock
      and the top of the highest memblock.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
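A sketch of the randomization step described above (memstart_offset_seed is
the bootloader-provided entropy named by the KASLR series):

    u64 range = linear_region_size -
                (memblock_end_of_DRAM() - memblock_start_of_DRAM());

    /* Slide the linear region down by a random, suitably aligned
     * multiple, bounded by the slack between the linear region size
     * and the span of physical memory. */
    if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
            range = range / ARM64_MEMSTART_ALIGN + 1;
            memstart_addr -= ARM64_MEMSTART_ALIGN *
                             ((range * memstart_offset_seed) >> 16);
    }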
12. 19 February 2016, 3 commits
• arm64: allow kernel Image to be loaded anywhere in physical memory · a7f8de16
  Authored by Ard Biesheuvel
      This relaxes the kernel Image placement requirements, so that it
      may be placed at any 2 MB aligned offset in physical memory.
      
      This is accomplished by ignoring PHYS_OFFSET when installing
      memblocks, and accounting for the apparent virtual offset of
      the kernel Image. As a result, virtual address references
      below PAGE_OFFSET are correctly mapped onto physical references
      into the kernel Image regardless of where it sits in memory.
      
      Special care needs to be taken for dealing with memory limits passed
      via mem=, since the generic implementation clips memory top down, which
      may clip the kernel image itself if it is loaded high up in memory. To
      deal with this case, we simply add back the memory covering the kernel
image, which may result in more memory being retained than was passed
      as a mem= parameter.
      
      Since mem= should not be considered a production feature, a panic notifier
      handler is installed that dumps the memory limit at panic time if one was
      set.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
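The mem= handling described above reduces to roughly the following sketch:

    if (memory_limit != (phys_addr_t)ULLONG_MAX) {
            /* Clip top-down, then add the kernel image's footprint back
             * so the running kernel itself is never clipped away. */
            memblock_enforce_memory_limit(memory_limit);
            memblock_add(__pa(_text), (u64)(_end - _text));
    }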
• arm64: defer __va translation of initrd_start and initrd_end · a89dea58
  Authored by Ard Biesheuvel
      Before deferring the assignment of memstart_addr in a subsequent patch, to
      the moment where all memory has been discovered and possibly clipped based
      on the size of the linear region and the presence of a mem= command line
      parameter, we need to ensure that memstart_addr is not used to perform __va
      translations before it is assigned.
      
      One such use is in the generic early DT discovery of the initrd location,
      which is recorded as a virtual address in the globals initrd_start and
initrd_end. So wire up the generic support to declare the initrd addresses,
implement it without __va() translations, and perform the translation
      after memstart_addr has been assigned.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
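Condensed into a sketch (record physical addresses early, translate once
memstart_addr is known; details of the real hook may differ):

    /* asm/memory.h: stash the DT-provided range without __va() */
    #define __early_init_dt_declare_initrd(__start, __end)  \
            do {                                            \
                    initrd_start = (__start);               \
                    initrd_end = (__end);                   \
            } while (0)

    /* later, in arm64_memblock_init(), after memstart_addr is assigned: */
    initrd_start = __phys_to_virt(initrd_start);
    initrd_end = __phys_to_virt(initrd_end);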
• arm64: move kernel image to base of vmalloc area · f9040773
  Authored by Ard Biesheuvel
      This moves the module area to right before the vmalloc area, and moves
      the kernel image to the base of the vmalloc area. This is an intermediate
      step towards implementing KASLR, which allows the kernel image to be
      located anywhere in the vmalloc area.
      
      Since other subsystems such as hibernate may still need to refer to the
kernel text or data segments via their linear addresses, both are mapped
      in the linear region as well. The linear alias of the text region is
      mapped read-only/non-executable to prevent inadvertent modification or
      execution.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
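The read-only linear alias boils down to something like this (signature and
helper names approximate):

    /* Map the linear alias of [_text, _etext) read-only/non-executable,
     * so subsystems such as hibernate can still read the text via the
     * linear mapping without being able to modify or execute it. */
    __create_pgd_mapping(pgd, kernel_start, __phys_to_virt(kernel_start),
                         kernel_end - kernel_start, PAGE_KERNEL_RO,
                         early_pgtable_alloc);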
13. 11 December 2015, 1 commit
• arm64: mm: fold alternatives into .init · 9aa4ec15
  Authored by Mark Rutland
      Currently we treat the alternatives separately from other data that's
      only used during initialisation, using separate .altinstructions and
      .altinstr_replacement linker sections. These are freed for general
      allocation separately from .init*. This is problematic as:
      
      * We do not remove execute permissions, as we do for .init, leaving the
        memory executable.
      
* We pad between them, making the kernel Image binary up to PAGE_SIZE
        bytes larger than necessary.
      
      This patch moves the two sections into the contiguous region used for
      .init*. This saves some memory, ensures that we remove execute
      permissions, and allows us to remove some code made redundant by this
      reorganisation.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
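In vmlinux.lds.S terms, the move amounts to roughly this (condensed sketch):

    /* inside the region bounded by __init_begin / __init_end: */
    .altinstructions : {
            . = ALIGN(4);
            __alt_instructions = .;
            *(.altinstructions)
            __alt_instructions_end = .;
    }
    .altinstr_replacement : {
            *(.altinstr_replacement)
    }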
14. 10 December 2015, 1 commit
15. 02 December 2015, 1 commit
16. 30 October 2015, 1 commit
17. 13 October 2015, 1 commit
• ARM64: kasan: print memory assignment · ee7f881b
  Authored by Linus Walleij
      This prints out the virtual memory assigned to KASan in the
      boot crawl along with other memory assignments, if and only
      if KASan is activated.
      
      Example dmesg from the Juno Development board:
      
      Memory: 1691156K/2080768K available (5465K kernel code, 444K rwdata,
      2160K rodata, 340K init, 217K bss, 373228K reserved, 16384K cma-reserved)
      Virtual kernel memory layout:
          kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
          vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)
          vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
                    0xffffffbdc2000000 - 0xffffffbdc3fc0000   (    31 MB actual)
          fixed   : 0xffffffbffabfd000 - 0xffffffbffac00000   (    12 KB)
          PCI I/O : 0xffffffbffae00000 - 0xffffffbffbe00000   (    16 MB)
          modules : 0xffffffbffc000000 - 0xffffffc000000000   (    64 MB)
          memory  : 0xffffffc000000000 - 0xffffffc07f000000   (  2032 MB)
            .init : 0xffffffc0007f5000 - 0xffffffc00084a000   (   340 KB)
            .text : 0xffffffc000080000 - 0xffffffc0007f45b4   (  7634 KB)
            .data : 0xffffffc000850000 - 0xffffffc0008bf200   (   445 KB)
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
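The print itself is small; a sketch (the actual patch extends the existing
layout pr_notice rather than adding a separate call):

    #ifdef CONFIG_KASAN
            pr_notice("    kasan   : 0x%16lx - 0x%16lx   (%6lu GB)\n",
                      KASAN_SHADOW_START, KASAN_SHADOW_END,
                      (KASAN_SHADOW_END - KASAN_SHADOW_START) >> 30);
    #endif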
18. 28 July 2015, 1 commit
19. 17 June 2015, 1 commit
• arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP · b9bcc919
  Authored by Dave P Martin
      The memmap freeing code in free_unused_memmap() computes the end of
      each memblock by adding the memblock size onto the base.  However,
      if SPARSEMEM is enabled then the value (start) used for the base
      may already have been rounded downwards to work out which memmap
      entries to free after the previous memblock.
      
      This may cause memmap entries that are in use to get freed.
      
      In general, you're not likely to hit this problem unless there
      are at least 2 memblocks and one of them is not aligned to a
      sparsemem section boundary.  Note that carve-outs can increase
      the number of memblocks by splitting the regions listed in the
      device tree.
      
      This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
      vmemmap code deals with freeing the unused regions of the memmap
      instead of requiring the arch code to do it.
      
      This patch gets the memblock base out of the memblock directly when
      computing the block end address to ensure the correct value is used.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
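The fix itself is essentially one line in free_unused_memmap(), roughly:

    /* before: `start` may already be rounded down for SPARSEMEM */
    prev_end = ALIGN(start + __phys_to_pfn(reg->size), MAX_ORDER_NR_PAGES);

    /* after: compute the block end from the memblock itself */
    prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size), MAX_ORDER_NR_PAGES);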
20. 02 June 2015, 2 commits
21. 15 April 2015, 1 commit
22. 28 February 2015, 1 commit
23. 23 January 2015, 1 commit
• arm64: Fix overlapping VA allocations · aa03c428
  Authored by Mark Rutland
      PCI IO space was intended to be 16MiB, at 32MiB below MODULES_VADDR, but
      commit d1e6dc91 ("arm64: Add architectural support for PCI")
      extended this to cover the full 32MiB. The final 8KiB of this 32MiB is
      also allocated for the fixmap, allowing for potential clashes between
      the two.
      
      This change was masked by assumptions in mem_init and the page table
      dumping code, which assumed the I/O space to be 16MiB long through
separate hard-coded definitions.
      
      This patch changes the definition of the PCI I/O space allocation to
      live in asm/memory.h, along with the other VA space allocations. As the
      fixmap allocation depends on the number of fixmap entries, this is moved
      below the PCI I/O space allocation. Both the fixmap and PCI I/O space
      are guarded with 2MB of padding. Sites assuming the I/O space was 16MiB
are moved over to use the new PCI_IO_{START,END} definitions, which stay
in sync with the size of the I/O space (now restored to 16MiB).
      
      As a useful side effect, the use of the new PCI_IO_{START,END}
      definitions prevents a build issue in the dumping code due to a (now
      redundant) missing include of io.h for PCI_IOBASE.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Liviu Dudau <liviu.dudau@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: reorder FIXADDR and PCI_IO address_markers_idx enum]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
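Condensed, the new asm/memory.h layout reads roughly as follows (top of the
VA space downward, with the 2MB guards included):

    #define PCI_IO_SIZE     SZ_16M
    #define MODULES_END     PAGE_OFFSET
    #define MODULES_VADDR   (MODULES_END - SZ_64M)
    #define PCI_IO_END      (MODULES_VADDR - SZ_2M)
    #define PCI_IO_START    (PCI_IO_END - PCI_IO_SIZE)
    #define FIXADDR_TOP     (PCI_IO_START - SZ_2M)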
24. 22 January 2015, 1 commit
25. 17 January 2015, 1 commit
• arm64: respect mem= for EFI · 6083fe74
  Authored by Mark Rutland
      When booting with EFI, we acquire the EFI memory map after parsing the
early params. This unfortunately renders the mem= option useless as we call
      memblock_enforce_memory_limit (which uses memblock_remove_range behind
      the scenes) before we've added any memblocks. We end up removing
      nothing, then adding all of memory later when efi_init calls
      reserve_regions.
      
      Instead, we can log the limit and apply this later when we do the rest
of the memblock work in arm64_memblock_init, which should work regardless of
      the presence of EFI. At the same time we may as well move the early
      parameter into arm64's mm/init.c, close to arm64_memblock_init.
      
      Any memory which must be mapped (e.g. for use by EFI runtime services)
must be mapped explicitly rather than relying on the linear mapping,
      which may be truncated as a result of a mem= option passed on the kernel
      command line.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
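The deferred handling is compact; a condensed sketch:

    static phys_addr_t memory_limit = (phys_addr_t)ULLONG_MAX;

    /* Record the limit at early-param time... */
    static int __init early_mem(char *p)
    {
            if (!p)
                    return 1;
            memory_limit = memparse(p, &p) & PAGE_MASK;
            return 0;
    }
    early_param("mem", early_mem);

    /* ...and enforce it only once the memblocks actually exist. */
    void __init arm64_memblock_init(void)
    {
            memblock_enforce_memory_limit(memory_limit);
            /* ... */
    }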
26. 16 January 2015, 1 commit
27. 25 November 2014, 1 commit
28. 03 October 2014, 1 commit
29. 18 September 2014, 1 commit
30. 09 September 2014, 1 commit
• efi/arm64: Fix fdt-related memory reservation · 0ceac9e0
  Authored by Mark Salter
Commit 86c8b27a ("arm64: ignore DT memreserve entries when booting
in UEFI mode")
      prevents early_init_fdt_scan_reserved_mem() from being called for
      arm64 kernels booting via UEFI. This was done because the kernel
      will use the UEFI memory map to determine reserved memory regions.
      That approach has problems in that early_init_fdt_scan_reserved_mem()
      also reserves the FDT itself and any node-specific reserved memory.
With some kernel configs, the FDT may be overwritten before
      it can be unflattened and the kernel will fail to boot. More subtle
      problems will result if the FDT has node specific reserved memory
      which is not really reserved.
      
      This patch has the UEFI stub remove the memory reserve map entries
      from the FDT as it does with the memory nodes. This allows
      early_init_fdt_scan_reserved_mem() to be called unconditionally
      so that the other needed reservations are made.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
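Using standard libfdt calls, the stub-side removal can be sketched as:

    /* Drop every memreserve entry from the FDT; when booting via UEFI,
     * the UEFI memory map is authoritative for reserved regions. */
    while (fdt_num_mem_rsv(fdt) > 0)
            fdt_del_mem_rsv(fdt, 0);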
31. 20 August 2014, 1 commit