1. 06 May 2016, 2 commits
  2. 05 May 2016, 1 commit
  3. 03 May 2016, 2 commits
    • arm64: always use STRICT_MM_TYPECHECKS · 2326df55
      Committed by Yang Shi
      Inspired by the powerpc counterpart [1], which shows that enabling
      STRICT_MM_TYPECHECKS has no negative effect on code generation with a
      modern compiler.

      Arnd's comment [2] on that patch notes that STRICT_MM_TYPECHECKS could be
      the default as long as the architecture can pass structures in registers
      as function arguments. ARM64 can do so as long as the structure is no
      larger than 16 bytes, and all of the page table value types on ARM64 are
      u64.
      
      The disassembly below demonstrates this; entry is of type pte_t:
      
                  entry = arch_make_huge_pte(entry, vma, page, writable);
         0xffff00000826fc38 <+80>:    and     x0, x0, #0xfffffffffffffffd
         0xffff00000826fc3c <+84>:    mov     w3, w21
         0xffff00000826fc40 <+88>:    mov     x2, x20
         0xffff00000826fc44 <+92>:    mov     x1, x19
         0xffff00000826fc48 <+96>:    orr     x0, x0, #0x400
         0xffff00000826fc4c <+100>:   bl      0xffff00000809bcc0 <arch_make_huge_pte>
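      
      For context, the idiom being enabled looks roughly like the sketch below.
      This is a simplified illustration of the STRICT_MM_TYPECHECKS pattern,
      not a copy of the arm64 headers: the struct wrapper gives pte_t a
      distinct C type, so mixing up page table values becomes a compile error,
      and because the struct is only 8 bytes it is still passed in a single
      register, as the disassembly above shows.
      
          /*
           * Simplified sketch of the STRICT_MM_TYPECHECKS pattern (details
           * differ from the real arch/arm64/include/asm/pgtable-types.h).
           */
          typedef unsigned long long pteval_t;  /* page table values are u64 on arm64 */
          
          #ifdef STRICT_MM_TYPECHECKS
          /* Distinct struct type: type safe, same register-passing ABI at 8 bytes. */
          typedef struct { pteval_t pte; } pte_t;
          #define pte_val(x)  ((x).pte)
          #define __pte(x)    ((pte_t) { (x) })
          #else
          /* Plain integer: no type checking between the page table value types. */
          typedef pteval_t pte_t;
          #define pte_val(x)  (x)
          #define __pte(x)    (x)
          #endif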
      
      [1] http://www.spinics.net/lists/linux-mm/msg105951.html
      [2] http://www.spinics.net/lists/linux-mm/msg105969.html
      
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kvm: Fix kvm teardown for systems using the extended idmap · c612505f
      Committed by James Morse
      If memory is located above 1<<VA_BITS, kvm adds an extra level to its page
      tables, merging the runtime tables and boot tables that contain the idmap.
      This lets us avoid the trampoline dance during initialisation.
      
      This also means there is no trampoline page mapped, so
      __cpu_reset_hyp_mode() can't call __kvm_hyp_reset() through that page.
      The good news is that the idmap is still mapped, so we don't need the
      trampoline page. The bad news is that we can't call it directly, because
      the idmap is above HYP_PAGE_OFFSET, so its address is masked by
      kvm_call_hyp.
      
      Add a function, __extended_idmap_trampoline, that branches into
      __kvm_hyp_reset in the idmap, and change kvm_hyp_reset_entry() to return
      this function's address when __kvm_cpu_uses_extended_idmap(). In this
      case __kvm_hyp_reset() will still switch to the boot tables (which are
      the merged tables that were already in use) and branch into the idmap
      (where it already was).
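      
      A rough sketch of the resulting selection logic is shown below. The
      function and symbol names are the ones mentioned in this commit message;
      kvm_trampoline_reset_addr() is a purely hypothetical placeholder for
      however the normal (trampoline page) case computes its target, so this
      is an illustration of the idea rather than the actual arm64 KVM code.
      
          static unsigned long kvm_hyp_reset_entry(void)
          {
                  if (__kvm_cpu_uses_extended_idmap())
                          /*
                           * No trampoline page is mapped: return the idmap'd
                           * helper, which branches into __kvm_hyp_reset for us.
                           */
                          return (unsigned long)__extended_idmap_trampoline;
          
                  /*
                   * Normal case: reach __kvm_hyp_reset through the trampoline
                   * page, as before (hypothetical helper, for illustration).
                   */
                  return kvm_trampoline_reset_addr();
          }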
      
      This fixes boot failures on these systems, where we fail to execute the
      missing trampoline page when tearing down kvm in init_subsystems():
      [    2.508922] kvm [1]: 8-bit VMID
      [    2.512057] kvm [1]: Hyp mode initialized successfully
      [    2.517242] kvm [1]: interrupt-controller@e1140000 IRQ13
      [    2.522622] kvm [1]: timer IRQ3
      [    2.525783] Kernel panic - not syncing: HYP panic:
      [    2.525783] PS:200003c9 PC:0000007ffffff820 ESR:86000005
      [    2.525783] FAR:0000007ffffff820 HPFAR:00000000003ffff0 PAR:0000000000000000
      [    2.525783] VCPU:          (null)
      [    2.525783]
      [    2.547667] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W       4.6.0-rc5+ #1
      [    2.555137] Hardware name: Default string Default string/Default string, BIOS ROD0084E 09/03/2015
      [    2.563994] Call trace:
      [    2.566432] [<ffffff80080888d0>] dump_backtrace+0x0/0x240
      [    2.571818] [<ffffff8008088b24>] show_stack+0x14/0x20
      [    2.576858] [<ffffff80083423ac>] dump_stack+0x94/0xb8
      [    2.581899] [<ffffff8008152130>] panic+0x10c/0x250
      [    2.586677] [<ffffff8008152024>] panic+0x0/0x250
      [    2.591281] SMP: stopping secondary CPUs
      [    3.649692] SMP: failed to stop secondary CPUs 0-2,4-7
      [    3.654818] Kernel Offset: disabled
      [    3.658293] Memory Limit: none
      [    3.661337] ---[ end Kernel panic - not syncing: HYP panic:
      [    3.661337] PS:200003c9 PC:0000007ffffff820 ESR:86000005
      [    3.661337] FAR:0000007ffffff820 HPFAR:00000000003ffff0 PAR:0000000000000000
      [    3.661337] VCPU:          (null)
      [    3.661337]
      Reported-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. 29 April 2016, 3 commits
    • arm64: kaslr: increase randomization granularity · 6f26b367
      Committed by Ard Biesheuvel
      Currently, our KASLR implementation randomizes the placement of the core
      kernel at 2 MB granularity. This is based on the arm64 kernel boot
      protocol, which mandates that the kernel is loaded TEXT_OFFSET bytes above
      a 2 MB aligned base address. This requirement stems from the fact that
      the block size used by the early mapping code may be at most 2 MB (for a
      4 KB granule kernel).
      
      But we can do better than that: since a KASLR kernel needs to be
      relocated in any case, we can tolerate a physical misalignment as long
      as the virtual misalignment relative to this 2 MB block size is the
      same, and the code to deal with this is already in place.
      
      Since we align the kernel segments to 64 KB, let's randomize the physical
      offset at 64 KB granularity as well (unless CONFIG_DEBUG_ALIGN_RODATA is
      enabled). This way, the page table and TLB footprint is not affected.
      
      The higher granularity allows for 5 bits of additional entropy to be used.
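      
      As a quick sanity check of that figure (an illustrative user-space
      calculation, not kernel code): stepping the physical offset in 64 KB
      rather than 2 MB increments yields 2 MB / 64 KB = 32 possible positions
      per 2 MB block, i.e. log2(32) = 5 extra bits of entropy.
      
          #include <stdio.h>
          
          #define SZ_64K  (64UL * 1024)
          #define SZ_2M   (2UL * 1024 * 1024)
          
          int main(void)
          {
                  unsigned long steps = SZ_2M / SZ_64K;   /* 32 offsets per 2 MB block */
                  int extra_bits = 0;
          
                  while (steps > 1) {                     /* log2(steps) */
                          steps >>= 1;
                          extra_bits++;
                  }
                  printf("extra entropy: %d bits\n", extra_bits);  /* prints 5 */
                  return 0;
          }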
      Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kconfig: drop CONFIG_RTC_LIB dependency · 99a50777
      Committed by Ezequiel Garcia
      The rtc-lib dependency is not required; it seems it was simply
      copy-pasted from ARM's Kconfig. If a platform requires rtc-lib, it
      should select it individually.
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: make ARCH_SUPPORTS_DEBUG_PAGEALLOC depend on !HIBERNATION · da24eb1f
      Committed by Will Deacon
      Selecting both DEBUG_PAGEALLOC and HIBERNATION results in a build failure:
      
      | kernel/built-in.o: In function `saveable_page':
      | memremap.c:(.text+0x100f90): undefined reference to `kernel_page_present'
      | kernel/built-in.o: In function `swsusp_save':
      | memremap.c:(.text+0x1026f0): undefined reference to `kernel_page_present'
      | make: *** [vmlinux] Error 1
      
      James sayeth:
      
      "This is caused by DEBUG_PAGEALLOC, which clears the PTE_VALID bit from
      'free' pages. Hibernate uses it as a hint that it shouldn't save/access
      that page. This function is used to test whether the PTE_VALID bit has
      been cleared by kernel_map_pages(), hibernate is the only user.
      
      Fixing this exposes a bigger problem with that configuration though: if
      the resume kernel has cut free pages out of the linear map, we copy this
      swiss-cheese view of memory, and try to use it to restore...
      
      We can fixup the copy of the linear map, but it then explodes in my lazy
      'clean the whole kernel to PoC' after resume, as now both the kernel and
      linear map have holes in them."
      
      On closer inspection, the whole Kconfig machinery around DEBUG_PAGEALLOC,
      HIBERNATION, ARCH_SUPPORTS_DEBUG_PAGEALLOC and PAGE_POISONING looks like
      it might need some affection. In particular, DEBUG_PAGEALLOC has:
      
      > depends on !HIBERNATION || ARCH_SUPPORTS_DEBUG_PAGEALLOC && !PPC && !SPARC
      
      which looks pretty fishy.
      
      For the moment, make ARCH_SUPPORTS_DEBUG_PAGEALLOC depend on
      !HIBERNATION on arm64 to get allmodconfig building again.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  5. 28 April 2016, 14 commits
  6. 26 April 2016, 8 commits
  7. 25 April 2016, 9 commits
  8. 22 April 2016, 1 commit
    • mm: replace open coded page to virt conversion with page_to_virt() · 1dff8083
      Committed by Ard Biesheuvel
      The open coded conversion from struct page address to virtual address in
      lowmem_page_address() involves an intermediate conversion step to pfn
      number/physical address. Since the placement of the struct page array
      relative to the linear mapping may be completely independent from the
      placement of physical RAM (as is that case for arm64 after commit
      dfd55ad8 'arm64: vmemmap: use virtual projection of linear region'),
      the conversion to physical address and back again should factor out of
      the equation, but unfortunately, the shifting and pointer arithmetic
      involved prevent this from happening, and the resulting calculation
      essentially subtracts the address of the start of physical memory and
      adds it back again, in a way that prevents the compiler from optimizing
      it away.
      
      Since the start of physical memory is not a build time constant on arm64,
      the resulting conversion involves an unnecessary memory access, which
      we would like to get rid of. So replace the open coded conversion with
      a call to page_to_virt(), and use the open coded conversion as its
      default definition, to be overridden by the architecture, if desired.
      The existing arch specific definitions of page_to_virt are all equivalent
      to this default definition, so by itself this patch is a no-op.
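      
      Concretely, the change amounts to something like the sketch below
      (simplified; the real macros live in include/linux/mm.h and the arch
      headers, and may differ in detail):
      
          /* Default, open coded round trip: page -> pfn -> phys -> virt. */
          #ifndef page_to_virt
          #define page_to_virt(page)      __va(PFN_PHYS(page_to_pfn(page)))
          #endif
          
          static __always_inline void *lowmem_page_address(const struct page *page)
          {
                  /*
                   * An architecture such as arm64 can now override page_to_virt()
                   * with a direct struct page -> VA computation that avoids the
                   * load of the runtime start-of-memory variable.
                   */
                  return page_to_virt(page);
          }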
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>