1. 25 Nov, 2014 (2 commits)
  2. 21 Nov, 2014 (1 commit)
    • arm64: mm: report unhandled level-0 translation faults correctly · 7f73f7ae
      Committed by Will Deacon
      Translation faults that occur due to the input address being outside
      of the address range mapped by the relevant base register are reported
      as level 0 faults in ESR.DFSC.
      
      If the faulting access cannot be resolved by the kernel (e.g. because
      it is not mapped by a vma), then we report "input address range fault"
      on the console. This was fine until we added support for 48-bit VAs,
      which actually place PGDs at level 0 and can trigger faults for invalid
      addresses that are within the range of the page tables.
      
      This patch changes the string to report "level 0 translation fault",
      which is far less confusing.
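
      As a hedged sketch of what such a change looks like, assuming the
      fault_info[] table in arch/arm64/mm/fault.c that maps ESR.DFSC values
      to handler/name pairs (the handler and table position shown here are
      illustrative, not taken from the patch):

        static const struct fault_info {
                int     (*fn)(unsigned long addr, unsigned int esr,
                              struct pt_regs *regs);
                int     sig;
                int     code;
                const char *name;
        } fault_info[] = {
                /* ... */
                { do_translation_fault, SIGSEGV, SEGV_MAPERR,
                  "level 0 translation fault" }, /* was "input address range fault" */
                /* ... */
        };
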
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 20 Nov, 2014 (1 commit)
    • arm64: pgalloc: consistently use PGALLOC_GFP · 15670ef1
      Committed by Mark Rutland
      We currently allocate different levels of page tables with a variety of
      differing flags, and the PGALLOC_GFP flags, intended for use when
      allocating any level of page table, are only used for ptes in
      pte_alloc_one. On x86, PGALLOC_GFP is used for all page table
      allocations.
      
      Currently the major differences are:
      
      * __GFP_NOTRACK -- Needed to ensure page tables are always accessible in
        the presence of kmemcheck to prevent recursive faults. Currently
        kmemcheck cannot be selected for arm64.
      
      * __GFP_REPEAT -- Causes the allocator to try to reclaim pages and retry
        upon a failure to allocate.
      
      * __GFP_ZERO -- Sometimes passed explicitly, sometimes zalloc variants
        are used.
      
      While we've not encountered issues so far, it would be preferable to
      be consistent. This patch ensures all levels of page table are
      allocated in the same manner, with PGALLOC_GFP.
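
      As a sketch of the resulting pattern (the flag composition below
      mirrors the x86 definition described above; the exact arm64 flags may
      differ, and the helper name is hypothetical):

        #define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO)

        static pmd_t *pmd_alloc_one_sketch(void)
        {
                /* __GFP_ZERO means no explicit zalloc variant is needed */
                return (pmd_t *)__get_free_page(PGALLOC_GFP);
        }
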
      
      Cc: Steve Capper <steve.capper@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. 19 Nov, 2014 (1 commit)
    • arm64/mm: Remove hack in mmap randomize layout · d6c763af
      Committed by Yann Droneaud
      Since commit 8a0a9bd4 ('random: make get_random_int() more
      random'), get_random_int() returns a fresh random value on each
      call, so the comment and hack introduced in mmap_rnd() as part of
      commit 1d18c47c ('arm64: MMU fault handling and page table
      management') are incorrect.
      
      Commit 1d18c47c seems to use the same hack introduced by
      commit a5adc91a ('powerpc: Ensure random space between stack
      and mmaps'), later copied by commit 5a0efea0 ('sparc64: Sharpen
      address space randomization calculations.').
      
      But both architectures were cleaned up as part of commit
      fa8cbaaf ('powerpc+sparc64/mm: Remove hack in mmap randomize
      layout'), as the hack has not been needed since commit 8a0a9bd4.
      
      So the present patch removes the comment and the hack around
      get_random_int() on AArch64's mmap_rnd().
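
      A minimal sketch of what mmap_rnd() reduces to once the workaround is
      gone (the mask and shift names follow the usual conventions and are
      assumptions here):

        static unsigned long mmap_rnd(void)
        {
                unsigned long rnd = 0;

                /* get_random_int() can now be trusted directly; no mixing
                 * of multiple calls is needed */
                if (current->flags & PF_RANDOMIZE)
                        rnd = (unsigned long)get_random_int() & STACK_RND_MASK;

                return rnd << PAGE_SHIFT;
        }
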
      
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Dan McGee <dpmcgee@gmail.com>
      Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  5. 07 Nov, 2014 (1 commit)
  6. 25 Oct, 2014 (1 commit)
  7. 21 Oct, 2014 (2 commits)
  8. 10 Oct, 2014 (2 commits)
  9. 03 Oct, 2014 (2 commits)
  10. 02 Oct, 2014 (1 commit)
  11. 22 Sep, 2014 (1 commit)
  12. 18 Sep, 2014 (1 commit)
  13. 16 Sep, 2014 (1 commit)
  14. 12 Sep, 2014 (1 commit)
  15. 09 Sep, 2014 (1 commit)
    • efi/arm64: Fix fdt-related memory reservation · 0ceac9e0
      Committed by Mark Salter
      Commit 86c8b27a ('arm64: ignore DT memreserve entries when booting
      in UEFI mode') prevents early_init_fdt_scan_reserved_mem() from being called for
      arm64 kernels booting via UEFI. This was done because the kernel
      will use the UEFI memory map to determine reserved memory regions.
      That approach is problematic in that early_init_fdt_scan_reserved_mem()
      also reserves the FDT itself and any node-specific reserved memory.
      With some kernel configs, the FDT may by chance be overwritten before
      it can be unflattened, and the kernel will then fail to boot. More
      subtle problems will result if the FDT has node-specific reserved
      memory which is not really reserved.
      
      This patch has the UEFI stub remove the memory reserve map entries
      from the FDT as it does with the memory nodes. This allows
      early_init_fdt_scan_reserved_mem() to be called unconditionally
      so that the other needed reservations are made.
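
      A hedged sketch of the stub-side removal, assuming the standard
      libfdt memory reservation API (fdt_num_mem_rsv()/fdt_del_mem_rsv());
      the helper name is hypothetical:

        #include <libfdt.h>

        /* Drop every entry from the FDT's memory reservation map so that
         * early_init_fdt_scan_reserved_mem() can later run unconditionally
         * without duplicating regions already covered by the UEFI map. */
        static int remove_fdt_memreserves(void *fdt)
        {
                int n = fdt_num_mem_rsv(fdt);

                /* delete from the end so remaining indices stay valid */
                while (n-- > 0) {
                        int err = fdt_del_mem_rsv(fdt, n);
                        if (err)
                                return err;
                }
                return 0;
        }
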
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  16. 08 Sep, 2014 (2 commits)
    • arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support · 11d91a77
      Committed by Laura Abbott
      In a similar fashion to other architectures, add the infrastructure
      and Kconfig option to enable DEBUG_SET_MODULE_RONX support. When
      enabled, module ranges will be marked read-only/no-execute as
      appropriate.
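
      This rests on per-arch page-attribute helpers along these lines (a
      sketch assuming a change_memory_common() style walker that sets and
      clears PTE bits over a range; the exact bit choices are assumptions):

        /* mark a module's text read-only */
        int set_memory_ro(unsigned long addr, int numpages)
        {
                return change_memory_common(addr, numpages,
                                            __pgprot(PTE_RDONLY),  /* set */
                                            __pgprot(PTE_WRITE));  /* clear */
        }

        /* mark a module's data non-executable */
        int set_memory_nx(unsigned long addr, int numpages)
        {
                return change_memory_common(addr, numpages,
                                            __pgprot(PTE_PXN),     /* set */
                                            __pgprot(0));          /* clear */
        }
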
      Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
      [will: fixed off-by-one in module end check]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: convert part of soft_restart() to assembly · 5e051531
      Committed by Arun Chandran
      The current soft_restart() and setup_restart() implementations
      incorrectly assume that the compiler will not spill/fill values
      to/from the stack. However this assumption seems to be wrong, as
      revealed by the disassembly of the existing code (v3.16) built with
      Linaro GCC 4.9-2014.05.
      
      ffffffc000085224 <soft_restart>:
      ffffffc000085224:  a9be7bfd  stp    x29, x30, [sp,#-32]!
      ffffffc000085228:  910003fd  mov    x29, sp
      ffffffc00008522c:  f9000fa0  str    x0, [x29,#24]
      ffffffc000085230:  94003d21  bl     ffffffc0000946b4 <setup_mm_for_reboot>
      ffffffc000085234:  94003b33  bl     ffffffc000093f00 <flush_cache_all>
      ffffffc000085238:  94003dfa  bl     ffffffc000094a20 <cpu_cache_off>
      ffffffc00008523c:  94003b31  bl     ffffffc000093f00 <flush_cache_all>
      ffffffc000085240:  b0003321  adrp   x1, ffffffc0006ea000 <reset_devices>
      
      ffffffc000085244:  f9400fa0  ldr    x0, [x29,#24] ----> spilled addr
      ffffffc000085248:  f942fc22  ldr    x2, [x1,#1528] ----> global memstart_addr
      
      ffffffc00008524c:  f0000061  adrp   x1, ffffffc000094000 <__inval_cache_range+0x40>
      ffffffc000085250:  91290021  add    x1, x1, #0xa40
      ffffffc000085254:  8b010041  add    x1, x2, x1
      ffffffc000085258:  d2c00802  mov    x2, #0x4000000000           // #274877906944
      ffffffc00008525c:  8b020021  add    x1, x1, x2
      ffffffc000085260:  d63f0020  blr    x1
      ...
      
      Here the compiler generates memory accesses after the cache is
      disabled, loading stale values for the spilled address and the global
      variable. As we cannot control when the compiler will access memory,
      we must rewrite the functions in assembly to stash the values we need
      in registers prior to disabling the cache, avoiding the use of memory.
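
      On the C side, soft_restart() then shrinks to little more than a
      trampoline into the assembly routine (a sketch; the routine name and
      arguments are assumptions based on the description):

        void soft_restart(unsigned long addr)
        {
                setup_mm_for_reboot();

                /* from here on, nothing may touch the stack or globals:
                 * the asm routine keeps addr in registers, disables the
                 * caches/MMU itself and branches to addr */
                cpu_soft_restart(virt_to_phys(cpu_reset), addr);

                /* should never get here */
                BUG();
        }
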
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Arun Chandran <achandran@mvista.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  17. 20 Aug, 2014 (1 commit)
  18. 23 Jul, 2014 (5 commits)
  19. 10 Jul, 2014 (1 commit)
    • arm64: place initial page tables above the kernel · bd00cd5f
      Committed by Mark Rutland
      Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
      image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
      bootloaders may use portions of this memory below the kernel and we do
      not parse the memory reservation list until after the MMU has been
      enabled. As such we may clobber some memory a bootloader wishes to have
      preserved.
      
      To enable the use of all of this memory by bootloaders (when the
      required memory reservations are communicated to the kernel) it is
      necessary to move our initial page tables elsewhere. As we currently
      have an effectively unbounded requirement for memory at the end of the
      kernel image for .bss, we can place the page tables there.
      
      This patch moves the initial page tables to the end of the kernel
      image, after the BSS. As they do not consist of any initialised data
      they will
      be stripped from the kernel Image as with the BSS. The BSS clearing
      routine is updated to stop at __bss_stop rather than _end so as to not
      clobber the page tables, and memory reservations made redundant by the
      new organisation are removed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  20. 09 Jul, 2014 (1 commit)
  21. 04 Jul, 2014 (1 commit)
  22. 18 Jun, 2014 (1 commit)
  23. 05 Jun, 2014 (1 commit)
  24. 17 May, 2014 (1 commit)
    • arm64: fix pud_huge() for 2-level pagetables · 4797ec2d
      Committed by Mark Salter
      The following happens when trying to run a kvm guest on a kernel
      configured for 64k pages. This doesn't happen with 4k pages:
      
        BUG: failure at include/linux/mm.h:297/put_page_testzero()!
        Kernel panic - not syncing: BUG!
        CPU: 2 PID: 4228 Comm: qemu-system-aar Tainted: GF            3.13.0-0.rc7.31.sa2.k32v1.aarch64.debug #1
        Call trace:
        [<fffffe0000096034>] dump_backtrace+0x0/0x16c
        [<fffffe00000961b4>] show_stack+0x14/0x1c
        [<fffffe000066e648>] dump_stack+0x84/0xb0
        [<fffffe0000668678>] panic+0xf4/0x220
        [<fffffe000018ec78>] free_reserved_area+0x0/0x110
        [<fffffe000018edd8>] free_pages+0x50/0x88
        [<fffffe00000a759c>] kvm_free_stage2_pgd+0x30/0x40
        [<fffffe00000a5354>] kvm_arch_destroy_vm+0x18/0x44
        [<fffffe00000a1854>] kvm_put_kvm+0xf0/0x184
        [<fffffe00000a1938>] kvm_vm_release+0x10/0x1c
        [<fffffe00001edc1c>] __fput+0xb0/0x288
        [<fffffe00001ede4c>] ____fput+0xc/0x14
        [<fffffe00000d5a2c>] task_work_run+0xa8/0x11c
        [<fffffe0000095c14>] do_notify_resume+0x54/0x58
      
      In arch/arm/kvm/mmu.c:unmap_range(), we end up doing an extra
      put_page() on the stage2 pgd, which leads to the BUG in
      put_page_testzero(). This happens because a pud_huge() test in
      unmap_range() returns true when it should always be false with the
      2-level page tables used by 64k pages. This patch removes support for
      huge puds if 2-level pagetables are being used.
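
      A hedged sketch of the fix, assuming 2-level configurations are
      detectable via __PAGETABLE_PMD_FOLDED (defined when the pmd level is
      folded, as with 64k pages and 2-level tables):

        int pud_huge(pud_t pud)
        {
        #ifndef __PAGETABLE_PMD_FOLDED
                return !(pud_val(pud) & PUD_TABLE_BIT);
        #else
                /* the pud is folded; there is never a huge pud */
                return 0;
        #endif
        }
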
      Signed-off-by: Mark Salter <msalter@redhat.com>
      [catalin.marinas@arm.com: removed #ifndef around PUD_SIZE check]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: <stable@vger.kernel.org> # v3.11+
  25. 16 May, 2014 (1 commit)
  26. 10 May, 2014 (2 commits)
  27. 09 May, 2014 (4 commits)
    • arm64: mm: Create gigabyte kernel logical mappings where possible · 206a2a73
      Committed by Steve Capper
      We have the capability to map 1GB level 1 blocks when using a 4K
      granule.
      
      This patch adjusts the create_mapping() logic such that, when mapping
      physical memory on boot, we attempt to use a 1GB block if both the VA
      and PA start and end are 1GB aligned. This reduces the number of
      levels of lookup required to resolve a kernel logical address and also
      reduces TLB pressure on cores that support 1GB TLB entries.
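
      A sketch of the alignment test inside the pud-level loop of
      create_mapping() (4K granule assumed; the variable names are
      illustrative):

        next = pud_addr_end(addr, end);

        /* use a 1GB block only if VA, PA and the region end are all
         * aligned to a 1GB boundary */
        if ((PAGE_SHIFT == 12) &&
            ((addr | next | phys) & ~PUD_MASK) == 0) {
                set_pud(pud, __pud(phys | PROT_SECT_NORMAL_EXEC));
        } else {
                alloc_init_pmd(pud, addr, next, phys);
        }
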
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Tested-by: Jungseok Lee <jays.lee@samsung.com>
      [catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Clean up the default pgprot setting · a501e324
      Committed by Catalin Marinas
      The primary aim of this patchset is to remove the pgprot_default and
      prot_sect_default global variables and rely strictly on predefined
      values. The original goal was to be able to run SMP kernels on UP
      hardware by not setting the Shareability bit. However, UP ARMv8
      hardware is unlikely to appear and, even if it does, the Shareability
      bit is no longer assumed to disable cacheable accesses.
      
      A side effect is that the device mappings now have the Shareability
      attribute set. The hardware, however, should ignore it since Device
      accesses are always Outer Shareable.
      
      Following the removal of the two global variables, there is some PROT_*
      macro reshuffling and cleanup, including the __PAGE_* macros (replaced
      by PAGE_*).
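
      The resulting defaults end up as fixed macro definitions along these
      lines (a sketch; bit names follow the arm64 pgtable headers, with the
      Shareability attribute now always included):

        #define PROT_DEFAULT      (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
        #define PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)

        #define PAGE_KERNEL       __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | \
                                           PTE_DIRTY | PTE_WRITE)
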
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
    • arm64: Introduce execute-only page access permissions · bc07c2c6
      Committed by Catalin Marinas
      The ARMv8 architecture allows execute-only user permissions by clearing
      the PTE_UXN and PTE_USER bits. The kernel, however, can still access
      such a page, so execute-only page permission does not protect against
      read(2)/write(2) etc. accesses. Systems requiring such protection must
      implement/enable features like SECCOMP.
      
      This patch changes the arm64 __P100 and __S100 protection_map[] macros
      to the new __PAGE_EXECONLY attributes. A side effect is that
      pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER
      isn't set. To work around this, the check is done on the PTE_NG bit via
      the pte_valid_ng() macro. VM_READ is also now checked for page faults.
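
      A hedged sketch of the new attribute and validity check (treat the
      exact bit composition as illustrative):

        /* executable by user space; PTE_USER is clear, so the page is not
         * user-readable/writable, though the kernel can still access it */
        #define __PAGE_EXECONLY   __pgprot(_PAGE_DEFAULT | PTE_NG | PTE_PXN)

        /* PTE_USER can no longer identify user mappings, so key off
         * PTE_NG, which is set for all user mappings */
        #define pte_valid_ng(pte) \
                ((pte_val(pte) & (PTE_NG | PTE_VALID)) == (PTE_NG | PTE_VALID))
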
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Provide read/write fault information in compat signal handlers · 9141300a
      Committed by Catalin Marinas
      For AArch32, bit 11 (WnR) of the FSR/ESR register is set when the
      fault was caused by a write access, and applications like Qemu rely
      on this information being provided in sigcontext. This patch
      introduces ESR_EL1 tracking for arm64 kernel faults and sets bit 11
      accordingly in the compat sigcontext.
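
      A hedged sketch of the flow (the field and helper names are
      assumptions based on the description):

        /* fault.c: remember the ESR for a faulting user access */
        tsk->thread.fault_code = esr;   /* ESR_EL1 snapshot */

        /* signal32.c: expose WnR (bit 11) to the compat sigframe */
        __put_user_error(current->thread.fault_code,
                         &sf->uc.uc_mcontext.error_code, err);
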
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>