1. 14 Dec 2015, 17 commits
  2. 12 Dec 2015, 2 commits
  3. 11 Dec 2015, 2 commits
  4. 08 Dec 2015, 1 commit
  5. 05 Dec 2015, 4 commits
  6. 04 Dec 2015, 1 commit
  7. 27 Nov 2015, 6 commits
  8. 26 Nov 2015, 3 commits
    • Revert "arm64: Mark kernel page ranges contiguous" · 667c2759
      Catalin Marinas committed
      This reverts commit 348a65cd.
      
      Incorrect page table manipulation that does not respect the ARM ARM
      recommended break-before-make sequence may lead to TLB conflicts. The
      contiguous PTE patch makes the system even more susceptible to such
      errors by changing the mapping from a single page to a contiguous range
      of pages. An additional TLB invalidation would reduce the risk window;
      however, the correct fix is to switch to a temporary swapper_pg_dir.
      Once the correct workaround is done, the reverted commit will be
      re-applied.
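      
      For context, a minimal sketch of the break-before-make sequence the ARM ARM
      recommends when changing a live mapping (kernel-style pseudo-code; the
      wrapper function is illustrative and is not the reverted patch's code,
      though set_pte(), __pte() and flush_tlb_kernel_range() are real kernel
      helpers):
      
        /* Illustrative break-before-make for replacing a live kernel mapping. */
        static void remap_page_bbm(pte_t *ptep, pte_t new_pte, unsigned long addr)
        {
                set_pte(ptep, __pte(0));                        /* 1. break: install an invalid entry */
                flush_tlb_kernel_range(addr, addr + PAGE_SIZE); /* 2. remove stale TLB entries        */
                set_pte(ptep, new_pte);                         /* 3. make: install the new entry     */
        }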
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Jeremy Linton <jeremy.linton@arm.com>
    • arm64: mm: keep reserved ASIDs in sync with mm after multiple rollovers · 0ebea808
      Will Deacon committed
      Under some unusual context-switching patterns, it is possible to end up
      with multiple threads from the same mm running concurrently with
      different ASIDs:
      
      1. CPU x schedules task t with mm p containing ASID a and generation g.
         This task doesn't block and the CPU doesn't context switch.
         So:
           * per_cpu(active_asid, x) = {g,a}
           * p->context.id = {g,a}
      
      2. Some other CPU generates an ASID rollover. The global generation is
         now (g + 1). CPU x is still running t, with no context switch and
         so per_cpu(reserved_asid, x) = {g,a}
      
      3. CPU y schedules task t', which shares mm p with t. The generation
         mismatches, so we take the slowpath and hit the reserved ASID from
         CPU x. p is then updated so that p->context.id = {g + 1,a}
      
      4. CPU y schedules some other task u, which has an mm != p.
      
      5. Some other CPU generates *another* ASID rollover. The global
         generation is now (g + 2). CPU x is still running t, with no context
         switch and so per_cpu(reserved_asid, x) = {g,a}.
      
      6. CPU y once again schedules task t', but now *fails* to hit the
         reserved ASID from CPU x because of the generation mismatch. This
         results in a new ASID being allocated, despite the fact that t is
         still running on CPU x with the same mm.
      
      Consequently, TLBIs (e.g. as a result of CoW) will not be synchronised
      between the two threads.
      
      This patch fixes the problem by updating all of the matching reserved
      ASIDs when we hit on the slowpath (i.e. in step 3 above). This keeps
      the reserved ASIDs in-sync with the mm and avoids the problem.
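      
      A standalone sketch of that idea (not the kernel code itself): on the slow
      path, every per-CPU copy of the stale reserved ASID is rewritten to the new
      {generation, asid} pair so that later rollovers keep matching. NR_CPUS, the
      {generation, asid} packing and the data layout below are simplifications.
      
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>
        
        #define NR_CPUS 4
        
        /* per-CPU reserved {generation, asid} values, generation in the high bits */
        static uint64_t reserved_asids[NR_CPUS];
        
        static bool update_reserved_asids(uint64_t old_id, uint64_t new_id)
        {
                bool hit = false;
        
                /* Do not stop at the first match: every CPU holding the stale
                 * copy must be updated, or a later rollover misses it (step 6). */
                for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                        if (reserved_asids[cpu] == old_id) {
                                reserved_asids[cpu] = new_id;
                                hit = true;
                        }
                }
                return hit;
        }
        
        int main(void)
        {
                uint64_t g = 1, a = 0x2a;
        
                reserved_asids[0] = (g << 16) | a;          /* CPU x holds {g, a}     */
                update_reserved_asids((g << 16) | a,
                                      ((g + 1) << 16) | a); /* mm now uses {g + 1, a} */
                printf("reserved_asids[0] = %#llx\n",
                       (unsigned long long)reserved_asids[0]);
                return 0;
        }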
      Reported-by: Tony Thompson <anthony.thompson@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: KASAN depends on !(ARM64_16K_PAGES && ARM64_VA_BITS_48) · f1b9032f
      Andrey Ryabinin committed
      On KASAN + 16K_PAGES + 48BIT_VA
       arch/arm64/mm/kasan_init.c: In function ‘kasan_early_init’:
       include/linux/compiler.h:484:38: error: call to ‘__compiletime_assert_95’ declared with attribute error: BUILD_BUG_ON failed: !IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE)
          _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
      
      Currently KASAN will not work with 16K_PAGES and 48BIT_VA, so forbid
      such a configuration to avoid the above build failure.
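      
      A standalone check of the constraint that trips the BUILD_BUG_ON above,
      using the constants a 16K-page, 48-bit-VA arm64 kernel of this era would
      use (shadow region starting at VA_START and spanning 1/8 of the VA space,
      PGDIR_SHIFT of 47); treat these values as assumptions, not a reference:
      
        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>
        
        int main(void)
        {
                const unsigned va_bits          = 48;
                const uint64_t va_start         = ~UINT64_C(0) << va_bits;           /* 0xffff000000000000 */
                const uint64_t kasan_shadow_end = va_start + (UINT64_C(1) << (va_bits - 3));
                const uint64_t pgdir_size       = UINT64_C(1) << 47;                 /* 16K granule, 4 levels */
        
                printf("KASAN_SHADOW_END = %#" PRIx64 ", PGDIR_SIZE-aligned: %s\n",
                       kasan_shadow_end,
                       (kasan_shadow_end & (pgdir_size - 1)) ? "no" : "yes");
                return 0;
        }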
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reported-by: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  9. 25 Nov 2015, 4 commits
    • arm64: efi: correctly map runtime regions · 3b12acf4
      Mark Rutland committed
      The kernel may use a page granularity of 4K, 16K, or 64K depending on
      configuration.
      
      When mapping EFI runtime regions, we use memrange_efi_to_native to round
      the physical base address of a region down to a kernel page boundary,
      and round the size up to a kernel page boundary, adding the residue left
      over from rounding down the physical base address. We do not round down
      the virtual base address.
      
      In __create_mapping we account for the offset of the virtual base from a
      granule boundary, adding the residue to the size before rounding the
      base down to said granule boundary.
      
      Thus we account for the residue twice and, when the residue is non-zero,
      cause __create_mapping to map an additional page at the end of the
      region. Depending on the memory map, this page may be in a region we are
      not intended/permitted to map, or may clash with a different region that
      we wish to map. In typical cases, mapping the next item in the memory
      map will overwrite the erroneously created entry, as we sort the memory
      map in the stub.
      
      As __create_mapping can cope with base addresses which are not page
      aligned, we can instead rely on it to map the region appropriately, and
      simplify efi_virtmap_init by removing the unnecessary code.
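      
      A standalone illustration (not the EFI stub or mapping code) of how the
      residue ends up counted twice; PAGE_SIZE and the example addresses below
      are arbitrary:
      
        #include <stdint.h>
        #include <stdio.h>
        
        #define PAGE_SIZE  UINT64_C(0x1000)
        #define PAGE_MASK  (~(PAGE_SIZE - 1))
        
        static uint64_t round_up_page(uint64_t x) { return (x + PAGE_SIZE - 1) & PAGE_MASK; }
        
        int main(void)
        {
                uint64_t phys = 0x40001200, virt = 0xffff000040001200, size = 0x800;
        
                /* memrange_efi_to_native-style rounding: fold the physical base's
                 * residue into the size, but leave the virtual base unaligned. */
                uint64_t residue      = phys & ~PAGE_MASK;
                uint64_t rounded_size = round_up_page(size + residue);
        
                /* __create_mapping-style handling then adds the (identical) offset
                 * of the unaligned virtual base again before rounding. */
                uint64_t mapped = round_up_page(rounded_size + (virt & ~PAGE_MASK));
        
                printf("region needs %llu page(s), old code maps %llu page(s)\n",
                       (unsigned long long)(rounded_size / PAGE_SIZE),
                       (unsigned long long)(mapped / PAGE_SIZE));
                return 0;
        }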
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: mm: fix fault_info table xFSC decoding · c03784ee
      Mark Rutland committed
      We are missing descriptions for some valid xFSC values in the fault info
      table (e.g. "TLB conflict abort"), and have erroneous descriptions for
      reserved values (e.g. "asynchronous external abort", "debug event").
      
      This patch adds the missing xFSC values, and removes erroneous decoding
      of values reserved by the architecture, as described in ARM DDI 0487A.h.
      
      At the same time, it fixes the unbalanced brackets for the synchronous
      parity error strings in the table.
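      
      A standalone sketch of what such a decode table looks like: the low six
      bits of the ESR (the xFSC) index an array of description strings. The
      layout, the example ESR and most entries below are illustrative;
      "TLB conflict abort" is one of the values the patch describes adding.
      
        #include <stdio.h>
        
        /* Illustrative xFSC-to-name table, indexed by ESR[5:0]. */
        static const char *fault_name[0x40] = {
                [0x04] = "level 0 translation fault",
                [0x30] = "TLB conflict abort",
                /* unlisted codes fall back to "unknown" */
        };
        
        int main(void)
        {
                unsigned int esr = 0x96000030;   /* example data abort, xFSC = 0x30 */
                unsigned int fsc = esr & 0x3f;
        
                printf("xFSC %#x: %s\n", fsc,
                       fault_name[fsc] ? fault_name[fsc] : "unknown");
                return 0;
        }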
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: early_alloc: Fix check for allocation failure · 7142392d
      Suzuki K. Poulose committed
      In early_alloc we check whether memblock_alloc failed by testing the
      virtual address derived from its result, a check which can never trigger.
      This patch fixes it to check the returned physical address itself for
      failure.
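      
      A sketch of the corrected shape, close to what the description above
      implies (kernel-style pseudo-code, not a verbatim copy of the patch):
      
        /* Check the physical address memblock_alloc() returns (0 on failure),
         * not the virtual address derived from it, which is never NULL. */
        static void __init *early_alloc(unsigned long sz)
        {
                phys_addr_t phys = memblock_alloc(sz, sz);
                void *ptr;
        
                BUG_ON(!phys);
                ptr = __va(phys);
                memset(ptr, 0, sz);
                return ptr;
        }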
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: kvm: report original PAR_EL1 upon panic · fbb4574c
      Mark Rutland committed
      If we call __kvm_hyp_panic while a guest context is active, we call
      __restore_sysregs before acquiring the system register values for the
      panic, in the process throwing away the PAR_EL1 value at the point of
      the panic.
      
      This patch modifies __kvm_hyp_panic to stash the PAR_EL1 value prior to
      restoring host register values, enabling us to report the original
      values at the point of the panic.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>