1. 28 Nov 2014, 5 commits
  2. 27 Nov 2014, 3 commits
  3. 26 Nov 2014, 2 commits
  4. 25 Nov 2014, 16 commits
  5. 21 Nov 2014, 8 commits
  6. 20 Nov 2014, 2 commits
    • arm64: percpu: Implement this_cpu operations · f97fc810
      Authored by Steve Capper
      The generic this_cpu operations disable interrupts to ensure that the
      requested operation is protected from pre-emption. For arm64, this is
      overkill and can hurt throughput and latency.
      
      This patch provides arm64 specific implementations for the this_cpu
      operations. Rather than disable interrupts, we use the exclusive
      monitor or atomic operations as appropriate.
      
      The following operations are implemented: add, add_return, and, or,
      read, write, xchg. We also wire up a cmpxchg implementation from
      cmpxchg.h.
      
      Testing was performed using the percpu_test module and hackbench on a
      Juno board running 3.18-rc4.
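      
      As an illustration of the approach, here is a minimal sketch of a 32-bit
      per-cpu add built on a load-exclusive/store-exclusive retry loop instead
      of an irq disable/enable pair. It is not the patch itself: the function
      name, the plain C types and the lack of size dispatch are simplifications
      made here.
      
        #include <stdint.h>
        
        /* Hypothetical helper: add 'val' to a 32-bit per-cpu slot. The
         * ldxr/stxr pair retries until the store-exclusive succeeds, so the
         * read-modify-write is atomic without masking interrupts.
         */
        static inline void percpu_add_u32(uint32_t *ptr, uint32_t val)
        {
                uint32_t newval, fail;
        
                asm volatile(
                "1:     ldxr    %w[newval], %[ptr]\n"            /* load-exclusive */
                "       add     %w[newval], %w[newval], %w[val]\n"
                "       stxr    %w[fail], %w[newval], %[ptr]\n"  /* try store-exclusive */
                "       cbnz    %w[fail], 1b"                    /* retry if it failed */
                : [newval] "=&r" (newval), [fail] "=&r" (fail), [ptr] "+Q" (*ptr)
                : [val] "r" (val)
                : "memory");
        }
      
      The real patch dispatches on the access size and provides and, or and
      xchg variants along the same lines, with cmpxchg wired up from cmpxchg.h.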
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: pgalloc: consistently use PGALLOC_GFP · 15670ef1
      Authored by Mark Rutland
      We currently allocate different levels of page tables with a variety of
      differing flags, and the PGALLOC_GFP flags, intended for use when
      allocating any level of page table, are only used for ptes in
      pte_alloc_one. On x86, PGALLOC_GFP is used for all page table
      allocations.
      
      Currently the major differences are:
      
      * __GFP_NOTRACK -- Needed to ensure page tables are always accessible in
        the presence of kmemcheck to prevent recursive faults. Currently
        kmemcheck cannot be selected for arm64.
      
      * __GFP_REPEAT -- Causes the allocator to try to reclaim pages and retry
        upon a failure to allocate.
      
      * __GFP_ZERO -- Sometimes passed explicitly, sometimes zalloc variants
        are used.
      
      While we've not encountered any issues so far, it would be preferable to
      be consistent. This patch ensures all levels of page table are allocated
      in the same manner, with PGALLOC_GFP.
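      
      The shape of the change is roughly as follows (a sketch rather than the
      actual diff; the exact flag combination and the pmd_alloc_one example are
      assumptions based on the flags listed above):
      
        /* One GFP flag set for every page-table level, instead of ad-hoc
         * GFP_KERNEL / zalloc combinations at each call site.
         */
        #define PGALLOC_GFP     (GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO)
        
        static pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
        {
                return (pmd_t *)__get_free_page(PGALLOC_GFP);
        }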
      
      Cc: Steve Capper <steve.capper@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  7. 19 Nov 2014, 1 commit
    • arm64/mm: Remove hack in mmap randomize layout · d6c763af
      Authored by Yann Droneaud
      Since commit 8a0a9bd4 ('random: make get_random_int() more
      random'), get_random_int() returns a random value for each call,
      so the comment and hack introduced in mmap_rnd() as part of commit
      1d18c47c ('arm64: MMU fault handling and page table management')
      are now incorrect.
      
      Commit 1d18c47c seems to use the same hack introduced by
      commit a5adc91a ('powerpc: Ensure random space between stack
      and mmaps'), which was later copied in commit 5a0efea0 ('sparc64: Sharpen
      address space randomization calculations.').
      
      Both architectures were cleaned up as part of commit
      fa8cbaaf ('powerpc+sparc64/mm: Remove hack in mmap randomize
      layout'), as the hack is no longer needed since commit 8a0a9bd4.
      
      This patch therefore removes the comment and the hack around
      get_random_int() from AArch64's mmap_rnd().
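      
      After the cleanup, mmap_rnd() can use get_random_int() directly. A rough
      sketch (illustrative only; the exact mask and the PF_RANDOMIZE check are
      assumptions here, not a copy of the arm64 source):
      
        static unsigned long mmap_rnd(void)
        {
                unsigned long rnd = 0;
        
                /* get_random_int() has been random per call since commit
                 * 8a0a9bd4, so no extra mixing of the value is needed.
                 */
                if (current->flags & PF_RANDOMIZE)
                        rnd = (unsigned long)get_random_int() & STACK_RND_MASK;
        
                return rnd << PAGE_SHIFT;
        }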
      
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Dan McGee <dpmcgee@gmail.com>
      Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  8. 17 Nov 2014, 2 commits
    • arm64: Add COMPAT_HWCAP_LPAE · 7d57511d
      Authored by Catalin Marinas
      Commit a469abd0 (ARM: elf: add new hwcap for identifying atomic
      ldrd/strd instructions) introduced HWCAP_LPAE for 32-bit ARM
      applications. As LPAE is always present on arm64, report the
      corresponding compat HWCAP to user space.
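      
      The gist, sketched with an assumed bit value mirroring the 32-bit ARM
      HWCAP_LPAE definition (treat this as illustrative rather than the exact
      patch):
      
        /* AArch32 EL0 on an ARMv8 CPU always has LPAE, so the bit can be set
         * unconditionally in the compat hwcap mask reported via AT_HWCAP.
         */
        #define COMPAT_HWCAP_LPAE       (1 << 20)       /* assumed bit position */
        
        unsigned int compat_elf_hwcap __read_mostly =
                COMPAT_ELF_HWCAP_DEFAULT | COMPAT_HWCAP_LPAE;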
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: <stable@vger.kernel.org> # 3.11+
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • mmu_gather: move minimal range calculations into generic code · fb7332a9
      Authored by Will Deacon
      On architectures with hardware broadcasting of TLB invalidation messages,
      it makes sense to reduce the range of the mmu_gather structure when
      unmapping page ranges based on the dirty address information passed to
      tlb_remove_tlb_entry.
      
      arm64 already does this by directly manipulating the start/end fields
      of the gather structure, but this confuses the generic code which
      does not expect these fields to change and can end up calculating
      invalid, negative ranges when forcing a flush in zap_pte_range.
      
      This patch moves the minimal range calculation out of the arm64 code
      and into the generic implementation, simplifying zap_pte_range in the
      process (which no longer needs to care about start/end, since they will
      point to the appropriate ranges already). With the range being tracked
      by core code, the need_flush flag is dropped in favour of checking that
      the end of the range has actually been set.
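      
      Conceptually, the generic side ends up with helpers along these lines (a
      simplified sketch; the helper names here are made up, not the actual
      asm-generic code):
      
        /* Grow the gather window as entries are unmapped; a non-zero end then
         * doubles as "there is something to flush", replacing need_flush.
         */
        static inline void tlb_adjust_range(struct mmu_gather *tlb,
                                            unsigned long addr, unsigned long size)
        {
                tlb->start = min(tlb->start, addr);
                tlb->end   = max(tlb->end, addr + size);
        }
        
        static inline bool tlb_needs_flush(struct mmu_gather *tlb)
        {
                return tlb->end != 0;
        }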
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
      Cc: Michal Simek <monstr@monstr.eu>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  9. 14 Nov 2014, 1 commit
    • arm64: entry: use ldp/stp instead of push/pop when saving/restoring regs · 63648dd2
      Authored by Will Deacon
      The push/pop instructions can be suboptimal when saving/restoring large
      amounts of data to/from the stack, for example on entry/exit from the
      kernel. This is because:
      
        (1) They act on descending addresses (i.e. the newly decremented sp),
            which may defeat some hardware prefetchers
      
        (2) They introduce an implicit dependency between each instruction, as
            the sp has to be updated in order to resolve the address of the
            next access.
      
      This patch removes the push/pop instructions from our kernel entry/exit
      macros in favour of ldp/stp plus offset.
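      
      For illustration only (not the actual entry macros, and with arbitrary
      registers and frame size), the two styles of saving four registers look
      roughly like this in arm64 assembly:
      
        // push style: each store waits for the previous sp update, because the
        // pre-indexed writeback produces the base address of the next store.
                stp     x0, x1, [sp, #-16]!
                stp     x2, x3, [sp, #-16]!
        
        // stp-plus-offset style: a single sp adjustment, then independent,
        // offset-addressed stores with no chain of sp writebacks.
                sub     sp, sp, #32
                stp     x2, x3, [sp, #16]
                stp     x0, x1, [sp, #0]
      
      The second form lets the stores issue independently and makes the final
      sp value available immediately, addressing both points (1) and (2) above.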
      Signed-off-by: Will Deacon <will.deacon@arm.com>