1. 09 Aug 2017, 3 commits
  2. 09 May 2017, 2 commits
  3. 06 Apr 2017, 1 commit
  4. 24 Nov 2016, 1 commit
    • arm64: Remove I-cache invalidation from flush_cache_range() · ee6a7fce
      Committed by Catalin Marinas
      The flush_cache_range() function (similarly for flush_cache_page()) is
      called when the kernel is changing an existing VA->PA mapping range to
      either a new PA or to different attributes. Since ARMv8 has PIPT-like
      D-caches, this function does not need to perform any D-cache
      maintenance. I-cache maintenance is already handled via set_pte_at(), and
      flush_cache_range() cannot in any case guarantee that no cache lines
      remain after invalidation, because of speculative loads.
      
      This patch makes flush_cache_range() a no-op.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ee6a7fce
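
      A minimal sketch of the behaviour described above (not the verbatim
      kernel source): with PIPT-like D-caches and I-cache maintenance done in
      set_pte_at(), both helpers can simply do nothing.

          /*
           * Sketch only: on arm64, changing an existing VA->PA mapping needs
           * no D-cache maintenance, and I-cache maintenance is handled in
           * set_pte_at(), so these become no-ops.
           */
          static inline void flush_cache_range(struct vm_area_struct *vma,
                                               unsigned long start,
                                               unsigned long end)
          {
          }

          static inline void flush_cache_page(struct vm_area_struct *vma,
                                              unsigned long user_addr,
                                              unsigned long pfn)
          {
          }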
  5. 08 Nov 2016, 1 commit
    • arm64: Add uprobe support · 9842ceae
      Committed by Pratyush Anand
      This patch adds support for uprobe on ARM64 architecture.
      
      Unit tests for the following have been done so far and have been found
      working:
          1. Step-able instructions, such as sub, ldr, add, etc.
          2. Simulation-able instructions, such as ret, cbnz, cbz, etc.
          3. uretprobe
          4. Reject-able instructions, such as sev, wfe, etc.
          5. Trapped and aborted XOL path
          6. Probe at an unaligned user address
          7. longjump test cases
      
      Currently it does not support aarch32 instruction probing.
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      9842ceae
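
      A simplified, illustrative sketch of the classification the arch hook in
      the commit above has to perform; helper names such as decode_insn() are
      hypothetical, and the real arm64 code differs in detail.

          enum probe_insn { INSN_REJECTED, INSN_GOOD_NO_SLOT, INSN_GOOD };

          int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe,
                                       struct mm_struct *mm, unsigned long addr)
          {
              /* AArch64 instructions are 4-byte aligned */
              if (addr & 0x3)
                  return -EINVAL;

              switch (decode_insn(auprobe->insn)) {  /* hypothetical decoder */
              case INSN_REJECTED:       /* e.g. sev, wfe: cannot be probed */
                  return -EINVAL;
              case INSN_GOOD_NO_SLOT:   /* e.g. ret, cbz, cbnz: simulated */
              case INSN_GOOD:           /* e.g. add, sub, ldr: stepped out of line */
                  return 0;
              }
              return -EINVAL;
          }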
  6. 22 Aug 2016, 1 commit
    • arm64: mm: convert __dma_* routines to use start, size · d34fdb70
      Committed by Kwangwoo Lee
      The __dma_* routines have been converted to use start and size instead of
      start and end addresses. The patch was originally for adding
      __clean_dcache_area_poc(), which will be used in the pmem driver to clean
      the dcache to the PoC (Point of Coherency) in arch_wb_cache_pmem().
      
      The functionality of __clean_dcache_area_poc() is equivalent to
      __dma_clean_range(); the difference is that __dma_clean_range() takes an
      end address, whereas __clean_dcache_area_poc() takes a size.
      
      Thus, __clean_dcache_area_poc() has been revised to fall through to
      __dma_clean_range(), following the change that makes the __dma_* routines
      take start and size instead of start and end.
      
      As a consequence of using start and size, the name of __dma_* routines
      has also been altered following the terminology below:
          area: takes a start and size
          range: takes a start and end
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Kwangwoo Lee <kwangwoo.lee@sk.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      d34fdb70
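
      The area/range convention above can be sketched as follows; the real
      routines are assembly in arch/arm64/mm/cache.S, and
      clean_dcache_line_poc() is a hypothetical per-line primitive used only
      for illustration.

          /* "area" form: takes a start address and a size */
          static void __clean_dcache_area_poc(void *start, size_t size)
          {
              /* clean each D-cache line in [start, start + size) to the PoC */
              for (size_t off = 0; off < size; off += 64)  /* 64: assumed line size */
                  clean_dcache_line_poc((char *)start + off);
          }

          /* "range" form: takes start and end, now a thin wrapper */
          static void __dma_clean_range(void *start, void *end)
          {
              __clean_dcache_area_poc(start, (char *)end - (char *)start);
          }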
  7. 21 Mar 2016, 1 commit
  8. 22 Feb 2016, 1 commit
    • asm-generic: Consolidate mark_rodata_ro() · e267d97b
      Committed by Kees Cook
      Instead of defining mark_rodata_ro() in each architecture, consolidate it.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Gross <agross@codeaurora.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Ashok Kumar <ashoks@broadcom.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: David Brown <david.brown@linaro.org>
      Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Emese Revfy <re.emese@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: James E.J. Bottomley <jejb@parisc-linux.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luis R. Rodriguez <mcgrof@suse.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Mathias Krause <minipli@googlemail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
      Cc: PaX Team <pageexec@freemail.hu>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Stephen Boyd <sboyd@codeaurora.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hp.com>
      Cc: kernel-hardening@lists.openwall.com
      Cc: linux-arch <linux-arch@vger.kernel.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-parisc@vger.kernel.org
      Link: http://lkml.kernel.org/r/1455748879-21872-2-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e267d97b
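
      The consolidation pattern, roughly (simplified, not verbatim kernel
      code): generic code declares and calls mark_rodata_ro() once, and each
      architecture only supplies the body. The mark_readonly() wrapper name
      below is illustrative.

          /* generic header: single declaration (e.g. include/linux/init.h) */
          #ifdef CONFIG_DEBUG_RODATA
          void mark_rodata_ro(void);
          #endif

          /* single generic call site during late boot (simplified) */
          static void mark_readonly(void)
          {
          #ifdef CONFIG_DEBUG_RODATA
              mark_rodata_ro();  /* arch-specific implementation */
          #endif
          }

          /* per-architecture implementation (body elided) */
          void mark_rodata_ro(void)
          {
              /* remap the kernel's .rodata section read-only */
          }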
  9. 17 Dec 2015, 1 commit
  10. 07 Oct 2015, 1 commit
  11. 19 May 2015, 1 commit
  12. 22 Jan 2015, 1 commit
  13. 01 Dec 2014, 1 commit
  14. 08 Sep 2014, 1 commit
  15. 24 Jul 2014, 1 commit
    • arm64: Fix barriers used for page table modifications · 7f0b1bf0
      Committed by Catalin Marinas
      The architecture specification states that both DSB and ISB are required
      between page table modifications and subsequent memory accesses using the
      corresponding virtual address. When TLB invalidation takes place, the
      tlb_flush_* functions already have the necessary barriers. However, there are
      other functions like create_mapping() for which this is not the case.
      
      The patch adds the DSB+ISB instructions in the set_pte() function for
      valid kernel mappings. The invalid pte case is handled by tlb_flush_*
      and the user mappings in general have a corresponding update_mmu_cache()
      call containing a DSB. Even when update_mmu_cache() isn't called, the
      kernel can still cope with an unlikely spurious page fault by
      re-executing the instruction.
      
      In addition, the set_pmd() and set_pud() functions gain an ISB for
      architecture compliance when block mappings are created.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Leif Lindholm <leif.lindholm@linaro.org>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org>
      7f0b1bf0
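
      A sketch of the resulting set_pte() behaviour, close to but not verbatim
      the arm64 code of that time:

          static inline void set_pte(pte_t *ptep, pte_t pte)
          {
              *ptep = pte;

              /*
               * Valid kernel mappings need DSB+ISB here; invalid ptes are
               * covered by the tlb_flush_*() barriers, and user mappings by
               * the DSB in update_mmu_cache().
               */
              if (pte_valid_not_user(pte)) {
                  dsb(ishst);
                  isb();
              }
          }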
  16. 10 May 2014, 1 commit
  17. 28 Feb 2014, 1 commit
  18. 05 Feb 2014, 1 commit
  19. 08 Jun 2013, 1 commit
  20. 24 Nov 2012, 1 commit
  21. 17 Sep 2012, 1 commit