1. 05 Oct 2015 (1 commit)
    • arm64: readahead: fault retry breaks mmap file read random detection · 569ba74a
      Committed by Mark Salyzyn
      This is the arm64 portion of commit 45cac65b ("readahead: fault
      retry breaks mmap file read random detection"), which was absent from
      the initial port and has since gone unnoticed. The original commit says:
      
      > .fault can now retry.  The retry can break the state machine of .fault: in
      > filemap_fault, if the page misses, ra->mmap_miss is increased.  On the
      > second try, since the page is now in the page cache, ra->mmap_miss is
      > decreased.  Both happen within a single fault, so random mmap file access
      > can no longer be detected.
      >
      > Add a new flag to indicate that .fault has been tried once.  On the second
      > try, skip the ra->mmap_miss decrease.  The filemap_fault state machine is
      > fine with this.
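      
      The arm64 handler adopts the same pattern as the original core change:
      on a retried fault, drop FAULT_FLAG_ALLOW_RETRY and set FAULT_FLAG_TRIED
      before looping. A minimal sketch of that retry path (generic mm flag
      names; the surrounding handler code is elided):
      
        if (fault & VM_FAULT_RETRY) {
                if (mm_flags & FAULT_FLAG_ALLOW_RETRY) {
                        /*
                         * Mark the fault as tried once so that
                         * filemap_fault() skips the ra->mmap_miss
                         * decrement on the second pass, keeping
                         * random-access detection intact.
                         */
                        mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
                        mm_flags |= FAULT_FLAG_TRIED;
                        goto retry;
                }
        }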
      
      With this change, Mark reports that:
      
      > Random read improves by 250%, sequential read improves by 40%, and
      > random write by 400% to an eMMC device with dm crypto wrapped around it.
      
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Mark Salyzyn <salyzyn@android.com>
      Signed-off-by: Riley Andrews <riandrews@android.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  2. 14 Sep 2015 (1 commit)
  3. 20 Aug 2015 (1 commit)
  4. 05 Aug 2015 (1 commit)
    • arm64: mm: ensure patched kernel text is fetched from PoU · 8ec41987
      Committed by Will Deacon
      The arm64 booting document requires that the bootloader has cleaned the
      kernel image to the PoC. However, when a CPU re-enters the kernel due to
      either a CPU hotplug "on" event or resuming from a low-power state (e.g.
      cpuidle), the kernel text may in fact be dirty at the PoU due to things
      like alternative patching or even module loading.
      
      Thanks to I-cache speculation with the MMU off, stale instructions could
      be fetched prior to enabling the MMU, potentially leading to crashes
      when executing regions of code that have been modified at runtime.
      
      This patch addresses the issue by ensuring that the local I-cache is
      invalidated immediately after a CPU has enabled its MMU but before
      jumping out of the identity mapping. Any stale instructions fetched from
      the PoC will then be discarded and refetched correctly from the PoU.
      Patching kernel text executed prior to the MMU being enabled is
      prohibited, so the early entry code will always be clean.
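      
      The required maintenance is an "invalidate all to PoU" on the local
      I-cache plus barriers. A hedged sketch of the sequence as kernel-style
      inline assembly (the helper name is illustrative, not the kernel's):
      
        static inline void local_icache_inval_all(void)
        {
                asm volatile(
                "       ic      iallu\n"        /* invalidate all I-cache lines to PoU */
                "       dsb     nsh\n"          /* wait for the invalidation to complete */
                "       isb"                    /* resynchronise the instruction stream */
                : : : "memory");
        }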
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  5. 03 Aug 2015 (1 commit)
  6. 28 Jul 2015 (2 commits)
  7. 27 Jul 2015 (9 commits)
  8. 07 Jul 2015 (1 commit)
  9. 04 Jul 2015 (1 commit)
  10. 01 Jul 2015 (2 commits)
  11. 25 Jun 2015 (1 commit)
    • mm/hugetlb: reduce arch dependent code about huge_pmd_unshare · e81f2d22
      Committed by Zhang Zhen
      Currently we have many duplicate definitions of huge_pmd_unshare: on
      every architecture the function just returns 0 when
      CONFIG_ARCH_WANT_HUGE_PMD_SHARE is not set.
      
      This patch puts the default implementation in mm/hugetlb.c and lets these
      architectures use the common code.
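      
      The common fallback is tiny; roughly (signature as used at the time,
      with the config plumbing simplified):
      
        #ifndef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
        /* Default in mm/hugetlb.c: PMD sharing is off, so nothing to unshare. */
        int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
        {
                return 0;
        }
        #endif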
      Signed-off-by: Zhang Zhen <zhenzhang.zhang@huawei.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: James Yang <James.Yang@freescale.com>
      Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 19 Jun 2015 (2 commits)
  13. 17 Jun 2015 (1 commit)
    • arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP · b9bcc919
      Committed by Dave P Martin
      The memmap freeing code in free_unused_memmap() computes the end of
      each memblock by adding the memblock size onto the base.  However,
      if SPARSEMEM is enabled then the value (start) used for the base
      may already have been rounded downwards to work out which memmap
      entries to free after the previous memblock.
      
      This may cause memmap entries that are in use to get freed.
      
      In general, you're not likely to hit this problem unless there
      are at least 2 memblocks and one of them is not aligned to a
      sparsemem section boundary.  Note that carve-outs can increase
      the number of memblocks by splitting the regions listed in the
      device tree.
      
      This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
      vmemmap code deals with freeing the unused regions of the memmap
      instead of requiring the arch code to do it.
      
      This patch gets the memblock base out of the memblock directly when
      computing the block end address to ensure the correct value is used.
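      
      The change effectively boils down to one line, sketched here
      (simplified from arch/arm64/mm/init.c):
      
        /* before: 'start' may already be rounded down for SPARSEMEM */
        prev_end = ALIGN(start + __phys_to_pfn(reg->size), PAGES_PER_SECTION);
      
        /* after: derive the end from the memblock itself */
        prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size), PAGES_PER_SECTION);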
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  14. 15 Jun 2015 (1 commit)
  15. 12 Jun 2015 (1 commit)
  16. 05 Jun 2015 (1 commit)
  17. 02 Jun 2015 (2 commits)
  18. 19 May 2015 (2 commits)
    • arm64: kill flush_cache_all() · 68234df4
      Committed by Mark Rutland
      The documented semantics of flush_cache_all are not possible to provide
      for arm64 (short of flushing the entire physical address space by VA),
      and there are currently no users; KVM uses VA maintenance exclusively,
      cpu_reset is never called, and the only two users outside of arch code
      cannot be built for arm64.
      
      While cpu_soft_reset and related functions (which call flush_cache_all)
      were thought to be useful for kexec, their current implementations only
      serve to mask bugs. For correctness kexec will need to perform
      maintenance by VA anyway to account for system caches, line migration,
      and other subtleties of the cache architecture. As the extent of this
      cache maintenance will be kexec-specific, it should probably live in the
      kexec code.
      
      This patch removes flush_cache_all, and related unused components,
      preventing further abuse.
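      
      For reference, "maintenance by VA" means walking the range one cache
      line at a time; a hypothetical sketch (assuming a 64-byte line size,
      where real code would derive it from CTR_EL0):
      
        static void dcache_clean_inval_range(unsigned long start, unsigned long end)
        {
                unsigned long line = 64;        /* assumption: read CTR_EL0 in real code */
                unsigned long addr;
      
                for (addr = start & ~(line - 1); addr < end; addr += line)
                        asm volatile("dc civac, %0" : : "r" (addr) : "memory");
                asm volatile("dsb sy" : : : "memory");
        }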
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Geoff Levand <geoff@infradead.org>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • mm/fault, arch: Use pagefault_disable() to check for disabled pagefaults in the handler · 70ffdb93
      Committed by David Hildenbrand
      Introduce faulthandler_disabled() and use it to check for irq context and
      disabled pagefaults (via pagefault_disable()) in the pagefault handlers.
      
      Please note that we keep the in_atomic() checks in place, to detect
      whether we are in irq context (in which case preemption is always
      properly disabled).
      
      In contrast, preempt_disable() should never be used to disable pagefaults.
      With !CONFIG_PREEMPT_COUNT, preempt_disable() doesn't modify the preempt
      counter, and therefore the result of in_atomic() differs.
      We validate that condition by using might_fault() checks when calling
      might_sleep().
      
      Therefore, add a comment to faulthandler_disabled(), describing why this
      is needed.
      
      faulthandler_disabled() and pagefault_disable() are defined in
      linux/uaccess.h, so let's properly add that include to all relevant files.
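      
      The resulting helpers in linux/uaccess.h are small; roughly:
      
        /*
         * Is the pagefault handler disabled? If so, user access methods
         * will not sleep.
         */
        #define pagefault_disabled()    (current->pagefault_disabled != 0)
      
        /*
         * The pagefault handler is in general disabled by pagefault_disable()
         * or when in irq context (via in_atomic()).
         */
        #define faulthandler_disabled() (pagefault_disabled() || in_atomic())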
      
      This patch is based on a patch from Thomas Gleixner.
      Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: David.Laight@ACULAB.COM
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: airlied@linux.ie
      Cc: akpm@linux-foundation.org
      Cc: benh@kernel.crashing.org
      Cc: bigeasy@linutronix.de
      Cc: borntraeger@de.ibm.com
      Cc: daniel.vetter@intel.com
      Cc: heiko.carstens@de.ibm.com
      Cc: herbert@gondor.apana.org.au
      Cc: hocko@suse.cz
      Cc: hughd@google.com
      Cc: mst@redhat.com
      Cc: paulus@samba.org
      Cc: ralf@linux-mips.org
      Cc: schwidefsky@de.ibm.com
      Cc: yang.shi@windriver.com
      Link: http://lkml.kernel.org/r/1431359540-32227-7-git-send-email-dahi@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  19. 05 May 2015 (1 commit)
  20. 30 Apr 2015 (1 commit)
    • arm64: add missing PAGE_ALIGN() to __dma_free() · 2cff98b9
      Committed by Dean Nelson
      __dma_alloc() does a PAGE_ALIGN() on the passed-in size argument before
      doing anything else; __dma_free() does not. Because of that, memory can
      be leaked whenever size is not an integer multiple of PAGE_SIZE.
      
      The solution is to add a PAGE_ALIGN() to __dma_free(), as is done in
      __dma_alloc().
      
      Additionally, this patch removes a redundant PAGE_ALIGN() from
      __dma_alloc_coherent(), since __dma_alloc_coherent() can only be called
      from __dma_alloc(), which already does a PAGE_ALIGN() before the call.
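      
      The asymmetry and the fix, sketched with abbreviated bodies (the real
      functions live in arch/arm64/mm/dma-mapping.c):
      
        static void *__dma_alloc(struct device *dev, size_t size,
                                 dma_addr_t *dma_handle, gfp_t flags,
                                 struct dma_attrs *attrs)
        {
                size = PAGE_ALIGN(size);        /* e.g. 100 bytes -> one full page */
                /* ... allocate and map 'size' bytes ... */
        }
      
        static void __dma_free(struct device *dev, size_t size, void *vaddr,
                               dma_addr_t dma_handle, struct dma_attrs *attrs)
        {
                size = PAGE_ALIGN(size);        /* the fix: free what was actually allocated */
                /* ... unmap and free 'size' bytes ... */
        }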
      
      Cc: stable@vger.kernel.org
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Dean Nelson <dnelson@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  21. 27 Apr 2015 (1 commit)
    • arm64: dma-mapping: always clear allocated buffers · 6829e274
      Committed by Marek Szyprowski
      Buffers allocated by dma_alloc_coherent() are always zeroed on Alpha,
      ARM (32-bit), MIPS, PowerPC, x86/x86_64, and probably other
      architectures. It turns out that some drivers rely on this 'feature'.
      The allocated buffer might also be exposed to userspace via a
      dma_mmap() call, so clearing it is desirable from a security point of
      view, to avoid exposing random memory to userspace. This patch unifies
      the arm64 dma_alloc_coherent() behaviour with the other implementations
      by unconditionally zeroing the allocated buffer.
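      
      A sketch of the allocation path after the change (swiotlb_alloc_coherent()
      is the underlying allocator on this path; error handling elided):
      
        addr = swiotlb_alloc_coherent(dev, size, dma_handle, flags);
        if (addr)
                memset(addr, 0, size);  /* never hand stale memory to drivers or userspace */
        return addr;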
      
      Cc: <stable@vger.kernel.org> # v3.14+
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  22. 15 Apr 2015 (4 commits)
  23. 23 Mar 2015 (1 commit)
  24. 21 Mar 2015 (1 commit)