1. 02 Jun 2015, 1 commit
  2. 04 May 2015, 4 commits
  3. 18 Apr 2015, 1 commit
  4. 15 Apr 2015, 10 commits
  5. 10 Apr 2015, 1 commit
  6. 02 Apr 2015, 2 commits
  7. 30 Mar 2015, 1 commit
  8. 28 Mar 2015, 2 commits
  9. 18 Mar 2015, 3 commits
  10. 13 Mar 2015, 1 commit
  11. 11 Mar 2015, 2 commits
  12. 10 Mar 2015, 1 commit
  13. 23 Feb 2015, 1 commit
  14. 20 Feb 2015, 1 commit
  15. 18 Feb 2015, 1 commit
  16. 12 Feb 2015, 2 commits
    • mm: fix false-positive warning on exit due mm_nr_pmds(mm) · b30fe6c7
      Committed by Kirill A. Shutemov
      The problem is that we check nr_ptes/nr_pmds in exit_mmap(), which happens
      *before* pgd_free().  If an arch allocates pte/pmd pages in pgd_alloc() and
      frees them in pgd_free(), the counters are off by the time of the checks.
      
      We tried to work around this by offsetting the expected counter value
      according to FIRST_USER_ADDRESS for both nr_ptes and nr_pmds in
      exit_mmap(), but it doesn't work in some cases:
      
      1. ARM with LPAE enabled also has a non-zero USER_PGTABLES_CEILING, but
         the upper addresses are occupied by huge pmd entries, so the trick of
         offsetting the expected counter value gets really ugly: we would have
         to apply it to nr_pmds, but not to nr_ptes.
      
      2. Metag has a non-zero FIRST_USER_ADDRESS, but doesn't do pte/pmd page
         table allocation in pgd_alloc(); it just sets up a pgd entry which is
         allocated at boot and shared across all processes.
      
      The proposal is to move the check to check_mm(), which happens *after*
      pgd_free(), and to do proper accounting during pgd_alloc() and pgd_free(),
      which brings the counters to zero if nothing leaked.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reported-by: Tyler Baker <tyler.baker@linaro.org>
      Tested-by: Tyler Baker <tyler.baker@linaro.org>
      Tested-by: Nishanth Menon <nm@ti.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
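      [Illustration, not part of the commit] A minimal C sketch of the
      pgd_alloc()/pgd_free() accounting pattern the message proposes, for an
      imaginary architecture that pre-allocates one pmd page per pgd. The
      __arch_pgd_alloc()/__arch_pgd_free() helpers are hypothetical; the
      mm_inc_nr_pmds()/mm_dec_nr_pmds() calls are the accounting that lets the
      check in check_mm(), which runs after pgd_free(), end at zero.

          #include <linux/mm.h>
          #include <asm/pgalloc.h>

          /* Sketch: keep nr_pmds balanced across pgd_alloc()/pgd_free(). */
          pgd_t *pgd_alloc(struct mm_struct *mm)
          {
                  pgd_t *pgd = __arch_pgd_alloc(mm);   /* hypothetical helper */

                  if (pgd)
                          mm_inc_nr_pmds(mm);   /* account the pre-allocated pmd */
                  return pgd;
          }

          void pgd_free(struct mm_struct *mm, pgd_t *pgd)
          {
                  mm_dec_nr_pmds(mm);                  /* drop it again here */
                  __arch_pgd_free(mm, pgd);            /* hypothetical helper */
          }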
    • mm/hugetlb: reduce arch dependent code around follow_huge_* · 61f77eda
      Committed by Naoya Horiguchi
      Currently we have many duplicates in the definitions of
      follow_huge_addr(), follow_huge_pmd(), and follow_huge_pud(), so this
      patch tries to remove them.  The basic idea is to put the default
      implementation of these functions in mm/hugetlb.c as weak symbols
      (regardless of CONFIG_ARCH_WANT_GENERAL_HUGETLB), and to implement
      arch-specific code only where an arch needs it.
      
      For follow_huge_addr(), only powerpc and ia64 have their own
      implementation; in all other architectures this function just returns
      ERR_PTR(-EINVAL).  So this patch makes returning ERR_PTR(-EINVAL) the
      default.
      
      As for follow_huge_(pmd|pud)(), if (pmd|pud)_huge() is implemented to
      always return 0 in your architecture (like in ia64 or sparc), it is never
      called (the call site is optimized away) no matter how it is implemented.
      So such architectures don't need an arch-specific implementation.
      
      In some architectures (like mips, s390 and tile), the current
      arch-specific follow_huge_(pmd|pud)() is effectively identical to the
      common code, so this patch lets those architectures use the common code.
      
      One exception is metag, where pmd_huge() can return non-zero but
      follow_huge_pmd() is expected to always return NULL.  This means we need
      an arch-specific implementation which returns NULL.  This behavior looks
      strange to me (because non-zero pmd_huge() implies that the architecture
      supports PMD-based hugepages, so follow_huge_pmd() can/should return some
      relevant value), but that's beyond this cleanup patch, so let's keep it.
      
      Justification of non-trivial changes:
      - in s390, follow_huge_pmd() checks !MACHINE_HAS_HPAGE first, and this
        patch removes the check.  This is OK because we can assume
        MACHINE_HAS_HPAGE is true whenever follow_huge_pmd() can be called
        (note that pmd_huge() has the same check and always returns 0 for
        !MACHINE_HAS_HPAGE).
      - in s390 and mips, HPAGE_MASK is used instead of PMD_MASK as done in the
        common code.  This patch forces these archs to use PMD_MASK, which is
        OK because the two masks are identical on both archs.
        In s390, both HPAGE_SHIFT and PMD_SHIFT are 20.
        In mips, HPAGE_SHIFT is defined as (PAGE_SHIFT + PAGE_SHIFT - 3) and
        PMD_SHIFT is defined as (PAGE_SHIFT + PAGE_SHIFT + PTE_ORDER - 3), but
        PTE_ORDER is always 0, so they are identical.
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
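      [Illustration, not part of the commit] The "weak symbol" pattern the
      message describes, sketched for follow_huge_addr(): the generic default
      lives in mm/hugetlb.c and returns ERR_PTR(-EINVAL), and an architecture
      that needs more (e.g. powerpc or ia64) overrides it by providing a normal
      (strong) definition of the same function.

          #include <linux/err.h>
          #include <linux/mm.h>

          /* Generic default (sketch): marked __weak so an arch-specific strong
           * definition of follow_huge_addr() replaces it at link time. */
          struct page * __weak
          follow_huge_addr(struct mm_struct *mm, unsigned long address, int write)
          {
                  return ERR_PTR(-EINVAL);
          }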
  17. 11 Feb 2015, 1 commit
  18. 07 Feb 2015, 2 commits
    • ARM: 8297/1: cache-l2x0: optimize aurora range operations · 1d889679
      Committed by Arnd Bergmann
      The aurora_inv_range(), aurora_clean_range() and aurora_flush_range()
      functions are highly redundant, both in source and in object code, and
      they are harder to understand than necessary.
      
      By moving the range loop into the aurora_pa_range() function, they become
      trivial wrappers, and the object code starts looking like what one would
      expect from an optimal implementation.
      
      Further optimization may be possible by using the per-CPU "virtual"
      registers to avoid the spinlocks in most cases.
      
       (Boot tested on Armada 370 RD and Armada XP GP, plus a little bit of
       DMA traffic by reading data from an SD card.)
      Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
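      [Illustration, not part of the commit] A sketch of the wrapper shape the
      message describes once the chunked range loop lives in aurora_pa_range().
      The register-offset macro names are illustrative placeholders, not
      necessarily the exact identifiers used in cache-l2x0.c.

          /* Each range operation becomes a trivial wrapper; aurora_pa_range()
           * owns the loop over maximum-size chunks and the locked register
           * writes.  Offset names are placeholders. */
          static void aurora_inv_range(unsigned long start, unsigned long end)
          {
                  aurora_pa_range(start, end, AURORA_INVAL_RANGE_REG);
          }

          static void aurora_clean_range(unsigned long start, unsigned long end)
          {
                  aurora_pa_range(start, end, AURORA_CLEAN_RANGE_REG);
          }

          static void aurora_flush_range(unsigned long start, unsigned long end)
          {
                  aurora_pa_range(start, end, AURORA_FLUSH_RANGE_REG);
          }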
    • ARM: 8296/1: cache-l2x0: clean up aurora cache handling · 20e783e3
      Committed by Arnd Bergmann
      The aurora cache controller is the only remaining user of a couple of
      functions in this file, which are completely unused when that support is
      disabled, leading to build warnings:
      
      arch/arm/mm/cache-l2x0.c:167:13: warning: 'l2x0_cache_sync' defined but not used [-Wunused-function]
      arch/arm/mm/cache-l2x0.c:184:13: warning: 'l2x0_flush_all' defined but not used [-Wunused-function]
      arch/arm/mm/cache-l2x0.c:194:13: warning: 'l2x0_disable' defined but not used [-Wunused-function]
      
      With the knowledge that the code is now aurora-specific, we can
      simplify it noticeably:
      
      - The pl310 errata workarounds are not needed on aurora and can be removed
      - As confirmed by Thomas Petazzoni from the data sheet, the cache_wait()
        macro is never needed.
      - No need to hold the lock across atomic cache sync
      - We can load the l2x0_base into a local variable across operations
      
      There should be no functional change in this patch, but readability and
      the generated object code improve, and the warnings go away.
      
       (Boot tested on Armada 370 RD and Armada XP GP, plus a little bit of
       DMA traffic by reading data from an SD card.)
      Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
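      [Illustration, not part of the commit] An illustrative fragment of two of
      the simplifications listed above: reading l2x0_base once into a local
      variable and issuing the atomic cache sync without taking the spinlock.
      The register offset name is a placeholder, not the exact one in the
      driver.

          /* Sketch only: base loaded once, sync write without the lock. */
          static void aurora_cache_sync(void)
          {
                  void __iomem *base = l2x0_base;        /* read once, reuse */

                  writel_relaxed(0, base + AURORA_SYNC_REG);  /* placeholder offset */
          }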
  19. 03 Feb 2015, 1 commit
    • ARM: 8299/1: mm: ensure local active ASID is marked as allocated on rollover · 8e648066
      Committed by Will Deacon
      Commit e1a5848e ("ARM: 7924/1: mm: don't bother with reserved ttbr0
      when running with LPAE") removed the use of the reserved TTBR0 value
      for LPAE systems, since the ASID is held in the TTBR and can be updated
      atomically with the pgd of the next mm.
      
      Unfortunately, that patch forgot to update flush_context, which
      deliberately avoids marking the local active ASID as allocated, since we
      used to switch via ASID zero and didn't need to allocate the ASID of
      the previous mm.  The side effect of this is that we can allocate the
      same ASID to the next mm and, between flushing the local TLB and updating
      TTBR0, perform speculative TLB fills for userspace nG mappings using
      the page table of the previous mm.
      
      The consequence of this is that the next mm can erroneously hit some
      mappings of the previous mm. Note that this was made significantly
      harder to hit by a391263c ("ARM: 8203/1: mm: try to re-use old ASID
      assignments following a rollover") but is still theoretically possible.
      
      This patch fixes the problem by removing the code from flush_context
      that forces the allocated ASID to zero for the local CPU. Many thanks
      to the Broadcom guys for tracking this one down.
      
      Fixes: e1a5848e ("ARM: 7924/1: mm: don't bother with reserved ttbr0 when running with LPAE")
      
      Cc: <stable@vger.kernel.org> # v3.14+
      Reported-by: Raymond Ngun <rngun@broadcom.com>
      Tested-by: Raymond Ngun <rngun@broadcom.com>
      Reviewed-by: Gregory Fong <gregory.0xf0@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: NRussell King <rmk+kernel@arm.linux.org.uk>
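      [Illustration, not part of the commit] A pseudocode-level sketch (not the
      actual arch/arm/mm/context.c code) of the rollover bookkeeping the fix
      restores: when the ASID space rolls over, the ASID still live in each
      CPU's TTBR0 must be marked as allocated in the new generation, including
      on the local CPU, rather than being forced to zero.

          /* Sketch only; variable and helper names mirror the idea, not the file. */
          static void flush_context_sketch(void)
          {
                  unsigned int i;
                  u64 asid;

                  bitmap_zero(asid_map, NUM_USER_ASIDS);

                  for_each_possible_cpu(i) {
                          asid = atomic64_xchg(&per_cpu(active_asids, i), 0);
                          if (asid == 0)
                                  asid = per_cpu(reserved_asids, i);
                          /* Mark the ASID as allocated; with LPAE this must
                           * also happen for the local CPU, which keeps running
                           * on that ASID until TTBR0 is rewritten. */
                          __set_bit(asid & ~ASID_MASK, asid_map);
                          per_cpu(reserved_asids, i) = asid;
                  }
          }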
  20. 30 Jan 2015, 1 commit
  21. 29 Jan 2015, 1 commit