1. 25 October 2013, 5 commits
• arm64: add CPU_HOTPLUG infrastructure · 9327e2c6
Committed by Mark Rutland
      This patch adds the basic infrastructure necessary to support
      CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug
      support will depend on an implementation's cpu_operations (e.g. PSCI).
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      9327e2c6
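For orientation, a minimal sketch of what per-CPU hotplug hooks of this kind usually look like: an operations table supplies disable/die callbacks and the arch-level __cpu_disable() dispatches to them. The field and function names below are illustrative assumptions, not necessarily the exact ones introduced by this commit.

    /* Hypothetical sketch of per-CPU hotplug hooks (names are assumptions). */
    struct cpu_operations {
    	const char	*name;
    	int		(*cpu_disable)(unsigned int cpu);	/* may veto hot-unplug */
    	void		(*cpu_die)(unsigned int cpu);		/* final idle/power-off */
    };

    extern const struct cpu_operations *cpu_ops[NR_CPUS];

    int __cpu_disable(void)
    {
    	unsigned int cpu = smp_processor_id();

    	/* Only allow hot-unplug if the enable method provides the hooks. */
    	if (!cpu_ops[cpu] || !cpu_ops[cpu]->cpu_die)
    		return -EOPNOTSUPP;

    	return cpu_ops[cpu]->cpu_disable ? cpu_ops[cpu]->cpu_disable(cpu) : 0;
    }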
• arm64: read enable-method for CPU0 · e8765b26
Committed by Mark Rutland
With the advent of CPU_HOTPLUG, the enable-method property for CPU0 may
tell us something useful (i.e. how to hotplug it back on), so we must
read it along with the enable-method for all the other CPUs.  Even
on UP the enable-method may tell us useful information (e.g. if a core
has some mechanism that might be usable for cpuidle), so we should
always read it.
      
      This patch factors out the reading of the enable method, and ensures
      that CPU0's enable method is read regardless of whether the kernel is
      built with SMP support.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      e8765b26
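A minimal sketch of the factored-out read, assuming a helper that fetches the "enable-method" string from a CPU's device-tree node and resolves it to an operations table (the helper and table names are hypothetical):

    #include <linux/of.h>

    /* Hypothetical helper: read "enable-method" for one CPU node and resolve it. */
    static int cpu_read_ops(struct device_node *dn, unsigned int cpu)
    {
    	const char *method = of_get_property(dn, "enable-method", NULL);

    	if (!method)
    		return -ENODEV;

    	cpu_ops[cpu] = cpu_get_ops(method);	/* lookup of known enable methods */
    	return cpu_ops[cpu] ? 0 : -EOPNOTSUPP;
    }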
• arm64: factor out spin-table boot method · 652af899
Committed by Mark Rutland
      The arm64 kernel has an internal holding pen, which is necessary for
      some systems where we can't bring CPUs online individually and must hold
      multiple CPUs in a safe area until the kernel is able to handle them.
      The current SMP infrastructure for arm64 is closely coupled to this
      holding pen, and alternative boot methods must launch CPUs into the pen,
      where they sit before they are launched into the kernel proper.
      
      With PSCI (and possibly other future boot methods), we can bring CPUs
      online individually, and need not perform the secondary_holding_pen
      dance. Instead, this patch factors the holding pen management code out
      to the spin-table boot method code, as it is the only boot method
      requiring the pen.
      
A new entry point for secondaries, secondary_entry, is added for other
boot methods to use; it bypasses the holding pen and its associated
overhead when bringing CPUs online. The smp.pen.text section is also
removed, as the pen can live in head.text without problem.
      
The cpu_operations structure is extended with two new functions,
cpu_boot and cpu_postboot, for bringing a CPU into the kernel and
performing any post-boot cleanup required by a boot method (e.g.
resetting the secondary_holding_pen_release to INVALID_HWID).
      Documentation is added for cpu_operations.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      652af899
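The description maps naturally onto a spin-table instance of the extended structure; a hedged sketch (the function bodies and helper names are assumptions, only cpu_boot/cpu_postboot come from the commit text):

    /* Sketch of a spin-table instance of the extended cpu_operations. */
    static int smp_spin_table_cpu_boot(unsigned int cpu)
    {
    	/* Release the CPU from the holding pen and kick it. */
    	write_pen_release(cpu_logical_map(cpu));
    	sev();
    	return 0;
    }

    static void smp_spin_table_cpu_postboot(void)
    {
    	/* Post-boot cleanup: mark the pen as free for the next secondary. */
    	write_pen_release(INVALID_HWID);
    }

    const struct cpu_operations smp_spin_table_ops = {
    	.name		= "spin-table",
    	.cpu_boot	= smp_spin_table_cpu_boot,
    	.cpu_postboot	= smp_spin_table_cpu_postboot,
    };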
• arm64: reorganise smp_enable_ops · cd1aebf5
Committed by Mark Rutland
      For hotplug support, we're going to want a place to store operations
      that do more than bring CPUs online, and it makes sense to group these
      with our current smp_enable_ops. For cpuidle support, we'll want to
      group additional functions, and we may want them even for UP kernels.
      
      This patch renames smp_enable_ops to the more general cpu_operations,
      and pulls the definitions out of smp code such that they can be used in
      UP kernels. While we're at it, fix up instances of the cpu parameter to
      be an unsigned int, drop the init markings and rename the *_cpu
      functions to cpu_* to reduce future churn when cpu_operations is
      extended.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      cd1aebf5
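Read together with the commits above, the rename roughly amounts to the following shape change; the "before" field names are paraphrased from the description rather than quoted from the tree:

    /* Before: SMP-only, int cpu, *_cpu naming (paraphrased). */
    struct smp_enable_ops {
    	const char	*name;
    	int		(*init_cpu)(struct device_node *dn, int cpu);
    	int		(*prepare_cpu)(int cpu);
    };

    /* After: usable from UP kernels too, unsigned int cpu, cpu_* naming. */
    struct cpu_operations {
    	const char	*name;
    	int		(*cpu_init)(struct device_node *dn, unsigned int cpu);
    	int		(*cpu_prepare)(unsigned int cpu);
    };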
• arm64: unify smp_psci.c and psci.c · 00ef54bb
Committed by Mark Rutland
      The functions in psci.c are only used from smp_psci.c, and smp_psci
      cannot function without psci.c. Additionally psci.c is built when !SMP,
      where it's expected that cpu_suspend may be useful.
      
      This patch unifies the two files, removing pointless duplication and
      paving the way for PSCI support in UP systems.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      00ef54bb
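After the merge, the SMP bring-up hook and the low-level PSCI call implementation live in one file; a hedged sketch of that boot path (symbol names assumed from the surrounding commits, not quoted from the patch):

    /* Sketch: SMP boot hook sitting next to the PSCI firmware interface. */
    static int smp_psci_cpu_boot(unsigned int cpu)
    {
    	int err = 0;

    	if (psci_ops.cpu_on)
    		err = psci_ops.cpu_on(cpu_logical_map(cpu),
    				      __pa(secondary_holding_pen));
    	if (err)
    		pr_err("failed to boot CPU%d (%d)\n", cpu, err);

    	return err;
    }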
2. 24 October 2013, 4 commits
3. 23 October 2013, 1 commit
4. 03 October 2013, 1 commit
5. 30 September 2013, 2 commits
6. 28 September 2013, 1 commit
7. 25 September 2013, 2 commits
8. 20 September 2013, 3 commits
9. 13 September 2013, 3 commits
10. 12 September 2013, 1 commit
• mm: migrate: check movability of hugepage in unmap_and_move_huge_page() · 83467efb
Committed by Naoya Horiguchi
Currently hugepage migration works well only for pmd-based hugepages
(mainly due to lack of testing), so we had better not enable migration of
other levels of hugepages until we are ready for it.
      
Some users of hugepage migration (mbind, move_pages, and migrate_pages) do
a page table walk and check pud/pmd_huge() there, so they are safe.  But the
other users (soft offline and memory hot-remove) don't do this, so without
this patch they can try to migrate unexpected types of hugepages.
      
To prevent this, we introduce hugepage_migration_support() as an
architecture-dependent check of whether hugepages are implemented on a pmd
basis or not.  On some architectures multiple hugepage sizes are
available, so hugepage_migration_support() also checks the hugepage size.
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      83467efb
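Based on the description, the check reduces to "is this hstate a pmd-sized hugepage on an architecture that supports pmd hugepages"; a minimal sketch under that assumption, where the per-arch capability helper name is assumed:

    /* Sketch: gate hugepage migration on pmd-based hugepages only.
     * pmd_huge_support() stands for an arch-provided capability check
     * (name assumed); huge_page_shift()/PMD_SHIFT filter the page size. */
    static inline int hugepage_migration_support(struct hstate *h)
    {
    	return pmd_huge_support() && huge_page_shift(h) == PMD_SHIFT;
    }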
11. 03 September 2013, 1 commit
12. 02 September 2013, 2 commits
13. 31 August 2013, 1 commit
14. 29 August 2013, 1 commit
15. 28 August 2013, 2 commits
16. 22 August 2013, 3 commits
17. 20 August 2013, 5 commits
18. 16 August 2013, 1 commit
• Fix TLB gather virtual address range invalidation corner cases · 2b047252
Committed by Linus Torvalds
      Ben Tebulin reported:
      
       "Since v3.7.2 on two independent machines a very specific Git
        repository fails in 9/10 cases on git-fsck due to an SHA1/memory
        failures.  This only occurs on a very specific repository and can be
        reproduced stably on two independent laptops.  Git mailing list ran
        out of ideas and for me this looks like some very exotic kernel issue"
      
      and bisected the failure to the backport of commit 53a59fc6 ("mm:
      limit mmu_gather batching to fix soft lockups on !CONFIG_PREEMPT").
      
      That commit itself is not actually buggy, but what it does is to make it
      much more likely to hit the partial TLB invalidation case, since it
      introduces a new case in tlb_next_batch() that previously only ever
      happened when running out of memory.
      
      The real bug is that the TLB gather virtual memory range setup is subtly
      buggered.  It was introduced in commit 597e1c35 ("mm/mmu_gather:
      enable tlb flush range in generic mmu_gather"), and the range handling
      was already fixed at least once in commit e6c495a9 ("mm: fix the TLB
      range flushed when __tlb_remove_page() runs out of slots"), but that fix
      was not complete.
      
      The problem with the TLB gather virtual address range is that it isn't
      set up by the initial tlb_gather_mmu() initialization (which didn't get
      the TLB range information), but it is set up ad-hoc later by the
      functions that actually flush the TLB.  And so any such case that forgot
      to update the TLB range entries would potentially miss TLB invalidates.
      
      Rather than try to figure out exactly which particular ad-hoc range
      setup was missing (I personally suspect it's the hugetlb case in
      zap_huge_pmd(), which didn't have the same logic as zap_pte_range()
      did), this patch just gets rid of the problem at the source: make the
      TLB range information available to tlb_gather_mmu(), and initialize it
      when initializing all the other tlb gather fields.
      
      This makes the patch larger, but conceptually much simpler.  And the end
      result is much more understandable; even if you want to play games with
      partial ranges when invalidating the TLB contents in chunks, now the
      range information is always there, and anybody who doesn't want to
      bother with it won't introduce subtle bugs.
      
      Ben verified that this fixes his problem.
Reported-bisected-and-tested-by: Ben Tebulin <tebulin@googlemail.com>
Build-testing-by: Stephen Rothwell <sfr@canb.auug.org.au>
Build-testing-by: Richard Weinberger <richard.weinberger@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2b047252
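Concretely, the fix described above amounts to passing the range into the initializer and seeding the gather's start/end there, so every later flush has valid range information to work with; a hedged sketch of that shape (field names follow common mmu_gather conventions and are not quoted from the patch):

    /* Sketch: give tlb_gather_mmu() the range up front (field names assumed). */
    void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
    		    unsigned long start, unsigned long end)
    {
    	tlb->mm = mm;
    	/* "fullmm" when the whole address space is going away (exit/execve). */
    	tlb->fullmm = !(start | (end + 1));
    	tlb->start = start;
    	tlb->end = end;
    	/* ...remaining batching fields initialized as before... */
    }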
19. 09 August 2013, 1 commit