1. 31 March 2017, 1 commit
    • arm64: fix NULL dereference in have_cpu_die() · 335d2c2d
      Committed by Mark Salter
      Commit 5c492c3f ("arm64: smp: Add function to determine if cpus are
      stuck in the kernel") added a helper function to determine if die() is
      supported in cpu_ops. This function assumes a cpu will have a valid
      cpu_ops entry, but that may not be the case for cpu0 if spin-table or the
      parking protocol is used to boot secondary cpus. In that case, there
      is a NULL dereference if have_cpu_die() is called by cpu0. So add a
      check for a valid cpu_ops before dereferencing it.
      
      Fixes: 5c492c3f ("arm64: smp: Add function to determine if cpus are stuck in the kernel")
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
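
      A minimal sketch of the shape of the resulting check, assuming the
      cpu_ops[] array interface arm64 used at the time (illustrative, not a
      verbatim copy of the patch):

        static inline bool have_cpu_die(void)
        {
        #ifdef CONFIG_HOTPLUG_CPU
                int any_cpu = raw_smp_processor_id();

                /* cpu0 may have no cpu_ops entry when spin-table or the
                 * parking protocol boots the secondaries, so test it first */
                if (cpu_ops[any_cpu] && cpu_ops[any_cpu]->cpu_die)
                        return true;
        #endif
                return false;
        }
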
  2. 03 March 2017, 1 commit
    • sched/headers: Move task->mm handling methods to <linux/sched/mm.h> · 68e21be2
      Committed by Ingo Molnar
      Move the following task->mm helper APIs into a new header file,
      <linux/sched/mm.h>, to further reduce the size and complexity
      of <linux/sched.h>.
      
      Here is how the APIs are used in various kernel files:
      
        # mm_alloc():
        arch/arm/mach-rpc/ecard.c
        fs/exec.c
        include/linux/sched/mm.h
        kernel/fork.c
      
        # __mmdrop():
        arch/arc/include/asm/mmu_context.h
        include/linux/sched/mm.h
        kernel/fork.c
      
        # mmdrop():
        arch/arm/mach-rpc/ecard.c
        arch/m68k/sun3/mmu_emu.c
        arch/x86/mm/tlb.c
        drivers/gpu/drm/amd/amdkfd/kfd_process.c
        drivers/gpu/drm/i915/i915_gem_userptr.c
        drivers/infiniband/hw/hfi1/file_ops.c
        drivers/vfio/vfio_iommu_spapr_tce.c
        fs/exec.c
        fs/proc/base.c
        fs/proc/task_mmu.c
        fs/proc/task_nommu.c
        fs/userfaultfd.c
        include/linux/mmu_notifier.h
        include/linux/sched/mm.h
        kernel/fork.c
        kernel/futex.c
        kernel/sched/core.c
        mm/khugepaged.c
        mm/ksm.c
        mm/mmu_context.c
        mm/mmu_notifier.c
        mm/oom_kill.c
        virt/kvm/kvm_main.c
      
        # mmdrop_async_fn():
        include/linux/sched/mm.h
      
        # mmdrop_async():
        include/linux/sched/mm.h
        kernel/fork.c
      
        # mmget_not_zero():
        fs/userfaultfd.c
        include/linux/sched/mm.h
        mm/oom_kill.c
      
        # mmput():
        arch/arc/include/asm/mmu_context.h
        arch/arc/kernel/troubleshoot.c
        arch/frv/mm/mmu-context.c
        arch/powerpc/platforms/cell/spufs/context.c
        arch/sparc/include/asm/mmu_context_32.h
        drivers/android/binder.c
        drivers/gpu/drm/etnaviv/etnaviv_gem.c
        drivers/gpu/drm/i915/i915_gem_userptr.c
        drivers/infiniband/core/umem.c
        drivers/infiniband/core/umem_odp.c
        drivers/infiniband/core/uverbs_main.c
        drivers/infiniband/hw/mlx4/main.c
        drivers/infiniband/hw/mlx5/main.c
        drivers/infiniband/hw/usnic/usnic_uiom.c
        drivers/iommu/amd_iommu_v2.c
        drivers/iommu/intel-svm.c
        drivers/lguest/lguest_user.c
        drivers/misc/cxl/fault.c
        drivers/misc/mic/scif/scif_rma.c
        drivers/oprofile/buffer_sync.c
        drivers/vfio/vfio_iommu_type1.c
        drivers/vhost/vhost.c
        drivers/xen/gntdev.c
        fs/exec.c
        fs/proc/array.c
        fs/proc/base.c
        fs/proc/task_mmu.c
        fs/proc/task_nommu.c
        fs/userfaultfd.c
        include/linux/sched/mm.h
        kernel/cpuset.c
        kernel/events/core.c
        kernel/events/uprobes.c
        kernel/exit.c
        kernel/fork.c
        kernel/ptrace.c
        kernel/sys.c
        kernel/trace/trace_output.c
        kernel/tsacct.c
        mm/memcontrol.c
        mm/memory.c
        mm/mempolicy.c
        mm/migrate.c
        mm/mmu_notifier.c
        mm/nommu.c
        mm/oom_kill.c
        mm/process_vm_access.c
        mm/rmap.c
        mm/swapfile.c
        mm/util.c
        virt/kvm/async_pf.c
      
        # mmput_async():
        include/linux/sched/mm.h
        kernel/fork.c
        mm/oom_kill.c
      
        # get_task_mm():
        arch/arc/kernel/troubleshoot.c
        arch/powerpc/platforms/cell/spufs/context.c
        drivers/android/binder.c
        drivers/gpu/drm/etnaviv/etnaviv_gem.c
        drivers/infiniband/core/umem.c
        drivers/infiniband/core/umem_odp.c
        drivers/infiniband/hw/mlx4/main.c
        drivers/infiniband/hw/mlx5/main.c
        drivers/infiniband/hw/usnic/usnic_uiom.c
        drivers/iommu/amd_iommu_v2.c
        drivers/iommu/intel-svm.c
        drivers/lguest/lguest_user.c
        drivers/misc/cxl/fault.c
        drivers/misc/mic/scif/scif_rma.c
        drivers/oprofile/buffer_sync.c
        drivers/vfio/vfio_iommu_type1.c
        drivers/vhost/vhost.c
        drivers/xen/gntdev.c
        fs/proc/array.c
        fs/proc/base.c
        fs/proc/task_mmu.c
        include/linux/sched/mm.h
        kernel/cpuset.c
        kernel/events/core.c
        kernel/exit.c
        kernel/fork.c
        kernel/ptrace.c
        kernel/sys.c
        kernel/trace/trace_output.c
        kernel/tsacct.c
        mm/memcontrol.c
        mm/memory.c
        mm/mempolicy.c
        mm/migrate.c
        mm/mmu_notifier.c
        mm/nommu.c
        mm/util.c
      
        # mm_access():
        fs/proc/base.c
        include/linux/sched/mm.h
        kernel/fork.c
        mm/process_vm_access.c
      
        # mm_release():
        arch/arc/include/asm/mmu_context.h
        fs/exec.c
        include/linux/sched/mm.h
        include/uapi/linux/sched.h
        kernel/exit.c
        kernel/fork.c
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
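
      As a usage illustration (not taken from any one file listed above),
      callers now include the new header explicitly; the standard mm pinning
      pattern with get_task_mm()/mmput() looks like this:

        #include <linux/sched.h>
        #include <linux/sched/mm.h>     /* get_task_mm(), mmget_not_zero(), mmput() */

        /* Pin another task's mm for temporary use, then drop the reference. */
        static int inspect_task_mm(struct task_struct *tsk)
        {
                struct mm_struct *mm = get_task_mm(tsk);

                if (!mm)
                        return -EINVAL;
                /* ... walk or query mm here ... */
                mmput(mm);
                return 0;
        }
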
  3. 02 March 2017, 2 commits
  4. 28 February 2017, 1 commit
  5. 03 February 2017, 1 commit
  6. 12 November 2016, 3 commits
    • arm64: split thread_info from task stack · c02433dd
      Committed by Mark Rutland
      This patch moves arm64's struct thread_info from the task stack into
      task_struct. This protects thread_info from corruption in the case of
      stack overflows, and makes its address harder to determine if stack
      addresses are leaked, making a number of attacks more difficult. Precise
      detection and handling of overflow is left for subsequent patches.
      
      Largely, this involves changing code to store the task_struct in sp_el0,
      and acquire the thread_info from the task struct. Core code now
      implements current_thread_info(), and as noted in <linux/sched.h> this
      relies on offsetof(task_struct, thread_info) == 0, enforced by core
      code.
      
      This change means that the 'tsk' register used in entry.S now points to
      a task_struct, rather than a thread_info as it used to. To make this
      clear, the TI_* field offsets are renamed to TSK_TI_*, with asm-offsets
      appropriately updated to account for the structural change.
      
      Userspace clobbers sp_el0, and we can no longer restore this from the
      stack. Instead, the current task is cached in a per-cpu variable that we
      can safely access from early assembly as interrupts are disabled (and we
      are thus not preemptible).
      
      Both secondary entry and idle are updated to stash the sp and task
      pointer separately.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
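
      A hedged sketch of the two pieces described above: generic code derives
      thread_info from task_struct (relying on thread_info being the first
      member), and arm64 reads the current task back out of sp_el0:

        /* generic <linux/thread_info.h> with CONFIG_THREAD_INFO_IN_TASK:
         * thread_info is the first field of task_struct, so the cast is valid */
        #define current_thread_info() ((struct thread_info *)current)

        /* arm64: entry code keeps the task_struct pointer in sp_el0 */
        static __always_inline struct task_struct *get_current(void)
        {
                unsigned long sp_el0;

                asm ("mrs %0, sp_el0" : "=r" (sp_el0));
                return (struct task_struct *)sp_el0;
        }
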
    • arm64: make cpu number a percpu variable · 57c82954
      Committed by Mark Rutland
      In the absence of CONFIG_THREAD_INFO_IN_TASK, core code maintains
      thread_info::cpu, and low-level architecture code can access this to
      build raw_smp_processor_id(). With CONFIG_THREAD_INFO_IN_TASK, core code
      maintains task_struct::cpu, which for reasons of the header soup is not
      accessible to low-level arch code.
      
      Instead, we can maintain a percpu variable containing the cpu number.
      
      For both the old and new implementation of raw_smp_processor_id(), we
      read a sysreg into a GPR, add an offset, and load the result. As the
      offset is now larger, it may not be folded into the load, but otherwise
      the assembly shouldn't change much.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
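
      A sketch of the percpu variable and the accessor it enables (names follow
      arm64's smp code of the era; treat them as illustrative):

        /* written once per cpu while the cpu is brought up */
        DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number);

        /* raw_smp_processor_id() becomes a percpu load: sysreg -> GPR,
         * add the (now larger) offset, load the value */
        #define raw_smp_processor_id() (*raw_cpu_ptr(&cpu_number))
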
    • arm64: smp: prepare for smp_processor_id() rework · 580efaa7
      Committed by Mark Rutland
      Subsequent patches will make smp_processor_id() use a percpu variable.
      This will make smp_processor_id() dependent on the percpu offset, and
      thus we cannot use smp_processor_id() to figure out what to initialise
      the offset to.
      
      Prepare for this by initialising the percpu offset based on
      current::cpu, which will work regardless of how smp_processor_id() is
      implemented. Also, make this relationship obvious by placing this code
      together at the start of secondary_start_kernel().
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
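
      A sketch of the start of secondary_start_kernel() after this change
      (only the relevant lines; the rest of the bring-up is elided):

        asmlinkage void secondary_start_kernel(void)
        {
                unsigned int cpu = task_cpu(current);

                /* set the percpu offset from current::cpu, before anything
                 * that might call smp_processor_id() */
                set_my_cpu_offset(per_cpu_offset(cpu));

                /* ... rest of the secondary bring-up ... */
        }
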
  7. 17 October 2016, 1 commit
    • arm64: kernel: numa: fix ACPI boot cpu numa node mapping · baa5567c
      Committed by Lorenzo Pieralisi
      Commit 7ba5f605 ("arm64/numa: remove the limitation that cpu0 must
      bind to node0") removed the numa cpu<->node mapping restriction whereby
      logical cpu 0 always corresponds to numa node 0; removing the
      restriction was correct, in that it does not really exist in practice,
      but the commit only updated the early mapping of logical cpu 0 to its
      real numa node for the DT boot path and missed the ACPI one, leading to
      boot failures on ACPI systems owing to the missing node<->cpu map for
      logical cpu 0.
      
      Fix the issue by updating the ACPI boot path with code that carries out
      the early cpu<->node mapping also for the boot cpu (i.e., cpu 0), mirroring
      what is currently done in the DT boot path.
      
      Fixes: 7ba5f605 ("arm64/numa: remove the limitation that cpu0 must bind to node0")
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Laszlo Ersek <lersek@redhat.com>
      Reported-by: Laszlo Ersek <lersek@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Laszlo Ersek <lersek@redhat.com>
      Cc: Hanjun Guo <hanjun.guo@linaro.org>
      Cc: Andrew Jones <drjones@redhat.com>
      Cc: Zhen Lei <thunder.leizhen@huawei.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
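
      A hedged sketch of the ACPI-path change: record the node of the boot cpu
      (and of every parsed GICC entry) early, mirroring the DT path. The SRAT
      lookup helper is written here as acpi_numa_get_nid(); its exact name and
      signature are an assumption:

        static void __init
        acpi_map_gic_cpu_interface(struct acpi_madt_generic_interrupt *processor)
        {
                u64 hwid = processor->arm_mpidr;

                /* ... MPIDR validation and duplicate checks elided ... */

                /* assumption: SRAT-based node lookup for this cpu */
                early_map_cpu_to_node(cpu_count, acpi_numa_get_nid(cpu_count, hwid));
                cpu_count++;
        }
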
  8. 23 September 2016, 1 commit
    • arm64: Call numa_store_cpu_info() earlier. · c18df0ad
      Committed by David Daney
      The wq_numa_init() function makes a private CPU to node map by calling
      cpu_to_node() early in the boot process, before the non-boot CPUs are
      brought online.  Since the default implementation of cpu_to_node()
      returns zero for CPUs that have never been brought online, the
      workqueue system's view is that *all* CPUs are on node zero.
      
      When the unbound workqueue for a non-zero node is created, the
      tsk_cpus_allowed() for the worker threads is the empty set because
      there are, in the view of the workqueue system, no CPUs on non-zero
      nodes.  The code in try_to_wake_up() using this empty cpumask ends up
      using the cpumask empty set value of NR_CPUS as an index into the
      per-CPU area pointer array, and gets garbage as it is one past the end
      of the array.  This results in:
      
      [    0.881970] Unable to handle kernel paging request at virtual address fffffb1008b926a4
      [    1.970095] pgd = fffffc00094b0000
      [    1.973530] [fffffb1008b926a4] *pgd=0000000000000000, *pud=0000000000000000, *pmd=0000000000000000
      [    1.982610] Internal error: Oops: 96000004 [#1] SMP
      [    1.987541] Modules linked in:
      [    1.990631] CPU: 48 PID: 295 Comm: cpuhp/48 Tainted: G        W       4.8.0-rc6-preempt-vol+ #9
      [    1.999435] Hardware name: Cavium ThunderX CN88XX board (DT)
      [    2.005159] task: fffffe0fe89cc300 task.stack: fffffe0fe8b8c000
      [    2.011158] PC is at try_to_wake_up+0x194/0x34c
      [    2.015737] LR is at try_to_wake_up+0x150/0x34c
      [    2.020318] pc : [<fffffc00080e7468>] lr : [<fffffc00080e7424>] pstate: 600000c5
      [    2.027803] sp : fffffe0fe8b8fb10
      [    2.031149] x29: fffffe0fe8b8fb10 x28: 0000000000000000
      [    2.036522] x27: fffffc0008c63bc8 x26: 0000000000001000
      [    2.041896] x25: fffffc0008c63c80 x24: fffffc0008bfb200
      [    2.047270] x23: 00000000000000c0 x22: 0000000000000004
      [    2.052642] x21: fffffe0fe89d25bc x20: 0000000000001000
      [    2.058014] x19: fffffe0fe89d1d00 x18: 0000000000000000
      [    2.063386] x17: 0000000000000000 x16: 0000000000000000
      [    2.068760] x15: 0000000000000018 x14: 0000000000000000
      [    2.074133] x13: 0000000000000000 x12: 0000000000000000
      [    2.079505] x11: 0000000000000000 x10: 0000000000000000
      [    2.084879] x9 : 0000000000000000 x8 : 0000000000000000
      [    2.090251] x7 : 0000000000000040 x6 : 0000000000000000
      [    2.095621] x5 : ffffffffffffffff x4 : 0000000000000000
      [    2.100991] x3 : 0000000000000000 x2 : 0000000000000000
      [    2.106364] x1 : fffffc0008be4c24 x0 : ffffff0ffffada80
      [    2.111737]
      [    2.113236] Process cpuhp/48 (pid: 295, stack limit = 0xfffffe0fe8b8c020)
      [    2.120102] Stack: (0xfffffe0fe8b8fb10 to 0xfffffe0fe8b90000)
      [    2.125914] fb00:                                   fffffe0fe8b8fb80 fffffc00080e7648
      .
      .
      .
      [    2.442859] Call trace:
      [    2.445327] Exception stack(0xfffffe0fe8b8f940 to 0xfffffe0fe8b8fa70)
      [    2.451843] f940: fffffe0fe89d1d00 0000040000000000 fffffe0fe8b8fb10 fffffc00080e7468
      [    2.459767] f960: fffffe0fe8b8f980 fffffc00080e4958 ffffff0ff91ab200 fffffc00080e4b64
      [    2.467690] f980: fffffe0fe8b8f9d0 fffffc00080e515c fffffe0fe8b8fa80 0000000000000000
      [    2.475614] f9a0: fffffe0fe8b8f9d0 fffffc00080e58e4 fffffe0fe8b8fa80 0000000000000000
      [    2.483540] f9c0: fffffe0fe8d10000 0000000000000040 fffffe0fe8b8fa50 fffffc00080e5ac4
      [    2.491465] f9e0: ffffff0ffffada80 fffffc0008be4c24 0000000000000000 0000000000000000
      [    2.499387] fa00: 0000000000000000 ffffffffffffffff 0000000000000000 0000000000000040
      [    2.507309] fa20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
      [    2.515233] fa40: 0000000000000000 0000000000000000 0000000000000000 0000000000000018
      [    2.523156] fa60: 0000000000000000 0000000000000000
      [    2.528089] [<fffffc00080e7468>] try_to_wake_up+0x194/0x34c
      [    2.533723] [<fffffc00080e7648>] wake_up_process+0x28/0x34
      [    2.539275] [<fffffc00080d3764>] create_worker+0x110/0x19c
      [    2.544824] [<fffffc00080d69dc>] alloc_unbound_pwq+0x3cc/0x4b0
      [    2.550724] [<fffffc00080d6bcc>] wq_update_unbound_numa+0x10c/0x1e4
      [    2.557066] [<fffffc00080d7d78>] workqueue_online_cpu+0x220/0x28c
      [    2.563234] [<fffffc00080bd288>] cpuhp_invoke_callback+0x6c/0x168
      [    2.569398] [<fffffc00080bdf74>] cpuhp_up_callbacks+0x44/0xe4
      [    2.575210] [<fffffc00080be194>] cpuhp_thread_fun+0x13c/0x148
      [    2.581027] [<fffffc00080dfbac>] smpboot_thread_fn+0x19c/0x1a8
      [    2.586929] [<fffffc00080dbd64>] kthread+0xdc/0xf0
      [    2.591776] [<fffffc0008083380>] ret_from_fork+0x10/0x50
      [    2.597147] Code: b00057e1 91304021 91005021 b8626822 (b8606821)
      [    2.603464] ---[ end trace 58c0cd36b88802bc ]---
      [    2.608138] Kernel panic - not syncing: Fatal exception
      
      Fix by moving the call to numa_store_cpu_info() for all CPUs into
      smp_prepare_cpus(), which happens before wq_numa_init().  Since
      smp_store_cpu_info() now contains only a single function call,
      simplify by removing the function and out-lining its contents.
      Suggested-by: Robert Richter <rric@kernel.org>
      Fixes: 1a2db300 ("arm64, numa: Add NUMA support for arm64 platforms.")
      Cc: <stable@vger.kernel.org> # 4.7.x-
      Signed-off-by: David Daney <david.daney@cavium.com>
      Reviewed-by: Robert Richter <rrichter@cavium.com>
      Tested-by: Yisheng Xie <xieyisheng1@huawei.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
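
      A sketch of where the NUMA info is now recorded, assuming the
      smp_prepare_cpus() shape of the time (per-cpu-ops preparation elided):

        void __init smp_prepare_cpus(unsigned int max_cpus)
        {
                unsigned int cpu, this_cpu = smp_processor_id();

                init_cpu_topology();
                numa_store_cpu_info(this_cpu);

                /* record cpu<->node for every possible cpu *before*
                 * wq_numa_init() builds the workqueue cpu-to-node map */
                for_each_possible_cpu(cpu) {
                        if (cpu == this_cpu)
                                continue;
                        /* ... cpu_ops lookup and cpu_prepare() elided ... */
                        numa_store_cpu_info(cpu);
                        set_cpu_present(cpu, true);
                }
        }
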
  9. 09 September 2016, 2 commits
    • arm64: Rearrange CPU errata workaround checks · c47a1900
      Committed by Suzuki K Poulose
      Right now we run through the work around checks on a CPU
      from __cpuinfo_store_cpu. There are some problems with that:
      
      1) We initialise the system wide CPU feature registers only after the
      Boot CPU updates its cpuinfo. Now, if a work around depends on the
      variance of a CPU ID feature (e.g., a check for Cache Line size mismatch),
      we have no way of performing it cleanly for the boot CPU.
      
      2) It is out of place, invoked from __cpuinfo_store_cpu() in cpuinfo.c. It
      is not an obvious place for that.
      
      This patch rearranges the CPU specific capability(aka work around) checks.
      
      1) At the moment we use verify_local_cpu_capabilities() to check if a new
      CPU has all the system advertised features. Use this for the secondary CPUs
      to perform the work around check. For that we rename
        verify_local_cpu_capabilities() => check_local_cpu_capabilities()
      which:
      
         If the system wide capabilities haven't been initialised (i.e., the CPU
         is activated at boot), update the system wide detected work arounds.

         Otherwise (i.e., a CPU hotplugged in later), verify that this CPU conforms
         to the system wide capabilities.
      
      2) Boot CPU updates the work arounds from smp_prepare_boot_cpu() after we have
      initialised the system wide CPU feature values.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
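
      A sketch of the renamed entry point described in (1), assuming helper
      names from arm64's cpufeature code of that era:

        void check_local_cpu_capabilities(void)
        {
                /*
                 * Before the system-wide capabilities are finalised (a CPU
                 * activated at boot), let this CPU contribute its errata
                 * work-arounds; afterwards (a late/hotplugged CPU), verify
                 * that it conforms to what the system already advertises.
                 */
                if (!sys_caps_initialised)
                        update_cpu_errata_workarounds();
                else
                        verify_local_cpu_capabilities();
        }
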
    • arm64/numa: remove the limitation that cpu0 must bind to node0 · 7ba5f605
      Committed by Zhen Lei
      1. Remove the old binding code.
      2. Read the nid of cpu0 from dts.
      3. Fall back to nid 0 for cpu0 when numa=off is set in bootargs.
      Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
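
      A hedged sketch of the DT-path lookup for cpu0's node (the parsing loop
      and helper placement are an assumption; of_node_to_nid() returns
      NUMA_NO_NODE when no NUMA information is available, which ends up
      mapping cpu0 to node 0):

        static void __init of_parse_and_init_cpus(void)
        {
                struct device_node *dn;

                for_each_node_by_type(dn, "cpu") {
                        /* ... hwid validation and boot-cpu matching elided ... */

                        /* map the cpu (including cpu0) to the node found in the DT */
                        early_map_cpu_to_node(cpu_count, of_node_to_nid(dn));
                        cpu_count++;
                }
        }
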
  10. 07 September 2016, 1 commit
    • arm64: Use static keys for CPU features · efd9e03f
      Committed by Catalin Marinas
      This patch adds static keys transparently for all the cpu_hwcaps
      features by implementing an array of default-false static keys and
      enabling them when detected. The cpus_have_cap() check uses the static
      keys if the feature being checked is a constant, otherwise the compiler
      generates the bitmap test.
      
      Because of the early call to static_branch_enable() via
      check_local_cpu_errata() -> update_cpu_capabilities(), the jump labels
      are initialised in cpuinfo_store_boot_cpu().
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
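
      A sketch of the cpus_have_cap() shape described above (the
      static_branch_unlikely() choice reflects the default-false keys; treat
      the details as illustrative):

        extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
        extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);

        static inline bool cpus_have_cap(unsigned int num)
        {
                if (num >= ARM64_NCAPS)
                        return false;
                if (__builtin_constant_p(num))
                        /* constant capability: use the jump label patched
                         * when the feature was detected */
                        return static_branch_unlikely(&cpu_hwcap_keys[num]);
                /* non-constant capability: fall back to the bitmap test */
                return test_bit(num, cpu_hwcaps);
        }
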
  11. 09 August 2016, 1 commit
  12. 21 July 2016, 2 commits
  13. 19 July 2016, 1 commit
    • arm64: debug: unmask PSTATE.D earlier · 2ce39ad1
      Committed by Will Deacon
      Clearing PSTATE.D is one of the requirements for generating a debug
      exception. The arm64 booting protocol requires that PSTATE.D is set,
      since many of the debug registers (for example, the hw_breakpoint
      registers) are UNKNOWN out of reset and could potentially generate
      spurious, fatal debug exceptions in early boot code if PSTATE.D was
      clear. Once the debug registers have been safely initialised, PSTATE.D
      is cleared; however, this is currently broken for two reasons:
      
      (1) The boot CPU clears PSTATE.D in a postcore_initcall and secondary
          CPUs clear PSTATE.D in secondary_start_kernel. Since the initcall
          runs after SMP (and the scheduler) have been initialised, there is
          no guarantee that it is actually running on the boot CPU. In this
          case, the boot CPU is left with PSTATE.D set and is not capable of
          generating debug exceptions.
      
      (2) In a preemptible kernel, we may explicitly schedule on the IRQ
          return path to EL1. If an IRQ occurs with PSTATE.D set in the idle
          thread, then we may schedule the kthread_init thread, run the
          postcore_initcall to clear PSTATE.D and then context switch back
          to the idle thread before returning from the IRQ. The exception
          return path will then restore PSTATE.D from the stack, and set it
          again.
      
      This patch fixes the problem by moving the clearing of PSTATE.D earlier
      to proc.S. This has the desirable effect of clearing it in one place for
      all CPUs, long before we have to worry about the scheduler or any
      exception handling. We ensure that the previous reset of MDSCR_EL1 has
      completed before unmasking the exception, so that any spurious
      exceptions resulting from UNKNOWN debug registers are not generated.
      
      Without this patch applied, the kprobes selftests have been seen to fail
      under KVM, where we end up attempting to step the OOL instruction buffer
      with PSTATE.D set and therefore fail to complete the step.
      
      Cc: <stable@vger.kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
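
      A C rendering of the per-CPU sequence that proc.S now performs (a sketch;
      the real change is in assembly):

        static inline void reset_debug_and_unmask(void)
        {
                asm volatile(
                "       msr     mdscr_el1, xzr\n"   /* reset MDSCR_EL1 ... */
                "       isb\n"                      /* ... wait for it to complete ... */
                "       msr     daifclr, #8\n"      /* ... then clear PSTATE.D */
                : : : "memory");
        }
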
  14. 27 June 2016, 1 commit
  15. 22 June 2016, 1 commit
  16. 30 May 2016, 1 commit
  17. 11 May 2016, 1 commit
  18. 25 April 2016, 1 commit
    • arm64: Fix behavior of maxcpus=N · 44dbcc93
      Committed by Suzuki K Poulose
      maxcpus=n sets the number of CPUs activated at boot time to a maximum of n,
      while allowing the remaining CPUs to be brought up later if the user
      decides to do so. However, on arm64, for various reasons, we disallowed
      hotplugging CPUs beyond n by marking them not present. Now that
      we have checks in place to make sure that hotplugged CPUs have features
      compatible with the system and require no new errata, relax the restriction.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  19. 19 April 2016, 1 commit
    • arm64: Reduce verbosity on SMP CPU stop · 82611c14
      Committed by Jan Glauber
      When CPUs are stopped during an abnormal operation like panic,
      a line is printed and the stack trace is dumped for each CPU.

      This information is only interesting for the aborting CPU,
      and on systems with many CPUs it only makes debugging harder
      when, after the aborting CPU, the log is flooded with data
      about all the other CPUs too.

      Therefore remove the stack dump and printk of the other CPUs
      and only print a single line saying that the other CPUs are going to be
      stopped and, in case any CPUs remain online, list them.
      Signed-off-by: Jan Glauber <jglauber@cavium.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  20. 16 April 2016, 2 commits
  21. 02 March 2016, 1 commit
    • arch/hotplug: Call into idle with a proper state · fc6d73d6
      Committed by Thomas Gleixner
      Let the non-boot cpus call into idle with the corresponding hotplug state, so
      the hotplug core can handle the further bringup. That's a first step to
      convert the boot side of the hotplugged cpus to do all the synchronization
      with the other side through the state machine. For now it'll only start the
      hotplug thread and kick the full bringup of the cpu.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
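
      On arm64 this means secondary_start_kernel() ends by entering idle with an
      explicit hotplug state; a sketch of the tail of that function:

        asmlinkage void secondary_start_kernel(void)
        {
                /* ... per-cpu setup, notifiers, set_cpu_online() ... */

                /* enter idle with the state the hotplug core expects, so it
                 * can drive the rest of the bringup via its state machine */
                cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
        }
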
  22. 25 February 2016, 2 commits
    • arm64: Handle early CPU boot failures · bb905274
      Committed by Suzuki K Poulose
      A secondary CPU could fail to come online due to insufficient
      capabilities and could simply die or loop in the kernel;
      e.g., a CPU with no support for the selected kernel PAGE_SIZE
      loops in the kernel with the MMU turned off,
      or a hotplugged CPU which doesn't have one of the advertised
      system capabilities dies during the activation.
      
      There is no way to synchronise the status of the failing CPU
      back to the master. This patch solves the issue by adding a
      field to the secondary_data which can be updated by the failing
      CPU. If the secondary CPU fails even before turning the MMU on,
      it updates the status in a special variable reserved in the .head.text
      section, to make sure that the update can be cache-invalidated safely
      without possibly sharing a cache write-back granule.
      
      Here are the possible states:
      
       -1. CPU_MMU_OFF - Initial value set by the master CPU, this value
      indicates that the CPU could not turn the MMU on, hence the status
      could not be reliably updated in the secondary_data. Instead, the
      CPU has updated the status @ __early_cpu_boot_status.
      
       0. CPU_BOOT_SUCCESS - CPU has booted successfully.
      
       1. CPU_KILL_ME - CPU has invoked cpu_ops->die, asking the
      master CPU to synchronise by issuing a cpu_ops->cpu_kill.
      
       2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die(), and is instead
      looping in the kernel. This information could be used by, say,
      kexec to check if it is really safe to do a kexec reboot.
      
       3. CPU_PANIC_KERNEL - CPU detected some serious issues which
      require the kernel to crash immediately. The secondary CPU cannot
      call panic() until it has initialised the GIC. This flag can
      be used to instruct the master to do so.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [catalin.marinas@arm.com: conflict resolution]
      [catalin.marinas@arm.com: converted "status" from int to long]
      [catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
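
      The states above as constants (values taken directly from the list; a
      sketch of the definitions):

        #define CPU_MMU_OFF             (-1)    /* status lives in __early_cpu_boot_status */
        #define CPU_BOOT_SUCCESS        (0)
        #define CPU_KILL_ME             (1)     /* master must issue cpu_ops->cpu_kill */
        #define CPU_STUCK_IN_KERNEL     (2)     /* e.g. kexec should refuse to reboot */
        #define CPU_PANIC_KERNEL        (3)     /* master panics on the CPU's behalf */
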
    • arm64: Move cpu_die_early to smp.c · fce6361f
      Committed by Suzuki K Poulose
      This patch moves cpu_die_early to smp.c, where it fits better.
      No functional changes, except for adding the necessary checks
      for CONFIG_HOTPLUG_CPU.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  23. 16 February 2016, 2 commits
    • arm64: kernel: implement ACPI parking protocol · 5e89c55e
      Committed by Lorenzo Pieralisi
      The SBBR and ACPI specifications allow ACPI based systems that do not
      implement PSCI (eg systems with no EL3) to boot through the ACPI parking
      protocol specification[1].
      
      This patch implements the ACPI parking protocol CPU operations, and adds
      code that eases parsing the parking protocol data structures to the
      ARM64 SMP initialization carried out at the same time as cpu enumeration.
      
      To wake up the CPUs from the parked state, this patch implements a
      wakeup IPI for ARM64 (i.e., arch_send_wakeup_ipi_mask()) that mirrors the
      ARM one, so that a specific IPI is sent for wake-up purposes in order
      to distinguish it from other IPI sources.
      
      Given the current ACPI MADT parsing API, the patch implements a glue
      layer that helps pass the MADT GICC data structure from SMP initialization
      code to the parking protocol implementation, somewhat overriding the CPU
      operations interfaces. This is to avoid creating a completely transparent
      DT/ACPI CPU operations layer that would require creating opaque
      structure handling for CPUs data (DT represents CPU through DT nodes, ACPI
      through static MADT table entries), which seems overkill given that ACPI
      on ARM64 mandates only two booting protocols (PSCI and parking protocol),
      so there is no need for further protocol additions.
      
      Based on the original work by Mark Salter <msalter@redhat.com>
      
      [1] https://acpica.org/sites/acpica/files/MP%20Startup%20for%20ARM%20platforms.docx
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Loc Ho <lho@apm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Hanjun Guo <hanjun.guo@linaro.org>
      Cc: Sudeep Holla <sudeep.holla@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Al Stone <ahs3@redhat.com>
      [catalin.marinas@arm.com: Added WARN_ONCE(!acpi_parking_protocol_valid() on the IPI]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
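
      A sketch of the wake-up IPI hook mentioned above (built on arm64's
      existing IPI plumbing):

        #ifdef CONFIG_ARM64_ACPI_PARKING_PROTOCOL
        void arch_send_wakeup_ipi_mask(const struct cpumask *mask)
        {
                /* a dedicated IPI type so a wake-up is distinguishable
                 * from the other IPI sources */
                smp_cross_call(mask, IPI_WAKEUP);
        }
        #endif
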
    • arm64: unify idmap removal · 9e8e865b
      Committed by Mark Rutland
      We currently open-code the removal of the idmap and restoration of the
      current task's MMU state in a few places.
      
      Before introducing yet more copies of this sequence, unify these to call
      a new helper, cpu_uninstall_idmap.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
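
      A sketch of the new helper (the body mirrors the open-coded sequence it
      replaces; treat it as illustrative):

        static inline void cpu_uninstall_idmap(void)
        {
                struct mm_struct *mm = current->active_mm;

                cpu_set_reserved_ttbr0();       /* stop translating via the idmap */
                local_flush_tlb_all();
                cpu_set_default_tcr_t0sz();     /* restore the runtime T0SZ */

                if (mm != &init_mm)
                        cpu_switch_mm(mm->pgd, mm);     /* back to the task's tables */
        }
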
  24. 12 November 2015, 1 commit
  25. 21 October 2015, 4 commits
    • arm64: Delay cpu feature capability checks · dbb4e152
      Committed by Suzuki K. Poulose
      At the moment we run through the arm64_features capability list for
      each CPU and set the capability if one of the CPUs supports it. This
      could be problematic in a heterogeneous system with differing capabilities.
      Delay the CPU feature checks until all the enabled CPUs are up (i.e.,
      smp_cpus_done()), so that we can make better decisions based on the
      overall system capability. Once we decide and advertise the capabilities
      the alternatives can be applied. From this state, we cannot roll back
      a feature to disabled based on the values from a new hotplugged CPU,
      due to the runtime patching and other reasons. So, for all new CPUs,
      we need to make sure that they have the established system capabilities.
      Failing which, we bring the CPU down, preventing it from turning online.
      Once the capabilities are decided, any new CPU booting up goes through
      verification to ensure that it has all the enabled capabilities and also
      invokes the respective enable() method on the CPU.
      
      The CPU errata checks are not delayed and are still executed per CPU
      to detect the respective capabilities. If we ever come across a non-errata
      capability that needs to be checked on each CPU, we could introduce it via
      a new capability table (or introduce a flag), which can be processed per CPU.
      
      The next patch will make the feature checks use the system wide
      safe value of a feature register.
      
      NOTE: The enable() methods associated with the capability is scheduled
      on all the CPUs (which is the only use case at the moment). If we need
      a different type of 'enable()' which only needs to be run once on any CPU,
      we should be able to handle that when needed.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Tested-by: Dave Martin <Dave.Martin@arm.com>
      [catalin.marinas@arm.com: static variable and coding style fixes]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
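
      A sketch of where the delayed decision now happens (function names as in
      arm64's cpufeature code of the era; illustrative):

        void __init smp_cpus_done(unsigned int max_cpus)
        {
                /* all enabled CPUs are up: decide and advertise the system-wide
                 * capabilities, then apply the alternatives */
                setup_cpu_features();
        }

        /* CPUs booted after this point are verified against the established
         * capabilities and brought down if they do not conform. */
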
    • arm64: Delay cpuinfo_store_boot_cpu · 4b998ff1
      Committed by Suzuki K. Poulose
      At the moment the boot CPU stores the cpuinfo long before the
      PERCPU areas are initialised by the kernel. This could be problematic
      as the non-boot CPU data structures might get copied with the data
      from the boot CPU, giving us no chance to detect if a particular CPU
      updated its cpuinfo. This patch delays the boot cpu store to
      smp_prepare_boot_cpu().
      
      Also kills setup_processor(), which no longer does any meaningful
      work.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Tested-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
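
      A sketch of the delayed store (the exact ordering inside
      smp_prepare_boot_cpu() is an assumption):

        void __init smp_prepare_boot_cpu(void)
        {
                /* percpu areas exist by now, so the per-cpu cpuinfo record
                 * really belongs to the boot CPU */
                set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
                cpuinfo_store_boot_cpu();
        }
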
    • arm64: Delay ELF HWCAP initialisation until all CPUs are up · 3a75578e
      Committed by Suzuki K. Poulose
      Delay the ELF HWCAP initialisation until all the (enabled) CPUs are
      up, i.e., smp_cpus_done(). This is in preparation for detecting the
      common features across the CPUs and creating a consistent ELF HWCAP
      for the system.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Tested-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Make the CPU information more clear · 64f17818
      Committed by Suzuki K. Poulose
      At early boot, we print the CPU version/revision. On a heterogeneous
      system, we could have different types of CPUs. Print the CPU info for
      all active cpus. Also, the secondary CPUs print the message only when
      they come online.
      
      Also, remove the redundant 'revision' information which doesn't
      make any sense without the 'variant' field.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Tested-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  26. 10 October 2015, 1 commit
    • arm64: fix a migrating irq bug when hotplug cpu · 217d453d
      Committed by Yang Yingliang
      When a cpu is disabled, all irqs will be migrated to another cpu.
      In some cases the new affinity differs from the old one, so the old
      affinity needs to be updated; but if irq_set_affinity's return value is
      IRQ_SET_MASK_OK_DONE, the old affinity cannot be updated. Fix it by
      using irq_do_set_affinity.
      
      And migrating interrupts is a core code matter, so use the generic
      function irq_migrate_all_off_this_cpu() from kernel/irq/migration.c
      to migrate interrupts.
      
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
      Cc: Hanjun Guo <hanjun.guo@linaro.org>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
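
      A sketch of the arm64 side after the change (the generic helper lives in
      kernel/irq/migration.c and applies the new affinity via
      irq_do_set_affinity()):

        #ifdef CONFIG_HOTPLUG_CPU
        int __cpu_disable(void)
        {
                /* ... op_cpu_disable(), remove the cpu from the online maps ... */

                /* hand interrupt migration to the generic irq core */
                irq_migrate_all_off_this_cpu();

                return 0;
        }
        #endif
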
  27. 07 October 2015, 2 commits
  28. 30 July 2015, 1 commit