1. 21 Oct, 2015 1 commit
  2. 10 Oct, 2015 1 commit
    • arm64: fix a migrating irq bug when hotplug cpu · 217d453d
      Authored by Yang Yingliang
      When a CPU is disabled, all of its irqs are migrated to another CPU.
      In some cases the new affinity differs from the old one, so the old
      affinity needs to be updated; but when irq_set_affinity() returns
      IRQ_SET_MASK_OK_DONE, the old affinity is not updated. Fix it by
      using irq_do_set_affinity() instead.
      
      Migrating interrupts is core-code business, so use the generic
      helper irq_migrate_all_off_this_cpu() from kernel/irq/migration.c
      to do the migration (a sketch follows this entry).
      
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
      Cc: Hanjun Guo <hanjun.guo@linaro.org>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      217d453d
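      A minimal sketch of the resulting CPU-offline path, assuming the generic
      CONFIG_GENERIC_IRQ_MIGRATION helper; the surrounding arch teardown and the
      exact function layout are illustrative, not the upstream diff:

          #include <linux/cpumask.h>
          #include <linux/irq.h>
          #include <linux/smp.h>

          int __cpu_disable(void)
          {
                  unsigned int cpu = smp_processor_id();

                  /* ... arch-specific teardown elided ... */

                  /* Stop new work being scheduled onto this CPU. */
                  set_cpu_online(cpu, false);

                  /*
                   * Generic helper from kernel/irq/migration.c: retargets every
                   * IRQ still affine to this CPU via irq_do_set_affinity(), so
                   * the descriptor's affinity mask is updated even when the
                   * irqchip returns IRQ_SET_MASK_OK_DONE.
                   */
                  irq_migrate_all_off_this_cpu();

                  return 0;
          }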
  3. 07 Oct, 2015 2 commits
  4. 30 Jul, 2015 1 commit
  5. 07 Jul, 2015 1 commit
  6. 03 Jul, 2015 1 commit
    • ARM64 / SMP: Switch pr_err() to pr_debug() for disabled GICC entry · f9058929
      Authored by Hanjun Guo
      It is normal for firmware to present GICC entries (processors) with
      the disabled flag set in the ACPI MADT. Taking a system of 16 CPUs as
      an example, the ACPI firmware may present 8 of them as enabled and the
      other 8 as disabled in the MADT; the disabled CPUs can be hot-added
      later.
      
      Firmware may also present more CPUs than the hardware actually has,
      with the unused ones disabled, so that it can simply enable them once
      the hardware gains such CPUs; this keeps the firmware code scalable.
      
      Disabled CPUs in the MADT are therefore not an error, so switch
      pr_err() to pr_debug() to make the boot a little quieter by default.
      
      Since the hwid of a disabled CPU is often invalid, and the code checks
      for an invalid hwid first, CPUs intended to be hot-added later would be
      filtered out and not counted as possible CPUs. Move the enabled-flag
      check before the hwid one to prepare the code for counting disabled
      CPUs when CPU hot-plug is introduced (a sketch follows this entry).
      Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
      Reviewed-by: Al Stone <ahs3@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f9058929
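      A sketch of the check ordering described above, following the rough shape
      of acpi_map_gic_cpu_interface() in arch/arm64/kernel/smp.c; the ACPICA
      field and macro names are standard, but the body is illustrative rather
      than the exact upstream code:

          #include <linux/acpi.h>
          #include <linux/printk.h>
          #include <asm/cputype.h>        /* MPIDR_HWID_BITMASK, INVALID_HWID */

          static void acpi_map_gic_cpu_interface(struct acpi_madt_generic_interrupt *processor)
          {
                  u64 hwid = processor->arm_mpidr;

                  /* Disabled entries are expected (hot-add, scalable firmware). */
                  if (!(processor->flags & ACPI_MADT_ENABLED)) {
                          pr_debug("skipping disabled GICC entry, MPIDR 0x%llx\n", hwid);
                          return;
                  }

                  /* Checked after the enabled flag, so disabled CPUs can later
                   * be counted as possible CPUs for hot-plug. */
                  if ((hwid & ~MPIDR_HWID_BITMASK) || hwid == INVALID_HWID) {
                          pr_err("skipping GICC entry with invalid MPIDR 0x%llx\n", hwid);
                          return;
                  }

                  /* ... map the hwid to a logical cpu id ... */
          }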
  7. 25 Jun, 2015 1 commit
  8. 27 May, 2015 1 commit
  9. 21 May, 2015 1 commit
  10. 19 May, 2015 2 commits
    • ARM64: kernel: unify ACPI and DT cpus initialization · 0f078336
      Authored by Lorenzo Pieralisi
      The code that initializes cpus on arm64 is currently split into two
      different code paths that carry out DT and ACPI cpu initialization.
      
      Most of the code executing SMP initialization is common and should
      be merged to reduce discrepancies between ACPI and DT initialization
      and to have code initializing cpus in a single common place in the
      kernel.
      
      This patch refactors arm64 SMP cpus initialization code to merge
      ACPI and DT boot paths in a common file and to create sanity
      checks that can be reused by both boot methods.
      
      Current code assumes PSCI is the only available boot method
      when arm64 boots with ACPI; this can be easily extended if/when
      the ACPI parking protocol is merged into the kernel.
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
      Tested-by: Mark Rutland <mark.rutland@arm.com> [DT]
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      0f078336
    • ARM64: kernel: make cpu_ops hooks DT agnostic · 819a8826
      Authored by Lorenzo Pieralisi
      ARM64 CPU operations such as cpu_init and cpu_init_idle take
      a struct device_node pointer as a parameter, which corresponds to
      the device tree node of the logical cpu on which the operation
      has to be applied.
      
      With the advent of ACPI on arm64, where MADT static table entries
      are used to initialize cpus, the device tree node parameter
      in the cpu_ops hooks becomes useless when booting with ACPI, since
      in that case cpu device tree nodes are not present and cannot be
      used for cpu initialization.
      
      The current cpu_init hook requires a struct device_node pointer
      parameter because it is called while parsing the device tree to
      initialize CPUs, when the cpu_logical_map (that is used to match
      a cpu node reg property to a device tree node) for a given logical
      cpu id is not set up yet. This means that the cpu_init hook cannot
      rely on the of_get_cpu_node function to retrieve the device tree
      node corresponding to the logical cpu id passed in as parameter,
      so the cpu device tree node must be passed in as a parameter to fix
      this catch-22 dependency cycle.
      
      This patch reshuffles the cpu_logical_map initialization code so
      that the cpu_init cpu_ops hook can safely use the of_get_cpu_node
      function to retrieve the cpu device tree node, removing the need for
      the device tree node pointer parameter.
      
      In the process, the patch removes device tree node parameters
      from all cpu_ops hooks, in preparation for SMP DT/ACPI cpus
      initialization consolidation.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
      Acked-by: Sudeep Holla <sudeep.holla@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
      Tested-by: Mark Rutland <mark.rutland@arm.com> [DT]
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      819a8826
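      A minimal sketch of a cpu_init hook after this change, using a spin-table
      style implementation as the example; the function name and the
      "cpu-release-addr" handling are assumptions for illustration:

          #include <linux/errno.h>
          #include <linux/of.h>
          #include <linux/types.h>

          /*
           * cpu_init no longer takes a struct device_node *: it looks the node
           * up itself, which is safe now that cpu_logical_map() is populated
           * before the hook runs.
           */
          static int smp_spin_table_cpu_init(unsigned int cpu)
          {
                  struct device_node *dn = of_get_cpu_node(cpu, NULL);
                  u64 release_addr;
                  int ret;

                  if (!dn)
                          return -ENODEV;

                  ret = of_property_read_u64(dn, "cpu-release-addr", &release_addr);
                  of_node_put(dn);

                  return ret;
          }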
  11. 25 Mar, 2015 1 commit
  12. 23 Mar, 2015 1 commit
  13. 18 Mar, 2015 1 commit
  14. 05 Mar, 2015 1 commit
  15. 24 Jan, 2015 1 commit
  16. 04 Dec, 2014 1 commit
    • arm64: add module support for alternatives fixups · 932ded4b
      Authored by Andre Przywara
      Currently the kernel patches all necessary instructions once at boot
      time, so modules are not covered by this.
      Change the apply_alternatives() function to take a beginning and an
      end pointer and introduce a new variant (apply_alternatives_all()) to
      cover the existing use case for the static kernel image section.
      Add a module_finalize() function to arm64 to check for an
      alternatives section in a module and patch only the instructions from
      that specific area.
      Since the module code is not touched before module initialization
      has finished, we don't need to stop the machine while patching
      the module's code.
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      932ded4b
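      A sketch of the module hook described above; the (start, length) form of
      apply_alternatives() and the header location are assumed here, and error
      handling is elided:

          #include <linux/elf.h>
          #include <linux/module.h>
          #include <linux/string.h>
          #include <asm/alternative.h>    /* apply_alternatives() */

          int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
                              struct module *me)
          {
                  const Elf_Shdr *s, *se;
                  const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;

                  /*
                   * Patch only the module's own .altinstructions range; no
                   * stop_machine() is needed because the module is not yet live.
                   */
                  for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
                          if (strcmp(".altinstructions", secstrs + s->sh_name) == 0)
                                  apply_alternatives((void *)s->sh_addr, s->sh_size);
                  }

                  return 0;
          }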
  17. 25 Nov, 2014 1 commit
  18. 14 Sep, 2014 1 commit
  19. 08 Aug, 2014 1 commit
  20. 18 Jul, 2014 1 commit
  21. 17 May, 2014 1 commit
  22. 15 May, 2014 1 commit
  23. 04 Mar, 2014 1 commit
    • arm64: topology: Implement basic CPU topology support · f6e763b9
      Authored by Mark Brown
      Add basic CPU topology support to arm64, based on the existing pre-v8
      code and some work done by Mark Hambleton.  This patch does not
      implement any topology discovery support since that should be based on
      information from firmware; it merely implements the scaffolding for
      integrating topology support into the architecture (sketched after
      this entry).
      
      No locking of the topology data is done since it is only modified during
      CPU bringup with external serialisation from the SMP code.
      
      The goal is to separate the architecture hookup for providing topology
      information from the DT parsing in order to ease review and avoid
      blocking the architecture code (which will be built on by other work)
      with the DT code review by providing something simple and basic.
      
      Following patches will implement support for interpreting topology
      information from the MPIDR and for parsing the DT topology bindings for
      ARM; similar patches will be needed for ACPI.
      Signed-off-by: Mark Brown <broonie@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      [catalin.marinas@arm.com: removed CONFIG_CPU_TOPOLOGY, always on if SMP]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f6e763b9
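      The scaffolding is essentially a per-cpu record plus the topology_*
      accessors the scheduler consumes; a sketch along the lines of what this
      series adds to asm/topology.h (exact field list assumed):

          #include <linux/cpumask.h>
          #include <linux/threads.h>

          struct cpu_topology {
                  int thread_id;
                  int core_id;
                  int cluster_id;
                  cpumask_t thread_sibling;
                  cpumask_t core_sibling;
          };

          extern struct cpu_topology cpu_topology[NR_CPUS];

          /* Accessors used by the generic scheduler topology code. */
          #define topology_physical_package_id(cpu)  (cpu_topology[cpu].cluster_id)
          #define topology_core_id(cpu)              (cpu_topology[cpu].core_id)
          #define topology_core_cpumask(cpu)         (&cpu_topology[cpu].core_sibling)
          #define topology_thread_cpumask(cpu)       (&cpu_topology[cpu].thread_sibling)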
  24. 26 Feb, 2014 1 commit
  25. 30 Jan, 2014 1 commit
  26. 20 Dec, 2013 1 commit
  27. 17 Dec, 2013 1 commit
  28. 26 Nov, 2013 1 commit
  29. 08 Nov, 2013 1 commit
  30. 05 Nov, 2013 1 commit
    • arm64: move enabling of GIC before CPUs are set online · 7ade67b5
      Authored by Marc Zyngier
      Commit 53ae3acd (arm64: Only enable local interrupts after the CPU
      is marked online) moved the enabling of the GIC to after the point
      where CPUs are marked online.
      
      This has an interesting effect:
      [...]
      [<ffffffc0002eefd8>] gic_raise_softirq+0xf8/0x160
      [<ffffffc000088f58>] smp_send_reschedule+0x38/0x40
      [<ffffffc0000c8728>] resched_task+0x84/0xc0
      [<ffffffc0000c8cdc>] check_preempt_curr+0x58/0x98
      [<ffffffc0000c8d38>] ttwu_do_wakeup+0x1c/0xf4
      [<ffffffc0000c8f90>] ttwu_do_activate.constprop.84+0x64/0x70
      [<ffffffc0000cad30>] try_to_wake_up+0x1d4/0x2b4
      [<ffffffc0000cae6c>] default_wake_function+0x10/0x18
      [<ffffffc0000c5ca4>] __wake_up_common+0x60/0xa0
      [<ffffffc0000c7784>] complete+0x48/0x64
      [<ffffffc000088bec>] secondary_start_kernel+0xe8/0x110
      [...]
      
      Here, we end up calling gic_raise_softirq() without having initialized
      the interrupt controller for this CPU. While this goes unnoticed
      with GICv2 (the distributor is always accessible), it explodes with
      GICv3.
      
      The fix is to move the call to notify_cpu_starting before we set
      the secondary CPU online.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      7ade67b5
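      A condensed sketch of the resulting ordering in secondary_start_kernel();
      everything other than the ordering itself is elided or assumed:

          #include <linux/cpu.h>
          #include <linux/cpumask.h>
          #include <linux/irqflags.h>
          #include <linux/linkage.h>
          #include <linux/smp.h>

          asmlinkage void secondary_start_kernel(void)
          {
                  unsigned int cpu = smp_processor_id();

                  /* ... MMU/percpu setup and boot-method post-boot hook elided ... */

                  /*
                   * Run the CPU_STARTING notifiers first: this is where the GIC
                   * CPU interface of this CPU is initialized.
                   */
                  notify_cpu_starting(cpu);

                  /* Only now may other CPUs see us online and send us IPIs. */
                  set_cpu_online(cpu, true);

                  local_irq_enable();

                  /* ... signal the boot CPU and enter the idle loop ... */
          }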
  31. 25 Oct, 2013 4 commits
    • arm64: add CPU_HOTPLUG infrastructure · 9327e2c6
      Authored by Mark Rutland
      This patch adds the basic infrastructure necessary to support
      CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug
      support will depend on an implementation's cpu_operations (e.g. PSCI).
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      9327e2c6
    • arm64: read enable-method for CPU0 · e8765b26
      Authored by Mark Rutland
      With the advent of CPU_HOTPLUG, the enable-method property for CPU0 may
      tell us something useful (i.e. how to hotplug it back on), so we must
      read it along with the enable-methods of all the other CPUs.  Even
      on UP the enable-method may tell us useful information (e.g. if a core
      has some mechanism that might be usable for cpuidle), so we should
      always read it.
      
      This patch factors out the reading of the enable method, and ensures
      that CPU0's enable method is read regardless of whether the kernel is
      built with SMP support.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      e8765b26
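      A sketch of what a factored-out reader could look like; the helper name
      cpu_read_enable_method() and its exact behaviour are hypothetical, not
      lifted from the patch:

          #include <linux/errno.h>
          #include <linux/init.h>
          #include <linux/of.h>
          #include <linux/printk.h>

          /* Hypothetical helper: read and record the enable-method of one cpu node. */
          static int __init cpu_read_enable_method(struct device_node *dn, unsigned int cpu)
          {
                  const char *enable_method = of_get_property(dn, "enable-method", NULL);

                  if (!enable_method) {
                          /* CPU0 may legitimately lack one; secondaries may not. */
                          if (cpu != 0)
                                  pr_err("%s: missing enable-method property\n",
                                         dn->full_name);
                          return -ENOENT;
                  }

                  /* ... look the method up and stash the matching operations ... */
                  return 0;
          }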
    • arm64: factor out spin-table boot method · 652af899
      Authored by Mark Rutland
      The arm64 kernel has an internal holding pen, which is necessary for
      some systems where we can't bring CPUs online individually and must hold
      multiple CPUs in a safe area until the kernel is able to handle them.
      The current SMP infrastructure for arm64 is closely coupled to this
      holding pen, and alternative boot methods must launch CPUs into the pen,
      where they sit before they are launched into the kernel proper.
      
      With PSCI (and possibly other future boot methods), we can bring CPUs
      online individually, and need not perform the secondary_holding_pen
      dance. Instead, this patch factors the holding pen management code out
      to the spin-table boot method code, as it is the only boot method
      requiring the pen.
      
      A new entry point for secondaries, secondary_entry, is added for other
      boot methods to use; it bypasses the holding pen and its associated
      overhead when bringing CPUs online. The smp.pen.text section is also
      removed, as the pen can live in head.text without problem.
      
      The cpu_operations structure is extended with two new functions,
      cpu_boot and cpu_postboot, for bringing a cpu into the kernel and
      performing any post-boot cleanup required by a boot method (e.g.
      resetting the secondary_holding_pen_release to INVALID_HWID).
      Documentation is added for cpu_operations.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      652af899
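      A sketch of how a boot method plugs into the extended structure; the hook
      names come from the commit message, while the member layout and function
      bodies are assumed:

          #include <asm/cpu_ops.h>        /* struct cpu_operations */

          static int smp_spin_table_cpu_boot(unsigned int cpu);
          static void smp_spin_table_cpu_postboot(void);

          const struct cpu_operations smp_spin_table_ops = {
                  .name         = "spin-table",
                  /* Write cpu-release-addr and kick the CPU out of the pen. */
                  .cpu_boot     = smp_spin_table_cpu_boot,
                  /* Runs on the incoming CPU: reset secondary_holding_pen_release
                   * to INVALID_HWID once the CPU has left the pen. */
                  .cpu_postboot = smp_spin_table_cpu_postboot,
          };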
    • arm64: reorganise smp_enable_ops · cd1aebf5
      Authored by Mark Rutland
      For hotplug support, we're going to want a place to store operations
      that do more than bring CPUs online, and it makes sense to group these
      with our current smp_enable_ops. For cpuidle support, we'll want to
      group additional functions, and we may want them even for UP kernels.
      
      This patch renames smp_enable_ops to the more general cpu_operations,
      and pulls the definitions out of smp code such that they can be used in
      UP kernels. While we're at it, fix up instances of the cpu parameter to
      be an unsigned int, drop the init markings and rename the *_cpu
      functions to cpu_* to reduce future churn when cpu_operations is
      extended.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      cd1aebf5
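      A sketch of the renamed structure, usable from both SMP and UP code; the
      field list shown is an assumption that includes the boot hooks the later
      patches in this series rely on:

          #include <linux/of.h>
          #include <linux/threads.h>

          struct cpu_operations {
                  const char      *name;
                  /* Parse method-specific properties from the cpu's DT node. */
                  int             (*cpu_init)(struct device_node *, unsigned int);
                  /* Final sanity checks before attempting to bring the cpu up. */
                  int             (*cpu_prepare)(unsigned int);
                  /* Bring the cpu into the kernel. */
                  int             (*cpu_boot)(unsigned int);
                  /* Clean up on the freshly booted cpu itself. */
                  void            (*cpu_postboot)(void);
          };

          extern const struct cpu_operations *cpu_ops[NR_CPUS];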
  32. 31 Aug, 2013 1 commit
  33. 19 Jul, 2013 1 commit
  34. 15 Jul, 2013 1 commit
    • arm64: delete __cpuinit usage from all users · b8c6453a
      Authored by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove the __cpuinit from
      arch-specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/arm64 uses of the __cpuinit macros from
      all C files.  Currently arm64 does not have any __CPUINIT used in
      assembly files.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      b8c6453a
  35. 26 Apr, 2013 1 commit
    • arm64: Survive invalid cpu enable-methods · 39a90ca6
      Authored by Mark Rutland
      Currently, if you pass the kernel a dtb where a cpu node has an
      unsupported enable-method property (e.g. "not-psci"), it'll explode
      horribly, as it iterates over the enable_ops array incorrectly. It
      increments the pointer *at* the current element, rather than
      incrementing the pointer *to* the current element. As the first two
      elements pointed to structures that were contiguous in memory, this
      happened to be equivalent. However the third element is NULL, so when
      the list is exhausted, smp_get_enable_ops generates the wrong pointer,
      and dereferences an arbitrary portion of memory, which currently happens
      to contain zero.
      
      This patch fixes this by indirecting the pointer one level, so we
      iterate over the array elements correctly, avoiding the resulting panic.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      39a90ca6
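      A reconstructed sketch of the bug and the fix, using the enable_ops and
      smp_enable_ops names from the message; the struct layout and surrounding
      declarations are assumptions for illustration:

          #include <linux/string.h>

          struct smp_enable_ops {
                  const char *name;
                  /* ... method hooks elided ... */
          };

          extern const struct smp_enable_ops smp_spin_table_ops;
          extern const struct smp_enable_ops smp_psci_ops;

          static const struct smp_enable_ops *enable_ops[] = {
                  &smp_spin_table_ops,
                  &smp_psci_ops,
                  NULL,                   /* terminator */
          };

          /*
           * Broken: 'ops' points at the first structure and ops++ advances by
           * sizeof(struct smp_enable_ops) through memory instead of walking the
           * pointer array, so the NULL terminator is never reached.
           */
          static const struct smp_enable_ops *smp_get_enable_ops_broken(const char *name)
          {
                  const struct smp_enable_ops *ops = enable_ops[0];

                  while (ops) {
                          if (!strcmp(name, ops->name))
                                  return ops;
                          ops++;
                  }
                  return NULL;
          }

          /*
           * Fixed: one extra level of indirection; iterate the array of
           * pointers and stop at the NULL terminator.
           */
          static const struct smp_enable_ops *smp_get_enable_ops(const char *name)
          {
                  const struct smp_enable_ops **ops = enable_ops;

                  while (*ops) {
                          if (!strcmp(name, (*ops)->name))
                                  return *ops;
                          ops++;
                  }
                  return NULL;
          }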