1. 28 May 2014, 1 commit
  2. 23 Apr 2014, 1 commit
    • ARM: 8032/1: bL_switcher: fix validation check before its activation · 4530e4b6
      Nicolas Pitre authored
      The switcher should not depend on MAX_CLUSTER to determine if it should
      be activated or not. In a multiplatform kernel binary it is possible to
      have dual-cluster and quad-cluster platforms configured in. In that case
      MAX_CLUSTER, which is a build-time limit, would be 4, and that shouldn't
      prevent the switcher from working if the kernel is booted on a b.L
      dual-cluster system.
      
      In bL_switcher_halve_cpus() we already have a runtime validation check
      to make sure we're dealing with only two clusters, so booting on a quad
      cluster system will be caught and switcher activation aborted.
      
      However, the b.L switcher must ensure the MCPM layer is initialized on
      the booted hardware before doing anything.  The mcpm_is_available()
      function is added to that effect.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Tested-by: Abhilash Kesavan <kesavan.abhilash@gmail.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
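
      A minimal sketch of the guard this commit describes, assuming the
      switcher's init path from the message above (illustrative, not the
      verbatim kernel code):

          #include <asm/mcpm.h>

          static int __init bL_switcher_init(void)
          {
                  /*
                   * Bail out early if MCPM was never initialized on the
                   * booted hardware; a build-time MAX_CLUSTER check cannot
                   * tell us whether this platform supports switching.
                   */
                  if (!mcpm_is_available())
                          return -ENODEV;

                  /* ... proceed with switcher activation ... */
                  return 0;
          }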
  3. 07 Nov 2013, 1 commit
  4. 24 Sep 2013, 9 commits
    • ARM: bL_switcher: Add query interface to discover CPU affinities · d08e2e09
      Dave Martin authored
      When the switcher is active, there is no straightforward way to
      figure out which logical CPU a given physical CPU maps to.
      
      This patch provides a function
      bL_switcher_get_logical_index(mpidr), which is analogous to
      get_logical_index().
      
      This function returns the logical CPU on which the specified
      physical CPU is grouped (or -EINVAL if unknown).
      If the switcher is inactive or not present, -EUNATCH is returned instead.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
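
      A hedged usage sketch: bL_switcher_get_logical_index() and its error
      codes come from the commit message; the surrounding helper is
      hypothetical:

          #include <linux/errno.h>
          #include <asm/bL_switcher.h>
          #include <asm/smp_plat.h>

          /* Hypothetical helper: find the logical CPU currently backing
           * a given physical CPU, with a fallback for when the switcher
           * is inactive or not built in (-EUNATCH). */
          static int resolve_logical_cpu(u32 mpidr)
          {
                  int cpu = bL_switcher_get_logical_index(mpidr);

                  if (cpu == -EUNATCH)
                          cpu = get_logical_index(mpidr);
                  return cpu;     /* logical CPU, or -EINVAL if unknown */
          }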
    • ARM: bL_switcher/trace: Add kernel trace trigger interface · 29064b88
      Dave Martin authored
      This patch exports a bL_switcher_trace_trigger() function to
      provide a means for drivers using the trace events to get the
      current status when starting a trace session.
      
      Calling this function is equivalent to pinging the trace_trigger
      file in sysfs.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
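
      A sketch of how a driver using the trace events might bootstrap its
      view of the CPU mapping when a trace session starts; the function is
      from the commit, the session context is hypothetical:

          #include <asm/bL_switcher.h>

          /* Hypothetical trace-session setup. */
          static int start_trace_session(void)
          {
                  /*
                   * Emits power:cpu_migrate_current for each online CPU,
                   * equivalent to pinging the trace_trigger file in sysfs.
                   */
                  return bL_switcher_trace_trigger();
          }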
    • ARM: bL_switcher/trace: Add trace trigger for trace bootstrapping · b09bbe5b
      Dave Martin authored
      When tracing switching, an external tracer needs a way to bootstrap
      its knowledge of the logical<->physical CPU mapping.
      
      This patch adds a sysfs attribute trace_trigger.  A write to this
      attribute will generate a power:cpu_migrate_current event for each
      online CPU, indicating the current physical CPU for each logical
      CPU.
      
      Activating or deactivating the switcher also generates these
      events, so that the tracer knows about the resulting remapping of
      affected CPUs.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
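
      Pinging the attribute from user space would look like this (the path
      is assumed to follow the switcher's existing sysfs directory):

      	echo 1 > /sys/kernel/bL_switcher/trace_trigger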
    • ARM: bL_switcher: Basic trace events support · 1bfbddb6
      Dave Martin authored
      This patch adds simple trace events to the b.L switcher code
      to allow tracing of CPU migration events.
      
      To make use of the trace events, you will need:
      
      CONFIG_FTRACE=y
      CONFIG_ENABLE_DEFAULT_TRACERS=y
      
      The following events are added:
        * power:cpu_migrate_begin
        * power:cpu_migrate_finish
      
      each with the following data:
          u64     timestamp;
          u32     cpu_hwid;
      
      power:cpu_migrate_begin occurs immediately before the
      switcher-specific migration operations start.
      power:cpu_migrate_finish occurs immediately after migration is
      completed.
      
      The cpu_hwid field contains the ID fields of the MPIDR.
      
      * For power:cpu_migrate_begin, cpu_hwid is the ID of the outbound
        physical CPU (equivalent to (from_phys_cpu,from_phys_cluster)).
      
      * For power:cpu_migrate_finish, cpu_hwid is the ID of the inbound
        physical CPU (equivalent to (to_phys_cpu,to_phys_cluster)).
      
      By design, the cpu_hwid field is masked in the same way as the
      device tree cpu node reg property, allowing direct correlation to
      the DT description of the hardware.
      
      The timestamp is added in order to minimise timing noise.  An
      accurate system-wide clock should be used for generating this
      (hopefully getnstimeofday is appropriate, but it could be changed).
      It could be any monotonic shared clock, since the aim is to allow
      accurate deltas to be computed.  We don't necessarily care about
      accurate synchronisation with wall clock time.
      
      In practice, each switch takes place on a single logical CPU,
      and the trace infrastructure should guarantee that events are
      well-ordered with respect to a single logical CPU.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
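
      A sketch of emitting these events around a migration, using
      getnstimeofday() as the message suggests; the two helper functions
      are hypothetical wrappers, not the kernel's code:

          #include <linux/time.h>
          #include <asm/cputype.h>
          #include <trace/events/power_cpu_migrate.h>

          static u64 bL_migration_timestamp(void)
          {
                  struct timespec ts;

                  getnstimeofday(&ts);
                  return timespec_to_ns(&ts);
          }

          /* Hypothetical wrapper around the switcher-specific operations. */
          static void bL_traced_migrate(u32 ob_mpidr, u32 ib_mpidr)
          {
                  /* cpu_hwid masked like the DT cpu node reg property */
                  trace_cpu_migrate_begin(bL_migration_timestamp(),
                                          ob_mpidr & MPIDR_HWID_BITMASK);
                  /* ... switcher-specific migration operations ... */
                  trace_cpu_migrate_finish(bL_migration_timestamp(),
                                           ib_mpidr & MPIDR_HWID_BITMASK);
          }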
    • ARM: bL_switcher: wait until inbound is alive before performing a switch · 6137eba6
      Nicolas Pitre authored
      In some cases, a significant delay may be observed between the moment
      a request for a CPU to come up is made and the moment it is ready to
      start executing kernel code.  This is especially true when a whole
      cluster has to be powered up, which may take on the order of milliseconds.
      It is therefore a good idea to let the outbound CPU continue to execute
      code in the meantime, and be notified when the inbound is ready before
      performing the actual switch.
      
      This is achieved by registering a completion block with the appropriate
      IPI callback, and programming the sending of an IPI by the early assembly
      code prior to entering the main kernel code.  Once the IPI is delivered
      to the outbound CPU, the completion block is "completed" and the switcher
      thread is resumed.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
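
      A condensed sketch of the completion-based handshake described above;
      the identifiers are modeled on the commit text rather than copied
      from the kernel:

          #include <linux/completion.h>
          #include <linux/percpu.h>

          static DEFINE_PER_CPU(struct completion, bL_inbound_alive);

          /* Outbound side: keep running until the inbound signals it. */
          static void bL_wait_for_inbound(unsigned int cpu)
          {
                  init_completion(&per_cpu(bL_inbound_alive, cpu));
                  /* ... arrange for the early assembly code to send an
                   * IPI just before entering the main kernel code ... */
                  wait_for_completion(&per_cpu(bL_inbound_alive, cpu));
          }

          /* IPI callback: the inbound CPU has reached kernel code, so
           * the switcher thread can resume and perform the switch. */
          static void bL_inbound_alive_ipi(unsigned int cpu)
          {
                  complete(&per_cpu(bL_inbound_alive, cpu));
          }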
    • ARM: bL_switcher: synchronize the outbound with the inbound · 108a9640
      Nicolas Pitre authored
      Let's wait for the inbound CPU to come up and snoop some of the outbound
      CPU cache before bringing the outbound CPU down.  That should be more
      efficient than going down right away.
      
      Possible improvements might involve some monitoring of the CCI event
      counters.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
    • ARM: bL_switcher: Add switch completion callback for bL_switch_request() · 0577fee2
      Dave Martin authored
      There is no explicit way to know when a switch started via
      bL_switch_request() is complete.  This can lead to unpredictable
      behaviour when the switcher is controlled by a subsystem which
      makes dynamic decisions (such as cpufreq).
      
      The CPU PM notifier is not really suitable for signalling
      completion, because the CPU could get suspended and resumed for
      other, independent reasons while a switch request is in flight.
      Adding a whole new notifier for this seems excessive, and may tempt
      people to put heavyweight code on this path.
      
      This patch implements a new bL_switch_request_cb() function that
      allows for a per-request lightweight callback, private between the
      switcher and the caller of bL_switch_request_cb().
      
      Overlapping switches on a single CPU are considered incorrect if
      they are requested via bL_switch_request_cb() with a callback (they
      will lead to an unpredictable final state without explicit external
      synchronisation to force the requests into a particular order).
      Queuing requests robustly would be overkill because only one
      subsystem should be attempting to control the switcher at any time.
      
      Overlapping requests of this kind will be failed with -EBUSY to
      indicate that the second request won't take effect and the
      completer will never be called for it.
      
      bL_switch_request() is retained as a wrapper around the new function,
      with the old, fire-and-forget semantics.  In this case the last request
      will always win.  The request may still be denied if a previous request
      with a completer is still pending.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
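
      The interface exercised in a sketch: the bL_switch_request_cb()
      signature follows the commit message, while the completion-based
      caller is hypothetical:

          #include <linux/completion.h>
          #include <asm/bL_switcher.h>

          static void my_switch_done(void *cookie)
          {
                  /* Lightweight per-request callback, as intended. */
                  complete((struct completion *)cookie);
          }

          static int request_switch_and_wait(unsigned int cpu,
                                             unsigned int new_cluster_id)
          {
                  struct completion done;
                  int ret;

                  init_completion(&done);
                  ret = bL_switch_request_cb(cpu, new_cluster_id,
                                             my_switch_done, &done);
                  if (ret)
                          return ret;     /* e.g. -EBUSY: completer pending */
                  wait_for_completion(&done);
                  return 0;
          }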
    • ARM: bL_switcher: Add runtime control notifier · 491990e2
      Dave Martin authored
      Some subsystems will need to respond synchronously to runtime
      enabling and disabling of the switcher.
      
      This patch adds a dedicated notifier interface to support such
      subsystems.  Pre- and post- enable/disable notifications are sent
      to registered callbacks, allowing safe transition of non-b.L-
      transparent subsystems across these control transitions.
      
      Notifier callbacks may veto switcher (de)activation on pre notifications
      only.  Post notifications won't revert the action.
      
      If enabling or disabling of the switcher fails after the pre-change
      notification has been sent, subsystems which have registered
      notifiers can be left in an inappropriate state.
      
      This patch sends a suitable post-change notification on failure,
      indicating that the old state has been reestablished.
      
      For example, a failed initialisation will result in the following
      sequence:
      
          BL_NOTIFY_PRE_ENABLE
          /* switcher initialisation fails */
          BL_NOTIFY_POST_DISABLE
      
      It is the responsibility of notified subsystems to respond in an
      appropriate way.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
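
      A sketch of a client of this notifier; the BL_NOTIFY_* actions and
      the registration call follow the commit's description, while the
      handler body is hypothetical:

          #include <linux/notifier.h>
          #include <asm/bL_switcher.h>

          static int my_bL_notify(struct notifier_block *nb,
                                  unsigned long action, void *data)
          {
                  switch (action) {
                  case BL_NOTIFY_PRE_ENABLE:
                  case BL_NOTIFY_PRE_DISABLE:
                          /* May veto here, e.g. by returning
                           * notifier_from_errno(-EBUSY). */
                          return NOTIFY_OK;
                  case BL_NOTIFY_POST_ENABLE:
                  case BL_NOTIFY_POST_DISABLE:
                          /* Adapt to the new state; too late to veto. */
                          return NOTIFY_OK;
                  default:
                          return NOTIFY_DONE;
                  }
          }

          static struct notifier_block my_bL_nb = {
                  .notifier_call = my_bL_notify,
          };

          /* In init code: bL_switcher_register_notifier(&my_bL_nb); */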
    • ARM: bL_switcher: Add synchronous enable/disable interface · c0f43751
      Dave Martin authored
      Some subsystems will need to know for sure whether the switcher is
      enabled or disabled during certain critical regions.
      
      This patch provides a simple mutex-based mechanism to discover
      whether the switcher is enabled and temporarily lock out further
      enable/disable:
      
        * bL_switcher_get_enabled() returns true iff the switcher is
          enabled and temporarily inhibits enable/disable.
      
        * bL_switcher_put_enabled() permits enable/disable of the switcher
          again after a previous call to bL_switcher_get_enabled().
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
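
      Usage as the commit describes it; one assumption here is that every
      bL_switcher_get_enabled() call must be paired with
      bL_switcher_put_enabled() regardless of the value returned:

          #include <linux/types.h>
          #include <asm/bL_switcher.h>

          static void do_switcher_sensitive_work(void)
          {
                  bool enabled = bL_switcher_get_enabled();

                  /* Critical region: enable/disable is inhibited here. */
                  if (enabled) {
                          /* ... path that must cope with switching ... */
                  } else {
                          /* ... path free of logical/physical remapping ... */
                  }

                  bL_switcher_put_enabled();  /* re-allow enable/disable */
          }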
  5. 05 Aug 2013, 2 commits
    • ARM: bL_switcher: filter CPU hotplug requests when the switcher is active · 27261435
      Nicolas Pitre authored
      Trying to support both the switcher and CPU hotplug at the same time
      is tricky due to ambiguous semantics.  So let's at least prevent users
      from messing around with those logical CPUs the switcher has removed
      and those which were not active when the switcher was activated.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
    • ARM: bL_switcher: remove assumptions between logical and physical CPUs · 38c35d4f
      Nicolas Pitre authored
      Up to now, the logical CPU was somehow tied to the physical CPU number
      within a cluster.  This causes problems when forcing the boot CPU to be
      different from the first enumerated CPU in the device tree, creating a
      discrepancy between logical and physical CPU numbers.

      Let's make the pairing completely independent of physical CPU numbers.

      Let's keep only those logical CPUs with the same initial CPU cluster to
      create a uniform scheduler profile without having to modify any of the
      probed topology and compute capacity data.  This has the potential to
      create a non-contiguous CPU numbering space when the switcher is active,
      with potential impact on buggy user space tools.  It is however better
      to fix those tools rather than make the switcher code more intrusive.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  6. 30 Jul 2013, 8 commits
    • ARM: bL_switcher: add kernel cmdline param to disable the switcher on boot · c4821c05
      Nicolas Pitre authored
      By adding no_bL_switcher to the kernel cmdline string, the switcher
      won't be activated automatically at boot time.  It is still possible
      to activate it later with:
      
      	echo 1 > /sys/kernel/bL_switcher/active
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
    • ARM: bL_switcher: ability to enable and disable the switcher via sysfs · 6b7437ae
      Nicolas Pitre authored
      The /sys/kernel/bL_switcher/enable file allows the switcher to be
      enabled or disabled by writing 1 or 0 to it, respectively.  It is
      still enabled by default on boot.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
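
      For instance, disabling the switcher at run time (file name as given
      in this commit message):

      	echo 0 > /sys/kernel/bL_switcher/enable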
    • ARM: bL_switcher: do not hardcode GIC IDs in the code · ed96762e
      Nicolas Pitre authored
      Currently, GIC IDs are hardcoded, making the code dependent on the 4+4 b.L
      configuration.  Let's allow for GIC IDs to be discovered upon switcher
      initialization to support other b.L configurations such as the 1+1 one,
      or 2+3 as on the VExpress TC2.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
    • ARM: bL_switcher: hot-unplug half of the available CPUs · 9797a0e9
      Nicolas Pitre authored
      In a regular kernel configuration, all the CPUs are initially available.
      But the switcher execution model uses half of them at any time.  Instead
      of hacking the DTB to remove half of the CPUs, let's remove them at
      run time and make sure we still have a working switcher configuration.
      This way, the same DTB can be used whether or not the switcher is used.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
    • ARM: bL_switcher: simplify stack isolation · c052de26
      Nicolas Pitre authored
      We now have a dedicated thread for each logical CPU.  That's plenty
      of stack space for our needs.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
    • ARM: bL_switcher: move to dedicated threads rather than workqueues · 71ce1dee
      Nicolas Pitre authored
      The workqueues are problematic as they may be contended.
      They can't be scheduled with top priority either.  Also the optimization
      in bL_switch_request() to skip the workqueue entirely when the target CPU
      and the calling CPU were the same didn't allow for bL_switch_request() to
      be called from atomic context, as might be the case for some cpufreq
      drivers.
      
      Let's move to dedicated kthreads instead.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
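
      A condensed sketch of spawning one dedicated thread per logical CPU
      with the standard kthread API; the thread body and its name format
      are assumptions:

          #include <linux/err.h>
          #include <linux/kthread.h>
          #include <linux/topology.h>

          static int bL_switcher_thread(void *arg)
          {
                  while (!kthread_should_stop()) {
                          /* ... sleep until a switch request targets this
                           * CPU, then perform it at high priority ... */
                  }
                  return 0;
          }

          static struct task_struct *bL_start_thread(unsigned int cpu)
          {
                  struct task_struct *task;

                  task = kthread_create_on_node(bL_switcher_thread, NULL,
                                                cpu_to_node(cpu),
                                                "kswitcher_%d", cpu);
                  if (!IS_ERR(task)) {
                          kthread_bind(task, cpu);  /* pin to its CPU */
                          wake_up_process(task);
                  }
                  return task;
          }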
    • ARM: bL_switcher: add clockevent save/restore support · 3f09d479
      Lorenzo Pieralisi authored
      Per-CPU timers that are shut down when a CPU is switched over must be
      disabled upon switching and reprogrammed on the inbound CPU by relying
      on the clock events management API.  The save/restore sequence is
      executed with irqs disabled, as mandated by the clock events API.

      The next_event is an absolute time, hence, when the inbound CPU resumes,
      if the timer has expired the min delta is forced into the tick device to
      fire after a few cycles.

      This patch adds switching support for clock events that are per-CPU and
      have to be migrated when a switch takes place; the cpumask of the clock
      event device is checked against the cpumask of the current CPU, and if
      they match, the clockevent device mode is saved and it is put into
      shutdown mode.  Resume code reprogrammes the tick device accordingly.
      
      Tested on A15/A7 fast models and architected timers.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
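
      A sketch of the save/restore idea with the clockevents API of that
      era (clockevents_set_mode()/clockevents_program_event()); the helper
      names and the single saved-mode variable are assumptions:

          #include <linux/clockchips.h>

          static enum clock_event_mode bL_saved_mode;

          /* Outbound CPU, irqs disabled: park the per-CPU tick device. */
          static void bL_clockevent_save(struct clock_event_device *evt)
          {
                  bL_saved_mode = evt->mode;
                  clockevents_set_mode(evt, CLOCK_EVT_MODE_SHUTDOWN);
          }

          /* Inbound CPU, irqs disabled: restore the mode and reprogram.
           * If next_event has already expired, the core forces the min
           * delta so the timer fires after a few cycles. */
          static void bL_clockevent_restore(struct clock_event_device *evt)
          {
                  clockevents_set_mode(evt, bL_saved_mode);
                  clockevents_program_event(evt, evt->next_event, 1);
          }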
    • ARM: b.L: core switcher code · 1c33be57
      Nicolas Pitre authored
      This is the core code implementing big.LITTLE switcher functionality.
      Rationale for this code is available here:
      
      http://lwn.net/Articles/481055/
      
      The main entry point for a switch request is:
      
      void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id)
      
      If the calling CPU is not the wanted one, this wrapper takes care of
      sending the request to the appropriate CPU with schedule_work_on().
      
      At the moment the core switch operation is handled by bL_switch_to()
      which must be called on the CPU for which a switch is requested.
      
      What this code does:
      
        * Return early if the current cluster is the wanted one.
      
        * Close the gate in the kernel entry vector for both the inbound
          and outbound CPUs.
      
        * Wake up the inbound CPU so it can perform its reset sequence in
          parallel up to the kernel entry vector gate.
      
        * Migrate all interrupts in the GIC targeting the outbound CPU
          interface to the inbound CPU interface, including SGIs. This is
          performed by gic_migrate_target() in drivers/irqchip/irq-gic.c.
      
        * Call cpu_pm_enter() which takes care of flushing the VFP state to
          RAM and saving the CPU interface config from the GIC to RAM.
      
        * Modify the cpu_logical_map to refer to the inbound physical CPU.
      
        * Call cpu_suspend() which saves the CPU state (general purpose
          registers, page table address) onto the stack, stores the
          resulting stack pointer in an array indexed by the updated
          cpu_logical_map, then calls the provided shutdown function.
          This happens in arch/arm/kernel/sleep.S.
      
      At this point, the provided shutdown function executed by the outbound
      CPU ungates the inbound CPU. Therefore the inbound CPU:
      
        * Picks up the saved stack pointer in the array indexed by its MPIDR
          in arch/arm/kernel/sleep.S.
      
        * The MMU and caches are re-enabled using the saved state on the
          provided stack, just as if this were a resume operation from a
          suspended state.
      
        * Then cpu_suspend() returns, although this is on the inbound CPU
          rather than the outbound CPU which called it initially.
      
        * The function cpu_pm_exit() is called, whose effect is to restore the
          CPU interface state in the GIC using the state previously saved by
          the outbound CPU.
      
        * Exit from bL_switch_to() to resume normal kernel execution on the
          new CPU.
      
      However, the outbound CPU is potentially still running in parallel while
      the inbound CPU is resuming normal kernel execution, hence we need
      per-CPU stack isolation to execute bL_do_switch().  After the outbound
      CPU has ungated the inbound CPU, it calls mcpm_cpu_power_down() to:
      
        * Clean its L1 cache.

        * If it is the last CPU still alive in its cluster (last man standing),
          also clean its L2 cache and disable cache snooping from the other
          cluster.

        * Power down the CPU (or the whole cluster).
      
      Code called from bL_do_switch() might end up referencing 'current' for
      some reason.  However, 'current' is derived from the stack pointer.
      With any arbitrary stack, the returned value for 'current' and any
      values dereferenced through it are just random garbage, which may lead
      to segmentation faults.
      
      The active page table during the execution of bL_do_switch() is also a
      problem.  There is no guarantee that the inbound CPU won't destroy the
      corresponding task which would free the attached page table while the
      outbound CPU is still running and relying on it.
      
      To solve both issues, we borrow some of the task space belonging to
      the init/idle task which, by its nature, is lightly used and therefore
      is unlikely to clash with our usage.  The init task is also never going
      away.
      
      Right now the logical CPU number is assumed to be equivalent to the
      physical CPU number within each cluster. The kernel should also be
      booted with only one cluster active.  These limitations will be lifted
      eventually.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
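
      To tie the steps above together, here is a heavily condensed skeleton
      of the switch sequence as this message describes it.  Every kernel
      call shown is named in the text above; the control flow is a
      paraphrase, and helpers marked "hypothetical" stand in for elided
      detail:

          #include <linux/cpu_pm.h>
          #include <linux/irqchip/arm-gic.h>
          #include <asm/cputype.h>
          #include <asm/suspend.h>

          static int bL_switch_to(unsigned int new_cluster_id)
          {
                  unsigned int mpidr = read_cpuid_mpidr();

                  if (MPIDR_AFFINITY_LEVEL(mpidr, 1) == new_cluster_id)
                          return 0;       /* already on the wanted cluster */

                  close_entry_gates();    /* hypothetical: gate both CPUs   */
                  wake_inbound_cpu();     /* hypothetical: parallel reset   */
                  gic_migrate_target(gic_id_for(new_cluster_id));
                                          /* gic_id_for() is hypothetical   */

                  cpu_pm_enter();         /* flush VFP, save GIC CPU i/f    */
                  update_logical_map();   /* hypothetical: remap to inbound */
                  cpu_suspend(0, bL_do_switch);  /* returns on inbound CPU  */
                  cpu_pm_exit();          /* restore GIC CPU interface      */
                  return 0;
          }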