1. 24 Apr 2013 (8 commits)
    • ARM: mcpm: provide an interface to set the SMP ops at run time · a7eb7c6f
      By Nicolas Pitre
      This is cleaner than exporting the mcpm_smp_ops structure.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Jon Medhurst <tixy@linaro.org>
    • ARM: mcpm: generic SMP secondary bringup and hotplug support · 9ff221ba
      By Nicolas Pitre
      Now that the cluster power API is in place, we can use it for SMP secondary
      bringup and CPU hotplug in a generic fashion.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
    • ARM: mcpm_head.S: vlock-based first man election · 1ae98561
      By Dave Martin
      Instead of requiring the first man to be elected in advance (which
      can be suboptimal in some situations), this patch uses a per-cluster
      mutex to co-ordinate selection of the first man.
      
      This should also make it more feasible to reuse this code path for
      asynchronous cluster resume (as in CPUidle scenarios).
      
      We must ensure that the vlock data doesn't share a cacheline with
      anything else, or dirty cache eviction could corrupt it.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
    • ARM: mcpm: Add baremetal voting mutexes · 9762f12d
      By Dave Martin
      This patch adds a simple low-level voting mutex implementation
      to be used to arbitrate during first man selection when no load/store
      exclusive instructions are usable.
      
      For want of a better name, these are called "vlocks".  (I was
      tempted to call them ballot locks, but "block" is way too confusing
      an abbreviation...)
      
      There is no function to wait for the lock to be released, and no
      vlock_lock() function since we don't need these at the moment.
      These could straightforwardly be added if vlocks get used for other
      purposes.
      
      For architectural correctness even Strongly-Ordered memory accesses
      require barriers in order to guarantee that multiple CPUs have a
      coherent view of the ordering of memory accesses.  Whether or not
      this matters depends on hardware implementation details of the
      memory system.  Since the purpose of this code is to provide a clean,
      generic locking mechanism with no platform-specific dependencies, the
      barriers should be present to avoid unpleasant surprises on future
      platforms.
      
      Note:
      
        * When taking the lock, we don't care about implicit background
          memory operations and other signalling which may be pending,
          because those are not part of the critical section anyway.
      
          A DMB is sufficient to ensure correctly observed ordering of
          the explicit memory accesses in vlock_trylock.
      
        * No barrier is required after checking the election result,
          because the result is determined by the store to
          VLOCK_OWNER_OFFSET and is already globally observed due to the
          barriers in voting_end.  This means that global agreement on
          the winner is guaranteed, even before the winner is known
          locally.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
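The election scheme the notes describe can be modeled in Python. This follows the spirit of the pseudocode documented with the patch rather than the actual assembly; the barrier concerns above do not arise in this single-threaded model, and the array-based layout is illustrative:

```python
# Model of a voting lock ("vlock"): each contender raises a "currently
# voting" flag, proposes itself as the winner, waits for voting to end,
# and then checks whether its proposal survived.
NR_CPUS = 4
NO_OWNER = -1

currently_voting = [0] * NR_CPUS
last_vote = NO_OWNER

def vlock_trylock(cpu):
    """Return True if this CPU won the election for the lock."""
    global last_vote
    currently_voting[cpu] = 1          # signal our desire to vote
    if last_vote != NO_OWNER:          # lock already owned: give up
        currently_voting[cpu] = 0
        return False
    last_vote = cpu                    # propose ourselves as the winner
    currently_voting[cpu] = 0
    for other in range(NR_CPUS):       # wait until everyone is done voting
        while currently_voting[other]:
            pass
    return last_vote == cpu            # the last surviving proposal wins

def vlock_unlock():
    global last_vote
    last_vote = NO_OWNER
```

Note how the result check needs no extra barrier in the real implementation, as the commit message explains: the winner is fully determined once voting ends.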
    • ARM: mcpm: introduce helpers for platform coherency exit/setup · 7fe31d28
      By Dave Martin
      This provides helper methods to coordinate between CPUs coming down
      and CPUs going up, as well as documentation of the algorithms used,
      so that cluster teardown and setup operations are not performed on
      the same cluster simultaneously.
      
      For use in the power_down() implementation:
        * __mcpm_cpu_going_down(unsigned int cluster, unsigned int cpu)
        * __mcpm_outbound_enter_critical(unsigned int cluster)
        * __mcpm_outbound_leave_critical(unsigned int cluster)
        * __mcpm_cpu_down(unsigned int cluster, unsigned int cpu)
      
      The power_up_setup() helper should do platform-specific setup in
      preparation for turning the CPU on, such as invalidating local caches
      or entering coherency.  It must be assembler for now, since it must
      run before the MMU can be switched on.  It is passed the affinity level
      for which initialization should be performed.
      
      Because the mcpm_sync_struct content is looked up and modified with
      the cache enabled or disabled depending on the code path, it is
      crucial to always ensure proper cache maintenance so that main memory
      is updated right away.  The sync_cache_*() helpers are used to that end.
      
      Also, in order to prevent a cached writer from interfering with an
      adjacent non-cached writer, we ensure each state variable is located
      in a separate cache line.
      
      Thanks to Nicolas Pitre and Achin Gupta for the help with this
      patch.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
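A minimal Python model of the handshake these helpers implement. Names and signatures are simplified from the ones listed above; the real code also coordinates with inbound CPUs and performs the cache maintenance discussed in the message, which this sketch omits:

```python
# Toy state machine for the outbound (power-down) side: a CPU marks itself
# as going down, and only the last man may enter the cluster-critical
# section to tear the whole cluster down.
CPU_UP, CPU_GOING_DOWN, CPU_DOWN = "CPU_UP", "CPU_GOING_DOWN", "CPU_DOWN"
CLUSTER_UP, CLUSTER_GOING_DOWN, CLUSTER_DOWN = (
    "CLUSTER_UP", "CLUSTER_GOING_DOWN", "CLUSTER_DOWN")

class Cluster:
    def __init__(self, nr_cpus):
        self.cpu = [CPU_UP] * nr_cpus
        self.state = CLUSTER_UP

def mcpm_cpu_going_down(c, cpu):
    c.cpu[cpu] = CPU_GOING_DOWN

def mcpm_outbound_enter_critical(c):
    # Cluster teardown is allowed only when every CPU is down or on its
    # way down; otherwise the caller must do CPU-level teardown only.
    if any(s not in (CPU_GOING_DOWN, CPU_DOWN) for s in c.cpu):
        return False
    c.state = CLUSTER_GOING_DOWN
    return True

def mcpm_outbound_leave_critical(c, new_state):
    c.state = new_state

def mcpm_cpu_down(c, cpu):
    c.cpu[cpu] = CPU_DOWN
```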
    • ARM: mcpm: introduce the CPU/cluster power API · 7c2b8605
      By Nicolas Pitre
      This is the basic API used to handle the powering up/down of individual
      CPUs in a (multi-)cluster system.  The platform-specific backend
      implementation is also responsible for handling cluster-level power
      when the first/last CPU in a cluster is brought up/down.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
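The key contract of the API can be sketched with a per-cluster use count: cluster power follows the first CPU up and the last CPU down. The class and method names below are illustrative stand-ins, not the kernel's:

```python
# First-man / last-man cluster power accounting, in miniature.
class PowerModel:
    def __init__(self, nr_clusters):
        self.use_count = [0] * nr_clusters
        self.cluster_on = [False] * nr_clusters

    def cpu_power_up(self, cpu, cluster):
        if self.use_count[cluster] == 0:
            self.cluster_on[cluster] = True    # first man powers the cluster
        self.use_count[cluster] += 1

    def cpu_power_down(self, cpu, cluster):
        self.use_count[cluster] -= 1
        if self.use_count[cluster] == 0:
            self.cluster_on[cluster] = False   # last man takes it down
```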
    • ARM: multi-cluster PM: secondary kernel entry code · e8db288e
      By Nicolas Pitre
      CPUs in cluster based systems, such as big.LITTLE, have special needs
      when entering the kernel due to a hotplug event, or when resuming from
      a deep sleep mode.
      
      This is vectorized so multiple CPUs can enter the kernel in parallel
      without serialization.
      
      The mcpm prefix stands for "multi cluster power management", however
      this is usable on single cluster systems as well.  Only the basic
      structure is introduced here.  This will be extended with later patches.
      
      To avoid complicating things more than necessary for now, the planned
      work on runtime-adjusted MPIDR-based indexing and dynamic memory
      allocation for cluster states is postponed to a later cycle.  The
      MAX_NR_CLUSTERS and MAX_CPUS_PER_CLUSTER static definitions should be
      sufficient for the systems expected to be available in the near future.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
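The "vectorized" entry can be pictured as a per-(cluster, CPU) table of entry addresses, so each CPU is released independently with no serialization. A Python sketch using the static sizes mentioned in the message (table dimensions and addresses here are illustrative):

```python
# Each (cluster, cpu) slot holds its own kernel entry address; a freshly
# powered-up CPU consults only its own slot.
MAX_NR_CLUSTERS = 2
MAX_CPUS_PER_CLUSTER = 4

mcpm_entry_vectors = [[0] * MAX_CPUS_PER_CLUSTER
                      for _ in range(MAX_NR_CLUSTERS)]

def mcpm_set_entry_vector(cpu, cluster, addr):
    mcpm_entry_vectors[cluster][cpu] = addr

def mcpm_entry_point(cpu, cluster):
    # In the real code the CPU spins until its slot becomes non-zero,
    # then jumps there; here we just return the slot's contents.
    return mcpm_entry_vectors[cluster][cpu]
```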
    • ARM: cacheflush: add synchronization helpers for mixed cache state accesses · 0c91e7e0
      By Nicolas Pitre
      Algorithms used by the MCPM layer rely on state variables which are
      accessed while the cache is either active or inactive, depending
      on the code path and the active state.
      
      This patch introduces generic cache maintenance helpers to provide the
      necessary cache synchronization for such state variables to always hit
      main memory in an ordered way.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Acked-by: Dave Martin <dave.martin@linaro.org>
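Why such helpers are needed, in miniature: a cached writer and a non-cached reader see different copies of a state variable unless dirty data is cleaned to main memory and stale lines are invalidated before reads. A toy model with an explicit cache and backing store (purely illustrative; real maintenance operates on cache lines, not named variables):

```python
# Dict-based model of a write-back cache in front of main memory.
main_memory = {}
cache = {}

def cached_write(var, val):
    cache[var] = val                   # hits the cache, not main memory

def sync_cache_w(var):
    main_memory[var] = cache[var]      # clean: push the dirty data out

def sync_cache_r(var):
    cache.pop(var, None)               # invalidate: drop any stale copy

def cached_read(var):
    if var not in cache:
        cache[var] = main_memory[var]  # line fill from main memory
    return cache[var]

def uncached_read(var):
    return main_memory[var]            # e.g. a CPU running with caches off
```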
  2. 18 Mar 2013 (1 commit)
    • perf,x86: fix wrmsr_on_cpu() warning on suspend/resume · 2a6e06b2
      By Linus Torvalds
      Commit 1d9d8639 ("perf,x86: fix kernel crash with PEBS/BTS after
      suspend/resume") fixed a crash when doing PEBS performance profiling
      after resuming, but in using init_debug_store_on_cpu() to restore the
      DS_AREA MSR it also resulted in a new WARN_ON() triggering.
      
      init_debug_store_on_cpu() uses "wrmsr_on_cpu()", which in turn uses CPU
      cross-calls to do the MSR update.  Which is not really valid at the
      early resume stage, and the warning is quite reasonable.  Now, it all
      happens to _work_, for the simple reason that smp_call_function_single()
      ends up just doing the call directly on the CPU when the CPU number
      matches, but we really should just do the wrmsr() directly instead.
      
      This duplicates the wrmsr() logic, but hopefully we can just remove the
      wrmsr_on_cpu() version eventually.
      Reported-and-tested-by: Parag Warudkar <parag.lkml@gmail.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
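The shape of the fix, modeled in Python: the cross-call path warns when interrupts are still disabled during early resume, while a direct write on the current CPU does not. All names here are simplified stand-ins for the kernel functions mentioned above, not real APIs:

```python
# Model: MSR writes via a cross-CPU call vs. a direct write.
msrs = {}
irqs_enabled = False  # early resume: cross-calls are not valid yet

def smp_call_function_single(cpu, fn, *args):
    if not irqs_enabled:
        # models the WARN_ON() the original code path triggered
        raise RuntimeError("cross-call with IRQs disabled")
    fn(*args)

def wrmsr(msr, val):
    msrs[msr] = val  # direct write on the current CPU: always safe here

def wrmsr_on_cpu(cpu, msr, val):
    smp_call_function_single(cpu, wrmsr, msr, val)

def restore_ds_area_fixed(ds_area):
    # After the fix: write the MSR directly instead of via wrmsr_on_cpu().
    wrmsr("DS_AREA", ds_area)
```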
  3. 16 Mar 2013 (2 commits)
  4. 14 Mar 2013 (2 commits)
  5. 13 Mar 2013 (4 commits)
  6. 12 Mar 2013 (6 commits)
    • ARM: 7670/1: fix the memset fix · 418df63a
      By Nicolas Pitre
      Commit 455bd4c4 ("ARM: 7668/1: fix memset-related crashes caused by
      recent GCC (4.7.2) optimizations") attempted to fix a compliance issue
      with the memset return value.  However the memset itself became broken
      by that patch for misaligned pointers.
      
      This fixes the above by branching over the entry code from the
      misaligned fixup code to avoid reloading the original pointer.
      
      Also, because the function entry alignment is wrong in the Thumb mode
      compilation, that fixup code is moved to the end.
      
      While at it, the entry instructions are slightly reworked to help dual
      issue pipelines.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Tested-by: Alexander Holler <holler@ahsoftware.de>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
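The contract the fix restores, in a byte-array model: memset must return the pointer it was given, even though the implementation internally aligns the pointer before doing word-sized bulk stores (the word size and alignment handling below are illustrative, not the ARM assembly):

```python
def memset(buf, start, c, n):
    """Fill buf[start:start+n] with byte c; return the ORIGINAL offset."""
    p = start
    # leading byte stores until p is word (4-byte) aligned
    while p % 4 and n:
        buf[p] = c; p += 1; n -= 1
    # aligned bulk stores, a word at a time
    while n >= 4:
        buf[p:p + 4] = bytes([c] * 4); p += 4; n -= 4
    # trailing byte stores
    while n:
        buf[p] = c; p += 1; n -= 1
    return start   # the compliance fix: original pointer, not the cursor
```

A misaligned start must come back unchanged, and bytes outside the range must be untouched; losing the original pointer on the misaligned path is exactly the regression this commit repaired.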
    • ARM: spear3xx: Use correct pl080 header file · 27f423fe
      By Arnd Bergmann
      The definitions have moved around recently, causing build errors
      in spear3xx for all configurations:
      
      spear3xx.c:47:5: error: 'PL080_BSIZE_16' undeclared here (not in a function)
      spear3xx.c:47:23: error: 'PL080_CONTROL_SB_SIZE_SHIFT' undeclared here (not in a function)
      spear3xx.c:48:22: error: 'PL080_CONTROL_DB_SIZE_SHIFT' undeclared here (not in a function)
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Alessandro Rubini <rubini@gnudd.com>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
    • mfd: ab8500: Kill "reg" property from binding · d52701d3
      By Arnd Bergmann
      The ab8500 device is a child of the prcmu device, which is a
      memory-mapped bus device whose children are addressed using physical
      memory addresses, not mailboxes, so a mailbox number in the ab8500
      node cannot be parsed from DT.  Nothing uses this number, since it
      was only introduced as part of the failed attempt to clean up prcmu
      mailbox handling, so we can simply remove it.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
    • ARM: socfpga: pl330: Add #dma-cells for generic dma binding support · 0d8abbfd
      By Padmavathi Venna
      This patch adds #dma-cells property to PL330 DMA controller nodes for
      supporting generic dma dt bindings on SOCFPGA platform. #dma-channels
      and #dma-requests are not required now but added in advance.
      Signed-off-by: Padmavathi Venna <padma.v@samsung.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    • ARM: multiplatform: Sort the max gpio numbers. · 2a6ad871
      By Maxime Ripard
      When building a multiplatform kernel, we could end up with a smaller
      number of GPIOs than the number required by the platform the kernel
      is running on.
      
      Sort the max GPIO number by descending order so that we always take the
      highest number required.
      Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
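Why descending order matters, modeled in Python: a chain of #if/#elif tests yields the first enabled entry, which equals the maximum only if the entries are sorted by descending value. Platform names and GPIO counts below are hypothetical:

```python
# 'chain' mimics the preprocessor #elif order: the first enabled platform
# in the chain determines the GPIO limit for the whole kernel image.
def arch_nr_gpio(chain, enabled_platforms):
    for platform, nr_gpios in chain:
        if platform in enabled_platforms:
            return nr_gpios
    return 0  # fallback default

unsorted_chain = [("plat_small", 160), ("plat_big", 512)]
sorted_chain = sorted(unsorted_chain, key=lambda e: e[1], reverse=True)
```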
    • xen: arm: mandate EABI and use generic atomic operations. · 85323a99
      By Ian Campbell
      Rob Herring has observed that c81611c4 "xen: event channel arrays are
      xen_ulong_t and not unsigned long" introduced a compile failure when building
      without CONFIG_AEABI:
      
      /tmp/ccJaIZOW.s: Assembler messages:
      /tmp/ccJaIZOW.s:831: Error: even register required -- `ldrexd r5,r6,[r4]'
      
      Will Deacon pointed out that this is because OABI does not require even base
      registers for 64-bit values. We can avoid this by simply using the existing
      atomic64_xchg operation and the same containerof trick as used by the cmpxchg
      macros. However since this code is used on memory which is shared with the
      hypervisor we require proper atomic instructions and cannot use the generic
      atomic64 callbacks (which are based on spinlocks), therefore add a dependency
      on !GENERIC_ATOMIC64. Since we already depend on !CPU_V6 there isn't much
      downside to this.
      
      While thinking about this we also observed that OABI has different struct
      alignment requirements to EABI, which is a problem for hypercall argument
      structs which are shared with the hypervisor and which must be in EABI layout.
      Since I don't expect people to want to run OABI kernels on Xen, depend
      on CONFIG_AEABI explicitly too (although it also happens to be enforced
      by the !GENERIC_ATOMIC64 requirement).
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Rob Herring <robherring2@gmail.com>
      Acked-by: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
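The substitution the message describes can be illustrated with a 64-bit exchange built from a compare-and-swap retry loop, the way generic xchg-style primitives are commonly constructed. This Python model only shows the loop structure; it sidesteps the real atomicity and register-pairing concerns that motivated the patch:

```python
# A 64-bit cell with cmpxchg, and xchg built on top of it as a retry loop.
MASK64 = (1 << 64) - 1

class Atomic64:
    def __init__(self, value=0):
        self.value = value & MASK64

    def cmpxchg(self, old, new):
        """Store new only if the current value equals old; return the
        value seen, so the caller can tell whether the store happened."""
        cur = self.value
        if cur == old:
            self.value = new & MASK64
        return cur

    def xchg(self, new):
        """Unconditional exchange expressed as a cmpxchg retry loop."""
        while True:
            old = self.value
            if self.cmpxchg(old, new) == old:
                return old
```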
  7. 11 Mar 2013 (8 commits)
  8. 09 Mar 2013 (9 commits)