1. 24 April 2013 (5 commits)
    • ARM: mcpm: Add baremetal voting mutexes · 9762f12d
      Authored by Dave Martin
      This patch adds a simple low-level voting mutex implementation
      to be used to arbitrate during first man selection when no load/store
      exclusive instructions are usable.
      
      For want of a better name, these are called "vlocks".  (I was
      tempted to call them ballot locks, but "block" is way too confusing
      an abbreviation...)
      
      There is no function to wait for the lock to be released, and no
      vlock_lock() function since we don't need these at the moment.
      These could straightforwardly be added if vlocks get used for other
      purposes.
      
      For architectural correctness even Strongly-Ordered memory accesses
      require barriers in order to guarantee that multiple CPUs have a
      coherent view of the ordering of memory accesses.  Whether or not
      this matters depends on hardware implementation details of the
      memory system.  Since the purpose of this code is to provide a clean,
      generic locking mechanism with no platform-specific dependencies the
      barriers should be present to avoid unpleasant surprises on future
      platforms.
      
      Note:
      
        * When taking the lock, we don't care about implicit background
          memory operations and other signalling which may be pending,
          because those are not part of the critical section anyway.
      
          A DMB is sufficient to ensure correctly observed ordering of
          the explicit memory accesses in vlock_trylock.
      
        * No barrier is required after checking the election result,
          because the result is determined by the store to
          VLOCK_OWNER_OFFSET and is already globally observed due to the
          barriers in voting_end.  This means that global agreement on
          the winner is guaranteed, even before the winner is known
          locally.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
    • ARM: mcpm: introduce helpers for platform coherency exit/setup · 7fe31d28
      Authored by Dave Martin
      This provides helper methods to coordinate between CPUs coming down
      and CPUs going up, as well as documentation on the algorithms used,
      so that cluster teardown and setup operations are never performed
      on a cluster simultaneously.
      
      For use in the power_down() implementation:
        * __mcpm_cpu_going_down(unsigned int cluster, unsigned int cpu)
        * __mcpm_outbound_enter_critical(unsigned int cluster)
        * __mcpm_outbound_leave_critical(unsigned int cluster)
        * __mcpm_cpu_down(unsigned int cluster, unsigned int cpu)
      
      The power_up_setup() helper should do platform-specific setup in
      preparation for turning the CPU on, such as invalidating local caches
      or entering coherency.  It must be assembler for now, since it must
      run before the MMU can be switched on.  It is passed the affinity level
      for which initialization should be performed.
      
      Because the mcpm_sync_struct content is looked up and modified with
      the cache enabled or disabled depending on the code path, it is
      crucial to always perform proper cache maintenance so that updates
      reach main memory right away.  The sync_cache_*() helpers are used
      to that end.
      
      Also, to prevent a cached writer from interfering with an adjacent
      non-cached writer, each state variable is placed in a separate
      cache line.
      
      Thanks to Nicolas Pitre and Achin Gupta for the help with this
      patch.
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
    • ARM: mcpm: introduce the CPU/cluster power API · 7c2b8605
      Authored by Nicolas Pitre
      This is the basic API used to handle powering individual CPUs up
      and down in a (multi-)cluster system.  The platform-specific
      backend implementation is also responsible for handling cluster-level
      power when the first/last CPU in a cluster is brought up/down.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
    • ARM: multi-cluster PM: secondary kernel entry code · e8db288e
      Authored by Nicolas Pitre
      CPUs in cluster based systems, such as big.LITTLE, have special needs
      when entering the kernel due to a hotplug event, or when resuming from
      a deep sleep mode.
      
      This is vectorized so multiple CPUs can enter the kernel in parallel
      without serialization.
      
      The mcpm prefix stands for "multi cluster power management", however
      this is usable on single cluster systems as well.  Only the basic
      structure is introduced here.  This will be extended with later patches.
      
      In order not to complicate things more than they currently have to
      be, the planned work to make runtime-adjusted MPIDR-based indexing
      and dynamic memory allocation for cluster states is postponed to a
      later cycle.  The MAX_NR_CLUSTERS and MAX_CPUS_PER_CLUSTER static
      definitions should be sufficient for the systems expected to be
      available in the near future.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
    • ARM: cacheflush: add synchronization helpers for mixed cache state accesses · 0c91e7e0
      Authored by Nicolas Pitre
      Algorithms used by the MCPM layer rely on state variables which are
      accessed while the cache is either active or inactive, depending
      on the code path and the active state.
      
      This patch introduces generic cache maintenance helpers to provide the
      necessary cache synchronization for such state variables to always hit
      main memory in an ordered way.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Acked-by: Dave Martin <dave.martin@linaro.org>
  2. 18 March 2013 (5 commits)
    • Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-arm · 6210d421
      Authored by Linus Torvalds
      Pull ARM fixes from Russell King:
       "Just three fixes this time - a fix for a fix for our memset function,
        fixing the dummy clockevent so that it doesn't interfere with real
        hardware clockevents, and fixing a build error for Tegra."
      
      * 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
        ARM: 7675/1: amba: tegra-ahb: Fix build error w/ PM_SLEEP w/o PM_RUNTIME
        ARM: 7674/1: smp: Avoid dummy clockevent being preferred over real hardware clock-event
        ARM: 7670/1: fix the memset fix
    • Linux 3.9-rc3 · a937536b
      Authored by Linus Torvalds
    • perf,x86: fix link failure for non-Intel configs · 6c4d3bc9
      Authored by David Rientjes
      Commit 1d9d8639 ("perf,x86: fix kernel crash with PEBS/BTS after
      suspend/resume") introduces a link failure since
      perf_restore_debug_store() is only defined for CONFIG_CPU_SUP_INTEL:
      
      	arch/x86/power/built-in.o: In function `restore_processor_state':
      	(.text+0x45c): undefined reference to `perf_restore_debug_store'
      
      Fix it by defining the dummy function appropriately.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • perf,x86: fix wrmsr_on_cpu() warning on suspend/resume · 2a6e06b2
      Authored by Linus Torvalds
      Commit 1d9d8639 ("perf,x86: fix kernel crash with PEBS/BTS after
      suspend/resume") fixed a crash when doing PEBS performance profiling
      after resuming, but in using init_debug_store_on_cpu() to restore the
      DS_AREA MSR it also resulted in a new WARN_ON() triggering.
      
      init_debug_store_on_cpu() uses "wrmsr_on_cpu()", which in turn uses CPU
      cross-calls to do the MSR update.  Which is not really valid at the
      early resume stage, and the warning is quite reasonable.  Now, it all
      happens to _work_, for the simple reason that smp_call_function_single()
      ends up just doing the call directly on the CPU when the CPU number
      matches, but we really should just do the wrmsr() directly instead.
      
      This duplicates the wrmsr() logic, but hopefully we can just remove the
      wrmsr_on_cpu() version eventually.
      Reported-and-tested-by: Parag Warudkar <parag.lkml@gmail.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs · 08637024
      Authored by Linus Torvalds
      Pull btrfs fixes from Chris Mason:
       "Eric's rcu barrier patch fixes a long standing problem with our
        unmount code hanging on to devices in workqueue helpers.  Liu Bo
        nailed down a difficult assertion for in-memory extent mappings."
      
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
        Btrfs: fix warning of free_extent_map
        Btrfs: fix warning when creating snapshots
        Btrfs: return as soon as possible when edquot happens
        Btrfs: return EIO if we have extent tree corruption
        btrfs: use rcu_barrier() to wait for bdev puts at unmount
        Btrfs: remove btrfs_try_spin_lock
        Btrfs: get better concurrency for snapshot-aware defrag work
  3. 16 March 2013 (11 commits)
  4. 15 March 2013 (17 commits)
  5. 14 March 2013 (2 commits)