1. 11 Jun 2014: 1 commit
  2. 28 May 2014: 1 commit
    • cpuidle: cpuidle-cps: add MIPS CPS cpuidle driver · d0508944
      Paul Burton committed
      This patch adds a cpuidle driver for systems based around the MIPS
      Coherent Processing System (CPS) architecture. It supports four idle
      states:
      
        - The standard MIPS wait instruction.
      
        - The non-coherent wait, clock gated & power gated states exposed by
          the recently added pm-cps layer.
      
      The pm-cps layer is used to enter all the deep idle states. Since cores
      in the clock or power gated states cannot service interrupts, the
      gic_send_ipi_single function is modified to send a power up command for
      the appropriate core to the CPC in cases where the target CPU has marked
      itself potentially incoherent.
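       
       For illustration only, a rough sketch of what a four-state cpuidle state
       table of this kind could look like. The state names, latency/residency
       numbers and the single enter callback below are hypothetical placeholders,
       not the actual cpuidle-cps code:
       
       #include <linux/cpuidle.h>
       #include <linux/module.h>
       
       /* Hypothetical enter callback; the real driver funnels the deep
        * states through the pm-cps layer rather than a plain wait. */
       static int cps_sketch_wait(struct cpuidle_device *dev,
                                  struct cpuidle_driver *drv, int index)
       {
               __asm__ __volatile__("wait");   /* standard MIPS wait */
               return index;
       }
       
       static struct cpuidle_driver cps_sketch_driver = {
               .name  = "cps-sketch-idle",
               .owner = THIS_MODULE,
               .states = {
                       { .name = "WAIT",    .desc = "MIPS wait instruction",
                         .exit_latency = 1,   .target_residency = 1,
                         .enter = cps_sketch_wait },
                       { .name = "NCWAIT",  .desc = "non-coherent wait (pm-cps)",
                         .exit_latency = 200, .target_residency = 450,
                         .enter = cps_sketch_wait /* placeholder */ },
                       { .name = "CLKGATE", .desc = "core clock gated (pm-cps)",
                         .exit_latency = 300, .target_residency = 700,
                         .enter = cps_sketch_wait /* placeholder */ },
                       { .name = "PWRGATE", .desc = "core power gated (pm-cps)",
                         .exit_latency = 600, .target_residency = 1000,
                         .enter = cps_sketch_wait /* placeholder */ },
               },
               .state_count = 4,
       };
       
       /* Registration would then boil down to:
        *         cpuidle_register(&cps_sketch_driver, NULL);
        */
       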
       Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      d0508944
  3. 26 May 2014: 1 commit
  4. 09 May 2014: 1 commit
  5. 07 May 2014: 1 commit
  6. 01 May 2014: 3 commits
  7. 30 Apr 2014: 1 commit
  8. 18 Apr 2014: 1 commit
  9. 08 Apr 2014: 1 commit
  10. 12 Mar 2014: 1 commit
    • cpuidle: delay enabling interrupts until all coupled CPUs leave idle · 0b89e9aa
      Paul Burton committed
      As described by a comment at the end of cpuidle_enter_state_coupled it
      can be inefficient for coupled idle states to return with IRQs enabled
      since they may proceed to service an interrupt instead of clearing the
      coupled idle state. Until they have finished & cleared the idle state
      all CPUs coupled with them will spin rather than being able to enter a
      safe idle state.
      
      Commits e1689795 "cpuidle: Add common time keeping and irq
      enabling" and 554c06ba "cpuidle: remove en_core_tk_irqen flag" led
       to cpuidle_enter_state() enabling interrupts for all idle states,
       including coupled ones, making this inefficiency unavoidable for drivers
       and making the local_irq_enable near the end of cpuidle_enter_state_coupled
      redundant. This patch avoids enabling interrupts in cpuidle_enter_state
      after a coupled state has been entered, allowing them to remain disabled
      until all coupled CPUs have exited the idle state and
      cpuidle_enter_state_coupled re-enables them.
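       
       In rough terms, and only as a sketch of the idea rather than the exact
       diff, the change amounts to making the final interrupt re-enable in
       cpuidle_enter_state() conditional on the state not being a coupled one:
       
       /* Simplified sketch of drivers/cpuidle/cpuidle.c; the real function
        * also does time keeping, residency accounting, etc. */
       int cpuidle_enter_state(struct cpuidle_device *dev,
                               struct cpuidle_driver *drv, int index)
       {
               int entered_state = drv->states[index].enter(dev, drv, index);
       
               /*
                * For coupled states, keep IRQs disabled here; they are
                * re-enabled by cpuidle_enter_state_coupled() once all coupled
                * CPUs have left the idle state.
                */
               if (!cpuidle_state_is_coupled(dev, drv, entered_state))
                       local_irq_enable();
       
               return entered_state;
       }
       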
      
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
       Signed-off-by: Paul Burton <paul.burton@imgtec.com>
       Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      0b89e9aa
  11. 11 Mar 2014: 2 commits
  12. 07 Mar 2014: 1 commit
  13. 06 Mar 2014: 5 commits
  14. 05 Mar 2014: 2 commits
  15. 25 Feb 2014: 3 commits
    • smp: Rename __smp_call_function_single() to smp_call_function_single_async() · c46fff2a
      Frederic Weisbecker committed
      The name __smp_call_function_single() doesn't tell much about the
      properties of this function, especially when compared to
      smp_call_function_single().
      
       The comments above the implementation are also misleading. The main
       point of this function is actually not to be able to embed the csd
       in an object. This is actually a requirement that results from the
       purpose of this function, which is to raise an IPI asynchronously.
      
      As such it can be called with interrupts disabled. And this feature
      comes at the cost of the caller who then needs to serialize the
      IPIs on this csd.
      
       Let's rename the function and enhance the comments so that they reflect
       these properties.
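       
       As a hedged illustration of the pattern being described (the smp API
       calls are real; the structure and function names here are made up): an
       object embeds its csd, the IPI is raised asynchronously even with IRQs
       disabled, and the caller must not reuse the csd until the previous IPI
       has completed.
       
       #include <linux/kernel.h>
       #include <linux/smp.h>
       
       /* Hypothetical object embedding its own csd. */
       struct my_work {
               struct call_single_data csd;  /* lifetime/serialization owned by caller */
               int payload;
       };
       
       /* Runs on the target CPU in IPI context. */
       static void my_ipi_handler(void *info)
       {
               struct my_work *w = info;
       
               pr_info("handling payload %d\n", w->payload);
       }
       
       static void kick_cpu(int cpu, struct my_work *w)
       {
               w->csd.func = my_ipi_handler;
               w->csd.info = w;
               /* Asynchronous: returns immediately and is safe with IRQs
                * disabled, but the caller must not reuse w->csd before the
                * IPI has run. */
               smp_call_function_single_async(cpu, &w->csd);
       }
       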
       Suggested-by: Christoph Hellwig <hch@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@fb.com>
       Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
       Signed-off-by: Jens Axboe <axboe@fb.com>
      c46fff2a
    • smp: Remove wait argument from __smp_call_function_single() · fce8ad15
      Frederic Weisbecker committed
      The main point of calling __smp_call_function_single() is to send
      an IPI in a pure asynchronous way. By embedding a csd in an object,
      a caller can send the IPI without waiting for a previous one to complete
      as is required by smp_call_function_single() for example. As such,
      sending this kind of IPI can be safe even when irqs are disabled.
      
      This flexibility comes at the expense of the caller who then needs to
      synchronize the csd lifecycle by himself and make sure that IPIs on a
      single csd are serialized.
      
       This is how __smp_call_function_single() works when wait = 0, and this
       use case is relevant.
       
       Now there doesn't seem to be any use case with wait = 1 that can't be
       covered by smp_call_function_single() instead, which is safer. Let's look
       at the two possible scenarios:
      
       1) The user calls __smp_call_function_single(wait = 1) on a csd embedded
          in an object. It looks like a nice and convenient pattern at first
          sight because we can then retrieve the object from the IPI handler easily.
       
          But actually it is a waste of memory space in the object since the csd
          can be allocated from the stack by smp_call_function_single(wait = 1)
          and the object can be passed as the IPI argument.
      
         Besides that, embedding the csd in an object is more error prone
         because the caller must take care of the serialization of the IPIs
         for this csd.
      
       2) The user calls __smp_call_function_single(wait = 1) on a csd that
          is allocated on the stack. That's OK, but smp_call_function_single()
          can do it as well and it already takes care of the allocation on the
          stack. Again it's simpler and less error prone.
      
       Therefore, using the underscore-prefixed API version with wait = 1
       is a bad pattern and a sign that the caller can do something safer and
       simpler.
      
       There was a single user of that which has just been converted.
       So let's remove this option to discourage further users.
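       
       For contrast with the asynchronous pattern above, a hedged sketch of the
       alternative the text argues for: with wait = 1 the csd lives on the
       caller's stack inside smp_call_function_single(), and the object is
       simply passed as the IPI argument (the object and function names are
       hypothetical):
       
       #include <linux/smp.h>
       
       struct my_obj {                         /* hypothetical object type */
               int data;
       };
       
       static void handle_obj(void *info)
       {
               struct my_obj *obj = info;      /* acts on obj on the target CPU */
               obj->data++;
       }
       
       static void poke_cpu_sync(int cpu, struct my_obj *obj)
       {
               /* The csd is allocated on the stack inside
                * smp_call_function_single(); nothing needs to be embedded in
                * *obj. wait = 1 blocks until the handler has finished on the
                * target CPU. */
               smp_call_function_single(cpu, handle_obj, obj, 1);
       }
       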
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jens Axboe <axboe@fb.com>
       Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
       Signed-off-by: Jens Axboe <axboe@fb.com>
      fce8ad15
    • drivers: Enable building of Kirkwood drivers for mach-mvebu · ff1f0018
      Andrew Lunn committed
       With the move of kirkwood into mach-mvebu, the drivers' Kconfig entries need
       tweaking to allow the Kirkwood-specific drivers to be built.
       Signed-off-by: Andrew Lunn <andrew@lunn.ch>
       Acked-by: Arnd Bergmann <arnd@arndb.de>
       Acked-by: Mark Brown <broonie@linaro.org>
       Acked-by: Kishon Vijay Abraham I <kishon@ti.com>
       Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
       Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
       Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Richard Purdie <rpurdie@rpsys.net>
      Cc: Bryan Wu <cooloney@gmail.com>
      Cc: Zhang Rui <rui.zhang@intel.com>
      Cc: Eduardo Valentin <eduardo.valentin@ti.com>
       Signed-off-by: Jason Cooper <jason@lakedaemon.net>
      ff1f0018
  16. 23 Feb 2014: 1 commit
  17. 12 Feb 2014: 1 commit
  18. 11 Feb 2014: 1 commit
    • sched/idle, PPC: Remove redundant cpuidle_idle_call() · d8c6ad31
      Nicolas Pitre committed
      The core idle loop now takes care of it.  However a few things need
      checking:
      
      - Invocation of cpuidle_idle_call() in pseries_lpar_idle() happened
        through arch_cpu_idle() and was therefore always preceded by a call
        to ppc64_runlatch_off().  To preserve this property now that
        cpuidle_idle_call() is invoked directly from core code, a call to
        ppc64_runlatch_off() has been added to idle_loop_prolog() in
        platforms/pseries/processor_idle.c.
      
       - Similarly, cpuidle_idle_call() was followed by ppc64_runlatch_on(),
         so a call to the latter has been added to idle_loop_epilog().
      
      - And since arch_cpu_idle() always made sure to re-enable IRQs if they
        were not enabled, this is now
        done in idle_loop_epilog() as well.
      
       The above was done in order to keep the execution flow close to the
       original.  I don't know if that was strictly necessary. Someone well
       acquainted with the platform details might find some room for possible
       optimizations.
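       
       A simplified sketch of the resulting prolog/epilog shape (the real
       idle_loop_prolog()/idle_loop_epilog() also do PURR accounting; the
       details here are approximations, not the exact code):
       
       static inline void idle_loop_prolog(unsigned long *in_purr)
       {
               ppc64_runlatch_off();        /* previously done by arch_cpu_idle() */
               *in_purr = mfspr(SPRN_PURR);
               /* ... */
       }
       
       static inline void idle_loop_epilog(unsigned long in_purr)
       {
               /* ... PURR accounting ... */
               ppc64_runlatch_on();
       
               /* arch_cpu_idle() used to guarantee IRQs were re-enabled. */
               if (irqs_disabled())
                       local_irq_enable();
       }
       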
       Signed-off-by: Nicolas Pitre <nico@linaro.org>
       Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: linux-sh@vger.kernel.org
      Cc: linux-pm@vger.kernel.org
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: linaro-kernel@lists.linaro.org
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
       Signed-off-by: Peter Zijlstra <peterz@infradead.org>
       Link: http://lkml.kernel.org/n/tip-47o4m03citrfg9y1vxic5asb@git.kernel.org
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d8c6ad31
  19. 07 Feb 2014: 1 commit
    • cpuidle: Handle clockevents_notify(BROADCAST_ENTER) failure · ba8f20c2
      Preeti U Murthy committed
      Some archs set the CPUIDLE_FLAG_TIMER_STOP flag for idle states in which the
      local timers stop. The cpuidle_idle_call() currently handles such idle states
       by calling into the broadcast framework so as to wake up CPUs at their next
       wakeup event. With the hrtimer mode of broadcast, the BROADCAST_ENTER call
       into the broadcast framework can fail for archs that do not have an external
       clock device to handle wakeups, and the CPU in question then has to be made
       the standby CPU. This patch handles such cases by failing the call into
      cpuidle so that the arch can take some default action. The arch will certainly
      not enter a similar idle state because a failed cpuidle call will also implicitly
      indicate that the broadcast framework has not registered this CPU to be woken up.
      Hence we are safe if we fail the cpuidle call.
      
      In the process move the functions that trace idle statistics just before and
      after the entry and exit into idle states respectively. In other
      scenarios where the call to cpuidle fails, we end up not tracing idle
      entry and exit since a decision on an idle state could not be taken. Similarly
      when the call to broadcast framework fails, we skip tracing idle statistics
      because we are in no further position to take a decision on an alternative
      idle state to enter into.
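       
       A sketch of where such a failure gets caught, loosely modeled on the
       cpuidle_idle_call() of that era (a fragment, not the full function; the
       exact surrounding code differs):
       
       /* If the chosen state stops the local timer, hand wakeups to the
        * broadcast framework; if that fails (e.g. no external clock device in
        * hrtimer-broadcast mode), fail the whole cpuidle call so the arch can
        * fall back to its default idle loop. */
       if (drv->states[next_state].flags & CPUIDLE_FLAG_TIMER_STOP) {
               if (clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &dev->cpu))
                       return -EBUSY;
       }
       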
       Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
      Cc: deepthi@linux.vnet.ibm.com
      Cc: paulmck@linux.vnet.ibm.com
      Cc: fweisbec@gmail.com
      Cc: paulus@samba.org
      Cc: srivatsa.bhat@linux.vnet.ibm.com
      Cc: svaidy@linux.vnet.ibm.com
      Cc: peterz@infradead.org
      Cc: benh@kernel.crashing.org
       Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: linuxppc-dev@lists.ozlabs.org
       Link: http://lkml.kernel.org/r/20140207080652.17187.66344.stgit@preeti.in.ibm.com
       Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      ba8f20c2
  20. 29 Jan 2014: 6 commits
  21. 30 Dec 2013: 1 commit
  22. 04 Dec 2013: 1 commit
    • cpuidle: Check for dev before deregistering it. · 813e8e3d
      Konrad Rzeszutek Wilk committed
       If not, we could end up in the unfortunate situation where
       we dereference a NULL pointer because we have cpuidle disabled.
      
      This is the case when booting under Xen (which uses the
      ACPI P/C states but disables the CPU idle driver) - and can
      be easily reproduced when booting with cpuidle.off=1.
      
      BUG: unable to handle kernel NULL pointer dereference at           (null)
      IP: [<ffffffff8156db4a>] cpuidle_unregister_device+0x2a/0x90
      .. snip..
      Call Trace:
       [<ffffffff813b15b4>] acpi_processor_power_exit+0x3c/0x5c
       [<ffffffff813af0a9>] acpi_processor_stop+0x61/0xb6
        [<ffffffff814215bf>] __device_release_driver+0x../0x..
        [<ffffffff81421653>] device_release_driver+0x23/0x30
       [<ffffffff81420ed8>] bus_remove_device+0x108/0x180
       [<ffffffff8141d9d9>] device_del+0x129/0x1c0
       [<ffffffff813cb4b0>] ? unregister_xenbus_watch+0x1f0/0x1f0
       [<ffffffff8141da8e>] device_unregister+0x1e/0x60
       [<ffffffff814243e9>] unregister_cpu+0x39/0x60
       [<ffffffff81019e03>] arch_unregister_cpu+0x23/0x30
       [<ffffffff813c3c51>] handle_vcpu_hotplug_event+0xc1/0xe0
       [<ffffffff813cb4f5>] xenwatch_thread+0x45/0x120
       [<ffffffff810af010>] ? abort_exclusive_wait+0xb0/0xb0
       [<ffffffff8108ec42>] kthread+0xd2/0xf0
       [<ffffffff8108eb70>] ? kthread_create_on_node+0x180/0x180
       [<ffffffff816ce17c>] ret_from_fork+0x7c/0xb0
       [<ffffffff8108eb70>] ? kthread_create_on_node+0x180/0x180
      
      This problem also appears in 3.12 and could be a candidate for backport.
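       
       The guard the fix adds is small; as a simplified sketch of
       cpuidle_unregister_device() after the change (not the verbatim diff):
       
       void cpuidle_unregister_device(struct cpuidle_device *dev)
       {
               /* dev can be NULL, e.g. when the CPU idle driver is disabled
                * (cpuidle.off=1, or booting under Xen as described above),
                * so bail out instead of dereferencing it. */
               if (!dev || dev->registered == 0)
                       return;
       
               /* ... normal teardown ... */
       }
       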
       Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
       Cc: All applicable <stable@vger.kernel.org>
       Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      813e8e3d
  23. 30 Oct 2013: 3 commits