1. 05 Apr 2013, 5 commits
  2. 26 Mar 2013, 1 commit
  3. 25 Mar 2013, 2 commits
  4. 23 Mar 2013, 7 commits
  5. 16 Mar 2013, 3 commits
    • timekeeping: utilize the suspend-nonstop clocksource to count suspended time · e445cf1c
      Feng Tang authored
      There are some new processors whose TSC clocksource won't stop during
      suspend. Currently, after the system resumes, the kernel will use the
      persistent clock or the RTC to compensate for the sleep time, but with
      these nonstop clocksources we can skip the special compensation from
      external sources and just use the current clocksource for time
      recounting.
      
      This can solve some time drift bugs caused by some not-so-accurate or
      error-prone RTC devices.
      
      The current way to count suspended time is to first try the persistent
      clock, and then fall back to the RTC if the persistent clock can't be
      used. This patch changes that order to:
      	suspend-nonstop clocksource -> persistent clock -> RTC
      
      When counting the sleep time with a nonstop clocksource, use an accurate
      way suggested by Jason Gunthorpe to cover very large delta cycles.
      Signed-off-by: Feng Tang <feng.tang@intel.com>
      [jstultz: Small optimization, avoiding re-reading the clocksource]
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      e445cf1c
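      
      For illustration, the overflow problem the changelog alludes to can be
      sketched standalone: with a clocksource-style conversion
      ns = (cycles * mult) >> shift, a multi-day suspend can make
      cycles * mult overflow 64 bits, so a large delta has to be converted in
      chunks. A minimal sketch with made-up constants, not the kernel's
      actual code:
      
          #include <stdint.h>
          #include <stdio.h>
          
          /* Convert a possibly huge cycle delta to nanoseconds using a
           * mult/shift pair, processing at most 'max' cycles at a time so
           * that the multiplication never overflows a uint64_t. */
          static uint64_t cycles_to_ns_safe(uint64_t cycles, uint32_t mult,
                                            uint32_t shift)
          {
              uint64_t max = UINT64_MAX / mult;  /* largest safe chunk */
              uint64_t ns = 0;
          
              while (cycles > max) {             /* oversized delta ... */
                  ns += (max * mult) >> shift;
                  cycles -= max;                 /* ... chunk by chunk */
              }
              return ns + ((cycles * mult) >> shift);
          }
          
          int main(void)
          {
              /* ~2.5 GHz TSC: mult/shift chosen so one cycle ~= 0.4 ns */
              uint64_t week = 2500000000ULL * 3600 * 24 * 7;
          
              printf("%llu ns\n",
                     (unsigned long long)cycles_to_ns_safe(week, 419430, 20));
              return 0;
          }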
    • timekeeping: Use inject_offset in warp_clock · 7859e404
      John Stultz authored
      When warping the clock (from a local-time RTC), use
      timekeeping_inject_offset() to atomically add the offset.
      
      This avoids any minor time error caused by the delay between
      reading the time and then setting the adjusted time.
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      7859e404
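      
      For context, timekeeping_inject_offset() is also reachable from
      userspace via adjtimex(ADJ_SETOFFSET), so the race being avoided here,
      time advancing between "read the time" and "set the adjusted time",
      can be sidestepped the same way from a program. A small example
      (requires CAP_SYS_TIME; steps the clock forward one second in a single
      atomic call):
      
          #include <stdio.h>
          #include <sys/timex.h>
          
          int main(void)
          {
              struct timex tx = { 0 };
          
              tx.modes = ADJ_SETOFFSET;  /* add the offset atomically ... */
              tx.time.tv_sec = 1;        /* ... instead of read/add/set  */
              tx.time.tv_usec = 0;
          
              if (adjtimex(&tx) == -1) {
                  perror("adjtimex");
                  return 1;
              }
              return 0;
          }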
    • timekeeping: Avoid adjust kernel time once hwclock kept in UTC time · c30bd099
      Dong Zhu authored
      If the Hardware Clock is kept in local time, the kernel will adjust the
      time to be UTC time. But if the Hardware Clock is kept in UTC time, the
      system makes a dummy settimeofday call first (sys_tz.tz_minuteswest = 0)
      to make sure the time is not shifted, so at that point it is not
      necessary to set the kernel time once sys_tz.tz_minuteswest is zero.
      Signed-off-by: Dong Zhu <bluezhudong@gmail.com>
      [jstultz: Updated to merge with conflicting changes]
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      c30bd099
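      
      The "dummy settimeofday call" mentioned above is the classic
      timezone-only call that boot scripts (hwclock and friends) make:
      passing a NULL time and only a struct timezone informs the kernel of
      the RTC offset, and the first such call is when the kernel may warp
      the clock. A minimal example of that call shape:
      
          #include <stddef.h>
          #include <sys/time.h>
          
          int main(void)
          {
              /* RTC kept in UTC: no offset, so nothing should be warped */
              struct timezone tz = { .tz_minuteswest = 0, .tz_dsttime = 0 };
          
              /* tv == NULL: set only the timezone, leave the time alone */
              return settimeofday(NULL, &tz);
          }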
  6. 13 Mar 2013, 3 commits
    • tick: Provide a check for a forced broadcast pending · eaa907c5
      Thomas Gleixner authored
      On the CPU which gets woken along with the target CPU of the broadcast
      the following happens:
      
        deep_idle()
      			<-- spurious wakeup
        broadcast_exit()
          set forced bit
        
        enable interrupts
          
      			<-- Nothing happens
      
        disable interrupts
      
        broadcast_enter()
      			<-- Here we observe the forced bit is set
        deep_idle()
      
      Now after that the target CPU of the broadcast runs the broadcast
      handler and finds the other CPU in both the broadcast and the forced
      mask, sends the IPI and stuff gets back to normal.
      
      So it's not actually harmful, just more evidence for the theory that
      hardware designers have access to very special drug supplies.
      
      Now there is no point in going back to deep idle just to wake up again
      right away via an IPI. Provide a check which allows the idle code to
      avoid the deep idle transition.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: LAK <linux-arm-kernel@lists.infradead.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Arjan van de Veen <arjan@infradead.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: Jason Liu <liu.h.jason@gmail.com>
      Link: http://lkml.kernel.org/r/20130306111537.565418308@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      eaa907c5
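      
      A rough sketch of how idle code can use such a check (hypothetical
      helper names; the real interface this series added lives in the tick
      broadcast code): if the forced bit says a broadcast IPI is imminent,
      entering deep idle would only lead to an immediate wakeup, so stay in
      a shallow state instead.
      
          /* idle loop fragment, sketch only */
          while (!need_resched()) {
              if (forced_broadcast_pending())  /* hypothetical name */
                  cpu_relax();                 /* IPI is on its way: poll */
              else
                  enter_deep_idle_state();     /* hypothetical name */
          }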
    • tick: Handle broadcast wakeup of multiple cpus · 989dcb64
      Thomas Gleixner authored
      Some brilliant hardware implementations wake multiple cores when the
      broadcast timer fires. This leads to the following interesting
      problem:
      
      CPU0				CPU1
      wakeup from idle		wakeup from idle
      
      leave broadcast mode		leave broadcast mode
       restart per cpu timer		 restart per cpu timer
      				go back to idle
      handle broadcast
       (empty mask)
      				enter broadcast mode
      				program broadcast device
      enter broadcast mode
      program broadcast device
      
      So what happens is that due to the forced reprogramming of the cpu
      local timer, we need to set an event in the future. Now if we manage to
      go back to idle before the timer fires, we switch off the timer and
      arm the broadcast device with an already expired time (covered by
      forced mode). So in the worst case we repeat the above ping pong
      forever.
      					
      Unfortunately we have no information about what caused the wakeup, but
      we can check the current time against the expiry time of the local cpu.
      If the local event is already in the past, we know that the broadcast
      timer is about to fire and send an IPI. So we mark ourselves as an IPI
      target even if we left broadcast mode and avoid the reprogramming of
      the local cpu timer.
      
      This still leaves the possibility that a CPU which is not handling the
      broadcast interrupt is going to reach idle again before the IPI
      arrives. This can't be solved in the core code and will be handled in
      follow up patches.
      Reported-by: Jason Liu <liu.h.jason@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: LAK <linux-arm-kernel@lists.infradead.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Arjan van de Veen <arjan@infradead.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Link: http://lkml.kernel.org/r/20130306111537.492045206@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      989dcb64
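      
      The check described in the changelog amounts to a comparison on the
      broadcast-exit path. A sketch, with assumed field and mask names, where
      dev is the cpu-local clock_event_device:
      
          /* leaving broadcast mode, sketch only */
          if (ktime_compare(dev->next_event, ktime_get()) <= 0) {
              /*
               * Our local event is already in the past: the broadcast
               * timer is about to fire and IPI us. Mark this cpu as an
               * IPI target and skip reprogramming the local timer.
               */
              cpumask_set_cpu(cpu, pending_mask);  /* assumed mask name */
              return;
          }
          clockevents_program_event(dev, dev->next_event, 1);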
    • tick: Avoid programming the local cpu timer if broadcast pending · 26517f3e
      Thomas Gleixner authored
      If the local cpu timer stops in deep idle, we arm the broadcast device
      and get woken by an IPI. Now when we return from deep idle we reenable
      the local cpu timer unconditionally before handling the IPI. But
      that's a pointless exercise: the timer is already expired and the IPI
      is on the way. And it's an expensive exercise as we use the forced
      reprogramming mode so that we do not lose a timer event. This forced
      reprogramming will loop at least once in the retry.
      
      To avoid this reprogramming, we mark the cpu in a pending bit mask
      before we send the IPI. Now when the IPI target cpu wakes up, it will
      see the pending bit set and skip the reprogramming. The reprogramming
      of the cpu local timer will happen in the IPI handler which runs the
      cpu local timer interrupt function.
      Reported-by: Jason Liu <liu.h.jason@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: LAK <linux-arm-kernel@lists.infradead.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Arjan van de Veen <arjan@infradead.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Link: http://lkml.kernel.org/r/20130306111537.431082074@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      26517f3e
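      
      Sketched, the two halves of that handshake look like this (assumed
      mask and helper names):
      
          /* broadcast handler, before sending the wakeup IPI */
          cpumask_set_cpu(cpu, broadcast_pending_mask);  /* assumed name */
          send_timer_ipi(cpu);                           /* assumed name */
          
          /* IPI target, on return from deep idle: only reprogram (in the
           * expensive forced mode) if no broadcast IPI is already pending;
           * the IPI handler runs the local timer interrupt code anyway */
          if (!cpumask_test_and_clear_cpu(cpu, broadcast_pending_mask))
              clockevents_program_event(dev, dev->next_event, 1 /* force */);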
  7. 09 Mar 2013, 1 commit
  8. 07 Mar 2013, 5 commits
  9. 03 Mar 2013, 2 commits
    • fix compat_sys_rt_sigprocmask() · db61ec29
      Al Viro authored
      Converting bitmask to 32bit granularity is fine, but we'd better
      _do_ something with the result.  Such as "copy it to userland"...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      db61ec29
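      
      The shape of the fix, sketched (the conversion helper and the types
      are the kernel's compat machinery; 'oset' is the userland pointer):
      
          sigset_t old_set;
          compat_sigset_t old32;
          
          if (oset) {
              sigset_to_compat(&old32, &old_set);  /* 64 -> 32bit words */
              /* ... and the part that was missing: hand it to userland */
              if (copy_to_user(oset, &old32, sizeof(old32)))
                  return -EFAULT;
          }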
    • trace/ring_buffer: handle 64bit aligned structs · 649508f6
      James Hogan authored
      Some 32 bit architectures require 64 bit values to be aligned (for
      example Meta which has 64 bit read/write instructions). These require 8
      byte alignment of event data too, so use
      !CONFIG_HAVE_64BIT_ALIGNED_ACCESS instead of !CONFIG_64BIT ||
      CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to decide alignment, and align
      buffer_data_page::data accordingly.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org> (previous version subtly different)
      649508f6
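      
      The resulting alignment decision reduces to one preprocessor test; a
      sketch of the shape, paraphrased from the description above:
      
          /* Force 8-byte alignment of event data only where the arch
           * actually requires aligned 64-bit accesses. The old test,
           * !CONFIG_64BIT || CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS,
           * wrongly skipped 32-bit archs like Meta. */
          #ifdef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
          # define RB_FORCE_8BYTE_ALIGNMENT  1
          # define RB_ARCH_ALIGNMENT         8U
          #else
          # define RB_FORCE_8BYTE_ALIGNMENT  0
          # define RB_ARCH_ALIGNMENT         RB_ALIGNMENT
          #endif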
  10. 02 Mar 2013, 8 commits
  11. 01 Mar 2013, 1 commit
    • irq: Don't re-enable interrupts at the end of irq_exit · 4cd5d111
      Frederic Weisbecker authored
      Commit 74eed016
      "irq: Ensure irq_exit() code runs with interrupts disabled"
      restores the interrupt flags at the end of irq_exit() for archs
      that don't define __ARCH_IRQ_EXIT_IRQS_DISABLED.
      
      However, always returning from irq_exit() with interrupts
      disabled should not be a problem for these archs: prior to
      that commit, this was already happening anytime we processed
      pending softirqs anyway.
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      4cd5d111
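      
      After this change irq_exit() simply ends with interrupts off; a sketch
      of the resulting shape (body details elided):
      
          void irq_exit(void)
          {
          #ifndef __ARCH_IRQ_EXIT_IRQS_DISABLED
                  local_irq_disable();  /* was local_irq_save(flags) */
          #else
                  WARN_ON_ONCE(!irqs_disabled());
          #endif
                  /* ... account irq time, run pending softirqs ... */
          
                  /* no local_irq_restore() here: return with irqs off */
          }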
  12. 28 Feb 2013, 2 commits
    • hlist: drop the node parameter from iterators · b67bfe0d
      Sasha Levin authored
      I'm not sure why, but while the list for-each-entry iterator was
      conceived as
      
              list_for_each_entry(pos, head, member)
      
      the hlist ones were greedy and wanted an extra parameter:
      
              hlist_for_each_entry(tpos, pos, head, member)
      
      Why did they need an extra pos parameter? I'm not quite sure. Not only
      do they not really need it, it also prevents the iterator from looking
      exactly like the list iterator, which is unfortunate.
      
      Besides the semantic patch, there was some manual work required:
      
       - Fix up the actual hlist iterators in linux/list.h
       - Fix up the declaration of other iterators based on the hlist ones.
       - A very small number of places were using the 'node' parameter; these
       were modified to use 'obj->member' instead.
       - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
       properly, so those had to be fixed up manually.
      
      The semantic patch which is mostly the work of Peter Senna Tschudin is here:
      
      @@
      iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
      
      type T;
      expression a,c,d,e;
      identifier b;
      statement S;
      @@
      
      -T b;
          <+... when != b
      (
      hlist_for_each_entry(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue(a,
      - b,
      c) S
      |
      hlist_for_each_entry_from(a,
      - b,
      c) S
      |
      hlist_for_each_entry_rcu(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_rcu_bh(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue_rcu_bh(a,
      - b,
      c) S
      |
      for_each_busy_worker(a, c,
      - b,
      d) S
      |
      ax25_uid_for_each(a,
      - b,
      c) S
      |
      ax25_for_each(a,
      - b,
      c) S
      |
      inet_bind_bucket_for_each(a,
      - b,
      c) S
      |
      sctp_for_each_hentry(a,
      - b,
      c) S
      |
      sk_for_each(a,
      - b,
      c) S
      |
      sk_for_each_rcu(a,
      - b,
      c) S
      |
      sk_for_each_from
      -(a, b)
      +(a)
      S
      + sk_for_each_from(a) S
      |
      sk_for_each_safe(a,
      - b,
      c, d) S
      |
      sk_for_each_bound(a,
      - b,
      c) S
      |
      hlist_for_each_entry_safe(a,
      - b,
      c, d, e) S
      |
      hlist_for_each_entry_continue_rcu(a,
      - b,
      c) S
      |
      nr_neigh_for_each(a,
      - b,
      c) S
      |
      nr_neigh_for_each_safe(a,
      - b,
      c, d) S
      |
      nr_node_for_each(a,
      - b,
      c) S
      |
      nr_node_for_each_safe(a,
      - b,
      c, d) S
      |
      - for_each_gfn_sp(a, c, d, b) S
      + for_each_gfn_sp(a, c, d) S
      |
      - for_each_gfn_indirect_valid_sp(a, c, d, b) S
      + for_each_gfn_indirect_valid_sp(a, c, d) S
      |
      for_each_host(a,
      - b,
      c) S
      |
      for_each_host_safe(a,
      - b,
      c, d) S
      |
      for_each_mesh_entry(a,
      - b,
      c, d) S
      )
          ...+>
      
      [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
      [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
      [akpm@linux-foundation.org: checkpatch fixes]
      [akpm@linux-foundation.org: fix warnings]
      [akpm@linux-foundation.org: redo intrusive kvm changes]
      Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b67bfe0d
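      
      The user-visible effect of the conversion, as a small example (kernel
      code, using the iterators named above):
      
          struct item {
                  int                     val;
                  struct hlist_node       node;
          };
          
          static int sum(struct hlist_head *head)
          {
                  struct item *pos;
                  int total = 0;
          
                  /*
                   * Before: a spare struct hlist_node *n was required:
                   *         hlist_for_each_entry(pos, n, head, node)
                   * After:  it reads just like list_for_each_entry():
                   */
                  hlist_for_each_entry(pos, head, node)
                          total += pos->val;
          
                  return total;
          }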
    • kcmp: make it depend on CHECKPOINT_RESTORE · 1e142b29
      Cyrill Gorcunov authored
      Since the kcmp syscall was implemented (initially on the x86
      architecture), a number of other archs have wired it up as well: xtensa,
      sparc, sh, s390, mips, microblaze, m68k (not counting those which use
      <asm-generic/unistd.h> for their syscall number definitions).
      
      But the Makefile which turns on kcmp.o generation still depends on the
      former x86-only config. Thus get rid of this limitation and make kcmp.o
      depend on the CHECKPOINT_RESTORE option.
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Andrey Vagin <avagin@openvz.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1e142b29
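      
      The Makefile side of the change is a one-liner in kernel/Makefile
      (sketched from the changelog; the old condition was the x86 config):
      
          obj-$(CONFIG_CHECKPOINT_RESTORE) += kcmp.o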