1. 23 June 2014, 5 commits
    • hrtimer: Kick lowres dynticks targets on timer enqueue · 49a2a075
      Committed by Viresh Kumar
      In lowres mode, hrtimers are serviced by the tick instead of a clock
      event. This works well as long as the tick stays periodic, but we must
      also make sure that hrtimers are serviced on dynticks targets as well,
      pretty much like timer list timers are.
      
      Note that all dynticks modes are concerned: get_nohz_timer_target()
      tries not to return remote idle CPUs, but there is nothing to prevent
      the elected target from entering dynticks idle mode until we lock its
      base. It's also perfectly legal to enqueue hrtimers on a full dynticks
      CPU.
      
      So there are two requirements to correctly handle dynticks:
      
      1) At the target's tick stop time, we must not delay the next tick
         further than the next hrtimer.
      
      2) At hrtimer enqueue time: if the tick of the target is stopped, we
         must wake up that CPU so that it sees the new hrtimer and
         recalculates the next tick accordingly.
      
      Point 1 is currently well handled through get_next_timer_interrupt()
      and cmp_next_hrtimer_event().
      
      Point 2, however, isn't handled at all.
      
      Fixing this is easy, though, as we have the necessary API ready for it.
      All we need is to call wake_up_nohz_cpu() on a target when a newly
      enqueued hrtimer requires tick rescheduling, like timer list timers do.
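      
      A rough sketch of that kick (assuming the 'cpu' field that the
      companion patch adds to struct hrtimer_cpu_base; the helper name and
      the 'leftmost' check are illustrative, wake_up_nohz_cpu() is the real
      API named above):
      
        /* Illustrative only, not the verbatim patch. */
        static void hrtimer_kick_target(struct hrtimer *timer,
                                        struct hrtimer_clock_base *base,
                                        int leftmost)
        {
                /*
                 * Only a new leftmost (earliest) hrtimer can move the
                 * target's next tick, so only then is the IPI worthwhile.
                 */
                if (leftmost)
                        wake_up_nohz_cpu(base->cpu_base->cpu);
        }
      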
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Link: http://lkml.kernel.org/r/3d7ea08ce008698e26bd39fe10f55949391073ab.1403507178.git.viresh.kumar@linaro.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • hrtimer: Store cpu-number in struct hrtimer_cpu_base · cddd0248
      Committed by Viresh Kumar
      In lowres mode, hrtimers are serviced by the tick instead of a clock
      event. This works well as long as the tick stays periodic, but we
      must also make sure that hrtimers are serviced in dynticks mode.
      
      Part of that job consists of kicking a dynticks hrtimer target in
      order to make it reconsider the next tick to schedule, so that the
      hrtimer's expiry time is handled correctly. That part isn't handled
      by the hrtimers subsystem.
      
      To prepare for fixing this, we need __hrtimer_start_range_ns() to be
      able to resolve the CPU target associated with an hrtimer's
      'cpu_base' object, so that the kick can be centralized there.
      
      So let's store it in 'struct hrtimer_cpu_base', where the CPU can be
      resolved without overhead. It is set once, at the CPU's online
      notification.
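      
      A minimal sketch of the change, with unrelated fields elided (the
      init-path name follows the hrtimers code of that era, but treat the
      details as illustrative):
      
        struct hrtimer_cpu_base {
                raw_spinlock_t          lock;
                unsigned int            cpu;    /* new: owning CPU */
                /* clock bases and other fields elided */
        };
      
        /* Runs once per CPU, from the CPU_UP (online) notifier. */
        static void init_hrtimers_cpu(int cpu)
        {
                struct hrtimer_cpu_base *cpu_base = &per_cpu(hrtimer_bases, cpu);
      
                cpu_base->cpu = cpu;    /* resolve once, no lookups later */
                /* existing per-CPU initialization elided */
        }
      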
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Link: http://lkml.kernel.org/r/1403393357-2070-4-git-send-email-fweisbec@gmail.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • timer: Kick dynticks targets on mod_timer*() calls · 9f6d9baa
      Committed by Viresh Kumar
      When a timer is enqueued or modified on a dynticks target, that CPU
      must re-evaluate the next tick to service that timer.
      
      The tick re-evaluation is performed by an IPI kick on the target.
      Now while we correctly call wake_up_nohz_cpu() from add_timer_on(),
      the mod_timer*() API family doesn't handle dynticks targets so well.
      
      The reason for this is likely that __mod_timer() isn't supposed to
      select an idle target for a timer, unless that target is the current
      CPU, in which case a dynticks idle kick isn't actually needed.
      
      But there is a small race window lurking behind that assumption: the
      elected target has plenty of time to turn dynticks idle between the
      call to get_nohz_timer_target() and the locking of its base. Hence
      the risk that we enqueue a timer on a dynticks idle destination
      without kicking it. As a result, the timer might be serviced too late
      in the future.
      
      Also, a target elected by __mod_timer() can be in full dynticks mode
      and thus require a kick as well. And unlike idle dynticks, this
      concerns both local and remote targets.
      
      To fix this whole issue, let's centralize the dynticks kick in
      internal_add_timer() so that it is handled for all sorts of timer
      enqueues. Even timer migration is covered, so that a full dynticks
      target is correctly kicked as needed when timers migrate to it.
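      
      A sketch of the centralized kick, assuming the 'cpu' field that the
      preparatory patch added to struct tvec_base (the deferrable check
      mirrors the timer code of that era, but is illustrative here):
      
        static void internal_add_timer(struct tvec_base *base,
                                       struct timer_list *timer)
        {
                __internal_add_timer(base, timer);      /* existing insertion */
      
                /*
                 * A tick-stopped target won't notice the new timer on
                 * its own: kick it, unless the timer is deferrable, in
                 * which case only a full dynticks target needs the IPI.
                 */
                if (!tbase_get_deferrable(timer->base) ||
                    tick_nohz_full_cpu(base->cpu))
                        wake_up_nohz_cpu(base->cpu);
        }
      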
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Link: http://lkml.kernel.org/r/1403393357-2070-3-git-send-email-fweisbec@gmail.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • timer: Store cpu-number in struct tvec_base · d6f93829
      Committed by Viresh Kumar
      Timers are serviced by the tick. But when a timer is enqueued on a
      dynticks target, we need to kick it so that it reconsiders the next
      tick to schedule and correctly handles the timer's expiry time.
      
      Now while this kick is correctly performed for add_timer_on(), the
      mod_timer*() family has been a bit neglected.
      
      To prepare for fixing this, we need internal_add_timer() to be able
      to resolve the CPU target associated with a timer's 'base' object,
      so that the kick can be centralized there.
      
      This can't be passed as an argument, as not all callers know the CPU
      number of a timer's base. So let's store it in struct tvec_base,
      where the CPU can be resolved without much overhead. It is set once
      and for all at each CPU's first boot.
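      
      In sketch form (names follow kernel/timer.c of that era; details are
      illustrative):
      
        struct tvec_base {
                spinlock_t              lock;
                int                     cpu;    /* new: owning CPU */
                /* cascading timer vectors and other fields elided */
        };
      
        static int init_timers_cpu(int cpu)
        {
                struct tvec_base *base = per_cpu(tvec_bases, cpu);
      
                base->cpu = cpu;        /* set once for good */
                /* existing initialization elided */
                return 0;
        }
      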
      Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Link: http://lkml.kernel.org/r/1403393357-2070-2-git-send-email-fweisbec@gmail.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • time/timers: Move all time(r) related files into kernel/time · 5cee9645
      Committed by Thomas Gleixner
      Except for Kconfig.HZ. That needs a separate treatment.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  2. 05 June 2014, 1 commit
    • timekeeping: use printk_deferred when holding timekeeping seqlock · 6d9bcb62
      Committed by John Stultz
      Jiri Bohac pointed out that there are rare but real deadlock
      possibilities when calling printk while holding the timekeeping
      seqlock.
      
      This is due to printk() triggering console sem wakeup, which can
      cause scheduling code to trigger hrtimers which may try to read
      the time.
      
      Specifically, as Jiri pointed out, that path is:
        printk
          vprintk_emit
            console_unlock
              up(&console_sem)
                __up
                  wake_up_process
                    try_to_wake_up
                      ttwu_do_activate
                        ttwu_activate
                          activate_task
                            enqueue_task
                              enqueue_task_fair
                                hrtick_update
                                  hrtick_start_fair
                                    hrtick_start_fair
                                      get_time
                                        ktime_get
                                          --> endless loop on
                                              read_seqcount_retry(&timekeeper_seq, ...)
      
      This patch tries to avoid the issue by using printk_deferred
      (previously named printk_sched), which defers printing via an
      irq_work queue.
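      
      Illustratively, any printk issued while timekeeper_seq is write-held
      becomes a deferred one (the condition and message below are made up;
      printk_deferred() is the real API):
      
        /* Inside a timekeeping path that write-holds timekeeper_seq: */
        if (unlikely(clock_went_backwards))     /* hypothetical condition */
                printk_deferred(KERN_WARNING
                                "timekeeping: clock went backwards\n");
        /*
         * The message is queued and emitted later from irq_work context,
         * so no console_sem wakeup (and thus no ktime_get()) can happen
         * under the seqlock.
         */
      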
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      Reported-by: Jiri Bohac <jbohac@suse.cz>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 13 May 2014, 2 commits
  4. 23 April 2014, 1 commit
  5. 16 April 2014, 3 commits
  6. 08 April 2014, 1 commit
  7. 28 March 2014, 1 commit
  8. 26 March 2014, 2 commits
  9. 03 March 2014, 1 commit
  10. 20 February 2014, 1 commit
    • sched_clock: Prevent callers from seeing half-updated data · 5ae8aabe
      Committed by Stephen Boyd
      The generic sched_clock registration function was previously done
      without locking, on the expectation that it would be called only
      once. However, there are now systems that may register multiple
      sched_clock sources, and for them the lack of locking has caused
      problems:
      
      If two sched_clock sources are registered, we may end up in a
      situation where a call to sched_clock() reads the epoch cycle count
      of the old counter together with the cycle count of the new counter.
      This can lead to confusing results, where sched_clock() values jump
      and are then reset to 0 (due to the way the registration function
      forces epoch_ns to 0).
      
      Fix this by reorganizing the registration function to hold the
      seqlock for as short a time as possible while we update the
      clock_data structure for a new counter. We also put any
      accumulated time into epoch_ns instead of resetting the time to
      0 so that the clock doesn't reset after each successful
      registration.
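      
      A sketch of the reorganized registration: everything is computed up
      front, and only the publication of the new clock_data happens inside
      the write seqcount (names mirror kernel/time/sched_clock.c of that
      era, but the body is illustrative):
      
        void __init sched_clock_register(u64 (*read)(void), int bits,
                                         unsigned long rate)
        {
                u64 cyc, ns;
      
                /* Precompute outside the lock. */
                cyc = read();
                ns = cd.epoch_ns +
                     cyc_to_ns((cyc - cd.epoch_cyc) & sched_clock_mask,
                               cd.mult, cd.shift);
      
                /* Publish, holding the seqcount as briefly as possible. */
                write_seqcount_begin(&cd.seq);
                read_sched_clock = read;
                cd.epoch_cyc = cyc;
                cd.epoch_ns = ns;       /* carry time forward, don't reset to 0 */
                write_seqcount_end(&cd.seq);
        }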
      
      [jstultz: Added extra context to the commit message]
      Reported-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Josh Cartwright <joshc@codeaurora.org>
      Link: http://lkml.kernel.org/r/1392662736-7803-2-git-send-email-john.stultz@linaro.org
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  11. 15 February 2014, 1 commit
  12. 14 February 2014, 1 commit
    • tick: Clear broadcast pending bit when switching to oneshot · dd5fd9b9
      Committed by Thomas Gleixner
      AMD systems which use the C1E workaround in the amd_e400_idle routine
      trigger the WARN_ON_ONCE in the broadcast code when onlining a CPU.
      
      The reason is that the idle routine of those AMD systems switches the
      cpu into forced broadcast mode early on before the newly brought up
      CPU can switch over to high resolution / NOHZ mode. The timer related
      CPU1 bringup looks like this:
      
        clockevent_register_device(local_apic);
        tick_setup(local_apic);
        ...
        idle()
          tick_broadcast_on_off(FORCE);
          tick_broadcast_oneshot_control(ENTER)
            cpumask_set(cpu, broadcast_oneshot_mask);
          halt();
      
      Now the broadcast interrupt on CPU0 sets CPU1 in the
      broadcast_pending_mask and wakes CPU1. So CPU1 continues:
      
        local_apic_timer_interrupt()
          tick_handle_periodic();
          softirq()
            tick_init_highres();
              cpumask_clr(cpu, broadcast_oneshot_mask);
      
        tick_broadcast_oneshot_control(ENTER)
          WARN_ON(cpumask_test(cpu, broadcast_pending_mask));
      
      So while we remove CPU1 from the broadcast_oneshot_mask when we switch
      over to highres mode, we do not clear the pending bit, which then
      triggers the warning when we go back to idle.
      
      The reason why this is only visible on C1E affected AMD systems is
      that the other machines enter the deep sleep states via
      acpi_idle/intel_idle and exit the broadcast mode before executing the
      remote triggered local_apic_timer_interrupt. So the pending bit is
      already cleared when the switch over to highres mode is clearing the
      oneshot mask.
      
      The solution is simple: Clear the pending bit together with the mask
      bit when we switch over to highres mode.
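      
      In sketch form (mask names as in kernel/time/tick-broadcast.c; the
      exact spot in the switch-over path is elided):
      
        /* On switch to highres/oneshot mode, leave the broadcast
         * machinery entirely: drop the pending bit along with the
         * oneshot mask bit so a stale pending bit can't trigger the
         * WARN_ON_ONCE later. */
        cpumask_clear_cpu(cpu, tick_broadcast_oneshot_mask);
        cpumask_clear_cpu(cpu, tick_broadcast_pending_mask);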
      
      Stanislaw came up independently with the same patch by enforcing the
      C1E workaround and debugging the fallout. I picked mine, because mine
      has a changelog :)
      Reported-by: poma <pomidorabelisima@gmail.com>
      Debugged-by: Stanislaw Gruszka <sgruszka@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Olaf Hering <olaf@aepfle.de>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Justin M. Forbes <jforbes@redhat.com>
      Cc: Josh Boyer <jwboyer@redhat.com>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1402111434180.21991@ionos.tec.linutronix.de
      Cc: stable@vger.kernel.org # 3.10+
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  13. 09 February 2014, 1 commit
  14. 07 February 2014, 6 commits
  15. 06 February 2014, 1 commit
  16. 16 January 2014, 3 commits
  17. 13 January 2014, 1 commit
    • sched/clock, x86: Use a static_key for sched_clock_stable · 35af99e6
      Committed by Peter Zijlstra
      In order to avoid the runtime condition and variable load, turn
      sched_clock_stable into a static_key.
      
      Also provide a shorter implementation of local_clock() and
      cpu_clock(int) when sched_clock_stable==1.
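      
      A sketch of what the conversion buys, using the static_key API of
      that era (the exact branch layout in the real patch may differ):
      
        static struct static_key __sched_clock_stable = STATIC_KEY_INIT;
      
        u64 local_clock(void)
        {
                /*
                 * Patched jump instead of loading and testing a global:
                 * when the key is enabled (sched_clock_stable == 1) we
                 * take the short sched_clock() path directly.
                 */
                if (static_key_false(&__sched_clock_stable))
                        return sched_clock();
      
                return sched_clock_cpu(raw_smp_processor_id());
        }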
      
                              MAINLINE   PRE       POST
      
          sched_clock_stable: 1          1         1
          (cold) sched_clock: 329841     221876    215295
          (cold) local_clock: 301773     234692    220773
          (warm) sched_clock: 38375      25602     25659
          (warm) local_clock: 100371     33265     27242
          (warm) rdtsc:       27340      24214     24208
          sched_clock_stable: 0          0         0
          (cold) sched_clock: 382634     235941    237019
          (cold) local_clock: 396890     297017    294819
          (warm) sched_clock: 38194      25233     25609
          (warm) local_clock: 143452     71234     71232
          (warm) rdtsc:       27345      24245     24243
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/n/tip-eummbdechzz37mwmpags1gjr@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  18. 12 January 2014, 1 commit
  19. 24 December 2013, 7 commits
    • timekeeping: Remove comment that's mostly out of date · 38aef31c
      Committed by John Stultz
      Prior to 92bb1fcf ("Only do nanosecond rounding on
      GENERIC_TIME_VSYSCALL_OLD systems"), the comment here was accurate,
      but now we mostly avoid the extra rounding, which makes the
      unlikely() branch here actually likely.
      
      So remove the out-of-date comment.
      Signed-off-by: John Stultz <john.stultz@linaro.org>
    • timekeeper: fix comment typo for tk_setup_internals() · d26e4fe0
      Committed by Yijing Wang
      Fix trivial comment typo for tk_setup_internals().
      Signed-off-by: Yijing Wang <wangyijing@huawei.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
    • timekeeping: Fix missing timekeeping_update in suspend path · 330a1617
      Committed by John Stultz
      Since 48cdc135 (Implement a shadow timekeeper), we have to call
      timekeeping_update() after any adjustment to the timekeeping
      structure in order to make sure the adjustments persist.
      
      In the timekeeping suspend path, we update the timekeeper structure,
      so we should be sure to update the shadow timekeeper before
      releasing the timekeeping locks. Currently this isn't done.
      
      In most cases, the next time-related code to run would be
      timekeeping_resume(), which does update the shadow timekeeper, but
      out of an abundance of caution, this patch adds the call to
      timekeeping_update() in the suspend path.
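      
      In sketch form (lock, seqcount and flag names follow the timekeeping
      code of that era; treat the details as illustrative):
      
        static int timekeeping_suspend(void)
        {
                struct timekeeper *tk = &timekeeper;
                unsigned long flags;
      
                raw_spin_lock_irqsave(&timekeeper_lock, flags);
                write_seqcount_begin(&timekeeper_seq);
                /* suspend-time adjustments to 'tk' elided */
      
                timekeeping_update(tk, TK_MIRROR);      /* added: sync shadow timekeeper */
      
                write_seqcount_end(&timekeeper_seq);
                raw_spin_unlock_irqrestore(&timekeeper_lock, flags);
                return 0;
        }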
      
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: stable <stable@vger.kernel.org> #3.10+
      Signed-off-by: John Stultz <john.stultz@linaro.org>
    • timekeeping: Fix CLOCK_TAI timer/nanosleep delays · 04005f60
      Committed by John Stultz
      A think-o in the calculation of the monotonic -> tai time offset
      causes CLOCK_TAI timers and nanosleeps to expire late (the latency
      is ~2x the tai offset).
      
      Fix this by adding the tai offset to the realtime offset instead of
      subtracting it.
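      
      The fix in one line, per the changelog (field names from the struct
      timekeeper of that era):
      
        /* monotonic -> TAI is the realtime offset *plus* the TAI offset;
         * subtracting made CLOCK_TAI expiries land ~2x the offset late. */
        tk->offs_tai = ktime_add(tk->offs_real, ktime_set(tk->tai_offset, 0));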
      
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: stable <stable@vger.kernel.org> #3.10+
      Signed-off-by: John Stultz <john.stultz@linaro.org>
    • tick/timekeeping: Call update_wall_time outside the jiffies lock · 47a1b796
      Committed by John Stultz
      Since the xtime lock was split into the timekeeping lock and
      the jiffies lock, we no longer need to call update_wall_time()
      while holding the jiffies lock.
      
      Thus, this patch splits update_wall_time() out from do_timer().
      
      This allows us to get away from calling clock_was_set_delayed()
      in update_wall_time() and instead use the standard clock_was_set()
      call that previously would deadlock, as it causes the jiffies lock
      to be acquired.
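      
      A sketch of the resulting call pattern in the tick code (function
      names are real; the surrounding details are elided):
      
        void do_timer(unsigned long ticks)
        {
                jiffies_64 += ticks;
                calc_global_load(ticks);
                /* update_wall_time() is no longer called from here */
        }
      
        static void tick_periodic(int cpu)
        {
                if (tick_do_timer_cpu == cpu) {
                        write_seqlock(&jiffies_lock);
                        do_timer(1);
                        write_sequnlock(&jiffies_lock);
                        update_wall_time();     /* now outside the jiffies lock */
                }
                /* process accounting and profiling elided */
        }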
      
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
    • timekeeping: Avoid possible deadlock from clock_was_set_delayed · 6fdda9a9
      Committed by John Stultz
      As part of normal operations, the hrtimer subsystem frequently calls
      into the timekeeping code, creating a locking order of
        hrtimer locks -> timekeeping locks
      
      clock_was_set_delayed() was supposed to allow us to avoid deadlocks
      between the timekeeping and hrtimer subsystems, so that we could
      notify the hrtimer subsystem that the time had changed while holding
      the timekeeping locks. This was done by scheduling delayed work that
      would run later, once we were out of the timekeeping code.
      
      But unfortunately the lock chains are complex enough that, in
      scheduling delayed work, we eventually end up trying to grab an
      hrtimer lock.
      
      Sasha Levin noticed this in testing, when the new seqlock lockdep
      enablement triggered the following (somewhat abbreviated) message:
      
      [  251.100221] ======================================================
      [  251.100221] [ INFO: possible circular locking dependency detected ]
      [  251.100221] 3.13.0-rc2-next-20131206-sasha-00005-g8be2375-dirty #4053 Not tainted
      [  251.101967] -------------------------------------------------------
      [  251.101967] kworker/10:1/4506 is trying to acquire lock:
      [  251.101967]  (timekeeper_seq){----..}, at: [<ffffffff81160e96>] retrigger_next_event+0x56/0x70
      [  251.101967]
      [  251.101967] but task is already holding lock:
      [  251.101967]  (hrtimer_bases.lock#11){-.-...}, at: [<ffffffff81160e7c>] retrigger_next_event+0x3c/0x70
      [  251.101967]
      [  251.101967] which lock already depends on the new lock.
      [  251.101967]
      [  251.101967]
      [  251.101967] the existing dependency chain (in reverse order) is:
      [  251.101967]
      -> #5 (hrtimer_bases.lock#11){-.-...}:
      [snipped]
      -> #4 (&rt_b->rt_runtime_lock){-.-...}:
      [snipped]
      -> #3 (&rq->lock){-.-.-.}:
      [snipped]
      -> #2 (&p->pi_lock){-.-.-.}:
      [snipped]
      -> #1 (&(&pool->lock)->rlock){-.-...}:
      [  251.101967]        [<ffffffff81194803>] validate_chain+0x6c3/0x7b0
      [  251.101967]        [<ffffffff81194d9d>] __lock_acquire+0x4ad/0x580
      [  251.101967]        [<ffffffff81194ff2>] lock_acquire+0x182/0x1d0
      [  251.101967]        [<ffffffff84398500>] _raw_spin_lock+0x40/0x80
      [  251.101967]        [<ffffffff81153e69>] __queue_work+0x1a9/0x3f0
      [  251.101967]        [<ffffffff81154168>] queue_work_on+0x98/0x120
      [  251.101967]        [<ffffffff81161351>] clock_was_set_delayed+0x21/0x30
      [  251.101967]        [<ffffffff811c4bd1>] do_adjtimex+0x111/0x160
      [  251.101967]        [<ffffffff811e2711>] compat_sys_adjtimex+0x41/0x70
      [  251.101967]        [<ffffffff843a4b49>] ia32_sysret+0x0/0x5
      [  251.101967]
      -> #0 (timekeeper_seq){----..}:
      [snipped]
      [  251.101967] other info that might help us debug this:
      [  251.101967]
      [  251.101967] Chain exists of:
        timekeeper_seq --> &rt_b->rt_runtime_lock --> hrtimer_bases.lock#11
      
      [  251.101967]  Possible unsafe locking scenario:
      [  251.101967]
      [  251.101967]        CPU0                    CPU1
      [  251.101967]        ----                    ----
      [  251.101967]   lock(hrtimer_bases.lock#11);
      [  251.101967]                                lock(&rt_b->rt_runtime_lock);
      [  251.101967]                                lock(hrtimer_bases.lock#11);
      [  251.101967]   lock(timekeeper_seq);
      [  251.101967]
      [  251.101967]  *** DEADLOCK ***
      [  251.101967]
      [  251.101967] 3 locks held by kworker/10:1/4506:
      [  251.101967]  #0:  (events){.+.+.+}, at: [<ffffffff81154960>] process_one_work+0x200/0x530
      [  251.101967]  #1:  (hrtimer_work){+.+...}, at: [<ffffffff81154960>] process_one_work+0x200/0x530
      [  251.101967]  #2:  (hrtimer_bases.lock#11){-.-...}, at: [<ffffffff81160e7c>] retrigger_next_event+0x3c/0x70
      [  251.101967]
      [  251.101967] stack backtrace:
      [  251.101967] CPU: 10 PID: 4506 Comm: kworker/10:1 Not tainted 3.13.0-rc2-next-20131206-sasha-00005-g8be2375-dirty #4053
      [  251.101967] Workqueue: events clock_was_set_work
      
      So the best solution is to avoid calling clock_was_set_delayed()
      while holding the timekeeping lock, and instead to use a flag
      variable to decide if we should call clock_was_set() once we've
      released the locks.
      
      This works for the case here, where do_adjtimex() was the deadlock
      trigger point. Unfortunately, in update_wall_time() we still hold
      the jiffies lock, which would deadlock with the IPI triggered by
      clock_was_set(), preventing us from calling it even after we drop
      the timekeeping lock. So instead, call clock_was_set_delayed() at
      that point.
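      
      A sketch of the flag-based scheme in do_adjtimex() (illustrative;
      the real diff threads the state through __do_adjtimex() and
      friends):
      
        int do_adjtimex(struct timex *txc)
        {
                struct timekeeper *tk = &timekeeper;
                unsigned long flags;
                s32 orig_tai, tai;
                int ret;
      
                raw_spin_lock_irqsave(&timekeeper_lock, flags);
                write_seqcount_begin(&timekeeper_seq);
      
                orig_tai = tai = tk->tai_offset;
                ret = __do_adjtimex(txc, &tai);         /* args simplified */
                if (tai != orig_tai)
                        __timekeeping_set_tai_offset(tk, tai);
      
                write_seqcount_end(&timekeeper_seq);
                raw_spin_unlock_irqrestore(&timekeeper_lock, flags);
      
                /* Notify only once every timekeeping lock is dropped. */
                if (tai != orig_tai)
                        clock_was_set();
      
                return ret;
        }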
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: stable <stable@vger.kernel.org> #3.10+
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Tested-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
    • timekeeping: Fix potential lost pv notification of time change · 5258d3f2
      Committed by John Stultz
      In 780427f0 (Indicate that clock was set in the pvclock
      gtod notifier), logic was added to pass a CLOCK_WAS_SET
      notification to the pvclock notifier chain.
      
      While that patch added an action flag returned from
      accumulate_nsecs_to_secs(), it only uses the returned value in one
      location, and not in the logarithmic accumulation.
      
      This means if a leap second triggered during the logarithmic
      accumulation (which is most likely where it would happen),
      the notification that the clock was set would not make it to
      the pv notifiers.
      
      This patch extends logarithmic_accumulation() to pass down that
      action flag, so proper notification will occur.
      
      This patch also renames the variable from 'action' to 'clock_set',
      per Ingo's suggestion.
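      
      A sketch of the extended helper (the added 'clock_set' parameter is
      per the changelog; the body is illustrative):
      
        static cycle_t logarithmic_accumulation(struct timekeeper *tk,
                                                cycle_t offset, u32 shift,
                                                unsigned int *clock_set)
        {
                /* interval accumulation elided */
      
                /* A leap second here must reach the pvclock notifiers too. */
                *clock_set |= accumulate_nsecs_to_secs(tk);
      
                /* NTP error accumulation elided */
                return offset;
        }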
      
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: <xen-devel@lists.xen.org>
      Cc: stable <stable@vger.kernel.org> #3.11+
      Signed-off-by: John Stultz <john.stultz@linaro.org>