1. 18 July 2008, 2 commits
• cpu hotplug, sched: Introduce cpu_active_map and redo sched domain management (take 2) · e761b772
  Authored by Max Krasnyansky
      This is based on Linus' idea of creating cpu_active_map that prevents
      scheduler load balancer from migrating tasks to the cpu that is going
      down.
      
It allows us to simplify domain management code and avoid unnecessary
domain rebuilds during cpu hotplug event handling.
      
      Please ignore the cpusets part for now. It needs some more work in order
to avoid crazy lock nesting. I did, however, simplify and unify the domain
reinitialization logic: we now simply call partition_sched_domains() in
all cases. This means we're using the exact same code paths as in the
cpusets case, and hence the tests below cover cpusets too.
The cpuset changes to make rebuild_sched_domains() callable from various
contexts are in a separate patch (right after this one).
      
      This not only boots but also easily handles
      	while true; do make clean; make -j 8; done
      and
      	while true; do on-off-cpu 1; done
      at the same time.
(on-off-cpu 1 simply does the echo 0/1 > /sys/.../cpu1/online thing).
      
Surprisingly the box (dual-core Core2) is quite usable. In fact I'm typing
this on it right now in gnome-terminal and things are moving just fine.
      
Also, this is running with most of the debug features enabled (lockdep,
mutex debugging, etc) with no BUG_ONs or lockdep complaints so far.
      
I believe I addressed all of Dmitry's comments on Linus' original
version. I changed both the fair and rt balancers to mask out non-active
cpus, and replaced cpu_is_offline() with !cpu_active() in the main
scheduler code where it made sense (to me).
Signed-off-by: Max Krasnyanskiy <maxk@qualcomm.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Gregory Haskins <ghaskins@novell.com>
      Cc: dmitry.adamushko@gmail.com
      Cc: pj@sgi.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e761b772
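A minimal sketch of the idea (not the verbatim patch; valid_migration_target() is a hypothetical helper, and the cpumask API is the 2.6.26-era one):

	/* cpu_active_map excludes CPUs that are on their way down */
	#define cpu_active(cpu)	cpu_isset((cpu), cpu_active_map)

	/*
	 * Balancer-side check: a CPU is a valid migration target only if
	 * it is still active and allowed by the task's affinity mask.
	 */
	static int valid_migration_target(struct task_struct *p, int dest_cpu)
	{
		if (!cpu_active(dest_cpu))
			return 0;	/* dest_cpu is going down, skip it */
		return cpu_isset(dest_cpu, p->cpus_allowed);
	}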
• sched: reduce stack size in isolated_cpu_setup() · 13b40c1e
  Authored by Mike Travis
* Remove the 16k stack requirement in isolated_cpu_setup() when NR_CPUS=4096.
Signed-off-by: Mike Travis <travis@sgi.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      13b40c1e
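The 16k comes from a local "int ints[NR_CPUS]" array (4 bytes x 4096 CPUs). A sketch of the likely shape of the fix, assuming the 2.6.26-era get_options()/cpumask API: moving the array into __initdata takes it off the stack without changing the parsing logic.

	static int __init isolated_cpu_setup(char *str)
	{
		static int __initdata ints[NR_CPUS];	/* was a 16k stack array */
		int i;

		str = get_options(str, ARRAY_SIZE(ints), ints);
		cpus_clear(cpu_isolated_map);
		for (i = 1; i <= ints[0]; i++)
			if (ints[i] < NR_CPUS)
				cpu_set(ints[i], cpu_isolated_map);
		return 1;
	}
	__setup("isolcpus=", isolated_cpu_setup);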
2. 11 July 2008, 1 commit
3. 10 July 2008, 1 commit
• sched: fix cpu hotplug · dc7fab8b
  Authored by Dmitry Adamushko
I think we may have a race between try_to_wake_up() and
migrate_live_tasks() -> move_task_off_dead_cpu(), where the latter
may end up looping endlessly.
      
Interrupts are enabled on other CPUs when migration_call(CPU_DEAD, ...) is
called, so we may get a race between try_to_wake_up() and
migrate_live_tasks() -> move_task_off_dead_cpu(). The former may push
a task out of a dead CPU, causing the latter to loop endlessly.
      
      Heiko Carstens observed:
      
      | That's exactly what explains a dump I got yesterday. Thanks for fixing! :)
Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Cc: miaox@cn.fujitsu.com
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Avi Kivity <avi@qumranet.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      dc7fab8b
4. 08 July 2008, 1 commit
5. 04 July 2008, 3 commits
• sched: fix accounting in task delay accounting & migration · 46ac22ba
  Authored by Ankita Garg
      On Thu, Jun 19, 2008 at 12:27:14PM +0200, Peter Zijlstra wrote:
      > On Thu, 2008-06-05 at 10:50 +0530, Ankita Garg wrote:
      >
      > > Thanks Peter for the explanation...
      > >
      > > I agree with the above and that is the reason why I did not see weird
      > > values with cpu_time. But, run_delay still would suffer skews as the end
      > > points for delta could be taken on different cpus due to migration (more
      > > so on RT kernel due to the push-pull operations). With the below patch,
      > > I could not reproduce the issue I had seen earlier. After every dequeue,
      > > we take the delta and start wait measurements from zero when moved to a
      > > different rq.
      >
> OK, so task delay accounting is broken because it doesn't take
> migration into account.
      >
      > What you've done is make it symmetric wrt enqueue, and account it like
      >
      >   cpu0      cpu1
      >
      > enqueue
      >  <wait-d1>
      > dequeue
      >             enqueue
      >              <wait-d2>
      >             run
      >
> Where you add both d1 and d2 to the run_delay... right?
      >
      
      Thanks for reviewing the patch. The above is exactly what I have done.
      
> This seems like a good fix; however, it looks like the patch will break
> compilation with !CONFIG_SCHEDSTATS && !CONFIG_TASK_DELAY_ACCT, by
> failing to provide a stub for sched_info_dequeue() in that case.
      
Fixed. Please find the new patch below.
Signed-off-by: Ankita Garg <ankita@in.ibm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Gregory Haskins <ghaskins@novell.com>
      Cc: rostedt@goodmis.org
      Cc: suresh.b.siddha@intel.com
      Cc: aneesh.kumar@linux.vnet.ibm.com
      Cc: dhaval@linux.vnet.ibm.com
      Cc: vatsa@linux.vnet.ibm.com
      Cc: David Bahi <DBahi@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      46ac22ba
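A sketch of the symmetric dequeue-side accounting, together with the stub that keeps the !CONFIG_SCHEDSTATS && !CONFIG_TASK_DELAY_ACCT build working (names follow the era's kernel/sched_stats.h; details simplified):

	#if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
	/* on dequeue, fold the accumulated wait into run_delay and reset,
	 * so the wait measurement restarts from zero on the next runqueue */
	static inline void sched_info_dequeued(struct task_struct *t)
	{
		unsigned long long now = task_rq(t)->clock, delta = 0;

		if (t->sched_info.last_queued)
			delta = now - t->sched_info.last_queued;
		t->sched_info.last_queued = 0;
		t->sched_info.run_delay += delta;
	}
	#else
	static inline void sched_info_dequeued(struct task_struct *t)
	{
	}
	#endif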
• sched: add avg-overlap support to RT tasks · 2087a1ad
  Authored by Gregory Haskins
      We have the notion of tracking process-coupling (a.k.a. buddy-wake) via
      the p->se.last_wake / p->se.avg_overlap facilities, but it is only used
      for cfs to cfs interactions.  There is no reason why an rt to cfs
interaction cannot share in establishing a relationship in a similar
      manner.
      
Because PREEMPT_RT runs many kernel threads at FIFO priority, we often
have heavy interaction between RT threads waking CFS applications.
This patch offers a substantial boost (50-60%+) in performance under those
circumstances.
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Cc: npiggin@suse.de
Cc: rostedt@goodmis.org
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2087a1ad
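A rough sketch of the coupling bookkeeping being generalized (simplified; update_avg() is the scheduler's simple 1/8-weight running average):

	static void update_avg(u64 *avg, u64 sample)
	{
		s64 diff = sample - *avg;
		*avg += diff >> 3;	/* 1/8-weight running average */
	}

	/* at wake-up, for wakers of any scheduling class (rt or cfs): */
	p->se.last_wakeup = rq->clock;

	/* when the task later blocks, record how long it ran overlapped
	 * with its waker: */
	update_avg(&p->se.avg_overlap, rq->clock - p->se.last_wakeup);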
• sched: terminate newidle balancing once at least one task has moved over · c4acb2c0
  Authored by Gregory Haskins
      Inspired by Peter Zijlstra.
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Cc: npiggin@suse.de
Cc: rostedt@goodmis.org
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c4acb2c0
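The title carries the whole idea; here is a sketch against the balance_tasks() pull loop of the time (simplified):

	/* inside the pull loop, after a task has been moved over: */
	pull_task(busiest, p, this_rq, this_cpu);
	pulled++;

	/*
	 * Newidle balancing is a source of latency: once at least one
	 * task has moved over, the newly idle CPU has something to run,
	 * so stop scanning instead of draining the busiest runqueue.
	 */
	if (idle == CPU_NEWLY_IDLE)
		break;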
6. 01 July 2008, 1 commit
• sched: fix divide error when trying to configure rt_period to zero · 619b0488
  Authored by Raistlin
Here is another little Oops we found while configuring invalid values
      via cgroups:
      
      echo 0 > /dev/cgroups/0/cpu.rt_period_us
      or
      echo 4294967296 > /dev/cgroups/0/cpu.rt_period_us
      
      [  205.509825] divide error: 0000 [#1]
      [  205.510151] Modules linked in:
      [  205.510151]
      [  205.510151] Pid: 2339, comm: bash Not tainted (2.6.26-rc8 #33)
      [  205.510151] EIP: 0060:[<c030c6ef>] EFLAGS: 00000293 CPU: 0
      [  205.510151] EIP is at div64_u64+0x5f/0x70
      [  205.510151] EAX: 0000389f EBX: 00000000 ECX: 00000000 EDX: 00000000
      [  205.510151] ESI: d9800000 EDI: 00000000 EBP: c6cede60 ESP: c6cede50
      [  205.510151]  DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
      [  205.510151] Process bash (pid: 2339, ti=c6cec000 task=c79be370 task.ti=c6cec000)
      [  205.510151] Stack: d9800000 0000389f c05971a0 d9800000 c6cedeb4 c0214dbd 00000000 00000000
      [  205.510151]        c6cede88 c0242bd8 c05377c0 c7a41b40 00000000 00000000 00000000 c05971a0
      [  205.510151]        c780ed20 c7508494 c7a41b40 00000000 00000002 c6cedebc c05971a0 ffffffea
      [  205.510151] Call Trace:
      [  205.510151]  [<c0214dbd>] ? __rt_schedulable+0x1cd/0x240
      [  205.510151]  [<c0242bd8>] ? cgroup_file_open+0x18/0xe0
      [  205.510151]  [<c0214fe4>] ? tg_set_bandwidth+0xa4/0xf0
      [  205.510151]  [<c0215066>] ? sched_group_set_rt_period+0x36/0x50
      [  205.510151]  [<c021508e>] ? cpu_rt_period_write_uint+0xe/0x10
      [  205.510151]  [<c0242dc5>] ? cgroup_file_write+0x125/0x160
      [  205.510151]  [<c0232c15>] ? hrtimer_interrupt+0x155/0x190
      [  205.510151]  [<c02f047f>] ? security_file_permission+0xf/0x20
      [  205.510151]  [<c0277ad8>] ? rw_verify_area+0x48/0xc0
      [  205.510151]  [<c0283744>] ? dupfd+0x104/0x130
      [  205.510151]  [<c027838c>] ? vfs_write+0x9c/0x160
      [  205.510151]  [<c0242ca0>] ? cgroup_file_write+0x0/0x160
      [  205.510151]  [<c027850d>] ? sys_write+0x3d/0x70
      [  205.510151]  [<c0203019>] ? sysenter_past_esp+0x6a/0x91
      [  205.510151]  =======================
      [  205.510151] Code: 0f 45 de 31 f6 0f ad d0 d3 ea f6 c1 20 0f 45 c2 0f 45 d6 89 45 f0 89 55 f4 8b 55 f4 31 c9 8b 45 f0 39 d3 89 c6 77 08 89 d0 31 d2 <f7> f3 89 c1 83 c4 08 89 f0 f7 f3 89 ca 5b 5e 5d c3 55 89 e5 56
      [  205.510151] EIP: [<c030c6ef>] div64_u64+0x5f/0x70 SS:ESP 0068:c6cede50
      
      The attached patch solves the issue for me.
      
I check as early as possible that the period is not zero since, if it
is, going ahead is useless. This way we also save a mutex_lock() and
a read_lock() compared to doing the check inside tg_set_bandwidth() or
__rt_schedulable().
Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Michael Trimarchi <trimarchimichael@yahoo.it>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      619b0488
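A sketch of where the check lands, assuming the tg_set_bandwidth() chain named in the trace (on a 32-bit kernel the 4294967296 write truncates to 0 in the long parameter, so one test covers both cases):

	int sched_group_set_rt_period(struct task_group *tg, long rt_period_us)
	{
		u64 rt_period, rt_runtime;

		rt_period = (u64)rt_period_us * NSEC_PER_USEC;
		rt_runtime = tg->rt_bandwidth.rt_runtime;

		/* bail out before tg_set_bandwidth() takes any locks */
		if (rt_period == 0)
			return -EINVAL;

		return tg_set_bandwidth(tg, rt_period, rt_runtime);
	}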
7. 30 June 2008, 2 commits
• sched: fix warning · 30432094
  Authored by Vegard Nossum
      This patch fixes the following warning:
      
      kernel/sched.c:1667: warning: 'cfs_rq_set_shares' defined but not used
      
      This seems the correct way to fix this; cfs_rq_set_shares() is only used
      in a single place, which is also inside #ifdef CONFIG_FAIR_GROUP_SCHED.
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      30432094
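The shape of the fix, per the description above (sketch):

	/* only CONFIG_FAIR_GROUP_SCHED code calls this, so scope it there */
	#ifdef CONFIG_FAIR_GROUP_SCHED
	static void cfs_rq_set_shares(struct cfs_rq *cfs_rq, unsigned long shares)
	{
		cfs_rq->shares = shares;
	}
	#endif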
• sched: build fix · 34e83e85
  Authored by Ingo Molnar
      fix:
      
kernel/sched.c: In function 'sched_group_set_shares':
kernel/sched.c:8635: error: implicit declaration of function 'cfs_rq_set_shares'
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      34e83e85
8. 29 June 2008, 1 commit
• sched: fix cpu hotplug · 79c53799
  Authored by Dmitry Adamushko
The CPU hotplug problems (crashes under high-volume unplug+replug
      tests) seem to be related to migrate_dead_tasks().
      
      Firstly I added traces to see all tasks being migrated with
migrate_live_tasks() and migrate_dead_tasks(). On my setup the problem
pops up (the one with "se == NULL" in the loop of
pick_next_task_fair()) shortly after the traces indicate that something
has been migrated with migrate_dead_tasks(). Btw, I can reproduce it
much faster now with just a plain cpu down/up loop.
      
      [disclaimer] Well, unless I'm really missing something important in
this late hour [/disclaimer] pick_next_task() is not something
      appropriate for migrate_dead_tasks() :-)
      
The following change seems to eliminate the problem on my setup
(although I kept it running only for a few minutes, just long enough to
get a few messages indicating that migrate_dead_tasks() does move tasks
and that the system is still ok).
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      79c53799
9. 27 June 2008, 19 commits
10. 24 June 2008, 1 commit
• sched: add new API sched_setscheduler_nocheck: add a flag to control access checks · 961ccddd
  Authored by Rusty Russell
      Hidehiro Kawai noticed that sched_setscheduler() can fail in
      stop_machine: it calls sched_setscheduler() from insmod, which can
      have CAP_SYS_MODULE without CAP_SYS_NICE.
      
Two cases could have failed, so they are changed to sched_setscheduler_nocheck:
        kernel/softirq.c:cpu_callback()
      	- CPU hotplug callback
        kernel/stop_machine.c:__stop_machine_run()
      	- Called from various places, including modprobe()
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-mm@kvack.org
      Cc: sugita <yumiko.sugita.yf@hitachi.com>
      Cc: Satoshi OSHIMA <satoshi.oshima.fk@hitachi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      961ccddd
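A sketch of the resulting API, assuming the usual pattern of a shared worker taking a 'user' flag (parameter name illustrative):

	/* shared worker: 'user' selects whether the capability and
	 * security checks (e.g. CAP_SYS_NICE) are applied */
	static int __sched_setscheduler(struct task_struct *p, int policy,
					struct sched_param *param, bool user);

	/* checked variant: unchanged behavior for existing callers */
	int sched_setscheduler(struct task_struct *p, int policy,
			       struct sched_param *param)
	{
		return __sched_setscheduler(p, policy, param, true);
	}

	/* kernel-internal variant: skips the permission checks, so e.g.
	 * stop_machine can set an RT policy from insmod's context */
	int sched_setscheduler_nocheck(struct task_struct *p, int policy,
				       struct sched_param *param)
	{
		return __sched_setscheduler(p, policy, param, false);
	}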
11. 20 June 2008, 3 commits
• sched: refactor wait_for_completion_timeout() · ea71a546
  Authored by Oleg Nesterov
Simplify the code and fix the boundary condition of
wait_for_completion_timeout(..., 0).
      
      We can kill the first __remove_wait_queue() as well.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      ea71a546
• sched: fix wait_for_completion_timeout() spurious failure under heavy load · bb10ed09
  Authored by Roland Dreier
It seems that the current implementation of wait_for_completion_timeout()
      has a small problem under very high load for the common pattern:
      
      	if (!wait_for_completion_timeout(&done, timeout))
      		/* handle failure */
      
      because the implementation very roughly does (lots of code deleted to
      show the basic flow):
      
      	static inline long __sched
      	do_wait_for_common(struct completion *x, long timeout, int state)
      	{
      		if (x->done)
      			return timeout;
      
      		do {
      			timeout = schedule_timeout(timeout);
      
      			if (!timeout)
      				return timeout;
      
      		} while (!x->done);
      
      		return timeout;
      	}
      
      so if the system is very busy and x->done is not set when
      do_wait_for_common() is entered, it is possible that the first call to
      schedule_timeout() returns 0 because the task doing wait_for_completion
      doesn't get rescheduled for a long time, even if it is woken up early
      enough.
      
      In this case, wait_for_completion_timeout() returns 0 without even
      checking x->done again, and the code above falls into its failure case
      purely for scheduler reasons, even if the hardware event or whatever was
      being waited for happened early enough.
      
      It would make sense to add an extra test to do_wait_for() in the timeout
      case and return 1 if x->done is actually set.
      
      A quick audit (not exhaustive) of wait_for_completion_timeout() callers
      seems to indicate that no one actually cares about the return value in
      the success case -- they just test for 0 (timed out) versus non-zero
      (wait succeeded).
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bb10ed09
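Sketched with the same simplification as the snippet above, the fix re-checks x->done when the clock runs out and reports success as 1:

	static inline long __sched
	do_wait_for_common(struct completion *x, long timeout, int state)
	{
		if (!x->done) {
			do {
				timeout = schedule_timeout(timeout);
			} while (!x->done && timeout);

			if (!x->done)
				return timeout;	/* genuine timeout */
		}
		/* completion arrived: never report 0, even if the clock
		 * ran out before we were rescheduled */
		return timeout ?: 1;
	}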
• sched: rt: fix the bandwidth constraint computations · 10b612f4
  Authored by Peter Zijlstra
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: "Daniel K." <dk@uw.no>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      10b612f4
12. 19 June 2008, 4 commits
• cpuset: limit the input of cpuset.sched_relax_domain_level · 30e0e178
  Authored by Li Zefan
      We allow the inputs to be [-1 ... SD_LV_MAX), and return -EINVAL
      for inputs outside this range.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Paul Menage <menage@google.com>
Acked-by: Paul Jackson <pj@sgi.com>
Acked-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      30e0e178
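A sketch of the validation, per the description (kernel/cpuset.c; rebuild_sched_domains() is assumed to be the rebuild entry point of that era):

	static int update_relax_domain_level(struct cpuset *cs, s64 val)
	{
		/* accept only [-1 ... SD_LV_MAX) */
		if (val < -1 || val >= SD_LV_MAX)
			return -EINVAL;

		if (val != cs->relax_domain_level) {
			cs->relax_domain_level = val;
			rebuild_sched_domains();
		}
		return 0;
	}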
• sched: CPU hotplug events must not destroy scheduler domains created by the cpusets · f18f982a
  Authored by Max Krasnyansky
The first issue is not related to the cpusets: we're simply leaking doms_cur.
It's allocated in arch_init_sched_domains(), which is called for every
hotplug event, so we just keep reallocating doms_cur without freeing it.
I introduced a free_sched_domains() function that cleans things up.
      
The second issue is that sched domains created by the cpusets are
completely destroyed by CPU hotplug events: for every CPU hotplug
event the scheduler attaches all CPUs to the NULL domain and then puts
them all into a single domain, thereby destroying the domains created
by the cpusets (partition_sched_domains).
The solution is simple: when cpusets are enabled, the scheduler should
not create the default domain and should instead let the cpusets do
that, which is exactly what the patch does.
Signed-off-by: Max Krasnyansky <maxk@qualcomm.com>
Cc: pj@sgi.com
Cc: menage@google.com
Cc: rostedt@goodmis.org
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      f18f982a
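A hedged sketch of the two fixes (2.6.26-era names; fallback_doms is assumed to be the static default domain mask of that code):

	/* fix 1: stop leaking doms_cur on every hotplug event */
	static void free_sched_domains(void)
	{
		ndoms_cur = 0;
		if (doms_cur != &fallback_doms)
			kfree(doms_cur);
		doms_cur = &fallback_doms;
	}

	/* fix 2: in the hotplug notifier, rebuild the single default
	 * domain only when cpusets are not managing domains themselves */
	#ifndef CONFIG_CPUSETS
		arch_init_sched_domains(&cpu_online_map);
	#endif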
• sched: rt-group: fix hierarchy · 7ea56616
  Authored by Peter Zijlstra
      Don't re-set the entity's runqueue to the wrong rq after we've set it
      to the right one.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Daniel K. <dk@uw.no>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7ea56616
• sched: NULL pointer dereference while setting sched_rt_period_us · 49307fd6
  Authored by Dario Faggioli
      When CONFIG_RT_GROUP_SCHED and CONFIG_CGROUP_SCHED are enabled, with:
      
       echo 10000 > /proc/sys/kernel/sched_rt_period_us
      
      We get this:
      
       BUG: unable to handle kernel NULL pointer dereference at 0000008c
       [  947.682233] IP: [<c0216b72>] __rt_schedulable+0x12/0x160
[  947.683123] *pde = 00000000
       [  947.683782] Oops: 0000 [#1]
       [  947.684307] Modules linked in:
       [  947.684308]
       [  947.684308] Pid: 2359, comm: bash Not tainted (2.6.26-rc6 #8)
       [  947.684308] EIP: 0060:[<c0216b72>] EFLAGS: 00000246 CPU: 0
       [  947.684308] EIP is at __rt_schedulable+0x12/0x160
       [  947.684308] EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000001
       [  947.684308] ESI: c0521db4 EDI: 00000001 EBP: c6cc9f00 ESP: c6cc9ed0
       [  947.684308]  DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
[  947.684308] Process bash (pid: 2359, ti=c6cc8000 task=c7a54f00 task.ti=c6cc8000)
       [  947.684308] Stack: c0222790 00000000 080f8c08 c0521db4 c6cc9f00 00000001 00000000 00000000
       [  947.684308]        c6cc9f9c 00000000 c0521db4 00000001 c6cc9f28 c0216d40 00000000 00000000
       [  947.684308]        c6cc9f9c 000f4240 000e7ef0 ffffffff c0521db4 c79dfb60 c6cc9f58 c02af2cc
       [  947.684308] Call Trace:
       [  947.684308]  [<c0222790>] ? do_proc_dointvec_conv+0x0/0x50
       [  947.684308]  [<c0216d40>] ? sched_rt_handler+0x80/0x110
       [  947.684308]  [<c02af2cc>] ? proc_sys_call_handler+0x9c/0xb0
       [  947.684308]  [<c02af2fa>] ? proc_sys_write+0x1a/0x20
       [  947.684308]  [<c0273c36>] ? vfs_write+0x96/0x160
       [  947.684308]  [<c02af2e0>] ? proc_sys_write+0x0/0x20
       [  947.684308]  [<c027423d>] ? sys_write+0x3d/0x70
       [  947.684308]  [<c0202ef5>] ? sysenter_past_esp+0x6a/0x91
       [  947.684308]  =======================
       [  947.684308] Code: 24 04 e8 62 b1 0e 00 89 c7 89 f8 8b 5d f4 8b 75
       f8 8b 7d fc 89 ec 5d c3 90 55 89 e5 57 56 53 83 ec 24 89 45 ec 89 55 e4
       89 4d e8 <8b> b8 8c 00 00 00 85 ff 0f 84 c9 00 00 00 8b 57 24 39 55 e8
       8b
       [  947.684308] EIP: [<c0216b72>] __rt_schedulable+0x12/0x160 SS:ESP  0068:c6cc9ed0
      
      We think the following patch solves the issue.
Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Michael Trimarchi <trimarchimichael@yahoo.it>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      49307fd6
13. 18 June 2008, 1 commit
• sched: rework of "prioritize non-migratable tasks over migratable ones" · 20b6331b
  Authored by Dmitry Adamushko
Regarding commit 45c01e82:
      
      I think we can do it simpler. Please take a look at the patch below.
      
Instead of having two separate arrays (which adds ~800 bytes on x86_32 and
twice that on x86_64), let's add "exclusive" tasks (the ones that are
bound to this CPU) to the head of the queue and "shared" ones to the
end.
      
In the case of a few newly woken up "exclusive" tasks, they are 'stacked'
(not queued as they are now), meaning that a task {i+1} is placed in front
of the previously woken up task {i}. But I don't think that this
behavior causes any realistic problems.
      
      There are a couple of changes on top of this one.
      
      (1) in check_preempt_curr_rt()
      
      I don't think there is a need for the "pick_next_rt_entity(rq, &rq->rt)
      != &rq->curr->rt" check.
      
enqueue_task_rt(p) and check_preempt_curr_rt() are always called one
after another with rq->lock held, so the following check
"p->rt.nr_cpus_allowed == 1 && rq->curr->rt.nr_cpus_allowed != 1" should
be enough (well, just its left part) to guarantee that 'p' has been
queued in front of 'curr'.
      
      (2) in set_cpus_allowed_rt()
      
I don't think there is a need for requeue_task_rt() here.
      
      Perhaps, the only case when 'requeue' (+ reschedule) might be useful is
      as follows:
      
      i) weight == 1 && cpu_isset(task_cpu(p), *new_mask)
      
(i.e. a task is being bound to this CPU);
      
      ii) 'p' != rq->curr
      
but here, 'p' has already been on this CPU for a while and was not
migrated, i.e. it's possible that 'rq->curr' would not have a high chance
of being migrated right at this particular moment (although it would in
a somewhat longer term), should we allow it to be preempted.
      
Anyway, I think we should not make it more complex trying to address
some rare corner cases; that's also why a single-queue approach is
preferable. Unless I'm missing something obvious, this approach gives us
similar functionality at lower cost.
      
      Verified only compilation-wise.
      
      (Almost)-Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      20b6331b
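A sketch of the single-queue policy described above (2.6.26-era sched_rt.c names):

	static void __enqueue_rt_entity(struct sched_rt_entity *rt_se)
	{
		struct rt_rq *rt_rq = rt_rq_of_se(rt_se);
		struct rt_prio_array *array = &rt_rq->active;
		struct list_head *queue = array->queue + rt_se_prio(rt_se);

		/* CPU-bound ("exclusive") entities go to the head of their
		 * priority list, migratable ("shared") ones to the tail */
		if (rt_se->nr_cpus_allowed == 1)
			list_add(&rt_se->run_list, queue);
		else
			list_add_tail(&rt_se->run_list, queue);

		__set_bit(rt_se_prio(rt_se), array->bitmap);
	}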