1. 16 Oct 2008: 27 commits
  2. 14 Oct 2008: 3 commits
  3. 10 Oct 2008: 3 commits
  4. 09 Oct 2008: 1 commit
    • sched debug: add name to sched_domain sysctl entries · a5d8c348
      Committed by Ingo Molnar
      Add /proc/sys/kernel/sched_domain/cpu0/domain0/name, to make
      it easier to see which specific scheduler domain remained at
      that entry.
      
      Since we process and simplify the scheduler domain tree, it is
      not always immediately clear during debugging which domain came
      from where.
      
      Depends on CONFIG_SCHED_DEBUG=y.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a5d8c348
  5. 08 Oct 2008: 1 commit
  6. 07 Oct 2008: 1 commit
  7. 06 Oct 2008: 1 commit
  8. 04 Oct 2008: 2 commits
    • sched_rt.c: resched needed in rt_rq_enqueue() for the root rt_rq · f6121f4f
      Committed by Dario Faggioli
      While working on the new version of the code for SCHED_SPORADIC I
      noticed something strange in the present throttling mechanism. More
      specifically in the throttling timer handler in sched_rt.c
      (do_sched_rt_period_timer()) and in rt_rq_enqueue().
      
      The problem is that, when unthrottling a runqueue, rt_rq_enqueue() only
      asks for rescheduling if the runqueue has a sched_entity associated to
      it (i.e., rt_rq->rt_se != NULL).
      Now, if the runqueue is the root rq (whose rt_se is NULL),
      rescheduling does not take place, and it is delayed to some
      undefined instant in the future.
      
      This implies some random bandwidth usage by RT tasks under
      throttling. For instance, with rt_runtime_us/rt_period_us =
      950ms/1000ms an RT task should get 95% of the CPU, but in our
      tests it got anything between 70% and 95%. With smaller time
      values, e.g. 95ms/100ms, things are even worse: values can drop
      to 20-25%.
      
      The tests we performed are simply running 'yes' as a SCHED_FIFO task,
      and checking the CPU usage with top, but we can investigate thoroughly
      if you think it is needed.
      
      Things go much better, for us, with the attached patch... Don't know if
      it is the best approach, but it solved the issue for us.
      Signed-off-by: Dario Faggioli <raistlin@linux.it>
      Signed-off-by: Michael Trimarchi <trimarchimichael@yahoo.it>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f6121f4f
    • clockevents: check broadcast tick device not the clock events device · 07454bff
      Committed by Thomas Gleixner
      Impact: jiffies increment too fast.
      
      Hugh Dickins noted that with NOHZ=n and HIGHRES=n, jiffies get
      incremented too fast. The reason is a wrong check in the broadcast
      enter/exit code, which keeps the local APIC timer in periodic mode
      when the switch happens.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      07454bff
  9. 03 Oct 2008: 1 commit