1. 17 Oct, 2008 5 commits
  2. 14 Oct, 2008 3 commits
  3. 10 Oct, 2008 3 commits
  4. 09 Oct, 2008 1 commit
    • sched debug: add name to sched_domain sysctl entries · a5d8c348
      Ingo Molnar authored
      Add /proc/sys/kernel/sched_domain/cpu0/domain0/name, to make
      it easier to see which specific scheduler domain remained at
      that entry.
      
      Since we process the scheduler domain tree and simplify it, it's
      not always immediately clear during debugging which domain came
      from where. (A small read sketch follows this entry.)
      
      Depends on CONFIG_SCHED_DEBUG=y.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
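      A minimal user-space sketch for reading the new entry (the path is
      taken from the commit message; the actual domain layout varies per
      machine, and the file exists only with CONFIG_SCHED_DEBUG=y):

          #include <stdio.h>

          /* Print the name of CPU 0's first scheduler domain. */
          int main(void)
          {
                  const char *path =
                          "/proc/sys/kernel/sched_domain/cpu0/domain0/name";
                  char name[64];
                  FILE *f = fopen(path, "r");

                  if (!f) {
                          perror(path);
                          return 1;
                  }
                  if (fgets(name, sizeof(name), f))
                          printf("cpu0/domain0: %s", name);
                  fclose(f);
                  return 0;
          }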
  5. 08 Oct, 2008 1 commit
  6. 07 Oct, 2008 1 commit
  7. 06 Oct, 2008 1 commit
  8. 04 Oct, 2008 2 commits
    • sched_rt.c: resched needed in rt_rq_enqueue() for the root rt_rq · f6121f4f
      Dario Faggioli authored
      While working on the new version of the code for SCHED_SPORADIC I
      noticed something strange in the present throttling mechanism, more
      specifically in the throttling timer handler in sched_rt.c
      (do_sched_rt_period_timer()) and in rt_rq_enqueue().
      
      The problem is that, when unthrottling a runqueue, rt_rq_enqueue()
      only asks for rescheduling if the runqueue has a sched_entity
      associated with it (i.e., rt_rq->rt_se != NULL). If the runqueue is
      the root rt_rq (whose rt_se is NULL), rescheduling does not take
      place and is delayed to some undefined instant in the future.
      
      This implies unpredictable bandwidth usage by RT tasks under
      throttling. For instance, with rt_runtime_us/rt_period_us =
      950ms/1000ms an RT task should get 95% of a CPU, but in our tests it
      got anywhere between 70% and 95%. With smaller values, e.g.
      95ms/100ms, things are even worse, with usage dropping to 20-25%.
      
      The tests we performed simply run 'yes' as a SCHED_FIFO task and
      check the CPU usage with top; we can investigate more thoroughly if
      you think it is needed. (A reproduction sketch follows this entry.)
      
      Things behave much better for us with the attached patch. It may not
      be the best approach, but it solved the issue for us.
      Signed-off-by: Dario Faggioli <raistlin@linux.it>
      Signed-off-by: Michael Trimarchi <trimarchimichael@yahoo.it>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
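      The test above can be reproduced with a small program along these
      lines (a minimal sketch, assuming root privileges; the CPU share
      still has to be observed with top, as in the original test):

          #define _GNU_SOURCE
          #include <sched.h>
          #include <stdio.h>

          /* Busy-loop as a SCHED_FIFO task, like running 'yes' under the
           * FIFO policy. With rt_runtime_us/rt_period_us = 950000/1000000
           * this should be throttled to roughly 95% of one CPU; the bug
           * made the observed share drift well below that. */
          int main(void)
          {
                  struct sched_param sp = { .sched_priority = 1 };

                  if (sched_setscheduler(0, SCHED_FIFO, &sp)) {
                          perror("sched_setscheduler");
                          return 1;
                  }
                  for (;;)
                          ;       /* watch CPU usage with top(1) */
          }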
    • clockevents: check broadcast tick device not the clock events device · 07454bff
      Thomas Gleixner authored
      Impact: jiffies increments too fast
      
      Hugh Dickins noted that with NOHZ=n and HIGHRES=n jiffies get
      incremented too fast. The reason is a wrong check in the broadcast
      enter/exit code, which keeps the local APIC timer in periodic mode
      when the switch happens. (A stand-alone sketch of the idea follows
      this entry.)
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
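      A compilable stand-alone sketch of the idea, with stub types (names
      are illustrative, not the kernel's tick-broadcast code):

          #include <stdio.h>

          enum tick_mode { TICK_PERIODIC, TICK_ONESHOT };

          struct tick_device { enum tick_mode mode; };

          /* When a CPU enters broadcast mode, whether its local timer must
           * be shut down depends on the mode of the broadcast tick device,
           * not on the per-CPU clock events device; checking the wrong
           * device left the local APIC timer ticking periodically, so
           * jiffies ran too fast. */
          static int shutdown_local_timer(const struct tick_device *broadcast)
          {
                  return broadcast->mode == TICK_PERIODIC;
          }

          int main(void)
          {
                  struct tick_device bc = { TICK_PERIODIC };

                  printf("shutdown local timer: %d\n",
                         shutdown_local_timer(&bc));
                  return 0;
          }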
  9. 03 Oct, 2008 4 commits
  10. 30 Sep, 2008 2 commits
  11. 29 Sep, 2008 5 commits
    • mm owner: fix race between swapoff and exit · 31a78f23
      Balbir Singh authored
      There's a race between mm->owner assignment and swapoff, more easily
      seen when task slab poisoning is turned on.  The condition occurs when
      try_to_unuse() runs in parallel with an exiting task.  A similar race
      can occur with callers of get_task_mm(), such as /proc/<pid>/<mmstats>
      or ptrace or page migration.
      
      CPU0                                    CPU1
                                              try_to_unuse
                                              looks at mm = task0->mm
                                              increments mm->mm_users
      task 0 exits
      mm->owner needs to be updated, but no
      new owner is found (mm_users > 1, but
      no other task has task->mm = task0->mm)
      mm_update_next_owner() leaves
                                              mmput(mm) decrements mm->mm_users
      task0 freed
                                              dereferencing mm->owner fails
      
      The fix is to notify the subsystem via the mm_owner_changed callback
      when no new owner is found, passing NULL as the new task. (A
      simplified user-space model of the race follows this entry.)
      
      Jiri Slaby:
      mm->owner was set to NULL prior to calling cgroup_mm_owner_callbacks();
      it must be set after that call, so that NULL is not passed as the old
      owner, which would cause an oops.
      
      Daisuke Nishimura:
      mm_update_next_owner() may set mm->owner to NULL; mem_cgroup_from_task()
      and its callers need to account for this situation to avoid an oops.
      
      Hugh Dickins:
      Testing these patches produced a lockdep warning and a hang below
      exec_mmap(). exit_mm() up_reads mmap_sem before calling
      mm_update_next_owner(), so exec_mmap() now needs to do the same. With
      that repositioning, there is no longer any point in mm_need_new_owner()
      allowing for a NULL mm.
      Reported-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
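      A deliberately simplified user-space model of the timing (a sketch,
      not kernel code; setting owner to NULL here stands in for the fix's
      explicit NULL-owner notification, instead of a dangling pointer to a
      freed task):

          #include <pthread.h>
          #include <stdio.h>

          struct task { const char *comm; };
          struct mm { struct task *owner; int mm_users; };

          static struct task task0 = { "task0" };
          static struct mm mm = { &task0, 1 };

          static void *unuse_thread(void *arg)
          {
                  (void)arg;
                  __sync_fetch_and_add(&mm.mm_users, 1); /* get_task_mm() */
                  struct task *owner = mm.owner;         /* may be stale */
                  printf("owner: %s\n", owner ? owner->comm : "(none)");
                  __sync_fetch_and_sub(&mm.mm_users, 1); /* mmput() */
                  return NULL;
          }

          int main(void)
          {
                  pthread_t t;

                  pthread_create(&t, NULL, unuse_thread, NULL);
                  /* task0 exits: mm_users > 1 but no other task has this
                   * mm, so the fixed mm_update_next_owner() reports a NULL
                   * new owner instead of leaving a dangling pointer. */
                  mm.owner = NULL;
                  pthread_join(t, NULL);
                  return 0;
          }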
    • hrtimer: prevent migration of per CPU hrtimers · ccc7dadf
      Thomas Gleixner authored
      Impact: per CPU hrtimers can be migrated from a dead CPU
      
      The hrtimer code has no knowledge about per CPU timers, but we need
      to prevent the migration of such timers and warn when such a timer
      is active at migration time.
      
      Explicitly mark the timers as per CPU and use a more understandable
      mode descriptor for the interrupt-safe unlocked callback mode, which
      is used by hrtimer_sleeper and the scheduler code. (A stub sketch of
      the migration check follows this entry.)
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
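      A stand-alone stub sketch of the check (flag names and values are
      illustrative, not the hrtimer API):

          #include <stdio.h>

          #define TIMER_PER_CPU  0x1
          #define TIMER_ACTIVE   0x2

          struct timer { unsigned int flags; };

          /* At CPU-offline time: never move an explicitly per-CPU timer
           * to another CPU, and warn if such a timer is still active. */
          static int may_migrate(const struct timer *t)
          {
                  if (t->flags & TIMER_PER_CPU) {
                          if (t->flags & TIMER_ACTIVE)
                                  fprintf(stderr,
                                          "WARN: active per-CPU timer "
                                          "at migration\n");
                          return 0;
                  }
                  return 1;
          }

          int main(void)
          {
                  struct timer t = { TIMER_PER_CPU | TIMER_ACTIVE };

                  printf("may migrate: %d\n", may_migrate(&t));
                  return 0;
          }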
    • hrtimer: mark migration state · b00c1a99
      Thomas Gleixner authored
      Impact: during migration active hrtimers can be seen as inactive
      
      The migration code removes the hrtimers from the queues of the dead
      CPU and temporarily sets the state to INACTIVE. The enqueue code then
      sets it back to ACTIVE/PENDING.
      
      Prevent the wrong state from being observed by using a separate
      migration state bit. (A stub sketch follows this entry.)
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
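      The idea as a compilable stub (bit names and values are made up, not
      the kernel's):

          #include <assert.h>

          /* A dedicated MIGRATE bit keeps a timer from looking INACTIVE
           * while it is being moved off a dead CPU. */
          #define STATE_INACTIVE  0x0
          #define STATE_ENQUEUED  0x1
          #define STATE_MIGRATE   0x4

          struct timer { unsigned int state; };

          static int timer_is_active(const struct timer *t)
          {
                  return t->state != STATE_INACTIVE; /* MIGRATE is active */
          }

          int main(void)
          {
                  struct timer t = { STATE_MIGRATE }; /* mid-migration */

                  assert(timer_is_active(&t)); /* not wrongly inactive */
                  return 0;
          }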
    • hrtimer: fix migration of CB_IRQSAFE_NO_SOFTIRQ hrtimers · 41e1022e
      Thomas Gleixner authored
      Impact: stale timers after a CPU went offline
      
      commit 37bb6cb4
             hrtimer: unlock hrtimer_wakeup
      
      changed the hrtimer sleeper callback mode to CB_IRQSAFE_NO_SOFTIRQ
      due to locking problems. As a result of this change, when enqueue is
      called for an already expired hrtimer, the callback function is no
      longer called directly from the enqueue code. The normal callers have
      been fixed in the code, but the migration code which moves hrtimers
      from a dead CPU to a live CPU was not made aware of this.
      
      This can be fixed by checking the timer state after the call to
      enqueue in the migration code. (A stub sketch follows this entry.)
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
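      A stub sketch of the fix's shape (stand-alone C, not the hrtimer
      code):

          #include <stdio.h>

          struct timer { long expires; };

          static long now = 100;

          /* Stub enqueue: returns nonzero when the timer has already
           * expired. In CB_IRQSAFE_NO_SOFTIRQ mode the real enqueue no
           * longer runs the callback itself, so the migration path has to
           * check the state after enqueueing and kick the expiry handling
           * for stale timers. */
          static int enqueue_expired(const struct timer *t)
          {
                  return t->expires <= now;
          }

          static void migrate_one(const struct timer *t)
          {
                  if (enqueue_expired(t))
                          printf("expired during migration: raise event\n");
          }

          int main(void)
          {
                  struct timer stale = { 50 }; /* expired on the dead CPU */

                  migrate_one(&stale);
                  return 0;
          }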
    • hrtimer: migrate pending list on cpu offline · 7659e349
      Thomas Gleixner authored
      Impact: hrtimers which are on the pending list are not migrated at
      	cpu offline and can be stale forever
      
      Add the pending list migration when CONFIG_HIGH_RES_TIMERS is
      enabled. (A stub sketch follows this entry.)
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
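      A stand-alone sketch of the splice idea (singly linked stub list,
      not the kernel's list_head API):

          #include <stdio.h>

          struct node { struct node *next; int id; };

          /* At CPU offline, splice the dead CPU's pending list onto the
           * live CPU's list so no pending timer is left behind (the real
           * patch works on the per-CPU hrtimer pending list). */
          static struct node *splice(struct node *dead, struct node *live)
          {
                  struct node *tail = dead;

                  if (!dead)
                          return live;
                  while (tail->next)
                          tail = tail->next;
                  tail->next = live;
                  return dead;
          }

          int main(void)
          {
                  struct node b = { NULL, 2 }, a = { &b, 1 }; /* dead CPU */
                  struct node c = { NULL, 3 };                /* live CPU */

                  for (struct node *n = splice(&a, &c); n; n = n->next)
                          printf("pending timer %d\n", n->id);
                  return 0;
          }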
  12. 26 Sep, 2008 2 commits
  13. 25 Sep, 2008 1 commit
  14. 23 Sep, 2008 9 commits