1. 16 March 2012, 1 commit
  2. 15 February 2012, 4 commits
  3. 14 February 2012, 1 commit
  4. 10 February 2012, 1 commit
  5. 07 February 2012, 2 commits
    • perf: Fix double start/stop in x86_pmu_start() · f39d47ff
      Committed by Stephane Eranian
      This patch fixes a bug introduced by the following
      commit:
      
              e050e3f0 ("perf: Fix broken interrupt rate throttling")
      
      The patch caused the following warning to pop up depending on
      the sampling frequency adjustments:
      
        ------------[ cut here ]------------
        WARNING: at arch/x86/kernel/cpu/perf_event.c:995 x86_pmu_start+0x79/0xd4()
      
      It was caused by the following call sequence:
      
      perf_adjust_freq_unthr_context.part() {
           stop()
           if (delta > 0) {
                perf_adjust_period() {
                    if (period > 8*...) {
                        stop()
                        ...
                        start()
                    }
                }
            }
            start()
      }
      
      Which caused a double start and a double stop, thus triggering
      the assert in x86_pmu_start().
      
      The patch fixes the problem by avoiding the double calls. We
      pass a new argument to perf_adjust_period() to indicate whether
      or not the event is already stopped. We can't just remove the
      start/stop from that function because it's called from
      __perf_event_overflow where the event needs to be reloaded via a
      stop/start back-to-back call.
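
      A minimal sketch of the resulting shape of perf_adjust_period() (illustrative
      only; period recomputation is omitted and the flag name follows the
      description above, not the literal diff):

        static void perf_adjust_period(struct perf_event *event, u64 nsec,
                                       u64 count, bool disable)
        {
                struct hw_perf_event *hwc = &event->hw;

                /* recompute hwc->sample_period from nsec/count here ... */

                if (local64_read(&hwc->period_left) > 8 * hwc->sample_period) {
                        if (disable)            /* only stop if not already stopped */
                                event->pmu->stop(event, PERF_EF_UPDATE);

                        local64_set(&hwc->period_left, 0);

                        if (disable)            /* only restart what we stopped */
                                event->pmu->start(event, PERF_EF_RELOAD);
                }
        }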
      
      The patch reintroduces the assertion in x86_pmu_start() which
      was removed by commit:
      
      	84f2b9b2 ("perf: Remove deprecated WARN_ON_ONCE()")
      
      In this second version, we've added calls to disable/enable the PMU
      during unthrottling or frequency adjustment, based on a bug report
      of spurious NMI interrupts from Eric Dumazet.
      Reported-and-tested-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: markus@trippelsdorf.de
      Cc: paulus@samba.org
      Link: http://lkml.kernel.org/r/20120207133956.GA4932@quad
      [ Minor edits to the changelog and to the code ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f39d47ff
    • block: strip out locking optimization in put_io_context() · 11a3122f
      Committed by Tejun Heo
      put_io_context() performed a complex trylock dance to avoid
      deferring ioc release to a workqueue.  It was also broken on UP
      because the trylock was always assumed to succeed, which resulted
      in an unbalanced preemption count.
      
      While there are ways to fix the UP breakage, even the most
      pathological microbench (forced ioc allocation and tight fork/exit
      loop) fails to show any appreciable performance benefit of the
      optimization.  Strip it out.  If there turns out to be workloads which
      are affected by this change, simpler optimization from the discussion
      thread can be applied later.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      LKML-Reference: <1328514611.21268.66.camel@sli10-conroe>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      11a3122f
  6. 05 February 2012, 1 commit
    • PM / Freezer: Thaw only kernel threads if freezing of kernel threads fails · 379e0be8
      Committed by Srivatsa S. Bhat
      If freezing of kernel threads fails, we are expected to automatically
      thaw tasks in the error recovery path. However, at times, we encounter
      situations in which we would like the automatic error recovery path
      to thaw only the kernel threads, because we want to be able to do
      some more cleanup before we thaw userspace. Something like:
      
      error = freeze_kernel_threads();
      if (error) {
      	/* Do some cleanup */
      
      	/* Only then thaw userspace tasks */
      	thaw_processes();
      }
      
      An example of such a situation is where we freeze/thaw filesystems
      during suspend/hibernation. There, if freezing of kernel threads
      fails, we would like to thaw the frozen filesystems before thawing
      the userspace tasks.
      
      So, modify freeze_kernel_threads() to thaw only kernel threads in
      case of freezing failure. And change suspend_freeze_processes()
      accordingly. (At the same time, let us also get rid of the rather
      cryptic usage of the conditional operator (?:) in that function.)
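
      A minimal sketch of the modified error path (simplified; printk/BUG_ON
      details are omitted):

        int freeze_kernel_threads(void)
        {
                int error;

                pm_nosig_freezing = true;
                error = try_to_freeze_tasks(false);     /* kernel threads too */
                if (error)
                        thaw_kernel_threads();  /* thaw only kernel threads */

                return error;
        }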
      
      [rjw: In fact, this patch fixes a regression introduced during the
       3.3 merge window, because without it thaw_processes() may be called
       before swsusp_free() in some situations and that may lead to massive
       memory allocation failures.]
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Acked-by: Nigel Cunningham <nigel@tuxonice.net>
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      379e0be8
  7. 04 February 2012, 1 commit
  8. 03 February 2012, 1 commit
  9. 02 February 2012, 1 commit
  10. 30 January 2012, 1 commit
    • PM / Hibernate: Fix s2disk regression related to freezing workqueues · 181e9bde
      Committed by Rafael J. Wysocki
      Commit 2aede851
      
        PM / Hibernate: Freeze kernel threads after preallocating memory
      
      introduced a mechanism by which kernel threads were frozen after
      the preallocation of hibernate image memory to avoid problems with
      frozen kernel threads not responding to memory freeing requests.
      However, it overlooked the s2disk code path in which the
      SNAPSHOT_CREATE_IMAGE ioctl was run directly after SNAPSHOT_FREE,
      which caused freeze_workqueues_begin() to BUG(), because it saw
      that workqueues had already been frozen.
      
      Although in principle this issue might be addressed by removing
      the relevant BUG_ON() from freeze_workqueues_begin(), that would
      reintroduce the very problem that commit 2aede851
      attempted to avoid into that particular code path.  For this reason,
      to fix the issue at hand, introduce thaw_kernel_threads() and make
      the SNAPSHOT_FREE ioctl execute it.
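
      Roughly, the ioctl handler gains the following (a sketch, not the
      literal diff):

        case SNAPSHOT_FREE:
                swsusp_free();
                memset(&data->handle, 0, sizeof(struct snapshot_handle));
                data->ready = 0;
                /*
                 * It is necessary to thaw kernel threads here, because
                 * SNAPSHOT_CREATE_IMAGE may be run directly after this.
                 */
                thaw_kernel_threads();
                break;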
      
      Special thanks to Srivatsa S. Bhat for detailed analysis of the
      problem.
      Reported-and-tested-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Cc: stable@kernel.org
      181e9bde
  11. 27 January 2012, 19 commits
    • sched/rt: Fix task stack corruption under __ARCH_WANT_INTERRUPTS_ON_CTXSW · cb297a3e
      Committed by Chanho Min
      This issue happens under the following conditions:
      
       1. preemption is off
       2. __ARCH_WANT_INTERRUPTS_ON_CTXSW is defined
       3. RT scheduling class
       4. SMP system
      
      Sequence is as follows:
      
       1. Suppose the current task is A; schedule() is started.
       2. Task A is enqueued as a pushable task at the entry of schedule():
         __schedule
          prev = rq->curr;
          ...
          put_prev_task
           put_prev_task_rt
            enqueue_pushable_task
       3. Pick task B as the next task.
         next = pick_next_task(rq);
       4. rq->curr is set to task B and context_switch() is started.
         rq->curr = next;
       5. At the entry of context_switch(), this CPU's rq->lock is released.
         context_switch
          prepare_task_switch
           prepare_lock_switch
            raw_spin_unlock_irq(&rq->lock);
       6. Shortly after rq->lock is released, an interrupt occurs and IRQ context is entered.
       7. try_to_wake_up(), which is called by the ISR, acquires rq->lock
          try_to_wake_up
           ttwu_remote
            rq = __task_rq_lock(p)
            ttwu_do_wakeup(rq, p, wake_flags);
              task_woken_rt
       8. push_rt_task() picks task A, which was enqueued earlier.
         task_woken_rt
          push_rt_tasks(rq)
           next_task = pick_next_pushable_task(rq)
       9. At find_lock_lowest_rq(), if double_lock_balance() returns 0,
          lowest_rq can be the remote rq.
          (But if preemption is on, double_lock_balance() always returns 1
          and this doesn't happen.)
         push_rt_task
          find_lock_lowest_rq
           if (double_lock_balance(rq, lowest_rq))..
      10. find_lock_lowest_rq() returns the available rq. Task A is migrated to
          the remote cpu/rq.
         push_rt_task
          ...
          deactivate_task(rq, next_task, 0);
          set_task_cpu(next_task, lowest_rq->cpu);
          activate_task(lowest_rq, next_task, 0);
      11. But task A is still in IRQ context on this CPU, so task A is
          scheduled by two CPUs at the same time until it returns from the IRQ.
          Task A's stack is corrupted.
      
      To fix it, don't migrate an RT task if it's still running.
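
      A sketch of the guard this implies in push_rt_task() (placement and the
      #ifdef are an assumption based on the description above):

        #ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
                /* the task may still be running on this CPU from IRQ context */
                if (unlikely(task_running(rq, next_task)))
                        return 0;
        #endif
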
      Signed-off-by: Chanho Min <chanho.min@lge.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: <stable@kernel.org>
      Link: http://lkml.kernel.org/r/CAOAMb1BHA=5fm7KTewYyke6u-8DP0iUuJMpgQw54vNeXFsGpoQ@mail.gmail.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cb297a3e
    • perf: Fix broken interrupt rate throttling · e050e3f0
      Committed by Stephane Eranian
      This patch fixes the sampling interrupt throttling mechanism.
      
      It was broken in v3.2. Events were not being unthrottled. The
      unthrottling mechanism required that events be checked at each
      timer tick.
      
      This patch solves this problem and also separates:
      
        - unthrottling
        - multiplexing
        - frequency-mode period adjustments
      
      Not all of them need to be executed at each timer tick.
      
      This third version of the patch is based on my original patch +
      PeterZ proposal (https://lkml.org/lkml/2012/1/7/87).
      
      At each timer tick, for each context:
      
        - if the current CPU has throttled events, we unthrottle events
      
        - if context has frequency-based events, we adjust sampling periods
      
        - if we have reached the jiffies interval, we multiplex (rotate)
      
      We decoupled rotation (multiplexing) from frequency-mode sampling
      period adjustments.  They should not necessarily happen at the same
      rate. Multiplexing is subject to jiffies_interval (currently at 1
      but could be higher once the tunable is exposed via sysfs).
      
      We have grouped frequency-mode adjustment and unthrottling into the
      same routine to minimize code duplication. When throttled while in
      frequency mode, we scan the events only once.
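
      A simplified sketch of the per-tick flow after this change (field and
      helper names approximate the description above, not the exact diff):

        void perf_event_task_tick(void)
        {
                struct list_head *head = &__get_cpu_var(rotation_list);
                struct perf_cpu_context *cpuctx;
                int throttled = __this_cpu_xchg(perf_throttled_count, 0);

                list_for_each_entry(cpuctx, head, rotation_list) {
                        /* one pass: unthrottle + frequency-mode adjustment */
                        perf_adjust_freq_unthr_context(&cpuctx->ctx, throttled);
                        if (cpuctx->task_ctx)
                                perf_adjust_freq_unthr_context(cpuctx->task_ctx,
                                                               throttled);

                        /* multiplex (rotate) only every jiffies_interval ticks */
                        if (!(jiffies % cpuctx->jiffies_interval))
                                perf_rotate_context(cpuctx);
                }
        }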
      
      We have fixed the threshold enforcement code in __perf_event_overflow().
      There was a bug whereby it would allow more than the authorized rate
      because an increment of hwc->interrupts was not executed at the right
      place.
      
      The patch was tested with a low sampling limit (2000) and with fixed
      periods, frequency mode, and an overcommitted PMU.
      
      On a 2.1GHz AMD CPU:
      
       $ cat /proc/sys/kernel/perf_event_max_sample_rate
       2000
      
      We set a rate of 3000 samples/sec (2.1GHz/3000 = 700000):
      
       $ perf record -e cycles,cycles -c 700000  noploop 10
       $ perf report -D | tail -21
      
       Aggregated stats:
                 TOTAL events:      80086
                  MMAP events:         88
                  COMM events:          2
                  EXIT events:          4
              THROTTLE events:      19996
            UNTHROTTLE events:      19996
                SAMPLE events:      40000
      
       cycles stats:
                 TOTAL events:      40006
                  MMAP events:          5
                  COMM events:          1
                  EXIT events:          4
              THROTTLE events:       9998
            UNTHROTTLE events:       9998
                SAMPLE events:      20000
      
       cycles stats:
                 TOTAL events:      39996
              THROTTLE events:       9998
            UNTHROTTLE events:       9998
                SAMPLE events:      20000
      
      For 10s, the cap is 2x2000x10 = 40000 samples.
      We get exactly that: 20000 samples/event.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Cc: <stable@kernel.org> # v3.2+
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20120126160319.GA5655@quad
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e050e3f0
    • sched: Fix ancient race in do_exit() · b5740f4b
      Committed by Yasunori Goto
      try_to_wake_up() has a problem: it may change a task's state from
      TASK_DEAD to TASK_RUNNING when racing with an SMI or when running as a
      guest in a virtual machine. As a result, the exited task is scheduled
      again and a panic occurs.

      Here is the sequence in which it occurs:
      
       ----------------------------------+-----------------------------
                                         |
                  CPU A                  |             CPU B
       ----------------------------------+-----------------------------
      
      TASK A calls exit()....
      
      do_exit()
      
        exit_mm()
          down_read(mm->mmap_sem);
      
          rwsem_down_failed_common()
      
            set TASK_UNINTERRUPTIBLE
            set waiter.task <= task A
            list_add to sem->wait_list
                 :
            raw_spin_unlock_irq()
             (I/O interruption occurred)
      
                                            __rwsem_do_wake(mmap_sem)
      
                                              list_del(&waiter->list);
                                              waiter->task = NULL
                                              wake_up_process(task A)
                                                try_to_wake_up()
                                                   (task is still
                                                     TASK_UNINTERRUPTIBLE)
                                                    p->on_rq is still 1.)
      
                                                    ttwu_do_wakeup()
                                                       (*A)
                                                         :
           (I/O interruption handler finished)
      
            if (!waiter.task)
                schedule() is not called
          because waiter.task is NULL.
      
            tsk->state = TASK_RUNNING
      
                :
                                                    check_preempt_curr();
                                                        :
        task->state = TASK_DEAD
                                                    (*B)
                                              <---    set TASK_RUNNING (*C)
      
           schedule()
           (exit task is running again)
           BUG_ON() is called!
       --------------------------------------------------------
      
      The execution time between (*A) and (*B) is usually very short, because
      interrupts are disabled, and setting TASK_RUNNING at (*C) is normally
      executed before TASK_DEAD is set.

      HOWEVER, if an SMI interrupts the CPU between (*A) and (*B),
      (*C) can execute AFTER TASK_DEAD has been set!
      Then the exited task is scheduled again, and BUG_ON() is called....
      
      If the system runs as a guest in a virtual machine, the time
      between (*A) and (*B) may also be long due to hypervisor scheduling,
      and the same phenomenon can occur.
      
      With this patch, do_exit() waits for task->pi_lock, which is taken by
      try_to_wake_up(), to be released. This guarantees that the task only
      becomes TASK_DEAD after the wakeup has completed.
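
      The synchronization added in do_exit() is roughly (sketch):

        /*
         * Wait until try_to_wake_up() has dropped our pi_lock, so a pending
         * wakeup cannot set TASK_RUNNING after we set TASK_DEAD below.
         */
        smp_mb();
        raw_spin_unlock_wait(&tsk->pi_lock);

        /* causes final put_task_struct in finish_task_switch(). */
        tsk->state = TASK_DEAD;
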
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/20120117174031.3118.E1E9C6FF@jp.fujitsu.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b5740f4b
    • time: Move common updates to a function · cc06268c
      Committed by Thomas Gleixner
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      cc06268c
    • time: Reorder so the hot data is together · 058892e6
      Committed by Thomas Gleixner
      Keep all the interesting data in a single cache line.
      
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      058892e6
    • time: Remove most of xtime_lock usage in timekeeping.c · 92c1d3ed
      Committed by John Stultz
      Now that ntp.c's locking is reworked, we can remove most
      of the xtime_lock usage in timekeeping.c
      
      The remaining xtime_lock presence is really for jiffies access
      and the global load calculation.
      
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      92c1d3ed
    • ntp: Add ntp_lock to replace xtime_locking · bd331268
      Committed by John Stultz
      Use an ntp_lock spinlock to replace the xtime_lock locking in ntp.c.
      
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      bd331268
    • ntp: Access tick_length variable via ntp_tick_length() · ea7cf49a
      Committed by John Stultz
      Currently the NTP-managed tick_length value is accessed globally. In
      preparation for locking cleanups, make sure it is accessed via a
      function and mark it as static.
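
      A minimal sketch of the accessor (locking is added later in this series):

        /* kernel/time/ntp.c */
        static u64 tick_length;

        u64 ntp_tick_length(void)
        {
                return tick_length;
        }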
      
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      ea7cf49a
    • ntp: Cleanup timex.h · 8357929e
      Committed by John Stultz
      Move ntp_synced() to ntp.c and mark time_status as static.
      Also yank the function declaration for a non-existent function.
      
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      8357929e
    • time: Add timekeeper lock · 70471f2f
      Committed by John Stultz
      Now that all the timekeeping variables are stored in
      the timekeeper structure, add a new lock to protect the
      structure.
      
      For now, this lock nests under the xtime_lock for writes.
      
      For readers, we don't need to take xtime_lock anymore.
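
      A sketch of the idea, assuming a seqlock-style lock (the reader note
      above suggests retry-based reads; illustrative only, not the literal
      diff):

        static struct timekeeper {
                /* xtime, wall_to_monotonic, ... (moved here by earlier patches) */
                seqlock_t lock;         /* protects the fields above */
        } timekeeper;

        /* writer: still nested under xtime_lock for now */
        write_seqlock_irqsave(&timekeeper.lock, flags);
        /* ... update timekeeping state ... */
        write_sequnlock_irqrestore(&timekeeper.lock, flags);

        /* reader: no xtime_lock, just retry on a concurrent update */
        do {
                seq = read_seqbegin(&timekeeper.lock);
                now = timekeeper.xtime;
        } while (read_seqretry(&timekeeper.lock, seq));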
      
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      70471f2f
    • time: Cleanup global variables and move them to the top · 8fcce546
      Committed by John Stultz
      Move global xtime_lock and timekeeping_suspended values up
      to the top of timekeeping.c
      
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      8fcce546
    • time: Move raw_time into timekeeper structure · 01f71b47
      Committed by John Stultz
      In preparation for locking cleanups, move raw_time into
      timekeeper structure.
      
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      01f71b47
    • time: Move xtime into timekeeper structure · 8ff2cb92
      Committed by John Stultz
      In preparation for locking cleanups, move xtime into
      timekeeper structure.
      
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      8ff2cb92
    • time: Move wall_to_monotonic into the timekeeper structure · d9f7217a
      Committed by John Stultz
      In preparation for locking cleanups, move wall_to_monotonic
      into the timekeeper structure.
      
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      d9f7217a
    • time: Move total_sleep_time into the timekeeper structure · 00c5fb77
      Committed by John Stultz
      Move total_sleep_time into the timekeeper structure in preparation
      for locking cleanups
      
      CC: Thomas Gleixner <tglx@linutronix.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Richard Cochran <richardcochran@gmail.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      00c5fb77
    • bugs, x86: Fix printk levels for panic, softlockups and stack dumps · b0f4c4b3
      Committed by Prarit Bhargava
      rsyslog will display KERN_EMERG messages on a connected
      terminal.  However, these messages are useless/undecipherable
      for a general user.
      
      For example, after a softlockup we get:
      
       Message from syslogd@intel-s3e37-04 at Jan 25 14:18:06 ...
       kernel:Stack:
      
       Message from syslogd@intel-s3e37-04 at Jan 25 14:18:06 ...
       kernel:Call Trace:
      
       Message from syslogd@intel-s3e37-04 at Jan 25 14:18:06 ...
       kernel:Code: ff ff a8 08 75 25 31 d2 48 8d 86 38 e0 ff ff 48 89
       d1 0f 01 c8 0f ae f0 48 8b 86 38 e0 ff ff a8 08 75 08 b1 01 4c 89 e0 0f 01 c9 <e8> ea 69 dd ff 4c 29 e8 48 89 c7 e8 0f bc da ff 49 89 c4 49 89
      
      This happens because the printk levels for these messages are
      incorrect. Only an informational message should be displayed on
      a terminal.
      
      I modified the printk levels for various messages in the kernel
      and tested the output by using the drivers/misc/lkdtm.c kernel
      modules (ie, softlockups, panics, hard lockups, etc.) and
      confirmed that the console output was still the same and that
      the output to the terminals was correct.
      
      For example, in the case of a softlockup we now see the much
      more informative:
      
       Message from syslogd@intel-s3e37-04 at Jan 25 10:18:06 ...
       BUG: soft lockup - CPU4 stuck for 60s!
      
      instead of the above confusing messages.
      
      AFAICT, the messages no longer have to be KERN_EMERG.  In the
      most important case of a panic we set console_verbose().  As for
      the other, less severe cases, the correct data is output to the
      console and /var/log/messages.
      
      Successfully tested by me using the drivers/misc/lkdtm.c module.
      Signed-off-by: Prarit Bhargava <prarit@redhat.com>
      Cc: dzickus@redhat.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1327586134-11926-1-git-send-email-prarit@redhat.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b0f4c4b3
    • sched/nohz: Fix nohz cpu idle load balancing state with cpu hotplug · 71325960
      Committed by Suresh Siddha
      With the recent nohz scheduler changes, the rq's nohz flag
      'NOHZ_TICK_STOPPED' and its associated state do not get cleared
      immediately after the cpu exits idle. They get cleared as part
      of the next tick seen on that cpu.
      
      For the cpu offline support, we need to clear this state
      manually. Fix it by registering a cpu notifier, which clears the
      nohz idle load balance state for this rq explicitly during the
      CPU_DYING notification.
      
      There won't be any nohz updates for that cpu after the
      CPU_DYING notification. But let's be extra paranoid and skip
      updating the nohz state in select_nohz_load_balancer() if
      the cpu is not in the active state anymore.
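
      A sketch of the CPU_DYING hook (names approximate the description above):

        static int sched_ilb_notifier(struct notifier_block *nfb,
                                      unsigned long action, void *hcpu)
        {
                switch (action & ~CPU_TASKS_FROZEN) {
                case CPU_DYING:
                        /* clear NOHZ_TICK_STOPPED and related state for this rq */
                        clear_nohz_tick_stopped(smp_processor_id());
                        return NOTIFY_OK;
                default:
                        return NOTIFY_DONE;
                }
        }
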
      Reported-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Reviewed-and-tested-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1327026538.16150.40.camel@sbsiddha-desk.sc.intel.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      71325960
    • sched/s390: Fix compile error in sched/core.c · db7e527d
      Committed by Christian Borntraeger
      Commit 029632fb ("sched: Make
      separate sched*.c translation units") removed the include of
      asm/mutex.h from sched.c.
      
      This breaks the combination of:
      
       CONFIG_MUTEX_SPIN_ON_OWNER=yes
       CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX=yes
      
      like s390 without mutex debugging:
      
        CC      kernel/sched/core.o
        kernel/sched/core.c: In function ‘mutex_spin_on_owner’:
        kernel/sched/core.c:3287: error: implicit declaration of function ‘arch_mutex_cpu_relax’
      
      Let's re-add the include to kernel/sched/core.c.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1326268696-30904-1-git-send-email-borntraeger@de.ibm.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      db7e527d
    • sched: Fix rq->nr_uninterruptible update race · 4ca9b72b
      Committed by Peter Zijlstra
      KOSAKI Motohiro noticed the following race:
      
       > CPU0                    CPU1
       > --------------------------------------------------------
       > deactivate_task()
       >                         task->state = TASK_UNINTERRUPTIBLE;
       > activate_task()
       >    rq->nr_uninterruptible--;
       >
       >                         schedule()
       >                           deactivate_task()
       >                             rq->nr_uninterruptible++;
       >
      
      Kosaki-San's scenario is possible when CPU0 runs
      __sched_setscheduler() against CPU1's current @task.
      
      __sched_setscheduler() does a dequeue/enqueue in order to move
      the task to its new queue (position) to reflect the newly provided
      scheduling parameters. However, it should be completely invariant with
      respect to nr_uninterruptible accounting: sched_setscheduler() doesn't
      affect readiness to run, merely the policy on when to run.
      
      So convert the inappropriate activate/deactivate_task usage to
      enqueue/dequeue_task, which avoids the nr_uninterruptible accounting.
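
      The __sched_setscheduler() path becomes, roughly (sketch):

        on_rq = p->on_rq;
        if (on_rq)
                dequeue_task(rq, p, 0);         /* was deactivate_task() */

        /* apply the new policy and priority here */

        if (on_rq)
                enqueue_task(rq, p, 0);         /* was activate_task() */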
      
      Also convert the two other sites, __migrate_task() and
      normalize_task(), that still use activate/deactivate_task. These sites
      aren't really a problem since __migrate_task() will only be called on
      non-running tasks (and is therefore immune to the described problem)
      and normalize_task() isn't ever used on regular systems.
      
      Also remove the comments from activate/deactivate_task since they're
      misleading at best.
      Reported-by: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1327486224.2614.45.camel@laptop
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4ca9b72b
  12. 24 January 2012, 3 commits
  13. 23 January 2012, 1 commit
    • net: introduce res_counter_charge_nofail() for socket allocations · 0e90b31f
      Committed by Glauber Costa
      There is a case in __sk_mem_schedule() where an allocation
      is beyond the maximum, yet we are allowed to proceed.
      It happens under the following condition:
      
      	sk->sk_wmem_queued + size >= sk->sk_sndbuf
      
      The network code won't revert the allocation in this case,
      meaning that at some point later it will try to do so. Since
      this is never communicated to the underlying res_counter
      code, there is an imbalance in the res_counter uncharge operation.
      
      I see two ways of fixing this:
      
      1) storing the information about those allocations somewhere
         in memcg, and then deducting from that first, before
         we start draining the res_counter,
      2) providing a slightly different allocation function for
         the res_counter, that matches the original behavior of
         the network code more closely.
      
      I decided to go for #2 here, believing it to be more elegant,
      since #1 would require us to do basically that, but in a more
      obscure way.
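
      A simplified sketch of option #2 (not the exact upstream code; locking
      of the counter hierarchy is reduced to the essentials):

        int res_counter_charge_nofail(struct res_counter *counter, unsigned long val,
                                      struct res_counter **limit_fail_at)
        {
                struct res_counter *c;
                int ret = 0;

                *limit_fail_at = NULL;
                for (c = counter; c != NULL; c = c->parent) {
                        spin_lock(&c->lock);
                        if (c->usage + val > c->limit && !ret) {
                                *limit_fail_at = c;
                                ret = -ENOMEM;
                        }
                        c->usage += val;        /* charge even when over the limit */
                        spin_unlock(&c->lock);
                }
                return ret;
        }
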
      Signed-off-by: Glauber Costa <glommer@parallels.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      CC: Tejun Heo <tj@kernel.org>
      CC: Li Zefan <lizf@cn.fujitsu.com>
      CC: Laurent Chavey <chavey@google.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0e90b31f
  14. 21 January 2012, 2 commits
  15. 20 January 2012, 1 commit