1. 07 May 2010 (1 commit)
    • cpu_stop: implement stop_cpu[s]() · 1142d810
      Committed by Tejun Heo
      Implement a simplistic per-cpu maximum-priority cpu monopolization
      mechanism.  A non-sleeping callback can be scheduled to run on one or
      multiple cpus with maximum priority, monopolizing those cpus.  This is
      primarily to replace and unify RT workqueue usage in stop_machine and
      the scheduler's migration_thread, which currently serves multiple
      purposes.
      
      Four functions are provided - stop_one_cpu(), stop_one_cpu_nowait(),
      stop_cpus() and try_stop_cpus().
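      As a rough usage sketch (the callback, its argument and the chosen cpu
      are hypothetical; only the stop_one_cpu() signature comes from this
      patch):
      
        /* Non-sleeping callback; runs on the target cpu with maximum
         * priority, monopolizing it for the duration of the call. */
        static int poke_cpu(void *arg)
        {
                unsigned int *hits = arg;
        
                (*hits)++;
                return 0;
        }
        
        unsigned int hits = 0;
        
        /* Blocks until poke_cpu() has completed on cpu 3; returns its
         * return value. */
        int ret = stop_one_cpu(3, poke_cpu, &hits);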
      
      This is to allow clean sharing of resources among stop_cpu and all the
      migration thread users.  One stopper thread per cpu is created which
      is currently named "stopper/CPU".  This will eventually replace the
      migration thread and take on its name.
      
      * This facility was originally named cpuhog and lived in separate
        files, but Peter Zijlstra nacked the name, so it got renamed to
        cpu_stop and moved into stop_machine.c.
      
      * Better reporting of preemption leak as per Peter's suggestion.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
      1142d810
  2. 23 Apr 2010 (3 commits)
    • sched: Fix select_idle_sibling() logic in select_task_rq_fair() · 99bd5e2f
      Committed by Suresh Siddha
      Issues in the current select_idle_sibling() logic in select_task_rq_fair()
      in the context of a task wake-up:
      
      a) Once we select the idle sibling, we use that domain (spanning the cpu
         on which the task is being woken up and the idle sibling that we found)
         in our wake_affine() decisions. This domain is completely different
         from the domain we are supposed to use: the one spanning the cpu on
         which the task is being woken up and the cpu where the task previously
         ran.
      
      b) We do the select_idle_sibling() check only for the cpu on which the
         task is being woken up. If select_task_rq_fair() selects the previously
         run cpu for waking the task, doing a select_idle_sibling() check
         for that cpu would also help, and we don't do this currently.
      
      c) In scenarios where the cpu on which the task is being woken up is busy
         but its HT siblings are idle, we select the idle HT sibling for the
         wake-up instead of a core where the task previously ran and which is
         currently completely idle. That is, we are not taking decisions based
         on wake_affine() but directly selecting an idle sibling, which can
         cause an imbalance at the SMT/MC level that the periodic load balancer
         then has to correct.
      
      Fix this by first going through the load-imbalance calculations using
      wake_affine(), and only after deciding between the woken-up cpu and the
      previously-run cpu do we choose a possible idle sibling on which to wake
      the task.
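      Schematically, the fixed ordering in select_task_rq_fair() looks like
      this (a sketch; names only approximate the actual code):
      
        /* 1) Decide between the waking cpu and the previous cpu based on
         *    the wake_affine() load-imbalance calculation ... */
        if (affine_sd && wake_affine(affine_sd, p, sync))
                target = cpu;           /* cpu doing the wake-up */
        else
                target = prev_cpu;      /* cpu the task last ran on */
        
        /* 2) ... and only then look for an idle sibling near that choice. */
        target = select_idle_sibling(p, target);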
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1270079265.7835.8.camel@sbs-t61.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      99bd5e2f
    • sched: Pre-compute cpumask_weight(sched_domain_span(sd)) · 669c55e9
      Committed by Peter Zijlstra
      Dave reported that his large SPARC machines spend lots of time in
      hweight64().  Try to optimize away some of those needless
      cpumask_weight() invocations (especially with large offstack cpumasks
      these are very expensive indeed).
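      The idea, schematically (the cached field comes from the subject line;
      the exact placement is illustrative):
      
        /* Once, when the sched domain is built: */
        sd->span_weight = cpumask_weight(sched_domain_span(sd));
        
        /* Hot paths then read the cached value instead of recomputing the
         * population count of a possibly large offstack cpumask: */
        weight = sd->span_weight;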
      Reported-by: David Miller <davem@davemloft.net>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      669c55e9
    • sched: Cure load average vs NO_HZ woes · 74f5187a
      Committed by Peter Zijlstra
      Chase reported that because we decrement calc_load_task prematurely
      (before the next LOAD_FREQ sample), the load average could be skewed
      by as much as the number of CPUs in the machine.
      
      This patch, based on Chase's patch, cures the problem by keeping the
      delta of the CPU going into NO_HZ idle separately and folding that in
      on the next LOAD_FREQ update.
      
      This restores the balance and we get strict LOAD_FREQ period samples.
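      A sketch of the mechanism as described (names modeled on the
      scheduler's calc_load machinery; treat the details as illustrative):
      
        static atomic_long_t calc_load_tasks_idle;
        
        /* Going NO_HZ idle: park this CPU's delta instead of folding it
         * into the global count right away. */
        static void calc_load_account_idle(struct rq *this_rq)
        {
                long delta = calc_load_fold_active(this_rq);
        
                if (delta)
                        atomic_long_add(delta, &calc_load_tasks_idle);
        }
        
        /* On the next LOAD_FREQ update, fold the parked deltas back in. */
        static long calc_load_fold_idle(void)
        {
                if (atomic_long_read(&calc_load_tasks_idle))
                        return atomic_long_xchg(&calc_load_tasks_idle, 0);
                return 0;
        }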
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Chase Douglas <chase.douglas@canonical.com>
      LKML-Reference: <1271934490.1776.343.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      74f5187a
  3. 15 Apr 2010 (1 commit)
  4. 11 Apr 2010 (1 commit)
  5. 07 Apr 2010 (1 commit)
  6. 06 Apr 2010 (3 commits)
    • sched: Fix sched_getaffinity() · 84fba5ec
      Committed by Anton Blanchard
      taskset on 2.6.34-rc3 fails on one of my ppc64 test boxes with
      the following error:
      
        sched_getaffinity(0, 16, 0x10029650030) = -1 EINVAL (Invalid argument)
      
      This box has 128 threads and 16 bytes is enough to cover it.
      
      Commit cd3d8031 (sched:
      sched_getaffinity(): Allow less than NR_CPUS length) compares these
      16 bytes against nr_cpu_ids.
      
      Fix it by comparing nr_cpu_ids to the number of bits in the
      cpumask we pass in.
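      Schematically, the check becomes (a sketch of the comparison described
      above; with len = 16 and nr_cpu_ids = 128, 16 * 8 = 128 bits now
      passes):
      
        -       if (len < nr_cpu_ids)
        +       if ((len * BITS_PER_BYTE) < nr_cpu_ids)
                        return -EINVAL;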
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Sharyathi Nagesh <sharyath@in.ibm.com>
      Cc: Ulrich Drepper <drepper@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jack Steiner <steiner@sgi.com>
      Cc: Russ Anderson <rja@sgi.com>
      Cc: Mike Travis <travis@sgi.com>
      LKML-Reference: <20100406070218.GM5594@kryten>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      84fba5ec
    • Fix up possibly racy module refcounting · 5fbfb18d
      Committed by Nick Piggin
      Module refcounting is implemented with a per-cpu counter for speed.
      However, there is a race when tallying the counter: a reference may
      be taken on one CPU and released on another.  The reference-count
      summation may then see the decrement without having seen the previous
      increment, leading to a lower-than-expected count.  A module whose
      actual reference count never drops below 1 may thus report a
      reference count of 0 due to this race.
      
      Module removal generally runs under stop_machine, which prevents this
      race from causing bugs due to removal of in-use modules.  However,
      there are other real bugs in module.c code and in driver code
      (module_refcount is exported) where the callers do not run under
      stop_machine.
      
      Fix this by maintaining running per-cpu counters for the number of
      module refcount increments and the number of refcount decrements.  The
      increments are tallied after the decrements, so any decrement seen will
      always have its corresponding increment counted.  The final refcount is
      the difference of the total increments and decrements, preventing an
      artificially low refcount from being returned.
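      A summation consistent with that description might look like this
      (struct and field names are illustrative, not necessarily the patch's
      exact code):
      
        struct module_ref {
                unsigned int incs;
                unsigned int decs;
        };
        
        unsigned int module_refcount(struct module *mod)
        {
                unsigned int incs = 0, decs = 0;
                int cpu;
        
                /* Tally the decrements first ... */
                for_each_possible_cpu(cpu)
                        decs += per_cpu_ptr(mod->refptr, cpu)->decs;
                /* ... then the increments, so every decrement seen has its
                 * matching increment counted. */
                for_each_possible_cpu(cpu)
                        incs += per_cpu_ptr(mod->refptr, cpu)->incs;
        
                return incs - decs;
        }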
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5fbfb18d
    • audit: preface audit printk with audit · 449cedf0
      Committed by Eric Paris
      There have been a number of reports of people seeing the message:
      "name_count maxed, losing inode data: dev=00:05, inode=3185"
      in dmesg.  These usually lead to people reporting problems to the filesystem
      group, who are in turn clueless about what they mean.
      
      Eventually someone finds me and I explain what is going on and that
      these come from the audit system.  The basic problem is that the
      audit subsystem never expects a single syscall to 'interact' (for some
      wishy-washy meaning of interact) with more than 20 inodes.  But in fact
      some operations, like loading kernel modules, can cause changes to lots
      of inodes in debugfs.
      
      There are a couple of real fixes being bandied about, including removing
      the fixed compile-time limit of 20 or not auditing changes in debugfs
      (or both), but neither is small and obvious, so I am not sending them
      for immediate inclusion (I hope Al forwards a real solution next devel
      window).
      
      In the meantime this patch simply adds 'audit' to the beginning of the
      crap message so if a user sees it, they come blame me first and we can
      talk about what it means and make sure we understand all of the reasons
      it can happen and make sure this gets solved correctly in the long run.
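      Schematically, the change is just a prefix on the message (format
      string and arguments are illustrative):
      
        -       printk(KERN_DEBUG "name_count maxed, losing inode data: "
        +       printk(KERN_DEBUG "audit: name_count maxed, losing inode data: "
                       "dev=%02x:%02x, inode=%lu\n", MAJOR(dev), MINOR(dev), ino);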
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      449cedf0
  7. 03 Apr 2010 (19 commits)
  8. 01 Apr 2010 (2 commits)
    • perf: Use hot regs with software sched switch/migrate events · e49a5bd3
      Committed by Frederic Weisbecker
      The scheduler's task migration events don't work because they always
      pass NULL regs to perf_sw_event().  The event hence gets filtered out
      in perf_swevent_add().
      
      The scheduler's context switch events use task_pt_regs() to get the
      context in which the event occurred, which is the wrong thing to do:
      it gives us not the place in the kernel where we went to sleep but the
      place where we last left userspace.  The result is even more wrong if
      we switch from a kernel thread.
      
      Use the hot regs snapshot for both events, as they belong to the
      non-interrupt/exception-based event family.  Unlike page faults and the
      like, which provide the regs matching the exact origin of the event,
      here we need to save the current context.
      
      This makes the task migration event work and fixes the context
      switch callchains and origin ip.
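      Schematically, for the context switch software event (a sketch;
      perf_fetch_caller_regs() fills a pt_regs snapshot at the call site,
      per the v2 note below):
      
        struct pt_regs hot_regs;
        
        /* Capture the regs right here ("hot") rather than passing NULL or
         * task_pt_regs(), which points at the userspace entry frame. */
        perf_fetch_caller_regs(&hot_regs);
        perf_sw_event(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 1, &hot_regs, 0);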
      
      Example: perf record -a -e cs
      
      Before:
      
          10.91%      ksoftirqd/0                  0  [k] 0000000000000000
                      |
                      --- (nil)
                          perf_callchain
                          perf_prepare_sample
                          __perf_event_overflow
                          perf_swevent_overflow
                          perf_swevent_add
                          perf_swevent_ctx_event
                          do_perf_sw_event
                          __perf_sw_event
                          perf_event_task_sched_out
                          schedule
                          run_ksoftirqd
                          kthread
                          kernel_thread_helper
      
      After:
      
          23.77%  hald-addon-stor  [kernel.kallsyms]  [k] schedule
                  |
                  --- schedule
                     |
                     |--60.00%-- schedule_timeout
                     |          wait_for_common
                     |          wait_for_completion
                     |          blk_execute_rq
                     |          scsi_execute
                     |          scsi_execute_req
                     |          sr_test_unit_ready
                     |          |
                     |          |--66.67%-- sr_media_change
                     |          |          media_changed
                     |          |          cdrom_media_changed
                     |          |          sr_block_media_changed
                     |          |          check_disk_change
                     |          |          cdrom_open
      
      v2: Always build perf_arch_fetch_caller_regs() now that software
      events need it too.  They don't need it from modules, unlike trace
      events, so we keep the EXPORT_SYMBOL in trace_event_perf.c.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Miller <davem@davemloft.net>
      e49a5bd3
    • perf: Correctly align perf event tracing buffer · eb1e7961
      Committed by Frederic Weisbecker
      The trace event buffer used by perf to record raw sample events
      is typed as an array of char and may thus not be aligned to 8
      by alloc_percpu().
      
      But we need it to be aligned to 8 on sparc64, because we cast this
      buffer into a random structure type built by the TRACE_EVENT()
      macro to store the traces.  So if a random 64-bit field is accessed
      inside, it may not have the expected alignment.
      
      Use an array of long instead to force the appropriate alignment, and
      perform a compile-time check to ensure the size in bytes of the buffer
      is a multiple of sizeof(long), so that its actual size doesn't get
      shrunk under us.
      
      This fixes unaligned accesses reported while using perf lock
      on sparc64.
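      A sketch of the shape of the fix (names are illustrative):
      
        /* long-typed storage gets natural 8-byte alignment on 64-bit ... */
        typedef unsigned long perf_trace_t[PERF_MAX_TRACE_SIZE / sizeof(unsigned long)];
        static DEFINE_PER_CPU(perf_trace_t, perf_trace_buf);
        
        /* ... and a compile-time check (placed inside an init function)
         * keeps the byte size a multiple of sizeof(long): */
        BUILD_BUG_ON(PERF_MAX_TRACE_SIZE % sizeof(unsigned long));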
      Suggested-by: David Miller <davem@davemloft.net>
      Suggested-by: Tejun Heo <htejun@gmail.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Miller <davem@davemloft.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      eb1e7961
  9. 31 3月, 2010 1 次提交
    • genirq: Force MSI irq handlers to run with interrupts disabled · 753649db
      Committed by Thomas Gleixner
      Network folks reported that directing all MSI-X vectors of their
      multi-queue NICs to a single core can cause interrupt stack overflows
      when enough interrupts fire at the same time.
      
      This is caused by the fact that we run interrupt handlers by default
      with interrupts enabled, unless the driver requests the interrupt with
      the IRQF_DISABLED flag set.  The NIC handlers do not set this flag, so
      simultaneous interrupts can nest without limit and overflow the stack.
      
      The only safe countermeasure is to run the interrupt handlers with
      interrupts disabled.  We can't switch to this mode in general right
      now, but it is safe to do so for MSI interrupts.
      
      Force IRQF_DISABLED for MSI interrupt handlers.
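      A sketch of such a check at interrupt setup time (placement and field
      names are illustrative):
      
        /* MSI/MSI-X interrupts have an MSI descriptor attached; force
         * IRQF_DISABLED so their handlers run with interrupts off. */
        if (desc->msi_desc)
                new->flags |= IRQF_DISABLED;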
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Linus Torvalds <torvalds@osdl.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: David Miller <davem@davemloft.net>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: stable@kernel.org
      753649db
  10. 30 Mar 2010 (7 commits)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Committed by Tejun Heo
      
      percpu.h is included by sched.h and module.h, and thus ends up being
      included when building most .c files.  percpu.h includes slab.h, which
      in turn includes gfp.h, making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming their availability.  As this
      conversion needs to touch a large number of source files, the following
      script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. gfp.h if only gfp is
        used, slab.h if slab is used (see the example after this list).
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order conforms
        to its surroundings.  It is put in the include block which contains
        core kernel includes, in the same order that the rest are ordered:
        alphabetical, Christmas tree, reverse-Christmas-tree, or at the end
        if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
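      For example, a hypothetical file that calls kmalloc() but previously
      relied on the implicit inclusion chain would be updated like this:
      
         #include <linux/kernel.h>
         #include <linux/module.h>      /* used to drag in slab.h via percpu.h */
        +#include <linux/slab.h>        /* added: this file calls kmalloc()/kfree() */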
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion;
         some needed manual addition, while for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed,
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs, requiring slab.h to be added manually.
      
      5. The script was run on all .h files, but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles), and a few
         more options had to be turned off depending on the arch to make
         things build (like ipr on powerpc/64, which failed due to a missing
         writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. The percpu.h modifications were reverted so that they could be
         applied as a separate patch and serve as a bisection point.
      
      Given the fact that I had only a couple of failures from the tests on
      step 6, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers, which should be easily discoverable on most builds
      of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
    • CRED: Fix memory leak in error handling · 570b8fb5
      Committed by Mathieu Desnoyers
      Fix a memory leak on an OOM condition in prepare_usermodehelper_creds().
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: James Morris <jmorris@namei.org>
      570b8fb5
    • ring-buffer: Add missing unlock · 292f60c0
      Committed by Julia Lawall
      In some error handling cases the lock is not unlocked.  The return is
      converted to a goto, to share the unlock at the end of the function.
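      The conversion, schematically (function, lock and error names are
      hypothetical):
      
        int foo(void)
        {
                int ret = 0;
        
                spin_lock_irq(&lock);
                if (broken) {
                        ret = -EINVAL;
                        goto out;  /* was: return -EINVAL, leaking the lock */
                }
                do_work();
         out:
                spin_unlock_irq(&lock);
                return ret;
        }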
      
      A simplified version of the semantic patch that finds this problem is as
      follows: (http://coccinelle.lip6.fr/)
      
      // <smpl>
      @r exists@
      expression E1;
      identifier f;
      @@
      
      f (...) { <+...
      * spin_lock_irq (E1,...);
      ... when != E1
      * return ...;
      ...+> }
      // </smpl>
      Signed-off-by: Julia Lawall <julia@diku.dk>
      LKML-Reference: <Pine.LNX.4.64.1003291736440.21896@ask.diku.dk>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      292f60c0
    • tracing: Fix lockdep warning in global_clock() · e36673ec
      Committed by Li Zefan
       # echo 1 > events/enable
       # echo global > trace_clock
      
      ------------[ cut here ]------------
      WARNING: at kernel/lockdep.c:3162 check_flags+0xb2/0x190()
      ...
      ---[ end trace 3f86734a89416623 ]---
      possible reason: unannotated irqs-on.
      ...
      
      There's no reason to use raw_local_irq_save() in trace_clock_global().
      The local_irq_save() version is fine, and does not cause the lockdep
      warning.
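      The change, schematically, in trace_clock_global() (the raw_ variants
      bypass lockdep's irq-state annotations, which is what triggered the
      "unannotated irqs-on" warning):
      
        -       raw_local_irq_save(flags);
        +       local_irq_save(flags);
                ...
        -       raw_local_irq_restore(flags);
        +       local_irq_restore(flags);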
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      LKML-Reference: <4BA97FA1.7030606@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e36673ec
    • x86: Do not free zero sized per cpu areas · eed63519
      Committed by Ian Campbell
      This avoids an infinite loop in free_early_partial().
      
      Add a warning to free_early_partial() to catch future problems.
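      A sketch of the resulting guards in free_early_partial() (the message
      text is illustrative):
      
        if (start == end)
                return;         /* zero-sized area: nothing to free */
        
        if (WARN_ONCE(start > end,
                      "free_early_partial: wrong range [%#llx, %#llx]\n",
                      start, end))
                return;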
      
      -v5: put the start > end check back into WARN_ONCE()
      -v6: use one line for the warning, as suggested by Linus
      -v7: more tests
      -v8: remove the function name from the message, as suggested by
           Johannes; WARN_ONCE() will print out the function name anyway.
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Tested-by: Joel Becker <joel.becker@oracle.com>
      Tested-by: Stanislaw Gruszka <sgruszka@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1269830604-26214-4-git-send-email-yinghai@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      eed63519
    • SLOW_WORK: CONFIG_SLOW_WORK_PROC should be CONFIG_SLOW_WORK_DEBUG · a53f4f9e
      Committed by David Howells
      CONFIG_SLOW_WORK_PROC was changed to CONFIG_SLOW_WORK_DEBUG, but not in all
      instances.  Change the remaining instances.  This makes the debugfs file
      display the time mark and the owner's description again.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a53f4f9e
    • slow-work: use get_ref wrapper instead of directly calling get_ref · 88be12c4
      Committed by Dave Airlie
      Otherwise we can get an oops if the user has no get_ref/put_ref
      requirement.
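      Schematically (assuming, per the subject line, a slow_work_get_ref()
      wrapper that tolerates a missing get_ref operation):
      
        -       work->ops->get_ref(work);       /* oopses when ->get_ref is NULL */
        +       slow_work_get_ref(work);        /* wrapper handles the NULL case */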
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      88be12c4
  11. 29 Mar 2010 (1 commit)