1. 28 May 2010, 1 commit
  2. 30 Apr 2010, 3 commits
  3. 18 Nov 2009, 1 commit
  4. 16 Nov 2009, 1 commit
  5. 19 Oct 2009, 1 commit
    • HWPOISON: Allow schedule_on_each_cpu() from keventd · 65a64464
      Authored by Andi Kleen
      Right now, calling schedule_on_each_cpu() from keventd deadlocks,
      because it tries to schedule a work item on the current CPU too.
      This happens via lru_add_drain_all() in hwpoison.
      
      Just call the function for the current CPU in this case. This is actually
      faster too.
      
      Debugging with Fengguang Wu & Max Asbock
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
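      A condensed sketch of the fix, modelled on the upstream patch (helper
      names such as current_is_keventd() follow the kernel of that era):

        int schedule_on_each_cpu(work_func_t func)
        {
                int cpu;
                int orig = -1;  /* CPU whose work we will run inline */
                struct work_struct *works;

                works = alloc_percpu(struct work_struct);
                if (!works)
                        return -ENOMEM;

                get_online_cpus();

                /*
                 * When running in keventd, don't queue a work item on the
                 * current CPU: call the function directly instead.  This
                 * avoids the deadlock and is faster too.
                 */
                if (current_is_keventd())
                        orig = raw_smp_processor_id();

                for_each_online_cpu(cpu) {
                        struct work_struct *work = per_cpu_ptr(works, cpu);

                        INIT_WORK(work, func);
                        if (cpu != orig)
                                schedule_work_on(cpu, work);
                }
                if (orig >= 0)
                        func(per_cpu_ptr(works, orig));

                for_each_online_cpu(cpu)
                        flush_work(per_cpu_ptr(works, cpu));

                put_online_cpus();
                free_percpu(works);
                return 0;
        }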
  6. 15 Oct 2009, 2 commits
  7. 09 Sep 2009, 1 commit
  8. 04 Aug 2009, 1 commit
  9. 02 Jun 2009, 1 commit
    • ftrace, workqueuetrace: make workqueue tracepoints use TRACE_EVENT macro · fb39125f
      Authored by Zhaolei
      v3: zhaolei@cn.fujitsu.com: Change TRACE_EVENT definition to new format
          introduced by Steven Rostedt: consolidate trace and trace_event headers
      v2: kosaki@jp.fujitsu.com: print the function names instead of addr, and zap
          the work addr
      v1: zhaolei@cn.fujitsu.com: Make workqueue tracepoints use TRACE_EVENT macro
      
      TRACE_EVENT is a more generic way to define tracepoints.
      Converting to it adds these new capabilities to the tracepoints:
      
        - zero-copy and per-cpu splice() tracing
        - binary tracing without printf overhead
        - structured logging records exposed under /debug/tracing/events
        - trace events embedded in function tracer output and other plugins
        - user-defined, per tracepoint filter expressions
      
      This patch therefore converts the workqueue-related tracepoints from
      DEFINE_TRACE to TRACE_EVENT; a sketch of the resulting definition style
      follows this entry.
      
      [ Impact: expand workqueue tracer to events tracing ]
      Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
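      For reference, a minimal sketch of a TRACE_EVENT definition in the style
      this patch introduces (field names are illustrative, and the surrounding
      trace-header boilerplate is omitted):

        TRACE_EVENT(workqueue_insertion,

                TP_PROTO(struct task_struct *wq_thread, struct work_struct *work),

                TP_ARGS(wq_thread, work),

                /* Layout of the binary record written to the ring buffer */
                TP_STRUCT__entry(
                        __array(char,           thread_comm,    TASK_COMM_LEN)
                        __field(pid_t,          thread_pid)
                        __field(work_func_t,    func)
                ),

                /* Fast, printf-free capture at the tracepoint itself */
                TP_fast_assign(
                        memcpy(__entry->thread_comm, wq_thread->comm, TASK_COMM_LEN);
                        __entry->thread_pid = wq_thread->pid;
                        __entry->func       = work->func;
                ),

                /* Text rendering, applied only when the trace is read */
                TP_printk("thread=%s:%d func=%pf", __entry->thread_comm,
                          __entry->thread_pid, __entry->func)
        );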
  10. 09 Apr 2009, 1 commit
    • work_on_cpu(): rewrite it to create a kernel thread on demand · 6b44003e
      Authored by Andrew Morton
      Impact: circular locking bugfix
      
      The various implementations and proposed implementations of work_on_cpu()
      are vulnerable to various deadlocks because they all use queues of some
      form.
      
      Unrelated pieces of kernel code thus gained dependencies wherein if one
      work_on_cpu() caller holds a lock which some other work_on_cpu() callback
      also takes, the kernel could, on rare occasions, deadlock.
      
      Fix this by creating a short-lived kernel thread for each work_on_cpu()
      invocation; see the sketch after this entry.
      
      This is not terribly fast, but the only current caller of work_on_cpu() is
      pci_call_probe().
      
      It would be nice to find some other way of doing the node-local
      allocations in the PCI probe code so that we can zap work_on_cpu()
      altogether.  The code there is rather nasty.  I can't think of anything
      simple at this time...
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
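      A condensed sketch of the per-call kthread approach, modelled on the
      upstream patch (error handling trimmed):

        struct work_for_cpu {
                struct completion completion;
                long (*fn)(void *);
                void *arg;
                long ret;
        };

        static int do_work_for_cpu(void *_wfc)
        {
                struct work_for_cpu *wfc = _wfc;

                wfc->ret = wfc->fn(wfc->arg);
                complete(&wfc->completion);
                return 0;
        }

        long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg)
        {
                struct task_struct *sub_thread;
                struct work_for_cpu wfc = { .fn = fn, .arg = arg };

                init_completion(&wfc.completion);

                /*
                 * A fresh thread per call: no shared queue, hence no lock
                 * dependencies between unrelated work_on_cpu() callers.
                 */
                sub_thread = kthread_create(do_work_for_cpu, &wfc, "work_for_cpu");
                if (IS_ERR(sub_thread))
                        return PTR_ERR(sub_thread);

                kthread_bind(sub_thread, cpu);
                wake_up_process(sub_thread);
                wait_for_completion(&wfc.completion);

                return wfc.ret;
        }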
  11. 03 Apr 2009, 1 commit
  12. 30 Mar 2009, 1 commit
  13. 20 Jan 2009, 2 commits
  14. 17 Jan 2009, 2 commits
  15. 14 Jan 2009, 1 commit
    • tracing: add a new workqueue tracer · e1d8aa9f
      Authored by Frederic Weisbecker
      Impact: new tracer
      
      The workqueue tracer provides some statistical information
      about each CPU workqueue thread, such as the number of
      works inserted and executed since its creation. It can help
      to evaluate the amount of work each of them has to perform.
      For example, it can help a developer decide whether to
      choose a per-CPU workqueue instead of a single-threaded one.
      
      It only traces statistical information for now, but it will probably
      provide event tracing later too.
      
      Such a tracer could also help, and be improved upon, to support
      rt-priority-sorted workqueue development.
      
      To have a snapshot of the workqueues state at any time, just do
      
      cat /debugfs/tracing/trace_stat/workqueues
      
      I.e. (columns: CPU, works inserted, works executed, workqueue thread):
      
        1    125        125       reiserfs/1
        1      0          0       scsi_tgtd/1
        1      0          0       aio/1
        1      0          0       ata/1
        1    114        114       kblockd/1
        1      0          0       kintegrityd/1
        1   2147       2147       events/1
      
        0      0          0       kpsmoused
        0    105        105       reiserfs/0
        0      0          0       scsi_tgtd/0
        0      0          0       aio/0
        0      0          0       ata_aux
        0      0          0       ata/0
        0      0          0       cqueue
        0      0          0       kacpi_notify
        0      0          0       kacpid
        0    149        149       kblockd/0
        0      0          0       kintegrityd/0
        0   1000       1000       khelper
        0   2270       2270       events/0
      
      Changes in V2:
      
      _ Drop the static array based on NR_CPUS and dynamically allocate the stat
        array with num_possible_cpus() and other cpumask facilities....
      _ Trace workqueue insertion at a slightly lower level (insert_work instead
        of queue_work) to handle even the workqueue barriers.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
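      A minimal sketch of the counting scheme described above, assuming the
      workqueue tracepoints of that era (trace/workqueue.h); simple per-CPU
      counters stand in for the patch's per-thread stat list:

        #include <linux/percpu.h>
        #include <linux/sched.h>
        #include <trace/workqueue.h>

        static DEFINE_PER_CPU(unsigned long, wq_inserted);
        static DEFINE_PER_CPU(unsigned long, wq_executed);

        /* Probe signatures must match the tracepoints' TP_PROTO. */
        static void probe_workqueue_insertion(struct task_struct *wq_thread,
                                              struct work_struct *work)
        {
                per_cpu(wq_inserted, task_cpu(wq_thread))++;
        }

        static void probe_workqueue_execution(struct task_struct *wq_thread,
                                              struct work_struct *work)
        {
                per_cpu(wq_executed, task_cpu(wq_thread))++;
        }

        static int __init wq_stats_init(void)
        {
                int ret;

                /* register_trace_<name>() stubs are generated per tracepoint. */
                ret = register_trace_workqueue_insertion(probe_workqueue_insertion);
                if (ret)
                        return ret;

                ret = register_trace_workqueue_execution(probe_workqueue_execution);
                if (ret)
                        unregister_trace_workqueue_insertion(probe_workqueue_insertion);

                return ret;
        }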
  16. 01 Jan 2009, 1 commit
  17. 14 Nov 2008, 1 commit
  18. 06 Nov 2008, 1 commit
    • cpumask: introduce new API, without changing anything · 2d3854a3
      Authored by Rusty Russell
      Impact: introduce new APIs
      
      We want to deprecate cpumasks on the stack, as we are headed for
      ginormous numbers of CPUs.  Eventually, we want to head towards an
      undefined 'struct cpumask' so they can never be declared on the stack.
      
      1) New cpumask functions which take pointers instead of copies.
         (cpus_* -> cpumask_*)
      
      2) Several new helpers to reduce requirements for temporary cpumasks
         (cpumask_first_and, cpumask_next_and, cpumask_any_and)
      
      3) Helpers for declaring cpumasks on- or off-stack for large NR_CPUS
         (cpumask_var_t, alloc_cpumask_var and free_cpumask_var)
      
      4) 'struct cpumask' for explicitness and to mark new-style code.
      
      5) Make iterator functions stop at nr_cpu_ids (a runtime constant),
         not NR_CPUS, for time efficiency and for smaller dynamic allocations
         in the future.
      
      6) cpumask_copy() so we can eventually allocate less than a full cpumask
         (for alloc_cpumask_var), and so we can eliminate the 'struct cpumask'
         definition eventually.
      
      7) work_on_cpu() helper for doing a task on a CPU, rather than saving the
         old cpumask for the current thread and manipulating it.
      
      8) smp_call_function_many() which is smp_call_function_mask() except
         taking a cpumask pointer.
      
      Note that this patch simply introduces the new functions and leaves
      the obsolescent ones in place, to simplify the transition patches;
      a short usage sketch follows this entry.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
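      A short usage sketch of the new-style API (the helper shown is
      illustrative, not from the patch): count how many CPUs in a given mask
      are online, without ever putting a full cpumask on the stack.

        #include <linux/cpumask.h>

        static int count_online_in(const struct cpumask *allowed)
        {
                cpumask_var_t tmp;
                int cpu, n = 0;

                /*
                 * With CONFIG_CPUMASK_OFFSTACK=y this allocates real storage;
                 * otherwise cpumask_var_t is a plain array and this is cheap.
                 */
                if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
                        return -ENOMEM;

                cpumask_copy(tmp, allowed);     /* pointer-based copy (item 6) */

                for_each_cpu(cpu, tmp)          /* stops at nr_cpu_ids (item 5) */
                        if (cpu_online(cpu))
                                n++;

                free_cpumask_var(tmp);
                return n;
        }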
  19. 22 Oct 2008, 1 commit
  20. 17 Oct 2008, 1 commit
  21. 11 Aug 2008, 2 commits
  22. 31 Jul 2008, 1 commit
  23. 26 Jul 2008, 8 commits
  24. 25 Jul 2008, 1 commit
  25. 05 Jul 2008, 1 commit
  26. 24 May 2008, 1 commit
  27. 01 May 2008, 1 commit