1. 22 Aug 2012, 5 commits
    • workqueue: reimplement cancel_delayed_work() using try_to_grab_pending() · 57b30ae7
      Committed by Tejun Heo
      cancel_delayed_work() can't be called from IRQ handlers due to its use
      of del_timer_sync() and can't cancel work items which are already
      transferred from timer to worklist.
      
      Also, unlike other flush and cancel functions, a canceled delayed_work
      would still point to the last associated cpu_workqueue.  If the
      workqueue is destroyed afterwards and the work item is re-used on a
      different workqueue, the queueing code can oops trying to dereference
      already freed cpu_workqueue.
      
      This patch reimplements cancel_delayed_work() using
      try_to_grab_pending() and set_work_cpu_and_clear_pending().  This
      allows the function to be called from IRQ handlers and makes its
      behavior consistent with other flush / cancel functions.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      57b30ae7
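      A minimal usage sketch of what this change enables; the handler and
      work item names below are illustrative, not from the patch:

          #include <linux/interrupt.h>
          #include <linux/workqueue.h>

          static struct delayed_work poll_dwork;          /* illustrative */

          static irqreturn_t my_irq_handler(int irq, void *dev_id)
          {
                  /* Cancelling a pending delayed_work from hard-IRQ context
                   * is now allowed; true means the work was pending and has
                   * been taken off both the timer and the worklist. */
                  if (cancel_delayed_work(&poll_dwork))
                          pr_debug("pending poll cancelled from IRQ\n");
                  return IRQ_HANDLED;
          }
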
    • workqueue: use irqsafe timer for delayed_work · e0aecdd8
      Committed by Tejun Heo
      Up to now, for delayed_works, try_to_grab_pending() couldn't be used
      from IRQ handlers because IRQs may happen while
      delayed_work_timer_fn() is in progress leading to indefinite -EAGAIN.
      
      This patch makes delayed_work use the new TIMER_IRQSAFE flag for
      delayed_work->timer.  This makes try_to_grab_pending() and thus
      mod_delayed_work_on() safe to call from IRQ handlers.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      e0aecdd8
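      As a usage sketch, an interrupt handler can now push back a pending
      timeout directly (the names below are illustrative):

          static struct delayed_work wdt_dwork;           /* illustrative */

          static irqreturn_t wdt_irq(int irq, void *dev_id)
          {
                  /* delayed_work->timer is TIMER_IRQSAFE, so modifying the
                   * pending delay from hard-IRQ context is safe. */
                  mod_delayed_work(system_wq, &wdt_dwork, msecs_to_jiffies(500));
                  return IRQ_HANDLED;
          }
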
    • workqueue: clean up delayed_work initializers and add missing one · f991b318
      Committed by Tejun Heo
      Reimplement delayed_work initializers using new timer initializers
      which take timer flags.  This reduces code duplications and will ease
      further initializer changes.  This patch also adds a missing
      initializer - INIT_DEFERRABLE_WORK_ONSTACK().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      f991b318
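      A sketch of the resulting initializer family (work function and
      variable names are illustrative):

          static void my_work_fn(struct work_struct *work);       /* illustrative */

          static struct delayed_work poll_dwork, gc_dwork;

          static void example_init(void)
          {
                  struct delayed_work stack_dwork;

                  INIT_DELAYED_WORK(&poll_dwork, my_work_fn);
                  INIT_DEFERRABLE_WORK(&gc_dwork, my_work_fn);    /* deferrable timer */
                  /* the newly added on-stack variant */
                  INIT_DEFERRABLE_WORK_ONSTACK(&stack_dwork, my_work_fn);
          }
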
    • workqueue: make deferrable delayed_work initializer names consistent · 203b42f7
      Committed by Tejun Heo
      Initializers for deferrable delayed_work are confusingly named.
      
      * __DEFERRED_WORK_INITIALIZER()
      * DECLARE_DEFERRED_WORK()
      * INIT_DELAYED_WORK_DEFERRABLE()
      
      Rename them to
      
      * __DEFERRABLE_WORK_INITIALIZER()
      * DECLARE_DEFERRABLE_WORK()
      * INIT_DEFERRABLE_WORK()
      
      This patch doesn't cause any functional changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      203b42f7
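      For users the conversion is a pure rename; a sketch with an
      illustrative declaration:

          /* INIT_DELAYED_WORK_DEFERRABLE()  ->  INIT_DEFERRABLE_WORK()           */
          /* DECLARE_DEFERRED_WORK()         ->  DECLARE_DEFERRABLE_WORK()         */
          /* __DEFERRED_WORK_INITIALIZER()   ->  __DEFERRABLE_WORK_INITIALIZER()   */

          static void gc_fn(struct work_struct *work);            /* illustrative */
          static DECLARE_DEFERRABLE_WORK(gc_dwork, gc_fn);        /* new spelling */
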
    • workqueue: cosmetic whitespace updates for macro definitions · ee64e7f6
      Committed by Tejun Heo
      Consistently use the last tab position for '\' line continuation in
      complex macro definitions.  This is to help the following patches.
      
      This patch is cosmetic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      ee64e7f6
  2. 21 Aug 2012, 4 commits
    • workqueue: deprecate system_nrt[_freezable]_wq · 3b07e9ca
      Committed by Tejun Heo
      system_nrt[_freezable]_wq are now superfluous.  Mark them deprecated and
      convert all users to system[_freezable]_wq.
      
      If you're cc'd and wondering what's going on: Now all workqueues are
      non-reentrant, so there's no reason to use system_nrt[_freezable]_wq.
      Please use system[_freezable]_wq instead.
      
      This patch doesn't make any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: David Howells <dhowells@redhat.com>
      3b07e9ca
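      The caller-side conversion is mechanical; a sketch in diff form (the
      work item names are made up):

          -       queue_work(system_nrt_wq, &dev->event_work);
          +       queue_work(system_wq, &dev->event_work);

          -       queue_delayed_work(system_nrt_freezable_wq, &ev->dwork, intv);
          +       queue_delayed_work(system_freezable_wq, &ev->dwork, intv);
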
    • workqueue: deprecate flush[_delayed]_work_sync() · 43829731
      Committed by Tejun Heo
      flush[_delayed]_work_sync() are now superfluous.  Mark them deprecated
      and convert all users to flush[_delayed]_work().
      
      If you're cc'd and wondering what's going on: Now all workqueues are
      non-reentrant and the regular flushes guarantee that the work item is
      not pending or running on any CPU on return, so there's no reason to
      use the sync flushes at all and they're going away.
      
      This patch doesn't make any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Mattia Dongili <malattia@linux.it>
      Cc: Kent Yoder <key@linux.vnet.ibm.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Karsten Keil <isdn@linux-pingi.de>
      Cc: Bryan Wu <bryan.wu@canonical.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Mauro Carvalho Chehab <mchehab@infradead.org>
      Cc: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: linux-wireless@vger.kernel.org
      Cc: Anton Vorontsov <cbou@mail.ru>
      Cc: Sangbeom Kim <sbkim73@samsung.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Eric Van Hensbergen <ericvh@gmail.com>
      Cc: Takashi Iwai <tiwai@suse.de>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Petr Vandrovec <petr@vandrovec.name>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Avi Kivity <avi@redhat.com> 
      43829731
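      The caller-side conversion is equally mechanical; a sketch in diff
      form (field names are made up):

          -       flush_work_sync(&priv->reset_work);
          +       flush_work(&priv->reset_work);

          -       flush_delayed_work_sync(&priv->poll_dwork);
          +       flush_delayed_work(&priv->poll_dwork);
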
    • workqueue: gut system_nrt[_freezable]_wq() · ae930e0f
      Committed by Tejun Heo
      Now that all workqueues are non-reentrant, system[_freezable]_wq() are
      equivalent to system_nrt[_freezable]_wq().  Replace the latter with
      wrappers around system[_freezable]_wq().  The wrapping goes through
      inline functions so that __deprecated can be added easily.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      ae930e0f
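      The wrapper shape described above might look roughly like this; a
      sketch, not necessarily the exact code:

          static inline struct workqueue_struct * __deprecated __system_nrt_wq(void)
          {
                  return system_wq;       /* all workqueues are non-reentrant now */
          }

          /* existing users keep compiling but get a deprecation warning */
          #define system_nrt_wq __system_nrt_wq()
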
    • workqueue: gut flush[_delayed]_work_sync() · 606a5020
      Committed by Tejun Heo
      Now that all workqueues are non-reentrant, flush[_delayed]_work_sync()
      are equivalent to flush[_delayed]_work().  Drop the separate
      implementation and make them thin wrappers around
      flush[_delayed]_work().
      
      * start_flush_work() no longer takes @wait_executing, as its only
        remaining user, flush_work(), always sets it to %true.
      
      * __cancel_work_timer() uses flush_work() instead of wait_on_work().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      606a5020
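      A sketch of the resulting thin wrappers:

          bool flush_work_sync(struct work_struct *work)
          {
                  return flush_work(work);        /* plain flush is already "sync" */
          }

          bool flush_delayed_work_sync(struct delayed_work *dwork)
          {
                  return flush_delayed_work(dwork);
          }
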
  3. 14 Aug 2012, 1 commit
    • workqueue: fix CPU binding of flush_delayed_work[_sync]() · 1265057f
      Committed by Tejun Heo
      delayed_work encodes the workqueue to use and the last CPU in
      delayed_work->work.data while it's on timer.  The target CPU is
      implicitly recorded as the CPU the timer is queued on and
      delayed_work_timer_fn() queues delayed_work->work to the CPU it is
      running on.
      
      Unfortunately, this leaves flush_delayed_work[_sync]() no way to find
      out which CPU the delayed_work was queued for when they try to
      re-queue after killing the timer.  Currently, it chooses the local CPU
      flush is running on.  This can unexpectedly move a delayed_work queued
      on a specific CPU to another CPU and lead to subtle errors.
      
      There isn't much point in trying to save several bytes in struct
      delayed_work, which is already close to a hundred bytes on 64bit with
      all debug options turned off.  This patch adds delayed_work->cpu to
      remember the CPU it's queued for.
      
      Note that if the timer is migrated during CPU down, the work item
      could be queued to the downed global_cwq after this change.  As a
      detached global_cwq behaves like an unbound one, this doesn't change
      much for the delayed_work.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      1265057f
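      A sketch of the added field; the exact struct layout shown here is an
      assumption:

          struct delayed_work {
                  struct work_struct work;
                  struct timer_list timer;
                  int cpu;        /* CPU the work was queued for, so the
                                   * flush path can re-queue it there after
                                   * killing the timer */
          };
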
  4. 04 Aug 2012, 6 commits
    • workqueue: implement mod_delayed_work[_on]() · 8376fe22
      Committed by Tejun Heo
      Workqueue was lacking a mechanism to modify the timeout of an already
      pending delayed_work.  delayed_work users have been working around
      this using several methods - using an explicit timer + work item,
      messing directly with delayed_work->timer, and canceling before
      re-queueing, all of which are error-prone and/or ugly.
      
      This patch implements mod_delayed_work[_on]() which behaves similarly
      to mod_timer() - if the delayed_work is idle, it's queued with the
      given delay; otherwise, its timeout is modified to the new value.
      Zero @delay guarantees immediate execution.
      
      v2: Updated to reflect try_to_grab_pending() changes.  Now safe to be
          called from bh context.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      8376fe22
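      A usage sketch of the new calls (workqueue and field names are
      illustrative):

          /* before: verbose and racy cancel-then-requeue */
          cancel_delayed_work(&priv->poll_dwork);
          queue_delayed_work(priv->wq, &priv->poll_dwork, new_delay);

          /* after: queue if idle, otherwise just move the timeout */
          mod_delayed_work(priv->wq, &priv->poll_dwork, new_delay);

          /* per-CPU variant */
          mod_delayed_work_on(cpu, priv->wq, &priv->poll_dwork, new_delay);
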
    • workqueue: mark a work item being canceled as such · bbb68dfa
      Committed by Tejun Heo
      There can be two reasons try_to_grab_pending() can fail with -EAGAIN.
      One is when someone else is queueing or dequeueing the work item.  With
      the previous patches, it is guaranteed that PENDING and queued state
      will soon agree making it safe to busy-retry in this case.
      
      The other is if multiple __cancel_work_timer() invocations are racing
      one another.  __cancel_work_timer() grabs PENDING and then waits for
      running instances of the target work item on all CPUs while holding
      PENDING and !queued.  try_to_grab_pending() invoked from another task
      will keep returning -EAGAIN while the current owner is waiting.
      
      Not distinguishing the two cases is okay because __cancel_work_timer()
      is the only user of try_to_grab_pending() and it invokes
      wait_on_work() whenever grabbing fails.  For the first case, busy
      looping should be fine but wait_on_work() doesn't cause any critical
      problem.  For the latter case, the new contender usually waits for the
      same condition as the current owner, so no unnecessarily extended
      busy-looping happens.  Combined, these make __cancel_work_timer()
      technically correct even without irq protection while grabbing PENDING
      or distinguishing the two different cases.
      
      While the current code is technically correct, not distinguishing the
      two cases makes it difficult to use try_to_grab_pending() for other
      purposes than canceling because it's impossible to tell whether it's
      safe to busy-retry grabbing.
      
      This patch adds a mechanism to mark a work item being canceled.
      try_to_grab_pending() now disables irq on success and returns -EAGAIN
      to indicate that grabbing failed but PENDING and queued states are
      going to agree soon and it's safe to busy-loop.  It returns -ENOENT if
      the work item is being canceled and may stay PENDING && !queued for an
      arbitrary amount of time.
      
      __cancel_work_timer() is modified to mark the work canceling with
      WORK_OFFQ_CANCELING after grabbing PENDING, thus making
      try_to_grab_pending() fail with -ENOENT instead of -EAGAIN.  Also, it
      invokes wait_on_work() iff grabbing failed with -ENOENT.  This isn't
      necessary for correctness but makes it consistent with other future
      users of try_to_grab_pending().
      
      v2: try_to_grab_pending() was testing preempt_count() to ensure that
          the caller has disabled preemption.  This triggers spuriously if
          !CONFIG_PREEMPT_COUNT.  Use preemptible() instead.  Reported by
          Fengguang Wu.
      
      v3: Updated so that try_to_grab_pending() disables irq on success
          rather than requiring preemption disabled by the caller.  This
          makes busy-looping easier and will allow try_to_grab_pending() to
          be used from bh/irq contexts.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      bbb68dfa
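      A sketch of the calling convention this creates; the surrounding code
      and the exact signature are approximations, not the verbatim patch:

          unsigned long flags;
          int ret;

          do {
                  ret = try_to_grab_pending(work, is_dwork, &flags);
                  /* -EAGAIN: queueing is in flux; PENDING and the queued
                   * state will agree soon, so busy-retrying is safe. */
                  if (unlikely(ret == -ENOENT))
                          flush_work(work);       /* being canceled, wait it out */
          } while (unlikely(ret < 0));

          /* success: PENDING is owned, IRQs are disabled, state in @flags */
          local_irq_restore(flags);
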
    • workqueue: introduce WORK_OFFQ_FLAG_* · b5490077
      Committed by Tejun Heo
      Low WORK_STRUCT_FLAG_BITS bits of work_struct->data contain
      WORK_STRUCT_FLAG_* and flush color.  If the work item is queued, the
      rest point to the cpu_workqueue with WORK_STRUCT_CWQ set; otherwise,
      WORK_STRUCT_CWQ is clear and the bits contain the last CPU number -
      either a real CPU number or one of WORK_CPU_*.
      
      Scheduled addition of mod_delayed_work[_on]() requires an additional
      flag, which is used only while a work item is off queue.  There are
      more than enough bits to represent off-queue CPU number on both 32 and
      64bits.  This patch introduces WORK_OFFQ_FLAG_* which occupy the lower
      part of the @work->data high bits while off queue.  This patch doesn't
      define any actual OFFQ flag yet.
      
      Off-queue CPU number is now shifted by WORK_OFFQ_CPU_SHIFT, which adds
      the number of bits used by OFFQ flags to WORK_STRUCT_FLAG_SHIFT, to
      make room for OFFQ flags.
      
      To avoid a shift-width warning with large WORK_OFFQ_FLAG_BITS, a ulong
      cast is added to WORK_STRUCT_NO_CPU and, just in case, a BUILD_BUG_ON()
      is added to check that there are enough bits to accommodate the
      off-queue CPU number.
      
      This patch doesn't make any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      b5490077
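      A sketch of the resulting bit layout; the relationships follow the
      description above, the exact constants are an assumption:

          enum {
                  /* low bits: WORK_STRUCT_FLAG_* + flush color, as before */
                  WORK_OFFQ_FLAG_BASE     = WORK_STRUCT_FLAG_BITS,
                  WORK_OFFQ_FLAG_BITS     = 0,    /* no OFFQ flag defined yet */
                  WORK_OFFQ_CPU_SHIFT     = WORK_OFFQ_FLAG_BASE + WORK_OFFQ_FLAG_BITS,
          };

          /* while off queue: work->data >> WORK_OFFQ_CPU_SHIFT == last CPU */
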
    • workqueue: set delayed_work->timer function on initialization · d8e794df
      Committed by Tejun Heo
      delayed_work->timer.function is currently initialized during
      queue_delayed_work_on().  Export delayed_work_timer_fn() and set
      delayed_work timer function during delayed_work initialization
      together with other fields.
      
      This ensures the timer function is always valid on an initialized
      delayed_work.  This is to help mod_delayed_work() implementation.
      
      To detect delayed_work users which diddle with the internal timer,
      a WARN is triggered at queue time if the timer function doesn't match.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      d8e794df
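      A rough sketch of the two pieces described above, not the verbatim
      patch:

          #define INIT_DELAYED_WORK(_work, _func)                                \
                  do {                                                           \
                          INIT_WORK(&(_work)->work, (_func));                    \
                          setup_timer(&(_work)->timer, delayed_work_timer_fn,    \
                                      (unsigned long)(_work));                   \
                  } while (0)

          /* at queue time, catch users who replaced the internal timer */
          WARN_ON_ONCE(timer->function != delayed_work_timer_fn ||
                       timer->data != (unsigned long)dwork);
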
    • workqueue: make queueing functions return bool · d4283e93
      Committed by Tejun Heo
      All queueing functions return 1 on success, 0 if the work item was
      already pending.  Update them to return bool instead.  This makes it
      clearer that they don't return 0 / -errno.
      
      This is cleanup and doesn't cause any functional difference.
      
      While at it, fix comment opening for schedule_work_on().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      d4283e93
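      A usage sketch; the return value only says whether the item was newly
      queued (names illustrative):

          if (!queue_work(wq, &dev->reset_work))
                  pr_debug("reset already pending, nothing to do\n");
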
    • workqueue: reorder queueing functions so that _on() variants are on top · 0a13c00e
      Committed by Tejun Heo
      Currently, queue/schedule[_delayed]_work_on() are located below their
      counterparts without the _on postfix even though the latter are usually
      implemented using the former.  Swap them.
      
      This is cleanup and doesn't cause any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      0a13c00e
  5. 02 Mar 2012, 1 commit
    • Block: use a freezable workqueue for disk-event polling · 62d3c543
      Committed by Alan Stern
      This patch (as1519) fixes a bug in the block layer's disk-events
      polling.  The polling is done by a work routine queued on the
      system_nrt_wq workqueue.  Since that workqueue isn't freezable, the
      polling continues even in the middle of a system sleep transition.
      
      Obviously, polling a suspended drive for media changes and such isn't
      a good thing to do; in the case of USB mass-storage devices it can
      lead to real problems requiring device resets and even re-enumeration.
      
      The patch fixes things by creating a new system-wide, non-reentrant,
      freezable workqueue and using it for disk-events polling.
      Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
      Cc: <stable@kernel.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      62d3c543
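      A sketch of the new workqueue and its use; the queue name below is an
      assumption, not taken from the patch:

          system_nrt_freezable_wq = alloc_workqueue("events_nrt_freezable",
                                                    WQ_NON_REENTRANT | WQ_FREEZABLE, 0);

          /* disk-event polling then targets the freezable queue */
          queue_delayed_work(system_nrt_freezable_wq, &ev->dwork, intv);
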
  6. 11 Jan 2012, 1 commit
  7. 27 Jul 2011, 1 commit
  8. 20 May 2011, 2 commits
  9. 21 Feb 2011, 1 commit
  10. 17 Feb 2011, 1 commit
  11. 09 Feb 2011, 1 commit
  12. 15 Dec 2010, 1 commit
  13. 27 Oct 2010, 1 commit
  14. 21 Oct 2010, 1 commit
  15. 19 Oct 2010, 1 commit
    • workqueue: remove in_workqueue_context() · daaae6b0
      Committed by Tejun Heo
      Commit a25909a4 (lockdep: Add an in_workqueue_context() lockdep-based
      test function) added in_workqueue_context() but there hasn't been any
      in-kernel user and the lockdep annotation in workqueue is scheduled to
      change.  Remove the unused function.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      daaae6b0
  16. 11 Oct 2010, 1 commit
    • workqueue: add and use WQ_MEM_RECLAIM flag · 6370a6ad
      Committed by Tejun Heo
      Add WQ_MEM_RECLAIM flag which currently maps to WQ_RESCUER, mark
      WQ_RESCUER as internal and replace all external WQ_RESCUER usages to
      WQ_MEM_RECLAIM.
      
      This makes the API users express the intent of the workqueue instead
      of indicating the internal mechanism used to guarantee forward
      progress.  This is also to make it cleaner to add more semantics to
      WQ_MEM_RECLAIM.  For example, if deemed necessary, memory reclaim
      workqueues can be made highpri.
      
      This patch doesn't introduce any functional change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jeff Garzik <jgarzik@pobox.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      6370a6ad
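      A usage sketch; callers now state intent rather than mechanism (the
      workqueue name is illustrative):

          /* work on this queue may run as part of memory reclaim, so it
           * must be guaranteed forward progress under memory pressure */
          wq = alloc_workqueue("mydev_reclaim", WQ_MEM_RECLAIM, 1);
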
  17. 19 Sep 2010, 3 commits
    • workqueue: implement flush[_delayed]_work_sync() · 09383498
      Committed by Tejun Heo
      Implement flush[_delayed]_work_sync().  These are flush functions
      which also make sure no CPU is still executing the target work from
      earlier queueing instances.  These are similar to
      cancel[_delayed]_work_sync() except that the target work item is
      flushed instead of cancelled.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      09383498
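      A usage sketch (field names illustrative); unlike the cancel variants,
      the work item still gets to run:

          flush_work_sync(&priv->reset_work);
          flush_delayed_work_sync(&priv->poll_dwork);
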
    • workqueue: cleanup flush/cancel functions · 401a8d04
      Committed by Tejun Heo
      Make the following cleanup changes.
      
      * Relocate flush/cancel function prototypes and definitions.
      
      * Relocate wait_on_cpu_work() and wait_on_work() before
        try_to_grab_pending().  These will be used to implement
        flush_work_sync().
      
      * Make all flush/cancel functions return bool instead of int.
      
      * Update wait_on_cpu_work() and wait_on_work() to return %true if they
        actually waited.
      
      * Add / update comments.
      
      This patch doesn't cause any functional changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      401a8d04
    • workqueue: implement alloc_ordered_workqueue() · 81dcaf65
      Committed by Tejun Heo
      alloc_ordered_workqueue() creates a workqueue which processes each
      work item one by one in the queued order.  This will be used to
      replace create_freezeable_workqueue() and
      create_singlethread_workqueue().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      81dcaf65
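      A usage sketch (names illustrative):

          ordered_wq = alloc_ordered_workqueue("mydev_ordered", 0);

          queue_work(ordered_wq, &step1_work);    /* runs first */
          queue_work(ordered_wq, &step2_work);    /* starts only after step1 finishes */
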
  18. 13 Sep 2010, 1 commit
  19. 25 Aug 2010, 2 commits
    • workqueue: fix cwq->nr_active underflow · 8a2e8e5d
      Committed by Tejun Heo
      cwq->nr_active is used to keep track of how many work items are active
      for the cpu workqueue, where 'active' is defined as either pending on
      global worklist or executing.  This is used to implement the
      max_active limit and workqueue freezing.  If a work item is queued
      after nr_active has already reached max_active, the work item doesn't
      increment nr_active and is put on the delayed queue and gets activated
      later as previous active work items retire.
      
      try_to_grab_pending(), which is used in the cancellation path,
      unconditionally decremented nr_active regardless of whether the work
      item being cancelled was active or delayed, so cancelling a delayed
      work item makes nr_active underflow.  This breaks max_active enforcement
      and triggers BUG_ON() in destroy_workqueue() later on.
      
      This patch fixes this bug by adding a flag WORK_STRUCT_DELAYED, which
      is set while a work item is on the delayed list, and making
      try_to_grab_pending() decrement nr_active iff the work item is
      currently active.
      
      The addition of the flag enlarges cwq alignment to 256 bytes which is
      getting a bit too large.  It's scheduled to be reduced back to 128
      bytes by merging WORK_STRUCT_PENDING and WORK_STRUCT_CWQ in the next
      devel cycle.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Johannes Berg <johannes@sipsolutions.net>
      8a2e8e5d
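      The gist of the fix, as a conceptual sketch rather than the literal
      diff: only a work item that actually took an nr_active slot may
      release one.

          if (!(*work_data_bits(work) & WORK_STRUCT_DELAYED))
                  cwq->nr_active--;       /* delayed items never counted */
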
    • workqueue: improve destroy_workqueue() debuggability · e41e704b
      Committed by Tejun Heo
      Now that the worklist is global, having works pending after wq
      destruction can easily lead to an oops, and destroy_workqueue() has
      several BUG_ON()s to catch these cases.  Unfortunately, BUG_ON()
      doesn't tell much about how the work became pending after the final
      flush_workqueue().
      
      This patch adds WQ_DYING which is set before the final flush begins.
      If a work is requested to be queued on a dying workqueue,
      WARN_ON_ONCE() is triggered and the request is ignored.  This clearly
      indicates which caller is trying to queue a work on a dying workqueue
      and keeps the system working in most cases.
      
      Locking rule comment is updated such that the 'I' rule includes
      modifying the field from destruction path.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      e41e704b
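      A sketch of the queue-time check described above:

          if (WARN_ON_ONCE(wq->flags & WQ_DYING))
                  return;         /* refuse work queued on a dying workqueue */
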
  20. 01 Aug 2010, 1 commit
    • workqueue: mark init_workqueues() as early_initcall() · 6ee0578b
      Committed by Suresh Siddha
      Mark init_workqueues() as early_initcall() so that it is initialized
      before smp bringup.  init_workqueues() registers for the hotcpu notifier
      and thus it should cope with the processors that are brought online after
      the workqueues are initialized.
      
      x86 smp bringup code uses workqueues and uses a workaround for the
      cold boot process (as the workqueues are initialized post smp_init()).
      Marking init_workqueues() as early_initcall() will pave the way for
      cleaning up this code.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      6ee0578b
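      The registration itself is a one-liner:

          early_initcall(init_workqueues);
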
  21. 23 Jul 2010, 1 commit
  22. 02 Jul 2010, 3 commits
    • workqueue: remove WQ_SINGLE_CPU and use WQ_UNBOUND instead · c7fc77f7
      Committed by Tejun Heo
      WQ_SINGLE_CPU combined with @max_active of 1 is used to achieve full
      ordering among works queued to a workqueue.  The same can be achieved
      using WQ_UNBOUND as unbound workqueues always use the gcwq for
      WORK_CPU_UNBOUND.  As @max_active is always one and the benefit of cpu
      locality isn't accessible anyway, serving them with unbound workqueues
      should be fine.
      
      Drop WQ_SINGLE_CPU support and use WQ_UNBOUND instead.  Note that most
      single thread workqueue users will be converted to use multithread or
      non-reentrant instead and only the ones which require strict ordering
      will keep using WQ_UNBOUND + @max_active of 1.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      c7fc77f7
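      For the strict-ordering case that remains, a usage sketch (the name is
      illustrative):

          wq = alloc_workqueue("mydrv_ordered", WQ_UNBOUND, 1);   /* one in flight, FIFO */
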
    • workqueue: implement unbound workqueue · f3421797
      Committed by Tejun Heo
      This patch implements unbound workqueue which can be specified with
      WQ_UNBOUND flag on creation.  An unbound workqueue has the following
      properties.
      
      * It uses a dedicated gcwq with a pseudo CPU number WORK_CPU_UNBOUND.
        This gcwq is always online and disassociated.
      
      * Workers are not bound to any CPU and not concurrency managed.  Works
        are dispatched to workers as soon as possible and the only applied
        limitation is @max_active.  IOW, all unbound workqueues are
        implicitly high priority.
      
      Unbound workqueues can be used as simple execution context provider.
      Contexts unbound to any cpu are served as soon as possible.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: David Howells <dhowells@redhat.com>
      f3421797
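      A usage sketch of an unbound workqueue as a plain execution context
      (names illustrative):

          unbound_wq = alloc_workqueue("mydrv_unbound", WQ_UNBOUND, 0);
          queue_work(unbound_wq, &job);   /* dispatched to a worker ASAP, with
                                           * no CPU affinity and no concurrency
                                           * management */
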
    • workqueue: prepare for WQ_UNBOUND implementation · bdbc5dd7
      Committed by Tejun Heo
      In preparation of WQ_UNBOUND addition, make the following changes.
      
      * Add WORK_CPU_* constants for pseudo cpu id numbers used (currently
        only WORK_CPU_NONE) and use them instead of NR_CPUS.  This is to
        allow another pseudo cpu id for unbound cpu.
      
      * Reorder WQ_* flags.
      
      * Make workqueue_struct->cpu_wq a union which contains a percpu
        pointer, regular pointer and an unsigned long value and use
        kzalloc/kfree() in UP allocation path.  This will be used to
        implement unbound workqueues which will use only one cwq on SMPs.
      
      * Move alloc_cwqs() allocation after initialization of wq fields, so
        that alloc_cwqs() has access to wq->flags.
      
      * Trivial relocation of wq local variables in freeze functions.
      
      These changes don't cause any functional change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      bdbc5dd7
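      A sketch of the cpu_wq union described above; member names are an
      assumption:

          union {
                  struct cpu_workqueue_struct __percpu    *pcpu;   /* per-cpu cwqs */
                  struct cpu_workqueue_struct             *single; /* single cwq */
                  unsigned long                            v;
          } cpu_wq;
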