1. 22 Aug 2012, 7 commits
    • workqueue: reimplement cancel_delayed_work() using try_to_grab_pending() · 57b30ae7
      Tejun Heo authored
      cancel_delayed_work() can't be called from IRQ handlers due to its use
      of del_timer_sync() and can't cancel work items which are already
      transferred from timer to worklist.
      
      Also, unlike other flush and cancel functions, a canceled delayed_work
      would still point to the last associated cpu_workqueue.  If the
      workqueue is destroyed afterwards and the work item is re-used on a
      different workqueue, the queueing code can oops trying to dereference
      the already freed cpu_workqueue.
      
      This patch reimplements cancel_delayed_work() using
      try_to_grab_pending() and set_work_cpu_and_clear_pending().  This
      allows the function to be called from IRQ handlers and makes its
      behavior consistent with other flush / cancel functions.
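      
      A minimal sketch of the reimplementation, assuming the
      try_to_grab_pending() semantics described above (close to, but not
      necessarily a verbatim copy of, the patch):
      
      	bool cancel_delayed_work(struct delayed_work *dwork)
      	{
      		unsigned long flags;
      		int ret;
      
      		/* busy-retry while PENDING and queued state catch up */
      		do {
      			ret = try_to_grab_pending(&dwork->work, true, &flags);
      		} while (unlikely(ret == -EAGAIN));
      
      		if (unlikely(ret < 0))	/* -ENOENT: being canceled elsewhere */
      			return false;
      
      		/* clear PENDING but keep the last CPU association */
      		set_work_cpu_and_clear_pending(&dwork->work,
      					       work_cpu(&dwork->work));
      		local_irq_restore(flags);
      		return ret;
      	}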
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      57b30ae7
    • workqueue: use mod_delayed_work() instead of __cancel + queue · e7c2f967
      Tejun Heo authored
      Now that mod_delayed_work() is safe to call from IRQ handlers,
      __cancel_delayed_work() followed by queue_delayed_work() can be
      replaced with mod_delayed_work().
      
      Most conversions are straightforward except for the following.
      
      * net/core/link_watch.c: linkwatch_schedule_work() was doing quite an
        elaborate dance around its delayed_work.  Collapse it such that
        linkwatch_work is queued for immediate execution if LW_URGENT and the
        existing timer is kept otherwise.
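      
      The general shape of the conversion, sketched (the dwork field name
      is illustrative, not from any specific driver):
      
      	/* before: two-step cancel + re-queue */
      	__cancel_delayed_work(&dev->dwork);
      	queue_delayed_work(wq, &dev->dwork, delay);
      
      	/* after: a single call, now safe even from IRQ handlers */
      	mod_delayed_work(wq, &dev->dwork, delay);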
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Tomi Valkeinen <tomi.valkeinen@ti.com> 
      e7c2f967
    • workqueue: use irqsafe timer for delayed_work · e0aecdd8
      Tejun Heo authored
      Up to now, for delayed_works, try_to_grab_pending() couldn't be used
      from IRQ handlers because IRQs may happen while
      delayed_work_timer_fn() is in progress, leading to indefinite -EAGAIN.
      
      This patch makes delayed_work use the new TIMER_IRQSAFE flag for
      delayed_work->timer.  This makes try_to_grab_pending() and thus
      mod_delayed_work_on() safe to call from IRQ handlers.
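      
      The core of the change, roughly (abbreviated from the patch):
      
      	#define __INIT_DELAYED_WORK(_work, _func, _tflags)		\
      		do {							\
      			INIT_WORK(&(_work)->work, (_func));		\
      			__setup_timer(&(_work)->timer,			\
      				      delayed_work_timer_fn,		\
      				      (unsigned long)(_work),		\
      				      (_tflags) | TIMER_IRQSAFE);	\
      		} while (0)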
      Signed-off-by: Tejun Heo <tj@kernel.org>
      e0aecdd8
    • workqueue: clean up delayed_work initializers and add missing one · f991b318
      Tejun Heo authored
      Reimplement delayed_work initializers using new timer initializers
      which take timer flags.  This reduces code duplications and will ease
      further initializer changes.  This patch also adds a missing
      initializer - INIT_DEFERRABLE_WORK_ONSTACK().
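      
      A sketch of the resulting layering - every delayed_work initializer
      becomes a thin wrapper around one flag-taking macro:
      
      	#define INIT_DELAYED_WORK(_work, _func)				\
      		__INIT_DELAYED_WORK(_work, _func, 0)
      
      	#define INIT_DEFERRABLE_WORK_ONSTACK(_work, _func)		\
      		__INIT_DELAYED_WORK_ONSTACK(_work, _func, TIMER_DEFERRABLE)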
      Signed-off-by: Tejun Heo <tj@kernel.org>
      f991b318
    • workqueue: make deferrable delayed_work initializer names consistent · 203b42f7
      Tejun Heo authored
      Initializers for deferrable delayed_work are confusingly named.
      
      * __DEFERRED_WORK_INITIALIZER()
      * DECLARE_DEFERRED_WORK()
      * INIT_DELAYED_WORK_DEFERRABLE()
      
      Rename them to
      
      * __DEFERRABLE_WORK_INITIALIZER()
      * DECLARE_DEFERRABLE_WORK()
      * INIT_DEFERRABLE_WORK()
      
      This patch doesn't cause any functional changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      203b42f7
    • workqueue: cosmetic whitespace updates for macro definitions · ee64e7f6
      Tejun Heo authored
      Consistently use the last tab position for '\' line continuation in
      complex macro definitions.  This is to help the following patches.
      
      This patch is cosmetic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      ee64e7f6
  2. 21 Aug 2012, 10 commits
    • timer: Implement TIMER_IRQSAFE · c5f66e99
      Tejun Heo authored
      Timer internals are protected with irq-safe locks but timer execution
      isn't, so a timer being dequeued for execution and its execution
      aren't atomic against IRQs.  This makes it impossible to wait for its
      completion from IRQ handlers and difficult to shoot down a timer from
      IRQ handlers.
      
      This caused problems for the delayed_work interface.  Because
      there's no way to reliably shoot down delayed_work->timer from IRQ
      handlers, __cancel_delayed_work() can't share the logic to steal the
      target delayed_work with cancel_delayed_work_sync(), and can only
      steal delayed_works which are still queued on the timer.  Similarly,
      the planned mod_delayed_work() can't be used from IRQ handlers.
      
      This patch adds a new timer flag, TIMER_IRQSAFE, which makes the
      timer execute without enabling IRQs after dequeueing, such that its
      dequeueing and execution are atomic against IRQ handlers.
      
      This makes it safe to wait for the timer's completion from IRQ
      handlers, for example, using del_timer_sync().  It can never be
      executing on the local CPU and if executing on other CPUs it won't be
      interrupted until done.
      
      This will enable simplifying delayed_work cancel/mod interface.
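      
      A hedged usage sketch - the del_timer_sync() from hardirq context is
      only legal because the timer is marked irqsafe (poll_timer and
      poll_fn are hypothetical names):
      
      	static struct timer_list poll_timer;
      
      	__setup_timer(&poll_timer, poll_fn, 0, TIMER_IRQSAFE);
      	mod_timer(&poll_timer, jiffies + HZ);
      
      	/* ... later, possibly from an IRQ handler ... */
      	del_timer_sync(&poll_timer);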
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: torvalds@linux-foundation.org
      Cc: peterz@infradead.org
      Link: http://lkml.kernel.org/r/1344449428-24962-5-git-send-email-tj@kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      c5f66e99
    • timer: Clean up timer initializers · fc683995
      Tejun Heo authored
      Over time, timer initializers became messy with unnecessarily
      duplicated code that is inconsistently spread across timer.h and
      timer.c.
      
      This patch cleans up timer initializers.
      
      * timer.c::__init_timer() is renamed to do_init_timer().
      
      * __TIMER_INITIALIZER() added.  It takes @flags and all initializers
        are wrappers around it.
      
      * init_timer[_on_stack]_key() now take @flags.
      
      * __init_timer[_on_stack]() added.  They take @flags and all init
        macros are wrappers around them.
      
      * __setup_timer[_on_stack]() added.  It uses __init_timer() and takes
        @flags.  All setup macros are wrappers around the two.
      
      Note that this patch doesn't add missing init/setup combinations -
      e.g. init_timer_deferrable_on_stack().  Adding missing ones is
      trivial.
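      
      For example, the static initializers end up layered like this
      (sketch, not the full macro bodies):
      
      	#define TIMER_INITIALIZER(_function, _expires, _data)		\
      		__TIMER_INITIALIZER((_function), (_expires), (_data), 0)
      
      	#define TIMER_DEFERRED_INITIALIZER(_function, _expires, _data)	\
      		__TIMER_INITIALIZER((_function), (_expires), (_data),	\
      				    TIMER_DEFERRABLE)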
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: torvalds@linux-foundation.org
      Cc: peterz@infradead.org
      Link: http://lkml.kernel.org/r/1344449428-24962-4-git-send-email-tj@kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      fc683995
    • timer: Relocate declarations of init_timer_on_stack_key() · 5a9af38d
      Tejun Heo authored
      init_timer_on_stack_key() is used by init macro definitions.  Move
      init_timer_on_stack_key() and destroy_timer_on_stack() declarations
      above init macro defs.  This will make the next init cleanup patch
      easier to read.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: torvalds@linux-foundation.org
      Cc: peterz@infradead.org
      Link: http://lkml.kernel.org/r/1344449428-24962-3-git-send-email-tj@kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      5a9af38d
    • timer: Generalize timer->base flags handling · e52b1db3
      Tejun Heo authored
      To prepare for addition of another flag, generalize timer->base flags
      handling.
      
      * Rename from TBASE_*_FLAG to TIMER_* and make them LU constants.
      
      * Define and use TIMER_FLAG_MASK for flags masking so that multiple
        flags can be handled correctly.
      
      * Don't dereference timer->base directly even if
        !tbase_get_deferrable().  Both such places are already passed
        @base, so use it instead.
      
      * Make sure tvec_base's alignment is large enough for timer->base
        flags using BUILD_BUG_ON().
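      
      A rough sketch of the resulting flag handling (simplified from the
      patch):
      
      	enum {
      		TIMER_DEFERRABLE = 0x1LU,
      		TIMER_FLAG_MASK  = 0x1LU,	/* all flag bits */
      	};
      
      	/* timer->base with the flag bits stripped */
      	static inline struct tvec_base *tbase_get_base(struct tvec_base *base)
      	{
      		return (struct tvec_base *)((unsigned long)base &
      					    ~TIMER_FLAG_MASK);
      	}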
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: torvalds@linux-foundation.org
      Cc: peterz@infradead.org
      Link: http://lkml.kernel.org/r/1344449428-24962-2-git-send-email-tj@kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      e52b1db3
    • workqueue: deprecate system_nrt[_freezable]_wq · 3b07e9ca
      Tejun Heo authored
      system_nrt[_freezable]_wq are now spurious.  Mark them deprecated and
      convert all users to system[_freezable]_wq.
      
      If you're cc'd and wondering what's going on: Now all workqueues are
      non-reentrant, so there's no reason to use system_nrt[_freezable]_wq.
      Please use system[_freezable]_wq instead.
      
      This patch doesn't make any functional difference.
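      
      The conversion itself is mechanical, e.g.:
      
      	-	queue_work(system_nrt_wq, &work);
      	+	queue_work(system_wq, &work);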
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: David Howells <dhowells@redhat.com>
      3b07e9ca
    • workqueue: deprecate flush[_delayed]_work_sync() · 43829731
      Tejun Heo authored
      flush[_delayed]_work_sync() are now spurious.  Mark them deprecated
      and convert all users to flush[_delayed]_work().
      
      If you're cc'd and wondering what's going on: Now all workqueues are
      non-reentrant and the regular flushes guarantee that the work item is
      not pending or running on any CPU on return, so there's no reason to
      use the sync flushes at all and they're going away.
      
      This patch doesn't make any functional difference.
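      
      As with the system_nrt conversion, each caller changes mechanically:
      
      	-	flush_work_sync(&work);
      	+	flush_work(&work);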
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Mattia Dongili <malattia@linux.it>
      Cc: Kent Yoder <key@linux.vnet.ibm.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Karsten Keil <isdn@linux-pingi.de>
      Cc: Bryan Wu <bryan.wu@canonical.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Mauro Carvalho Chehab <mchehab@infradead.org>
      Cc: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: linux-wireless@vger.kernel.org
      Cc: Anton Vorontsov <cbou@mail.ru>
      Cc: Sangbeom Kim <sbkim73@samsung.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Eric Van Hensbergen <ericvh@gmail.com>
      Cc: Takashi Iwai <tiwai@suse.de>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Petr Vandrovec <petr@vandrovec.name>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Avi Kivity <avi@redhat.com> 
      43829731
    • workqueue: gut system_nrt[_freezable]_wq() · ae930e0f
      Tejun Heo authored
      Now that all workqueues are non-reentrant, system[_freezable]_wq() are
      equivalent to system_nrt[_freezable]_wq().  Replace the latter with
      wrappers around system[_freezable]_wq().  The wrapping goes through
      inline functions so that __deprecated can be added easily.
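      
      A sketch of the wrapper shape (mirroring the patch):
      
      	static inline struct workqueue_struct * __deprecated
      	__system_nrt_wq(void)
      	{
      		return system_wq;
      	}
      	#define system_nrt_wq	__system_nrt_wq()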
      Signed-off-by: Tejun Heo <tj@kernel.org>
      ae930e0f
    • workqueue: gut flush[_delayed]_work_sync() · 606a5020
      Tejun Heo authored
      Now that all workqueues are non-reentrant, flush[_delayed]_work_sync()
      are equivalent to flush[_delayed]_work().  Drop the separate
      implementation and make them thin wrappers around
      flush[_delayed]_work().
      
      * start_flush_work() no longer takes @wait_executing as its only
        remaining user - flush_work() - always sets it to %true.
      
      * __cancel_work_timer() uses flush_work() instead of wait_on_work().
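      
      After the gutting, the sync variants are roughly:
      
      	bool flush_work_sync(struct work_struct *work)
      	{
      		return flush_work(work);
      	}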
      Signed-off-by: Tejun Heo <tj@kernel.org>
      606a5020
    • workqueue: make all workqueues non-reentrant · dbf2576e
      Tejun Heo authored
      By default, each per-cpu part of a bound workqueue operates separately
      and a work item may be executing concurrently on different CPUs.  The
      behavior avoids some cross-cpu traffic but leads to subtle weirdness
      and not-so-subtle contortions in the API.
      
      * There's no sane use case for allowing a single work item to be
        executed concurrently on multiple CPUs.  People just get the
        behavior unintentionally and get surprised after learning about it.
        Most either explicitly synchronize or use non-reentrant/ordered
        workqueue but this is error-prone.
      
      * flush_work() can't wait for multiple instances of the same work item
        on different CPUs.  If a work item is executing on cpu0 and then
        queued on cpu1, flush_work() can only wait for the one on cpu1.
      
        Unfortunately, work items can easily cross CPU boundaries
        unintentionally when the queueing thread gets migrated.  This means
        that if multiple queuers compete, flush_work() can't even guarantee
        that the instance queued right before it is finished before
        returning.
      
      * flush_work_sync() was added to work around some of the deficiencies
        of flush_work().  In addition to the usual flushing, it ensures that
        all currently executing instances are finished before returning.
        This operation is expensive as it has to walk all CPUs and at the
        same time fails to address competing queuer case.
      
        Incorrectly using flush_work() when flush_work_sync() is necessary
        is an easy error to make and can lead to bugs which are difficult to
        reproduce.
      
      * Similar problems exist for flush_delayed_work[_sync]().
      
      Other than the cross-cpu access concern, there's no benefit in
      allowing parallel execution and it's plain silly to have this level of
      contortion for workqueue which is widely used from core code to
      extremely obscure drivers.
      
      This patch makes all workqueues non-reentrant.  If a work item is
      executing on a different CPU when queueing is requested, it is always
      queued to that CPU.  This guarantees that any given work item can be
      executing on one CPU at maximum and if a work item is queued and
      executing, both are on the same CPU.
      
      The only behavior change which may affect workqueue users negatively
      is that non-reentrancy overrides the affinity specified by
      queue_work_on().  On a reentrant workqueue, the affinity specified by
      queue_work_on() is always followed.  Now, if the work item is
      executing on one of the CPUs, the work item will be queued there
      regardless of the requested affinity.  I've reviewed all workqueue
      users which request explicit affinity, and, fortunately, none seems to
      be crazy enough to exploit parallel execution of the same work item.
      
      This adds an additional busy_hash lookup if the work item was
      previously queued on a different CPU.  This shouldn't be noticeable
      under any sane workload.  Work item queueing isn't a very
      high-frequency operation and they don't jump across CPUs all the time.
      In a micro benchmark that exaggerates this difference - measuring the
      time it takes for two work items to repeatedly jump between two CPUs
      a large number (10M) of times with the busy_hash table densely
      populated - the difference was around 3%.
      
      While the overhead is measurable, it is only visible in pathological
      cases and the difference isn't huge.  This change brings much needed
      sanity to workqueue and makes its behavior consistent with timer.  I
      think this is the right tradeoff to make.
      
      This enables significant simplification of workqueue API.
      Simplification patches will follow.
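      
      The heart of the change in __queue_work(), heavily simplified
      (locking omitted; assumes the work was last seen on another gcwq):
      
      	last_gcwq = get_work_gcwq(work);
      	worker = find_worker_executing_work(last_gcwq, work);
      	if (worker && worker->current_cwq->wq == wq)
      		gcwq = last_gcwq;	/* follow the executing CPU */
      	else
      		gcwq = get_gcwq(cpu);	/* honor the requested CPU */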
      Signed-off-by: Tejun Heo <tj@kernel.org>
      dbf2576e
    • workqueue: fix checkpatch issues · 044c782c
      Valentin Ilie authored
      Fixed some checkpatch warnings.
      
      tj: adapted to wq/for-3.7 and massaged pr_xxx() format strings a bit.
      Signed-off-by: Valentin Ilie <valentin.ilie@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      LKML-Reference: <1345326762-21747-1-git-send-email-valentin.ilie@gmail.com>
      044c782c
  3. 17 Aug 2012, 6 commits
    • workqueue: use system_highpri_wq for unbind_work · 7635d2fd
      Joonsoo Kim authored
      To speed up CPU-down processing, use system_highpri_wq.
      Since the scheduling priority of its workers is higher than that of
      system_wq, and it is not contended by other normal work items on the
      CPU, work on it is processed faster than on system_wq.
      
      tj: CPU up/downs care quite a bit about latency these days.  This
          shouldn't hurt anything and makes sense.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      7635d2fd
    • workqueue: use system_highpri_wq for highpri workers in rebind_workers() · e2b6a6d5
      Joonsoo Kim authored
      In rebind_workers(), we insert a rebind work item for each busy
      worker.  Currently we use only system_wq for this, which creates a
      possible error condition: a mismatch between cwq->pool and
      worker->pool.
      
      To prevent this, use system_highpri_wq for highpri workers so that
      the two match.  This patch implements that.
      
      tj: Rephrased comment a bit.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      e2b6a6d5
    • workqueue: introduce system_highpri_wq · 1aabe902
      Joonsoo Kim authored
      Commit 3270476a ('workqueue: reimplement
      WQ_HIGHPRI using a separate worker_pool') introduced a separate worker
      pool for HIGHPRI.  When we handle the busy workers of a gcwq, each can
      be either a normal or a highpri worker, but rebind_workers() doesn't
      consider this difference and uses just system_wq for highpri workers.
      That creates a mismatch between cwq->pool and worker->pool.
      
      This doesn't cause an error in the current implementation, but could
      in the future.  Introduce system_highpri_wq so that rebind_workers()
      can use the proper cwq for highpri workers.  The following patch fixes
      the issue properly.
      
      tj: Even apart from rebinding, having system_highpri_wq generally
          makes sense.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      1aabe902
    • workqueue: change value of lcpu in __queue_delayed_work_on() · e42986de
      Joonsoo Kim authored
      We assign a CPU id into the work struct's data field in
      __queue_delayed_work_on().  In the current implementation, when a work
      item first comes in, the id of the currently running CPU is assigned.
      If we call __queue_delayed_work_on() with CPU A while running on CPU
      B, the __queue_work() invoked from delayed_work_timer_fn() goes into
      the following sub-optimal path in the WQ_NON_REENTRANT case.
      
      	gcwq = get_gcwq(cpu);
      	if (wq->flags & WQ_NON_REENTRANT &&
      		(last_gcwq = get_work_gcwq(work)) && last_gcwq != gcwq) {
      
      Change lcpu to @cpu, and fall back to the local CPU only when lcpu is
      WORK_CPU_UNBOUND.  This is sufficient to avoid the sub-optimal path.
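      
      The fixed selection logic, roughly (simplified from the patch):
      
      	if (!(wq->flags & WQ_UNBOUND)) {
      		struct global_cwq *gcwq = get_work_gcwq(work);
      
      		/* prefer the last gcwq's CPU, then @cpu, then local */
      		lcpu = cpu;
      		if (gcwq)
      			lcpu = gcwq->cpu;
      		if (lcpu == WORK_CPU_UNBOUND)
      			lcpu = raw_smp_processor_id();
      	} else
      		lcpu = WORK_CPU_UNBOUND;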
      
      tj: Slightly rephrased the comment.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      e42986de
    • workqueue: correct req_cpu in trace_workqueue_queue_work() · b75cac93
      Joonsoo Kim authored
      The workqueue_queue_work tracepoint records the requested CPU.
      However, if !(@wq->flags & WQ_UNBOUND) and @cpu is WORK_CPU_UNBOUND,
      the requested CPU has already been rewritten to the local CPU by the
      time the tracepoint fires.  For @wq->flags & WQ_UNBOUND no such
      rewrite occurs, so it is reasonable to record the original request
      instead.
      
      Use a temporary local variable to store the requested CPU.
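      
      Sketched fix (simplified from __queue_work()):
      
      	unsigned int req_cpu = cpu;	/* remember the original request */
      
      	if (!(wq->flags & WQ_UNBOUND) && cpu == WORK_CPU_UNBOUND)
      		cpu = raw_smp_processor_id();	/* the rewrite */
      
      	trace_workqueue_queue_work(req_cpu, cwq, work);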
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      b75cac93
    • workqueue: use enum value to set array size of pools in gcwq · 330dad5b
      Joonsoo Kim authored
      Commit 3270476a ('workqueue: reimplement
      WQ_HIGHPRI using a separate worker_pool') introduced a separate
      worker_pool for HIGHPRI.  Although the NR_WORKER_POOLS enum value
      represents the number of pools, the definition of worker_pool in gcwq
      doesn't use it.  Using it makes the code more robust and prevents
      future mistakes, so change the code to use this enum value.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      330dad5b
  4. 14 Aug 2012, 3 commits
    • workqueue: add missing wmb() in clear_work_data() · 23657bb1
      Tejun Heo authored
      Any operation which clears PENDING should be preceded by a wmb to
      guarantee that the next PENDING owner sees all the changes made before
      PENDING release.
      
      There are only two places where PENDING is cleared -
      set_work_cpu_and_clear_pending() and clear_work_data().  The caller of
      the former already does smp_wmb() but the latter doesn't have any.
      
      Move the wmb above set_work_cpu_and_clear_pending() into it and add
      one to clear_work_data().
      
      There hasn't been any report related to this issue, and, given how
      clear_work_data() is used, it is extremely unlikely to have caused any
      actual problems on any architecture.
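      
      The fixed helper, roughly:
      
      	static void clear_work_data(struct work_struct *work)
      	{
      		smp_wmb();	/* see set_work_cpu_and_clear_pending() */
      		set_work_data(work, WORK_STRUCT_NO_CPU, 0);
      	}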
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      23657bb1
    • workqueue: fix CPU binding of flush_delayed_work[_sync]() · 1265057f
      Tejun Heo authored
      delayed_work encodes the workqueue to use and the last CPU in
      delayed_work->work.data while it's on timer.  The target CPU is
      implicitly recorded as the CPU the timer is queued on and
      delayed_work_timer_fn() queues delayed_work->work to the CPU it is
      running on.
      
      Unfortunately, this leaves flush_delayed_work[_sync]() no way to find
      out which CPU the delayed_work was queued for when they try to
      re-queue after killing the timer.  Currently, it chooses the local CPU
      flush is running on.  This can unexpectedly move a delayed_work queued
      on a specific CPU to another CPU and lead to subtle errors.
      
      There isn't much point in trying to save several bytes in struct
      delayed_work, which is already close to a hundred bytes on 64bit with
      all debug options turned off.  This patch adds delayed_work->cpu to
      remember the CPU it's queued for.
      
      Note that if the timer is migrated during CPU down, the work item
      could be queued to the downed global_cwq after this change.  As a
      detached global_cwq behaves like an unbound one, this doesn't change
      much for the delayed_work.
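      
      The struct grows one field (sketch):
      
      	struct delayed_work {
      		struct work_struct work;
      		struct timer_list timer;
      		int cpu;	/* target CPU, recorded at queue time */
      	};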
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      1265057f
    • workqueue: use mod_delayed_work() instead of cancel + queue · 41f63c53
      Tejun Heo authored
      Convert delayed_work users doing cancel_delayed_work() followed by
      queue_delayed_work() to mod_delayed_work().
      
      Most conversions are straightforward.  Ones worth mentioning are,
      
      * drivers/edac/edac_mc.c: edac_mc_workq_setup() converted to always
        use mod_delayed_work() and cancel loop in
        edac_mc_reset_delay_period() is dropped.
      
      * drivers/platform/x86/thinkpad_acpi.c: No need to remember whether
        watchdog is active or not.  @fan_watchdog_active and related code
        dropped.
      
      * drivers/power/charger-manager.c: Seemingly a lot of
        delayed_work_pending() abuse going on here.
        [delayed_]work_pending() are unsynchronized and racy when used like
        this.  I converted one instance in fullbatt_handler().  Please
        convert the rest so that it invokes workqueue APIs for the intended
        target state rather than trying to game work item pending state
        transitions.  e.g. if the timer should be modified - call
        mod_delayed_work(); if canceled - call cancel_delayed_work[_sync]().
      
      * drivers/thermal/thermal_sys.c: thermal_zone_device_set_polling()
        simplified.  Note that the round_jiffies() calls in this function
        are meaningless.  round_jiffies() works on absolute jiffies, not the
        relative delay used by delayed_work.
      
      v2: Tomi pointed out that __cancel_delayed_work() users can't be
          safely converted to mod_delayed_work().  They could be calling it
          from irq context and if that happens while delayed_work_timer_fn()
          is running, it could deadlock.  __cancel_delayed_work() users are
          dropped.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
      Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Acked-by: Anton Vorontsov <cbouatmailru@gmail.com>
      Acked-by: David Howells <dhowells@redhat.com>
      Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Doug Thompson <dougthompson@xmission.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Roland Dreier <roland@kernel.org>
      Cc: "John W. Linville" <linville@tuxdriver.com>
      Cc: Zhang Rui <rui.zhang@intel.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Johannes Berg <johannes@sipsolutions.net>
      41f63c53
  5. 04 Aug 2012, 13 commits
    • workqueue: implement mod_delayed_work[_on]() · 8376fe22
      Tejun Heo authored
      Workqueue was lacking a mechanism to modify the timeout of an already
      pending delayed_work.  delayed_work users have been working around
      this using several methods - using an explicit timer + work item,
      messing directly with delayed_work->timer, and canceling before
      re-queueing, all of which are error-prone and/or ugly.
      
      This patch implements mod_delayed_work[_on]() which behaves similarly
      to mod_timer() - if the delayed_work is idle, it's queued with the
      given delay; otherwise, its timeout is modified to the new value.
      Zero @delay guarantees immediate execution.
      
      v2: Updated to reflect try_to_grab_pending() changes.  Now safe to be
          called from bh context.
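      
      A sketch of the new function, following the patch closely:
      
      	bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq,
      				 struct delayed_work *dwork,
      				 unsigned long delay)
      	{
      		unsigned long flags;
      		int ret;
      
      		do {
      			ret = try_to_grab_pending(&dwork->work, true, &flags);
      		} while (unlikely(ret == -EAGAIN));
      
      		if (likely(ret >= 0)) {
      			__queue_delayed_work(cpu, wq, dwork, delay);
      			local_irq_restore(flags);
      		}
      
      		/* -ENOENT from try_to_grab_pending() falls through */
      		return ret;
      	}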
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      8376fe22
    • workqueue: mark a work item being canceled as such · bbb68dfa
      Tejun Heo authored
      There can be two reasons try_to_grab_pending() can fail with -EAGAIN.
      One is when someone else is queueing or dequeueing the work item.  With
      the previous patches, it is guaranteed that PENDING and queued state
      will soon agree making it safe to busy-retry in this case.
      
      The other is if multiple __cancel_work_timer() invocations are racing
      one another.  __cancel_work_timer() grabs PENDING and then waits for
      running instances of the target work item on all CPUs while holding
      PENDING and !queued.  try_to_grab_pending() invoked from another task
      will keep returning -EAGAIN while the current owner is waiting.
      
      Not distinguishing the two cases is okay because __cancel_work_timer()
      is the only user of try_to_grab_pending() and it invokes
      wait_on_work() whenever grabbing fails.  For the first case, busy
      looping should be fine but wait_on_work() doesn't cause any critical
      problem.  For the latter case, the new contender usually waits for the
      same condition as the current owner, so no unnecessarily extended
      busy-looping happens.  Combined, these make __cancel_work_timer()
      technically correct even without irq protection while grabbing PENDING
      or distinguishing the two different cases.
      
      While the current code is technically correct, not distinguishing the
      two cases makes it difficult to use try_to_grab_pending() for other
      purposes than canceling because it's impossible to tell whether it's
      safe to busy-retry grabbing.
      
      This patch adds a mechanism to mark a work item being canceled.
      try_to_grab_pending() now disables irq on success and returns -EAGAIN
      to indicate that grabbing failed but PENDING and queued states are
      going to agree soon, so it's safe to busy-loop.  It returns -ENOENT if
      the work item is being canceled and may stay PENDING && !queued for an
      arbitrary amount of time.
      
      __cancel_work_timer() is modified to mark the work canceling with
      WORK_OFFQ_CANCELING after grabbing PENDING, thus making
      try_to_grab_pending() fail with -ENOENT instead of -EAGAIN.  Also, it
      invokes wait_on_work() iff grabbing failed with -ENOENT.  This isn't
      necessary for correctness but makes it consistent with other future
      users of try_to_grab_pending().
      
      v2: try_to_grab_pending() was testing preempt_count() to ensure that
          the caller has disabled preemption.  This triggers spuriously if
          !CONFIG_PREEMPT_COUNT.  Use preemptible() instead.  Reported by
          Fengguang Wu.
      
      v3: Updated so that try_to_grab_pending() disables irq on success
          rather than requiring preemption disabled by the caller.  This
      makes busy-looping easier and will allow try_to_grab_pending() to
          be used from bh/irq contexts.
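      
      A sketch of the resulting __cancel_work_timer() retry loop:
      
      	do {
      		ret = try_to_grab_pending(work, is_dwork, &flags);
      		/*
      		 * -ENOENT means someone else is canceling; wait for the
      		 * same event instead of spinning on PENDING && !queued.
      		 */
      		if (unlikely(ret == -ENOENT))
      			wait_on_work(work);
      	} while (unlikely(ret < 0));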
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      bbb68dfa
    • workqueue: reorganize try_to_grab_pending() and __cancel_timer_work() · 36e227d2
      Tejun Heo authored
      * Use bool @is_dwork instead of @timer and let try_to_grab_pending()
        use to_delayed_work() to determine the delayed_work address.
      
      * Move timer handling from __cancel_work_timer() to
        try_to_grab_pending().
      
      * Make try_to_grab_pending() use -EAGAIN instead of -1 for
        busy-looping and drop the ret local variable.
      
      * Add proper function comment to try_to_grab_pending().
      
      This makes the code a bit easier to understand and will ease further
      changes.  This patch doesn't make any functional change.
      
      v2: Use @is_dwork instead of @timer.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      36e227d2
    • workqueue: factor out __queue_delayed_work() from queue_delayed_work_on() · 7beb2edf
      Tejun Heo authored
      This is to prepare for mod_delayed_work[_on]() and doesn't cause any
      functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      7beb2edf
    • workqueue: introduce WORK_OFFQ_FLAG_* · b5490077
      Tejun Heo authored
      Low WORK_STRUCT_FLAG_BITS bits of work_struct->data contain
      WORK_STRUCT_FLAG_* and flush color.  If the work item is queued, the
      rest point to the cpu_workqueue with WORK_STRUCT_CWQ set; otherwise,
      WORK_STRUCT_CWQ is clear and the bits contain the last CPU number -
      either a real CPU number or one of WORK_CPU_*.
      
      Scheduled addition of mod_delayed_work[_on]() requires an additional
      flag, which is used only while a work item is off queue.  There are
      more than enough bits to represent off-queue CPU number on both 32 and
      64bits.  This patch introduces WORK_OFFQ_FLAG_* which occupy the lower
      part of the @work->data high bits while off queue.  This patch doesn't
      define any actual OFFQ flag yet.
      
      Off-queue CPU number is now shifted by WORK_OFFQ_CPU_SHIFT, which adds
      the number of bits used by OFFQ flags to WORK_STRUCT_FLAG_SHIFT, to
      make room for OFFQ flags.
      
      To avoid shift width warning with large WORK_OFFQ_FLAG_BITS, ulong
      cast is added to WORK_STRUCT_NO_CPU and, just in case, BUILD_BUG_ON()
      to check that there are enough bits to accommodate the off-queue CPU number
      is added.
      
      This patch doesn't make any functional difference.
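      
      The new constants, roughly:
      
      	enum {
      		WORK_OFFQ_FLAG_BASE	= WORK_STRUCT_FLAG_BITS,
      		WORK_OFFQ_FLAG_BITS	= 0,	/* no OFFQ flag yet */
      		WORK_OFFQ_CPU_SHIFT	= WORK_OFFQ_FLAG_BASE +
      					  WORK_OFFQ_FLAG_BITS,
      	};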
      Signed-off-by: Tejun Heo <tj@kernel.org>
      b5490077
    • workqueue: move try_to_grab_pending() upwards · bf4ede01
      Tejun Heo authored
      try_to_grab_pending() will be used by to-be-implemented
      mod_delayed_work[_on]().  Move try_to_grab_pending() and related
      functions above queueing functions.
      
      This patch only moves functions around.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      bf4ede01
    • workqueue: fix zero @delay handling of queue_delayed_work_on() · 715f1300
      Tejun Heo authored
      If @delay is zero and the delayed_work is idle, queue_delayed_work()
      queues it for immediate execution; however, queue_delayed_work_on()
      lacks this logic and always goes through timer regardless of @delay.
      
      This patch moves 0 @delay handling logic from queue_delayed_work() to
      queue_delayed_work_on() so that both functions behave the same.
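      
      The moved short-circuit, inside queue_delayed_work_on() (sketch):
      
      	if (!delay)
      		return queue_work_on(cpu, wq, &dwork->work);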
      Signed-off-by: Tejun Heo <tj@kernel.org>
      715f1300
    • workqueue: unify local CPU queueing handling · 57469821
      Tejun Heo authored
      Queueing functions have been using different methods to determine the
      local CPU.
      
      * queue_work() superfluously uses get/put_cpu() to acquire and hold the
        local CPU across queue_work_on().
      
      * delayed_work_timer_fn() uses smp_processor_id().
      
      * queue_delayed_work() calls queue_delayed_work_on() with -1 @cpu
        which is interpreted as the local CPU.
      
      * flush_delayed_work[_sync]() were using raw_smp_processor_id().
      
      * __queue_work() interprets %WORK_CPU_UNBOUND as local CPU if the
      target workqueue is a bound one, but nobody uses this.
      
      This patch converts all functions to uniformly use %WORK_CPU_UNBOUND
      to indicate local CPU and use the local binding feature of
      __queue_work().  unlikely() is dropped from %WORK_CPU_UNBOUND handling
      in __queue_work().
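      
      For example, queue_work() now simply forwards to the _on() variant
      (sketch matching the post-series code):
      
      	bool queue_work(struct workqueue_struct *wq, struct work_struct *work)
      	{
      		return queue_work_on(WORK_CPU_UNBOUND, wq, work);
      	}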
      Signed-off-by: Tejun Heo <tj@kernel.org>
      57469821
    • workqueue: set delayed_work->timer function on initialization · d8e794df
      Tejun Heo authored
      delayed_work->timer.function is currently initialized during
      queue_delayed_work_on().  Export delayed_work_timer_fn() and set
      delayed_work timer function during delayed_work initialization
      together with other fields.
      
      This ensures the timer function is always valid on an initialized
      delayed_work.  This is to help mod_delayed_work() implementation.
      
      To detect delayed_work users which diddle with the internal timer,
      trigger WARN if timer function doesn't match on queue.
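      
      The sanity check added to the queueing path, roughly:
      
      	WARN_ON_ONCE(timer->function != delayed_work_timer_fn ||
      		     timer->data != (unsigned long)dwork);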
      Signed-off-by: Tejun Heo <tj@kernel.org>
      d8e794df
    • workqueue: disable irq while manipulating PENDING · 8930caba
      Tejun Heo authored
      Queueing operations use WORK_STRUCT_PENDING_BIT to synchronize access
      to the target work item.  They first try to claim the bit and proceed
      with queueing only after that succeeds; there's a window between
      PENDING being set and the actual queueing where the task can be
      interrupted or preempted.
      
      There's also a similar window in process_one_work() when clearing
      PENDING.  A work item is dequeued, gcwq->lock is released and then
      PENDING is cleared and the worker might get interrupted or preempted
      between releasing gcwq->lock and clearing PENDING.
      
      cancel[_delayed]_work_sync() tries to claim or steal PENDING.  The
      function assumes that a work item with PENDING is either queued or in
      the process of being [de]queued.  In the latter case, it busy-loops
      until either the work item loses PENDING or is queued.  If canceling
      coincides with the above described interrupts or preemptions, the
      canceling task will busy-loop while the queueing or executing task is
      preempted.
      
      This patch keeps irq disabled across claiming PENDING and actual
      queueing and moves PENDING clearing in process_one_work() inside
      gcwq->lock so that busy looping from PENDING && !queued doesn't wait
      for interrupted/preempted tasks.  Note that, in process_one_work(),
      setting last CPU and clearing PENDING got merged into single
      operation.
      
      This removes possible long busy-loops and will allow using
      try_to_grab_pending() from bh and irq contexts.
      
      v2: __queue_work() was testing preempt_count() to ensure that the
          caller has disabled preemption.  This triggers spuriously if
          !CONFIG_PREEMPT_COUNT.  Use preemptible() instead.  Reported by
          Fengguang Wu.
      
      v3: Disable irq instead of preemption.  IRQ will be disabled while
          grabbing gcwq->lock later anyway and this allows using
          try_to_grab_pending() from bh and irq contexts.
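      
      A sketch of the resulting queueing path:
      
      	bool queue_work_on(int cpu, struct workqueue_struct *wq,
      			   struct work_struct *work)
      	{
      		bool ret = false;
      		unsigned long flags;
      
      		/* IRQs stay off from claiming PENDING to actual queueing */
      		local_irq_save(flags);
      		if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT,
      				      work_data_bits(work))) {
      			__queue_work(cpu, wq, work);
      			ret = true;
      		}
      		local_irq_restore(flags);
      		return ret;
      	}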
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      8930caba
    • workqueue: add missing smp_wmb() in process_one_work() · 959d1af8
      Tejun Heo authored
      WORK_STRUCT_PENDING is used to claim ownership of a work item and
      process_one_work() releases it before starting execution.  When
      someone else grabs PENDING, all pre-release updates to the work item
      should be visible and all updates made by the new owner should happen
      afterwards.
      
      Grabbing PENDING uses test_and_set_bit() and thus has a full barrier;
      however, clearing doesn't have a matching wmb.  Given the preceding
      spin_unlock and use of clear_bit, I don't believe this can be a
      problem on an actual machine and there hasn't been any related
      report, but it is still theoretically possible for clear_pending to
      be reordered upwards and happen before the work->entry update.
      
      Add an explicit smp_wmb() before work_clear_pending().
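      
      The fix itself is two lines in process_one_work() (sketch):
      
      	smp_wmb();	/* paired with test_and_set_bit(PENDING) */
      	work_clear_pending(work);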
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: stable@vger.kernel.org
      959d1af8
    • workqueue: make queueing functions return bool · d4283e93
      Tejun Heo authored
      All queueing functions return 1 on success, 0 if the work item was
      already pending.  Update them to return bool instead.  This better
      signifies that they don't return 0 / -errno.
      
      This is cleanup and doesn't cause any functional difference.
      
      While at it, fix comment opening for schedule_work_on().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      d4283e93
    • workqueue: reorder queueing functions so that _on() variants are on top · 0a13c00e
      Tejun Heo authored
      Currently, queue/schedule[_delayed]_work_on() are located below their
      counterparts without the _on postfix even though the latter are
      usually implemented using the former.  Swap them.
      
      This is cleanup and doesn't cause any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      0a13c00e
  6. 03 Aug 2012, 1 commit