1. 20 Mar, 2013  1 commit
    • workqueue: directly restore CPU affinity of workers from CPU_ONLINE · a9ab775b
      Tejun Heo authored
      Rebinding workers of a per-cpu pool after a CPU comes online involves
      a lot of back-and-forth mostly because only the task itself could
      adjust CPU affinity if PF_THREAD_BOUND was set.
      
      As CPU_ONLINE itself couldn't adjust affinity, it had to somehow
      coerce the workers themselves to perform set_cpus_allowed_ptr().  Due
      to the various states a worker can be in, this led to three different
      paths through which a worker may be rebound.  worker->rebind_work is
      queued to busy workers.  Idle ones are signaled by unlinking
      worker->entry and then call idle_worker_rebind().  The manager isn't
      covered by either and
      implements its own mechanism.
      
      PF_THREAD_BOUND has been replaced with PF_NO_SETAFFINITY and CPU_ONLINE
      itself now can manipulate CPU affinity of workers.  This patch
      replaces the existing rebind mechanism with a direct one where
      CPU_ONLINE iterates over all workers using for_each_pool_worker(),
      restores CPU affinity, and clears WORKER_UNBOUND.
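
      As a minimal sketch of the affinity-restoration half of this new
      CPU_ONLINE-side path (the iterator's exact arguments, the locking that
      must be held, and the way the target cpumask is obtained here are
      illustrative assumptions, not the literal patch; the UNBOUND clearing
      is sketched after the subtleties below):

          /* sketch: invoked from the CPU_ONLINE callback for each per-cpu pool */
          static void rebind_workers(struct worker_pool *pool)
          {
                  struct worker *worker;

                  /*
                   * The hotplug path restores affinity directly; workers no
                   * longer have to call set_cpus_allowed_ptr() on themselves.
                   */
                  for_each_pool_worker(worker, pool)
                          WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
                                                            cpumask_of(pool->cpu)) < 0);
          }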
      
      There are a couple subtleties.  All bound idle workers should have
      their runqueues set to that of the bound CPU; however, if the target
      task isn't running, set_cpus_allowed_ptr() just updates the
      cpus_allowed mask deferring the actual migration to when the task
      wakes up.  This is worked around by waking up idle workers after
      restoring CPU affinity before any workers can become bound.
      
      Another subtlety stems from matching @pool->nr_running with the
      number of running unbound workers.  While DISASSOCIATED, all workers
      are unbound and nr_running is zero.  As workers become bound again,
      nr_running needs to be adjusted accordingly; however, there is no good
      way to tell whether a given worker is running without poking into
      scheduler internals.  Instead of clearing UNBOUND directly,
      rebind_workers() replaces UNBOUND with another new NOT_RUNNING flag -
      REBOUND, which will later be cleared by the workers themselves while
      preparing for the next round of work item execution.  The only change
      needed for the workers is clearing REBOUND along with PREP.
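
      A sketch of how both subtleties could be handled in the rebind loop,
      together with the worker-side clearing; the flag names follow the
      description above, while the surrounding locking and the use of
      ACCESS_ONCE() for the flag update are assumptions for illustration:

          for_each_pool_worker(worker, pool) {
                  unsigned int worker_flags = worker->flags;

                  /*
                   * Wake idle workers so the affinity restored above turns
                   * into an actual migration before they can become bound
                   * and start executing work items.
                   */
                  if (worker_flags & WORKER_IDLE)
                          wake_up_process(worker->task);

                  /*
                   * Swap UNBOUND for REBOUND instead of clearing it so that
                   * nr_running stays consistent; the worker drops REBOUND
                   * itself while preparing for the next work item.
                   */
                  worker_flags |= WORKER_REBOUND;
                  worker_flags &= ~WORKER_UNBOUND;
                  ACCESS_ONCE(worker->flags) = worker_flags;
          }

          /* worker side, e.g. in worker_thread() before running work items */
          worker_clr_flags(worker, WORKER_PREP | WORKER_REBOUND);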
      
      * This patch leaves for_each_busy_worker() without any user.  Removed.
      
      * idle_worker_rebind(), busy_worker_rebind_fn(), worker->rebind_work
        and rebind logic in manager_workers() removed.
      
      * worker_thread() now looks at WORKER_DIE instead of testing whether
        @worker->entry is empty to determine whether it needs to do
        something special as dying is the only special thing now.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
  2. 13 Mar, 2013  1 commit
  3. 05 Mar, 2013  1 commit
    • workqueue: better define synchronization rule around rescuer->pool updates · b3104104
      Lai Jiangshan authored
      Rescuers visit different worker_pools to process work items from pools
      under pressure.  Currently, rescuer->pool is updated outside any
      locking and when an outsider looks at a rescuer, there's no way to
      tell when and whether rescuer->pool is going to change.  While this
      doesn't currently cause any problem, it is nasty.
      
      With recent worker_maybe_bind_and_lock() changes, we can move
      rescuer->pool updates inside pool locks such that if rescuer->pool
      equals a locked pool, it's guaranteed to stay that way until the pool
      is unlocked.
      
      Move the rescuer->pool update inside pool->lock.
      
      This patch doesn't introduce any visible behavior difference.
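
      A rough sketch of the intended ordering in rescuer_thread(); the exact
      surrounding control flow and the signature of
      worker_maybe_bind_and_lock() shown here are assumptions based on the
      description above:

          /* migrate to the pool's CPU if possible and return with pool->lock held */
          worker_maybe_bind_and_lock(pool);

          /*
           * pool->lock is held, so anyone who sees rescuer->pool == pool
           * under the same lock knows it cannot change until the unlock.
           */
          rescuer->pool = pool;

          /* ... process the work items queued through the mayday path ... */

          rescuer->pool = NULL;
          spin_unlock_irq(&pool->lock);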
      
      tj: Updated the description.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  4. 14 Feb, 2013  1 commit
    • workqueue: rename cpu_workqueue to pool_workqueue · 112202d9
      Tejun Heo authored
      workqueue has moved away from global_cwqs to worker_pools and with the
      scheduled custom worker pools, workqueues will be associated with
      pools which don't have anything to do with CPUs.  The workqueue code
      went through a significant amount of change recently and mass renaming
      isn't likely to hurt much additionally.  Let's replace 'cpu' with
      'pool' so that it reflects the current design.
      
      * s/struct cpu_workqueue_struct/struct pool_workqueue/
      * s/cpu_wq/pool_wq/
      * s/cwq/pwq/
      
      This patch is purely cosmetic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
  5. 25 Jan, 2013  1 commit
    • workqueue: remove global_cwq · a60dc39c
      Tejun Heo authored
      global_cwq is now nothing but a container for per-cpu standard
      worker_pools.  Declare the worker pools directly as
      cpu/unbound_std_worker_pools[] and remove global_cwq.
      
      * ____cacheline_aligned_in_smp moved from global_cwq to worker_pool.
        This probably would have made sense even before this change as we
        want each pool to be aligned.
      
      * get_gcwq() is replaced with std_worker_pools() which returns the
        pointer to the standard pool array for a given CPU.
      
      * __alloc_workqueue_key() updated to use get_std_worker_pool() instead
        of open-coding pool determination.
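
      As a minimal sketch of what the resulting declarations and accessor
      might look like (the array names, NR_STD_WORKER_POOLS, and the
      per-cpu declaration style are assumptions based on the points above):

          /* v2: alignment lives on struct worker_pool itself */
          struct worker_pool {
                  spinlock_t              lock;   /* protects the pool */
                  /* ... other fields ... */
          } ____cacheline_aligned_in_smp;

          /* per-cpu and unbound standard pools replacing global_cwq */
          static DEFINE_PER_CPU(struct worker_pool [NR_STD_WORKER_POOLS],
                                cpu_std_worker_pools);
          static struct worker_pool unbound_std_worker_pools[NR_STD_WORKER_POOLS];

          /* replaces get_gcwq(): return the standard pool array for @cpu */
          static struct worker_pool *std_worker_pools(int cpu)
          {
                  if (cpu != WORK_CPU_UNBOUND)
                          return per_cpu(cpu_std_worker_pools, cpu);
                  else
                          return unbound_std_worker_pools;
          }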
      
      This is part of an effort to remove global_cwq and make worker_pool
      the top level abstraction, which in turn will help implementing worker
      pools with user-specified attributes.
      
      v2: Joonsoo pointed out that it'd be better to align struct worker_pool
          rather than the array so that every pool is aligned.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
  6. 19 Jan, 2013  3 commits