1. 05 Dec 2017, 1 commit
  2. 28 Nov 2017, 1 commit
  3. 22 Nov 2017, 1 commit
    • treewide: Remove TIMER_FUNC_TYPE and TIMER_DATA_TYPE casts · 841b86f3
      Authored by Kees Cook
      With all callbacks converted, and the timer callback prototype
      switched over, the TIMER_FUNC_TYPE cast is no longer needed,
      so remove it. Conversion was done with the following scripts:
      
          perl -pi -e 's|\(TIMER_FUNC_TYPE\)||g' \
              $(git grep TIMER_FUNC_TYPE | cut -d: -f1 | sort -u)
      
          perl -pi -e 's|\(TIMER_DATA_TYPE\)||g' \
              $(git grep TIMER_DATA_TYPE | cut -d: -f1 | sort -u)
      
      The now unused macros are also dropped from include/linux/timer.h.
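      For illustration, a converted call site has this shape (a minimal
      sketch with hypothetical names, not a line from the actual diff):

          /* before: the prototype mismatch was papered over with a cast */
          timer->function = (TIMER_FUNC_TYPE)my_callback;

          /* after: my_callback() now takes a struct timer_list *, so it
           * matches timer_list::function directly and needs no cast */
          timer->function = my_callback;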
      Signed-off-by: Kees Cook <keescook@chromium.org>
  4. 08 Nov 2017, 1 commit
  5. 06 Nov 2017, 1 commit
  6. 03 Nov 2017, 1 commit
  7. 25 Oct 2017, 2 commits
    • workqueue: Remove now redundant lock acquisitions wrt. workqueue flushes · fd1a5b04
      Authored by Byungchul Park
      The workqueue code added manual lock acquisition annotations to catch
      deadlocks.
      
      After lockdep crossrelease was introduced, some of those became redundant,
      since wait_for_completion() already does the acquisition and tracking.
      
      Remove the duplicate annotations.
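      The removed pattern looks roughly like this (a sketch of the idea,
      not the exact diff; the lockdep_map names are illustrative):

          /* manual annotation, now redundant: */
          lock_map_acquire(&wq->lockdep_map);
          lock_map_release(&wq->lockdep_map);
          /* ... because crossrelease already tracks this wait: */
          wait_for_completion(&this_flusher.done);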
      Signed-off-by: Byungchul Park <byungchul.park@lge.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: amir73il@gmail.com
      Cc: axboe@kernel.dk
      Cc: darrick.wong@oracle.com
      Cc: david@fromorbit.com
      Cc: hch@infradead.org
      Cc: idryomov@gmail.com
      Cc: johan@kernel.org
      Cc: johannes.berg@intel.com
      Cc: kernel-team@lge.com
      Cc: linux-block@vger.kernel.org
      Cc: linux-fsdevel@vger.kernel.org
      Cc: linux-mm@kvack.org
      Cc: linux-xfs@vger.kernel.org
      Cc: oleg@redhat.com
      Cc: tj@kernel.org
      Link: http://lkml.kernel.org/r/1508921765-15396-9-git-send-email-byungchul.park@lge.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/atomics, workqueue: Convert ACCESS_ONCE() to READ_ONCE()/WRITE_ONCE() · c95491ed
      Authored by Mark Rutland
      For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
      preference to ACCESS_ONCE(), and new code is expected to use one of the
      former. So far, there's been no reason to change most existing uses of
      ACCESS_ONCE(), as these aren't currently harmful.
      
      However, for some features it is necessary to instrument reads and
      writes separately, which is not possible with ACCESS_ONCE(). This
      distinction is critical to correct operation.
      
      It's possible to transform the bulk of kernel code using the Coccinelle
      script below. However, this doesn't handle comments, leaving references
      to ACCESS_ONCE() instances which have been removed. As a preparatory
      step, this patch converts the workqueue code and comments to use
      {READ,WRITE}_ONCE() consistently.
      
      ----
      virtual patch
      
      @ depends on patch @
      expression E1, E2;
      @@
      
      - ACCESS_ONCE(E1) = E2
      + WRITE_ONCE(E1, E2)
      
      @ depends on patch @
      expression E;
      @@
      
      - ACCESS_ONCE(E)
      + READ_ONCE(E)
      ----
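      For illustration, the transformation on a hypothetical shared field
      (names are not from the workqueue code) looks like this:

          /* before: one macro covers both directions */
          val = ACCESS_ONCE(shared->flags);
          ACCESS_ONCE(shared->flags) = val | SOME_FLAG;

          /* after: reads and writes are distinct, so they can be
           * instrumented separately */
          val = READ_ONCE(shared->flags);
          WRITE_ONCE(shared->flags, val | SOME_FLAG);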
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: davem@davemloft.net
      Cc: linux-arch@vger.kernel.org
      Cc: mpe@ellerman.id.au
      Cc: shuah@kernel.org
      Cc: snitzer@redhat.com
      Cc: thor.thayer@linux.intel.com
      Cc: viro@zeniv.linux.org.uk
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/1508792849-3115-12-git-send-email-paulmck@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  8. 22 Oct 2017, 1 commit
  9. 18 Oct 2017, 1 commit
  10. 10 Oct 2017, 1 commit
    • workqueue: replace pool->manager_arb mutex with a flag · 692b4825
      Authored by Tejun Heo
      Josef reported a HARDIRQ-safe -> HARDIRQ-unsafe lock order detected by
      lockdep:
      
       [ 1270.472259] WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
       [ 1270.472783] 4.14.0-rc1-xfstests-12888-g76833e8 #110 Not tainted
       [ 1270.473240] -----------------------------------------------------
       [ 1270.473710] kworker/u5:2/5157 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
       [ 1270.474239]  (&(&lock->wait_lock)->rlock){+.+.}, at: [<ffffffff8da253d2>] __mutex_unlock_slowpath+0xa2/0x280
       [ 1270.474994]
       [ 1270.474994] and this task is already holding:
       [ 1270.475440]  (&pool->lock/1){-.-.}, at: [<ffffffff8d2992f6>] worker_thread+0x366/0x3c0
       [ 1270.476046] which would create a new lock dependency:
       [ 1270.476436]  (&pool->lock/1){-.-.} -> (&(&lock->wait_lock)->rlock){+.+.}
       [ 1270.476949]
       [ 1270.476949] but this new dependency connects a HARDIRQ-irq-safe lock:
       [ 1270.477553]  (&pool->lock/1){-.-.}
       ...
       [ 1270.488900] to a HARDIRQ-irq-unsafe lock:
       [ 1270.489327]  (&(&lock->wait_lock)->rlock){+.+.}
       ...
       [ 1270.494735]  Possible interrupt unsafe locking scenario:
       [ 1270.494735]
       [ 1270.495250]        CPU0                    CPU1
       [ 1270.495600]        ----                    ----
       [ 1270.495947]   lock(&(&lock->wait_lock)->rlock);
       [ 1270.496295]                                local_irq_disable();
       [ 1270.496753]                                lock(&pool->lock/1);
       [ 1270.497205]                                lock(&(&lock->wait_lock)->rlock);
       [ 1270.497744]   <Interrupt>
       [ 1270.497948]     lock(&pool->lock/1);
      
      This will cause an irq inversion deadlock if the above lock scenario
      happens.
      
      The root cause of this safe -> unsafe lock order is the
      mutex_unlock(pool->manager_arb) in manage_workers() with pool->lock
      held.
      
      Unlocking a mutex while holding an irq spinlock was never safe, and
      this problem has been around forever, but it never got noticed
      because the mutex is usually only trylocked while holding the
      irqlock, making actual failures very unlikely, and the lockdep
      annotation missed the condition until the recent b9c16a0e
      ("locking/mutex: Fix lockdep_assert_held() fail").
      
      Using a mutex for pool->manager_arb has always been a bit of a
      stretch.  It is primarily a mechanism to arbitrate managership
      between workers, which can easily be done with a pool flag.  The
      only reason it became a mutex is that the pool destruction path
      wants to exclude parallel managing operations.
      
      This patch replaces the mutex with a new pool flag,
      POOL_MANAGER_ACTIVE, and makes the destruction path wait for the
      current manager on a wait queue.
      
      v2: Drop unnecessary flag clearing before pool destruction as
          suggested by Boqun.
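      A sketch of the resulting arbitration, per the description above
      (details elided; not the verbatim diff):

          static bool manage_workers(struct worker *worker)
          {
                  struct worker_pool *pool = worker->pool;

                  /* flag replaces mutex_trylock(&pool->manager_arb) */
                  if (pool->flags & POOL_MANAGER_ACTIVE)
                          return false;

                  pool->flags |= POOL_MANAGER_ACTIVE;
                  pool->manager = worker;
                  maybe_create_worker(pool);
                  pool->manager = NULL;

                  pool->flags &= ~POOL_MANAGER_ACTIVE;
                  wake_up(&wq_manager_wait);
                  return true;
          }

          /* destruction path: wait out the current manager, then claim
           * managership so no new manager can start */
          wait_event_lock_irq(wq_manager_wait,
                              !(pool->flags & POOL_MANAGER_ACTIVE),
                              pool->lock);
          pool->flags |= POOL_MANAGER_ACTIVE;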
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: stable@vger.kernel.org
  11. 05 Oct 2017, 2 commits
    • workqueue: Convert callback to use from_timer() · 8c20feb6
      Authored by Kees Cook
      In preparation for unconditionally passing the struct timer_list pointer
      to all timer callbacks, switch workqueue to use from_timer() and pass the
      timer pointer explicitly.
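      The pattern, as a minimal sketch with hypothetical names:

          struct my_ctx {
                  struct timer_list timer;
                  int state;
          };

          static void my_timeout_fn(struct timer_list *t)
          {
                  /* recover the containing object from the timer pointer */
                  struct my_ctx *ctx = from_timer(ctx, t, timer);

                  ctx->state = 0;
          }

          /* binds the callback; no more (unsigned long) data argument */
          timer_setup(&ctx->timer, my_timeout_fn, 0);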
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mips@linux-mips.org
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Sebastian Reichel <sre@kernel.org>
      Cc: Kalle Valo <kvalo@qca.qualcomm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: linux1394-devel@lists.sourceforge.net
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: linux-s390@vger.kernel.org
      Cc: linux-wireless@vger.kernel.org
      Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
      Cc: Wim Van Sebroeck <wim@iguana.be>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Ursula Braun <ubraun@linux.vnet.ibm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: Harish Patil <harish.patil@cavium.com>
      Cc: Stephen Boyd <sboyd@codeaurora.org>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Manish Chopra <manish.chopra@cavium.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: linux-pm@vger.kernel.org
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Julian Wiedmann <jwi@linux.vnet.ibm.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Mark Gross <mark.gross@intel.com>
      Cc: linux-watchdog@vger.kernel.org
      Cc: linux-scsi@vger.kernel.org
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Stefan Richter <stefanr@s5r6.in-berlin.de>
      Cc: Michael Reed <mdr@sgi.com>
      Cc: netdev@vger.kernel.org
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
      Link: https://lkml.kernel.org/r/1507159627-127660-14-git-send-email-keescook@chromium.org
    • timer: Remove users of TIMER_DEFERRED_INITIALIZER · 5cd79d6a
      Authored by Kees Cook
      This removes the uses of TIMER_DEFERRED_INITIALIZER, calling
      timer_setup() at a suitable point before add_timer() or mod_timer()
      is called, and adjusts callbacks to use from_timer() as needed.
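      A sketch of the change, with hypothetical names (the initializer's
      argument list follows the pre-conversion API and is illustrative):

          /* before: callback bound by a build-time static initializer */
          static struct timer_list my_timer =
                  TIMER_DEFERRED_INITIALIZER(my_timeout_fn, 0);

          /* after: set up once at runtime, before the first add_timer()
           * or mod_timer() */
          timer_setup(&my_timer, my_timeout_fn, TIMER_DEFERRABLE);
          mod_timer(&my_timer, jiffies + HZ);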
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mips@linux-mips.org
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Sebastian Reichel <sre@kernel.org>
      Cc: Kalle Valo <kvalo@qca.qualcomm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: linux1394-devel@lists.sourceforge.net
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: linux-s390@vger.kernel.org
      Cc: linux-wireless@vger.kernel.org
      Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
      Cc: Wim Van Sebroeck <wim@iguana.be>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Ursula Braun <ubraun@linux.vnet.ibm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: Harish Patil <harish.patil@cavium.com>
      Cc: Stephen Boyd <sboyd@codeaurora.org>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Manish Chopra <manish.chopra@cavium.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: linux-pm@vger.kernel.org
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Julian Wiedmann <jwi@linux.vnet.ibm.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Mark Gross <mark.gross@intel.com>
      Cc: linux-watchdog@vger.kernel.org
      Cc: linux-scsi@vger.kernel.org
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Stefan Richter <stefanr@s5r6.in-berlin.de>
      Cc: Michael Reed <mdr@sgi.com>
      Cc: netdev@vger.kernel.org
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
      Link: https://lkml.kernel.org/r/1507159627-127660-7-git-send-email-keescook@chromium.org
  12. 29 Aug 2017, 1 commit
  13. 25 Aug 2017, 2 commits
    • locking/lockdep: Fix workqueue crossrelease annotation · e6f3faa7
      Authored by Peter Zijlstra
      The new completion/crossrelease annotations interact unfavourably with
      the extant flush_work()/flush_workqueue() annotations.
      
      The problem is that when a single work class does:
      
        wait_for_completion(&C)
      
      and
      
        complete(&C)
      
      in different executions, we'll build dependencies like:
      
        lock_map_acquire(W)
        complete_acquire(C)
      
      and
      
        lock_map_acquire(W)
        complete_release(C)
      
      which results in the dependency chain W->C->W, which lockdep thinks
      spells deadlock, even though there is no deadlock potential since
      work items run concurrently.
      
      One possibility would be to change the work 'lock' to recursive-read,
      but that would mean hitting a lockdep limitation on recursive locks.
      Also, unconditionally switching to recursive-read here would fail to
      detect the actual deadlock on single-threaded workqueues, which do
      have a problem with this.
      
      For now, forcefully disregard these locks for crossrelease.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: boqun.feng@gmail.com
      Cc: byungchul.park@lge.com
      Cc: david@fromorbit.com
      Cc: johannes@sipsolutions.net
      Cc: oleg@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • workqueue/lockdep: 'Fix' flush_work() annotation · a1d14934
      Authored by Peter Zijlstra
      The flush_work() annotation as introduced by commit:
      
        e159489b ("workqueue: relax lockdep annotation on flush_work()")
      
      hits on the lockdep problem with recursive read locks.
      
      The situation as described is:
      
      Work W1:                Work W2:        Task:
      
      ARR(Q)                  ARR(Q)		flush_workqueue(Q)
      A(W1)                   A(W2)             A(Q)
        flush_work(W2)			  R(Q)
          A(W2)
          R(W2)
          if (special)
            A(Q)
          else
            ARR(Q)
          R(Q)
      
      where: A - acquire, ARR - acquire-read-recursive, R - release.
      
      Under 'special' conditions we want to trigger a lock recursion
      deadlock, but otherwise allow the flush_work().  The allowing is
      done by using recursive read locks (ARR), but lockdep is broken for
      recursive read locks.
      
      However, there appears to be no need to acquire the lock if we're not
      'special', so if we remove the 'else' clause things become much
      simpler and no longer need the recursion thing at all.
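      The shape of the simplification in start_flush_work(), per the
      description above (a sketch; the 'special' condition shown is
      illustrative):

          /* before: unconditional acquisition, recursive-read otherwise */
          if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer)
                  lock_map_acquire(&pwq->wq->lockdep_map);
          else
                  lock_map_acquire_read(&pwq->wq->lockdep_map);
          lock_map_release(&pwq->wq->lockdep_map);

          /* after: acquire only in the deadlock-prone 'special' case */
          if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer) {
                  lock_map_acquire(&pwq->wq->lockdep_map);
                  lock_map_release(&pwq->wq->lockdep_map);
          }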
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: boqun.feng@gmail.com
      Cc: byungchul.park@lge.com
      Cc: david@fromorbit.com
      Cc: johannes@sipsolutions.net
      Cc: oleg@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  14. 23 Aug 2017, 1 commit
  15. 17 Aug 2017, 1 commit
    • locking/lockdep: Explicitly initialize wq_barrier::done::map · 52fa5bc5
      Authored by Boqun Feng
      With the new lockdep crossrelease feature, which checks completions usage,
      a false positive is reported in the workqueue code:
      
      > Worker A : acquired of wfc.work -> wait for cpu_hotplug_lock to be released
      > Task   B : acquired of cpu_hotplug_lock -> wait for lock#3 to be released
      > Task   C : acquired of lock#3 -> wait for completion of barr->done
      > (Task C is in lru_add_drain_all_cpuslocked())
      > Worker D : wait for wfc.work to be released -> will complete barr->done
      
      Such a deadlock cannot happen because Task C's barr->done and Worker D's
      barr->done cannot be the same instance.
      
      The reason for this false positive is that we initialize all
      wq_barrier::done instances at insert_wq_barrier() via
      init_completion(), which makes them all belong to the same lock
      class; therefore, impossible circles get reported.
      
      To fix this, explicitly initialize the lockdep map for
      wq_barrier::done in insert_wq_barrier(), so that the lock class key
      of wq_barrier::done is a subkey of the corresponding work_struct.
      As a result we won't build a dependency between a wq_barrier and an
      unrelated work item, and we can distinguish wq barriers by their
      associated work items, so the false positive above is avoided.
      
      Also define an empty lockdep_init_map_crosslock() for !CROSSRELEASE
      builds to keep the code simple and free of unnecessary #ifdefs.
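      A sketch of the initialization in insert_wq_barrier(), per the
      description above (argument details are reconstructed and may not
      match the final diff exactly):

          /* make wq_barrier::done's lock class a subkey of the work
           * being flushed, instead of one shared static class */
          lockdep_init_map_crosslock((struct lockdep_map *)&barr->done.map,
                                     "(complete)wq_barr::done",
                                     target->lockdep_map.key, 1);
          __init_completion(&barr->done);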
      Reported-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170817094622.12915-1-boqun.feng@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  16. 10 Aug 2017, 1 commit
    • locking/lockdep: Implement the 'crossrelease' feature · b09be676
      Authored by Byungchul Park
      Lockdep is a runtime locking correctness validator that detects and
      reports a deadlock or its possibility by checking dependencies between
      locks. It's useful since it does not report just an actual deadlock but
      also the possibility of a deadlock that has not actually happened yet.
      That enables problems to be fixed before they affect real systems.
      
      However, this facility is only applicable to typical locks, such as
      spinlocks and mutexes, which are normally released within the
      context in which they were acquired.  Synchronization primitives
      like page locks or completions, which may be released in any
      context, also create dependencies and can cause deadlocks.
      
      So lockdep should track these locks to do a better job. The 'crossrelease'
      implementation makes these primitives also be tracked.
      Signed-off-by: Byungchul Park <byungchul.park@lge.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: akpm@linux-foundation.org
      Cc: boqun.feng@gmail.com
      Cc: kernel-team@lge.com
      Cc: kirill@shutemov.name
      Cc: npiggin@gmail.com
      Cc: walken@google.com
      Cc: willy@infradead.org
      Link: http://lkml.kernel.org/r/1502089981-21272-6-git-send-email-byungchul.park@lge.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  17. 07 Aug 2017, 1 commit
  18. 28 Jul 2017, 1 commit
    • workqueue: Work around edge cases for calc of pool's cpumask · 1ad0f0a7
      Authored by Michael Bringmann
      There is an underlying assumption/trade-off in many layers of the Linux
      system that CPU <-> node mapping is static.  This is despite the presence
      of features like NUMA and 'hotplug' that support the dynamic addition/
      removal of fundamental system resources like CPUs and memory.  PowerPC
      systems, however, do provide extensive features for the dynamic change
      of resources available to a system.
      
      Currently, there is little or no synchronization protection around the
      updating of the CPU <-> node mapping, and the export/update of this
      information for other layers / modules.  In systems which can change
      this mapping during 'hotplug', like PowerPC, the information is changing
      underneath all layers that might reference it.
      
      This patch attempts to ensure that a valid, usable cpumask attribute
      is used by the workqueue infrastructure when setting up new resource
      pools.  It prevents a crash that has been observed when an 'empty'
      cpumask is passed along to the worker/task scheduling code.  It is
      intended as a temporary workaround until a more fundamental review and
      correction of the issue can be done.
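      A sketch of the workaround, assuming the names used by the
      workqueue NUMA code (illustrative, not the verbatim diff):

          cpumask_and(cpumask, attrs->cpumask,
                      wq_numa_possible_cpumask[node]);
          if (cpumask_empty(cpumask)) {
                  /* never hand an empty mask to the scheduler; the
                   * caller falls back to the default pwq's cpumask */
                  pr_warn_once("WARNING: workqueue cpumask: online intersect > possible intersect\n");
                  return false;
          }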
      
      [With additions to the patch provided by Tejun Heo <tj@kernel.org>]
      Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  19. 26 Jul 2017, 1 commit
    • workqueue: implicit ordered attribute should be overridable · 0a94efb5
      Authored by Tejun Heo
      5c0338c6 ("workqueue: restore WQ_UNBOUND/max_active==1 to be
      ordered") automatically enabled the ordered attribute for unbound
      workqueues w/ max_active == 1.  Because ordered workqueues reject
      max_active and some attribute changes, this implicit ordered mode
      broke cases where the user creates an unbound workqueue w/ max_active
      == 1 and later explicitly changes the related attributes.
      
      This patch distinguishes explicit and implicit ordered settings and
      lets attribute changes override the ordered mode when it was set
      implicitly.
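      A sketch of the distinction, per the patch description (the
      __WQ_ORDERED_EXPLICIT flag is from the patch; the usage shown is
      illustrative):

          /* explicit: alloc_ordered_workqueue() marks the intent, and
           * attribute changes keep rejecting it */
          #define alloc_ordered_workqueue(fmt, flags, args...)          \
                  alloc_workqueue(fmt, WQ_UNBOUND | __WQ_ORDERED |      \
                                  __WQ_ORDERED_EXPLICIT | (flags), 1, ##args)

          /* implicit: attribute changes may clear __WQ_ORDERED */
          if (!(wq->flags & __WQ_ORDERED_EXPLICIT))
                  wq->flags &= ~__WQ_ORDERED;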
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Fixes: 5c0338c6 ("workqueue: restore WQ_UNBOUND/max_active==1 to be ordered")
  20. 19 Jul 2017, 1 commit
    • workqueue: restore WQ_UNBOUND/max_active==1 to be ordered · 5c0338c6
      Authored by Tejun Heo
      The combination of WQ_UNBOUND and max_active == 1 used to imply
      ordered execution.  After 4c16bd32 ("workqueue: implement NUMA
      affinity for unbound workqueues"), this is no longer true due to
      per-node worker pools.
      
      While the right way to create an ordered workqueue is
      alloc_ordered_workqueue(), the documentation has been misleading for a
      long time and people do use WQ_UNBOUND and max_active == 1 for ordered
      workqueues which can lead to subtle bugs which are very difficult to
      trigger.
      
      It's unlikely that we'd see noticeable performance impact by enforcing
      ordering on WQ_UNBOUND / max_active == 1 workqueues.  Let's
      automatically set __WQ_ORDERED for those workqueues.
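      The change itself is small; a sketch of the check in
      alloc_workqueue():

          /* unbound + max_active == 1 used to imply ordered execution;
           * enforce it so the historical expectation keeps holding */
          if ((flags & WQ_UNBOUND) && max_active == 1)
                  flags |= __WQ_ORDERED;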
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Christoph Hellwig <hch@infradead.org>
      Reported-by: Alexei Potashnik <alexei@purestorage.com>
      Fixes: 4c16bd32 ("workqueue: implement NUMA affinity for unbound workqueues")
      Cc: stable@vger.kernel.org # v3.10+
  21. 20 Jun 2017, 1 commit
    • sched/wait: Rename wait_queue_t => wait_queue_entry_t · ac6424b9
      Authored by Ingo Molnar
      Rename:
      
      	wait_queue_t		=>	wait_queue_entry_t
      
      'wait_queue_t' was always a slight misnomer: its name implies that it's a "queue",
      but in reality it's a queue *entry*. The 'real' queue is the wait queue head,
      which had to carry the name.
      
      Start sorting this out by renaming it to 'wait_queue_entry_t'.
      
      This also allows the real structure name 'struct __wait_queue' to
      lose its double underscore and become 'struct wait_queue_entry',
      which is the more canonical nomenclature for such data types.
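      A minimal sketch of the rename (field layout elided):

          /* before */
          typedef struct __wait_queue wait_queue_t;

          /* after: the name says what it is -- an entry, not the queue */
          typedef struct wait_queue_entry wait_queue_entry_t;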
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  22. 15 Apr 2017, 1 commit
    • workqueue: Provide work_on_cpu_safe() · 0e8d6a93
      Authored by Thomas Gleixner
      work_on_cpu() is not protected against CPU hotplug.  Code that must
      either execute on an online CPU or fail if the CPU is not available
      would have to protect against CPU hotplug at the call site.
      
      Provide a function which does get/put_online_cpus() around the call to
      work_on_cpu() and fails the call with -ENODEV if the target CPU is not
      online.
      
      Preparatory patch to convert several racy task affinity manipulations.
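      The provided wrapper is essentially (modulo export and kerneldoc):

          long work_on_cpu_safe(int cpu, long (*fn)(void *), void *arg)
          {
                  long ret = -ENODEV;

                  get_online_cpus();
                  if (cpu_online(cpu))
                          ret = work_on_cpu(cpu, fn, arg);
                  put_online_cpus();
                  return ret;
          }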
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Len Brown <lenb@kernel.org>
      Link: http://lkml.kernel.org/r/20170412201042.262610721@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  23. 07 Mar 2017, 2 commits
  24. 10 Feb 2017, 1 commit
    • time: Remove CONFIG_TIMER_STATS · dfb4357d
      Authored by Kees Cook
      Currently CONFIG_TIMER_STATS exposes process information across namespaces:
      
      kernel/time/timer_list.c print_timer():
      
              SEQ_printf(m, ", %s/%d", tmp, timer->start_pid);
      
      /proc/timer_list:
      
       #11: <0000000000000000>, hrtimer_wakeup, S:01, do_nanosleep, cron/2570
      
      Given that the tracer can give the same information, this patch entirely
      removes CONFIG_TIMER_STATS.
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Acked-by: John Stultz <john.stultz@linaro.org>
      Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
      Cc: linux-doc@vger.kernel.org
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Xing Gao <xgao01@email.wm.edu>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Jessica Frazelle <me@jessfraz.com>
      Cc: kernel-hardening@lists.openwall.com
      Cc: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Michal Marek <mmarek@suse.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Olof Johansson <olof@lixom.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-api@vger.kernel.org
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Link: http://lkml.kernel.org/r/20170208192659.GA32582@beast
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  25. 20 Oct 2016, 1 commit
    • workqueue: move wq_numa_init() to workqueue_init() · 2186d9f9
      Authored by Tejun Heo
      While splitting up workqueue initialization into two parts,
      ac8f73400782 ("workqueue: make workqueue available early during boot")
      put wq_numa_init() into workqueue_init_early().  Unfortunately, on
      some archs, including power and arm64, the cpu-to-node mapping isn't
      yet established by the time the early init is called, leading to
      incorrect NUMA initialization and, subsequently, the following oops
      due to a zero cpumask on node-specific unbound pools.
      
        Unable to handle kernel paging request for data at address 0x00000038
        Faulting instruction address: 0xc0000000000fc0cc
        Oops: Kernel access of bad area, sig: 11 [#1]
        SMP NR_CPUS=2048 NUMA PowerNV
        Modules linked in:
        CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.8.0-compiler_gcc-6.2.0-next-20161005 #94
        task: c0000007f5400000 task.stack: c000001ffc084000
        NIP: c0000000000fc0cc LR: c0000000000ed928 CTR: c0000000000fbfd0
        REGS: c000001ffc087780 TRAP: 0300   Not tainted  (4.8.0-compiler_gcc-6.2.0-next-20161005)
        MSR: 9000000002009033 <SF,HV,VEC,EE,ME,IR,DR,RI,LE>  CR: 48000424  XER: 00000000
        CFAR: c0000000000089dc DAR: 0000000000000038 DSISR: 40000000 SOFTE: 0
        GPR00: c0000000000ed928 c000001ffc087a00 c000000000e63200 c000000010d6d600
        GPR04: c0000007f5409200 0000000000000021 000000000748e08c 000000000000001f
        GPR08: 0000000000000000 0000000000000021 000000000748f1f8 0000000000000000
        GPR12: 0000000028000422 c00000000fb80000 c00000000000e0c8 0000000000000000
        GPR16: 0000000000000000 0000000000000000 0000000000000021 0000000000000001
        GPR20: ffffffffafb50401 0000000000000000 c000000010d6d600 000000000000ba7e
        GPR24: 000000000000ba7e c000000000d8bc58 afb504000afb5041 0000000000000001
        GPR28: 0000000000000000 0000000000000004 c0000007f5409280 0000000000000000
        NIP [c0000000000fc0cc] enqueue_task_fair+0xfc/0x18b0
        LR [c0000000000ed928] activate_task+0x78/0xe0
        Call Trace:
        [c000001ffc087a00] [c0000007f5409200] 0xc0000007f5409200 (unreliable)
        [c000001ffc087b10] [c0000000000ed928] activate_task+0x78/0xe0
        [c000001ffc087b50] [c0000000000ede58] ttwu_do_activate+0x68/0xc0
        [c000001ffc087b90] [c0000000000ef1b8] try_to_wake_up+0x208/0x4f0
        [c000001ffc087c10] [c0000000000d3484] create_worker+0x144/0x250
        [c000001ffc087cb0] [c000000000cd72d0] workqueue_init+0x124/0x150
        [c000001ffc087d00] [c000000000cc0e74] kernel_init_freeable+0x158/0x360
        [c000001ffc087dc0] [c00000000000e0e4] kernel_init+0x24/0x160
        [c000001ffc087e30] [c00000000000bfa0] ret_from_kernel_thread+0x5c/0xbc
        Instruction dump:
        62940401 3b800000 3aa00000 7f17c378 3a600001 3b600001 60000000 60000000
        60420000 72490021 ebfe0150 2f890001 <ebbf0038> 419e0de0 7fbee840 419e0e58
        ---[ end trace 0000000000000000 ]---
      
      Fix it by moving wq_numa_init() to workqueue_init().  As this means
      that the early initialization may not have full NUMA info for
      per-cpu pools and ignores NUMA affinity for unbound pools, fix them
      up from workqueue_init() after wq_numa_init().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: http://lkml.kernel.org/r/87twck5wqo.fsf@concordia.ellerman.id.au
      Fixes: ac8f73400782 ("workqueue: make workqueue available early during boot")
      Signed-off-by: Tejun Heo <tj@kernel.org>
  26. 12 Oct 2016, 1 commit
  27. 18 Sep 2016, 2 commits
    • workqueue: remove keventd_up() · 863b710b
      Authored by Tejun Heo
      keventd_up() no longer has in-kernel users.  Remove it and make
      wq_online static.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: make workqueue available early during boot · 3347fa09
      Authored by Tejun Heo
      Workqueue is currently initialized in an early init call; however,
      there are cases where early boot code has to be split and reordered
      to run after workqueue initialization, or where the same code path
      that uses workqueues runs both before and after workqueue
      initialization.  The latter cases have to gate workqueue usage with
      keventd_up() tests, which is nasty and easy to get wrong.
      
      Workqueue usage has become widespread and it'd be a lot more
      convenient if it could be used very early from boot.  This patch
      splits workqueue initialization into two steps:
      workqueue_init_early(), which sets up the basic data structures so
      that workqueues can be created and work items queued, and
      workqueue_init(), which actually brings up workqueues online and
      starts executing queued work items.  The former step can be done
      very early during boot once memory allocation, cpumasks and idr are
      initialized.  The latter can be done right after kthreads become
      available.
      
      This allows work item queueing and canceling from very early boot
      which is what most of these use cases want.
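      The resulting boot sequence, sketched (call sites per the
      description; heavily simplified):

          asmlinkage __visible void __init start_kernel(void)
          {
                  /* ... memory allocation, cpumasks and idr are up ... */
                  workqueue_init_early(); /* creation/queueing possible */
                  /* ... */
          }

          static noinline void __init kernel_init_freeable(void)
          {
                  /* ... kthreads are available ... */
                  workqueue_init();       /* pools online, work runs */
                  /* ... */
          }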
      
      * As system_wq being initialized no longer indicates that workqueue
        is fully online, update keventd_up() to test wq_online instead.
        The follow-up patches will get rid of all its usages and the
        function itself.
      
      * Flushing doesn't make sense before workqueue is fully initialized.
        The flush functions trigger WARN and return immediately before fully
        online.
      
      * Work items are never in-flight before fully online.  Canceling can
        always succeed by skipping the flush step.
      
      * Some code paths can no longer assume to be called with irq enabled
        as irq is disabled during early boot.  Use irqsave/restore
        operations instead.
      
      v2: Watchdog init, which requires timer to be running, moved from
          workqueue_init_early() to workqueue_init().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/CA+55aFx0vPuMuxn00rBSM192n-Du5uxy+4AvKa0SBSOVJeuCGg@mail.gmail.com
  28. 16 Sep 2016, 1 commit
  29. 29 Aug 2016, 1 commit
  30. 14 Jul 2016, 1 commit
  31. 02 Jul 2016, 1 commit
  32. 17 Jun 2016, 1 commit
    • workqueue: Fix setting affinity of unbound worker threads · d945b5e9
      Authored by Peter Zijlstra
      With commit e9d867a6 ("sched: Allow per-cpu kernel threads to
      run on online && !active"), __set_cpus_allowed_ptr() expects that only
      strict per-cpu kernel threads can have affinity to an online CPU which
      is not yet active.
      
      This assumption is currently broken in the CPU_ONLINE notification
      handler for the workqueues where restore_unbound_workers_cpumask()
      calls set_cpus_allowed_ptr() when the first cpu in the unbound
      worker's pool->attr->cpumask comes online. Since
      set_cpus_allowed_ptr() is called with pool->attr->cpumask, in which
      the only online CPU is not yet active, we get the following
      WARN_ON during a CPU online operation.
      
      ------------[ cut here ]------------
      WARNING: CPU: 40 PID: 248 at kernel/sched/core.c:1166
      __set_cpus_allowed_ptr+0x228/0x2e0
      Modules linked in:
      CPU: 40 PID: 248 Comm: cpuhp/40 Not tainted 4.6.0-autotest+ #4
      <..snip..>
      Call Trace:
      [c000000f273ff920] [c00000000010493c] __set_cpus_allowed_ptr+0x2cc/0x2e0 (unreliable)
      [c000000f273ffac0] [c0000000000ed4b0] workqueue_cpu_up_callback+0x2c0/0x470
      [c000000f273ffb70] [c0000000000f5c58] notifier_call_chain+0x98/0x100
      [c000000f273ffbc0] [c0000000000c5ed0] __cpu_notify+0x70/0xe0
      [c000000f273ffc00] [c0000000000c6028] notify_online+0x38/0x50
      [c000000f273ffc30] [c0000000000c5214] cpuhp_invoke_callback+0x84/0x250
      [c000000f273ffc90] [c0000000000c562c] cpuhp_up_callbacks+0x5c/0x120
      [c000000f273ffce0] [c0000000000c64d4] cpuhp_thread_fun+0x184/0x1c0
      [c000000f273ffd20] [c0000000000fa050] smpboot_thread_fn+0x290/0x2a0
      [c000000f273ffd80] [c0000000000f45b0] kthread+0x110/0x130
      [c000000f273ffe30] [c000000000009570] ret_from_kernel_thread+0x5c/0x6c
      ---[ end trace 00f1456578b2a3b2 ]---
      
      This patch fixes this by limiting the mask to the intersection of
      the pool affinity and online CPUs.
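      The fix in restore_unbound_workers_cpumask(), sketched:

          static cpumask_t cpumask;

          /* only CPUs that are both in the pool's mask and online */
          cpumask_and(&cpumask, pool->attrs->cpumask, cpu_online_mask);

          for_each_pool_worker(worker, pool)
                  WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
                                                    &cpumask) < 0);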
      
      Changelog-cribbed-from: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
      Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  33. 20 May 2016, 2 commits
  34. 13 May 2016, 1 commit
    • workqueue: fix rebind bound workers warning · f7c17d26
      Authored by Wanpeng Li
      ------------[ cut here ]------------
      WARNING: CPU: 0 PID: 16 at kernel/workqueue.c:4559 rebind_workers+0x1c0/0x1d0
      Modules linked in:
      CPU: 0 PID: 16 Comm: cpuhp/0 Not tainted 4.6.0-rc4+ #31
      Hardware name: IBM IBM System x3550 M4 Server -[7914IUW]-/00Y8603, BIOS -[D7E128FUS-1.40]- 07/23/2013
       0000000000000000 ffff881037babb58 ffffffff8139d885 0000000000000010
       0000000000000000 0000000000000000 0000000000000000 ffff881037babba8
       ffffffff8108505d ffff881037ba0000 000011cf3e7d6e60 0000000000000046
      Call Trace:
       dump_stack+0x89/0xd4
       __warn+0xfd/0x120
       warn_slowpath_null+0x1d/0x20
       rebind_workers+0x1c0/0x1d0
       workqueue_cpu_up_callback+0xf5/0x1d0
       notifier_call_chain+0x64/0x90
       ? trace_hardirqs_on_caller+0xf2/0x220
       ? notify_prepare+0x80/0x80
       __raw_notifier_call_chain+0xe/0x10
       __cpu_notify+0x35/0x50
       notify_down_prepare+0x5e/0x80
       ? notify_prepare+0x80/0x80
       cpuhp_invoke_callback+0x73/0x330
       ? __schedule+0x33e/0x8a0
       cpuhp_down_callbacks+0x51/0xc0
       cpuhp_thread_fun+0xc1/0xf0
       smpboot_thread_fn+0x159/0x2a0
       ? smpboot_create_threads+0x80/0x80
       kthread+0xef/0x110
       ? wait_for_completion+0xf0/0x120
       ? schedule_tail+0x35/0xf0
       ret_from_fork+0x22/0x50
       ? __init_kthread_worker+0x70/0x70
      ---[ end trace eb12ae47d2382d8f ]---
      notify_down_prepare: attempt to take down CPU 0 failed
      
      This bug can be reproduced with the config below, with nohz_full=
      covering all CPUs:
      
      CONFIG_BOOTPARAM_HOTPLUG_CPU0=y
      CONFIG_DEBUG_HOTPLUG_CPU0=y
      CONFIG_NO_HZ_FULL=y
      
      As Thomas pointed out:
      
      | If a down prepare callback fails, then DOWN_FAILED is invoked for all
      | callbacks which have successfully executed DOWN_PREPARE.
      |
      | But, workqueue has actually two notifiers. One which handles
      | UP/DOWN_FAILED/ONLINE and one which handles DOWN_PREPARE.
      |
      | Now look at the priorities of those callbacks:
      |
      | CPU_PRI_WORKQUEUE_UP        = 5
      | CPU_PRI_WORKQUEUE_DOWN      = -5
      |
      | So the call order on DOWN_PREPARE is:
      |
      | CB 1
      | CB ...
      | CB workqueue_up() -> Ignores DOWN_PREPARE
      | CB ...
      | CB X ---> Fails
      |
      | So we call up to CB X with DOWN_FAILED
      |
      | CB 1
      | CB ...
      | CB workqueue_up() -> Handles DOWN_FAILED
      | CB ...
      | CB X-1
      |
      | So the problem is that the workqueue stuff handles DOWN_FAILED in the up
      | callback, while it should do it in the down callback. Which is not a good idea
      | either because it wants to be called early on rollback...
      |
      | Brilliant stuff, isn't it? The hotplug rework will solve this problem because
      | the callbacks become symmetric, but for the existing mess, we need some
      | workaround in the workqueue code.
      
      The boot CPU handles housekeeping duty (unbound timers, workqueues,
      timekeeping, ...) on behalf of full dynticks CPUs.  It must remain
      online when nohz full is enabled.  There is a priority assigned to
      every notifier_block:
      
      workqueue_cpu_up > tick_nohz_cpu_down > workqueue_cpu_down
      
      So the tick_nohz_cpu_down callback fails during down-prepare of
      CPU 0, and the notifier_blocks behind tick_nohz_cpu_down are not
      called any more, which means the workers are never actually
      unbound.  The hotplug state machine then falls back to undoing the
      operation and onlines CPU 0 again.  Workers get rebound
      unconditionally even though they were never unbound, triggering the
      warning in the process.
      
      This patch fixes it by checking for !DISASSOCIATED to avoid
      rebinding already-bound workers.
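      A sketch of the guard, per the description above (placement and
      details are illustrative):

          /* in rebind_workers(), with pool->lock held: a DOWN_FAILED
           * without a preceding DOWN_PREPARE means the workers were
           * never unbound -- nothing to rebind */
          if (!(pool->flags & POOL_DISASSOCIATED)) {
                  spin_unlock_irq(&pool->lock);
                  return;
          }
          pool->flags &= ~POOL_DISASSOCIATED;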
      
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Frédéric Weisbecker <fweisbec@gmail.com>
      Cc: stable@vger.kernel.org
      Suggested-by: Lai Jiangshan <jiangshanlai@gmail.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>