1. 25 October 2017, 8 commits
    • workqueue: Remove now redundant lock acquisitions wrt. workqueue flushes · fd1a5b04
      Committed by Byungchul Park
      The workqueue code added manual lock acquisition annotations to catch
      deadlocks.
      
      After lockdep crossrelease was introduced, some of those became redundant,
      since wait_for_completion() already does the acquisition and tracking.
      
      Remove the duplicate annotations.
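      For illustration, a minimal sketch of the pattern being removed
      (simplified from kernel/workqueue.c; the exact call sites are in the
      patch itself):
      
        /* Before: manual annotation so lockdep sees the flush dependency. */
        lock_map_acquire(&wq->lockdep_map);
        lock_map_release(&wq->lockdep_map);
        wait_for_completion(&this_flusher.done);
      
        /* After: wait_for_completion() records the same dependency via
         * crossrelease, so the manual annotation above can simply go. */
        wait_for_completion(&this_flusher.done);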
      Signed-off-by: Byungchul Park <byungchul.park@lge.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: amir73il@gmail.com
      Cc: axboe@kernel.dk
      Cc: darrick.wong@oracle.com
      Cc: david@fromorbit.com
      Cc: hch@infradead.org
      Cc: idryomov@gmail.com
      Cc: johan@kernel.org
      Cc: johannes.berg@intel.com
      Cc: kernel-team@lge.com
      Cc: linux-block@vger.kernel.org
      Cc: linux-fsdevel@vger.kernel.org
      Cc: linux-mm@kvack.org
      Cc: linux-xfs@vger.kernel.org
      Cc: oleg@redhat.com
      Cc: tj@kernel.org
      Link: http://lkml.kernel.org/r/1508921765-15396-9-git-send-email-byungchul.park@lge.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      fd1a5b04
    • locking/lockdep: Introduce CONFIG_BOOTPARAM_LOCKDEP_CROSSRELEASE_FULLSTACK=y · e121d64e
      Committed by Byungchul Park
      Add a Kconfig knob that enables the lockdep "crossrelease_fullstack" boot parameter.
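      What the knob boils down to, as a sketch (simplified; the Kconfig
      entry itself lives in lib/Kconfig.debug):
      
        /* kernel/locking/lockdep.c: the config option only changes the
         * default of the crossrelease_fullstack boot parameter. */
        #ifdef CONFIG_BOOTPARAM_LOCKDEP_CROSSRELEASE_FULLSTACK
        static int crossrelease_fullstack = 1;
        #else
        static int crossrelease_fullstack;
        #endif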
      Suggested-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Byungchul Park <byungchul.park@lge.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: amir73il@gmail.com
      Cc: axboe@kernel.dk
      Cc: darrick.wong@oracle.com
      Cc: david@fromorbit.com
      Cc: hch@infradead.org
      Cc: idryomov@gmail.com
      Cc: johan@kernel.org
      Cc: johannes.berg@intel.com
      Cc: kernel-team@lge.com
      Cc: linux-block@vger.kernel.org
      Cc: linux-fsdevel@vger.kernel.org
      Cc: linux-mm@kvack.org
      Cc: linux-xfs@vger.kernel.org
      Cc: oleg@redhat.com
      Cc: tj@kernel.org
      Link: http://lkml.kernel.org/r/1508921765-15396-7-git-send-email-byungchul.park@lge.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e121d64e
    • locking/lockdep: Add a boot parameter allowing unwind in cross-release and disable it by default · d141babe
      Committed by Byungchul Park
      Johan Hovold reported a heavy performance regression caused by lockdep
      cross-release:
      
       > Boot time (from "Linux version" to login prompt) had in fact doubled
       > since 4.13 where it took 17 seconds (with my current config) compared to
       > the 35 seconds I now see with 4.14-rc4.
       >
       > A quick bisect pointed to lockdep and specifically the following commit:
       >
       >	28a903f6 ("locking/lockdep: Handle non(or multi)-acquisition
       >	               of a crosslock")
       >
       > which I've verified is the commit which doubled the boot time (compared
       > to 28a903f6^) (added by lockdep crossrelease series [1]).
      
      Currently cross-release performs unwind on every acquisition, but that
      is very expensive.
      
      This patch makes the unwind optional, disables it by default, and only
      records acquire_ip.
      
      Full stack traces are sometimes required for full analysis, in which
      case the boot parameter crossrelease_fullstack can be specified.
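      A sketch of the mechanism (simplified; names follow the 4.14-era
      kernel/locking/lockdep.c):
      
        static int crossrelease_fullstack;
      
        static int __init allow_crossrelease_fullstack(char *str)
        {
                crossrelease_fullstack = 1;
                return 0;
        }
        early_param("crossrelease_fullstack", allow_crossrelease_fullstack);
      
        /* In add_xhlock(): unwind only on request, otherwise record just
         * the acquisition IP. */
        if (crossrelease_fullstack) {
                xhlock->trace.nr_entries = 0;
                xhlock->trace.skip = 3;
                save_stack_trace(&xhlock->trace);
        } else {
                xhlock->trace.nr_entries = 1;
                xhlock->trace.entries[0] = hlock->acquire_ip;
        }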
      
      On my qemu Ubuntu machine (x86_64, 4 cores, 512M), the regression was
      fixed. We measure boot times with 'perf stat --null --repeat 10 $QEMU',
      where $QEMU launches a kernel with init=/bin/true:
      
      1. No lockdep enabled:
      
       Performance counter stats for 'qemu_booting_time.sh bzImage' (10 runs):
      
             2.756558155 seconds time elapsed                    ( +-  0.09% )
      
      2. Lockdep enabled:
      
       Performance counter stats for 'qemu_booting_time.sh bzImage' (10 runs):
      
             2.968710420 seconds time elapsed                    ( +-  0.12% )
      
      3. Lockdep enabled + cross-release enabled:
      
       Performance counter stats for 'qemu_booting_time.sh bzImage' (10 runs):
      
             3.153839636 seconds time elapsed                    ( +-  0.31% )
      
      4. Lockdep enabled + cross-release enabled + this patch applied:
      
       Performance counter stats for 'qemu_booting_time.sh bzImage' (10 runs):
      
             2.963669551 seconds time elapsed                    ( +-  0.11% )
      
      I.e. lockdep cross-release performance is now indistinguishable from
      vanilla lockdep.
      Bisected-by: Johan Hovold <johan@kernel.org>
      Analyzed-by: Thomas Gleixner <tglx@linutronix.de>
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Reported-by: Johan Hovold <johan@kernel.org>
      Signed-off-by: Byungchul Park <byungchul.park@lge.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: amir73il@gmail.com
      Cc: axboe@kernel.dk
      Cc: darrick.wong@oracle.com
      Cc: david@fromorbit.com
      Cc: hch@infradead.org
      Cc: idryomov@gmail.com
      Cc: johannes.berg@intel.com
      Cc: kernel-team@lge.com
      Cc: linux-block@vger.kernel.org
      Cc: linux-fsdevel@vger.kernel.org
      Cc: linux-mm@kvack.org
      Cc: linux-xfs@vger.kernel.org
      Cc: oleg@redhat.com
      Cc: tj@kernel.org
      Link: http://lkml.kernel.org/r/1508921765-15396-5-git-send-email-byungchul.park@lge.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d141babe
    • locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE() · 6aa7de05
      Committed by Mark Rutland
      Please do not apply this to mainline directly; instead, please re-run
      the coccinelle script shown below and apply its output.
      
      For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
      preference to ACCESS_ONCE(), and new code is expected to use one of the
      former. So far, there's been no reason to change most existing uses of
      ACCESS_ONCE(), as these aren't harmful, and changing them results in
      churn.
      
      However, for some features, the read/write distinction is critical to
      correct operation. To distinguish these cases, separate read/write
      accessors must be used. This patch migrates (most) remaining
      ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
      coccinelle script:
      
      ----
      // Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
      // WRITE_ONCE()
      
      // $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch
      
      virtual patch
      
      @ depends on patch @
      expression E1, E2;
      @@
      
      - ACCESS_ONCE(E1) = E2
      + WRITE_ONCE(E1, E2)
      
      @ depends on patch @
      expression E;
      @@
      
      - ACCESS_ONCE(E)
      + READ_ONCE(E)
      ----
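      The effect of the conversion, shown as a self-contained userspace
      sketch (the macro bodies are simplified analogues of the real
      <linux/compiler.h> definitions):
      
        #include <stdio.h>
      
        /* Simplified analogues: the kernel versions handle more cases,
         * but the volatile-cast core is the same idea. */
        #define READ_ONCE(x)     (*(const volatile typeof(x) *)&(x))
        #define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))
      
        static int shared;
      
        int main(void)
        {
                WRITE_ONCE(shared, 42);    /* was: ACCESS_ONCE(shared) = 42; */
                int v = READ_ONCE(shared); /* was: v = ACCESS_ONCE(shared);  */
                printf("%d\n", v);
                return 0;
        }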
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: davem@davemloft.net
      Cc: linux-arch@vger.kernel.org
      Cc: mpe@ellerman.id.au
      Cc: shuah@kernel.org
      Cc: snitzer@redhat.com
      Cc: thor.thayer@linux.intel.com
      Cc: tj@kernel.org
      Cc: viro@zeniv.linux.org.uk
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6aa7de05
    • locking/atomics, workqueue: Convert ACCESS_ONCE() to READ_ONCE()/WRITE_ONCE() · c95491ed
      Committed by Mark Rutland
      For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
      preference to ACCESS_ONCE(), and new code is expected to use one of the
      former. So far, there's been no reason to change most existing uses of
      ACCESS_ONCE(), as these aren't currently harmful.
      
      However, for some features it is necessary to instrument reads and
      writes separately, which is not possible with ACCESS_ONCE(). This
      distinction is critical to correct operation.
      
      It's possible to transform the bulk of kernel code using the Coccinelle
      script below. However, this doesn't handle comments, leaving references
      to ACCESS_ONCE() instances which have been removed. As a preparatory
      step, this patch converts the workqueue code and comments to use
      {READ,WRITE}_ONCE() consistently.
      
      ----
      virtual patch
      
      @ depends on patch @
      expression E1, E2;
      @@
      
      - ACCESS_ONCE(E1) = E2
      + WRITE_ONCE(E1, E2)
      
      @ depends on patch @
      expression E;
      @@
      
      - ACCESS_ONCE(E)
      + READ_ONCE(E)
      ----
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: davem@davemloft.net
      Cc: linux-arch@vger.kernel.org
      Cc: mpe@ellerman.id.au
      Cc: shuah@kernel.org
      Cc: snitzer@redhat.com
      Cc: thor.thayer@linux.intel.com
      Cc: viro@zeniv.linux.org.uk
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/1508792849-3115-12-git-send-email-paulmck@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c95491ed
    • locking/qrwlock: Prevent slowpath writers getting held up by fastpath · d1331661
      Committed by Will Deacon
      When a prospective writer takes the qrwlock locking slowpath due to the
      lock being held, it attempts to cmpxchg the wmode field from 0 to
      _QW_WAITING so that concurrent lockers also take the slowpath and queue
      on the spinlock accordingly, allowing the lockers to drain.
      
      Unfortunately, this isn't fair, because a fastpath writer that comes in
      after the lock is made available but before the _QW_WAITING flag is set
      can effectively jump the queue. If there is a steady stream of prospective
      writers, then the waiter will be held off indefinitely.
      
      This patch restores fairness by separating _QW_WAITING and _QW_LOCKED
      into two distinct fields: _QW_LOCKED continues to occupy the bottom byte
      of the lockword so that it can be cleared unconditionally when unlocking,
      but _QW_WAITING now occupies what used to be the bottom bit of the reader
      count. This then forces the slow-path for concurrent lockers.
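      A sketch of the resulting lockword layout (constants as in the patch,
      comments simplified):
      
        #define _QW_WAITING  0x100  /* A writer is waiting (old reader bit 0) */
        #define _QW_LOCKED   0x0ff  /* A writer holds the lock (bottom byte)  */
        #define _QW_WMASK    0x1ff  /* Writer mode mask                       */
        #define _QR_SHIFT    9      /* Reader count shift (was 8)             */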
      Tested-by: Waiman Long <longman@redhat.com>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Tested-by: Adam Wallis <awallis@codeaurora.org>
      Tested-by: Jan Glauber <jglauber@cavium.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Jeremy.Linton@arm.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arm-kernel@lists.infradead.org
      Link: http://lkml.kernel.org/r/1507810851-306-6-git-send-email-will.deacon@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d1331661
    • locking/qrwlock: Use atomic_cond_read_acquire() when spinning in qrwlock · b519b56e
      Committed by Will Deacon
      The qrwlock slowpaths involve spinning when either a prospective reader
      is waiting for a concurrent writer to drain, or a prospective writer is
      waiting for concurrent readers to drain. In both of these situations,
      atomic_cond_read_acquire() can be used to avoid busy-waiting and make use
      of any backoff functionality provided by the architecture.
      
      This patch replaces the open-coded loops and the rspin_until_writer_unlock()
      implementation with atomic_cond_read_acquire(). The write-mode transition
      from zero to _QW_WAITING is left alone, since (a) it doesn't need acquire
      semantics and (b) it should be fast.
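      The shape of the conversion, as a sketch (the helper below is the
      pre-patch kernel/locking/qrwlock.c one):
      
        /* Before: open-coded acquire-spin until the writer goes away. */
        static __always_inline u32
        rspin_until_writer_unlock(struct qrwlock *lock, u32 cnts)
        {
                while ((cnts & _QW_WMASK) == _QW_LOCKED) {
                        cpu_relax();
                        cnts = atomic_read_acquire(&lock->cnts);
                }
                return cnts;
        }
      
        /* After: one call. VAL names the freshly loaded value, and the
         * architecture may back off (e.g. WFE on arm64) rather than
         * busy-wait. */
        atomic_cond_read_acquire(&lock->cnts, (VAL & _QW_WMASK) != _QW_LOCKED);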
      Tested-by: Waiman Long <longman@redhat.com>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Tested-by: Adam Wallis <awallis@codeaurora.org>
      Tested-by: Jan Glauber <jglauber@cavium.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Jeremy.Linton@arm.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arm-kernel@lists.infradead.org
      Link: http://lkml.kernel.org/r/1507810851-306-4-git-send-email-will.deacon@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b519b56e
    • locking/qrwlock: Use 'struct qrwlock' instead of 'struct __qrwlock' · e0d02285
      Committed by Will Deacon
      There's no good reason to keep the internal structure of struct qrwlock
      hidden from qrwlock.h, particularly as it's actually needed for unlock
      and ends up being abstracted independently behind the __qrwlock_write_byte()
      function.
      
      Stop pretending we can hide this stuff, and move the __qrwlock definition
      into qrwlock.h, removing the __qrwlock_write_byte() nastiness and using the
      same struct definition everywhere instead.
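      A sketch of the unified definition this yields (simplified from
      include/asm-generic/qrwlock_types.h after the patch); write-unlock can
      then clear the bottom byte with a plain smp_store_release():
      
        typedef struct qrwlock {
                union {
                        atomic_t cnts;
                        struct {
        #ifdef __LITTLE_ENDIAN
                                u8 wlocked;     /* Locked for write? */
                                u8 __lstate[3];
        #else
                                u8 __lstate[3];
                                u8 wlocked;
        #endif
                        };
                };
                arch_spinlock_t wait_lock;
        } arch_rwlock_t;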
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Jeremy.Linton@arm.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman Long <longman@redhat.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Link: http://lkml.kernel.org/r/1507810851-306-2-git-send-email-will.deacon@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e0d02285
  2. 24 October 2017, 1 commit
  3. 22 October 2017, 3 commits
  4. 21 October 2017, 2 commits
  5. 20 October 2017, 6 commits
  6. 19 October 2017, 3 commits
    • bpf: do not test for PCPU_MIN_UNIT_SIZE before percpu allocations · bc6d5031
      Committed by Daniel Borkmann
      PCPU_MIN_UNIT_SIZE is an implementation detail of the percpu
      allocator. Given we support __GFP_NOWARN now, let's just let
      the allocation request fail naturally instead. The two call
      sites from BPF mistakenly assumed __GFP_NOWARN would work, so
      no changes needed to their actual __alloc_percpu_gfp() calls
      which use the flag already.
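      The kind of pre-check being dropped, as a sketch (the exact call
      sites are in the BPF map allocation paths):
      
        /* Before: second-guess the percpu allocator's limit up front. */
        if (round_up(size, 8) > PCPU_MIN_UNIT_SIZE)
                return ERR_PTR(-E2BIG);
      
        /* After: no pre-check. __alloc_percpu_gfp(size, align,
         * GFP_KERNEL | __GFP_NOWARN) just returns NULL for oversized
         * requests, without the warning. */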
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bc6d5031
    • bpf: fix splat for illegal devmap percpu allocation · 82f8dd28
      Committed by Daniel Borkmann
      It was reported that syzkaller was able to trigger a splat on
      devmap percpu allocation due to illegal/unsupported allocation
      request size passed to __alloc_percpu():
      
        [   70.094249] illegal size (32776) or align (8) for percpu allocation
        [   70.094256] ------------[ cut here ]------------
        [   70.094259] WARNING: CPU: 3 PID: 3451 at mm/percpu.c:1365 pcpu_alloc+0x96/0x630
        [...]
        [   70.094325] Call Trace:
        [   70.094328]  __alloc_percpu_gfp+0x12/0x20
        [   70.094330]  dev_map_alloc+0x134/0x1e0
        [   70.094331]  SyS_bpf+0x9bc/0x1610
        [   70.094333]  ? selinux_task_setrlimit+0x5a/0x60
        [   70.094334]  ? security_task_setrlimit+0x43/0x60
        [   70.094336]  entry_SYSCALL_64_fastpath+0x1a/0xa5
      
      This was due to a too-large max_entries for the map, such that we
      surpassed the upper limit of PCPU_MIN_UNIT_SIZE. It's fine to
      fail naturally here, so switch to __alloc_percpu_gfp() and pass
      __GFP_NOWARN instead.
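      The fix in sketch form (field and helper names are illustrative of
      that era's kernel/bpf/devmap.c):
      
        /* Pass __GFP_NOWARN and treat failure like any other -ENOMEM. */
        dtab->flush_needed = __alloc_percpu_gfp(dev_map_bitmap_size(attr),
                                                __alignof__(unsigned long),
                                                GFP_KERNEL | __GFP_NOWARN);
        if (!dtab->flush_needed)
                goto free_dtab;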
      
      Fixes: 11393cc9 ("xdp: Add batching support to redirect map")
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Shankara Pailoor <sp3485@columbia.edu>
      Reported-by: Richard Weinberger <richard@nod.at>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Cc: John Fastabend <john.fastabend@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      82f8dd28
    • locking/static_keys: Improve uninitialized key warning · 5cdda511
      Committed by Borislav Petkov
      Right now it says:
      
        static_key_disable_cpuslocked used before call to jump_label_init
        ------------[ cut here ]------------
        WARNING: CPU: 0 PID: 0 at kernel/jump_label.c:161 static_key_disable_cpuslocked+0x68/0x70
        Modules linked in:
        CPU: 0 PID: 0 Comm: swapper Not tainted 4.14.0-rc5+ #1
        Hardware name: SGI.COM C2112-4GP3/X10DRT-P-Series, BIOS 2.0a 05/09/2016
        task: ffffffff81c0e480 task.stack: ffffffff81c00000
        RIP: 0010:static_key_disable_cpuslocked+0x68/0x70
        RSP: 0000:ffffffff81c03ef0 EFLAGS: 00010096 ORIG_RAX: 0000000000000000
        RAX: 0000000000000041 RBX: ffffffff81c32680 RCX: ffffffff81c5cbf8
        RDX: 0000000000000001 RSI: 0000000000000092 RDI: 0000000000000002
        RBP: ffff88807fffd240 R08: 726f666562206465 R09: 0000000000000136
        R10: 0000000000000000 R11: 696e695f6c656261 R12: ffffffff82158900
        R13: ffffffff8215f760 R14: 0000000000000001 R15: 0000000000000008
        FS:  0000000000000000(0000) GS:ffff883f7f400000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: ffff88807ffff000 CR3: 0000000001c09000 CR4: 00000000000606b0
        Call Trace:
         static_key_disable+0x16/0x20
         start_kernel+0x15a/0x45d
         ? load_ucode_intel_bsp+0x11/0x2d
         secondary_startup_64+0xa5/0xb0
        Code: 48 c7 c7 a0 15 cf 81 e9 47 53 4b 00 48 89 df e8 5f fc ff ff eb e8 48 c7 c6 \
      	c0 97 83 81 48 c7 c7 d0 ff a2 81 31 c0 e8 c5 9d f5 ff <0f> ff eb a7 0f ff eb \
      	b0 e8 eb a2 4b 00 53 48 89 fb e8 42 0e f0
      
      but it doesn't tell me which key it is. So dump the key's name too:
      
        static_key_disable_cpuslocked(): static key 'virt_spin_lock_key' used before call to jump_label_init()
      
      And that makes pinpointing which key is causing that a lot easier.
      
       include/linux/jump_label.h           |   14 +++++++-------
       include/linux/jump_label_ratelimit.h |    6 +++---
       kernel/jump_label.c                  |   14 +++++++-------
       3 files changed, 17 insertions(+), 17 deletions(-)
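      The resulting check, in sketch form (close to the post-patch
      include/linux/jump_label.h; %pS resolves the key's address to its
      symbol name, which is what prints 'virt_spin_lock_key' above):
      
        #define STATIC_KEY_CHECK_USE(key)                                   \
                WARN(!static_key_initialized,                               \
                     "%s(): static key '%pS' used before call to jump_label_init()", \
                     __func__, (key))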
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20171018152428.ffjgak4o25f7ept6@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5cdda511
  7. 18 October 2017, 1 commit
    • bpf: disallow arithmetic operations on context pointer · 28e33f9d
      Committed by Jakub Kicinski
      Commit f1174f77 ("bpf/verifier: rework value tracking")
      removed the crafty selection of which pointer types are
      allowed to be modified.  This is OK for most pointer types
      since adjust_ptr_min_max_vals() will catch operations on
      immutable pointers.  One exception is PTR_TO_CTX, which is
      now allowed to be offset freely.
      
      The intent of aforementioned commit was to allow context
      access via modified registers.  The offset passed to
      ->is_valid_access() verifier callback has been adjusted
      by the value of the variable offset.
      
      What is missing, however, is taking the variable offset
      into account when the context register is used; in terms
      of the code, adding that offset to the value passed to the
      ->convert_ctx_access() callback.  This leads to the following
      eBPF user code:
      
           r1 += 68
           r0 = *(u32 *)(r1 + 8)
           exit
      
      being translated to this in kernel space:
      
         0: (07) r1 += 68
         1: (61) r0 = *(u32 *)(r1 +180)
         2: (95) exit
      
      Offset 8 corresponds to 180 in the kernel, but offset
      76 is valid too.  The verifier will "accept" the access at offset
      68+8=76 but then "convert" it as an access at offset 8, i.e. 180.
      The effective access at offset 68+180=248 is beyond the kernel context.
      (This is a __sk_buff example on a debug-heavy kernel -
      packet mark is 8 -> 180, 76 would be data.)
      
      Dereferencing the modified context pointer is not as easy
      as dereferencing other types, because we have to translate
      the access to reading a field in kernel structures which is
      usually at a different offset and often of a different size.
      To allow modifying the pointer we would have to make sure
      that given eBPF instruction will always access the same
      field or the fields accessed are "compatible" in terms of
      offset and size...
      
      Disallow dereferencing modified context pointers and add
      to selftests the test case described here.
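      A sketch of the verifier-side rejection (heavily simplified; the
      real check in kernel/bpf/verifier.c is more granular and prints a
      more detailed error):
      
        /* Loads/stores through PTR_TO_CTX must use a fixed zero register
         * offset; only the constant instruction offset is translatable. */
        if (reg->type == PTR_TO_CTX &&
            (reg->off != 0 || !tnum_equals_const(reg->var_off, 0))) {
                verbose("dereference of modified ctx ptr is not allowed\n");
                return -EACCES;
        }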
      
      Fixes: f1174f77 ("bpf/verifier: rework value tracking")
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Edward Cree <ecree@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      28e33f9d
  8. 14 October 2017, 1 commit
  9. 13 October 2017, 2 commits
    • genirq: generic chip: remove irq_gc_mask_disable_reg_and_ack() · 0d08af35
      Committed by Doug Berger
      Any usage of the irq_gc_mask_disable_reg_and_ack() function has
      been replaced with the desired functionality.
      
      The incorrect and ambiguously named function is removed here to
      prevent accidental misuse.
      Signed-off-by: Doug Berger <opendmb@gmail.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      0d08af35
    • genirq: generic chip: Add irq_gc_mask_disable_and_ack_set() · 20608924
      Committed by Doug Berger
      The irq_gc_mask_disable_reg_and_ack() function name implies that it
      provides the combined functions of irq_gc_mask_disable_reg() and
      irq_gc_ack().  However, the implementation does not actually do
      that since it writes the mask instead of the disable register. It
      also does not maintain the mask cache which makes it inappropriate
      to use with other masking functions.
      
      In addition, commit 659fb32d ("genirq: replace irq_gc_ack() with
      {set,clr}_bit variants (fwd)") effectively renamed irq_gc_ack() to
      irq_gc_ack_set_bit() so this function probably should have also been
      renamed at that time.
      
      The generic chip code currently provides three functions for use
      with the irq_mask member of the irq_chip structure and two functions
      for use with the irq_ack member of the irq_chip structure. These
      functions could be combined into six functions for use with the
      irq_mask_ack member of the irq_chip structure.  However, since only
      one of the combinations is currently used, only the function
      irq_gc_mask_disable_and_ack_set() is added by this commit.
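      The combined callback, in sketch form (simplified from the patch):
      
        void irq_gc_mask_disable_and_ack_set(struct irq_data *d)
        {
                struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
                struct irq_chip_type *ct = irq_data_get_chip_type(d);
                u32 mask = d->mask;
      
                irq_gc_lock(gc);
                irq_reg_writel(gc, mask, ct->regs.disable); /* disable reg */
                *ct->mask_cache &= ~mask;                   /* keep cache  */
                irq_reg_writel(gc, mask, ct->regs.ack);     /* ack set-bit */
                irq_gc_unlock(gc);
        }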
      
      The '_reg' and '_bit' portions of the base function name were left
      out of the new combined function name in an attempt to keep the
      function name length manageable with the 80 character source code
      line length while still allowing the distinct aspects of each
      combination to be captured by the name.
      
      If other combinations are desired in the future please add them to
      the irq generic chip library at that time.
      Signed-off-by: Doug Berger <opendmb@gmail.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      20608924
  10. 11 October 2017, 3 commits
  11. 10 October 2017, 10 commits
    • locking/core: Remove {read,spin,write}_can_lock() · a8a217c2
      Committed by Will Deacon
      Outside of the locking code itself, {read,spin,write}_can_lock() have no
      users in tree. AppArmor (the last remaining user of write_can_lock()) got
      moved over to lockdep by the previous patch.
      
      This patch removes the use of {read,spin,write}_can_lock() from the
      BUILD_LOCK_OPS macro, deferring to the trylock operation for testing the
      lock status, and subsequently removes the unused macros altogether. They
      aren't guaranteed to work in a concurrent environment and can give
      incorrect results in the case of qrwlock.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: paulmck@linux.vnet.ibm.com
      Link: http://lkml.kernel.org/r/1507055129-12300-2-git-send-email-will.deacon@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a8a217c2
    • locking/rwsem: Add down_read_killable() · 76f8507f
      Committed by Kirill Tkhai
      Similar to down_write_killable(), add a killable
      version of down_read(), based on the
      __down_read_killable() function added in previous
      patches.
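      Typical usage, as a sketch (mmap_sem is just an example lock; the
      point is that a fatal signal now aborts the wait):
      
        if (down_read_killable(&mm->mmap_sem))
                return -EINTR;      /* killed while waiting for the rwsem */
        /* ... read-side critical section ... */
        up_read(&mm->mmap_sem);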
      Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: arnd@arndb.de
      Cc: avagin@virtuozzo.com
      Cc: davem@davemloft.net
      Cc: fenghua.yu@intel.com
      Cc: gorcunov@virtuozzo.com
      Cc: heiko.carstens@de.ibm.com
      Cc: hpa@zytor.com
      Cc: ink@jurassic.park.msu.ru
      Cc: mattst88@gmail.com
      Cc: rientjes@google.com
      Cc: rth@twiddle.net
      Cc: schwidefsky@de.ibm.com
      Cc: tony.luck@intel.com
      Cc: viro@zeniv.linux.org.uk
      Link: http://lkml.kernel.org/r/150670119884.23930.2585570605960763239.stgit@localhost.localdomain
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      76f8507f
    • sched/core: Ensure load_balance() respects the active_mask · 024c9d2f
      Committed by Peter Zijlstra
      While load_balance() masks the source CPUs against active_mask, it had
      a hole against the destination CPU. Ensure the destination CPU is also
      part of the 'domain-mask & active-mask' set.
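      The shape of the fix, as a sketch (the actual hunk is in
      kernel/sched/fair.c, load_balance()):
      
        /* Candidate CPUs: within the domain span AND currently active, so
         * the destination CPU gets the same validation as the sources. */
        cpumask_and(cpus, sched_domain_span(sd), cpu_active_mask);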
      Reported-by: Levin, Alexander (Sasha Levin) <alexander.levin@verizon.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 77d1dfda ("sched/topology, cpuset: Avoid spurious/wrong domain rebuilds")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      024c9d2f
    • sched/core: Address more wake_affine() regressions · f2cdd9cc
      Committed by Peter Zijlstra
      The trivial wake_affine_idle() implementation is very good for a
      number of workloads, but it comes apart at the moment there are no
      idle CPUs left, i.e. the overloaded case.
      
      hackbench:
      
      		NO_WA_WEIGHT		WA_WEIGHT
      
      hackbench-20  : 7.362717561 seconds	6.450509391 seconds
      
      (win)
      
      netperf:
      
      		  NO_WA_WEIGHT		WA_WEIGHT
      
      TCP_SENDFILE-1	: Avg: 54524.6		Avg: 52224.3
      TCP_SENDFILE-10	: Avg: 48185.2          Avg: 46504.3
      TCP_SENDFILE-20	: Avg: 29031.2          Avg: 28610.3
      TCP_SENDFILE-40	: Avg: 9819.72          Avg: 9253.12
      TCP_SENDFILE-80	: Avg: 5355.3           Avg: 4687.4
      
      TCP_STREAM-1	: Avg: 41448.3          Avg: 42254
      TCP_STREAM-10	: Avg: 24123.2          Avg: 25847.9
      TCP_STREAM-20	: Avg: 15834.5          Avg: 18374.4
      TCP_STREAM-40	: Avg: 5583.91          Avg: 5599.57
      TCP_STREAM-80	: Avg: 2329.66          Avg: 2726.41
      
      TCP_RR-1	: Avg: 80473.5          Avg: 82638.8
      TCP_RR-10	: Avg: 72660.5          Avg: 73265.1
      TCP_RR-20	: Avg: 52607.1          Avg: 52634.5
      TCP_RR-40	: Avg: 57199.2          Avg: 56302.3
      TCP_RR-80	: Avg: 25330.3          Avg: 26867.9
      
      UDP_RR-1	: Avg: 108266           Avg: 107844
      UDP_RR-10	: Avg: 95480            Avg: 95245.2
      UDP_RR-20	: Avg: 68770.8          Avg: 68673.7
      UDP_RR-40	: Avg: 76231            Avg: 75419.1
      UDP_RR-80	: Avg: 34578.3          Avg: 35639.1
      
      UDP_STREAM-1	: Avg: 64684.3          Avg: 66606
      UDP_STREAM-10	: Avg: 52701.2          Avg: 52959.5
      UDP_STREAM-20	: Avg: 30376.4          Avg: 29704
      UDP_STREAM-40	: Avg: 15685.8          Avg: 15266.5
      UDP_STREAM-80	: Avg: 8415.13          Avg: 7388.97
      
      (wins and losses)
      
      sysbench:
      
      		    NO_WA_WEIGHT		WA_WEIGHT
      
      sysbench-mysql-2  :  2135.17 per sec.		 2142.51 per sec.
      sysbench-mysql-5  :  4809.68 per sec.            4800.19 per sec.
      sysbench-mysql-10 :  9158.59 per sec.            9157.05 per sec.
      sysbench-mysql-20 : 14570.70 per sec.           14543.55 per sec.
      sysbench-mysql-40 : 22130.56 per sec.           22184.82 per sec.
      sysbench-mysql-80 : 20995.56 per sec.           21904.18 per sec.
      
      sysbench-psql-2   :  1679.58 per sec.            1705.06 per sec.
      sysbench-psql-5   :  3797.69 per sec.            3879.93 per sec.
      sysbench-psql-10  :  7253.22 per sec.            7258.06 per sec.
      sysbench-psql-20  : 11166.75 per sec.           11220.00 per sec.
      sysbench-psql-40  : 17277.28 per sec.           17359.78 per sec.
      sysbench-psql-80  : 17112.44 per sec.           17221.16 per sec.
      
      (increase on the top end)
      
      tbench:
      
      NO_WA_WEIGHT
      
      Throughput 685.211 MB/sec   2 clients   2 procs  max_latency=0.123 ms
      Throughput 1596.64 MB/sec   5 clients   5 procs  max_latency=0.119 ms
      Throughput 2985.47 MB/sec  10 clients  10 procs  max_latency=0.262 ms
      Throughput 4521.15 MB/sec  20 clients  20 procs  max_latency=0.506 ms
      Throughput 9438.1  MB/sec  40 clients  40 procs  max_latency=2.052 ms
      Throughput 8210.5  MB/sec  80 clients  80 procs  max_latency=8.310 ms
      
      WA_WEIGHT
      
      Throughput 697.292 MB/sec   2 clients   2 procs  max_latency=0.127 ms
      Throughput 1596.48 MB/sec   5 clients   5 procs  max_latency=0.080 ms
      Throughput 2975.22 MB/sec  10 clients  10 procs  max_latency=0.254 ms
      Throughput 4575.14 MB/sec  20 clients  20 procs  max_latency=0.502 ms
      Throughput 9468.65 MB/sec  40 clients  40 procs  max_latency=2.069 ms
      Throughput 8631.73 MB/sec  80 clients  80 procs  max_latency=8.605 ms
      
      (increase on the top end)
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f2cdd9cc
    • sched/core: Fix wake_affine() performance regression · d153b153
      Committed by Peter Zijlstra
      Eric reported a sysbench regression against commit:
      
        3fed382b ("sched/numa: Implement NUMA node level wake_affine()")
      
      Similarly, Rik was looking at the NAS-lu.C benchmark, which regressed
      against his v3.10 enterprise kernel.
      
      PRE (current tip/master):
      
       ivb-ep sysbench:
      
         2: [30 secs]     transactions:                        64110  (2136.94 per sec.)
         5: [30 secs]     transactions:                        143644 (4787.99 per sec.)
        10: [30 secs]     transactions:                        274298 (9142.93 per sec.)
        20: [30 secs]     transactions:                        418683 (13955.45 per sec.)
        40: [30 secs]     transactions:                        320731 (10690.15 per sec.)
        80: [30 secs]     transactions:                        355096 (11834.28 per sec.)
      
       hsw-ex NAS:
      
       OMP_PROC_BIND/lu.C.x_threads_144_run_1.log: Time in seconds =                    18.01
       OMP_PROC_BIND/lu.C.x_threads_144_run_2.log: Time in seconds =                    17.89
       OMP_PROC_BIND/lu.C.x_threads_144_run_3.log: Time in seconds =                    17.93
       lu.C.x_threads_144_run_1.log: Time in seconds =                   434.68
       lu.C.x_threads_144_run_2.log: Time in seconds =                   405.36
       lu.C.x_threads_144_run_3.log: Time in seconds =                   433.83
      
      POST (+patch):
      
       ivb-ep sysbench:
      
         2: [30 secs]     transactions:                        64494  (2149.75 per sec.)
         5: [30 secs]     transactions:                        145114 (4836.99 per sec.)
        10: [30 secs]     transactions:                        278311 (9276.69 per sec.)
        20: [30 secs]     transactions:                        437169 (14571.60 per sec.)
        40: [30 secs]     transactions:                        669837 (22326.73 per sec.)
        80: [30 secs]     transactions:                        631739 (21055.88 per sec.)
      
       hsw-ex NAS:
      
       lu.C.x_threads_144_run_1.log: Time in seconds =                    23.36
       lu.C.x_threads_144_run_2.log: Time in seconds =                    22.96
       lu.C.x_threads_144_run_3.log: Time in seconds =                    22.52
      
      This patch takes out all the shiny wake_affine() stuff and goes back to
      utter basics. Between the two CPUs involved with the wakeup (the CPU
      doing the wakeup and the CPU we ran on previously), pick the CPU we can
      run on _now_.
      
      This restores much of the regressions against the older kernels,
      but leaves some ground in the overloaded case. The default-enabled
      WA_WEIGHT (which will be introduced in the next patch) is an attempt
      to address the overloaded situation.
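      The "utter basics" pick, as a sketch (close to the wake_affine_idle()
      this patch introduces; the signature is trimmed for brevity):
      
        /* Prefer the waking CPU if it is idle, or if this is a sync wakeup
         * and the waker is the only runnable task; else keep prev_cpu. */
        static bool wake_affine_idle(int this_cpu, int prev_cpu, int sync)
        {
                if (idle_cpu(this_cpu))
                        return true;
      
                if (sync && cpu_rq(this_cpu)->nr_running == 1)
                        return true;
      
                return false;
        }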
      Reported-by: Eric Farman <farman@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: jinpuwang@gmail.com
      Cc: vcaputo@pengaru.com
      Fixes: 3fed382b ("sched/numa: Implement NUMA node level wake_affine()")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d153b153
    • perf/core: Fix cgroup time when scheduling descendants · e6a52033
      Committed by leilei.lin
      Update cgroup time when an event is scheduled in by descendants.
      Reviewed-and-tested-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: leilei.lin <leilei.lin@alibaba-inc.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@kernel.org
      Cc: alexander.shishkin@linux.intel.com
      Cc: brendan.d.gregg@gmail.com
      Cc: yang_oliver@hotmail.com
      Link: http://lkml.kernel.org/r/CALPjY3mkHiekRkRECzMi9G-bjUQOvOjVBAqxmWkTzc-g+0LwMg@mail.gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e6a52033
    • perf/core: Avoid freeing static PMU contexts when PMU is unregistered · df0062b2
      Committed by Will Deacon
      Since commit:
      
        1fd7e416 ("perf/core: Remove perf_cpu_context::unique_pmu")
      
      ... when a PMU is unregistered then its associated ->pmu_cpu_context is
      unconditionally freed. Whilst this is fine for dynamically allocated
      context types (i.e. those registered using perf_invalid_context), this
      causes a problem for sharing of static contexts such as
      perf_{sw,hw}_context, which are used by multiple built-in PMUs and
      effectively have a global lifetime.
      
      Whilst testing the ARM SPE driver, which must use perf_sw_context to
      support per-task AUX tracing, unregistering the driver as a result of a
      module unload resulted in:
      
       Unable to handle kernel NULL pointer dereference at virtual address 00000038
       Internal error: Oops: 96000004 [#1] PREEMPT SMP
       Modules linked in: [last unloaded: arm_spe_pmu]
       PC is at ctx_resched+0x38/0xe8
       LR is at perf_event_exec+0x20c/0x278
       [...]
       ctx_resched+0x38/0xe8
       perf_event_exec+0x20c/0x278
       setup_new_exec+0x88/0x118
       load_elf_binary+0x26c/0x109c
       search_binary_handler+0x90/0x298
       do_execveat_common.isra.14+0x540/0x618
       SyS_execve+0x38/0x48
      
      since the software context has been freed and the ctx.pmu->pmu_disable_count
      field has been set to NULL.
      
      This patch fixes the problem by avoiding the freeing of static PMU contexts
      altogether. Whilst the sharing of dynamic contexts is questionable, this
      actually requires the caller to share their context pointer explicitly
      and so the burden is on them to manage the object lifetime.
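      The guard the fix amounts to, as a sketch (the real function is
      free_pmu_context() in kernel/events/core.c):
      
        static void free_pmu_context(struct pmu *pmu)
        {
                /* Static contexts (task_ctx_nr > perf_invalid_context),
                 * such as perf_sw_context, are shared by multiple PMUs
                 * and live forever: never free them on unregister. */
                if (pmu->task_ctx_nr > perf_invalid_context)
                        return;
      
                mutex_lock(&pmus_lock);
                free_percpu(pmu->pmu_cpu_context);
                mutex_unlock(&pmus_lock);
        }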
      Reported-by: Kim Phillips <kim.phillips@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 1fd7e416 ("perf/core: Remove perf_cpu_context::unique_pmu")
      Link: http://lkml.kernel.org/r/1507040450-7730-1-git-send-email-will.deacon@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      df0062b2
    • locking/lockdep: Fix stacktrace mess · 8b405d5c
      Committed by Peter Zijlstra
      There is some complication between check_prevs_add() and
      check_prev_add() wrt. saving stack traces. The problem is that we want
      to be frugal with saving stack traces, since it consumes static
      resources.
      
      We'll only know in check_prev_add() if we need the trace, but we can
      call into it multiple times. So we want to do on-demand and re-use.
      
      A further complication is that check_prev_add() can drop graph_lock
      and mess with our static resources.
      
      In any case, the current state; after commit:
      
        ce07a941 ("locking/lockdep: Make check_prev_add() able to handle external stack_trace")
      
      is that we'll assume the trace contains valid data once
      check_prev_add() returns '2'. However, as noted by Josh, this is
      false: check_prev_add() can return '2' before having saved a trace,
      which then results in the possibility of using uninitialized data.
      Testing, as reported by Wu, shows a NULL deref.
      
      So simplify.
      
      Since the graph_lock() thing is a debug path that hasn't
      really been used in a long while, take it out back and avoid the
      head-ache.
      
      Further initialize the stack_trace to a known 'empty' state; as long
      as nr_entries == 0, nothing should deref entries. We can then use the
      'entries == NULL' test for a valid trace / on-demand saving.
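      The 'known empty' initialization, in sketch form (as in
      check_prevs_add() after the patch):
      
        struct stack_trace trace = {
                .nr_entries     = 0,
                .max_entries    = 0,
                .entries        = NULL, /* NULL <=> no trace saved yet */
                .skip           = 0,
        };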
      Analyzed-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: ce07a941 ("locking/lockdep: Make check_prev_add() able to handle external stack_trace")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8b405d5c
    • net: defer call to cgroup_sk_alloc() · fbb1fb4a
      Committed by Eric Dumazet
      sk_clone_lock() might run while the TCP/DCCP listener has already vanished.
      
      In order to prevent use after free, it is better to defer cgroup_sk_alloc()
      to the point we know both parent and child exist, and from process context.
      
      Fixes: e994b2f0 ("tcp: do not lock listener to process SYN packets")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fbb1fb4a
    • waitid(): Add missing access_ok() checks · 96ca579a
      Committed by Kees Cook
      Adds missing access_ok() checks.
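      The added check, in sketch form (simplified from kernel/exit.c;
      the 4.14-era access_ok() still took a VERIFY_WRITE argument):
      
        if (!infop)
                return err;
      
        /* unsafe_put_user() assumes the range was already validated. */
        if (!access_ok(VERIFY_WRITE, infop, sizeof(*infop)))
                return -EFAULT;
      
        user_access_begin();
        unsafe_put_user(signo, &infop->si_signo, Efault);
        /* ... remaining unsafe_put_user() calls ... */
        user_access_end();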
      
      CVE-2017-5123
      Reported-by: Chris Salls <chrissalls5@gmail.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Fixes: 4c48abe9 ("waitid(): switch copyout of siginfo to unsafe_put_user()")
      Cc: stable@kernel.org # 4.13
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      96ca579a