1. 21 Nov, 2016 (1 commit)
  2. 15 Nov, 2016 (2 commits)
    • perf/core: Do not set cpuctx->cgrp for unscheduled cgroups · 864c2357
      Committed by David Carrillo-Cisneros
      Commit:
      
        db4a8356 ("perf/core: Set cgroup in CPU contexts for new cgroup events")
      
      failed to verify that event->cgrp is actually the cgroup scheduled on
      the CPU before setting cpuctx->cgrp. This patch fixes that.
      
      Now that there are different paths for scheduled and unscheduled
      cgroups, add a warning to catch when cpuctx->cgrp is still set after
      the last cgroup event has been unscheduled.
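      
      As a rough illustration of the fix (a sketch, not the verbatim patch),
      cpuctx->cgrp is only set when the task currently on the CPU actually
      runs in the event's cgroup or one of its descendants:
      
        struct perf_cgroup *cgrp = perf_cgroup_from_task(task, ctx);
      
        /*
         * Only set cpuctx->cgrp when the current task's cgroup matches
         * the event's cgroup; otherwise the event stays unscheduled.
         */
        if (cgroup_is_descendant(cgrp->css.cgroup, event->cgrp->css.cgroup))
                cpuctx->cgrp = cgrp;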
      
      To verify the bug:
      
        # Create 2 cgroups.
        mkdir /dev/cgroups/devices/g1
        mkdir /dev/cgroups/devices/g2
      
        # launch a task, bind it to a cpu and move it to g1
        CPU=2
        while :; do : ; done &
        P=$!
      
        taskset -pc $CPU $P
        echo $P > /dev/cgroups/devices/g1/tasks
      
        # monitor g2 (it runs no tasks) and observe output
        perf stat -e cycles -I 1000 -C $CPU -G g2
      
        #           time             counts unit events
           1.000091408          7,579,527      cycles                    g2
           2.000350111      <not counted>      cycles                    g2
           3.000589181      <not counted>      cycles                    g2
           4.000771428      <not counted>      cycles                    g2
      
        # note the first line, which shows cycles counted in g2 despite
        # g2 having no tasks. This is because cpuctx->cgrp was wrongly
        # set when the context of the new event was installed.
        # After applying the fix we obtain the right output:
      
        perf stat -e cycles -I 1000 -C $CPU -G g2
        #           time             counts unit events
           1.000119615      <not counted>      cycles                    g2
           2.000389430      <not counted>      cycles                    g2
           3.000590962      <not counted>      cycles                    g2
      Signed-off-by: David Carrillo-Cisneros <davidcc@google.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vegard Nossum <vegard.nossum@gmail.com>
      Link: http://lkml.kernel.org/r/1478026378-86083-1-git-send-email-davidcc@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      864c2357
    • Revert "printk: make reading the kernel log flush pending lines" · f5c9f9c7
      Committed by Linus Torvalds
      This reverts commit bfd8d3f2.
      
      It turns out that this flushes things much too aggressively, and causes
      lines to break up when the system logger races with new continuation
      lines being printed.
      
      There's a pending patch to make printk() flushing much more
      straightforward, but it's too invasive for 4.9, so in the meantime let's
      just not make the system message logging flush continuation lines.
      They'll be flushed by the final newline anyway.
      Suggested-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f5c9f9c7
  3. 12 Nov, 2016 (1 commit)
    • Revert "console: don't prefer first registered if DT specifies stdout-path" · c6c7d83b
      Committed by Hans de Goede
      This reverts commit 05fd007e ("console: don't prefer first
      registered if DT specifies stdout-path").
      
      The reverted commit changes existing behavior on which many ARM boards
      rely.  Many ARM small-board computers, e.g. the Raspberry Pi, have
      both a video output and a serial console.  Depending on whether the
      user is using the device as a regular computer or as a headless
      device, the console needs to be on one output or the other.
      
      Many users rely on the kernel behavior of the console being present on
      both outputs.  Before the reverted commit, the console setup with no
      console= kernel arguments on an ARM board which sets stdout-path in DT
      would look like this:
      
        [root@localhost ~]# cat /proc/consoles
        ttyS0                -W- (EC p a)    4:64
        tty0                 -WU (E  p  )    4:1
      
      Whereas after the reverted commit, it looks like this:
      
        [root@localhost ~]# cat /proc/consoles
        ttyS0                -W- (EC p a)    4:64
      
      This commit reverts commit 05fd007e ("console: don't prefer first
      registered if DT specifies stdout-path") restoring the original
      behavior.
      
      Fixes: 05fd007e ("console: don't prefer first registered if DT specifies stdout-path")
      Link: http://lkml.kernel.org/r/20161104121135.4780-2-hdegoede@redhat.com
      Signed-off-by: Hans de Goede <hdegoede@redhat.com>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Rob Herring <robh+dt@kernel.org>
      Cc: Frank Rowand <frowand.list@gmail.com>
      Cc: Thorsten Leemhuis <regressions@leemhuis.info>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c6c7d83b
  4. 08 Nov, 2016 (3 commits)
    • genirq: Use irq type from irqdata instead of irqdesc · 7ee7e87d
      Committed by Thomas Gleixner
      The type flags in the irq descriptor are there for historical reasons and
      only updated via irq_modify_status() or irq_set_type(). Both functions also
      update the type flags in irqdata. __setup_irq() is the only left over user
      of the type flags in the irq descriptor.
      
      If __setup_irq() is called with empty irq type flags, then the type flags
      are retrieved from irqdata. If an interrupt is shared, then the type flags
      are compared with the type flags stored in the irq descriptor.
      
      On x86 the ioapic does not have an irq_set_type() callback because the type
      is defined in the BIOS tables and cannot be changed. The type is stored in
      irqdata at setup time without updating the type data in the irq
      descriptor. As a result the comparison described above fails.
      
      There is no point in updating the irq descriptor flags because the only
      relevant storage is irqdata. Use the type flags from irqdata for both
      retrieval and comparison in __setup_irq() instead.
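      
      A minimal sketch of the idea (not the verbatim patch): both the
      fallback lookup and the shared-interrupt comparison in __setup_irq()
      read the trigger type from irqdata via irqd_get_trigger_type():
      
        /* If no trigger type was given, take the one stored in irqdata */
        if (!(new->flags & IRQF_TRIGGER_MASK))
                new->flags |= irqd_get_trigger_type(&desc->irq_data);
      
        /* For shared interrupts, compare against irqdata, not irqdesc */
        if (irqd_get_trigger_type(&desc->irq_data) !=
            (new->flags & IRQF_TRIGGER_MASK))
                goto mismatch;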
      
      Apart from that, the printout in case of non-matching type flags has the
      old and new type flags arguments flipped. Fix that as well.
      
      For correctness sake the flags stored in the irq descriptor should be
      removed, but this is beyond the scope of this bugfix and will be done in a
      later patch.
      
      Fixes: 4b357dae ("genirq: Look-up trigger type if not specified by caller")
      Reported-and-tested-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Jon Hunter <jonathanh@nvidia.com>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1611072020360.3501@nanos
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      7ee7e87d
    • bpf: fix map not being uncharged during map creation failure · 20b2b24f
      Committed by Daniel Borkmann
      In map_create(), we first find and create the map, then once that
      succeeded, we charge it to the user's RLIMIT_MEMLOCK, and then fetch
      a new anon fd through anon_inode_getfd(). The problem is, once the
      latter fails, e.g. due to the RLIMIT_NOFILE limit, we only destruct
      the map via map->ops->map_free(), but without uncharging the previously
      locked memory first. That means that the user_struct allocation is
      leaked and the accounted RLIMIT_MEMLOCK memory is never released.
      Make the label names in the fix consistent with bpf_prog_load().
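      
      A hedged sketch of the corrected error path (label names as described
      above, bodies abridged):
      
        fd = anon_inode_getfd("bpf-map", &bpf_map_fops, map,
                              O_RDWR | O_CLOEXEC);
        if (fd < 0)
                goto free_map;  /* must also undo the memlock charge */
        return fd;
      
        free_map:
                bpf_map_uncharge_memlock(map);
        free_map_nouncharge:
                map->ops->map_free(map);
                return err;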
      
      Fixes: aaac3ba9 ("bpf: charge user for creation of BPF maps and programs")
      Signed-off-by: NDaniel Borkmann <daniel@iogearbox.net>
      Acked-by: NAlexei Starovoitov <ast@kernel.org>
      Signed-off-by: NDavid S. Miller <davem@davemloft.net>
      20b2b24f
    • bpf: fix htab map destruction when extra reserve is in use · 483bed2b
      Committed by Daniel Borkmann
      Commit a6ed3ea6 ("bpf: restore behavior of bpf_map_update_elem")
      added an extra per-cpu reserve to the hash table map to restore old
      behaviour from pre-prealloc times. When non-prealloc is in use for a
      map, the problem is that once a hash table extra element has been
      linked into the hash-table, and the hash table is destroyed due to
      refcount dropping to zero, then htab_map_free() -> delete_all_elements()
      will walk the whole hash table and drop all elements via htab_elem_free().
      The element from the extra reserve is then fed to the wrong backend
      allocator and eventually freed twice.
      
      Fixes: a6ed3ea6 ("bpf: restore behavior of bpf_map_update_elem")
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      483bed2b
  5. 04 Nov, 2016 (1 commit)
  6. 03 Nov, 2016 (2 commits)
  7. 02 Nov, 2016 (1 commit)
  8. 01 Nov, 2016 (1 commit)
  9. 28 Oct, 2016 (4 commits)
    • perf/powerpc: Don't call perf_event_disable() from atomic context · 5aab90ce
      Committed by Jiri Olsa
      The trinity syscall fuzzer triggered the following WARN() on powerpc:
      
        WARNING: CPU: 9 PID: 2998 at arch/powerpc/kernel/hw_breakpoint.c:278
        ...
        NIP [c00000000093aedc] .hw_breakpoint_handler+0x28c/0x2b0
        LR [c00000000093aed8] .hw_breakpoint_handler+0x288/0x2b0
        Call Trace:
        [c0000002f7933580] [c00000000093aed8] .hw_breakpoint_handler+0x288/0x2b0 (unreliable)
        [c0000002f7933630] [c0000000000f671c] .notifier_call_chain+0x7c/0xf0
        [c0000002f79336d0] [c0000000000f6abc] .__atomic_notifier_call_chain+0xbc/0x1c0
        [c0000002f7933780] [c0000000000f6c40] .notify_die+0x70/0xd0
        [c0000002f7933820] [c00000000001a74c] .do_break+0x4c/0x100
        [c0000002f7933920] [c0000000000089fc] handle_dabr_fault+0x14/0x48
      
      Followed by a lockdep warning:
      
        ===============================
        [ INFO: suspicious RCU usage. ]
        4.8.0-rc5+ #7 Tainted: G        W
        -------------------------------
        ./include/linux/rcupdate.h:556 Illegal context switch in RCU read-side critical section!
      
        other info that might help us debug this:
      
        rcu_scheduler_active = 1, debug_locks = 0
        2 locks held by ls/2998:
         #0:  (rcu_read_lock){......}, at: [<c0000000000f6a00>] .__atomic_notifier_call_chain+0x0/0x1c0
         #1:  (rcu_read_lock){......}, at: [<c00000000093ac50>] .hw_breakpoint_handler+0x0/0x2b0
      
        stack backtrace:
        CPU: 9 PID: 2998 Comm: ls Tainted: G        W       4.8.0-rc5+ #7
        Call Trace:
        [c0000002f7933150] [c00000000094b1f8] .dump_stack+0xe0/0x14c (unreliable)
        [c0000002f79331e0] [c00000000013c468] .lockdep_rcu_suspicious+0x138/0x180
        [c0000002f7933270] [c0000000001005d8] .___might_sleep+0x278/0x2e0
        [c0000002f7933300] [c000000000935584] .mutex_lock_nested+0x64/0x5a0
        [c0000002f7933410] [c00000000023084c] .perf_event_ctx_lock_nested+0x16c/0x380
        [c0000002f7933500] [c000000000230a80] .perf_event_disable+0x20/0x60
        [c0000002f7933580] [c00000000093aeec] .hw_breakpoint_handler+0x29c/0x2b0
        [c0000002f7933630] [c0000000000f671c] .notifier_call_chain+0x7c/0xf0
        [c0000002f79336d0] [c0000000000f6abc] .__atomic_notifier_call_chain+0xbc/0x1c0
        [c0000002f7933780] [c0000000000f6c40] .notify_die+0x70/0xd0
        [c0000002f7933820] [c00000000001a74c] .do_break+0x4c/0x100
        [c0000002f7933920] [c0000000000089fc] handle_dabr_fault+0x14/0x48
      
      While it looks like the first WARN() is probably valid, the other one is
      triggered by disabling the event via perf_event_disable() from atomic context.
      
      The event is disabled here in case we were not able to emulate
      the instruction that hit the breakpoint. By disabling the event
      we unschedule the event and make sure it's not scheduled back.
      
      But we can't call perf_event_disable() from atomic context; instead,
      we need to use the event's pending_disable irq_work mechanism to disable it.
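      
      A minimal sketch of that mechanism, assuming the helper name
      perf_event_disable_inatomic() (the caller condition below is
      illustrative); the irq_work handler then performs the actual disable
      from a safe, non-atomic context:
      
        void perf_event_disable_inatomic(struct perf_event *event)
        {
                event->pending_disable = 1;
                irq_work_queue(&event->pending);
        }
      
        /* in the breakpoint handler, atomic context: */
        if (!emulation_succeeded)       /* hypothetical condition name */
                perf_event_disable_inatomic(bp);
      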
      Reported-by: Jan Stancek <jstancek@redhat.com>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20161026094824.GA21397@krava
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5aab90ce
    • perf/core: Protect PMU device removal with a 'pmu_bus_running' check, to fix... · 0933840a
      Committed by Jiri Olsa
      perf/core: Protect PMU device removal with a 'pmu_bus_running' check, to fix CONFIG_DEBUG_TEST_DRIVER_REMOVE=y kernel panic
      
      CAI Qian reported a crash in the PMU uncore device removal code,
      enabled by the CONFIG_DEBUG_TEST_DRIVER_REMOVE=y option:
      
        https://marc.info/?l=linux-kernel&m=147688837328451
      
      The reason for the crash is that perf_pmu_unregister() tries to remove
      a PMU device which has not been added at that point. We add PMU devices
      only after pmu_bus is registered, which happens in the
      perf_event_sysfs_init() call and sets the 'pmu_bus_running' flag.
      
      The fix is to get the 'pmu_bus_running' flag state at the point
      the PMU is taken out of the PMU list and remove the device
      later only if it's set.
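      
      Sketching the idea (simplified, not the verbatim patch): snapshot the
      flag while the PMU list is serialized, and only touch the device when
      it was actually added:
      
        bool remove_device;
      
        mutex_lock(&pmus_lock);
        list_del_rcu(&pmu->entry);
        remove_device = pmu_bus_running;  /* snapshot under the lock */
        mutex_unlock(&pmus_lock);
      
        /* ... regular teardown of the PMU contexts ... */
      
        if (remove_device) {
                device_del(pmu->dev);
                put_device(pmu->dev);
        }
      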
      Reported-by: CAI Qian <caiqian@redhat.com>
      Tested-by: CAI Qian <caiqian@redhat.com>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20161020111011.GA13361@krava
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0933840a
    • kcov: properly check if we are in an interrupt · b274c0bb
      Committed by Andrey Konovalov
      in_interrupt() returns a nonzero value when we are either in an
      interrupt or have bh disabled via local_bh_disable().  Since we are
      interested in only ignoring coverage from actual interrupts, do a proper
      check instead of just calling in_interrupt().
      
      As a result of this change, kcov will start to collect coverage from
      within local_bh_disable()/local_bh_enable() sections.
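      
      A hedged reconstruction of the check (the exact mask in the patch may
      differ): look at preempt_count() directly, so a local_bh_disable()
      section does not count as interrupt context while hardirq,
      softirq-serving and NMI contexts still do:
      
        struct task_struct *t = current;
      
        /*
         * Open-coded instead of in_interrupt(): SOFTIRQ_OFFSET (rather
         * than the whole SOFTIRQ_MASK) means bh-disabled task context is
         * still traced, while actually-running softirqs are ignored.
         */
        if (!t || (preempt_count() & (HARDIRQ_MASK | SOFTIRQ_OFFSET | NMI_MASK)))
                return;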
      
      Link: http://lkml.kernel.org/r/1476115803-20712-1-git-send-email-andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Acked-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Nicolai Stange <nicstange@gmail.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b274c0bb
    • mm: remove per-zone hashtable of bitlock waitqueues · 9dcb8b68
      Committed by Linus Torvalds
      The per-zone waitqueues exist because of a scalability issue with the
      page waitqueues on some NUMA machines, but it turns out that they hurt
      normal loads, and now with the vmalloced stacks they also end up
      breaking gfs2 that uses a bit_wait on a stack object:
      
           wait_on_bit(&gh->gh_iflags, HIF_WAIT, TASK_UNINTERRUPTIBLE)
      
      where 'gh' can be a reference to the local variable 'mount_gh' on the
      stack of fill_super().
      
      The reason the per-zone hash table breaks for this case is that there is
      no "zone" for virtual allocations, and trying to look up the physical
      page to get at it will fail (with a BUG_ON()).
      
      It turns out that I actually complained to the mm people about the
      per-zone hash table for another reason just a month ago: the zone lookup
      also hurts the regular use of "unlock_page()" a lot, because the zone
      lookup ends up forcing several unnecessary cache misses and generates
      horrible code.
      
      As part of that earlier discussion, we had a much better solution for
      the NUMA scalability issue - by just making the page lock have a
      separate contention bit, the waitqueue doesn't even have to be looked at
      for the normal case.
      
      Peter Zijlstra already has a patch for that, but let's see if anybody
      even notices.  In the meantime, let's fix the actual gfs2 breakage by
      simplifying the bitlock waitqueues and removing the per-zone issue.
      Reported-by: Andreas Gruenbacher <agruenba@redhat.com>
      Tested-by: Bob Peterson <rpeterso@redhat.com>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9dcb8b68
  10. 27 Oct, 2016 (1 commit)
  11. 25 Oct, 2016 (4 commits)
    • timers: Prevent base clock corruption when forwarding · 6bad6bcc
      Committed by Thomas Gleixner
      When a timer is enqueued we try to forward the timer base clock. This
      mechanism has two issues:
      
      1) Forwarding a remote base unlocked
      
      The forwarding function is called from get_target_base() with the current
      timer base lock held. But if the new target base is a different base than
      the current base (can happen with NOHZ, sigh!) then the forwarding is done
      on an unlocked base. This can lead to corruption of base->clk.
      
      Solution is simple: Invoke the forwarding after the target base is locked.
      
      2) Possible corruption due to jiffies advancing
      
      This is similar to the issue in get_next_timer_interrupt() which was fixed
      in the previous patch. jiffies can advance between check and assignment
      and therefore advance base->clk beyond the next expiry value.
      
      So we need to read jiffies into a local variable once and do the checks and
      assignment with the local copy.
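      
      Roughly (a sketch, simplified from the actual patch), the forwarding
      helper then runs only with the target base locked and uses a single
      jiffies snapshot:
      
        /* Called with base->lock held, after the target base was locked */
        static inline void forward_timer_base(struct timer_base *base)
        {
                unsigned long jnow = READ_ONCE(jiffies);  /* one snapshot */
      
                if (!time_after(jnow, base->clk))
                        return;
      
                /* Forward to jiffies, but never past the next expiry */
                if (time_after(base->next_expiry, jnow))
                        base->clk = jnow;
                else
                        base->clk = base->next_expiry;
        }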
      
      Fixes: a683f390 ("timers: Forward the wheel clock whenever possible")
      Reported-by: Ashton Holmes <scoopta@gmail.com>
      Reported-by: Michael Thayer <michael.thayer@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Michal Necasek <michal.necasek@oracle.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: knut.osmundsen@oracle.com
      Cc: stable@vger.kernel.org
      Cc: stern@rowland.harvard.edu
      Cc: rt@linutronix.de
      Link: http://lkml.kernel.org/r/20161022110552.253640125@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      6bad6bcc
    • timers: Prevent base clock rewind when forwarding clock · 041ad7bc
      Committed by Thomas Gleixner
      Ashton and Michael reported that kernel versions 4.8 and later suffer from
      USB timeouts which are caused by the timer wheel rework.
      
      This is caused by a bug in the base clock forwarding mechanism, which leads
      to timers expiring early. The scenario which leads to this is:
      
      run_timers()
        while (jiffies >= base->clk) {
          collect_expired_timers();
          base->clk++;
          expire_timers();
        }          
      
      So base->clk = jiffies + 1. Now the cpu goes idle:
      
      idle()
        get_next_timer_interrupt()
          nextevt = __next_timer_interrupt();
          if (time_after(nextevt, base->clk))
                  base->clk = jiffies;
      
      jiffies has not advanced since run_timers(), so this assignment effectively
      decrements base->clk by one.
      
      base->clk is the index into the timer wheel arrays. So let's assume the
      following state after the base->clk increment in run_timers():
      
       jiffies = 0
       base->clk = 1
      
      A timer gets enqueued with an expiry delta of 63 ticks (which is the case
      with the USB timeout and HZ=250) so the resulting bucket index is:
      
        base->clk + delta = 1 + 63 = 64
      
      The timer goes into the first wheel level. The array size is 64 so it ends
      up in bucket 0, which is correct as it takes 63 ticks to advance base->clk
      to index into bucket 0 again.
      
      If the cpu goes idle before jiffies advance, then the bug in the forwarding
      mechanism sets base->clk back to 0, so the next invocation of run_timers()
      at the next tick will index into bucket 0 and therefore expire the timer 62
      ticks too early.
      
      Instead of blindly setting base->clk to jiffies we must make the forwarding
      conditional on jiffies > base->clk, but we cannot use jiffies for this as
      we might run into the following issue:
      
        if (time_after(jiffies, base->clk)) {
          if (time_after(nextevt, base->clk))
             base->clk = jiffies;
      
      jiffies can increment between the check and the assignment far enough to
      advance beyond nextevt. So we need to use a stable value for checking.
      
      get_next_timer_interrupt() has the basej argument, which is the jiffies
      value snapshot taken in the calling code. So we can just use that.
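      
      A sketch of the resulting conditional forwarding, using the stable
      basej snapshot (simplified from the actual patch):
      
        /* basej is the caller's snapshot, so it cannot change between checks */
        if (time_after(basej, base->clk)) {
                if (time_after(nextevt, basej))
                        base->clk = basej;
                else if (time_after(nextevt, base->clk))
                        base->clk = nextevt;
        }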
      
      Thanks to Ashton for bisecting and providing trace data!
      
      Fixes: a683f390 ("timers: Forward the wheel clock whenever possible")
      Reported-by: Ashton Holmes <scoopta@gmail.com>
      Reported-by: Michael Thayer <michael.thayer@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Michal Necasek <michal.necasek@oracle.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: knut.osmundsen@oracle.com
      Cc: stable@vger.kernel.org
      Cc: stern@rowland.harvard.edu
      Cc: rt@linutronix.de
      Link: http://lkml.kernel.org/r/20161022110552.175308322@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      041ad7bc
    • timers: Lock base for same bucket optimization · 4da9152a
      Committed by Thomas Gleixner
      Linus stumbled over the unlocked modification of the timer expiry value in
      mod_timer() which is an optimization for timers which stay in the same
      bucket - due to the bucket granularity - despite their expiry time getting
      updated.
      
      The optimization itself still makes sense even if we take the lock, because
      in case that the bucket stays the same, we avoid the pointless
      queue/enqueue dance.
      
      Make the check and the modification of timer->expires protected by the base
      lock and shuffle the remaining code around so we can keep the lock held
      when we actually have to requeue the timer to a different bucket.
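      
      Heavily simplified, the same-bucket fast path in __mod_timer() becomes
      something like this sketch, now under the base lock (the ret and
      out_unlock names are illustrative):
      
        base = lock_timer_base(timer, &flags);
      
        if (timer_pending(timer)) {
                idx = calc_wheel_index(expires, base->clk);
      
                /* Same bucket: just update the expiry, skip requeueing */
                if (idx == timer_get_idx(timer)) {
                        timer->expires = expires;
                        ret = 1;
                        goto out_unlock;
                }
        }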
      
      Fixes: f00c0afd ("timers: Implement optimization for same expiry time in mod_timer()")
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1610241711220.4983@nanos
      Cc: stable@vger.kernel.org
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      4da9152a
    • timers: Plug locking race vs. timer migration · b831275a
      Committed by Thomas Gleixner
      Linus noticed that lock_timer_base() lacks a READ_ONCE() for accessing the
      timer flags. As a consequence the compiler is allowed to reload the flags
      between the initial check for TIMER_MIGRATING and the following timer base
      computation and the spin lock of the base.
      
      While this has not been observed (yet), we need to make sure that it never
      happens.
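      
      A sketch of the fixed helper (close to the upstream code, lightly
      abridged): read the flags once, and re-validate them after locking:
      
        static struct timer_base *lock_timer_base(struct timer_list *timer,
                                                  unsigned long *flags)
        {
                for (;;) {
                        u32 tf = READ_ONCE(timer->flags);  /* single read */
                        struct timer_base *base;
      
                        if (!(tf & TIMER_MIGRATING)) {
                                base = get_timer_base(tf);
                                spin_lock_irqsave(&base->lock, *flags);
                                if (timer->flags == tf)
                                        return base;
                                spin_unlock_irqrestore(&base->lock, *flags);
                        }
                        cpu_relax();
                }
        }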
      
      Fixes: 0eeda71b ("timer: Replace timer base by a cpu index")
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1610241711220.4983@nanos
      Cc: stable@vger.kernel.org
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      b831275a
  12. 24 Oct, 2016 (1 commit)
    • PM / suspend: Fix missing KERN_CONT for suspend message · 1adb469b
      Committed by Jon Hunter
      Commit 4bcc595c ("printk: reinstate KERN_CONT for printing
      continuation lines") exposed a missing KERN_CONT from one of the
      messages shown on entering suspend. With v4.9-rc1, the 'done.' shown
      after syncing the filesystems no longer appears as a continuation but
      a new message with its own timestamp.
      
      [    9.259566] PM: Syncing filesystems ... [    9.264119] done.
      
      Fix this by adding the KERN_CONT log level for the 'done.' part of the
      message seen after syncing filesystems. While we are at it, convert
      these suspend printks to pr_info and pr_cont, respectively.
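      
      The change amounts to something like this sketch (the original
      printk() calls converted to pr_info()/pr_cont()):
      
        pr_info("PM: Syncing filesystems ... ");
        sys_sync();
        pr_cont("done.\n");
      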
      Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      1adb469b
  13. 23 Oct, 2016 (1 commit)
  14. 22 Oct, 2016 (1 commit)
  15. 21 Oct, 2016 (2 commits)
  16. 20 Oct, 2016 (1 commit)
    • printk: suppress empty continuation lines · 8835ca59
      Committed by Linus Torvalds
      We have a fairly common pattern where you print several things as
      continuations on one single line in a loop, and then at the end you do
      
      	printk(KERN_CONT "\n");
      
      to flush the buffered output.
      
      But if the output was flushed by something else (concurrent printk
      activity, or just system logging), we don't want that final flushing to
      just print an empty line.
      
      So just suppress empty continuation lines when they couldn't be merged
      into the line they are a continuation of.
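      
      The guard is tiny; a hedged sketch of where it lands in the log-output
      path (variable names approximate):
      
        /*
         * An empty continuation line that could not be merged with a
         * buffered earlier line has nothing left to say: drop it rather
         * than emit a blank record.
         */
        if (!text_len && (lflags & LOG_CONT))
                return 0;
      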
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8835ca59
  17. 19 Oct, 2016 (3 commits)
  18. 17 Oct, 2016 (1 commit)
  19. 16 Oct, 2016 (1 commit)
  20. 13 Oct, 2016 (1 commit)
    • Disable the __builtin_return_address() warning globally after all · ef6000b4
      Committed by Linus Torvalds
      This effectively reverts commit 377ccbb4 ("Makefile: Mute warning
      for __builtin_return_address(>0) for tracing only") because it turns out
      that it really isn't tracing only - it's all over the tree.
      
      We already also had the warning disabled separately for mm/usercopy.c
      (which this commit also removes), and it turns out that we will also
      want to disable it for get_lock_parent_ip(), that is used for at least
      TRACE_IRQFLAGS.  Which (when enabled) ends up being all over the tree.
      
      Steven Rostedt had a patch that tried to limit it to just the config
      options that actually triggered this, but quite frankly, the extra
      complexity and abstraction just isn't worth it.  We have never actually
      had a case where the warning is actually useful, so let's just disable
      it globally and not worry about it.
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Anvin <hpa@zytor.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ef6000b4
  21. 12 Oct, 2016 (7 commits)
    • hung_task: allow hung_task_panic when hung_task_warnings is 0 · 48a6d64e
      Committed by John Siddle
      Previously hung_task_panic would not be respected if enabled after
      hung_task_warnings had already been decremented to 0.
      
      Permit the kernel to panic if hung_task_panic is enabled after
      hung_task_warnings has already been decremented to 0 and another task
      hangs for hung_task_timeout_secs seconds.
      
      Check if hung_task_panic is enabled so we don't return prematurely, and
      check if hung_task_warnings is non-zero so we don't print the warning
      unnecessarily.
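      
      Conceptually the check becomes (a sketch, not the exact diff):
      
        /* Bail out only when neither warning nor panic is requested */
        if (!sysctl_hung_task_warnings && !sysctl_hung_task_panic)
                return;
      
        if (sysctl_hung_task_warnings) {
                if (sysctl_hung_task_warnings > 0)
                        sysctl_hung_task_warnings--;
                /* ... print the "blocked for more than N seconds" report ... */
        }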
      
      [akpm@linux-foundation.org: fix off-by-one]
      Link: http://lkml.kernel.org/r/1473450214-4049-1-git-send-email-jsiddle@redhat.com
      Signed-off-by: John Siddle <jsiddle@redhat.com>
      Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      48a6d64e
    • kthread: better support freezable kthread workers · dbf52682
      Committed by Petr Mladek
      This patch allows making a kthread worker freezable via a new @flags
      parameter. It will make it possible to avoid an init work in some kthreads.
      
      It currently does not affect the function of kthread_worker_fn(),
      but it might help to do some optimizations or fixes eventually.
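      
      Usage would look roughly like this (a sketch; KTW_FREEZABLE is the
      flag introduced by this series):
      
        struct kthread_worker *worker;
      
        /* Create a worker whose kthread marks itself freezable */
        worker = kthread_create_worker(KTW_FREEZABLE, "my_worker");
        if (IS_ERR(worker))
                return PTR_ERR(worker);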
      
      I currently do not know about any other use for the @flags
      parameter but I believe that we will want more flags
      in the future.
      
      Finally, I hope that it will not cause confusion with @flags member
      in struct kthread. Well, I guess that we will want to rework the
      basic kthreads implementation once all kthreads are converted into
      kthread workers or workqueues. It is possible that we will merge
      the two structures.
      
      Link: http://lkml.kernel.org/r/1470754545-17632-12-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dbf52682
    • kthread: allow to modify delayed kthread work · 9a6b06c8
      Committed by Petr Mladek
      There are situations when we need to modify the delay of a delayed kthread
      work. For example, when the work depends on an event and the initial delay
      means a timeout. Then we want to queue the work immediately when the event
      happens.
      
      This patch implements kthread_mod_delayed_work(), inspired by workqueues.
      It cancels the timer, removes the work from any worker list and queues it
      again with the given timeout.
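      
      For example (a sketch), a caller that normally arms a long timeout can
      requeue the work immediately when its event fires:
      
        /* arm the timeout */
        kthread_queue_delayed_work(worker, &dwork, msecs_to_jiffies(5000));
      
        /* the event arrived: run the work now instead of waiting */
        kthread_mod_delayed_work(worker, &dwork, 0);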
      
      A very special case is when the work is being canceled at the same time.
      It might happen because of the regular kthread_cancel_delayed_work_sync()
      or by another kthread_mod_delayed_work(). In this case, we do nothing and
      let the other operation win. This should not normally happen as the caller
      is supposed to synchronize these operations a reasonable way.
      
      Link: http://lkml.kernel.org/r/1470754545-17632-11-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9a6b06c8
    • kthread: allow to cancel kthread work · 37be45d4
      Committed by Petr Mladek
      We are going to use kthread workers more widely and sometimes we will need
      to make sure that the work is neither pending nor running.
      
      This patch implements kthread_cancel_*_sync() operations, inspired by
      workqueues.  We are synchronized against the other operations via the
      worker lock, and we use del_timer_sync() and a counter to count parallel
      cancel operations.  Therefore the implementation can be simpler.
      
      First, we check if a worker is assigned.  If not, the work has never been
      queued after it was initialized.
      
      Second, we take the worker lock.  It must be the right one.  The work must
      not be assigned to another worker unless it is initialized in between.
      
      Third, we try to cancel the timer when it exists.  The timer is deleted
      synchronously to make sure that the timer callback is not running.  We
      need to temporarily release the worker->lock to avoid a possible deadlock
      with the callback.  In the meantime, we set the work->canceling counter to
      avoid any queuing.
      
      Fourth, we try to remove the work from a worker list. It might be
      the list of either normal or delayed works.
      
      Fifth, if the work is running, we call kthread_flush_work().  It might
      take an arbitrary time.  We need to release the worker->lock again.  In the
      meantime, we again block any queuing by the canceling counter.
      
      As already mentioned, the check for a pending kthread work is done under a
      lock.  Compared with workqueues, we do not need to fight for a single
      PENDING bit to block other operations.  Therefore we do not suffer from
      the thundering storm problem and all parallel canceling jobs might use
      kthread_flush_work().  Any queuing is blocked until the counter gets zero.
      
      Link: http://lkml.kernel.org/r/1470754545-17632-10-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      37be45d4
    • kthread: initial support for delayed kthread work · 22597dc3
      Committed by Petr Mladek
      We are going to use kthread_worker more widely and delayed works
      will be pretty useful.
      
      The implementation is inspired by workqueues.  It uses a timer to queue
      the work after the requested delay.  If the delay is zero, the work is
      queued immediately.
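      
      Basic usage would look like this sketch (using the init and queueing
      helpers this series adds; the work function is illustrative):
      
        static void my_work_fn(struct kthread_work *work)
        {
                /* runs on the worker kthread after the delay */
        }
      
        struct kthread_delayed_work dwork;
      
        kthread_init_delayed_work(&dwork, my_work_fn);
        /* a delay of 0 queues the work immediately */
        kthread_queue_delayed_work(worker, &dwork, msecs_to_jiffies(400));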
      
      Compared with workqueues, each work is associated with a single worker
      (kthread).  Therefore the implementation can be much simpler.  In
      particular, we use the worker->lock to synchronize all the operations with
      the work.  We do not need any atomic operation with a flags variable.
      
      In fact, we do not need any state variable at all.  Instead, we add a list
      of delayed works into the worker.  Then the pending work is listed either
      in the list of queued or delayed works.  And the existing check of pending
      works is the same even for the delayed ones.
      
      A work must not be assigned to another worker unless reinitialized.
      Therefore the timer handler might expect that dwork->work->worker is valid
      and it could simply take the lock.  We just add some sanity checks to help
      with debugging a potential misuse.
      
      Link: http://lkml.kernel.org/r/1470754545-17632-9-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      22597dc3
    • kthread: detect when a kthread work is used by more workers · 8197b3d4
      Committed by Petr Mladek
      Nothing currently prevents a work from being queued for a kthread worker
      while it is already running on another one.  This means that the work
      might run in parallel on more than one worker.  Also some operations,
      e.g. flush, are not reliable.
      
      This problem will be even more visible after we add kthread_cancel_work()
      function.  It will only have "work" as the parameter and will use
      worker->lock to synchronize with others.
      
      Well, normally this is not a problem because the API users are sane.
      But bugs might happen and users also might be crazy.
      
      This patch adds a warning when we try to insert the work for another
      worker.  It does not fully prevent the misuse because it would make the
      code much more complicated without a big benefit.
      
      It adds the same warning also into kthread_flush_work() instead of the
      repeated attempts to get the right lock.
      
      A side effect is that one needs to explicitly reinitialize the work if it
      must be queued into another worker.  This is needed, for example, when the
      worker is stopped and started again.  It is a bit inconvenient.  But it
      looks like a good compromise between the stability and complexity.
      
      I have double checked all existing users of the kthread worker API and
      they all seem to initialize the work after the worker gets started.
      
      Just for completeness, the patch adds a check that the work is not already
      in a queue.
      
      The patch also puts all the checks into a separate function.  It will be
      reused when implementing delayed works.
      
      Link: http://lkml.kernel.org/r/1470754545-17632-8-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8197b3d4
    • kthread: add kthread_destroy_worker() · 35033fe9
      Committed by Petr Mladek
      The current kthread worker users call flush() and stop() explicitly.
      This function does the same plus it frees the kthread_worker struct
      in one call.
      
      It is supposed to be used together with kthread_create_worker*() that
      allocates struct kthread_worker.
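      
      Typical pairing would be (a sketch; my_work is illustrative):
      
        struct kthread_worker *worker;
      
        worker = kthread_create_worker(0, "my_worker");
        if (IS_ERR(worker))
                return PTR_ERR(worker);
      
        kthread_queue_work(worker, &my_work);
      
        /* later: flush pending work, stop the thread, free the struct */
        kthread_destroy_worker(worker);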
      
      Link: http://lkml.kernel.org/r/1470754545-17632-7-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      35033fe9