drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c does not exist at commit "0a4201fcd49a859b686e0d7a31891ced0fe3a5ff"
  1. 13 Nov 2013 (5 commits)
  2. 08 Nov 2013 (1 commit)
  3. 30 Sep 2013 (2 commits)
    • nohz: Drop generic vtime obsolete dependency on CONFIG_64BIT · ff3fb254
      Committed by Kevin Hilman
      The CONFIG_64BIT requirement on vtime can finally be removed,
      since we now depend on HAVE_VIRT_CPU_ACCOUNTING_GEN, which
      already guarantees that the arch can handle nsec-based
      cputime_t safely.
      Signed-off-by: Kevin Hilman <khilman@linaro.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Arm Linux <linux-arm-kernel@lists.infradead.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      ff3fb254
    • vtime: Add HAVE_VIRT_CPU_ACCOUNTING_GEN Kconfig · 554b0004
      Committed by Kevin Hilman
      With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit. In order
      to use that feature, arch code should be audited to ensure there are no
      races in concurrent read/write of cputime_t. For example,
      reading/writing 64-bit cputime_t on some 32-bit arches may require
      multiple accesses for low and high value parts, so proper locking
      is needed to protect against concurrent accesses.
      
      Therefore, add CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN which arches can
      enable after they've been audited for potential races.
      
      This option is automatically enabled on 64-bit platforms.
      
      Feature requested by Frederic Weisbecker.
      Signed-off-by: Kevin Hilman <khilman@linaro.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Arm Linux <linux-arm-kernel@lists.infradead.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      554b0004
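      The audit requirement exists because a 64-bit nsec-based cputime_t spans two
      machine words on a 32-bit arch, so a plain load can observe a half-updated
      value. Below is a minimal illustrative sketch (not code from this commit) of
      one way an arch could publish such a value tear-free with the kernel's
      seqcount API; the variable and helper names are invented for the example.

      #include <linux/seqlock.h>
      #include <linux/types.h>

      static seqcount_t cputime_seq = SEQCNT_ZERO(cputime_seq);
      static u64 cputime_ns;          /* 64-bit value: two words on a 32-bit arch */

      /* Writer side (e.g. the tick/vtime accounting path). Writers must already
       * be serialized against each other, e.g. by running on a single CPU. */
      static void cputime_add_ns(u64 delta)
      {
              write_seqcount_begin(&cputime_seq);
              cputime_ns += delta;            /* non-atomic 64-bit update */
              write_seqcount_end(&cputime_seq);
      }

      /* Reader side: retry if a writer raced with the read. */
      static u64 cputime_read_ns(void)
      {
              unsigned int seq;
              u64 val;

              do {
                      seq = read_seqcount_begin(&cputime_seq);
                      val = cputime_ns;
              } while (read_seqcount_retry(&cputime_seq, seq));

              return val;
      }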
  4. 25 Sep 2013 (1 commit)
  5. 23 Sep 2013 (1 commit)
  6. 12 Sep 2013 (3 commits)
    • initmpfs: use initramfs if rootfstype= or root= specified · 6e19eded
      Committed by Rob Landley
      Add the command-line option rootfstype=ramfs to obtain the old initramfs
      behavior, and use ramfs instead of tmpfs for the stub when root= is
      defined (for cosmetic reasons); a sketch of this selection rule appears
      below this entry.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Rob Landley <rob@landley.net>
      Cc: Jeff Layton <jlayton@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Stephen Warren <swarren@nvidia.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jim Cromie <jim.cromie@gmail.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6e19eded
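      A rough sketch of the selection rule described above, paraphrased rather
      than taken from the patch: the helper name is invented, the two variables
      merely stand in for the command-line bookkeeping in init/do_mounts.c, and
      the actual wiring into rootfs setup is omitted.

      #include <linux/kconfig.h>
      #include <linux/string.h>
      #include <linux/types.h>

      static char saved_root_name[64];        /* filled from root= (assumed) */
      static char *root_fs_names;             /* filled from rootfstype= (assumed) */

      static bool rootfs_wants_tmpfs(void)
      {
              /* root= given: keep the plain ramfs stub (cosmetic reasons). */
              if (saved_root_name[0])
                      return false;
              /* rootfstype=ramfs asks for the old initramfs behavior. */
              if (root_fs_names && strstr(root_fs_names, "ramfs"))
                      return false;
              return IS_ENABLED(CONFIG_TMPFS);
      }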
    • initmpfs: make rootfs use tmpfs when CONFIG_TMPFS enabled · 16203a7a
      Committed by Rob Landley
      Conditionally call the appropriate fs_init and fill_super functions.
      Add a use-once guard to shmem_init() so that a second call simply
      succeeds; both mechanisms are sketched below this entry.
      
      (Note that IS_ENABLED() is a compile-time constant, so dead-code
      elimination removes the unused function calls when CONFIG_TMPFS is disabled.)
      Signed-off-by: Rob Landley <rob@landley.net>
      Cc: Jeff Layton <jlayton@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Stephen Warren <swarren@nvidia.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jim Cromie <jim.cromie@gmail.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      16203a7a
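      A sketch of the two mechanisms mentioned above, the IS_ENABLED() dispatch
      and the use-once guard. The wrapper function and both bodies are stand-ins
      for illustration, not the literal patch; shmem_init() and init_ramfs_fs()
      are real kernel entry points but are only stubbed here.

      #include <linux/init.h>
      #include <linux/kconfig.h>
      #include <linux/types.h>

      int shmem_init(void);           /* tmpfs init (stand-in body below) */
      int init_ramfs_fs(void);        /* ramfs init (declaration only here) */

      /* IS_ENABLED() folds to 0 or 1 at compile time, so with CONFIG_TMPFS=n the
       * shmem_init() call is dead code and is eliminated entirely. */
      static int __init rootfs_init_fs(void)
      {
              if (IS_ENABLED(CONFIG_TMPFS))
                      return shmem_init();
              return init_ramfs_fs();
      }

      /* Use-once guard: rootfs may call shmem_init() early; the regular initcall
       * that runs later then simply succeeds instead of registering tmpfs twice. */
      int shmem_init(void)
      {
              static bool initialized;

              if (initialized)
                      return 0;
              initialized = true;
              /* ... real filesystem registration would go here ... */
              return 0;
      }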
    • initmpfs: move rootfs code from fs/ramfs/ to init/ · 57f150a5
      Committed by Rob Landley
      When the rootfs code was a wrapper around ramfs, having them in the same
      file made sense.  Now that it can wrap another filesystem type, move it in
      with the init code instead.
      
      This also allows a subsequent patch to access the rootfstype= command-line
      argument.
      Signed-off-by: Rob Landley <rob@landley.net>
      Cc: Jeff Layton <jlayton@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Stephen Warren <swarren@nvidia.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jim Cromie <jim.cromie@gmail.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      57f150a5
  7. 24 Aug 2013 (1 commit)
  8. 19 Aug 2013 (1 commit)
  9. 16 Aug 2013 (2 commits)
  10. 14 Aug 2013 (1 commit)
    • context_tracking: Ground setup for static key use · 65f382fd
      Committed by Frederic Weisbecker
      Prepare for using a static key in the context tracking subsystem.
      This will help optimize the off case for its many users (sketched
      below this entry):
      
      * user_enter, user_exit, exception_enter, exception_exit, guest_enter,
        guest_exit, vtime_*()
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kevin Hilman <khilman@linaro.org>
      65f382fd
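      The pattern this commit lays the ground for, sketched with the 3.x-era
      static key API; the key name and the hook bodies are illustrative, not
      the final upstream code.

      #include <linux/jump_label.h>

      static struct static_key context_tracking_key = STATIC_KEY_INIT_FALSE;

      /* Flipped once when some CPU actually needs context tracking. */
      static void context_tracking_activate(void)
      {
              static_key_slow_inc(&context_tracking_key);
      }

      static void __context_tracking_user_enter(void)
      {
              /* real bookkeeping (RCU user QS, vtime) would go here */
      }

      /* Hot path on every kernel->user transition: while the key is off,
       * static_key_false() is patched to a straight-line no-op branch, so the
       * off case costs almost nothing. */
      static inline void user_enter(void)
      {
              if (static_key_false(&context_tracking_key))
                      __context_tracking_user_enter();
      }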
  11. 13 Aug 2013 (2 commits)
    • slub: don't use cpu partial pages on UP · b39ffbf8
      Committed by Uwe Kleine-König
      CPU partial pages are used to avoid contention, which does not exist in
      the UP case, so let SLUB_CPU_PARTIAL depend on SMP.
      Acked-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
      b39ffbf8
    • context_tracking: Remove full dynticks' hacky dependency on wide context tracking · d84d27a4
      Committed by Frederic Weisbecker
      Now that the full dynticks subsystem only enables context tracking
      on full dynticks CPUs, let's remove the dependency on CONTEXT_TRACKING_FORCE.
      
      This dependency was a hack to enable context tracking globally for the
      full dynticks subsystem until the latter became able to enable it in a
      more fine-grained, per-CPU fashion.
      
      Now CONTEXT_TRACKING_FORCE only stands for testing on arches that are
      working on context tracking support while full dynticks can't be used
      yet due to unmet dependencies. It simulates a system where all CPUs are
      full dynticks so that RCU user extended quiescent states and dynticks
      cputime accounting can be tested on the given arch.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kevin Hilman <khilman@linaro.org>
      d84d27a4
  12. 15 Jul 2013 (1 commit)
    • kernel: delete __cpuinit usage from all core kernel files · 0db0628d
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  The fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      This removes all the uses of the __cpuinit macros from C files in
      the core kernel directories (kernel, init, lib, mm, and include)
      that don't really have a specific maintainer.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      0db0628d
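      For context, a hypothetical before/after showing the mechanical change
      this commit makes throughout the core kernel; the function and variable
      are invented for the example, and the description of __cpuinit is only
      approximate.

      /* Before (3.x era): code used only while bringing a CPU up was annotated
       * so that, roughly, it could be discarded like __init when CPU hotplug
       * support was configured out:
       *
       *     static void __cpuinit foo_prepare_cpu(int cpu) { ... }
       *     static int __cpuinitdata foo_saved_cpu;
       *
       * After this commit the annotations are simply dropped and the code
       * stays resident: */
      static int foo_saved_cpu;

      static void foo_prepare_cpu(int cpu)
      {
              foo_saved_cpu = cpu;            /* per-CPU bring-up work (illustrative) */
      }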
  13. 10 Jul 2013 (1 commit)
  14. 08 Jul 2013 (1 commit)
  15. 04 Jul 2013 (3 commits)
  16. 25 Jun 2013 (1 commit)
    • build some drivers only when compile-testing · 4bb16672
      Committed by Jiri Slaby
      Some drivers can be built on more platforms than they run on. This is
      a burden for users and distributors who package a kernel. They have to
      manually deselect some (for them useless) drivers when updating their
      configs via oldconfig. And yet, sometimes it is even impossible to
      disable the drivers without patching the kernel.
      
      Introduce a new config option COMPILE_TEST and make all those drivers
      depend on the platform they run on, or on the COMPILE_TEST option.
      Now, when users/distributors choose COMPILE_TEST=n they will not have
      the drivers in their allmodconfig setups, but developers still can
      compile-test them with COMPILE_TEST=y.
      
      The drivers where we now use this new option:
      * PTP_1588_CLOCK_PCH: The PCH EG20T is only compatible with Intel Atom
        processors so it should depend on x86.
      * FB_GEODE: Geode is 32-bit only so only enable it for X86_32.
      * USB_CHIPIDEA_IMX: The OF_DEVICE dependency will be met on powerpc
        systems -- which do not actually support the hardware via that
        method.
      * INTEL_MID_PTI: It is specific to the Penwell type of Intel Atom
        device.
      
      [v2]
      * remove EXPERT dependency
      
      [gregkh - remove chipidea portion, as it's incorrect, and also doesn't
       apply to my driver-core tree]
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: linux-usb@vger.kernel.org
      Cc: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>
      Cc: linux-geode@lists.infradead.org
      Cc: linux-fbdev@vger.kernel.org
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: netdev@vger.kernel.org
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: "Keller, Jacob E" <jacob.e.keller@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      4bb16672
  17. 13 Jun 2013 (1 commit)
  18. 11 Jun 2013 (4 commits)
    • rcu: Remove TINY_PREEMPT_RCU · 127781d1
      Committed by Paul E. McKenney
      TINY_PREEMPT_RCU adds significant code and complexity, but does not
      offer commensurate benefits.  People currently using TINY_PREEMPT_RCU
      can get much better memory footprint with TINY_RCU, or, if they really
      need preemptible RCU, they can use TREE_PREEMPT_RCU with a relatively
      minor degradation in memory footprint.  Please note that this move
      has been widely publicized on LKML (https://lkml.org/lkml/2012/11/12/545)
      and on LWN (http://lwn.net/Articles/541037/).
      
      This commit therefore removes TINY_PREEMPT_RCU.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      [ paulmck: Updated to eliminate #else in rcutiny.h as suggested by Josh ]
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      127781d1
    • rcu: Apply Dave Jones's NOCB Kconfig help feedback · 676c3dc2
      Committed by Paul E. McKenney
      The Kconfig help text for the RCU_NOCB_CPU_NONE, RCU_NOCB_CPU_ZERO,
      and RCU_NOCB_CPU_ALL Kconfig options was unclear, so this commit
      adds a bit more detail.
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      676c3dc2
    • rcu: Remove "Experimental" flags · 9a5739d7
      Committed by Paul E. McKenney
      After a release or two, features are no longer experimental.  Therefore,
      this commit removes the "Experimental" tag from them.
      Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      9a5739d7
    • rcu: Don't call wakeup() with rcu_node structure ->lock held · 016a8d5b
      Committed by Steven Rostedt
      This commit fixes a lockdep-detected deadlock by moving a wake_up()
      call out from a rnp->lock critical section.  Please see below for
      the long version of this story.
      
      On Tue, 2013-05-28 at 16:13 -0400, Dave Jones wrote:
      
      > [12572.705832] ======================================================
      > [12572.750317] [ INFO: possible circular locking dependency detected ]
      > [12572.796978] 3.10.0-rc3+ #39 Not tainted
      > [12572.833381] -------------------------------------------------------
      > [12572.862233] trinity-child17/31341 is trying to acquire lock:
      > [12572.870390]  (rcu_node_0){..-.-.}, at: [<ffffffff811054ff>] rcu_read_unlock_special+0x9f/0x4c0
      > [12572.878859]
      > but task is already holding lock:
      > [12572.894894]  (&ctx->lock){-.-...}, at: [<ffffffff811390ed>] perf_lock_task_context+0x7d/0x2d0
      > [12572.903381]
      > which lock already depends on the new lock.
      >
      > [12572.927541]
      > the existing dependency chain (in reverse order) is:
      > [12572.943736]
      > -> #4 (&ctx->lock){-.-...}:
      > [12572.960032]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12572.968337]        [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
      > [12572.976633]        [<ffffffff8113c987>] __perf_event_task_sched_out+0x2e7/0x5e0
      > [12572.984969]        [<ffffffff81088953>] perf_event_task_sched_out+0x93/0xa0
      > [12572.993326]        [<ffffffff816ea0bf>] __schedule+0x2cf/0x9c0
      > [12573.001652]        [<ffffffff816eacfe>] schedule_user+0x2e/0x70
      > [12573.009998]        [<ffffffff816ecd64>] retint_careful+0x12/0x2e
      > [12573.018321]
      > -> #3 (&rq->lock){-.-.-.}:
      > [12573.034628]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.042930]        [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
      > [12573.051248]        [<ffffffff8108e6a7>] wake_up_new_task+0xb7/0x260
      > [12573.059579]        [<ffffffff810492f5>] do_fork+0x105/0x470
      > [12573.067880]        [<ffffffff81049686>] kernel_thread+0x26/0x30
      > [12573.076202]        [<ffffffff816cee63>] rest_init+0x23/0x140
      > [12573.084508]        [<ffffffff81ed8e1f>] start_kernel+0x3f1/0x3fe
      > [12573.092852]        [<ffffffff81ed856f>] x86_64_start_reservations+0x2a/0x2c
      > [12573.101233]        [<ffffffff81ed863d>] x86_64_start_kernel+0xcc/0xcf
      > [12573.109528]
      > -> #2 (&p->pi_lock){-.-.-.}:
      > [12573.125675]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.133829]        [<ffffffff816ebe9b>] _raw_spin_lock_irqsave+0x4b/0x90
      > [12573.141964]        [<ffffffff8108e881>] try_to_wake_up+0x31/0x320
      > [12573.150065]        [<ffffffff8108ebe2>] default_wake_function+0x12/0x20
      > [12573.158151]        [<ffffffff8107bbf8>] autoremove_wake_function+0x18/0x40
      > [12573.166195]        [<ffffffff81085398>] __wake_up_common+0x58/0x90
      > [12573.174215]        [<ffffffff81086909>] __wake_up+0x39/0x50
      > [12573.182146]        [<ffffffff810fc3da>] rcu_start_gp_advanced.isra.11+0x4a/0x50
      > [12573.190119]        [<ffffffff810fdb09>] rcu_start_future_gp+0x1c9/0x1f0
      > [12573.198023]        [<ffffffff810fe2c4>] rcu_nocb_kthread+0x114/0x930
      > [12573.205860]        [<ffffffff8107a91d>] kthread+0xed/0x100
      > [12573.213656]        [<ffffffff816f4b1c>] ret_from_fork+0x7c/0xb0
      > [12573.221379]
      > -> #1 (&rsp->gp_wq){..-.-.}:
      > [12573.236329]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.243783]        [<ffffffff816ebe9b>] _raw_spin_lock_irqsave+0x4b/0x90
      > [12573.251178]        [<ffffffff810868f3>] __wake_up+0x23/0x50
      > [12573.258505]        [<ffffffff810fc3da>] rcu_start_gp_advanced.isra.11+0x4a/0x50
      > [12573.265891]        [<ffffffff810fdb09>] rcu_start_future_gp+0x1c9/0x1f0
      > [12573.273248]        [<ffffffff810fe2c4>] rcu_nocb_kthread+0x114/0x930
      > [12573.280564]        [<ffffffff8107a91d>] kthread+0xed/0x100
      > [12573.287807]        [<ffffffff816f4b1c>] ret_from_fork+0x7c/0xb0
      
      Notice the above call chain.
      
      rcu_start_future_gp() is called with the rnp->lock held. Then it calls
      rcu_start_gp_advanced(), which does a wakeup.
      
      You can't do wakeups while holding the rnp->lock, as that would mean
      that you could not do a rcu_read_unlock() while holding the rq lock, or
      any lock that was taken while holding the rq lock. This is because...
      (See below).
      
      > [12573.295067]
      > -> #0 (rcu_node_0){..-.-.}:
      > [12573.309293]        [<ffffffff810b8d36>] __lock_acquire+0x1786/0x1af0
      > [12573.316568]        [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.323825]        [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
      > [12573.331081]        [<ffffffff811054ff>] rcu_read_unlock_special+0x9f/0x4c0
      > [12573.338377]        [<ffffffff810760a6>] __rcu_read_unlock+0x96/0xa0
      > [12573.345648]        [<ffffffff811391b3>] perf_lock_task_context+0x143/0x2d0
      > [12573.352942]        [<ffffffff8113938e>] find_get_context+0x4e/0x1f0
      > [12573.360211]        [<ffffffff811403f4>] SYSC_perf_event_open+0x514/0xbd0
      > [12573.367514]        [<ffffffff81140e49>] SyS_perf_event_open+0x9/0x10
      > [12573.374816]        [<ffffffff816f4dd4>] tracesys+0xdd/0xe2
      
      Notice the above trace.
      
      perf took its own ctx->lock, which can be taken while holding the rq
      lock. While holding this lock, it did a rcu_read_unlock(). The
      perf_lock_task_context() basically looks like:
      
      rcu_read_lock();
      raw_spin_lock(ctx->lock);
      rcu_read_unlock();
      
      Now, what looks to have happened, is that we scheduled after taking that
      first rcu_read_lock() but before taking the spin lock. When we scheduled
      back in and took the ctx->lock, the following rcu_read_unlock()
      triggered the "special" code.
      
      The rcu_read_unlock_special() takes the rnp->lock, which gives us a
      possible deadlock scenario.
      
      	CPU0		CPU1		CPU2
      	----		----		----
      
      				     rcu_nocb_kthread()
          lock(rq->lock);
      		    lock(ctx->lock);
      				     lock(rnp->lock);
      
      				     wake_up();
      
      				     lock(rq->lock);
      
      		    rcu_read_unlock();
      
      		    rcu_read_unlock_special();
      
      		    lock(rnp->lock);
          lock(ctx->lock);
      
      **** DEADLOCK ****
      
      > [12573.382068]
      > other info that might help us debug this:
      >
      > [12573.403229] Chain exists of:
      >   rcu_node_0 --> &rq->lock --> &ctx->lock
      >
      > [12573.424471]  Possible unsafe locking scenario:
      >
      > [12573.438499]        CPU0                    CPU1
      > [12573.445599]        ----                    ----
      > [12573.452691]   lock(&ctx->lock);
      > [12573.459799]                                lock(&rq->lock);
      > [12573.467010]                                lock(&ctx->lock);
      > [12573.474192]   lock(rcu_node_0);
      > [12573.481262]
      >  *** DEADLOCK ***
      >
      > [12573.501931] 1 lock held by trinity-child17/31341:
      > [12573.508990]  #0:  (&ctx->lock){-.-...}, at: [<ffffffff811390ed>] perf_lock_task_context+0x7d/0x2d0
      > [12573.516475]
      > stack backtrace:
      > [12573.530395] CPU: 1 PID: 31341 Comm: trinity-child17 Not tainted 3.10.0-rc3+ #39
      > [12573.545357]  ffffffff825b4f90 ffff880219f1dbc0 ffffffff816e375b ffff880219f1dc00
      > [12573.552868]  ffffffff816dfa5d ffff880219f1dc50 ffff88023ce4d1f8 ffff88023ce4ca40
      > [12573.560353]  0000000000000001 0000000000000001 ffff88023ce4d1f8 ffff880219f1dcc0
      > [12573.567856] Call Trace:
      > [12573.575011]  [<ffffffff816e375b>] dump_stack+0x19/0x1b
      > [12573.582284]  [<ffffffff816dfa5d>] print_circular_bug+0x200/0x20f
      > [12573.589637]  [<ffffffff810b8d36>] __lock_acquire+0x1786/0x1af0
      > [12573.596982]  [<ffffffff810918f5>] ? sched_clock_cpu+0xb5/0x100
      > [12573.604344]  [<ffffffff810b9851>] lock_acquire+0x91/0x1f0
      > [12573.611652]  [<ffffffff811054ff>] ? rcu_read_unlock_special+0x9f/0x4c0
      > [12573.619030]  [<ffffffff816ebc90>] _raw_spin_lock+0x40/0x80
      > [12573.626331]  [<ffffffff811054ff>] ? rcu_read_unlock_special+0x9f/0x4c0
      > [12573.633671]  [<ffffffff811054ff>] rcu_read_unlock_special+0x9f/0x4c0
      > [12573.640992]  [<ffffffff811390ed>] ? perf_lock_task_context+0x7d/0x2d0
      > [12573.648330]  [<ffffffff810b429e>] ? put_lock_stats.isra.29+0xe/0x40
      > [12573.655662]  [<ffffffff813095a0>] ? delay_tsc+0x90/0xe0
      > [12573.662964]  [<ffffffff810760a6>] __rcu_read_unlock+0x96/0xa0
      > [12573.670276]  [<ffffffff811391b3>] perf_lock_task_context+0x143/0x2d0
      > [12573.677622]  [<ffffffff81139070>] ? __perf_event_enable+0x370/0x370
      > [12573.684981]  [<ffffffff8113938e>] find_get_context+0x4e/0x1f0
      > [12573.692358]  [<ffffffff811403f4>] SYSC_perf_event_open+0x514/0xbd0
      > [12573.699753]  [<ffffffff8108cd9d>] ? get_parent_ip+0xd/0x50
      > [12573.707135]  [<ffffffff810b71fd>] ? trace_hardirqs_on_caller+0xfd/0x1c0
      > [12573.714599]  [<ffffffff81140e49>] SyS_perf_event_open+0x9/0x10
      > [12573.721996]  [<ffffffff816f4dd4>] tracesys+0xdd/0xe2
      
      This commit delays the wakeup via irq_work, which is what
      perf and ftrace use to perform wakeups in critical sections; the
      pattern is sketched below this entry.
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      016a8d5b
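      A sketch of the deferred-wakeup pattern adopted here: instead of calling
      wake_up() with rnp->lock held, queue an irq_work and let its callback
      issue the wakeup from interrupt context, outside the lock. The names below
      stand in for the fields the commit adds to the RCU state; only the
      irq_work calls themselves are the real kernel API.

      #include <linux/init.h>
      #include <linux/irq_work.h>
      #include <linux/spinlock.h>
      #include <linux/wait.h>

      static DECLARE_WAIT_QUEUE_HEAD(gp_wq);          /* stands in for rsp->gp_wq */
      static DEFINE_RAW_SPINLOCK(rnp_lock);           /* stands in for rnp->lock */

      /* Runs later from hard-IRQ context, with no rnp->lock held. */
      static void gp_wakeup_func(struct irq_work *work)
      {
              wake_up(&gp_wq);
      }

      static struct irq_work gp_wakeup_work;

      static void start_gp_and_kick_kthread(void)
      {
              raw_spin_lock(&rnp_lock);
              /* ... grace-period bookkeeping under rnp->lock ... */

              /* Do NOT call wake_up() here: it can take rq->lock, while an
               * rcu_read_unlock_special() under rq->lock can take rnp->lock,
               * closing the deadlock cycle shown above.  Defer it instead: */
              irq_work_queue(&gp_wakeup_work);

              raw_spin_unlock(&rnp_lock);
      }

      static void __init gp_wakeup_init(void)
      {
              init_irq_work(&gp_wakeup_work, gp_wakeup_func);
      }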
  19. 04 Jun 2013 (1 commit)
  20. 28 May 2013 (1 commit)
    • perf: Use hrtimers for event multiplexing · 9e630205
      Committed by Stephane Eranian
      The current scheme of using the timer tick was fine for per-thread
      events. However, it was causing bias issues in system-wide mode
      (including for uncore PMUs). Event groups would not get their fair
      share of runtime on the PMU. With tickless kernels, if a core is idle
      there is no timer tick, and thus no event rotation (multiplexing).
      However, there are events (especially uncore events) which do count
      even though cores are asleep.
      
      This patch changes the timer source for multiplexing.  It introduces a
      per-PMU per-cpu hrtimer. The advantage is that even when a core goes
      idle, it will come back to service the hrtimer, thus multiplexing on
      system-wide events works much better.
      
      The per-PMU implementation (suggested by PeterZ) enables adjusting the
      multiplexing interval per PMU. The preferred interval is stashed into
      the struct pmu. If not set, it will be forced to the default interval
      value.
      
      In order to minimize the impact of the hrtimer, it is turned on and
      off on demand. When the PMU on a CPU is overcommitted, the hrtimer is
      activated.  It is stopped when the PMU is not overcommitted.
      
      In order for this to work properly, we had to change the order of
      initialization in start_kernel() such that hrtimer_init() is run
      before perf_event_init().
      
      The default interval in milliseconds is set to a timer tick just like
      with the old code. We will provide a sysctl to tune this in another
      patch.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Link: http://lkml.kernel.org/r/1364991694-5876-2-git-send-email-eranian@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9e630205
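      A sketch of the per-PMU, per-CPU rotation hrtimer described above, using
      the era's hrtimer setup (hrtimer_init() plus assigning .function); the
      structure and helper names are illustrative, not the actual perf core code.

      #include <linux/hrtimer.h>
      #include <linux/kernel.h>
      #include <linux/ktime.h>
      #include <linux/types.h>

      struct cpu_mux_ctx {
              struct hrtimer  timer;
              ktime_t         interval;       /* per-PMU, defaults to about one tick */
              bool            active;
      };

      static void rotate_pmu_contexts(struct cpu_mux_ctx *ctx)
      {
              /* ... switch which event group is programmed on the PMU ... */
      }

      static enum hrtimer_restart mux_timer_fn(struct hrtimer *t)
      {
              struct cpu_mux_ctx *ctx = container_of(t, struct cpu_mux_ctx, timer);

              rotate_pmu_contexts(ctx);
              hrtimer_forward_now(t, ctx->interval);
              return HRTIMER_RESTART;         /* keep rotating while overcommitted */
      }

      static void mux_timer_setup(struct cpu_mux_ctx *ctx, u64 interval_ms)
      {
              hrtimer_init(&ctx->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
              ctx->timer.function = mux_timer_fn;
              ctx->interval = ms_to_ktime(interval_ms);
      }

      /* Started only when the PMU on this CPU is overcommitted ... */
      static void mux_timer_start(struct cpu_mux_ctx *ctx)
      {
              if (!ctx->active) {
                      ctx->active = true;
                      hrtimer_start(&ctx->timer, ctx->interval, HRTIMER_MODE_REL_PINNED);
              }
      }

      /* ... and stopped as soon as it no longer is. */
      static void mux_timer_stop(struct cpu_mux_ctx *ctx)
      {
              if (ctx->active) {
                      ctx->active = false;
                      hrtimer_cancel(&ctx->timer);
              }
      }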
  21. 04 May 2013 (1 commit)
    • rcu: Fix full dynticks' dependency on wide RCU nocb mode · 73c30828
      Committed by Frederic Weisbecker
      Commit 0637e029
      ("nohz: Select wide RCU nocb for full dynticks") intended
      to force CONFIG_RCU_NOCB_CPU_ALL=y when full dynticks is
      enabled.
      
      However, this option is part of a choice menu, and Kconfig's
      "select" instruction has no effect on such targets.
      
      Fix this by using reverse dependencies on the targets we
      don't want instead.
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      73c30828
  22. 02 May 2013 (2 commits)
  23. 01 May 2013 (2 commits)
  24. 30 Apr 2013 (1 commit)