1. 03 Aug 2016 (1 commit)
  2. 06 May 2016 (1 commit)
  3. 23 Mar 2016 (1 commit)
  4. 21 Jan 2016 (3 commits)
  5. 28 May 2015 (1 commit)
    • cpumask_set_cpu_local_first => cpumask_local_spread, lament · f36963c9
      Authored by Rusty Russell
      da91309e (cpumask: Utility function to set n'th cpu...) created a
      genuinely weird function.  I never saw it before, it went through DaveM.
      (He only does this to make us other maintainers feel better about our own
      mistakes.)
      
      cpumask_set_cpu_local_first's purpose is to say "I need to spread things
      across N online cpus, choose the ones on this numa node first"; you call
      it in a loop.
      
      It can fail.  One of the two callers ignores this, the other aborts and
      fails the device open.
      
      It can fail in two ways: allocating the off-stack cpumask, or through a
      convoluted codepath which AFAICT can only occur if cpu_online_mask
      changes.  Which shouldn't happen, because if cpu_online_mask can change
      while you call this, it could return a now-offline cpu anyway.
      
      It contains a nonsensical test "!cpumask_of_node(numa_node)".  This was
      drawn to my attention by Geert, who said this causes a warning on Sparc.
      It sets a single bit in a cpumask instead of returning a cpu number,
      because that's what the callers want.
      
      It could be made more efficient by passing the previous cpu rather than
      an index, but that would be more invasive to the callers.
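
      As a rough sketch of the intended calling pattern - assuming the renamed
      cpumask_local_spread(i, node) helper, which returns a cpu number; the
      driver function and irq handles below are hypothetical:

        #include <linux/cpumask.h>
        #include <linux/interrupt.h>

        /* Spread the irqs of 'nvecs' queues over online cpus, preferring
         * cpus on 'node' first. */
        static void example_spread_queue_irqs(unsigned int *irqs, int nvecs, int node)
        {
                int i;

                for (i = 0; i < nvecs; i++) {
                        /* i-th online cpu, local node first */
                        unsigned int cpu = cpumask_local_spread(i, node);

                        irq_set_affinity_hint(irqs[i], cpumask_of(cpu));
                }
        }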
      
      Fixes: da91309e
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (then rebased)
      Tested-by: Amir Vadai <amirv@mellanox.com>
      Acked-by: Amir Vadai <amirv@mellanox.com>
      Acked-by: David S. Miller <davem@davemloft.net>
  6. 16 Apr 2015 (1 commit)
  7. 31 Mar 2015 (1 commit)
  8. 10 Mar 2015 (2 commits)
  9. 05 Mar 2015 (1 commit)
  10. 14 Feb 2015 (3 commits)
    • bitmap, cpumask, nodemask: remove dedicated formatting functions · 46385326
      Authored by Tejun Heo
      Now that all bitmap formatting usages have been converted to
      '%*pb[l]', the separate formatting functions are unnecessary.  The
      following functions are removed.
      
      * bitmap_scn[list]printf()
      * cpumask_scnprintf(), cpulist_scnprintf()
      * [__]nodemask_scnprintf(), [__]nodelist_scnprintf()
      * seq_bitmap[_list](), seq_cpumask[_list](), seq_nodemask[_list]()
      * seq_buf_bitmask()
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cpumask, nodemask: implement cpumask/nodemask_pr_args() · f1bbc032
      Authored by Tejun Heo
      The printf family of functions can now format bitmaps using '%*pb[l]',
      and all cpumask and nodemask formatting will be converted to use it.  To
      ease printing these masks with '%*pb[l]', which requires two params -
      the number of bits and the actual bitmap - this patch implements
      cpumask_pr_args() and nodemask_pr_args(), which can be used to provide
      the arguments for '%*pb[l]'.
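
      A minimal sketch of the resulting call sites; cpumask_pr_args(maskp)
      expands to the field width (nr_cpu_ids) plus the bitmap pointer:

        /* list form (e.g. "0-3") and hex mask form (e.g. "0000000f") */
        pr_info("online cpus: %*pbl\n", cpumask_pr_args(cpu_online_mask));
        pr_info("online mask: %*pb\n", cpumask_pr_args(cpu_online_mask));
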
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: "John W. Linville" <linville@tuxdriver.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Mike Travis <travis@sgi.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steffen Klassert <steffen.klassert@secunet.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cpumask: always use nr_cpu_ids in formatting and parsing functions · 513e3d2d
      Authored by Tejun Heo
      bitmap implements two variants of scnprintf functions to format a bitmap
      into a string, and cpumask and nodemask wrap them to provide equivalent
      interfaces.  The scnprintf family of functions requires a string buffer as
      an output target, which complicates code paths that just want to print out
      the mask through printk for informational or debug purposes, as they have
      to worry about how large the buffer should be and whether it's too large
      to allocate on the stack.
      
      Neither cpumask nor nodemask provides a guideline on how large the target
      buffer should be, forcing users to come up with their own solutions - some
      allocate an arbitrarily sized buffer which is small enough to allocate on
      the stack but may be too short in corner cases, others come up with a custom
      upper-limit calculation based on the output format, some allocate the
      buffer dynamically, while one resorted to using a lock to synchronize access
      to a static buffer.
      
      This is an artificial problem which is being solved repeatedly for no
      benefit.  In a lot of cases, the output area already exists and can be
      targeted directly, making the intermediate buffer unnecessary.  This
      patchset teaches the printf family of functions how to format bitmaps and
      replaces the dedicated formatting functions with it.
      
      Pointer formatting is extended to cover bitmap formatting.  It uses the
      field width for the number of bits instead of precision.  The format used
      is '%*pb[l]', with the optional trailing 'l' specifying list format
      instead of hex masks.  For more details, please see 0002.
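
      A small illustrative snippet of the raw specifier on a hypothetical
      64-bit bitmap; the field width gives the number of bits and the
      trailing 'l' selects list output:

        DECLARE_BITMAP(example_mask, 64);

        bitmap_fill(example_mask, 64);
        pr_info("hex:  %*pb\n", 64, example_mask);   /* e.g. ffffffff,ffffffff */
        pr_info("list: %*pbl\n", 64, example_mask);  /* e.g. 0-63 */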
      
      This patch (of 31):
      
      Currently, the formatting and parsing functions in cpumask.h use
      nr_cpumask_bits like other cpumask functions; however, nr_cpumask_bits
      is either NR_CPUS or nr_cpu_ids depending on CONFIG_CPUMASK_OFFSTACK.
      This leads to inconsistent behaviors.
      
      With CONFIG_NR_CPUS=512 and !CONFIG_CPUMASK_OFFSTACK
      
        # cat /sys/devices/virtual/net/lo/queues/rx-0/rps_cpus
        00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000
        # cat /proc/self/status | grep Cpus_allowed:
        Cpus_allowed:   f
      
      With CONFIG_NR_CPUS=1024 and CONFIG_CPUMASK_OFFSTACK (fedora default)
      
        # cat /sys/devices/virtual/net/lo/queues/rx-0/rps_cpus
        0
        # cat /proc/self/status | grep Cpus_allowed:
        Cpus_allowed:   f
      
      Note that /proc/self/status always uses nr_cpu_ids regardless of
      config.  This is because the seq cpumask formatting functions always use
      nr_cpu_ids.

      Given that the same output fields may already switch between the two forms,
      always converging on nr_cpu_ids isn't too likely to surprise userland.
      This patch updates the formatting and parsing functions in cpumask.h
      to always use nr_cpu_ids.  There's no point in dealing with CPUs which
      aren't even possible on the machine.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: "John W. Linville" <linville@tuxdriver.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Mike Travis <travis@sgi.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steffen Klassert <steffen.klassert@secunet.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 13 Feb 2015 (1 commit)
  12. 08 Nov 2014 (1 commit)
  13. 28 Aug 2014 (1 commit)
    • percpu: Resolve ambiguities in __get_cpu_var/cpumask_var_t · 4ba29684
      Authored by Christoph Lameter
      __get_cpu_var can paper over differences in the definitions of
      cpumask_var_t and either use the address of the cpumask variable
      directly or perform a fetch of the address of the struct cpumask
      allocated elsewhere.  This is particularly important when using per-cpu
      cpumask_var_t declarations, because in one case we have an offset into
      a per-cpu area to handle, and in the other we need to fetch a pointer
      from that offset.
      
      This patch introduces a new macro
      
      this_cpu_cpumask_var_ptr()
      
      that is defined where cpumask_var_t is defined and performs the proper
      actions. All use cases where __get_cpu_var is used with cpumask_var_t
      are converted to the use of this_cpu_cpumask_var_ptr().
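
      A sketch of the conversion on a hypothetical per-cpu cpumask_var_t
      (with CONFIG_CPUMASK_OFFSTACK the per-cpu masks would additionally
      need allocating at init time):

        #include <linux/percpu.h>
        #include <linux/cpumask.h>

        static DEFINE_PER_CPU(cpumask_var_t, example_mask);

        static void example_use(void)
        {
                /* previously: __get_cpu_var(example_mask), whose meaning
                 * depended on CONFIG_CPUMASK_OFFSTACK */
                struct cpumask *m = this_cpu_cpumask_var_ptr(example_mask);

                cpumask_clear(m);
                cpumask_set_cpu(smp_processor_id(), m);
        }
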
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  14. 12 Jun 2014 (1 commit)
    • cpumask: Utility function to set n'th cpu - local cpu first · da91309e
      Authored by Amir Vadai
      This function sets the n'th cpu - local cpus first.
      For example, a 16-core server in which the even-numbered cpus are local
      will get the following values:
      cpumask_set_cpu_local_first(0, numa, cpumask) => cpu 0 is set
      cpumask_set_cpu_local_first(1, numa, cpumask) => cpu 2 is set
      ...
      cpumask_set_cpu_local_first(7, numa, cpumask) => cpu 14 is set
      cpumask_set_cpu_local_first(8, numa, cpumask) => cpu 1 is set
      cpumask_set_cpu_local_first(9, numa, cpumask) => cpu 3 is set
      ...
      cpumask_set_cpu_local_first(15, numa, cpumask) => cpu 15 is set
      
      Currently this function will be used by multi-queue networking devices to
      calculate the irq affinity mask, such that as many local cpus as
      possible will be utilized to handle the mq device irqs.
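
      A hedged sketch of the kind of driver call site this is aimed at (the
      function and names below are hypothetical); note that the helper sets
      a single bit in a mask and can fail:

        static int example_set_queue_affinity(int queue, int node, unsigned int irq)
        {
                cpumask_var_t mask;
                int err;

                if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
                        return -ENOMEM;

                err = cpumask_set_cpu_local_first(queue, node, mask);
                if (!err)
                        /* cpumask_of() points at static storage, so the hint
                         * stays valid after 'mask' is freed */
                        err = irq_set_affinity_hint(irq,
                                        cpumask_of(cpumask_first(mask)));

                free_cpumask_var(mask);
                return err;
        }
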
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 02 Jun 2014 (2 commits)
    • net: Revert mlx4 cpumask changes. · ee39facb
      Authored by David S. Miller
      This reverts commit 70a640d0
      ("net/mlx4_en: Use affinity hint") and commit
      c8865b64 ("cpumask: Utility function
      to set n'th cpu - local cpu first") because these changes break
      the build when SMP is disabled amongst other things.
      Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • cpumask: Utility function to set n'th cpu - local cpu first · c8865b64
      Authored by Amir Vadai
      This function sets the n'th cpu - local cpus first.
      For example, a 16-core server in which the even-numbered cpus are local
      will get the following values:
      cpumask_set_cpu_local_first(0, numa, cpumask) => cpu 0 is set
      cpumask_set_cpu_local_first(1, numa, cpumask) => cpu 2 is set
      ...
      cpumask_set_cpu_local_first(7, numa, cpumask) => cpu 14 is set
      cpumask_set_cpu_local_first(8, numa, cpumask) => cpu 1 is set
      cpumask_set_cpu_local_first(9, numa, cpumask) => cpu 3 is set
      ...
      cpumask_set_cpu_local_first(15, numa, cpumask) => cpu 15 is set
      
      Currently this function will be used by multi-queue networking devices to
      calculate the irq affinity mask, such that as many local cpus as
      possible will be utilized to handle the mq device irqs.
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  16. 14 May 2014 (1 commit)
  17. 13 Mar 2013 (1 commit)
    • cpumask: implement cpumask_parse() · ba630e49
      Authored by Tejun Heo
      We have cpulist_parse() but not cpumask_parse().  Implement it using
      bitmap_parse().
      
      bitmap_parse() is weird in that it takes @len for a string in
      kernel memory, which is also inconsistent with bitmap_parselist().
      Make cpumask_parse() calculate the length and don't expose the
      inconsistency to cpumask users.  Maybe we can fix up bitmap_parse()
      later.
      
      This will be used to expose workqueue cpumask knobs to userland via
      sysfs.
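
      A sketch of the kind of sysfs store handler this enables (the attribute
      and example_apply_cpumask() are hypothetical):

        static ssize_t example_mask_store(struct device *dev,
                                          struct device_attribute *attr,
                                          const char *buf, size_t count)
        {
                cpumask_var_t new_mask;
                int ret;

                if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
                        return -ENOMEM;

                /* cpumask_parse() works out the string length itself */
                ret = cpumask_parse(buf, new_mask);
                if (!ret)
                        ret = example_apply_cpumask(new_mask);  /* hypothetical */

                free_cpumask_var(new_mask);
                return ret ? ret : count;
        }
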
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
  18. 27 Jul 2012 (2 commits)
  19. 29 Mar 2012 (2 commits)
  20. 05 Mar 2012 (1 commit)
    • BUG: headers with BUG/BUG_ON etc. need linux/bug.h · 187f1882
      Authored by Paul Gortmaker
      If a header file is making use of BUG, BUG_ON, BUILD_BUG_ON, or any
      other BUG variant in a static inline (i.e. not in a #define) then
      that header really should be including <linux/bug.h> and not just
      expecting it to be implicitly present.
      
      We can make this change risk-free, since if the files using these
      headers didn't have exposure to linux/bug.h already, they would have
      been causing compile failures/warnings.
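
      For illustration, a header following this rule might look like (names
      hypothetical):

        /* include/linux/example.h */
        #include <linux/bug.h>          /* for BUG_ON() used below */

        static inline void example_check(int cond)
        {
                BUG_ON(!cond);
        }
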
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  21. 27 Jul 2011 (1 commit)
  22. 25 May 2011 (1 commit)
    • bitmap, irq: add smp_affinity_list interface to /proc/irq · 4b060420
      Authored by Mike Travis
      Manually adjusting the smp_affinity for IRQs becomes unwieldy when the
      cpu count is large.
      
      Setting smp affinity to cpus 256 to 263 would be:
      
      	echo 000000ff,00000000,00000000,00000000,00000000,00000000,00000000,00000000 > smp_affinity
      
      instead of:
      
      	echo 256-263 > smp_affinity_list
      
      Think about what it looks like for cpus around, say, 4088 to 4095.
      
      We already have many alternate "list" interfaces:
      
      /sys/devices/system/cpu/cpuX/indexY/shared_cpu_list
      /sys/devices/system/cpu/cpuX/topology/thread_siblings_list
      /sys/devices/system/cpu/cpuX/topology/core_siblings_list
      /sys/devices/system/node/nodeX/cpulist
      /sys/devices/pci***/***/local_cpulist
      
      Add a companion interface, smp_affinity_list to use cpu lists instead of
      cpu maps.  This conforms to other companion interfaces where both a map
      and a list interface exists.
      
      This required adding a bitmap_parselist_user() function in a manner
      similar to the bitmap_parse_user() function.
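
      A rough sketch of the kernel-side write path this enables (hypothetical
      handler; the real one lives in kernel/irq/proc.c):

        static ssize_t example_affinity_list_write(struct file *file,
                        const char __user *buf, size_t count, loff_t *pos)
        {
                cpumask_var_t new_mask;
                int err;

                if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
                        return -ENOMEM;

                /* parse a cpu list such as "256-263" from user memory */
                err = cpumask_parselist_user(buf, count, new_mask);
                if (!err)
                        err = example_apply_affinity(new_mask);  /* hypothetical */

                free_cpumask_var(new_mask);
                return err ? err : count;
        }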
      
      [akpm@linux-foundation.org: make __bitmap_parselist() static]
      Signed-off-by: Mike Travis <travis@sgi.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jack Steiner <steiner@sgi.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  23. 07 Mar 2010 (1 commit)
  24. 25 Feb 2010 (1 commit)
    • rcu: Accelerate grace period if last non-dynticked CPU · 8bd93a2c
      Authored by Paul E. McKenney
      Currently, rcu_needs_cpu() simply checks whether the current CPU
      has an outstanding RCU callback, which means that the last CPU
      to go into dyntick-idle mode might wait a few ticks for the
      relevant grace periods to complete.  However, if all the other
      CPUs are in dyntick-idle mode, and if this CPU is in a quiescent
      state (which it is for RCU-bh and RCU-sched any time that we are
      considering going into dyntick-idle mode), then the grace period
      is instantly complete.
      
      This patch therefore repeatedly invokes the RCU grace-period
      machinery in order to force any needed grace periods to complete
      quickly.  It does so a limited number of times in order to
      prevent starvation by an RCU callback function that might pass
      itself to call_rcu().
      
      However, if any CPU other than the current one is not in
      dyntick-idle mode, fall back to simply checking (with fix to bug
      noted by Lai Jiangshan).  Also, take advantage of last
      grace-period forcing, the opportunity to do so noted by Steve
      Rostedt.  And apply simplified #ifdef condition suggested by
      Frederic Weisbecker.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1266887105-1528-15-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  25. 07 Dec 2009 (1 commit)
    • sched: Fix balance vs hotplug race · 6ad4c188
      Authored by Peter Zijlstra
      Since (e761b772: cpu hotplug, sched: Introduce cpu_active_map and redo
      sched domain managment) we have cpu_active_mask, which is supposed to rule
      scheduler migration and load-balancing, except it never (fully) did.
      
      The particular problem being solved here is a crash in try_to_wake_up()
      where select_task_rq() ends up selecting an offline cpu because
      select_task_rq_fair() trusts the sched_domain tree to reflect the
      current state of affairs, similarly select_task_rq_rt() trusts the
      root_domain.
      
      However, the sched_domains are updated from CPU_DEAD, which is after the
      cpu is taken offline and after stop_machine is done. Therefore it can
      race perfectly well with code assuming the domains are right.
      
      Cure this by building the domains from cpu_active_mask on
      CPU_DOWN_PREPARE.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  26. 24 Sep 2009 (4 commits)
  27. 23 Sep 2009 (1 commit)
    • generic-ipi: make struct call_function_data lockless · 54fdade1
      Authored by Xiao Guangrong
      This patch removes the spinlock from struct call_function_data; the
      reasons are below:

      1: add a new cpumask interface, cpumask_test_and_clear_cpu(), which
         atomically tests and clears a specific cpu.  We can use it instead
         of cpumask_test_cpu() plus cpumask_clear_cpu(), so data->lock is no
         longer needed to protect them in generic_smp_call_function_interrupt().

      2: in smp_call_function_many(), after csd_lock() returns, the current
         cpu's cfd_data has been deleted from the call_function list, so there
         is no race with other cpus.  cfd_data is then only used in
         smp_call_function_many(), which must be called with preemption disabled
         and not from a hardware interrupt handler or a bottom-half handler, so
         only the corresponding cpu can use it and there is no race on the
         current cpu either - no need for cfd_data->lock to protect it.

      3: after 1 and 2, cfd_data->lock is only used to protect cfd_data->refs in
         generic_smp_call_function_interrupt(), so we can make cfd_data->refs an
         atomic_t and drop cfd_data->lock entirely.
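
      A sketch of points 1 and 3 combined, using a hypothetical stand-in for
      struct call_function_data:

        struct example_cfd {                    /* stand-in for call_function_data */
                struct cpumask *cpumask;
                atomic_t refs;
                void (*func)(void *info);
                void *info;
        };

        static void example_ipi_handler(struct example_cfd *cfd, int cpu)
        {
                /* point 1: atomic test-and-clear instead of lock + test + clear */
                if (!cpumask_test_and_clear_cpu(cpu, cfd->cpumask))
                        return;

                cfd->func(cfd->info);

                /* point 3: refs is an atomic_t; only the last cpu sees zero */
                if (atomic_dec_return(&cfd->refs) == 0)
                        pr_debug("example: element can be reused\n");
        }
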
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      [akpm@linux-foundation.org: use atomic_dec_return()]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  28. 22 Aug 2009 (1 commit)
    • Make bitmask 'and' operators return a result code · f4b0373b
      Authored by Linus Torvalds
      When 'and'ing two bitmasks (where 'andnot' is a variation on it), some
      cases want to know whether the result is the empty set or not.  In
      particular, the TLB IPI sending code wants to do cpumask operations and
      determine if there are any CPUs left in the final set.
      
      So this just makes the bitmask (and cpumask) functions return a boolean
      for whether the result has any bits set.
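
      A small sketch of how a caller like the TLB IPI path can now use the
      return value directly (names hypothetical):

        /* true iff any cpu remains after masking 'candidates' by 'allowed' */
        static bool example_any_target(struct cpumask *dst,
                                       const struct cpumask *candidates,
                                       const struct cpumask *allowed)
        {
                /* cpumask_and() now reports whether the result is non-empty */
                return cpumask_and(dst, candidates, allowed);
        }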
      
      Cc: stable@kernel.org (2.6.30, needed by TLB shootdown fix)
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  29. 09 Jun 2009 (1 commit)