1. 05 Aug, 2015 (1 commit)
  2. 17 Apr, 2015 (1 commit)
  3. 16 Apr, 2015 (1 commit)
    • linux/bitmap.h: improve BITMAP_{LAST,FIRST}_WORD_MASK · 89c1e79e
      Committed by Rasmus Villemoes
      The macro BITMAP_LAST_WORD_MASK can be implemented without a conditional,
      which will generally lead to slightly better generated code (221 bytes
      saved for allmodconfig-GCOV_KERNEL, ~2k with GCOV_KERNEL).  As a small
      bonus, this also ensures that the nbits parameter is expanded exactly
      once.
      
      In BITMAP_FIRST_WORD_MASK, if start is signed, gcc is technically
      allowed to assume it is positive (or divisible by BITS_PER_LONG) and
      hence to use the simple mask.  It doesn't seem to exploit this: even on
      an architecture like x86, where the shift depends only on the lower 5
      or 6 bits and those bits are unaffected by the signedness of the
      expression, gcc still generates code to compute the C99-mandated value
      of start % BITS_PER_LONG.  So just use an explicit mask, also for
      consistency with BITMAP_LAST_WORD_MASK (see the sketch after this
      entry).
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Tejun Heo <tj@kernel.org>
      Reviewed-by: George Spelvin <linux@horizon.com>
      Cc: Yury Norov <yury.norov@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      89c1e79e
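      
      The branchless form the commit describes can be sketched as follows
      (assuming BITS_PER_LONG is a power of two, so the modulo reduces to a
      single AND; details of the in-tree definitions may differ):
      
      	/* Shift an all-ones word so that only the bits of the first/last
      	 * word belonging to the bitmap survive.  No conditional, and each
      	 * macro argument is expanded exactly once.
      	 */
      	#define BITMAP_FIRST_WORD_MASK(start) \
      		(~0UL << ((start) & (BITS_PER_LONG - 1)))
      	#define BITMAP_LAST_WORD_MASK(nbits) \
      		(~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))
      
      For nbits = 5 on a 64-bit machine, -(nbits) & 63 is 59, so the last-word
      mask is ~0UL >> 59 = 0x1f; a multiple of BITS_PER_LONG yields the
      full-word mask from the same expression, with no branch.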
  4. 14 Feb, 2015 (5 commits)
  5. 13 Feb, 2015 (5 commits)
  6. 14 Dec, 2014 (1 commit)
  7. 08 Nov, 2014 (1 commit)
  8. 07 Aug, 2014 (15 commits)
  9. 03 Aug, 2011 (1 commit)
    • lib, Make gen_pool memory allocator lockless · 7f184275
      Committed by Huang Ying
      This version of the gen_pool memory allocator supports lockless
      operation.
      
      This makes it safe to use in NMI handlers and other special
      unblockable contexts that could otherwise deadlock on locks.  This is
      implemented by using atomic operations and retries on any conflicts.
      The disadvantage is that there may be livelocks in extreme cases.  For
      better scalability, one gen_pool allocator can be used for each CPU.
      
      The lockless operation only works if there is enough memory available;
      if new memory is added to the pool, a lock still has to be taken.  So
      any user relying on locklessness has to ensure that sufficient memory
      is preallocated.
      
      The basic atomic operation of this allocator is cmpxchg on a long.  On
      architectures that don't have an NMI-safe cmpxchg implementation, the
      allocator can NOT be used in NMI handlers, so code that uses the
      allocator in an NMI handler should depend on
      CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG.  (A sketch of the cmpxchg-retry
      idea follows this entry.)
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Len Brown <len.brown@intel.com>
      7f184275
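      
      A minimal sketch of the cmpxchg-and-retry idea in C11 atomics (not the
      actual gen_pool code; claim_bits and the one-word bitmap are
      hypothetical):
      
      	#include <stdatomic.h>
      
      	/* Try to claim a run of bits (mask) in one word of an allocation
      	 * bitmap.  A conflicting update from another CPU or an NMI makes
      	 * the compare-exchange fail, and we retry with the freshly loaded
      	 * value -- hence the possibility of livelock in extreme cases.
      	 */
      	static int claim_bits(_Atomic unsigned long *word, unsigned long mask)
      	{
      		unsigned long old = atomic_load(word);
      
      		do {
      			if (old & mask)
      				return 0;	/* region already (partly) taken */
      		} while (!atomic_compare_exchange_weak(word, &old, old | mask));
      
      		return 1;			/* claimed without taking a lock */
      	}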
  10. 27 Jul, 2011 (1 commit)
    • cpusets: randomize node rotor used in cpuset_mem_spread_node() · 778d3b0f
      Committed by Michal Hocko
      [ This patch was already accepted as commit 0ac0c0d0 but later
        reverted (commit 35926ff5) because it introduced an arch-specific
        __node_random which was defined only for x86, so it broke other
        arches.  This is a followup without any arch-specific code; other
        than that there are no functional changes. ]
      
      Some workloads that create a large number of small files tend to assign
      too many pages to node 0 (on multi-node systems).  Part of the reason is
      that the rotor (in cpuset_mem_spread_node()) used to assign nodes starts
      at node 0 for newly created tasks.
      
      This patch changes the rotor to be initialized to a random node number
      within the cpuset (sketched after this entry).
      
      [akpm@linux-foundation.org: fix layout]
      [Lee.Schermerhorn@hp.com: Define stub numa_random() for !NUMA configuration]
      [mhocko@suse.cz: Make it arch independent]
      [akpm@linux-foundation.org: fix CONFIG_NUMA=y, MAX_NUMNODES>1 build]
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Paul Menage <menage@google.com>
      Cc: Jack Steiner <steiner@sgi.com>
      Cc: Robin Holt <holt@sgi.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      778d3b0f
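      
      A sketch of the rotor seeding in plain C (random_node is a hypothetical
      helper, not the kernel's code): pick a uniformly random set bit from the
      cpuset's node mask.
      
      	#include <stdlib.h>
      
      	#define BITS_PER_LONG (8 * (int)sizeof(long))
      
      	/* Return the index of a randomly chosen set bit, or 0 if none. */
      	static int random_node(const unsigned long *mask, int nbits)
      	{
      		int i, count = 0, pick;
      
      		for (i = 0; i < nbits; i++)	/* count allowed nodes */
      			if (mask[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
      				count++;
      		if (!count)
      			return 0;
      		pick = rand() % count;		/* choose one uniformly */
      		for (i = 0; i < nbits; i++)
      			if (mask[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
      				if (pick-- == 0)
      					return i;	/* the pick'th allowed node */
      		return 0;
      	}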
  11. 25 May, 2011 (1 commit)
    • bitmap, irq: add smp_affinity_list interface to /proc/irq · 4b060420
      Committed by Mike Travis
      Manually adjusting the smp_affinity for IRQs becomes unwieldy when the
      cpu count is large.
      
      Setting smp affinity to cpus 256 to 263 would be:
      
      	echo 000000ff,00000000,00000000,00000000,00000000,00000000,00000000,00000000 > smp_affinity
      
      instead of:
      
      	echo 256-263 > smp_affinity_list
      
      Think about what it looks like for cpus around, say, 4088 to 4095.
      
      We already have many alternate "list" interfaces:
      
      /sys/devices/system/cpu/cpuX/indexY/shared_cpu_list
      /sys/devices/system/cpu/cpuX/topology/thread_siblings_list
      /sys/devices/system/cpu/cpuX/topology/core_siblings_list
      /sys/devices/system/node/nodeX/cpulist
      /sys/devices/pci***/***/local_cpulist
      
      Add a companion interface, smp_affinity_list to use cpu lists instead of
      cpu maps.  This conforms to other companion interfaces where both a map
      and a list interface exists.
      
      This required adding a bitmap_parselist_user() function, in a manner
      similar to the bitmap_parse_user() function (a simplified sketch of the
      range parsing follows this entry).
      
      [akpm@linux-foundation.org: make __bitmap_parselist() static]
      Signed-off-by: Mike Travis <travis@sgi.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jack Steiner <steiner@sgi.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4b060420
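      
      A simplified sketch of what a parselist-style helper has to do (not the
      real __bitmap_parselist, which also handles comma-separated groups):
      
      	#include <stdlib.h>
      
      	#define BITS_PER_LONG (8 * (int)sizeof(long))
      
      	/* Parse a single "a" or "a-b" range and set those bits in map. */
      	static int parse_range(const char *s, unsigned long *map, int nbits)
      	{
      		char *end;
      		unsigned long a = strtoul(s, &end, 10), b = a;
      
      		if (*end == '-')
      			b = strtoul(end + 1, NULL, 10);
      		if (a > b || b >= (unsigned long)nbits)
      			return -1;	/* malformed or out of range */
      		for (; a <= b; a++)
      			map[a / BITS_PER_LONG] |= 1UL << (a % BITS_PER_LONG);
      		return 0;
      	}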
  12. 31 May, 2010 (1 commit)
  13. 28 May, 2010 (1 commit)
  14. 16 Dec, 2009 (1 commit)
    • bitmap: introduce bitmap_set, bitmap_clear, bitmap_find_next_zero_area · c1a2a962
      Committed by Akinobu Mita
      This introduces new bitmap functions:
      
      bitmap_set: Set specified bit area
      bitmap_clear: Clear specified bit area
      bitmap_find_next_zero_area: Find free bit area
      
      These are mostly stolen from the iommu helper.  The differences are:
      
      - Use find_next_bit instead of calling test_bit for each bit.
      
      - Rewrite bitmap_set and bitmap_clear instead of setting or clearing
        one bit at a time.
      
      - Check the last bit of the limit; iommu-helper doesn't want to find
        such an area.
      
      - The return value when there is no zero area differs:
        find_next_zero_area in the iommu helper returns -1, while
        bitmap_find_next_zero_area returns a value >= the bitmap size.
      
      A usage sketch of the new helpers follows this entry.
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Cc: Lothar Wassmann <LW@KARO-electronics.de>
      Cc: Roland Dreier <rolandd@cisco.com>
      Cc: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Joerg Roedel <joerg.roedel@amd.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c1a2a962
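      
      A usage sketch of the three helpers together (hypothetical caller; the
      helper signatures are the ones this patch adds): carve an 8-bit region,
      aligned to 16 bits, out of a 256-bit map.
      
      	/* align_mask = 15 requests 16-bit alignment of the found area. */
      	static int alloc_region(unsigned long *map)
      	{
      		unsigned long pos;
      
      		pos = bitmap_find_next_zero_area(map, 256, 0, 8, 15);
      		if (pos >= 256)
      			return -ENOMEM;	/* "no zero area": pos >= bitmap size */
      		bitmap_set(map, pos, 8);	/* mark the region busy */
      		return pos;
      	}
      
      bitmap_clear(map, pos, 8) releases the region again.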
  15. 22 Aug, 2009 (1 commit)
    • Make bitmask 'and' operators return a result code · f4b0373b
      Committed by Linus Torvalds
      When 'and'ing two bitmasks (where 'andnot' is a variation on it), some
      cases want to know whether the result is the empty set or not.  In
      particular, the TLB IPI sending code wants to do cpumask operations and
      determine if there are any CPUs left in the final set.
      
      So this just makes the bitmask (and cpumask) functions return a boolean
      for whether the result has any bits set (usage sketched after this
      entry).
      
      Cc: stable@kernel.org (2.6.30, needed by TLB shootdown fix)
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f4b0373b
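      
      A usage sketch in the style of the TLB IPI path described above
      (send_flush_ipi is a hypothetical sender, not a real kernel function):
      
      	static void flush_others(const struct cpumask *mask)
      	{
      		struct cpumask tmp;
      
      		/* cpumask_and() now reports whether anything survived, so
      		 * the empty-intersection case needs no second scan.
      		 */
      		if (cpumask_and(&tmp, mask, cpu_online_mask))
      			send_flush_ipi(&tmp);	/* hypothetical sender */
      	}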
  16. 30 Dec, 2008 (1 commit)
    • bitmap: test for constant as well as small size for inline versions · 4b0bc0bc
      Committed by Rusty Russell
      Impact: reduce text size
      
      bitmap_zero et al have a fastpath for nbits <= BITS_PER_LONG, but this
      should really only apply where nbits is known at compile time (see the
      sketch after this entry).
      
      This only saves about 1200 bytes on an allyesconfig kernel, but with
      cpumasks going variable that number will increase.
      
         text       data     bss      dec       hex      filename
         35327852   5035607  6782976  47146435  2cf65c3  vmlinux-before
         35326640   5035607  6782976  47145223  2cf6107  vmlinux-after
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      4b0bc0bc
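      
      The resulting pattern looks roughly like this (a sketch of the in-tree
      approach; small_const_nbits is the helper the kernel uses for the test):
      
      	#define small_const_nbits(nbits) \
      		(__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)
      
      	static inline void bitmap_zero(unsigned long *dst, int nbits)
      	{
      		if (small_const_nbits(nbits))
      			*dst = 0UL;	/* constant small size: a single store */
      		else
      			memset(dst, 0, BITS_TO_LONGS(nbits) * sizeof(unsigned long));
      	}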
  17. 20 Oct, 2008 (1 commit)
  18. 17 Sep, 2008 (1 commit)
    • bitmap: add bitmap_copy_le() · ccbe329b
      Committed by David Vrabel
      bitmap_copy_le() copies a bitmap, putting the bits into little-endian
      order (i.e., each unsigned long word in the bitmap is put into
      little-endian order).
      
      The UWB stack uses bitmaps to manage Medium Access Slot availability,
      and these bitmaps need to be written to the hardware in LE order (a
      sketch follows this entry).
      Signed-off-by: David Vrabel <david.vrabel@csr.com>
      ccbe329b
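      
      A sketch of the conversion (close to, but not necessarily identical
      with, the committed code): copy word by word, byte-swapping to
      little-endian where needed.
      
      	void bitmap_copy_le(void *dst, const unsigned long *src, int nbits)
      	{
      		unsigned long *d = dst;
      		int i;
      
      		for (i = 0; i < BITS_TO_LONGS(nbits); i++) {
      			if (BITS_PER_LONG == 64)
      				d[i] = cpu_to_le64(src[i]);	/* 64-bit words */
      			else
      				d[i] = cpu_to_le32(src[i]);	/* 32-bit words */
      		}
      	}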