1. 03 Aug 2015: 1 commit
2. 20 Jul 2015: 2 commits
• rtmutex: Delete scriptable tester · 1b0b7c17
Committed by Davidlohr Bueso
No one uses this anymore, and this is not the first time the idea
of replacing it with a (now possible) userspace-side tester has
come up. The lock-stealing logic was removed long ago, when the
lock started being granted to the highest-priority waiter.
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Cc: Darren Hart <dvhart@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1435782588-4177-2-git-send-email-dave@stgolabs.net
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
• futex: Fault/error injection capabilities · ab51fbab
Committed by Davidlohr Bueso
Although futexes are well known for being a royal pita, we really
have very few debugging capabilities -- except for relying on
tglx's eye half the time.
      
By simply making use of the existing fault-injection machinery,
we can improve this situation, allowing the generation of artificial
uaddress faults and deadlock scenarios. Of course, when this is
disabled in production systems, the overhead of the failure checks
is practically zero -- so this is very cheap at the same time.
As future work, it would be nice to enhance trinity to make use
of this.
      
      There is a special tunable 'ignore-private', which can filter
      out private futexes. Given the tsk->make_it_fail filter and
      this option, pi futexes can be narrowed down pretty closely.
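
A minimal sketch of the pattern described (the hook and field names are assumptions based on the standard fault-injection machinery, not a copy of the patch):

#ifdef CONFIG_FAIL_FUTEX
static struct {
	struct fault_attr attr;
	bool ignore_private;	/* the 'ignore-private' tunable */
} fail_futex = {
	.attr = FAULT_ATTR_INITIALIZER,
	.ignore_private = false,
};

static bool should_fail_futex(bool fshared)
{
	if (fail_futex.ignore_private && !fshared)
		return false;	/* filter out private futexes */
	return should_fail(&fail_futex.attr, 1);
}
#else
static inline bool should_fail_futex(bool fshared)
{
	return false;	/* compiles away: near-zero cost when disabled */
}
#endif
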
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Darren Hart <darren@dvhart.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
Link: http://lkml.kernel.org/r/1435645562-975-3-git-send-email-dave@stgolabs.net
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
3. 18 Jul 2015: 4 commits
4. 09 Jul 2015: 1 commit
• rhashtable: fix for resize events during table walk · 142b942a
Committed by Phil Sutter
If rhashtable_walk_next() detects a resize operation in progress, it jumps
to the new table and continues walking that one. But it fails to drop
the reference to its current item, leading it to continue traversing
the bucket of the new table into which the current item was sorted and,
after reaching that bucket's end, to continue with the new table's
second bucket instead of the first one, thereby potentially missing
items.
      
This fixes the rhashtable runtime test for me. The bug was probably
introduced by Herbert Xu's patch eddee5ba ("rhashtable: Fix walker
behaviour during rehash"), though this was not explicitly tested.
      
      Fixes: eddee5ba ("rhashtable: Fix walker behaviour during rehash")
Signed-off-by: Phil Sutter <phil@nwl.cc>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
5. 06 Jul 2015: 1 commit
6. 04 Jul 2015: 1 commit
7. 01 Jul 2015: 5 commits
• genalloc: rename of_get_named_gen_pool() to of_gen_pool_get() · abdd4a70
Committed by Vladimir Zapolskiy
To be consistent with other kernel interface namings, rename
of_get_named_gen_pool() to of_gen_pool_get().  In the original function
name, the "_named" suffix refers to a device tree property, which contains
a phandle to a device; the corresponding device driver is assumed to
register a gen_pool object.
      
Because that relation is weak, and to avoid any confusion (e.g. in a
future scenario where gen_pool objects themselves are named), the suffix
is removed.
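
Illustratively, a caller changes like this (the "sram" property name is only an example):

pool = of_get_named_gen_pool(np, "sram", 0);	/* old */
pool = of_gen_pool_get(np, "sram", 0);		/* new */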
      
      [sfr@canb.auug.org.au: crypto/marvell/cesa - fix up for of_get_named_gen_pool() rename]
Signed-off-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com>
      Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
      Cc: Philipp Zabel <p.zabel@pengutronix.de>
      Cc: Shawn Guo <shawn.guo@linaro.org>
      Cc: Sascha Hauer <kernel@pengutronix.de>
      Cc: Alexandre Belloni <alexandre.belloni@free-electrons.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Cc: Vinod Koul <vinod.koul@intel.com>
      Cc: Takashi Iwai <tiwai@suse.de>
      Cc: Jaroslav Kysela <perex@perex.cz>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Boris BREZILLON <boris.brezillon@free-electrons.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• genalloc: rename dev_get_gen_pool() to gen_pool_get() · 0030edf2
Committed by Vladimir Zapolskiy
To be consistent with other genalloc interface namings, rename
dev_get_gen_pool() to gen_pool_get().  The "dev_" prefix is dropped,
since it merely restates the type of the function's argument and so
carries no useful information.
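
Illustratively (the argument list is unchanged by the rename):

pool = dev_get_gen_pool(dev);	/* old */
pool = gen_pool_get(dev);	/* new */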
      
      [akpm@linux-foundation.org: update arch/arm/mach-socfpga/pm.c]
Signed-off-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
      Cc: Philipp Zabel <p.zabel@pengutronix.de>
      Cc: Shawn Guo <shawn.guo@linaro.org>
      Cc: Sascha Hauer <kernel@pengutronix.de>
      Cc: Alexandre Belloni <alexandre.belloni@free-electrons.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Cc: Vinod Koul <vinod.koul@intel.com>
      Cc: Takashi Iwai <tiwai@suse.de>
      Cc: Jaroslav Kysela <perex@perex.cz>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
      Cc: Alan Tull <atull@opensource.altera.com>
      Cc: Dinh Nguyen <dinguyen@opensource.altera.com>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• drivers/scsi/scsi_debug.c: resolve sg buffer const-ness issue · 386ecb12
Committed by Dave Gordon
do_device_access() takes a separate parameter to indicate the direction of
data transfer, which it used to use to select the appropriate function out
of sg_pcopy_{to,from}_buffer().  However, these two functions now have
different const-ness in their buffer parameters, which leads to a compiler
warning here.

So this patch makes it bypass those wrappers and call the underlying
function sg_copy_buffer() directly; this has the same calling style as
do_device_access(), i.e. a separate direction-of-transfer parameter and no
pointers-to-const, so skipping the wrappers not only eliminates the
warning, it also makes the code simpler :)
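
An illustrative call (variable names are assumptions): the direction flag is passed straight to the shared implementation instead of selecting a wrapper:

/* the final bool selects the direction of transfer */
ret = sg_copy_buffer(sdb->table.sgl, sdb->table.nents,
		     arr, arr_len, 0, do_write);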
      
      [akpm@linux-foundation.org: fix very broken build]
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• lib/scatterlist: mark input buffer parameters as 'const' · 2a1bf8f9
Committed by Dave Gordon
      The 'buf' parameter of sg(p)copy_from_buffer() can and should be
      const-qualified, although because of the shared implementation of
      _to_buffer() and _from_buffer(), we have to cast this away internally.
      
      This means that callers who have a 'const' buffer containing the data to
      be copied to the sg-list no longer have to cast away the const-ness
      themselves.  It also enables improved coverage by code analysis tools.
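
The resulting wrapper shape (a sketch of what is described): const to callers, with the cast made once, internally:

size_t sg_pcopy_from_buffer(struct scatterlist *sgl, unsigned int nents,
			    const void *buf, size_t buflen, off_t skip)
{
	/* the shared implementation is direction-agnostic, hence the cast */
	return sg_copy_buffer(sgl, nents, (void *)buf, buflen, skip, false);
}
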
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• lib/scatterlist.c: fix kerneldoc for sg_pcopy_{to,from}_buffer() · 4dc7daf8
Committed by Dave Gordon
      The kerneldoc for the functions doesn't match the code; the last two
      parameters (buflen, skip) have been transposed, which is confusing,
      especially as they're both integral types and the compiler won't warn
      about swapping them.
      
      These functions and the kerneldoc were introduced in commit:
          df642cea lib/scatterlist: introduce sg_pcopy_from_buffer() ...
          Author: Akinobu Mita <akinobu.mita@gmail.com>
          Date:   Mon Jul 8 16:01:54 2013 -0700
      
    The only difference between sg_pcopy_{from,to}_buffer() and
    sg_copy_{from,to}_buffer() is an additional argument that
    specifies the number of bytes to skip in the SG list before
    copying.
      
      The functions have the extra argument at the end, but the kerneldoc
      lists it in penultimate position.
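
The corrected ordering, sketched as kerneldoc that matches the code:

/**
 * sg_pcopy_from_buffer - Copy from a linear buffer to an SG list
 * @sgl:	The SG list
 * @nents:	Number of SG entries
 * @buf:	Where to copy from
 * @buflen:	The number of bytes to copy
 * @skip:	Number of bytes to skip before copying
 */
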
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
8. 26 Jun 2015: 8 commits
• arch, x86: pmem api for ensuring durability of persistent memory updates · 61031952
Committed by Ross Zwisler
      Based on an original patch by Ross Zwisler [1].
      
Writes to persistent memory have the potential to be posted to the CPU
cache, CPU write buffers, and platform write buffers (memory controller)
before being committed to persistent media.  Provide APIs,
memcpy_to_pmem(), wmb_pmem(), and memremap_pmem(), to write data to
pmem and assert that it is durable in PMEM (a persistent linear address
range).  A '__pmem' attribute is added so sparse can track proper usage
of pointers to pmem.
      
This continues the status quo of pmem being x86-only for 4.2, but the
rework of ioremap and a wider implementation of memremap() will enable
other archs in 4.3.
      
      [1]: https://lists.01.org/pipermail/linux-nvdimm/2015-May/000932.html
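
An illustrative use of the three APIs together (a sketch; addresses, offsets, and sizes are placeholders):

void __pmem *virt = memremap_pmem(phys_addr, size);

memcpy_to_pmem(virt + off, src, len);	/* may sit in CPU/platform write buffers */
wmb_pmem();				/* drain them: the data is now durable */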
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      [djbw: various reworks]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
• lib/kobject.c: use strreplace() · 2abf114f
Committed by Rasmus Villemoes
There are probably not many slashes in the name, but starting over when
we see one feels wrong.
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• lib/string.c: introduce strreplace() · 94df2904
Committed by Rasmus Villemoes
      Strings are sometimes sanitized by replacing a certain character (often
      '/') by another (often '!').  In a few places, this is done the same way
      Schlemiel the Painter would do it.  Others are slightly smarter but still
      do multiple strchr() calls.  Introduce strreplace() to do this using a
      single function call and a single pass over the string.
      
One would expect the return value to be one of three things: void, s, or
the number of replacements made.  I chose the fourth option, returning a
pointer to the end of the string.  This is more likely to be useful (for
example, it allows the caller to avoid a strlen() call).
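
A minimal sketch of these semantics (the kernel version lives in lib/string.c):

char *strreplace(char *s, char old, char new)
{
	for (; *s; s++)
		if (*s == old)
			*s = new;
	return s;	/* points at the terminating NUL, i.e. the end */
}

A caller sanitizing a name can thus write end = strreplace(name, '/', '!');
and reuse 'end' instead of calling strlen().
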
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• radix-tree: replace preallocated node array with linked list · 9d2a8da0
Committed by Kirill A. Shutemov
Currently we use a per-cpu array to hold pointers to preallocated nodes.
Let's replace it with a linked list.  On x86_64 this saves 256 bytes in
the per-cpu ELF section, which may translate into freeing up 2MB of
memory for NR_CPUS==8192.
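
The idea in miniature (a sketch with assumed field names; the real patch threads the free list through the nodes themselves):

struct radix_tree_preload {
	unsigned nr;			/* nodes on this cpu's free list */
	struct radix_tree_node *nodes;	/* singly linked list replacing the
					 * fixed-size pointer array */
};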
      
      [akpm@linux-foundation.org: fix comment, coding style]
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• bitmap: remove explicit newline handling using scnprintf format string · 9cf79d11
Committed by Sudeep Holla
bitmap_print_to_pagebuf() uses scnprintf() to copy the cpumask/list to the
page buffer.  It handles the newline and the trailing null character
explicitly.

That is unnecessary, and also partially duplicated, as scnprintf() already
adds the trailing null character.  The newline can be passed to scnprintf()
through the format string.  This patch makes that simplification.
      
Theoretically, however, there is one behavioural difference: when the
buffer is too small, the original code would still output '\n' at the end,
while the new code (with this patch) just continues printing the formatted
string.  Since this function deals only with page buffers, it is highly
unlikely to hit that corner case.
      
This patch will help in auditing the users of bitmap_print_to_pagebuf() to
verify that the buffer passed is large enough, and in eventually getting
rid of it completely by replacing those users with direct scnprintf() calls.
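
The simplification in miniature (a sketch; the newline is folded into the format string):

n = list ? scnprintf(buf, PAGE_SIZE, "%*pbl\n", nmaskbits, maskp)
	 : scnprintf(buf, PAGE_SIZE, "%*pb\n", nmaskbits, maskp);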
      
      [akpm@linux-foundation.org: tweak comment]
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Suggested-by: Pawel Moll <Pawel.Moll@arm.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• lib/sort: Add 64 bit swap function · ca96ab85
Committed by Daniel Wagner
In case the call site does not provide a swap function, we use either a
32-bit or a generic swap function.  When swapping around pointers on
64-bit architectures, falling back to the generic swap function seems
like an unnecessary waste.
      
There are at least 9 users of sort() ('sort' is difficult to grep for),
and all of them use it without a customized swap function.  Furthermore,
they all swap pointers around:
      
      arch/x86/kernel/e820.c:sanitize_e820_map()
      arch/x86/mm/extable.c:sort_extable()
      drivers/acpi/fan.c:acpi_fan_get_fps()
      fs/btrfs/super.c:btrfs_descending_sort_devices()
      fs/xfs/libxfs/xfs_dir2_block.c:xfs_dir2_sf_to_block()
      kernel/range.c:clean_sort_range()
      mm/memcontrol.c:__mem_cgroup_usage_register_event()
      sound/pci/hda/hda_auto_parser.c:snd_hda_parse_pin_defcfg()
      sound/pci/hda/hda_auto_parser.c:sort_pins_by_sequence()
      
Obviously, we could improve the swap for other sizes as well,
but that would be overkill at this point.
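
The added swap path, in the same style as the existing u32_swap() (a sketch):

static void u64_swap(void *a, void *b, int size)
{
	u64 t = *(u64 *)a;

	*(u64 *)a = *(u64 *)b;
	*(u64 *)b = t;
}

sort() can then select it when no custom swap function is supplied, the
element size is sizeof(u64), and the pointers are suitably aligned.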
      
A simple test, sorting a 400-element array (trying to stay within one
page) with either u32_swap() or u64_swap(), shows that the theory
actually works.  The test was done on an x86_64 (Intel Xeon E5-4610)
machine.
      
      - swap_32:
      
      NumSamples = 100; Min = 48.00; Max = 49.00
      Mean = 48.320000; Variance = 0.217600; SD = 0.466476; Median 48.000000
      each * represents a count of 1
         48.0000 -    48.1000 [    68]: ********************************************************************
         48.1000 -    48.2000 [     0]:
         48.2000 -    48.3000 [     0]:
         48.3000 -    48.4000 [     0]:
         48.4000 -    48.5000 [     0]:
         48.5000 -    48.6000 [     0]:
         48.6000 -    48.7000 [     0]:
         48.7000 -    48.8000 [     0]:
         48.8000 -    48.9000 [     0]:
         48.9000 -    49.0000 [    32]: ********************************
      
      - swap_64:
      
      NumSamples = 100; Min = 44.00; Max = 63.00
      Mean = 48.250000; Variance = 18.687500; SD = 4.322904; Median 47.000000
      each * represents a count of 1
         44.0000 -    45.9000 [    15]: ***************
         45.9000 -    47.8000 [    37]: *************************************
         47.8000 -    49.7000 [    39]: ***************************************
         49.7000 -    51.6000 [     0]:
         51.6000 -    53.5000 [     0]:
         53.5000 -    55.4000 [     0]:
         55.4000 -    57.3000 [     0]:
         57.3000 -    59.2000 [     1]: *
         59.2000 -    61.1000 [     3]: ***
         61.1000 -    63.0000 [     5]: *****
      
      - swap_72:
      
      NumSamples = 100; Min = 53.00; Max = 71.00
      Mean = 55.070000; Variance = 21.565100; SD = 4.643824; Median 53.000000
      each * represents a count of 1
         53.0000 -    54.8000 [    73]: *************************************************************************
         54.8000 -    56.6000 [     9]: *********
         56.6000 -    58.4000 [     9]: *********
         58.4000 -    60.2000 [     0]:
         60.2000 -    62.0000 [     0]:
         62.0000 -    63.8000 [     0]:
         63.8000 -    65.6000 [     0]:
         65.6000 -    67.4000 [     1]: *
         67.4000 -    69.2000 [     4]: ****
         69.2000 -    71.0000 [     4]: ****
      
      - test program:
      
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/sort.h>
#include <linux/ktime.h>
#include <linux/irqflags.h>
#include <linux/random.h>	/* next_pseudo_random32() */
#include <asm/unaligned.h>	/* get_unaligned()/put_unaligned() */

/* "400 element array", per the description above */
#define ARRAY_ELEMENTS 400

static int cmp_32(const void *a, const void *b)
      {
      	u32 l = *(u32 *)a;
      	u32 r = *(u32 *)b;
      
      	if (l < r)
      		return -1;
      	if (l > r)
      		return 1;
      	return 0;
      }
      
      static int cmp_64(const void *a, const void *b)
      {
      	u64 l = *(u64 *)a;
      	u64 r = *(u64 *)b;
      
      	if (l < r)
      		return -1;
      	if (l > r)
      		return 1;
      	return 0;
      }
      
static int cmp_72(const void *a, const void *b)
{
	/* 9-byte elements are keyed by an unaligned u32 at their start */
	u32 l = get_unaligned((u32 *) a);
	u32 r = get_unaligned((u32 *) b);
      
      	if (l < r)
      		return -1;
      	if (l > r)
      		return 1;
      	return 0;
      }
      
      static void init_array32(void *array)
      {
      	u32 *a = array;
      	int i;
      
      	a[0] = 3821;
      	for (i = 1; i < ARRAY_ELEMENTS; i++)
      		a[i] = next_pseudo_random32(a[i-1]);
      }
      
      static void init_array64(void *array)
      {
      	u64 *a = array;
      	int i;
      
      	a[0] = 3821;
      	for (i = 1; i < ARRAY_ELEMENTS; i++)
      		a[i] = next_pseudo_random32(a[i-1]);
      }
      
      static void init_array72(void *array)
      {
      	char *p;
      	u32 v;
      	int i;
      
      	v = 3821;
      	for (i = 0; i < ARRAY_ELEMENTS; i++) {
      		p = (char *)array + (i * 9);
      		put_unaligned(v, (u32*) p);
      		v = next_pseudo_random32(v);
      	}
      }
      
      static void sort_test(void (*init)(void *array),
      		      int (*cmp) (const void *, const void *),
      		      void *array, size_t size)
      {
      	ktime_t start, stop;
      	int i;
      
      	for (i = 0; i < 10000; i++) {
      		init(array);
      
      		local_irq_disable();
      		start = ktime_get();
      
      		sort(array, ARRAY_ELEMENTS, size, cmp, NULL);
      
      		stop = ktime_get();
      		local_irq_enable();
      
		if (i > 10000 - 101)	/* report only the last 100 runs */
			pr_info("%lld\n", ktime_to_us(ktime_sub(stop, start)));
      	}
      }
      
      static void *create_array(size_t size)
      {
      	void *array;
      
      	array = kmalloc(ARRAY_ELEMENTS * size, GFP_KERNEL);
      	if (!array)
      		return NULL;
      
      	return array;
      }
      
      static int perform_test(size_t size)
      {
      	void *array;
      
      	array = create_array(size);
      	if (!array)
      		return -ENOMEM;
      
      	pr_info("test element size %d bytes\n", (int)size);
      	switch (size) {
      	case 4:
      		sort_test(init_array32, cmp_32, array, size);
      		break;
      	case 8:
      		sort_test(init_array64, cmp_64, array, size);
      		break;
      	case 9:
      		sort_test(init_array72, cmp_72, array, size);
      		break;
      	}
      	kfree(array);
      
      	return 0;
      }
      
      static int __init sort_tests_init(void)
      {
      	int err;
      
      	err = perform_test(sizeof(u32));
      	if (err)
      		return err;
      
      	err = perform_test(sizeof(u64));
      	if (err)
      		return err;
      
      	err = perform_test(sizeof(u64)+1);
      	if (err)
      		return err;
      
      	return 0;
      }
      
      static void __exit sort_tests_exit(void)
      {
      }
      
      module_init(sort_tests_init);
      module_exit(sort_tests_exit);
      
      MODULE_LICENSE("GPL v2");
      MODULE_AUTHOR("Daniel Wagner");
MODULE_DESCRIPTION("sort performance tests");
Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• hexdump: Make test data really const · 79e23d57
Committed by Geert Uytterhoeven
      The test data arrays, containing pointers to test strings, are never
      modified, so they can be const, too.  Hence mark them "const" and
      "__initconst".
      
      This moves 28 pointers from ".init.data" to ".init.rodata".
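
The resulting declaration shape (array name from lib/test-hexdump.c; the entries here are illustrative):

static const char * const test_data_1_le[] __initconst = {
	"4c", "d3", "02", "5b",
};
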
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• __bitmap_parselist: fix bug in empty string handling · 2528a8b8
Committed by Chris Metcalf
      bitmap_parselist("", &mask, nmaskbits) will erroneously set bit zero in
      the mask.  The same bug is visible in cpumask_parselist() since it is
      layered on top of the bitmask code, e.g.  if you boot with "isolcpus=",
      you will actually end up with cpu zero isolated.
      
      The bug was introduced in commit 4b060420 ("bitmap, irq: add
      smp_affinity_list interface to /proc/irq") when bitmap_parselist() was
      generalized to support userspace as well as kernelspace.
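
The expected behaviour after the fix, in miniature (illustrative):

bitmap_zero(mask, nmaskbits);
bitmap_parselist("", mask, nmaskbits);
/* mask must stay empty; before the fix, bit 0 came back set */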
      
      Fixes: 4b060420 ("bitmap, irq: add smp_affinity_list interface to /proc/irq")
Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
9. 19 Jun 2015: 2 commits
10. 17 Jun 2015: 1 commit
• lib/list_sort: use late_initcall to hook in self tests · 4c7217f1
Committed by Paul Gortmaker
This was using module_init(), but there is no way this code can
be modular.  In the non-modular case, a module_init() becomes a
device_initcall(), but this really isn't a device.  So we should
choose a more appropriate initcall bucket to put it in.
      
Assuming boot-time self-tests need to be observed over a console
to be useful, and that the console device could possibly not be
fully functional until after device_initcall, we move this to the
late_initcall bucket, which comes immediately after device_initcall.
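
The change in miniature (the entry-point name is taken from the list_sort self-test):

/* was: module_init(list_sort_test); */
late_initcall(list_sort_test);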
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
11. 16 Jun 2015: 1 commit
12. 13 Jun 2015: 1 commit
13. 11 Jun 2015: 3 commits
14. 07 Jun 2015: 1 commit
15. 06 Jun 2015: 1 commit
16. 03 Jun 2015: 3 commits
17. 31 May 2015: 1 commit
• lib: introduce crc_t10dif_update() · 10081fb5
Committed by Akinobu Mita
This introduces crc_t10dif_update(), which makes it possible to calculate
the CRC of a block that straddles multiple SG elements by calling the
function multiple times.  It also converts crc_t10dif() to use
crc_t10dif_update(), as they are almost the same.
      
      (remove extra function call in crc_t10dif() and crc_t10dif_update -
       Tim + Herbert)
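
An illustrative incremental use across SG elements (a sketch; the key point is that the running CRC is fed back in as the seed):

__u16 crc = 0;
struct scatterlist *sg;
int i;

for_each_sg(sgl, sg, nents, i)
	crc = crc_t10dif_update(crc, sg_virt(sg), sg->length);
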
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: linux-crypto@vger.kernel.org
      Cc: Nicholas Bellinger <nab@linux-iscsi.org>
      Cc: Sagi Grimberg <sagig@mellanox.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: target-devel@vger.kernel.org
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
18. 29 May 2015: 1 commit
• percpu_counter: batch size aware __percpu_counter_compare() · 80188b0d
Committed by Dave Chinner
XFS uses non-standard batch sizes to avoid frequent global
counter updates on its allocated-inode counters, as they increment
or decrement in batches of 64 inodes.  Hence the standard percpu
counter batch of 32 means that the counter is effectively a global
counter.  Currently XFS uses a batch size of 128 so that it doesn't
take the global lock on every single modification.
      
However, XFS also needs to compare accurately against zero, which
means we need to use percpu_counter_compare(); that has a
hard-coded batch size of 32, and hence it will spuriously fail to
detect when it is supposed to use precise comparisons, and the
accounting goes wrong.
      
Add __percpu_counter_compare(), which takes a custom batch size, so we
can use it sanely in XFS, and refactor percpu_counter_compare() to use it.
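
An illustrative XFS-style use (the batch constant and counter field are assumptions based on the description above):

#define XFS_ICOUNT_BATCH	128

/* precise comparison kicks in only near zero; elsewhere the cheap
 * percpu estimate is good enough */
if (__percpu_counter_compare(&mp->m_icount, 0, XFS_ICOUNT_BATCH) < 0)
	return -ENOSPC;
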
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Dave Chinner <david@fromorbit.com>
19. 28 May 2015: 2 commits
• rbtree: Make lockless searches non-fatal · d72da4a4
Committed by Peter Zijlstra
      Change the insert and erase code such that lockless searches are
      non-fatal.
      
In and of itself an rbtree cannot be correctly searched while it is
being modified; we can, however, provide weaker guarantees that allow
the rbtree to be used in conjunction with other techniques, such as
latches; see 9b0fd802 ("seqcount: Add raw_write_seqcount_latch()").
      
      For this to work we need the following guarantees from the rbtree
      code:
      
 1) a lockless reader must not see partial stores, as this would allow it
    to observe nodes that are invalid memory.

 2) there must not be (temporary) loops in the tree structure in the
    modifier's program order, as this would cause a lookup which
    interrupts the modifier to get stuck indefinitely.
      
For 1) we must use WRITE_ONCE() for all updates to the tree structure;
in particular this patch only covers rb_{left,right}, as those are the
only elements required for simple searches.

This generates slightly worse code, probably because of the volatile
accesses.  But in pointer-chasing-heavy code a few more instructions
should not matter.
      
For 2) I have carefully audited the code, drawn every intermediate
link state, and not found a loop.
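
Guarantee 1) in miniature (a hypothetical helper, not the full patch): child pointers are published with WRITE_ONCE() so a concurrent lockless reader can never observe a torn store:

static inline void rb_publish_left(struct rb_node *parent,
				   struct rb_node *new)
{
	WRITE_ONCE(parent->rb_left, new);	/* single full-word store */
}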
      
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <David.Woodhouse@intel.com>
      Cc: Rik van Riel <riel@redhat.com>
Reviewed-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
• module: Sanitize RCU usage and locking · 0be964be
Committed by Peter Zijlstra
Currently the RCU usage in module code is an inconsistent mess of RCU
and RCU-sched; this is broken for CONFIG_PREEMPT, where synchronize_rcu()
does not imply synchronize_sched().

Most usage sites use preempt_{dis,en}able(), which is RCU-sched, but
(most of) the modification sites use synchronize_rcu(), with the
exception of the module bug list, which actually uses RCU.
      
      Convert everything over to RCU-sched.
      
Furthermore, add lockdep asserts to all sites, because it is not at all
clear to me that the required locking is observed, especially on
exported functions.
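
The pairing the message converges on, in miniature (illustrative):

/* reader side: a preempt-off section is an RCU-sched read side */
preempt_disable();
mod = __module_address(addr);
preempt_enable();

/* writer side, after unlinking a module: must match the readers */
synchronize_sched();	/* synchronize_rcu() is not enough under CONFIG_PREEMPT */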
      
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Acked-by: N"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: NRusty Russell <rusty@rustcorp.com.au>