1. 14 February 2015 (14 commits)
• mm: slub: share object_err function · 75c66def
  Andrey Ryabinin authored
Remove the static qualifier and add the function declaration to
linux/slub_def.h so that it can be used by the kernel address sanitizer.
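
A sketch of the resulting declaration; the parameter list follows SLUB's
debug helpers of this era and is an assumption, not quoted from the patch:

    /* linux/slub_def.h (sketch) */
    void object_err(struct kmem_cache *s, struct page *page,
                    u8 *object, char *reason);
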
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mm: slub: introduce virt_to_obj function · 912f5fbf
  Andrey Ryabinin authored
virt_to_obj() takes the kmem_cache address, the address of the slab page,
and an address x pointing somewhere inside a slab object; it returns the
address of the beginning of that object.
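
A sketch of how such a helper can be implemented, assuming SLUB's fixed
per-object stride s->size (arithmetic on void * is a GCC extension the
kernel relies on):

    static inline void *virt_to_obj(struct kmem_cache *s,
                                    void *slab_page, void *x)
    {
            /* round x down to the start of its object in the slab page */
            return x - ((x - slab_page) % s->size);
    }
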
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mm: page_alloc: add kasan hooks on alloc and free paths · b8c73fc2
  Andrey Ryabinin authored
Add kernel address sanitizer hooks to mark an allocated page's addresses
as accessible in the corresponding shadow region, and to mark freed pages
as inaccessible.
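
A sketch of the shape these hooks take; the poison/unpoison helper names
and the KASAN_FREE_PAGE value follow the mm/kasan code introduced in this
series, so treat the details as illustrative:

    void kasan_alloc_pages(struct page *page, unsigned int order)
    {
            if (likely(!PageHighMem(page)))
                    kasan_unpoison_shadow(page_address(page),
                                          PAGE_SIZE << order);
    }

    void kasan_free_pages(struct page *page, unsigned int order)
    {
            if (likely(!PageHighMem(page)))
                    kasan_poison_shadow(page_address(page),
                                        PAGE_SIZE << order,
                                        KASAN_FREE_PAGE);
    }
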
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• kasan: add kernel address sanitizer infrastructure · 0b24becc
  Andrey Ryabinin authored
Kernel Address Sanitizer (KASan) is a dynamic memory error detector.  It
provides a fast and comprehensive solution for finding use-after-free and
out-of-bounds bugs.
      
KASAN uses compile-time instrumentation to check every memory access,
therefore GCC > v4.9.2 is required.  v4.9.2 almost works, but it puts
symbol aliases into the wrong section, which breaks KASan's
instrumentation of globals.
      
This patch only adds the infrastructure for the kernel address sanitizer;
it's not available for use yet.  The idea and some of the code were
borrowed from [1].
      
      Basic idea:
      
The main idea of KASAN is to use shadow memory to record whether each byte
of memory is safe to access or not, and to use the compiler's
instrumentation to check the shadow memory on each memory access.
      
Address sanitizer uses 1/8 of the kernel's addressable memory for shadow
memory, and uses a direct mapping with a scale and an offset to translate
a memory address to its corresponding shadow address.
      
Here is the function that translates an address to its corresponding shadow address:
      
           unsigned long kasan_mem_to_shadow(unsigned long addr)
           {
                      return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
           }
      
      where KASAN_SHADOW_SCALE_SHIFT = 3.
      
So for every 8 bytes of memory there is one corresponding byte of shadow
memory.  The following encoding is used for each shadow byte: 0 means that
all 8 bytes of the corresponding memory region are valid for access;
k (1 <= k <= 7) means that the first k bytes are valid for access and the
other (8 - k) bytes are not; any negative value indicates that the entire
8-byte region is inaccessible.  Different negative values are used to
distinguish between different kinds of inaccessible memory (redzones,
freed memory); see mm/kasan/kasan.h.
      
To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr),
__asan_store*(addr)) before each memory access of size 1, 2, 4, 8 or 16.

These functions check whether the memory region is valid to access by
consulting the corresponding shadow memory.  If the access is not valid,
an error is printed.
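
A simplified sketch of such a check, for an access that fits within a
single 8-byte granule, built on the kasan_mem_to_shadow() mapping above
(KASAN_SHADOW_MASK is assumed to be (1 << KASAN_SHADOW_SCALE_SHIFT) - 1;
this is illustrative, not the literal kernel code):

    static bool access_is_valid(unsigned long addr, size_t size)
    {
            s8 shadow = *(s8 *)kasan_mem_to_shadow(addr);

            if (shadow == 0)        /* whole granule accessible */
                    return true;
            if (shadow < 0)         /* redzone, freed memory, ... */
                    return false;
            /* 1..7: only the first 'shadow' bytes are accessible */
            return (addr & KASAN_SHADOW_MASK) + size <= shadow;
    }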
      
      Historical background of the address sanitizer from Dmitry Vyukov:
      
      	"We've developed the set of tools, AddressSanitizer (Asan),
      	ThreadSanitizer and MemorySanitizer, for user space. We actively use
      	them for testing inside of Google (continuous testing, fuzzing,
      	running prod services). To date the tools have found more than 10'000
      	scary bugs in Chromium, Google internal codebase and various
      	open-source projects (Firefox, OpenSSL, gcc, clang, ffmpeg, MySQL and
      	lots of others): [2] [3] [4].
      	The tools are part of both gcc and clang compilers.
      
      	We have not yet done massive testing under the Kernel AddressSanitizer
      	(it's kind of chicken and egg problem, you need it to be upstream to
      	start applying it extensively). To date it has found about 50 bugs.
      	Bugs that we've found in upstream kernel are listed in [5].
	We've also found ~20 bugs in our internal version of the kernel. Also
      	people from Samsung and Oracle have found some.
      
      	[...]
      
      	As others noted, the main feature of AddressSanitizer is its
      	performance due to inline compiler instrumentation and simple linear
      	shadow memory. User-space Asan has ~2x slowdown on computational
      	programs and ~2x memory consumption increase. Taking into account that
      	kernel usually consumes only small fraction of CPU and memory when
      	running real user-space programs, I would expect that kernel Asan will
      	have ~10-30% slowdown and similar memory consumption increase (when we
      	finish all tuning).
      
      	I agree that Asan can well replace kmemcheck. We have plans to start
	working on Kernel MemorySanitizer that finds uses of uninitialized
      	memory. Asan+Msan will provide feature-parity with kmemcheck. As
      	others noted, Asan will unlikely replace debug slab and pagealloc that
      	can be enabled at runtime. Asan uses compiler instrumentation, so even
      	if it is disabled, it still incurs visible overheads.
      
      	Asan technology is easily portable to other architectures. Compiler
      	instrumentation is fully portable. Runtime has some arch-dependent
      	parts like shadow mapping and atomic operation interception. They are
      	relatively easy to port."
      
      Comparison with other debugging features:
      ========================================
      
      KMEMCHECK:
      
        - KASan can do almost everything that kmemcheck can.  KASan uses
          compile-time instrumentation, which makes it significantly faster than
          kmemcheck.  The only advantage of kmemcheck over KASan is detection of
          uninitialized memory reads.
      
    Some brief performance testing showed that KASan can be roughly
    500-600 times faster than kmemcheck:
      
      $ netperf -l 30
      		MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
      		Recv   Send    Send
      		Socket Socket  Message  Elapsed
      		Size   Size    Size     Time     Throughput
      		bytes  bytes   bytes    secs.    10^6bits/sec
      
      no debug:	87380  16384  16384    30.00    41624.72
      
      kasan inline:	87380  16384  16384    30.00    12870.54
      
      kasan outline:	87380  16384  16384    30.00    10586.39
      
      kmemcheck: 	87380  16384  16384    30.03      20.23
      
  - Also, kmemcheck couldn't work with several CPUs: it always sets the
    number of CPUs to 1.  KASan doesn't have such a limitation.
      
      DEBUG_PAGEALLOC:
	- KASan is slower than DEBUG_PAGEALLOC, but KASan works at sub-page
	  granularity, so it is able to find more bugs.
      
      SLUB_DEBUG (poisoning, redzones):
      	- SLUB_DEBUG has lower overhead than KASan.
      
	- SLUB_DEBUG is in most cases not able to detect bad reads, while
	  KASan is able to detect both bad reads and bad writes.
      
	- In some cases (e.g. an overwritten redzone) SLUB_DEBUG detects
	  bugs only on allocation/freeing of the object.  KASan catches a
	  bug at the moment it happens, so we always know the exact place
	  of the first bad read/write.
      
      [1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
      [2] https://code.google.com/p/address-sanitizer/wiki/FoundBugs
      [3] https://code.google.com/p/thread-sanitizer/wiki/FoundBugs
      [4] https://code.google.com/p/memory-sanitizer/wiki/FoundBugs
      [5] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel#Trophies
      
      Based on work by Andrey Konovalov.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• compiler: introduce __alias(symbol) shortcut · cb4188ac
  Andrey Ryabinin authored
To be consistent with other compiler attributes, introduce the
__alias(symbol) macro, expanding into __attribute__((alias(#symbol))).
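
The macro itself, plus a hypothetical usage (the function names below are
illustrative, not from this patch); new_name becomes an alternate symbol
for the existing definition of real_impl, which must live in the same
translation unit:

    #define __alias(symbol) __attribute__((alias(#symbol)))

    int real_impl(int x)
    {
            return x * 2;
    }
    /* 'new_name' now resolves to the same code as 'real_impl' */
    int new_name(int x) __alias(real_impl);
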
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• bitmap, cpumask, nodemask: remove dedicated formatting functions · 46385326
  Tejun Heo authored
      Now that all bitmap formatting usages have been converted to
      '%*pb[l]', the separate formatting functions are unnecessary.  The
      following functions are removed.
      
      * bitmap_scn[list]printf()
      * cpumask_scnprintf(), cpulist_scnprintf()
      * [__]nodemask_scnprintf(), [__]nodelist_scnprintf()
      * seq_bitmap[_list](), seq_cpumask[_list](), seq_nodemask[_list]()
      * seq_buf_bitmask()
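
With these helpers gone, callers format bitmaps directly; a sketch of the
conversion pattern (buffer and variable names illustrative):

    /* before: format into a temporary buffer, then print */
    char buf[128];
    bitmap_scnprintf(buf, sizeof(buf), mask, nbits);
    printk("%s\n", buf);

    /* after: print the bitmap directly via the '%*pb' extension */
    printk("%*pb\n", nbits, mask);
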
Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• cpumask, nodemask: implement cpumask/nodemask_pr_args() · f1bbc032
  Tejun Heo authored
The printf family of functions can now format bitmaps using '%*pb[l]',
and all cpumask and nodemask formatting will be converted to use it.  To
ease printing these masks with '%*pb[l]', which requires two arguments
(the number of bits and the actual bitmap), this patch implements
cpumask_pr_args() and nodemask_pr_args(), which can be used to supply
those arguments.
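
A plausible shape for the helpers and their use; the exact definitions
are assumptions based on the description above:

    #define cpumask_pr_args(maskp)   nr_cpu_ids, cpumask_bits(maskp)
    #define nodemask_pr_args(maskp)  MAX_NUMNODES, (maskp)->bits

    printk("online cpus: %*pbl\n", cpumask_pr_args(cpu_online_mask));
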
Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: "John W. Linville" <linville@tuxdriver.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Mike Travis <travis@sgi.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steffen Klassert <steffen.klassert@secunet.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• cpumask: always use nr_cpu_ids in formatting and parsing functions · 513e3d2d
  Tejun Heo authored
bitmap implements two variants of scnprintf functions to format a bitmap
into a string, and cpumask and nodemask wrap them to provide equivalent
interfaces.  The scnprintf family of functions requires a string buffer as
an output target, which complicates code paths that just want to print the
mask through printk for informational or debug purposes, as they have to
worry about how large the buffer should be and whether it's too large to
allocate on the stack.
      
Neither cpumask nor nodemask provides a guideline on how large the target
buffer should be, forcing users to come up with their own solutions: some
allocate an arbitrarily sized buffer which is small enough to allocate on
the stack but may be too short in corner cases, others come up with a
custom upper-limit calculation considering the output format, some
allocate the buffer dynamically, while one resorted to using a lock to
synchronize access to a static buffer.
      
This is an artificial problem which is being solved repeatedly for no
benefit.  In a lot of cases, the output area already exists and can be
targeted directly, making the intermediate buffer unnecessary.  This
patchset teaches the printf family of functions how to format bitmaps and
replaces the dedicated formatting functions with that.
      
      Pointer formatting is extended to cover bitmap formatting.  It uses the
      field width for the number of bits instead of precision.  The format used
      is '%*pb[l]', with the optional trailing 'l' specifying list format
      instead of hex masks.  For more details, please see 0002.
      
      This patch (of 31):
      
      Currently, the formatting and parsing functions in cpumask.h use
      nr_cpumask_bits like other cpumask functions; however, nr_cpumask_bits
      is either NR_CPUS or nr_cpu_ids depending on CONFIG_CPUMASK_OFFSTACK.
      This leads to inconsistent behaviors.
      
      With CONFIG_NR_CPUS=512 and !CONFIG_CPUMASK_OFFSTACK
      
        # cat /sys/devices/virtual/net/lo/queues/rx-0/rps_cpus
        00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000
        # cat /proc/self/status | grep Cpus_allowed:
        Cpus_allowed:   f
      
      With CONFIG_NR_CPUS=1024 and CONFIG_CPUMASK_OFFSTACK (fedora default)
      
        # cat /sys/devices/virtual/net/lo/queues/rx-0/rps_cpus
        0
        # cat /proc/self/status | grep Cpus_allowed:
        Cpus_allowed:   f
      
Note that /proc/self/status always uses nr_cpu_ids regardless of config.
This is because the seq cpumask formatting functions always use
nr_cpu_ids.
      
Given that the same output fields may already switch between the two
forms, converging on always using nr_cpu_ids is unlikely to surprise
userland.  This patch updates the formatting and parsing functions in
cpumask.h to always use nr_cpu_ids.  There's no point in dealing with
CPUs which aren't even possible on the machine.
Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: "John W. Linville" <linville@tuxdriver.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Mike Travis <travis@sgi.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steffen Klassert <steffen.klassert@secunet.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• kernfs: remove KERNFS_STATIC_NAME · dfeb0750
  Tejun Heo authored
      When a new kernfs node is created, KERNFS_STATIC_NAME is used to avoid
      making a separate copy of its name.  It's currently only used for sysfs
      attributes whose filenames are required to stay accessible and unchanged.
There are rare exceptions where these names are allocated and formatted
dynamically, but in the vast majority of cases they're consts in the
rodata section.
      
      Now that kernfs is converted to use kstrdup_const() and kfree_const(),
      there's little point in keeping KERNFS_STATIC_NAME around.  Remove it.
Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Andrzej Hajda <a.hajda@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mm/util: add kstrdup_const · a4bb1e43
  Andrzej Hajda authored
kstrdup() is often used to duplicate strings where neither the source nor
the destination will ever be modified.  In such cases we can just reuse
the source instead of duplicating it.  The problem is that we must be sure
that the source is non-modifiable and that its lifetime is long enough.
      
I suspect good candidates for such strings are strings located in the
kernel .rodata section: they cannot be modified because the section is
read-only, and their lifetime is equal to the kernel's lifetime.
      
This small patchset proposes an alternative version of kstrdup,
kstrdup_const, which returns the source string if it is located in
.rodata and otherwise falls back to kstrdup.  To verify whether the
source is in .rodata, the function checks if its address lies between the
sentinels __start_rodata and __end_rodata.  I guess it should work on all
architectures.
      
The main patch is accompanied by four patches constifying kstrdup for
cases where the situation described above happens frequently.
      
I have tested the patchset on a mobile platform (exynos4210-trats) and it
saves 3272 string allocations.  Since the minimal allocation is 32 or 64
bytes depending on Kconfig options, the patchset saves about 100KB or
200KB of memory respectively.
      
      Stats from tested platform show that the main offender is sysfs:
      
      By caller:
        2260 __kernfs_new_node
          631 clk_register+0xc8/0x1b8
          318 clk_register+0x34/0x1b8
            51 kmem_cache_create
            12 alloc_vfsmnt
      
      By string (with count >= 5):
          883 power
          876 subsystem
          135 parameters
          132 device
           61 iommu_group
          ...
      
      This patch (of 5):
      
Add an alternative version of kstrdup which returns a pointer to a
constant char array.  The function checks whether the input string is in
a persistent, read-only memory section; if so it returns the input
string, otherwise it falls back to kstrdup.

kstrdup_const is accompanied by kfree_const, which performs conditional
memory deallocation of the string.
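
A minimal sketch matching that description, with is_kernel_rodata()
standing for the __start_rodata/__end_rodata range check (details
assumed):

    const char *kstrdup_const(const char *s, gfp_t gfp)
    {
            if (is_kernel_rodata((unsigned long)s))
                    return s;               /* safe to share, never freed */
            return kstrdup(s, gfp);
    }

    void kfree_const(const void *x)
    {
            if (!is_kernel_rodata((unsigned long)x))
                    kfree(x);               /* only free real duplicates */
    }
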
Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Kyungmin Park <kyungmin.park@samsung.com>
      Cc: Mike Turquette <mturquette@linaro.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Greg KH <greg@kroah.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• lib: bitmap: change bitmap_shift_left to take unsigned parameters · dba94c25
  Rasmus Villemoes authored
gcc can generate slightly better code for expressions like "nbits %
BITS_PER_LONG" when it knows nbits is not negative.  Since negative-size
bitmaps or shift amounts don't make sense, change these parameters of
bitmap_shift_left to unsigned.
      
      If off >= lim (which requires shift >= nbits), k is initialized with a
      large positive value, but since I've let k continue to be signed, the loop
      will never run and dst will be zeroed as expected.  Inside the loop, k is
      guaranteed to be non-negative, so the fact that it is promoted to unsigned
      in the various expressions it appears in is harmless.
      
      Also use "shift" and "nbits" consistently for the parameter names.
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• lib: bitmap: change bitmap_shift_right to take unsigned parameters · 2fbad299
  Rasmus Villemoes authored
      I've previously changed the nbits parameter of most bitmap_* functions to
      unsigned; now it is bitmap_shift_{left,right}'s turn.  This alone saves
      some .text, but while at it I found that there were a few other things one
      could do.  The end result of these seven patches is
      
        $ scripts/bloat-o-meter /tmp/bitmap.o.{old,new}
        add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-328 (-328)
        function                                     old     new   delta
        __bitmap_shift_right                         384     226    -158
        __bitmap_shift_left                          306     136    -170
      
      and less importantly also a smaller stack footprint
      
        $ stack-o-meter.pl master bitmap
        file                 function                       old  new  delta
        lib/bitmap.o         __bitmap_shift_right             24    8  -16
        lib/bitmap.o         __bitmap_shift_left              24    0  -24
      
      For each pair of 0 <= shift <= nbits <= 256 I've tested the end result
      with a few randomly filled src buffers (including garbage beyond nbits),
      in each case verifying that the shift {left,right}-most bits of dst are
      zero and the remaining nbits-shift bits correspond to src, so I'm fairly
      confident I didn't screw up.  That hasn't stopped me from being wrong
      before, though.
      
      This patch (of 7):
      
gcc can generate slightly better code for expressions like "nbits %
BITS_PER_LONG" when it knows nbits is not negative.  Since negative-size
bitmaps or shift amounts don't make sense, change these parameters of
bitmap_shift_right to unsigned.
      
      The expressions involving "lim - 1" are still ok, since if lim is 0 the
      loop is never executed.
      
      Also use "shift" and "nbits" consistently for the parameter names.
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• lib/bitmap.c: elide bitmap_copy_le on little-endian · e8f24278
  Rasmus Villemoes authored
      On little-endian, there's no reason to have an extra, presumably less
      efficient, way of copying a bitmap.
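
One plausible way to elide it, given the unified prototype from the
bitmap_copy_le prototype change below (the exact #ifdef arrangement is an
assumption):

    #ifdef __BIG_ENDIAN
    void bitmap_copy_le(unsigned long *dst, const unsigned long *src,
                        unsigned int nbits);
    #else
    #define bitmap_copy_le bitmap_copy
    #endif
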
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• lib/bitmap.c: change prototype of bitmap_copy_le · 9b6c2d2e
  Rasmus Villemoes authored
      Make the prototype of bitmap_copy_le the same as bitmap_copy's.  All other
      bitmap_* functions take unsigned long* parameters; there's no reason this
      should be special.
      
      The only current user is the static inline uwb_mas_bm_copy_le, which
      already does the void* laundering, so the end users can pass their u8 or
      __le32 buffers without a cast.
      
      Furthermore, this allows us to simply let bitmap_copy_le be an alias for
      bitmap_copy on little-endian; see next patch.
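
A before/after sketch of the prototype; the old void * form is inferred
from the "void * laundering" remark above, so treat it as an assumption:

    /* before */
    void bitmap_copy_le(void *dst, const unsigned long *src, int nbits);
    /* after, matching bitmap_copy */
    void bitmap_copy_le(unsigned long *dst, const unsigned long *src,
                        unsigned int nbits);
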
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2. 13 February 2015 (26 commits)