1. 08 Feb 2017, 1 commit
  2. 04 Feb 2017, 1 commit
    • kbuild: modversions: add infrastructure for emitting relative CRCs · 56067812
      Committed by Ard Biesheuvel
      This adds the kbuild infrastructure that will allow architectures to emit
      vmlinux symbol CRCs as 32-bit offsets to another location in the kernel
      where the actual value is stored.  This works around problems with CRCs
      being mistaken for relocatable symbols on kernels that self-relocate at
      runtime (i.e., powerpc with CONFIG_RELOCATABLE=y).
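      As a minimal sketch of the idea (illustrative only; the loader-side helper
      below is an assumption, not part of this kbuild patch), such a reference
      can be turned back into the CRC value by adding the stored 32-bit offset
      to the address of the reference itself:

        #include <linux/types.h>

        /* Sketch: 'crc' points at the emitted 32-bit offset rather than at the
         * CRC itself; the real value lives at that offset from the reference. */
        static u32 resolve_rel_crc_sketch(const s32 *crc)
        {
                return *(u32 *)((void *)crc + *crc);
        }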
      
      For the kbuild side of things, this comes down to the following:
      
       - introducing a Kconfig symbol MODULE_REL_CRCS
      
       - adding a -R switch to genksyms to instruct it to emit the CRC symbols
         as references into the .rodata section
      
       - making modpost distinguish such references from absolute CRC symbols
         by the section index (SHN_ABS)
      
       - making kallsyms disregard non-absolute symbols with a __crc_ prefix
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      56067812
  3. 24 Jan 2017, 1 commit
  4. 19 Jan 2017, 1 commit
  5. 17 Jan 2017, 1 commit
  6. 11 Jan 2017, 1 commit
    • cgroup: move CONFIG_SOCK_CGROUP_DATA to init/Kconfig · 73b35147
      Committed by Arnd Bergmann
      We now 'select SOCK_CGROUP_DATA' but Kconfig complains that this is
      not right when CONFIG_NET is disabled and there is no socket interface:
      
      warning: (CGROUP_BPF) selects SOCK_CGROUP_DATA which has unmet direct dependencies (NET)
      
      I don't know what the correct solution for this is, but simply removing
      the dependency on NET from SOCK_CGROUP_DATA by moving it out of the
      'if NET' section avoids the warning and does not produce other build
      errors.
      
      Fixes: 483c4933 ("cgroup: Fix CGROUP_BPF config")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      73b35147
  7. 18 Dec 2016, 1 commit
  8. 30 Nov 2016, 1 commit
    • Re-enable CONFIG_MODVERSIONS in a slightly weaker form · faaae2a5
      Committed by Linus Torvalds
      This enables CONFIG_MODVERSIONS again, but allows for missing symbol CRC
      information in order to work around the issue that newer binutils
      versions seem to occasionally drop the CRC on the floor.  binutils 2.26
      seems to work fine, while binutils 2.27 seems to break MODVERSIONS of
      symbols that have been defined in assembler files.
      
      [ We've had random missing CRCs before - it may be an old problem that
        is just now reliably triggered with the weak asm symbols and a new
        version of binutils ]
      
      Some day I really do want to remove MODVERSIONS entirely.  Sadly, today
      does not appear to be that day: Debian people apparently do want the
      option to enable MODVERSIONS to make it easier to have external modules
      across kernel versions, and this seems to be a fairly minimal fix for
      the annoying problem.
      
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Acked-by: Michal Marek <mmarek@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      faaae2a5
  9. 26 Nov 2016, 2 commits
    • Fix subtle CONFIG_MODVERSIONS problems · cd3caefb
      Committed by Linus Torvalds
      CONFIG_MODVERSIONS has been broken for pretty much the whole 4.9 series,
      and quite frankly, nobody has cared very deeply.  We absolutely know how
      to fix it, and it's not _complicated_, but it's not exactly pretty
      either.
      
      This oneliner fixes it without the ugliness, and allows for further
      future cleanups.
      
        "We've secretly replaced their regular MODVERSIONS with nothing at
         all, let's see if they notice"
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cd3caefb
    • cgroup: add support for eBPF programs · 30070984
      Committed by Daniel Mack
      This patch adds two sets of eBPF program pointers to struct cgroup:
      one for programs that are attached directly to a cgroup, and one for
      programs that are effective for it.
      
      To illustrate the logic behind that, assume the following example
      cgroup hierarchy.
      
        A - B - C
              \ D - E
      
      If only B has a program attached, it will be effective for B, C, D
      and E. If D then attaches a program itself, that will be effective for
      both D and E, and the program in B will only affect B and C. Only one
      program of a given type is effective for a cgroup.
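      One way to picture the "effective" rule is a walk towards the root that
      stops at the first cgroup with an attached program of the requested type.
      The following is only an illustrative sketch (the names and the
      lookup-time walk are assumptions, not the kernel's actual data layout):

        struct bpf_prog;

        struct cgroup_sketch {
                struct cgroup_sketch *parent;
                struct bpf_prog *attached[2];     /* e.g. 0 = ingress, 1 = egress */
        };

        /* Return the program that is effective for @cgrp, i.e. the one attached
         * to the nearest ancestor (including @cgrp itself), or NULL if none. */
        static struct bpf_prog *effective_prog(struct cgroup_sketch *cgrp, int type)
        {
                for (; cgrp; cgrp = cgrp->parent)
                        if (cgrp->attached[type])
                                return cgrp->attached[type];
                return NULL;
        }

      With the hierarchy above, attaching to B makes effective_prog() return B's
      program for B, C, D and E; attaching to D then overrides it for D and E only.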
      
      Attaching and detaching programs will be done through the bpf(2)
      syscall. For now, ingress and egress inet socket filtering are the
      only supported use-cases.
      Signed-off-by: Daniel Mack <daniel@zonque.org>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      30070984
  10. 16 Nov 2016, 1 commit
  11. 24 Oct 2016, 1 commit
  12. 12 Oct 2016, 1 commit
    • relay: Use irq_work instead of plain timer for deferred wakeup · 26b5679e
      Committed by Peter Zijlstra
      Relay avoids calling wake_up_interruptible() for doing the wakeup of
      readers/consumers, waiting for the generation of new data, from the
      context of a process which produced the data.  This is apparently done
      to prevent the possibility of a deadlock in case the scheduler itself is
      generating data for relay after acquiring rq->lock.
      
      The following patch used a timer (to be scheduled at next jiffy), for
      delegating the wakeup to another context.
      	commit 7c9cb383
      	Author: Tom Zanussi <zanussi@comcast.net>
      	Date:   Wed May 9 02:34:01 2007 -0700
      
      	relay: use plain timer instead of delayed work
      
      	relay doesn't need to use schedule_delayed_work() for waking readers
      	when a simple timer will do.
      
      Scheduling a plain timer, at the next jiffy boundary, to do the wakeup
      causes a significant wakeup latency for the userspace client, which makes
      relay less suitable for high-frequency low-payload use cases where the
      data gets generated at a very high rate, like multiple sub buffers getting
      filled within a millisecond.  Moreover the timer is re-scheduled on every
      newly produced sub buffer, so the timer keeps getting pushed out if sub
      buffers are filled in very quick succession (less than a jiffy gap
      between the filling of 2 sub buffers).  As a result relay runs out of sub
      buffers to store the new data.
      
      Using irq_work instead ensures that the wakeup of the userspace client,
      blocked in the poll call, is done at the earliest opportunity (through a
      self IPI or the next timer tick), enabling it to always consume the data
      in time.  This also makes relay consistent with printk and the trace ring
      buffers, as they too use irq_work for deferred wakeup of readers.
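      A minimal sketch of the pattern (illustrative only; the names here are
      assumptions and this is not relay's actual code): the producer merely
      queues an irq_work, and the wakeup itself runs from a safe context almost
      immediately instead of waiting for a timer at the next jiffy.

        #include <linux/kernel.h>
        #include <linux/irq_work.h>
        #include <linux/wait.h>

        struct chan_sketch {
                wait_queue_head_t readers;
                struct irq_work wakeup_work;  /* init_irq_work(&w, wakeup_readers) */
        };

        static void wakeup_readers(struct irq_work *work)
        {
                struct chan_sketch *c = container_of(work, struct chan_sketch,
                                                     wakeup_work);

                wake_up_interruptible(&c->readers);
        }

        /* Called from the producer path, possibly under rq->lock: just queue
         * the work; the wakeup happens via self-IPI or at the next tick. */
        static void defer_wakeup(struct chan_sketch *c)
        {
                irq_work_queue(&c->wakeup_work);
        }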
      
      [arnd@arndb.de: select CONFIG_IRQ_WORK]
       Link: http://lkml.kernel.org/r/20160912154035.3222156-1-arnd@arndb.de
      [akpm@linux-foundation.org: coding-style fixes]
      Link: http://lkml.kernel.org/r/1472906487-1559-1-git-send-email-akash.goel@intel.com
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Akash Goel <akash.goel@intel.com>
      Cc: Tom Zanussi <tzanussi@gmail.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      26b5679e
  13. 21 Sep 2016, 1 commit
  14. 16 Sep 2016, 1 commit
  15. 15 Sep 2016, 1 commit
  16. 03 Aug 2016, 4 commits
  17. 27 Jul 2016, 3 commits
    • mm: SLUB freelist randomization · 210e7a43
      Committed by Thomas Garnier
      Implements freelist randomization for the SLUB allocator.  It was
      previously implemented for the SLAB allocator.  Both use the same
      configuration option (CONFIG_SLAB_FREELIST_RANDOM).
      
      The list is randomized during initialization of a new set of pages.  The
      order on different freelist sizes is pre-computed at boot for
      performance.  Each kmem_cache has its own randomized freelist.
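      A minimal sketch of the randomization step, assuming a Fisher-Yates
      shuffle over the list of object indices used to link up the freelist
      (illustrative only, not mm/slub.c itself; the helper name is made up):

        #include <linux/random.h>

        /* Shuffle the object index list in place so the freelist is linked
         * in an unpredictable order for each new slab page. */
        static void shuffle_freelist_idx(unsigned int *idx, unsigned int count)
        {
                unsigned int i, j, tmp;

                if (count < 2)
                        return;

                for (i = count - 1; i > 0; i--) {
                        j = prandom_u32() % (i + 1);  /* random slot <= i */
                        tmp = idx[i];
                        idx[i] = idx[j];
                        idx[j] = tmp;
                }
        }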
      
      This security feature reduces the predictability of the kernel SLUB
      allocator against heap overflows, rendering attacks much less stable.
      
      For example, these attacks exploit the predictability of the heap:
       - Linux Kernel CAN SLUB overflow (https://goo.gl/oMNWkU)
       - Exploiting Linux Kernel Heap corruptions (http://goo.gl/EXLn95)
      
      Performance results:
      
      The slab_test impact is between 3% and 4% on average for 100000 attempts
      without SMP.  slab_test is a very focused microbenchmark; kernbench shows
      that the overall impact on the system is much lower.
      
      Before:
      
        Single thread testing
        =====================
        1. Kmalloc: Repeatedly allocate then free test
        100000 times kmalloc(8) -> 49 cycles kfree -> 77 cycles
        100000 times kmalloc(16) -> 51 cycles kfree -> 79 cycles
        100000 times kmalloc(32) -> 53 cycles kfree -> 83 cycles
        100000 times kmalloc(64) -> 62 cycles kfree -> 90 cycles
        100000 times kmalloc(128) -> 81 cycles kfree -> 97 cycles
        100000 times kmalloc(256) -> 98 cycles kfree -> 121 cycles
        100000 times kmalloc(512) -> 95 cycles kfree -> 122 cycles
        100000 times kmalloc(1024) -> 96 cycles kfree -> 126 cycles
        100000 times kmalloc(2048) -> 115 cycles kfree -> 140 cycles
        100000 times kmalloc(4096) -> 149 cycles kfree -> 171 cycles
        2. Kmalloc: alloc/free test
        100000 times kmalloc(8)/kfree -> 70 cycles
        100000 times kmalloc(16)/kfree -> 70 cycles
        100000 times kmalloc(32)/kfree -> 70 cycles
        100000 times kmalloc(64)/kfree -> 70 cycles
        100000 times kmalloc(128)/kfree -> 70 cycles
        100000 times kmalloc(256)/kfree -> 69 cycles
        100000 times kmalloc(512)/kfree -> 70 cycles
        100000 times kmalloc(1024)/kfree -> 73 cycles
        100000 times kmalloc(2048)/kfree -> 72 cycles
        100000 times kmalloc(4096)/kfree -> 71 cycles
      
      After:
      
        Single thread testing
        =====================
        1. Kmalloc: Repeatedly allocate then free test
        100000 times kmalloc(8) -> 57 cycles kfree -> 78 cycles
        100000 times kmalloc(16) -> 61 cycles kfree -> 81 cycles
        100000 times kmalloc(32) -> 76 cycles kfree -> 93 cycles
        100000 times kmalloc(64) -> 83 cycles kfree -> 94 cycles
        100000 times kmalloc(128) -> 106 cycles kfree -> 107 cycles
        100000 times kmalloc(256) -> 118 cycles kfree -> 117 cycles
        100000 times kmalloc(512) -> 114 cycles kfree -> 116 cycles
        100000 times kmalloc(1024) -> 115 cycles kfree -> 118 cycles
        100000 times kmalloc(2048) -> 147 cycles kfree -> 131 cycles
        100000 times kmalloc(4096) -> 214 cycles kfree -> 161 cycles
        2. Kmalloc: alloc/free test
        100000 times kmalloc(8)/kfree -> 66 cycles
        100000 times kmalloc(16)/kfree -> 66 cycles
        100000 times kmalloc(32)/kfree -> 66 cycles
        100000 times kmalloc(64)/kfree -> 66 cycles
        100000 times kmalloc(128)/kfree -> 65 cycles
        100000 times kmalloc(256)/kfree -> 67 cycles
        100000 times kmalloc(512)/kfree -> 67 cycles
        100000 times kmalloc(1024)/kfree -> 64 cycles
        100000 times kmalloc(2048)/kfree -> 67 cycles
        100000 times kmalloc(4096)/kfree -> 67 cycles
      
      Kernbench, before:
      
        Average Optimal load -j 12 Run (std deviation):
        Elapsed Time 101.873 (1.16069)
        User Time 1045.22 (1.60447)
        System Time 88.969 (0.559195)
        Percent CPU 1112.9 (13.8279)
        Context Switches 189140 (2282.15)
        Sleeps 99008.6 (768.091)
      
      After:
      
        Average Optimal load -j 12 Run (std deviation):
        Elapsed Time 102.47 (0.562732)
        User Time 1045.3 (1.34263)
        System Time 88.311 (0.342554)
        Percent CPU 1105.8 (6.49444)
        Context Switches 189081 (2355.78)
        Sleeps 99231.5 (800.358)
      
      Link: http://lkml.kernel.org/r/1464295031-26375-3-git-send-email-thgarnie@google.com
      Signed-off-by: Thomas Garnier <thgarnie@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      210e7a43
    • mm: SLUB hardened usercopy support · ed18adc1
      Committed by Kees Cook
      Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
      SLUB allocator to catch any copies that may span objects. Includes a
      redzone handling fix discovered by Michael Ellerman.
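      A minimal sketch of the check being added (illustrative only; the helper
      below and its name are assumptions, not the kernel's actual code): reject
      any usercopy whose [ptr, ptr + n) range does not lie entirely inside a
      single slab object.

        #include <stddef.h>

        static const char *check_span_sketch(const void *obj_start, size_t obj_size,
                                             const void *ptr, size_t n)
        {
                const char *p = ptr, *start = obj_start;

                if (p < start || n > obj_size || p + n > start + obj_size)
                        return "object span violation";  /* copy would leave the object */
                return NULL;                             /* copy stays within one object */
        }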
      
      Based on code from PaX and grsecurity.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Tested-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Laura Abbott <labbott@redhat.com>
      ed18adc1
    • mm: SLAB hardened usercopy support · 04385fc5
      Committed by Kees Cook
      Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
      SLAB allocator to catch any copies that may span objects.
      
      Based on code from PaX and grsecurity.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Tested-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
      04385fc5
  18. 14 Jul 2016, 1 commit
  19. 07 Jul 2016, 1 commit
    • init/Kconfig: keep Expert users menu together · 076501ff
      Committed by Randy Dunlap
      The "expert" menu was broken (split) such that all entries in it after
      KALLSYMS were displayed in the "General setup" area instead of in the
      "Expert users" area.  Fix this by adding one kconfig dependency.
      
      Yes, the Expert users menu is fragile.  Problems like this have happened
      several times in the past.  I will attempt to isolate the Expert users
      menu if there is interest in that.
      
      Fixes: 4d5d5664 ("x86: kallsyms: disable absolute percpu symbols on !SMP")
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: stable@vger.kernel.org  # 4.6
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      076501ff
  20. 21 Jun 2016, 1 commit
  21. 16 Jun 2016, 1 commit
  22. 21 May 2016, 2 commits
    • printk/nmi: increase the size of NMI buffer and make it configurable · 427934b8
      Committed by Petr Mladek
      Testing has shown that the backtrace sometimes does not fit into the 4kB
      temporary buffer that is used in NMI context.  The warnings are gone
      when I double the temporary buffer size.
      
      This patch doubles the buffer size and makes it configurable.
      
      Note that this problem existed even in the x86-specific implementation
      that was added by the commit a9edc880 ("x86/nmi: Perform a safe NMI
      stack trace on all CPUs").  Nobody noticed it because it did not print
      any warnings.
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Jiri Kosina <jkosina@suse.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      427934b8
    • printk/nmi: generic solution for safe printk in NMI · 42a0bb3f
      Committed by Petr Mladek
      printk() takes some locks and therefore cannot be used safely in NMI
      context.
      
      The chance of a deadlock is real especially when printing stacks from
      all CPUs.  This particular problem has been addressed on x86 by the
      commit a9edc880 ("x86/nmi: Perform a safe NMI stack trace on all
      CPUs").
      
      The patchset brings two big advantages.  First, it makes the NMI
      backtraces safe on all architectures for free.  Second, it makes all NMI
      messages almost safe on all architectures (the temporary buffer is
      limited, so we should still keep the number of messages in NMI context
      to a minimum).
      
      Note that there already are several messages printed in NMI context:
      WARN_ON(in_nmi()), BUG_ON(in_nmi()), anything being printed out from MCE
      handlers.  These are not easy to avoid.
      
      This patch reuses most of the code and makes it generic.  It is useful
      for all messages and architectures that support NMI.
      
      The alternative printk_func is set when entering NMI context and is reset
      when leaving it.  It queues IRQ work to copy the messages into the main
      ring buffer from a safe context.
      
      __printk_nmi_flush() copies all available messages and resets the buffer.
      Then we can use a simple cmpxchg operation to stay synchronized with
      writers.  A spinlock is also used to synchronize with other flushers.
      
      We no longer use seq_buf because it depends on an external lock.  It
      would be hard to make all supported operations safe for lockless use.
      It would be confusing and error prone to make only some operations safe.
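      A minimal sketch of the lockless reserve step (an illustration under the
      assumption of a per-CPU buffer whose used length is claimed with cmpxchg;
      names are made up and this is not the kernel's printk/nmi.c):

        #include <linux/atomic.h>
        #include <linux/string.h>

        struct nmi_buf_sketch {
                atomic_t len;                     /* bytes already reserved */
                char data[4096];
        };

        static int nmi_buf_store(struct nmi_buf_sketch *b, const char *msg, int n)
        {
                int old, new;

                do {
                        old = atomic_read(&b->len);
                        new = old + n;
                        if (new > (int)sizeof(b->data))
                                return -1;        /* buffer full, message dropped */
                } while (atomic_cmpxchg(&b->len, old, new) != old);

                memcpy(b->data + old, msg, n);    /* each writer owns [old, new) */
                return 0;
        }

      The flusher can then read len, copy that many bytes into the main ring
      buffer from IRQ work, and reset len for the next round.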
      
      The code is put into separate printk/nmi.c as suggested by Steven
      Rostedt.  It needs a per-CPU buffer and is compiled only on
      architectures that call nmi_enter().  This is achieved by the new
      HAVE_NMI Kconfig flag.
      
      The exceptions are the MN10300 and Xtensa architectures.  We need to
      clean up NMI handling there first.  Let's do it separately.
      
      The patch is heavily based on the draft from Peter Zijlstra, see
      
        https://lkml.org/lkml/2015/6/10/327
      
      [arnd@arndb.de: printk-nmi: use %zu format string for size_t]
      [akpm@linux-foundation.org: min_t->min - all types are size_t here]
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Jan Kara <jack@suse.cz>
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>	[arm part]
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Jiri Kosina <jkosina@suse.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      42a0bb3f
  23. 20 May 2016, 1 commit
    • mm: SLAB freelist randomization · c7ce4f60
      Committed by Thomas Garnier
      Provides an optional config (CONFIG_SLAB_FREELIST_RANDOM) to randomize
      the SLAB freelist.  The list is randomized during initialization of a
      new set of pages.  The order on different freelist sizes is pre-computed
      at boot for performance.  Each kmem_cache has its own randomized
      freelist.  Before pre-computed lists are available, freelists are
      generated dynamically.  This security feature reduces the predictability
      of the kernel SLAB allocator against heap overflows, rendering attacks
      much less stable.
      
      For example this attack against SLUB (also applicable against SLAB)
      would be affected:
      
        https://jon.oberheide.org/blog/2010/09/10/linux-kernel-can-slub-overflow/
      
      Also, since v4.6 the freelist has been moved to the end of the SLAB.  It
      means a controllable heap is opened to new attacks not yet publicly
      discussed.  A kernel heap overflow can be transformed into multiple
      use-after-frees.  This feature makes this type of attack harder too.
      
      To generate entropy, we use get_random_bytes_arch because 0 bits of
      entropy are available in the boot stage.  In the worst case this function
      will fall back to the get_random_bytes sub API.  We also generate a
      random shift value to shift the pre-computed freelist for each new set
      of pages.
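      As a minimal sketch of that last step (illustrative only; the helper name
      and layout are assumptions, not mm/slab.c itself), the boot-time random
      sequence is reused for every new slab page with a per-page random shift
      applied, so pages do not all share the exact same object order:

        #include <stddef.h>

        static void fill_freelist_sketch(unsigned int *out,
                                         const unsigned int *precomputed,
                                         size_t count, unsigned int shift)
        {
                size_t i;

                for (i = 0; i < count; i++)
                        out[i] = precomputed[(i + shift) % count];
        }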
      
      The config option name is not specific to the SLAB as this approach will
      be extended to other allocators like SLUB.
      
      Performance results highlighted no major changes:
      
      Hackbench (running 90 10 times):
      
        Before average: 0.0698
        After average: 0.0663 (-5.01%)
      
      slab_test, 1 run on boot.  A difference is only seen on the 2048 size
      test, which is the worst-case scenario covered by freelist randomization.
      New slab pages are constantly being created on the 10000 allocations.
      Variance should be mainly due to getting new pages every few
      allocations.
      
      Before:
      
        Single thread testing
        =====================
        1. Kmalloc: Repeatedly allocate then free test
        10000 times kmalloc(8) -> 99 cycles kfree -> 112 cycles
        10000 times kmalloc(16) -> 109 cycles kfree -> 140 cycles
        10000 times kmalloc(32) -> 129 cycles kfree -> 137 cycles
        10000 times kmalloc(64) -> 141 cycles kfree -> 141 cycles
        10000 times kmalloc(128) -> 152 cycles kfree -> 148 cycles
        10000 times kmalloc(256) -> 195 cycles kfree -> 167 cycles
        10000 times kmalloc(512) -> 257 cycles kfree -> 199 cycles
        10000 times kmalloc(1024) -> 393 cycles kfree -> 251 cycles
        10000 times kmalloc(2048) -> 649 cycles kfree -> 228 cycles
        10000 times kmalloc(4096) -> 806 cycles kfree -> 370 cycles
        10000 times kmalloc(8192) -> 814 cycles kfree -> 411 cycles
        10000 times kmalloc(16384) -> 892 cycles kfree -> 455 cycles
        2. Kmalloc: alloc/free test
        10000 times kmalloc(8)/kfree -> 121 cycles
        10000 times kmalloc(16)/kfree -> 121 cycles
        10000 times kmalloc(32)/kfree -> 121 cycles
        10000 times kmalloc(64)/kfree -> 121 cycles
        10000 times kmalloc(128)/kfree -> 121 cycles
        10000 times kmalloc(256)/kfree -> 119 cycles
        10000 times kmalloc(512)/kfree -> 119 cycles
        10000 times kmalloc(1024)/kfree -> 119 cycles
        10000 times kmalloc(2048)/kfree -> 119 cycles
        10000 times kmalloc(4096)/kfree -> 121 cycles
        10000 times kmalloc(8192)/kfree -> 119 cycles
        10000 times kmalloc(16384)/kfree -> 119 cycles
      
      After:
      
        Single thread testing
        =====================
        1. Kmalloc: Repeatedly allocate then free test
        10000 times kmalloc(8) -> 130 cycles kfree -> 86 cycles
        10000 times kmalloc(16) -> 118 cycles kfree -> 86 cycles
        10000 times kmalloc(32) -> 121 cycles kfree -> 85 cycles
        10000 times kmalloc(64) -> 176 cycles kfree -> 102 cycles
        10000 times kmalloc(128) -> 178 cycles kfree -> 100 cycles
        10000 times kmalloc(256) -> 205 cycles kfree -> 109 cycles
        10000 times kmalloc(512) -> 262 cycles kfree -> 136 cycles
        10000 times kmalloc(1024) -> 342 cycles kfree -> 157 cycles
        10000 times kmalloc(2048) -> 701 cycles kfree -> 238 cycles
        10000 times kmalloc(4096) -> 803 cycles kfree -> 364 cycles
        10000 times kmalloc(8192) -> 835 cycles kfree -> 404 cycles
        10000 times kmalloc(16384) -> 896 cycles kfree -> 441 cycles
        2. Kmalloc: alloc/free test
        10000 times kmalloc(8)/kfree -> 121 cycles
        10000 times kmalloc(16)/kfree -> 121 cycles
        10000 times kmalloc(32)/kfree -> 123 cycles
        10000 times kmalloc(64)/kfree -> 142 cycles
        10000 times kmalloc(128)/kfree -> 121 cycles
        10000 times kmalloc(256)/kfree -> 119 cycles
        10000 times kmalloc(512)/kfree -> 119 cycles
        10000 times kmalloc(1024)/kfree -> 119 cycles
        10000 times kmalloc(2048)/kfree -> 119 cycles
        10000 times kmalloc(4096)/kfree -> 119 cycles
        10000 times kmalloc(8192)/kfree -> 119 cycles
        10000 times kmalloc(16384)/kfree -> 119 cycles
      
      [akpm@linux-foundation.org: propagate gfp_t into cache_random_seq_create()]
      Signed-off-by: Thomas Garnier <thgarnie@google.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c7ce4f60
  24. 10 May 2016, 1 commit
    • Kbuild: change CC_OPTIMIZE_FOR_SIZE definition · 877417e6
      Committed by Arnd Bergmann
      CC_OPTIMIZE_FOR_SIZE disables the often useful -Wmaybe-uninitialized
      warning, because that warning causes a ridiculous amount of false
      positives when combined with -Os.
      
      This means a lot of warnings don't show up in testing by the developers
      who should see them: an 'allmodconfig' kernel has CC_OPTIMIZE_FOR_SIZE
      enabled, so the warnings only appear later, in randconfig builds that
      don't.
      
      This changes the Kconfig logic around CC_OPTIMIZE_FOR_SIZE to make it a
      'choice' statement defaulting to a new CC_OPTIMIZE_FOR_PERFORMANCE
      option that gets added for this purpose.  The allmodconfig and
      allyesconfig kernels now default to -O2 with the maybe-uninitialized
      warning enabled.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Michal Marek <mmarek@suse.com>
      877417e6
  25. 02 Apr 2016, 1 commit
    • Make CONFIG_FHANDLE default y · f76be617
      Committed by Andi Kleen
      Newer Fedora and OpenSUSE didn't boot with my standard configuration.
      It took me some time to figure out why, in fact I had to write a script
      to try different config options systematically.
      
      The problem is that something (systemd) in dracut depends on
      CONFIG_FHANDLE, which provides the open-by-file-handle syscalls.
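      For reference, these are the name_to_handle_at()/open_by_handle_at()
      syscalls.  A minimal userspace sketch of what CONFIG_FHANDLE enables
      (illustrative only, error handling trimmed, the path is just an example):

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
                struct file_handle *fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
                int mount_id, fd;

                fh->handle_bytes = MAX_HANDLE_SZ;

                /* Obtain an opaque, persistent handle for a path; this fails
                 * with ENOSYS on kernels built without CONFIG_FHANDLE. */
                if (name_to_handle_at(AT_FDCWD, "/etc/hostname", fh, &mount_id, 0) < 0) {
                        perror("name_to_handle_at");
                        return 1;
                }

                /* Later, possibly from another process holding
                 * CAP_DAC_READ_SEARCH, reopen the file from the handle alone. */
                fd = open_by_handle_at(AT_FDCWD, fh, O_RDONLY);
                return fd < 0 ? 1 : 0;
        }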
      
      While it is set in defconfigs it is very easy to miss when updating
      older configs because it is not default y.
      
      Make it default y and also depend on EXPERT, as dracut use is likely
      widespread.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Cc: Richard Weinberger <richard.weinberger@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f76be617
  26. 30 Mar 2016, 1 commit
  27. 16 Mar 2016, 2 commits
    • kallsyms: add support for relative offsets in kallsyms address table · 2213e9a6
      Committed by Ard Biesheuvel
      Similar to how relative extables are implemented, it is possible to emit
      the kallsyms table in such a way that it contains offsets relative to
      some anchor point in the kernel image rather than absolute addresses.
      
      On 64-bit architectures, it cuts the size of the kallsyms address table
      in half, since offsets between kernel symbols can typically be expressed
      in 32 bits.  This saves several hundred kilobytes of permanent
      .rodata on average.  In addition, the kallsyms address table is no
      longer subject to dynamic relocation when CONFIG_RELOCATABLE is in
      effect, so the relocation work done after decompression now doesn't have
      to do relocation updates for all these values.  This saves up to 24
      bytes (i.e., the size of an ELF64 RELA relocation table entry) per value,
      which easily adds up to a couple of megabytes of uncompressed __init
      data on ppc64 or arm64.  Even if these relocation entries typically
      compress well, the combined size reduction of 2.8 MB uncompressed for a
      ppc64_defconfig build (of which 2.4 MB is __init data) results in a ~500
      KB space saving in the compressed image.
      
      Since it is useful for some architectures (like x86) to retain the
      ability to emit absolute values as well, this patch also adds support
      for capturing both absolute and relative values when
      KALLSYMS_ABSOLUTE_PERCPU is in effect, by emitting absolute per-cpu
      addresses as positive 32-bit values, and addresses relative to the
      lowest encountered relative symbol as negative values, which are
      subtracted from the runtime address of this base symbol to produce the
      actual address.
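      A minimal sketch of how such an entry could be turned back into an
      address at runtime (an illustration of the scheme described above, with
      assumed names, not the kernel's exact kallsyms code):

        /* base: runtime address of the lowest relative symbol (the anchor).
         * value: the 32-bit entry emitted into the kallsyms table. */
        static unsigned long sym_address_sketch(unsigned long base, int value,
                                                int absolute_percpu)
        {
                if (!absolute_percpu)
                        return base + (unsigned int)value;  /* plain relative offset */
                if (value >= 0)
                        return value;                       /* absolute per-cpu address */
                return base - 1 - value;                    /* negative: relative to base */
        }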
      
      Support for the above is enabled by default for all architectures except
      IA-64 and Tile-GX, whose symbols are too far apart to capture in this
      manner.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Tested-by: Kees Cook <keescook@chromium.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2213e9a6
    • x86: kallsyms: disable absolute percpu symbols on !SMP · 4d5d5664
      Committed by Ard Biesheuvel
      scripts/kallsyms.c has a special --absolute-percpu command line option
      which deals with the zero based per cpu offsets that are used when
      building for SMP on x86_64.  This means that the option should only be
      passed in that case, so add a Kconfig symbol with the correct predicate,
      and use that instead.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Tested-by: Kees Cook <keescook@chromium.org>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4d5d5664
  28. 05 Mar 2016, 1 commit
  29. 04 Mar 2016, 1 commit
    • akcipher: Move the RSA DER encoding check to the crypto layer · d43de6c7
      Committed by David Howells
      Move the RSA EMSA-PKCS1-v1_5 encoding from the asymmetric-key public_key
      subtype to the rsa crypto module's pkcs1pad template.  This means that the
      public_key subtype no longer has any dependencies on a specific public key
      type.
      
      To make this work, the following changes have been made:
      
       (1) The rsa pkcs1pad template is now used for RSA keys.  This strips off the
           padding and returns just the message hash.
      
       (2) In a previous patch, the pkcs1pad template gained an optional second
           parameter that, if given, specifies the hash used.  We now give this,
           and pkcs1pad checks the encoded message E(M) for the EMSA-PKCS1-v1_5
           encoding and verifies that the correct digest OID is present (a rough
           sketch of that check follows this list).
      
       (3) The crypto driver in crypto/asymmetric_keys/rsa.c is now reduced to
           something that doesn't care about what the encryption actually does
           and has been merged into public_key.c.
      
       (4) CONFIG_PUBLIC_KEY_ALGO_RSA is gone.  Module signing must set
           CONFIG_CRYPTO_RSA=y instead.
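      As mentioned in (2), the check boils down to verifying the structure of
      the encoded message.  A minimal sketch (illustrative only; the real work
      is done inside the pkcs1pad template, and the helper below is made up):
      EM must look like 0x00 0x01 0xFF..0xFF 0x00 || DigestInfo || H(m), with
      the DigestInfo carrying the expected digest OID.

        #include <string.h>

        static int emsa_pkcs1_v15_check(const unsigned char *em, size_t em_len,
                                        const unsigned char *digest_info, size_t di_len,
                                        const unsigned char *hash, size_t hash_len)
        {
                size_t i, pad_end;

                if (em_len < di_len + hash_len + 11 || em[0] != 0x00 || em[1] != 0x01)
                        return -1;
                pad_end = em_len - di_len - hash_len - 1;
                for (i = 2; i < pad_end; i++)     /* at least 8 bytes of 0xFF padding */
                        if (em[i] != 0xff)
                                return -1;
                if (em[pad_end] != 0x00)
                        return -1;
                if (memcmp(em + pad_end + 1, digest_info, di_len))
                        return -1;                /* wrong DigestInfo / digest OID */
                return memcmp(em + pad_end + 1 + di_len, hash, hash_len) ? -1 : 0;
        }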
      
      Thoughts:
      
       (*) Should the encoding style (eg. raw, EMSA-PKCS1-v1_5) also be passed to
           the padding template?  Should there be multiple padding templates
           registered that share most of the code?
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      d43de6c7
  30. 21 Jan 2016, 2 commits
  31. 17 Jan 2016, 1 commit