1. 01 Jul 2017, 1 commit
    • task_struct: Allow randomized layout · 29e48ce8
      Committed by Kees Cook
      This marks most of the layout of task_struct as randomizable, but leaves
      thread_info and scheduler state untouched at the start, and thread_struct
      untouched at the end.
      
      Other parts of the kernel use unnamed structures, but the 0-day builder
      using gcc-4.4 blows up on static initializers. Officially, it's documented
      as only working on gcc 4.6 and later, which further confuses me:
      	https://gcc.gnu.org/wiki/C11Status
      The structure layout randomization already requires gcc 4.7, but instead
      of depending on the plugin being enabled, just check the gcc versions
      for wider build testing. At Linus's suggestion, the marking is hidden
      in a macro to reduce how ugly it looks. Additionally, indentation is
      left unchanged, since changing it would make things harder to read.
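      
      As a rough sketch (the helper names follow the
      randomized_struct_fields_start/end macros this series introduces;
      treat the exact form as illustrative), the marking looks like:
      
        struct task_struct {
        	struct thread_info	thread_info;	/* kept first (when in-task) */
        	/* ... scheduler state kept at fixed offsets ... */
        
        	randomized_struct_fields_start
        
        	/* ... the bulk of task_struct, free to be shuffled ... */
        
        	randomized_struct_fields_end
        
        	struct thread_struct	thread;		/* kept last */
        };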
      
      Randomization of task_struct is modified from Brad Spengler/PaX Team's
      code in the last public patch of grsecurity/PaX based on my understanding
      of the code. Changes or omissions from the original code are mine and
      don't reflect the original grsecurity/PaX code.
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      29e48ce8
  2. 23 Jun 2017, 1 commit
    • gcc-plugins: Add the randstruct plugin · 313dd1b6
      Committed by Kees Cook
      This randstruct plugin is modified from Brad Spengler/PaX Team's code
      in the last public patch of grsecurity/PaX based on my understanding
      of the code. Changes or omissions from the original code are mine and
      don't reflect the original grsecurity/PaX code.
      
      The randstruct GCC plugin randomizes the layout of selected structures
      at compile time, as a probabilistic defense against attacks that need to
      know the layout of structures within the kernel. This is most useful for
      "in-house" kernel builds where neither the randomization seed nor other
      build artifacts are made available to an attacker. While less useful for
      distribution kernels (where the randomization seed must be exposed for
      third party kernel module builds), it still has some value there since now
      all kernel builds would need to be tracked by an attacker.
      
      In more performance sensitive scenarios, GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
      can be selected to make a best effort to restrict randomization to
      cacheline-sized groups of elements, and will not randomize bitfields. This
      comes at the cost of reduced randomization.
      
      Two annotations are defined, __randomize_layout and __no_randomize_layout,
      which respectively tell the plugin to either randomize or not to
      randomize instances of the struct in question. Follow-on patches enable
      the auto-detection logic for selecting structures for randomization
      that contain only function pointers. It is disabled here to assist with
      bisection.
      
      Since any randomized structs must be initialized using designated
      initializers, __randomize_layout includes the __designated_init annotation
      even when the plugin is disabled so that all builds will require
      the needed initialization. (With the plugin enabled, annotations for
      automatically chosen structures are marked as well.)
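      
      As a sketch of how the annotations are used (the struct and function
      names here are made up for illustration):
      
        /* Opt a struct in to randomization; its initializers must then be
         * designated, which __designated_init enforces even with the
         * plugin disabled.
         */
        struct example_ops {
        	int  (*open)(void *priv);
        	void (*close)(void *priv);
        	int  flags;
        } __randomize_layout;
        
        static int  example_open(void *priv)  { return 0; }
        static void example_close(void *priv) { }
        
        static const struct example_ops default_ops = {
        	.open	= example_open,
        	.close	= example_close,
        	.flags	= 0,
        };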
      
      The main differences between this implementation and grsecurity are:
      - disable automatic struct selection (to be enabled in follow-up patch)
      - add designated_init attribute at runtime and for manual marking
      - clarify debugging output to differentiate bad cast warnings
      - add whitelisting infrastructure
      - support gcc 7's DECL_ALIGN and DECL_MODE changes (Laura Abbott)
      - raise minimum required GCC version to 4.7
      
      Earlier versions of this patch series were ported by Michael Leibowitz.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      313dd1b6
  3. 09 Jun 2017, 1 commit
  4. 29 May 2017, 1 commit
  5. 28 Feb 2017, 1 commit
    • kprobes: move kprobe declarations to asm-generic/kprobes.h · 7d134b2c
      Committed by Luis R. Rodriguez
      Often all that is needed are these small helpers, instead of compiler.h
      or a full kprobes.h.  This is important for asm helpers; in fact, even
      some asm/kprobes.h files make use of these helpers.  Instead, just keep
      a generic asm file with helpers useful for asm code, with as little
      clutter as possible.
      
      Likewise, we now need to address what to do about this file both when
      architectures have CONFIG_HAVE_KPROBES and when they do not, and then
      the case where architectures have CONFIG_HAVE_KPROBES but have disabled
      CONFIG_KPROBES.
      
      Right now most asm/kprobes.h do not have guards against CONFIG_KPROBES,
      this means most architecture code cannot include asm/kprobes.h safely.
      Correct this and add guards for architectures missing them.
      Additionally, provide architectures that do not have kprobes support with
      the default asm-generic solution.  This lets us force asm/kprobes.h on
      the header include/linux/kprobes.h always, but most importantly we can
      now safely include just asm/kprobes.h on architecture code without
      bringing the full kitchen sink of header files.
      
      Two architectures already provided a guard against CONFIG_KPROBES in their
      kprobes.h: sh, arch.  The rest of the architectures needed guards added.
      We avoid including any not-needed headers on asm/kprobes.h unless
      kprobes have been enabled.
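      
      The resulting generic header is roughly shaped like this (simplified;
      the real file carries a few more helpers such as NOKPROBE_SYMBOL):
      
        #ifndef _ASM_GENERIC_KPROBES_H
        #define _ASM_GENERIC_KPROBES_H
        
        #if defined(__KERNEL__) && !defined(__ASSEMBLY__)
        #ifdef CONFIG_KPROBES
        /* Annotations that only matter when kprobes are built in. */
        # define __kprobes	__attribute__((__section__(".kprobes.text")))
        # define nokprobe_inline	__always_inline
        #else
        /* Stubs so code can include this header unconditionally. */
        # define __kprobes
        # define nokprobe_inline	inline
        #endif /* CONFIG_KPROBES */
        #endif /* __KERNEL__ && !__ASSEMBLY__ */
        
        #endif /* _ASM_GENERIC_KPROBES_H */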
      
      In a subsequent atomic change we can try now to remove compiler.h from
      include/linux/kprobes.h.
      
      During this sweep I've also identified a few architectures defining a
      common macro needed for both kprobes and ftrace: the definition of the
      breakpoint instruction.  Some refer to this as
      BREAKPOINT_INSTRUCTION.  This must be kept outside of the #ifdef
      CONFIG_KPROBES guard.
      
      [mcgrof@kernel.org: fix arm64 build]
        Link: http://lkml.kernel.org/r/CAB=NE6X1WMByuARS4mZ1g9+W=LuVBnMDnh_5zyN0CLADaVh=Jw@mail.gmail.com
      [sfr@canb.auug.org.au: fixup for kprobes declarations moving]
        Link: http://lkml.kernel.org/r/20170214165933.13ebd4f4@canb.auug.org.au
      Link: http://lkml.kernel.org/r/20170203233139.32682-1-mcgrof@kernel.org
      Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7d134b2c
  6. 19 Jan 2017, 2 commits
  7. 18 Jan 2017, 1 commit
    • tracing: Process constants for (un)likely() profiler · d45ae1f7
      Committed by Steven Rostedt (VMware)
      When running the likely/unlikely profiler, one of the results did not look
      accurate. It noted that the unlikely() in link_path_walk() was 100%
      incorrect. When I added a trace_printk() to see what was happening there, it
      became 80% correct! Looking deeper into what was happening, I found that
      gcc split that if statement into two paths. One where the if statement
      became a constant, the other path a variable. The other path had the if
      statement always hit (making the unlikely there, always false), but since
      the #define unlikely() has:
      
        #define unlikely(x) (__builtin_constant_p(x) ? !!(x) : __branch_check__(x, 0))
      
      Where constants are ignored by the branch profiler, the "constant" path
      made by the compiler was ignored, even though it was hit 80% of the time.
      
      By just passing the constant value to the __branch_check__() function and
      tracing it out of line (as always correct, as likely/unlikely isn't a factor
      for constants), then we get back the accurate readings of branches that were
      optimized by gcc causing part of the execution to become constant.
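      
      The direction of the fix, roughly (the third argument added to
      __branch_check__() is the new part; treat the exact signature as
      approximate):
      
        /* Pass the constant-ness to the profiler instead of skipping
         * constants entirely, so constant-folded paths are still counted.
         */
        # define likely(x)	(__branch_check__(x, 1, __builtin_constant_p(x)))
        # define unlikely(x)	(__branch_check__(x, 0, __builtin_constant_p(x)))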
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      d45ae1f7
  8. 11 Oct 2016, 1 commit
    • latent_entropy: Mark functions with __latent_entropy · 0766f788
      Committed by Emese Revfy
      The __latent_entropy gcc attribute can be used only on functions and
      variables.  If it is on a function then the plugin will instrument it for
      gathering control-flow entropy. If the attribute is on a variable then
      the plugin will initialize it with random contents.  The variable must
      be an integer, an integer array type or a structure with integer fields.
      
      These specific functions have been selected because they are init
      functions (to help gather boot-time entropy), are called at unpredictable
      times, or they have variable loops, each of which provides some level of
      latent entropy.
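      
      For illustration, usage looks roughly like this (the function and
      variable names are made up):
      
        /* An integer variable gets a random compile-time initial value. */
        static u64 __latent_entropy boot_noise;
        
        /* An annotated init function has its control flow instrumented to
         * feed the latent entropy pool.
         */
        static int __init __latent_entropy example_subsys_init(void)
        {
        	return 0;
        }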
      Signed-off-by: Emese Revfy <re.emese@gmail.com>
      [kees: expanded commit message]
      Signed-off-by: Kees Cook <keescook@chromium.org>
      0766f788
  9. 09 Sep 2016, 1 commit
    • kbuild: allow archs to select link dead code/data elimination · b67067f1
      Committed by Nicholas Piggin
      Introduce LD_DEAD_CODE_DATA_ELIMINATION option for architectures to
      select to build with -ffunction-sections, -fdata-sections, and link
      with --gc-sections. It requires some work (documented) to ensure all
      unreferenced entrypoints are live, and requires toolchain and build
      verification, so it is made a per-arch option for now.
      
      On a random powerpc64le build, this yields a significant size saving;
      it boots and runs fine, but there is a lot I haven't tested as yet, so
      these savings may be reduced if there are bugs in the link.
      
              text      data       bss        dec   filename
          11169741   1180744   1923176   14273661   vmlinux
          10445269   1004127   1919707   13369103   vmlinux.dce
      
      ~700K text, ~170K data, 6% removed from kernel image size.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michal Marek <mmarek@suse.com>
      b67067f1
  10. 05 Sep 2016, 1 commit
  11. 18 Aug 2016, 1 commit
  12. 13 Jul 2016, 1 commit
    • pmem: kill __pmem address space · 7a9eb206
      Committed by Dan Williams
      The __pmem address space was meant to annotate codepaths that touch
      persistent memory and need to coordinate a call to wmb_pmem().  Now that
      wmb_pmem() is gone, there is little need to keep this annotation.
      
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      7a9eb206
  13. 14 Jun 2016, 3 commits
  14. 08 Jun 2016, 1 commit
  15. 20 May 2016, 1 commit
    • compiler.h: add support for malloc attribute · d64e85d3
      Committed by Rasmus Villemoes
      gcc as far back as at least 3.04 documents the function attribute
      __malloc__.  Add a shorthand for attaching that to a function
      declaration.  This was also suggested by Andi Kleen way back in 2002
      [1], but didn't get applied, perhaps because gcc at that time generated
      the exact same code with and without this attribute.
      
      This attribute tells the compiler that the return value (if non-NULL)
      can be assumed not to alias any other valid pointers at the time of the
      call.
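      
      The shorthand itself is a one-liner; applying it to an allocator
      declaration then looks roughly like this (the kmalloc_like() prototype
      is just an example, not from the patch):
      
        #define __malloc	__attribute__((__malloc__))
        
        /* Tell gcc the returned pointer (if non-NULL) aliases nothing else
         * live at the call site.
         */
        void *kmalloc_like(size_t size, unsigned int flags) __malloc;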
      
      Please note that the documentation for a range of gcc versions (starting
      from around 4.7) contained a somewhat confusing and self-contradicting
      text:
      
        The malloc attribute is used to tell the compiler that a function may
        be treated as if any non-NULL pointer it returns cannot alias any other
        pointer valid when the function returns and *that the memory has
        undefined content*.  [...] Standard functions with this property include
        malloc and *calloc*.
      
      (emphasis mine). The intended meaning has later been clarified [2]:
      
        This tells the compiler that a function is malloc-like, i.e., that the
        pointer P returned by the function cannot alias any other pointer valid
        when the function returns, and moreover no pointers to valid objects
        occur in any storage addressed by P.
      
      What this means is that we can apply the attribute to kmalloc and
      friends, and it is ok for the returned memory to have well-defined
      contents (__GFP_ZERO).  But it is not ok to apply it to kmemdup(), nor
      to other functions which both allocate and possibly initialize the
      memory with existing pointers.  So unless someone is doing something
      pretty perverted kstrdup() should also be a fine candidate.
      
      [1] http://thread.gmane.org/gmane.linux.kernel/57172
      [2] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56955
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d64e85d3
  16. 24 Feb 2016, 1 commit
    • sparse: Add __private to privatize members of structs · ad315455
      Committed by Boqun Feng
      In the C programming language, we don't have an easy way to privatize a
      member of a structure. However in kernel, sometimes there is a need to
      privatize a member in case of potential bugs or misuses.
      
      Fortunately, the noderef attribute of sparse is a way to privatize a
      member, as by defining a member as noderef, the address-of operator on
      the member will produce a noderef pointer to that member, and if anyone
      wants to dereference that kind of pointer to read or modify the member,
      sparse will yell.
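      
      The pieces involved look roughly like this (an approximation of the
      compiler.h change; the field names in the usage example are made up):
      
        #ifdef __CHECKER__
        # define __private	__attribute__((noderef))
        # define ACCESS_PRIVATE(p, member) \
        	(*((typeof((p)->member) __force *) &(p)->member))
        #else
        # define __private
        # define ACCESS_PRIVATE(p, member) ((p)->member)
        #endif
        
        struct foo {
        	int __private state;	/* only foo.c should touch this */
        };
        /* intended users go through ACCESS_PRIVATE(fp, state) */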
      
      Based on this, __private modifier and related operation ACCESS_PRIVATE()
      are introduced, which could help detect unintended public uses of
      private members of structs. Here is an example of sparse's output if it
      detects an unintended public use:
      
      | kernel/rcu/tree.c:4453:25: warning: incorrect type in argument 1 (different modifiers)
      | kernel/rcu/tree.c:4453:25:    expected struct raw_spinlock [usertype] *lock
      | kernel/rcu/tree.c:4453:25:    got struct raw_spinlock [noderef] *<noident>
      
      Also, this patch improves compiler.h a little bit by adding comments for
      "#else" and "#endif".
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      ad315455
  17. 16 Feb 2016, 1 commit
    • tracing: Fix freak link error caused by branch tracer · b33c8ff4
      Committed by Arnd Bergmann
      In my randconfig tests, I came across a bug that involves several
      components:
      
      * gcc-4.9 through at least 5.3
      * CONFIG_GCOV_PROFILE_ALL enabling -fprofile-arcs for all files
      * CONFIG_PROFILE_ALL_BRANCHES overriding every if()
      * The optimized implementation of do_div() that tries to
        replace a division library call with a multiplication
      * code in drivers/media/dvb-frontends/zl10353.c doing
      
              u32 adc_clock = 450560; /* 45.056 MHz */
              if (state->config.adc_clock)
                      adc_clock = state->config.adc_clock;
              do_div(value, adc_clock);
      
      In this case, gcc fails to determine whether the divisor
      in do_div() is __builtin_constant_p(). In particular, it
      concludes that __builtin_constant_p(adc_clock) is false, while
      __builtin_constant_p(!!adc_clock) is true.
      
      That in turn throws off the logic in do_div() that also uses
      __builtin_constant_p() to pick between the constant-optimized
      division and the generic one, and the code in ilog2() that uses
      __builtin_constant_p() to figure out whether it knows the answer at
      compile time. The result is a link error from failing to find
      multiple symbols that should never have been called based on
      the __builtin_constant_p():
      
      dvb-frontends/zl10353.c:138: undefined reference to `____ilog2_NaN'
      dvb-frontends/zl10353.c:138: undefined reference to `__aeabi_uldivmod'
      ERROR: "____ilog2_NaN" [drivers/media/dvb-frontends/zl10353.ko] undefined!
      ERROR: "__aeabi_uldivmod" [drivers/media/dvb-frontends/zl10353.ko] undefined!
      
      This patch avoids the problem by changing __trace_if() to check
      whether the condition is known at compile-time to be nonzero, rather
      than checking whether it is actually a constant.
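      
      The essence of the change is roughly the following (the real macro
      also records the taken/not-taken counts, elided here):
      
        /* Ask whether the truth value of the condition is known at compile
         * time, not whether the expression itself is a constant.
         */
        #define __trace_if(cond)					\
        	if (__builtin_constant_p(!!(cond)) ? !!(cond) :		\
        	    /* ...branch-profiling bookkeeping... */ (cond))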
      
      I see this one link error in roughly one out of 1600 randconfig builds
      on ARM, and the patch fixes all known instances.
      
      Link: http://lkml.kernel.org/r/1455312410-1058841-1-git-send-email-arnd@arndb.de
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Fixes: ab3c9c68 ("branch tracer, intel-iommu: fix build with CONFIG_BRANCH_TRACER=y")
      Cc: stable@vger.kernel.org # v2.6.30+
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      b33c8ff4
  18. 09 Feb 2016, 1 commit
  19. 04 Dec 2015, 1 commit
    • locking, sched: Introduce smp_cond_acquire() and use it · b3e0b1b6
      Committed by Peter Zijlstra
      Introduce smp_cond_acquire() which combines a control dependency and a
      read barrier to form acquire semantics.
      
      This primitive has two benefits:
      
       - it documents control dependencies,
       - it's typically cheaper than using smp_load_acquire() in a loop.
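      
      A minimal sketch of the primitive (comment wording approximate):
      
        #define smp_cond_acquire(cond)	do {			\
        	while (!(cond))					\
        		cpu_relax();				\
        	smp_rmb(); /* ctrl + rmb := acquire */		\
        } while (0)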
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b3e0b1b6
  20. 06 Nov 2015, 1 commit
  21. 04 Nov 2015, 1 commit
    • atomic: remove all traces of READ_ONCE_CTRL() and atomic*_read_ctrl() · 105ff3cb
      Committed by Linus Torvalds
      This seems to be a mis-reading of how alpha memory ordering works, and
      is not backed up by the alpha architecture manual.  The helper functions
      don't do anything special on any other architectures, and the arguments
      that support them being safe on other architectures also argue that they
      are safe on alpha.
      
      Basically, the "control dependency" is between a previous read and a
      subsequent write that is dependent on the value read.  Even if the
      subsequent write is actually done speculatively, there is no way that
      such a speculative write could be made visible to other cpu's until it
      has been committed, which requires validating the speculation.
      
      Note that most weakly ordered architectures (very much including alpha)
      do not guarantee any ordering relationship between two loads that depend
      on each other on a control dependency:
      
          read A
          if (val == 1)
              read B
      
      because the conditional may be predicted, and the "read B" may be
      speculatively moved up to before reading the value A.  So we require the
      user to insert a smp_rmb() between the two accesses to be correct:
      
          read A;
          if (A == 1)
              smp_rmb()
              read B
      
      Alpha is further special in that it can break that ordering even if the
      *address* of B depends on the read of A, because the cacheline that is
      read later may be stale unless you have a memory barrier in between the
      pointer read and the read of the value behind a pointer:
      
          read ptr
          read offset(ptr)
      
      whereas all other weakly ordered architectures guarantee that the data
      dependency (as opposed to just a control dependency) will order the two
      accesses.  As a result, alpha needs a "smp_read_barrier_depends()" in
      between those two reads for them to be ordered.
      
      The control dependency that "READ_ONCE_CTRL()" and "atomic_read_ctrl()"
      had was a control dependency to a subsequent *write*, however, and
      nobody can finalize such a subsequent write without having actually done
      the read.  And were you to write such a value to a "stale" cacheline
      (the way the unordered reads came to be), that would seem to lose the
      write entirely.
      
      So the things that make alpha able to re-order reads even more
      aggressively than other weak architectures do not seem to be relevant
      for a subsequent write.  Alpha memory ordering may be strange, but
      there's no real indication that it is *that* strange.
      
      Also, the alpha architecture reference manual very explicitly talks
      about the definition of "Dependence Constraints" in section 5.6.1.7,
      where a preceding read dominates a subsequent write.
      
      Such a dependence constraint admittedly does not impose a BEFORE (alpha
      architecture term for globally visible ordering), but it does guarantee
      that there can be no "causal loop".  I don't see how you could avoid
      such a loop if another cpu could see the stored value and then impact
      the value of the first read.  Put another way: the read and the write
      could not be seen as being out of order wrt other cpus.
      
      So I do not see how these "x_ctrl()" functions can currently be necessary.
      
      I may have to eat my words at some point, but in the absence of clear
      proof that alpha actually needs this, or indeed even an explanation of
      how alpha could _possibly_ need it, I do not believe these functions are
      called for.
      
      And if it turns out that alpha really _does_ need a barrier for this
      case, that barrier still should not be "smp_read_barrier_depends()".
      We'd have to make up some new speciality barrier just for alpha, along
      with the documentation for why it really is necessary.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul E McKenney <paulmck@us.ibm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      105ff3cb
  22. 20 Oct 2015, 1 commit
    • compiler, atomics, kasan: Provide READ_ONCE_NOCHECK() · d976441f
      Committed by Andrey Ryabinin
      Some code may perform racy-by-design memory reads. This could be
      harmless, yet such code may produce KASAN warnings.
      
      To hide such accesses from KASAN this patch introduces
      READ_ONCE_NOCHECK() macro. KASAN will not check the memory
      accessed by READ_ONCE_NOCHECK(). The KernelThreadSanitizer
      (KTSAN) is going to ignore it as well.
      
      This patch creates __read_once_size_nocheck(), a clone of
      __read_once_size(). The only difference between them is the
      'no_sanitize_address' attribute appended to the '*_nocheck'
      function. This attribute tells the compiler that instrumentation
      of memory accesses should not be applied to that function. We
      declare it as static '__maybe_unused' because GCC is not capable
      of inlining such a function:
      https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368
      
      With KASAN=n READ_ONCE_NOCHECK() is just a clone of READ_ONCE().
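      
      A simplified sketch of the new helper (the real body does
      size-switched volatile loads rather than a plain memcpy):
      
        static __attribute__((no_sanitize_address)) __maybe_unused
        void __read_once_size_nocheck(const volatile void *p, void *res, int size)
        {
        	/* simplified: the real body mirrors __read_once_size() */
        	__builtin_memcpy(res, (const void *)p, size);
        }
        
        /* READ_ONCE_NOCHECK() then wraps this helper exactly the way
         * READ_ONCE() wraps __read_once_size().
         */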
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Kostya Serebryany <kcc@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
      Cc: kasan-dev <kasan-dev@googlegroups.com>
      Link: http://lkml.kernel.org/r/1445243838-17763-2-git-send-email-aryabinin@virtuozzo.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d976441f
  23. 14 Oct 2015, 1 commit
  24. 12 Aug 2015, 1 commit
  25. 01 Jul 2015, 1 commit
  26. 26 Jun 2015, 1 commit
  27. 24 Jun 2015, 1 commit
  28. 28 May 2015, 2 commits
    • rcu: Move lockless_dereference() out of rcupdate.h · 0a04b016
      Committed by Peter Zijlstra
      I want to use lockless_dereference() from seqlock.h, which would mean
      including rcupdate.h from it, however rcupdate.h already includes
      seqlock.h.
      
      Avoid this by moving lockless_dereference() into compiler.h. This is
      somewhat tricky since it uses smp_read_barrier_depends() which isn't
      available there, but it's a CPP macro so we can get away with it.
      
      The alternative would be moving it into asm/barrier.h, but that would
      mean updating each arch (I can do that if people feel it is more
      appropriate).
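      
      For reference, the macro being moved is roughly (temporary-variable
      naming approximate):
      
        #define lockless_dereference(p)				\
        ({								\
        	typeof(p) ___p1 = READ_ONCE(p);				\
        	smp_read_barrier_depends(); /* order vs. read of p */	\
        	(___p1);						\
        })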
      
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      0a04b016
    • smp: Make control dependencies work on Alpha, improve documentation · 5af4692a
      Committed by Paul E. McKenney
      The current formulation of control dependencies fails on DEC Alpha,
      which does not respect dependencies of any kind unless an explicit
      memory barrier is provided.  This means that the current formulation of
      control dependencies fails on Alpha.  This commit therefore creates a
      READ_ONCE_CTRL() that has the same overhead on non-Alpha systems, but
      causes Alpha to produce the needed ordering.  This commit also applies
      READ_ONCE_CTRL() to the one known use of control dependencies.
      
      Use of READ_ONCE_CTRL() also has the beneficial effect of adding a bit
      of self-documentation to control dependencies.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      5af4692a
  29. 19 May 2015, 1 commit
  30. 08 May 2015, 1 commit
  31. 04 May 2015, 1 commit
    • lib: make memzero_explicit more robust against dead store elimination · 7829fb09
      Committed by Daniel Borkmann
      In commit 0b053c95 ("lib: memzero_explicit: use barrier instead
      of OPTIMIZER_HIDE_VAR"), we made memzero_explicit() more robust in
      case LTO would decide to inline memzero_explicit() and eventually
      find out it could be eliminated as a dead store.
      
      While using barrier() works well for the case of gcc, recent efforts
      from LLVMLinux people suggest to use llvm as an alternative to gcc,
      and there, Stephan found in a simple stand-alone user space example
      that llvm could nevertheless optimize and thus eliminate the memset().
      A similar issue has been observed in the referenced llvm bug report,
      which is regarded as not-a-bug.
      
      Based on some experiments, icc is a bit special on its own, while it
      doesn't seem to eliminate the memset(), it could do so with its own
      implementation, and then result in similar findings as with llvm.
      
      The fix in this patch now works for all three compilers (also tested
      with more aggressive optimization levels). Arguably, in the current
      kernel tree it's more of a theoretical issue, but imho, it's better
      to be pedantic about it.
      
      It's clearly visible with gcc/llvm though, with the code below: had we
      used barrier() alone here, llvm would have omitted the clearing; not so
      with the barrier_data() variant:
      
        static inline void memzero_explicit(void *s, size_t count)
        {
          memset(s, 0, count);
          barrier_data(s);
        }
      
        int main(void)
        {
          char buff[20];
          memzero_explicit(buff, sizeof(buff));
          return 0;
        }
      
        $ gcc -O2 test.c
        $ gdb a.out
        (gdb) disassemble main
        Dump of assembler code for function main:
         0x0000000000400400  <+0>: lea   -0x28(%rsp),%rax
         0x0000000000400405  <+5>: movq  $0x0,-0x28(%rsp)
         0x000000000040040e <+14>: movq  $0x0,-0x20(%rsp)
         0x0000000000400417 <+23>: movl  $0x0,-0x18(%rsp)
         0x000000000040041f <+31>: xor   %eax,%eax
         0x0000000000400421 <+33>: retq
        End of assembler dump.
      
        $ clang -O2 test.c
        $ gdb a.out
        (gdb) disassemble main
        Dump of assembler code for function main:
         0x00000000004004f0  <+0>: xorps  %xmm0,%xmm0
         0x00000000004004f3  <+3>: movaps %xmm0,-0x18(%rsp)
         0x00000000004004f8  <+8>: movl   $0x0,-0x8(%rsp)
         0x0000000000400500 <+16>: lea    -0x18(%rsp),%rax
         0x0000000000400505 <+21>: xor    %eax,%eax
         0x0000000000400507 <+23>: retq
        End of assembler dump.
      
      As gcc, clang, but also icc defines __GNUC__, it's sufficient to define
      this in compiler-gcc.h only to be picked up. For a fallback or otherwise
      unsupported compiler, we define it as a barrier. Similarly, for ecc which
      does not support gcc inline asm.
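      
      The gcc-side definition is roughly the following; the "r"(ptr) input
      makes the pointed-to bytes visible to the asm, so the preceding
      memset() cannot be discarded as a dead store:
      
        /* compiler-gcc.h (sketch) */
        #define barrier_data(ptr) \
        	__asm__ __volatile__("" : : "r" (ptr) : "memory")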
      
      Reference: https://llvm.org/bugs/show_bug.cgi?id=15495
      Reported-by: Stephan Mueller <smueller@chronox.de>
      Tested-by: Stephan Mueller <smueller@chronox.de>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Stephan Mueller <smueller@chronox.de>
      Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: mancha security <mancha1@zoho.com>
      Cc: Mark Charlebois <charlebm@gmail.com>
      Cc: Behan Webster <behanw@converseincode.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      7829fb09
  32. 27 Mar 2015, 1 commit
  33. 22 Feb 2015, 1 commit
    • kernel: make READ_ONCE() valid on const arguments · dd369297
      Committed by Linus Torvalds
      The use of READ_ONCE() causes lots of warnings with the pending paravirt
      spinlock fixes, because those end up passing a member of a 'const'
      structure to READ_ONCE().
      
      There should certainly be nothing wrong with using READ_ONCE() with a
      const source, but the helper function __read_once_size() would cause
      warnings because it would drop the 'const' qualifier, but also because
      the destination would be marked 'const' too due to the use of 'typeof'.
      
      Use a union of types in READ_ONCE() to avoid this issue.
      
      Also make sure to use parentheses around the macro arguments to avoid
      possible operator precedence issues.
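      
      The resulting macro is roughly shaped like this (details approximate):
      
        #define READ_ONCE(x)						\
        ({								\
        	union { typeof(x) __val; char __c[1]; } __u;		\
        	__read_once_size(&(x), __u.__c, sizeof(x));		\
        	__u.__val;						\
        })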
      Tested-by: Ingo Molnar <mingo@kernel.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dd369297
  34. 29 Jan 2015, 1 commit
  35. 19 Jan 2015, 2 commits