1. 14 Jul 2016 (1 commit)
    • x86/kernel: Audit and remove any unnecessary uses of module.h · 186f4360
      Paul Gortmaker committed
      Historically a lot of these existed because we did not have
      a distinction between what was modular code and what was providing
      support to modules via EXPORT_SYMBOL and friends.  That changed
      when we forked out support for the latter into the export.h file.
      
      This means we should be able to reduce the usage of module.h
      in code that is obj-y (Makefile) or bool (Kconfig).  The advantage
      in doing so is that module.h itself sources about 15 other headers,
      adding significantly to what we feed cpp, and it can obscure which
      headers we are effectively using.
      
      Since module.h was the source for init.h (for __init) and for
      export.h (for EXPORT_SYMBOL) we consider each obj-y/bool instance
      for the presence of either and replace as needed.  Build testing
      revealed some implicit header usage that was fixed up accordingly.
      
      Note that some bool/obj-y instances remain since module.h is
      the header for some exception table entry stuff, and for things
      like __init_or_module (code that is tossed when MODULES=n).
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20160714001901.31603-4-paul.gortmaker@windriver.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      186f4360
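
      A minimal sketch of the substitution this audit performs, assuming a
      built-in (obj-y) file that only needs EXPORT_SYMBOL and __init; the
      file and symbol names here are hypothetical:

      /* before: #include <linux/module.h> pulled in ~15 other headers */
      #include <linux/export.h>       /* EXPORT_SYMBOL and friends */
      #include <linux/init.h>         /* __init / initcall markers */

      static int __init example_setup(void)
      {
              return 0;
      }
      early_initcall(example_setup);

      int example_query(void)
      {
              return 0;
      }
      EXPORT_SYMBOL(example_query);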
  2. 09 Mar 2016 (1 commit)
    • x86/mm, x86/mce: Add memcpy_mcsafe() · 92b0729c
      Tony Luck committed
      Make use of the EXTABLE_FAULT exception table entries to write
      a kernel copy routine that doesn't crash the system if it
      encounters a machine check. Prime use case for this is to copy
      from large arrays of non-volatile memory used as storage.
      
      We have to use an unrolled copy loop for now because current
      hardware implementations treat a machine check in "rep mov"
      as fatal. When that is fixed we can simplify.
      
      The return type is a "bool": true means the copy succeeded, false
      means it did not.
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@gmail.com>
      Link: http://lkml.kernel.org/r/a44e1055efc2d2a9473307b22c91caa437aa3f8b.1456439214.git.tony.luck@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      92b0729c
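
      A usage sketch, assuming the bool-returning prototype described above;
      the helper and its caller are hypothetical (kernel headers such as
      <linux/string.h> and <linux/errno.h> assumed):

      /* Read from persistent memory without taking the machine down if the
       * source triggers a machine check. */
      static int pmem_read(void *dst, const void *pmem_src, size_t len)
      {
              if (!memcpy_mcsafe(dst, pmem_src, len))
                      return -EIO;    /* poisoned source, copy aborted */
              return 0;               /* copied OK */
      }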
  3. 07 Jun 2015 (1 commit)
  4. 14 Feb 2015 (1 commit)
    • x86_64: kasan: add interceptors for memset/memmove/memcpy functions · 393f203f
      Andrey Ryabinin committed
      Instrumentation of builtin function calls was recently removed from
      GCC 5.0.  To check the memory accessed by such functions, userspace
      asan always uses interceptors for them.
      
      So now we should do this as well.  This patch declares
      memset/memmove/memcpy as weak symbols.  In mm/kasan/kasan.c we have our
      own implementations of those functions, which check memory before
      accessing it.
      
      The default memset/memmove/memcpy now always have aliases with a '__'
      prefix.  For files that are built without kasan instrumentation (e.g.
      mm/slub.c) the original mem* calls are replaced (via #define) with the
      prefixed variants, because we don't want to check memory accesses there.
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      393f203f
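
      A sketch of the interception scheme described above; the KASAN check
      helper name is an assumption, not necessarily the exact one used:

      /* Header side: files compiled without instrumentation are pointed at
       * the unchecked '__' variants. */
      #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
      #define memcpy(dst, src, len)   __memcpy(dst, src, len)
      #define memmove(dst, src, len)  __memmove(dst, src, len)
      #define memset(s, c, n)         __memset(s, c, n)
      #endif

      /* mm/kasan side: the weak default memcpy is overridden by a checking
       * wrapper that validates both buffers before doing the real copy. */
      void *memcpy(void *dest, const void *src, size_t len)
      {
              check_memory_region((unsigned long)src, len, false);
              check_memory_region((unsigned long)dest, len, true);

              return __memcpy(dest, src, len);
      }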
  5. 25 Sep 2013 (1 commit)
  6. 17 Nov 2012 (1 commit)
    • x86: Improve __phys_addr performance by making use of carry flags and inlining · 0bdf525f
      Alexander Duyck committed
      This patch is meant to improve overall system performance when making use of
      the __phys_addr call.  To do this I have implemented several changes.
      
      First, if CONFIG_DEBUG_VIRTUAL is not defined, __phys_addr is made an
      inline, similar to how this is currently handled on 32-bit.  However, in
      order to do this it is required to export phys_base so that it is
      available if __phys_addr is used in kernel modules.
      
      The second change was to streamline the code by making use of the carry flag
      on an add operation instead of performing a compare on a 64 bit value.  The
      advantage to this is that it allows us to significantly reduce the overall
      size of the call.  On my Xeon E5 system the entire __phys_addr inline call
      consumes a little less than 32 bytes and 5 instructions.  I also applied
      similar logic to the debug version of the function.  My testing shows that the
      debug version of the function with this patch applied is slightly faster than
      the non-debug version without the patch.
      
      Finally I also applied the same logic changes to __virt_addr_valid, since
      it used the same general code flow as __phys_addr and could achieve
      similar gains through these changes.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Link: http://lkml.kernel.org/r/20121116215315.8521.46270.stgit@ahduyck-cp1.jf.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      0bdf525f
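
      A sketch of the carry-flag trick described above, close to (but not
      guaranteed identical to) the resulting kernel code:

      static inline unsigned long __phys_addr_nodebug(unsigned long x)
      {
              unsigned long y = x - __START_KERNEL_map;

              /* Kernel-text addresses: x >= __START_KERNEL_map, so the
               * subtraction does not wrap, x > y, and we add phys_base.
               * Direct-map addresses: the subtraction wraps, x <= y, and
               * y + (__START_KERNEL_map - PAGE_OFFSET) == x - PAGE_OFFSET.
               * The comparison folds into the carry of the add, so no
               * 64-bit immediate compare is needed. */
              x = y + ((x > y) ? phys_base : (__START_KERNEL_map - PAGE_OFFSET));

              return x;
      }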
  7. 23 Aug 2012 (1 commit)
    • ftrace/x86: Add support for -mfentry to x86_64 · d57c5d51
      Steven Rostedt committed
      If the kernel is compiled with gcc 4.6.0, which supports -mfentry,
      then use that instead of mcount.
      
      With mcount, frame pointers are forced with the -pg option and we
      get something like:
      
      <can_vma_merge_before>:
             55                      push   %rbp
             48 89 e5                mov    %rsp,%rbp
             53                      push   %rbx
             41 51                   push   %r9
             e8 fe 6a 39 00          callq  ffffffff81483d00 <mcount>
             31 c0                   xor    %eax,%eax
             48 89 fb                mov    %rdi,%rbx
             48 89 d7                mov    %rdx,%rdi
             48 33 73 30             xor    0x30(%rbx),%rsi
             48 f7 c6 ff ff ff f7    test   $0xfffffffff7ffffff,%rsi
      
      With -mfentry, frame pointers are no longer forced and the call looks
      like this:
      
      <can_vma_merge_before>:
             e8 33 af 37 00          callq  ffffffff81461b40 <__fentry__>
             53                      push   %rbx
             48 89 fb                mov    %rdi,%rbx
             31 c0                   xor    %eax,%eax
             48 89 d7                mov    %rdx,%rdi
             41 51                   push   %r9
             48 33 73 30             xor    0x30(%rbx),%rsi
             48 f7 c6 ff ff ff f7    test   $0xfffffffff7ffffff,%rsi
      
      This adds the ftrace hook at the beginning of the function before a
      frame is set up, and allows the function callbacks to be able to access
      parameters. As kprobes now can use function tracing (at least on x86)
      this speeds up the kprobe hooks that are at the beginning of the
      function.
      
      Link: http://lkml.kernel.org/r/20120807194100.130477900@goodmis.org
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d57c5d51
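
      A sketch of the header-side selection, assuming the convention that
      -mfentry is paired with a -DCC_USING_FENTRY define so C code can tell
      which symbol the compiler calls at function entry:

      #ifdef CC_USING_FENTRY
      extern void __fentry__(void);
      # define MCOUNT_ADDR    ((unsigned long)(__fentry__))
      #else
      extern void mcount(void);
      # define MCOUNT_ADDR    ((unsigned long)(mcount))
      #endif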
  8. 30 Jun 2012 (1 commit)
  9. 26 Jan 2011 (1 commit)
  10. 29 Apr 2010 (1 commit)
  11. 30 Dec 2009 (1 commit)
    • x86-64: Modify copy_user_generic() alternatives mechanism · 1b1d9258
      Jan Beulich committed
      In order to avoid unnecessary chains of branches, rather than
      implementing copy_user_generic() as a function consisting of
      just a single (possibly patched) branch, instead properly deal
      with patching call instructions in the alternative instructions
      framework, and move the patching into the callers.
      
      As a follow-on, one could also introduce something like
      __EXPORT_SYMBOL_ALT() to avoid patching call sites in modules.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <4B2BB8180200007800026AE7@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1b1d9258
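
      A sketch of the caller side, close to the interface this change
      introduces; the register constraints here are illustrative.  The call
      instruction itself is patched at boot, so no intermediate stub or
      extra branch is executed on the hot path:

      static __always_inline unsigned long
      copy_user_generic(void *to, const void *from, unsigned len)
      {
              unsigned ret;

              alternative_call(copy_user_generic_unrolled,   /* default  */
                               copy_user_generic_string,     /* REP_GOOD */
                               X86_FEATURE_REP_GOOD,
                               ASM_OUTPUT2("=a" (ret), "=D" (to), "=S" (from),
                                           "=d" (len)),
                               "1" (to), "2" (from), "3" (len)
                               : "memory", "rcx", "r8", "r9", "r10", "r11");
              return ret;
      }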
  12. 15 Dec 2009 (1 commit)
    • x86: don't export inline function · e6428047
      Rusty Russell committed
      For CONFIG_PARAVIRT, load_gs_index is an inline function (it's #defined
      to native_load_gs_index otherwise).
      
      Exporting an inline function breaks the new assembler-based alphabetical
      sorted symbol list:
      
      Today's linux-next build (x86_64 allmodconfig) failed like this:
      
      	.tmp_exports-asm.o: In function `__ksymtab_load_gs_index':
      	(__ksymtab_sorted+0x5b40): undefined reference to `load_gs_index'
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      To: x86@kernel.org
      Cc: alan-jenkins@tuffmail.co.uk
      e6428047
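
      A sketch of the resulting arrangement, assuming the paravirt layout
      described above: the export is attached to the real out-of-line symbol
      rather than to the inline wrapper:

      #ifdef CONFIG_PARAVIRT
      /* load_gs_index() is a static inline dispatching through paravirt ops */
      #else
      #define load_gs_index   native_load_gs_index
      #endif

      /* in the x86-64 ksyms file */
      EXPORT_SYMBOL(native_load_gs_index);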
  13. 11 Dec 2009 (1 commit)
  14. 16 Nov 2009 (1 commit)
    • x86: Add missing might_fault() checks to copy_{to,from}_user() · 3c93ca00
      Frederic Weisbecker committed
      On x86-64, copy_[to|from]_user() rely on assembly routines that
      never call might_fault(), making us miss various lockdep
      checks.
      
      This doesn't apply to __copy_{from,to}_user(), which explicitly
      handle these calls, nor is it a problem on x86-32, where
      copy_{to,from}_user() rely on the "__"-prefixed versions that
      also call might_fault().
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1258382538-30979-1-git-send-email-fweisbec@gmail.com>
      [ v2: fix module export ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3c93ca00
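
      A sketch of the fix, assuming the x86-64 wrapper shape described above:
      the annotation runs before the assembly copy routine is entered, so
      lockdep can flag copies done in atomic context:

      static inline unsigned long __must_check
      copy_from_user(void *to, const void __user *from, unsigned long n)
      {
              might_fault();          /* may sleep/fault: lockdep check */
              return _copy_from_user(to, from, n);
      }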
  15. 15 Nov 2009 (1 commit)
    • x86-64: __copy_from_user_inatomic() adjustments · 14722485
      Jan Beulich committed
      This v2.6.26 commit:
      
          ad2fc2cd: x86: fix copy_user on x86
      
      rendered __copy_from_user_inatomic() identical to
      copy_user_generic(), yet didn't make the former just call the
      latter from an inline function.
      
      Furthermore, this v2.6.19 commit:
      
          b885808e: [PATCH] Add proper sparse __user casts to __copy_to_user_inatomic
      
      converted the return type of __copy_to_user_inatomic() from
      unsigned long to int, but didn't do the same to
      __copy_from_user_inatomic().
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: <v.mayatskih@gmail.com>
      LKML-Reference: <4AFD5778020000780001F8F4@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      14722485
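
      A sketch of the adjusted inline, following the relationship described
      above: return int for symmetry with __copy_to_user_inatomic() and
      simply forward to copy_user_generic():

      static __must_check __always_inline int
      __copy_from_user_inatomic(void *dst, const void __user *src,
                                unsigned size)
      {
              return copy_user_generic(dst, (__force const void *)src, size);
      }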
  16. 26 Sep 2009 (1 commit)
    • x86: Use __builtin_object_size() to validate the buffer size for copy_from_user() · 9f0cf4ad
      Arjan van de Ven committed
      gcc (4.x) supports the __builtin_object_size() builtin, which
      reports the size of the object a pointer points to, when that size is
      known at compile time.  If the buffer size is not known at compile
      time, a constant -1 is returned.
      
      This patch uses this feature to add a sanity check to
      copy_from_user(); if the target buffer is known to be smaller than
      the copy size, the copy is aborted and a WARNing is emitted in
      memory debug mode.
      
      These extra checks compile away when the object size is not known,
      or if both the buffer size and the copy length are constants.
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      LKML-Reference: <20090926143301.2c396b94@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9f0cf4ad
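
      A sketch of the check, assuming a wrapper of roughly this shape; the
      out-of-line _copy_from_user() helper is named here for illustration:

      static inline unsigned long __must_check
      copy_from_user(void *to, const void __user *from, unsigned long n)
      {
              int sz = __builtin_object_size(to, 0);  /* -1 if unknown */

              if (likely(sz == -1 || sz >= n))
                      n = _copy_from_user(to, from, n);
              else
                      WARN(1, "Buffer overflow detected!");

              return n;       /* bytes not copied */
      }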
  17. 16 Jan 2009 (1 commit)
  18. 21 Oct 2008 (1 commit)
  19. 08 Jul 2008 (1 commit)
  20. 24 Jun 2008 (1 commit)
  21. 24 May 2008 (1 commit)
  22. 14 May 2008 (1 commit)
    • x86: fix csum_partial() export · 89804c02
      Ingo Molnar committed
      Fix this symbol export problem:
      
          Building modules, stage 2.
          MODPOST 193 modules
          ERROR: "csum_partial" [fs/reiserfs/reiserfs.ko] undefined!
          make[1]: *** [__modpost] Error 1
          make: *** [modules] Error 2
      
      This is due to a known weakness of symbol exports: if a symbol's
      only in-core user is an EXPORT_SYMBOL from a lib-y section, the
      symbol is not linked in.
      
      The solution is to move the export to x8664_ksyms_64.c - but the real
      solution would be to fix kbuild.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      89804c02
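
      A sketch of the workaround: the export lives in the always-linked obj-y
      ksyms file, so the lib-y csum_partial object is kept for modular users
      such as reiserfs:

      /* x8664_ksyms_64.c */
      #include <linux/module.h>       /* EXPORT_SYMBOL */
      #include <asm/checksum.h>

      EXPORT_SYMBOL(csum_partial);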
  23. 17 Apr 2008 (2 commits)
  24. 30 Jan 2008 (2 commits)
  25. 11 Oct 2007 (2 commits)
  26. 17 Mar 2007 (1 commit)
  27. 13 Feb 2007 (2 commits)
  28. 04 Oct 2006 (1 commit)
  29. 26 Sep 2006 (1 commit)
    • [PATCH] Fix zeroing on exception in copy_*_user · 3022d734
      Andi Kleen committed
      - Don't zero for __copy_from_user_inatomic, following i386.  This
        prevents spurious zeros for parallel filesystem writers when one of
        them takes an exception.
      - The string instruction version didn't zero the output on exception.
        Oops.
      
      Also I cleaned up the code a bit while I was at it and added a minor
      optimization to the string instruction path.
      Signed-off-by: Andi Kleen <ak@suse.de>
      3022d734
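
      A sketch of the zeroing contract described above; the wrapper name and
      the raw copy routine it calls are stand-ins for illustration:

      /* A plain copy_from_user() must zero the uncopied tail on a fault so
       * stale kernel data never reaches the caller; __copy_from_user_inatomic
       * skips this so a parallel writer's partial data is not clobbered. */
      static unsigned long
      copy_from_user_zeroing(void *to, const void __user *from, unsigned long n)
      {
              unsigned long left = raw_copy_from_user(to, from, n);  /* bytes not copied */

              if (unlikely(left))
                      memset(to + (n - left), 0, left);

              return left;
      }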
  30. 27 Jun 2006 (1 commit)
  31. 11 Apr 2006 (1 commit)
  32. 10 Apr 2006 (1 commit)
  33. 01 Apr 2006 (1 commit)
  34. 26 Mar 2006 (2 commits)
  35. 22 Mar 2006 (1 commit)
    • [PATCH] multiple exports of strpbrk · f4a641d6
      Andrew Morton committed
      Sam's tree includes a new check, which found that we're exporting strpbrk()
      multiple times.
      
      It seems that the convention is that this is exported from the arch
      files, so remove the lib/string.c export.
      
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Greg Ungerer <gerg@uclinux.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f4a641d6