1. 05 Jan, 2014 (1 commit)
    • x86, sparse: Do not force removal of __user when calling copy_to/from_user_nocheck() · df90ca96
      Committed by Steven Rostedt
      Commit ff47ab4f "x86: Add 1/2/4/8 byte optimization to 64bit
      __copy_{from,to}_user_inatomic" added a "_nocheck" call in between
      the copy_to/from_user() and copy_user_generic(). As both the
      normal and nocheck versions of these calls use the proper __user
      annotation, a typecast to remove it should not be added.
      This causes sparse to spin out the following warnings:
      
      arch/x86/include/asm/uaccess_64.h:207:47: warning: incorrect type in argument 2 (different address spaces)
      arch/x86/include/asm/uaccess_64.h:207:47:    expected void const [noderef] <asn:1>*src
      arch/x86/include/asm/uaccess_64.h:207:47:    got void const *<noident>
      arch/x86/include/asm/uaccess_64.h:207:47: warning: incorrect type in argument 2 (different address spaces)
      arch/x86/include/asm/uaccess_64.h:207:47:    expected void const [noderef] <asn:1>*src
      arch/x86/include/asm/uaccess_64.h:207:47:    got void const *<noident>
      arch/x86/include/asm/uaccess_64.h:207:47: warning: incorrect type in argument 2 (different address spaces)
      arch/x86/include/asm/uaccess_64.h:207:47:    expected void const [noderef] <asn:1>*src
      arch/x86/include/asm/uaccess_64.h:207:47:    got void const *<noident>
      arch/x86/include/asm/uaccess_64.h:207:47: warning: incorrect type in argument 2 (different address spaces)
      arch/x86/include/asm/uaccess_64.h:207:47:    expected void const [noderef] <asn:1>*src
      arch/x86/include/asm/uaccess_64.h:207:47:    got void const *<noident>
      
      Cc: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/20140103164500.5f6478f5@gandalf.local.home
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      df90ca96
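      For readers unfamiliar with the annotation involved, here is a minimal sketch of how __user and the _nocheck variant interact under sparse. The macro and the stub body are illustrative only, not the kernel's actual definitions; only the naming mirrors the real API.

          /* Minimal sketch (not kernel code) of the __user annotation sparse checks. */
          #ifdef __CHECKER__
          # define __user __attribute__((noderef, address_space(1)))
          #else
          # define __user
          #endif

          /* Stub standing in for the real nocheck helper; note that its
           * argument keeps the __user annotation. */
          static inline unsigned long
          __copy_from_user_nocheck(void *dst, const void __user *src, unsigned long size)
          {
                  (void)dst; (void)src; (void)size;
                  return 0;
          }

          static inline unsigned long
          copy_from_user_sketch(void *dst, const void __user *src, unsigned long size)
          {
                  /* Correct: pass the annotated pointer straight through. */
                  return __copy_from_user_nocheck(dst, src, size);
                  /* The bug being fixed: casting the argument to (const void *)
                   * strips the annotation and makes sparse emit the
                   * "different address spaces" warnings quoted above. */
          }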
  2. 26 Oct, 2013 (2 commits)
    • x86: Unify copy_to_user() and add size checking to it · 7a3d9b0f
      Committed by Jan Beulich
      Similarly to copy_from_user(), where the range check is to
      protect against kernel memory corruption, copy_to_user() can
      benefit from such checking too: Here it protects against kernel
      information leaks.
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: <arjan@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/5265059502000078000FC4F6@nat28.tlf.novell.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      7a3d9b0f
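      As a rough illustration of the check being added: the sketch below assumes the __user annotation from the earlier sketch, and both copy_to_user_overflow_sketch() and _raw_copy_to_user_sketch() are hypothetical stand-ins for the kernel's diagnostic helper and unchecked copy.

          /* Sketch: refuse a copy_to_user() whose length provably exceeds the
           * compile-time-known size of the kernel source buffer (info-leak guard). */
          static inline unsigned long
          copy_to_user_sketch(void __user *to, const void *from, unsigned long n)
          {
                  unsigned long src_size = __builtin_object_size(from, 0);

                  if (src_size != (unsigned long)-1 && n > src_size) {
                          copy_to_user_overflow_sketch();   /* diagnose the overflow */
                          return n;                         /* report nothing copied */
                  }
                  return _raw_copy_to_user_sketch(to, from, n);   /* unchecked copy */
          }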
    • x86: Unify copy_from_user() size checking · 3df7b41a
      Committed by Jan Beulich
      Commits 4a312769 ("x86: Turn the
      copy_from_user check into an (optional) compile time warning")
      and 63312b6a ("x86: Add a
      Kconfig option to turn the copy_from_user warnings into errors")
      touched only the 32-bit variant of copy_from_user(), whereas the
      original commit 9f0cf4ad ("x86:
      Use __builtin_object_size() to validate the buffer size for
      copy_from_user()") also added the same code to the 64-bit one.
      
      Further the earlier conversion from an inline WARN() to the call
      to copy_from_user_overflow() went a little too far: When the
      number of bytes to be copied is not a constant (e.g. [looking at
      3.11] in drivers/net/tun.c:__tun_chr_ioctl() or
      drivers/pci/pcie/aer/aer_inject.c:aer_inject_write()), the
      compiler will always have to keep the function call, and hence
      there will always be a warning. By using __builtin_constant_p()
      we can avoid this.
      
      And then this slightly extends the effect of
      CONFIG_DEBUG_STRICT_USER_COPY_CHECKS in that apart from
      converting warnings to errors in the constant size case, it
      retains the (possibly wrong) warnings in the non-constant size
      case, such that if someone is prepared to get a few false
      positives, (s)he'll be able to recover the current behavior
      (except that these diagnostics now will never be converted to
      errors).
      
      Since the 32-bit variant (intentionally) didn't call
      might_fault(), the unification results in this being called
      twice now. Adding a suitable #ifdef would be the alternative if
      that's a problem.
      
      I'd like to point out though that with
      __compiletime_object_size() being restricted to gcc before 4.6,
      the whole construct is going to become more and more pointless
      going forward. I would question however that commit
      2fb0815c ("gcc4: disable
      __compiletime_object_size for GCC 4.6+") was really necessary,
      and instead this should have been dealt with as is done here
      from the beginning.
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/5265056D02000078000FC4F3@nat28.tlf.novell.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3df7b41a
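      The __builtin_constant_p() point can be condensed into a sketch like the following; the diagnostic helper and the raw copy are again hypothetical stand-ins for the real kernel identifiers.

          /* Sketch: only report the overflow when the length is a compile-time
           * constant; a non-constant length would otherwise leave the warning
           * in place on every build, even for correct callers. */
          static inline unsigned long
          copy_from_user_sketch(void *to, const void __user *from, unsigned long n)
          {
                  unsigned long dst_size = __builtin_object_size(to, 0);

                  might_fault();
                  if (__builtin_constant_p(n) &&
                      dst_size != (unsigned long)-1 && n > dst_size) {
                          copy_from_user_overflow_sketch();   /* constant size: diagnose */
                          return n;
                  }
                  return _raw_copy_from_user_sketch(to, from, n);   /* unchecked copy */
          }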
  3. 11 Sep, 2013 (1 commit)
  4. 28 May, 2013 (1 commit)
  5. 22 Sep, 2012 (1 commit)
  6. 30 Jun, 2012 (1 commit)
  7. 27 May, 2012 (1 commit)
  8. 12 Apr, 2012 (1 commit)
    • x86: merge 32/64-bit versions of 'strncpy_from_user()' and speed it up · 92ae03f2
      Committed by Linus Torvalds
      This merges the 32- and 64-bit versions of the x86 strncpy_from_user()
      by just rewriting it in C rather than the ancient inline asm versions
      that used lodsb/stosb and had been duplicated for (trivial) differences
      between the 32-bit and 64-bit versions.
      
      While doing that, it also speeds them up by doing the accesses a word at
      a time.  Finally, the new routines also properly handle the case of
      hitting the end of the address space, which we have never done correctly
      before (fs/namei.c has a hack around it for that reason).
      
      Despite all these improvements, it actually removes more lines than it
      adds, due to the de-duplication.  Also, we no longer export (or define)
      the legacy __strncpy_from_user() function (that was defined to not do
      the user permission checks), since it's not actually used anywhere, and
      the user address space checks are built in to the new code.
      
      Other architecture maintainers have been notified that the old hack in
      fs/namei.c will be going away in the 3.5 merge window, in case they
      copied the x86 approach of being a bit cavalier about the end of the
      address space.
      
      Cc: linux-arch@vger.kernel.org
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Anvin <hpa@zytor.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      92ae03f2
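      The word-at-a-time access the new C version relies on comes down to the classic "does this word contain a zero byte" bit trick. A standalone sketch for a 64-bit little-endian machine follows; the kernel's real implementation lives in lib/strncpy_from_user.c and <asm/word-at-a-time.h>.

          #include <stdint.h>

          /* Returns non-zero exactly when some byte of v is zero, without a
           * per-byte loop (the standard "haszero" trick). */
          static inline int word_has_zero_byte(uint64_t v)
          {
                  const uint64_t ones = 0x0101010101010101ULL;
                  const uint64_t high = 0x8080808080808080ULL;

                  return ((v - ones) & ~v & high) != 0;
          }

          /* strncpy_from_user() copies whole words until this fires, then
           * locates the exact terminating NUL within the final word. */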
  9. 21 May, 2011 (1 commit)
    • sanitize <linux/prefetch.h> usage · 268bb0ce
      Committed by Linus Torvalds
      Commit e66eed65 ("list: remove prefetching from regular list
      iterators") removed the include of prefetch.h from list.h, which
      uncovered several cases that had apparently relied on that rather
      obscure header file dependency.
      
      So this fixes things up a bit, using
      
         grep -L linux/prefetch.h $(git grep -l '[^a-z_]prefetchw*(' -- '*.[ch]')
         grep -L 'prefetchw*(' $(git grep -l 'linux/prefetch.h' -- '*.[ch]')
      
      to guide us in finding files that either need <linux/prefetch.h>
      inclusion, or have it despite not needing it.
      
      There are more of them around (mostly network drivers), but this gets
      many core ones.
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      268bb0ce
  10. 06 Jan, 2010 (1 commit)
  11. 30 Dec, 2009 (1 commit)
    • x86-64: Modify copy_user_generic() alternatives mechanism · 1b1d9258
      Committed by Jan Beulich
      In order to avoid unnecessary chains of branches, rather than
      implementing copy_user_generic() as a function consisting of
      just a single (possibly patched) branch, instead properly deal
      with patching call instructions in the alternative instructions
      framework, and move the patching into the callers.
      
      As a follow-on, one could also introduce something like
      __EXPORT_SYMBOL_ALT() to avoid patching call sites in modules.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <4B2BB8180200007800026AE7@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1b1d9258
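      The resulting shape of copy_user_generic() looks roughly like the sketch below, paraphrased from memory of that era's arch/x86/include/asm/uaccess_64.h; treat the exact register constraints and clobbers as approximate rather than authoritative.

          static __always_inline __must_check unsigned long
          copy_user_generic(void *to, const void *from, unsigned len)
          {
                  unsigned ret;

                  /* alternative_call() emits a "call copy_user_generic_unrolled"
                   * at the call site and lets the alternatives framework patch
                   * it into "call copy_user_generic_string" on CPUs with the
                   * REP_GOOD feature - no stub function, no extra branch. */
                  alternative_call(copy_user_generic_unrolled,
                                   copy_user_generic_string,
                                   X86_FEATURE_REP_GOOD,
                                   ASM_OUTPUT2("=a" (ret), "=D" (to), "=S" (from),
                                               "=d" (len)),
                                   "1" (to), "2" (from), "3" (len)
                                   : "memory", "rcx", "r8", "r9", "r10", "r11");
                  return ret;
          }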
  12. 16 Nov, 2009 (1 commit)
    • x86: Add missing might_fault() checks to copy_{to,from}_user() · 3c93ca00
      Committed by Frederic Weisbecker
      On x86-64, copy_[to|from]_user() rely on assembly routines that
      never call might_fault(), so we miss various lockdep
      checks.
      
      This doesn't apply to __copy_{from,to}_user(), which explicitly
      make these calls; nor is it a problem on x86-32, where
      copy_{to,from}_user() rely on the "__"-prefixed versions that
      also call might_fault().
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1258382538-30979-1-git-send-email-fweisbec@gmail.com>
      [ v2: fix module export ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3c93ca00
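      In effect, the 64-bit entry points gain a debugging hook along these lines; the wrapper and _copy_from_user_sketch() are illustrative, not the real uaccess code.

          /* Sketch: call might_fault() before the low-level copy, so lockdep
           * sees the potential page fault even when none occurs on this run. */
          static inline unsigned long
          copy_from_user_sketch(void *to, const void __user *from, unsigned long n)
          {
                  might_fault();
                  return _copy_from_user_sketch(to, from, n);   /* unchecked copy */
          }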
  13. 15 Nov, 2009 (1 commit)
    • x86-64: __copy_from_user_inatomic() adjustments · 14722485
      Committed by Jan Beulich
      This v2.6.26 commit:
      
          ad2fc2cd: x86: fix copy_user on x86
      
      rendered __copy_from_user_inatomic() identical to
      copy_user_generic(), yet didn't make the former just call the
      latter from an inline function.
      
      Furthermore, this v2.6.19 commit:
      
          b885808e: [PATCH] Add proper sparse __user casts to __copy_to_user_inatomic
      
      converted the return type of __copy_to_user_inatomic() from
      unsigned long to int, but didn't do the same to
      __copy_from_user_inatomic().
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: <v.mayatskih@gmail.com>
      LKML-Reference: <4AFD5778020000780001F8F4@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      14722485
  14. 26 Sep, 2009 (1 commit)
    • x86: Use __builtin_object_size() to validate the buffer size for copy_from_user() · 9f0cf4ad
      Committed by Arjan van de Ven
      gcc (4.x) supports the __builtin_object_size() builtin, which
      reports the size of an object that a pointer points to, when known
      at compile time. If the buffer size is not known at compile time, a
      constant -1 is returned.
      
      This patch uses this feature to add a sanity check to
      copy_from_user(); if the target buffer is known to be smaller than
      the copy size, the copy is aborted and a WARNing is emitted in
      memory debug mode.
      
      These extra checks compile away when the object size is not known,
      or if both the buffer size and the copy length are constants.
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      LKML-Reference: <20090926143301.2c396b94@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9f0cf4ad
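      A self-contained userspace example of what the builtin reports; compile with optimization (e.g. gcc -O2) for precise results.

          #include <stdio.h>
          #include <stdlib.h>

          static char small_buf[16];

          static char *opaque(unsigned n)
          {
                  return malloc(n);       /* size not known at compile time */
          }

          int main(void)
          {
                  unsigned n = (unsigned)rand() % 64 + 1;
                  char *p = opaque(n);

                  /* Known object: reports its size. */
                  printf("%zu\n", __builtin_object_size(small_buf, 0));   /* 16 */

                  /* Unknown object: reports (size_t)-1, so the check compiles away. */
                  printf("%zu\n", __builtin_object_size(p, 0));

                  free(p);
                  return 0;
          }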
  15. 21 Jul, 2009 (1 commit)
  16. 02 Mar, 2009 (1 commit)
    • x86, mm: dont use non-temporal stores in pagecache accesses · f1800536
      Committed by Ingo Molnar
      Impact: standardize IO on cached ops
      
      On modern CPUs it is almost always a bad idea to use non-temporal stores,
      as the regression in this commit has shown it:
      
        30d697fa: x86: fix performance regression in write() syscall
      
      The kernel simply has no good information about whether using non-temporal
      stores is a good idea or not - and trying to add heuristics only increases
      complexity and inserts fragility.
      
      The regression on cached write()s took very long to be found - over two
      years. So don't take any chances and let the hardware decide how it makes
      use of its caches.
      
      The only exception is drivers/gpu/drm/i915/i915_gem.c: there we are
      absolutely sure that another entity (the GPU) will pick up the dirty
      data immediately and that the CPU will not touch that data before the
      GPU will.
      
      Also, keep the _nocache() primitives to make it easier for people to
      experiment with these details. There may be more clear-cut cases where
      non-cached copies can be used, outside of filemap.c.
      
      Cc: Salman Qazi <sqazi@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f1800536
  17. 25 Feb, 2009 (3 commits)
    • x86: usercopy: check for total size when deciding non-temporal cutoff · 95108fa3
      Committed by Ingo Molnar
      Impact: make more types of copies non-temporal
      
      This change makes the following simple fix a bit more sophisticated:
      
        30d697fa: x86: fix performance regression in write() syscall
      
      We now check the 'total' number of bytes written to decide whether
      to copy in a cached or a non-temporal way.
      
      This will for example cause the tail (modulo 4096 bytes) chunk
      of a large write() to be non-temporal too - not just the page-sized
      chunks.
      
      Cc: Salman Qazi <sqazi@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      95108fa3
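      A hedged sketch of the decision described above; the threshold name and value, and the two copy helpers, are illustrative stand-ins rather than the kernel's actual identifiers.

          #define NOCACHE_TOTAL_CUTOFF 4096        /* hypothetical cutoff: one page */

          /* Decide cached vs. non-temporal by the *total* size of the write(),
           * not by the size of the chunk currently being copied; the sub-page
           * tail of a large write then goes non-temporal as well. */
          static unsigned long
          copy_chunk_sketch(void *dst, const void __user *src,
                            unsigned long chunk, unsigned long total)
          {
                  if (total >= NOCACHE_TOTAL_CUTOFF)
                          return __copy_from_user_nocache_sketch(dst, src, chunk);
                  return __copy_from_user_sketch(dst, src, chunk);
          }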
    • x86, mm: pass in 'total' to __copy_from_user_*nocache() · 3255aa2e
      Committed by Ingo Molnar
      Impact: cleanup, enable future change
      
      Add a 'total bytes copied' parameter to __copy_from_user_*nocache(),
      and update all the callsites.
      
      The parameter is not used yet - architecture code can use it to
      more intelligently decide whether the copy should be cached or
      non-temporal.
      
      Cc: Salman Qazi <sqazi@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3255aa2e
    • x86: fix performance regression in write() syscall · 30d697fa
      Committed by Salman Qazi
      While the introduction of __copy_from_user_nocache (see commit:
      0812a579) may have been an improvement
      for sufficiently large writes, there is evidence to show that it is
      detrimental for small writes.  Unixbench's fstime test gives the
      following results for 256 byte writes with MAX_BLOCK of 2000:
      
          2.6.29-rc6 ( 5 samples, each in KB/sec ):
          283750, 295200, 294500, 293000, 293300
      
          2.6.29-rc6 + this patch (5 samples, each in KB/sec):
          313050, 3106750, 293350, 306300, 307900
      
          2.6.18
          395700, 342000, 399100, 366050, 359850
      
          See w_test() in src/fstime.c in unixbench version 4.1.0.  Basically, the above test
          consists of counting how much we can write in this manner:
      
          alarm(10);
          while (!sigalarm) {
                  for (f_blocks = 0; f_blocks < 2000; ++f_blocks) {
                         write(f, buf, 256);
                  }
                  lseek(f, 0L, 0);
          }
      
      Note, there are other components to the write syscall regression
      that are not addressed here.
      Signed-off-by: Salman Qazi <sqazi@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      30d697fa
  18. 19 Nov, 2008 (1 commit)
  19. 23 Oct, 2008 (2 commits)
  20. 03 Oct, 2008 (1 commit)
  21. 11 Sep, 2008 (1 commit)
  22. 10 Sep, 2008 (1 commit)
    • x86: some lock annotations for user copy paths · c10d38dd
      Committed by Nick Piggin
      copy_to/from_user and all its variants (except the atomic ones) can take a
      page fault and perform non-trivial work like taking mmap_sem and entering
      the filesystem/pagecache.
      
      Unfortunately, this often escapes lockdep because a common pattern is to
      use it to read in some arguments just set up from userspace, or write data
      back to a hot buffer. In those cases, it will be unlikely for page reclaim
      to get a window in to cause copy_*_user to fault.
      
      With the new might_lock primitives, add some annotations to x86. I don't
      know if I caught all possible faulting points (it's a bit of a maze, and I
      didn't really look at 32-bit). But this is a starting point.
      
      Boots and runs OK so far.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c10d38dd
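      The annotation these paths gained boils down to roughly the following paraphrase; the real helper is might_fault() in the core kernel, and its exact conditions have changed across versions.

          /* Sketch: record, for lockdep, that the upcoming user access may
           * sleep and may take mmap_sem for reading - even if no fault
           * happens on this particular run, the dependency stays visible. */
          static inline void might_fault_sketch(void)
          {
                  might_sleep();
                  if (current->mm)
                          might_lock_read(&current->mm->mmap_sem);
          }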
  23. 23 Jul, 2008 (1 commit)
    • x86: consolidate header guards · 77ef50a5
      Committed by Vegard Nossum
      This patch is the result of an automatic script that consolidates the
      format of all the headers in include/asm-x86/.
      
      The format:
      
      1. No leading underscore. Names with leading underscores are reserved.
      2. Pathname components are separated by two underscores. So we can
         distinguish between mm_types.h and mm/types.h.
      3. Everything except letters and numbers are turned into single
         underscores.
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      77ef50a5
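      Applied to a concrete file, the three rules produce guards of this shape (the file name is only an example):

          /* include/asm-x86/uaccess_64.h */
          #ifndef ASM_X86__UACCESS_64_H
          #define ASM_X86__UACCESS_64_H

          /* ... header contents ... */

          #endif /* ASM_X86__UACCESS_64_H */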
  24. 09 Jul, 2008 (13 commits)