1. 28 Apr 2020, 2 commits
  2. 18 Mar 2020, 2 commits
  3. 17 Jan 2020, 2 commits
  4. 02 Jan 2020, 1 commit
  5. 27 Dec 2019, 1 commit
    • make 'user_access_begin()' do 'access_ok()' · 83460ef1
      Authored by Linus Torvalds
      commit 594cc251fdd0d231d342d88b2fdff4bc42fb0690 upstream.
      
      Originally, the rule used to be that you'd have to do access_ok()
      separately, and then user_access_begin() before actually doing the
      direct (optimized) user access.
      
      But experience has shown that people then decide not to do access_ok()
      at all, and instead rely on it being implied by other operations or
      similar.  Which makes it very hard to verify that the access has
      actually been range-checked.
      
      If you use the unsafe direct user accesses, hardware features (either
      SMAP - Supervisor Mode Access Protection - on x86, or PAN - Privileged
      Access Never - on ARM) do force you to use user_access_begin().  But
      nothing really forces the range check.
      
      By putting the range check into user_access_begin(), we actually force
      people to do the right thing (tm), and the range check will be visible
      near the actual accesses.  We have way too long a history of people
      trying to avoid them.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      
      [ Shile: fix the following conflicts by adding dummy arguments ]
      Conflicts:
      	kernel/compat.c
      	kernel/exit.c
      Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com>
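
      To make the new contract concrete, here is a minimal sketch of the
      pattern this commit enforces. The helper function and its arguments
      are hypothetical; user_access_begin(), unsafe_get_user() and
      user_access_end() are the kernel APIs the commit message refers to.

      ```c
      #include <linux/errno.h>
      #include <linux/uaccess.h>

      /* Hypothetical helper: copy two ints from userspace the "unsafe"
       * (optimized) way. After this commit, user_access_begin() performs
       * the access_ok() range check itself and returns 0 on a bad range,
       * so a separate access_ok() call is no longer needed -- or trusted. */
      static int read_user_pair(const int __user *uptr, int *a, int *b)
      {
              if (!user_access_begin(uptr, 2 * sizeof(int)))
                      return -EFAULT;

              unsafe_get_user(*a, &uptr[0], efault);
              unsafe_get_user(*b, &uptr[1], efault);
              user_access_end();
              return 0;

      efault:
              user_access_end();
              return -EFAULT;
      }
      ```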
  6. 18 Dec 2019, 2 commits
  7. 05 Dec 2019, 4 commits
  8. 01 Dec 2019, 1 commit
  9. 24 Nov 2019, 1 commit
    • idr: Fix idr_get_next race with idr_remove · a16a3669
      Authored by Matthew Wilcox (Oracle)
      commit 5c089fd0c73411f2170ab795c9ffc16718c7d007 upstream.
      
      If the entry is deleted from the IDR between the call to
      radix_tree_iter_find() and rcu_dereference_raw(), idr_get_next()
      will return NULL, which will end the iteration prematurely.  We should
      instead continue to the next entry in the IDR.  This only happens if the
      iteration is protected by the RCU lock.  Most IDR users use a spinlock
      or semaphore to exclude simultaneous modifications.  It was noticed once
      the PID allocator was converted to use the IDR, as it uses the RCU lock,
      but there may be other users elsewhere in the kernel.
      
      We can't use the normal pattern of calling radix_tree_deref_retry()
      (which catches both a retry entry in a leaf node and a node entry in
      the root) as the IDR supports storing entries which are unaligned,
      which will trigger an infinite loop if they are encountered.  Instead,
      we have to explicitly check whether the entry is a retry entry.
      
      Fixes: 0a835c4f ("Reimplement IDR and IDA using the radix tree")
      Reported-by: Brendan Gregg <bgregg@netflix.com>
      Tested-by: Brendan Gregg <bgregg@netflix.com>
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
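
      As an illustration of the usage pattern that exposed the bug, here is
      a sketch of an RCU-protected IDR walk, as the PID allocator does; the
      object type and function are hypothetical, while idr_for_each_entry()
      is built on the idr_get_next() fixed here.

      ```c
      #include <linux/idr.h>
      #include <linux/printk.h>
      #include <linux/rcupdate.h>

      struct my_obj {         /* hypothetical entry type */
              int payload;
      };

      /* Walk an IDR under rcu_read_lock() only, with no spinlock or
       * semaphore excluding writers. Before the fix, an idr_remove()
       * racing with the lookup inside idr_get_next() could make it
       * return NULL and end this loop early; after the fix, the removed
       * slot is skipped and the walk continues at the next ID. */
      static void walk_objects(struct idr *idr)
      {
              struct my_obj *obj;
              int id;

              rcu_read_lock();
              idr_for_each_entry(idr, obj, id)
                      pr_info("id %d: payload %d\n", id, obj->payload);
              rcu_read_unlock();
      }
      ```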
  10. 21 Nov 2019, 1 commit
  11. 13 Nov 2019, 1 commit
  12. 29 Oct 2019, 1 commit
  13. 08 Oct 2019, 1 commit
  14. 06 Sep 2019, 3 commits
  15. 16 Aug 2019, 1 commit
  16. 07 Aug 2019, 2 commits
  17. 26 Jul 2019, 3 commits
  18. 10 Jul 2019, 1 commit
  19. 11 Jun 2019, 1 commit
    • test_firmware: Use correct snprintf() limit · 7fbcb7d1
      Authored by Dan Carpenter
      commit bd17cc5a20ae9aaa3ed775f360b75ff93cd66a1d upstream.
      
      The limit here is supposed to be how much of the page is left, but it's
      just using PAGE_SIZE as the limit.
      
      The other thing to remember is that snprintf() returns the number of
      bytes which would have been copied if we had had enough room.  So that
      means that if we run out of space then this code would end up passing a
      negative value as the limit and the kernel would print an error message.
      I have changed the code to use scnprintf(), which returns the number of
      bytes that were successfully printed (not counting the NUL terminator).
      
      Fixes: c92316bf ("test_firmware: add batched firmware tests")
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
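
      A minimal sketch of the bug class and the fix follows; the function
      and data are hypothetical stand-ins for the test_firmware sysfs code.

      ```c
      #include <linux/kernel.h>
      #include <linux/mm.h>

      /* Hypothetical sysfs-style show helper filling one PAGE_SIZE buffer.
       * The buggy form was: len += snprintf(buf + len, PAGE_SIZE, ...).
       * It ignores the space already consumed, and because snprintf()
       * returns the would-have-been length, 'len' can grow past PAGE_SIZE
       * once the page is full, wrapping the next limit and triggering a
       * kernel error message. scnprintf() returns only what was actually
       * written, so 'len' can never exceed PAGE_SIZE here. */
      static ssize_t fill_page(char *buf, int nvals, const int *vals)
      {
              ssize_t len = 0;
              int i;

              for (i = 0; i < nvals; i++)
                      len += scnprintf(buf + len, PAGE_SIZE - len,
                                       "val[%d] = %d\n", i, vals[i]);
              return len;
      }
      ```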
  20. 04 Jun 2019, 1 commit
  21. 31 May 2019, 3 commits
  22. 26 May 2019, 1 commit
    • x86/mm/mem_encrypt: Disable all instrumentation for early SME setup · f037116f
      Authored by Gary Hook
      [ Upstream commit b51ce3744f115850166f3d6c292b9c8cb849ad4f ]
      
      Enablement of AMD's Secure Memory Encryption feature is determined very
      early after start_kernel() is entered. Part of this procedure involves
      scanning the command line for the parameter 'mem_encrypt'.
      
      To determine intended state, the function sme_enable() uses library
      functions cmdline_find_option() and strncmp(). Their use occurs early
      enough such that it cannot be assumed that any instrumentation subsystem
      is initialized.
      
      For example, making calls to a KASAN-instrumented function before KASAN
      is set up will result in the use of uninitialized memory and a boot
      failure.
      
      When AMD's SME support is enabled, conditionally disable instrumentation
      of these dependent functions in lib/string.c and arch/x86/lib/cmdline.c.
      
       [ bp: Get rid of intermediary nostackp var and cleanup whitespace. ]
      
      Fixes: aca20d54 ("x86/mm: Add support to make use of Secure Memory Encryption")
      Reported-by: Li RongQing <lirongqing@baidu.com>
      Signed-off-by: Gary R Hook <gary.hook@amd.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Boris Brezillon <bbrezillon@kernel.org>
      Cc: Coly Li <colyli@suse.de>
      Cc: "dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Cc: "luto@kernel.org" <luto@kernel.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "mingo@redhat.com" <mingo@redhat.com>
      Cc: "peterz@infradead.org" <peterz@infradead.org>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/155657657552.7116.18363762932464011367.stgit@sosrh3.amd.com
      Signed-off-by: Sasha Levin <sashal@kernel.org>
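
      The actual change works at the build-system level, disabling
      instrumentation for whole objects in lib/ and arch/x86/lib/. As a
      hedged per-function C sketch of the same idea (the helper is
      hypothetical; __no_sanitize_address is the kernel's compiler
      attribute for exempting a function from KASAN):

      ```c
      #include <linux/compiler.h>
      #include <linux/init.h>
      #include <linux/string.h>

      /* Hypothetical early-boot helper. __no_sanitize_address keeps KASAN
       * from instrumenting this function itself; note that callees such as
       * strncmp() must be uninstrumented too -- which is exactly what the
       * commit's per-file Makefile flags guarantee for the real code paths
       * used by sme_enable(). */
      static int __init __no_sanitize_address early_param_is_on(const char *arg)
      {
              return strncmp(arg, "on", 2) == 0;
      }
      ```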
  23. 22 May 2019, 1 commit
    • iov_iter: optimize page_copy_sane() · 627bb2d9
      Authored by Eric Dumazet
      commit 6daef95b8c914866a46247232a048447fff97279 upstream.
      
      Avoid cache line miss dereferencing struct page if we can.
      
      page_copy_sane() mostly deals with order-0 pages.
      
      Extra cache line miss is visible on TCP recvmsg() calls dealing
      with GRO packets (typically 45 page frags are attached to one skb).
      
      Bringing the 45 struct pages into cpu cache while copying the data
      is not free, since the freeing of the skb (and associated
      page frags put_page()) can happen after cache lines have been evicted.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
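
      A close, lightly commented paraphrase of the upstream check follows
      (a sketch; the real function lives in lib/iov_iter.c):

      ```c
      #include <linux/bug.h>
      #include <linux/mm.h>

      /* Sketch of the optimized sanity check: the common order-0 case is
       * decided from offset and n alone, so struct page (and its potential
       * cache line miss) is only touched in the rare compound-page case. */
      static bool page_copy_sane_sketch(struct page *page, size_t offset, size_t n)
      {
              struct page *head;
              size_t v = n + offset;

              /* Fast path: fits in one page; n <= v also rejects overflow. */
              if (n <= v && v <= PAGE_SIZE)
                      return true;

              /* Slow path: now pay the dereference to find the real size. */
              head = compound_head(page);
              v += (page - head) << PAGE_SHIFT;
              if (likely(n <= v && v <= (PAGE_SIZE << compound_order(head))))
                      return true;

              WARN_ON(1);
              return false;
      }
      ```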
  24. 10 May 2019, 1 commit
  25. 02 May 2019, 1 commit
  26. 20 Apr 2019, 1 commit