1. 05 May 2017 (1 commit)
    • x86/mm/kaslr: Use the _ASM_MUL macro for multiplication to work around Clang incompatibility · 121843eb
      Committed by Matthias Kaehlcke
      The constraint "rm" allows the compiler to put mix_const into memory.
      When the input operand is a memory location then MUL needs an operand
      size suffix, since Clang can't infer the multiplication width from the
      operand.
      
      Add and use the _ASM_MUL macro which determines the operand size and
      resolves to the MUL instruction with the corresponding suffix.
      
      This fixes the following error when building with clang:
      
        CC      arch/x86/lib/kaslr.o
        /tmp/kaslr-dfe1ad.s: Assembler messages:
        /tmp/kaslr-dfe1ad.s:182: Error: no instruction mnemonic suffix given and no register operands; can't size instruction
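      A minimal sketch of the fix, assuming the existing __ASM_SIZE() size-selection
      helper in <asm/asm.h> (simplified; operand names and numbering other than
      mix_const are illustrative):

        /* arch/x86/include/asm/asm.h */
        #define _ASM_MUL	__ASM_SIZE(mul)		/* "mull" on 32-bit, "mulq" on 64-bit */

        /* arch/x86/lib/kaslr.c: the suffix tells the assembler the width */
        asm(_ASM_MUL "%3"
            : "=a" (random), "=d" (raw)
            : "a" (random), "rm" (mix_const));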
      Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
      Cc: Grant Grundler <grundler@chromium.org>
      Cc: Greg Hackmann <ghackmann@google.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Davidson <md@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170501224741.133938-1-mka@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 26 April 2017 (1 commit)
  3. 23 April 2017 (1 commit)
    • Revert "x86/mm/gup: Switch GUP to the generic get_user_page_fast() implementation" · 6dd29b3d
      Committed by Ingo Molnar
      This reverts commit 2947ba05.
      
      Dan Williams reported dax-pmem kernel warnings with the following signature:
      
         WARNING: CPU: 8 PID: 245 at lib/percpu-refcount.c:155 percpu_ref_switch_to_atomic_rcu+0x1f5/0x200
         percpu ref (dax_pmem_percpu_release [dax_pmem]) <= 0 (0) after switching to atomic
      
      ... and bisected it to this commit, which suggests possible memory corruption
      caused by the x86 fast-GUP conversion.
      
      He also pointed out:
      
       "
        This is similar to the backtrace when we were not properly handling
        pud faults and was fixed with this commit: 220ced16 "mm: fix
        get_user_pages() vs device-dax pud mappings"
      
        I've found some missing _devmap checks in the generic
        get_user_pages_fast() path, but this does not fix the regression
        [...]
       "
      
      So given that there are known bugs, and a pretty robust-looking bisection
      points to this commit suggesting that there are unknown bugs in the conversion
      as well, revert it for the time being - we'll re-try in v4.13.
      Reported-by: Dan Williams <dan.j.williams@intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: aneesh.kumar@linux.vnet.ibm.com
      Cc: dann.frazier@canonical.com
      Cc: dave.hansen@intel.com
      Cc: steve.capper@linaro.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 17 April 2017 (1 commit)
  5. 15 April 2017 (3 commits)
  6. 14 April 2017 (11 commits)
  7. 13 April 2017 (1 commit)
    • x86, pmem: fix broken __copy_user_nocache cache-bypass assumptions · 11e63f6d
      Committed by Dan Williams
      Before we rework the "pmem api" to stop abusing __copy_user_nocache()
      for memcpy_to_pmem() we need to fix cases where we may strand dirty data
      in the cpu cache. The problem occurs when copy_from_iter_pmem() is used
      for arbitrary data transfers from userspace. There is no guarantee that
      these transfers, performed by dax_iomap_actor(), will have aligned
      destinations or aligned transfer lengths. Backstop the usage of
      __copy_user_nocache() with explicit cache management in these unaligned
      cases.
      
      Yes, copy_from_iter_pmem() is now too big for an inline, but addressing
      that is saved for a later patch that moves the entirety of the "pmem
      api" into the pmem driver directly.
      
      Fixes: 5de490da ("pmem: add copy_from_iter_pmem() and clear_pmem()")
      Cc: <stable@vger.kernel.org>
      Cc: <x86@kernel.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jeff Moyer <jmoyer@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  8. 12 April 2017 (1 commit)
    • kprobes/x86: Make boostable flag boolean · 490154bc
      Committed by Masami Hiramatsu
      Make arch_specific_insn.boostable a boolean, since it has only two
      states: boostable or not. A boolean expresses this more readably than
      an integer.
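
      The resulting field, sketched (simplified; other members of the struct in
      arch/x86/include/asm/kprobes.h are omitted):

        struct arch_specific_insn {
        	/* copy of the original instruction */
        	kprobe_opcode_t *insn;
        	/* true if the probed instruction can be "boosted" */
        	bool boostable;
        };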
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David S . Miller <davem@davemloft.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ye Xiaolong <xiaolong.ye@intel.com>
      Link: http://lkml.kernel.org/r/149076368566.22469.6322906866458231844.stgit@devbox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  9. 11 April 2017 (3 commits)
  10. 05 April 2017 (2 commits)
  11. 04 April 2017 (5 commits)
  12. 30 March 2017 (5 commits)
    • debug: Add _ONCE() logic to report_bug() · 19d43626
      Committed by Peter Zijlstra
      Josh suggested moving the _ONCE logic inside the trap handler, using a
      bit in the bug_entry::flags field, avoiding the need for the extra
      variable.
      
      Sadly this only works for WARN_ON_ONCE(), since the others have
      printk() statements prior to triggering the trap.
      
      Still, this saves a fair amount of text and some data:
      
        text         data       filename
        10682460     4530992    defconfig-build/vmlinux.orig
        10665111     4530096    defconfig-build/vmlinux.patched
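
      A simplified sketch of the trap-handler side, assuming BUGFLAG_ONCE and
      BUGFLAG_DONE bits in bug_entry::flags (not the verbatim lib/bug.c code):

        /* In report_bug(), for WARN-class bug entries: */
        if (bug->flags & BUGFLAG_ONCE) {
        	if (bug->flags & BUGFLAG_DONE)
        		return BUG_TRAP_TYPE_WARN;	/* already reported, stay silent */
        	bug->flags |= BUGFLAG_DONE;		/* first hit: report, then latch */
        }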
      Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/atomic: Fix atomic_try_cmpxchg() semantics · 44fe8445
      Committed by Peter Zijlstra
      Dmitry noted that the new atomic_try_cmpxchg() primitive is broken when
      the old pointer doesn't point to the local stack.
      
      He writes:
      
        "Consider a classical lock-free stack push:
      
          node->next = atomic_read(&head);
          do {
          } while (!atomic_try_cmpxchg(&head, &node->next, node));
      
        This code is broken with the current implementation, the problem is
        with unconditional update of *__po.
      
        In case of success it writes the same value back into *__po, but in
        case of cmpxchg success we might have lost ownership of some memory
        locations, and potentially of what __po has pointed to. The same
        holds for the re-read of *__po. "
      
      He also points out that this makes it surprisingly different from the
      similar C/C++ atomic operation.
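
      Conceptually, the fixed primitive only writes back to the caller's expected
      value on failure, matching C11 compare_exchange (a sketch, not the real
      asm-based implementation):

        static inline bool try_cmpxchg_sketch(atomic_t *v, int *old, int new)
        {
        	int seen = atomic_cmpxchg(v, *old, new);
        	bool ok = (seen == *old);

        	if (!ok)
        		*old = seen;	/* touch *old only when the cmpxchg failed */
        	return ok;
        }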
      
      After investigating the code-gen differences caused by this patch, and
      a number of alternatives (Linus dislikes this interface a lot), we
      arrived at these results (size of x86_64-defconfig/vmlinux):
      
        GCC-6.3.0:
      
        10735757        cmpxchg
        10726413        try_cmpxchg
        10730509        try_cmpxchg + patch
        10730445        try_cmpxchg-linus
      
        GCC-7 (20170327):
      
        10709514        cmpxchg
        10704266        try_cmpxchg
        10704266        try_cmpxchg + patch
        10704394        try_cmpxchg-linus
      
      From this we see that the patch has the advantage of better code-gen
      on GCC-7 and keeps the interface roughly consistent with the C
      language variant.
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Fixes: a9ebf306 ("locking/atomic: Introduce atomic_try_cmpxchg()")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/debug: Define BUG() again for !CONFIG_BUG · 70579a86
      Committed by Arnd Bergmann
      The latest change to the BUG() macro inadvertently reverted the earlier
      commit:
      
        b06dd879 ("x86: always define BUG() and HAVE_ARCH_BUG, even with !CONFIG_BUG")
      
      ... that sanitized the behavior with CONFIG_BUG=n.
      
      I noticed this as some warnings have appeared again that were previously
      fixed as a side effect of that patch:
      
        kernel/seccomp.c: In function '__seccomp_filter':
        kernel/seccomp.c:670:1: error: no return statement in function returning non-void [-Werror=return-type]
        ...
      
      This combines the two patches and uses the ud2 macro to define BUG()
      in case of CONFIG_BUG=n.
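
      The shape of the CONFIG_BUG=n definition, sketched (the actual patch goes
      through the shared _BUG_FLAGS()/ASM_UD2 helpers rather than open-coding ud2):

        #define HAVE_ARCH_BUG
        #define BUG()						\
        do {							\
        	asm volatile("ud2");				\
        	unreachable();					\
        } while (0)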
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 9a93848f ("x86/debug: Implement __WARN() using UD0")
      Link: http://lkml.kernel.org/r/20170329211646.2707365-1-arnd@arndb.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86: switch to RAW_COPY_USER · beba3a20
      Committed by Al Viro
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • x86: don't wank with magical size in __copy_in_user() · a41e0d75
      Committed by Al Viro
      ... especially since copy_in_user() doesn't
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  13. 29 March 2017 (2 commits)
  14. 28 March 2017 (3 commits)