1. 21 Nov 2018, 1 commit
    • um: Drop own definition of PTRACE_SYSEMU/_SINGLESTEP · 4a0344c5
      Authored by Richard Weinberger
      commit 0676b957c24bfb6e495449ba7b7e72c5b5d79233 upstream.
      
      32-bit UML used to define PTRACE_SYSEMU and PTRACE_SYSEMU_SINGLESTEP
      on its own because, many years ago, not all libcs had these request
      codes in their UAPI.
      These days PTRACE_SYSEMU/_SINGLESTEP are well known and part of glibc,
      so our own defines have become problematic.
      
      With change c48831d0eebf ("linux/x86: sync sys/ptrace.h with Linux 4.14
      [BZ #22433]") glibc turned PTRACE_SYSEMU/_SINGLESTEP into an enum and
      UML failed to build.
      
      Let's drop our define and rely on the fact that every libc has
      PTRACE_SYSEMU/_SINGLESTEP.
      
      Cc: <stable@vger.kernel.org>
      Cc: Ritesh Raj Sarraf <rrs@researchut.com>
      Reported-and-tested-by: Ritesh Raj Sarraf <rrs@researchut.com>
      Signed-off-by: Richard Weinberger <richard@nod.at>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 14 Nov 2018, 18 commits
  3. 19 Oct 2018, 1 commit
  4. 17 Oct 2018, 4 commits
  5. 14 Oct 2018, 5 commits
    • x86/boot: Add -Wno-pointer-sign to KBUILD_CFLAGS · dca5203e
      Authored by Nathan Chancellor
      When compiling the kernel with Clang, this warning appears even though
      it is disabled for the rest of the kernel, because this directory has
      its own set of KBUILD_CFLAGS. The warning was disabled kernel-wide
      before the beginning of git history.
      
      In file included from arch/x86/boot/compressed/kaslr.c:29:
      In file included from arch/x86/boot/compressed/misc.h:21:
      In file included from ./include/linux/elf.h:5:
      In file included from ./arch/x86/include/asm/elf.h:77:
      In file included from ./arch/x86/include/asm/vdso.h:11:
      In file included from ./include/linux/mm_types.h:9:
      In file included from ./include/linux/spinlock.h:88:
      In file included from ./arch/x86/include/asm/spinlock.h:43:
      In file included from ./arch/x86/include/asm/qrwlock.h:6:
      ./include/asm-generic/qrwlock.h:101:53: warning: passing 'u32 *' (aka
      'unsigned int *') to parameter of type 'int *' converts between pointers
      to integer types with different sign [-Wpointer-sign]
              if (likely(atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED)))
                                                                 ^~~~~
      ./include/linux/compiler.h:76:40: note: expanded from macro 'likely'
      # define likely(x)      __builtin_expect(!!(x), 1)
                                                  ^
      ./include/asm-generic/atomic-instrumented.h:69:66: note: passing
      argument to parameter 'old' here
      static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
                                                                       ^
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Link: https://lkml.kernel.org/r/20181013010713.6999-1-natechancellor@gmail.com
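      The fix itself is a one-line Makefile change, sketched here; the exact file (arch/x86/boot/compressed/Makefile) is inferred from the include paths in the warning above.

```make
# Sketch: this directory builds with its own KBUILD_CFLAGS, so the
# kernel-wide -Wno-pointer-sign does not apply and must be re-added here.
KBUILD_CFLAGS += -Wno-pointer-sign
```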
    • x86/time: Correct the attribute on jiffies' definition · 53c13ba8
      Authored by Nathan Chancellor
      Clang warns that the declaration of jiffies in include/linux/jiffies.h
      doesn't match the definition in arch/x86/kernel/time.c:
      
      arch/x86/kernel/time.c:29:42: warning: section does not match previous declaration [-Wsection]
      __visible volatile unsigned long jiffies __cacheline_aligned = INITIAL_JIFFIES;
                                               ^
      ./include/linux/cache.h:49:4: note: expanded from macro '__cacheline_aligned'
                       __section__(".data..cacheline_aligned")))
                       ^
      ./include/linux/jiffies.h:81:31: note: previous attribute is here
      extern unsigned long volatile __cacheline_aligned_in_smp __jiffy_arch_data jiffies;
                                    ^
      ./arch/x86/include/asm/cache.h:20:2: note: expanded from macro '__cacheline_aligned_in_smp'
              __page_aligned_data
              ^
      ./include/linux/linkage.h:39:29: note: expanded from macro '__page_aligned_data'
      #define __page_aligned_data     __section(.data..page_aligned) __aligned(PAGE_SIZE)
                                      ^
      ./include/linux/compiler_attributes.h:233:56: note: expanded from macro '__section'
      #define __section(S)                    __attribute__((__section__(#S)))
                                                             ^
      1 warning generated.
      
      The declaration was changed in commit 7c30f352 ("jiffies.h: declare
      jiffies and jiffies_64 with ____cacheline_aligned_in_smp") but wasn't
      updated here. Make them match so Clang no longer warns.
      
      Fixes: 7c30f352 ("jiffies.h: declare jiffies and jiffies_64 with ____cacheline_aligned_in_smp")
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181013005311.28617-1-natechancellor@gmail.com
    • x86/entry: Add some paranoid entry/exit CR3 handling comments · 16561f27
      Authored by Dave Hansen
      Andi Kleen was just asking me about the NMI CR3 handling and why
      we restore it unconditionally.  I was *sure* we had documented it
      well.  We did not.
      
      Add some documentation.  We have common entry code where the CR3
      value is stashed, but three places in two big code paths where we
      restore it.  I put the bulk of the comments in this common path and
      then refer to it from the other spots.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: luto@kernel.org
      Cc: bp@alien8.de
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Link: https://lkml.kernel.org/r/20181012232118.3EAAE77B@viggo.jf.intel.com
    • x86/percpu: Fix this_cpu_read() · b59167ac
      Authored by Peter Zijlstra
      Eric reported that a sequence count loop using this_cpu_read() got
      optimized out. This is wrong: this_cpu_read() must imply READ_ONCE(),
      because the interface is IRQ-safe and an interrupt can therefore have
      changed the per-CPU value between two reads.
      
      Fixes: 7c3576d2 ("[PATCH] i386: Convert PDA into the percpu section")
      Reported-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Cc: hpa@zytor.com
      Cc: eric.dumazet@gmail.com
      Cc: bp@alien8.de
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181011104019.748208519@infradead.org
    • x86/tsc: Force inlining of cyc2ns bits · 4907c68a
      Authored by Peter Zijlstra
      Looking at the asm for native_sched_clock() I noticed we don't inline
      enough, mostly caused by sharing code with cyc2ns_read_begin(), which
      we didn't use to do. So mark all of that __force_inline to make it DTRT.
      
      Fixes: 59eaef78 ("x86/tsc: Remodel cyc2ns to use seqcount_latch()")
      Reported-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: hpa@zytor.com
      Cc: eric.dumazet@gmail.com
      Cc: bp@alien8.de
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20181011104019.695196158@infradead.org
  6. 13 Oct 2018, 1 commit
  7. 10 Oct 2018, 2 commits
    • mm: Preserve _PAGE_DEVMAP across mprotect() calls · 4628a645
      Authored by Jan Kara
      Currently the _PAGE_DEVMAP bit is not preserved in mprotect(2) calls.
      As a result we will see warnings such as:
      
      BUG: Bad page map in process JobWrk0013  pte:800001803875ea25 pmd:7624381067
      addr:00007f0930720000 vm_flags:280000f9 anon_vma:          (null) mapping:ffff97f2384056f0 index:0
      file:457-000000fe00000030-00000009-000000ca-00000001_2001.fileblock fault:xfs_filemap_fault [xfs] mmap:xfs_file_mmap [xfs] readpage:          (null)
      CPU: 3 PID: 15848 Comm: JobWrk0013 Tainted: G        W          4.12.14-2.g7573215-default #1 SLE12-SP4 (unreleased)
      Hardware name: Intel Corporation S2600WFD/S2600WFD, BIOS SE5C620.86B.01.00.0833.051120182255 05/11/2018
      Call Trace:
       dump_stack+0x5a/0x75
       print_bad_pte+0x217/0x2c0
       ? enqueue_task_fair+0x76/0x9f0
       _vm_normal_page+0xe5/0x100
       zap_pte_range+0x148/0x740
       unmap_page_range+0x39a/0x4b0
       unmap_vmas+0x42/0x90
       unmap_region+0x99/0xf0
       ? vma_gap_callbacks_rotate+0x1a/0x20
       do_munmap+0x255/0x3a0
       vm_munmap+0x54/0x80
       SyS_munmap+0x1d/0x30
       do_syscall_64+0x74/0x150
       entry_SYSCALL_64_after_hwframe+0x3d/0xa2
      ...
      
      when mprotect(2) gets used on DAX mappings. Also there is a wide variety
      of other failures that can result from the missing _PAGE_DEVMAP flag
      when the area gets used by get_user_pages() later.
      
      Fix the problem by including _PAGE_DEVMAP in a set of flags that get
      preserved by mprotect(2).
      
      Fixes: 69660fd7 ("x86, mm: introduce _PAGE_DEVMAP")
      Fixes: ebd31197 ("powerpc/mm: Add devmap support for ppc64")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • KVM: x86: support CONFIG_KVM_AMD=y with CONFIG_CRYPTO_DEV_CCP_DD=m · 853c1109
      Authored by Paolo Bonzini
      SEV requires access to the AMD cryptographic device APIs, and this
      does not work when KVM is builtin and the crypto driver is a module.
      The Kconfig conditions for CONFIG_KVM_AMD_SEV do try to disable SEV in
      that case, but this does not work because the actual crypto calls are
      not culled; only sev_hardware_setup() is.
      
      This patch adds two CONFIG_KVM_AMD_SEV checks that gate all the remaining
      SEV code; it fixes this particular configuration, and drops 5 KiB of
      code when CONFIG_KVM_AMD_SEV=n.
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  8. 09 Oct 2018, 2 commits
    • x86/mm: Avoid VLA in pgd_alloc() · 184d47f0
      Authored by Kees Cook
      Arnd Bergmann reported that turning on -Wvla found a new (unintended) VLA usage:
      
        arch/x86/mm/pgtable.c: In function 'pgd_alloc':
        include/linux/build_bug.h:29:45: error: ISO C90 forbids variable length array 'u_pmds' [-Werror=vla]
        arch/x86/mm/pgtable.c:190:34: note: in expansion of macro 'static_cpu_has'
         #define PREALLOCATED_USER_PMDS  (static_cpu_has(X86_FEATURE_PTI) ? \
                                          ^~~~~~~~~~~~~~
        arch/x86/mm/pgtable.c:431:16: note: in expansion of macro 'PREALLOCATED_USER_PMDS'
          pmd_t *u_pmds[PREALLOCATED_USER_PMDS];
                      ^~~~~~~~~~~~~~~~~~~~~~
      
      Use the actual size of the array that is used for X86_FEATURE_PTI,
      which is known at build time, instead of the variable size.
      
      [ mingo: Squashed original fix with followup fix to avoid bisection breakage, wrote new changelog. ]
      Reported-by: Arnd Bergmann <arnd@arndb.de>
      Original-written-by: Arnd Bergmann <arnd@arndb.de>
      Reported-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Fixes: 1be3f247c288 ("x86/mm: Avoid VLA in pgd_alloc()")
      Link: http://lkml.kernel.org/r/20181008235434.GA35035@beast
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/intel_rdt: Fix out-of-bounds memory access in CBM tests · 49e00eee
      Authored by Reinette Chatre
      While the DOC at the beginning of lib/bitmap.c explicitly states that
      "The number of valid bits in a given bitmap does _not_ need to be an
      exact multiple of BITS_PER_LONG.", some of the bitmap operations do
      indeed access BITS_PER_LONG portions of the provided bitmap no matter
      the size of the provided bitmap. For example, if bitmap_intersects()
      is provided with an 8 bit bitmap the operation will access
      BITS_PER_LONG bits from the provided bitmap. While the operation
      ensures that these extra bits do not affect the result, the memory
      is still accessed.
      
      The capacity bitmasks (CBMs) are typically stored in u32 since they
      can never exceed 32 bits. A few instances exist where a bitmap_*
      operation is performed on a CBM by simply pointing the bitmap operation
      to the stored u32 value.
      
      The consequence of this pattern is that some bitmap_* operations will
      access out-of-bounds memory when interacting with the provided CBM. This
      is confirmed with a KASAN test that reports:
      
       BUG: KASAN: stack-out-of-bounds in __bitmap_intersects+0xa2/0x100
      
      and
      
       BUG: KASAN: stack-out-of-bounds in __bitmap_weight+0x58/0x90
      
      Fix this by moving any CBM provided to a bitmap operation needing
      BITS_PER_LONG to an 'unsigned long' variable.
      
      [ tglx: Changed related function arguments to unsigned long and got rid
      	of the _cbm extra step ]
      
      Fixes: 72d50505 ("x86/intel_rdt: Add utilities to test pseudo-locked region possibility")
      Fixes: 49f7b4ef ("x86/intel_rdt: Enable setting of exclusive mode")
      Fixes: d9b48c86 ("x86/intel_rdt: Display resource groups' allocations' size in bytes")
      Fixes: 95f0b77e ("x86/intel_rdt: Initialize new resource group with sane defaults")
      Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: fenghua.yu@intel.com
      Cc: tony.luck@intel.com
      Cc: gavin.hindman@intel.com
      Cc: jithu.joseph@intel.com
      Cc: dave.hansen@intel.com
      Cc: hpa@zytor.com
      Link: https://lkml.kernel.org/r/69a428613a53f10e80594679ac726246020ff94f.1538686926.git.reinette.chatre@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  9. 04 Oct 2018, 4 commits
  10. 03 Oct 2018, 2 commits