1. 16 Sep 2016 (6 commits)
  2. 15 Sep 2016 (1 commit)
  3. 10 Sep 2016 (6 commits)
    • perf/x86/intel: Fix PEBSv3 record drain · 8ef9b845
      Committed by Peter Zijlstra
      Alexander hit the WARN_ON_ONCE(!event) on his Skylake while running
      the perf fuzzer.
      
      This means the PEBSv3 record included a status bit for an inactive
      event, something that _should_ not happen.
      
      Move the code that filters the status bits against our known PEBS
      events up a spot to guarantee we only deal with events we know about.
      
      Further add "continue" statements to the WARN_ON_ONCE()s such that
      we'll not die nor generate silly events in case we ever do hit them
      again.
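
      A minimal sketch of the reordered drain loop described above (the names
      are illustrative, not the actual driver code):

      	static void drain_pebs_sketch(u64 pebs_status, u64 known_pebs_mask,
      	                              struct perf_event **events)
      	{
      		int bit;

      		/* filter against the PEBS events we know about *first* */
      		pebs_status &= known_pebs_mask;

      		for_each_set_bit(bit, (unsigned long *)&pebs_status, 64) {
      			struct perf_event *event = events[bit];

      			/* warn once, but keep draining instead of dying */
      			if (WARN_ON_ONCE(!event))
      				continue;

      			/* ... deliver this record to 'event' ... */
      		}
      	}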
      Reported-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Tested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vince@deater.net>
      Cc: stable@vger.kernel.org
      Fixes: a3d86542 ("perf/x86/intel/pebs: Add PEBSv3 decoding")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/bts: Kill a silly warning · ef9ef3be
      Committed by Alexander Shishkin
      At the moment, intel_bts will WARN() out if there is more than one
      event writing to the same ring buffer, via SET_OUTPUT, and will only
      send data from one event to a buffer.
      
      There is no reason to have this warning in, so kill it.
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20160906132353.19887-6-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/bts: Fix BTS PMI detection · 4d4c4741
      Committed by Alexander Shishkin
      Since BTS doesn't have a dedicated PMI status bit, the driver needs to
      take extra care to check for the condition that triggers it to avoid
      spurious NMI warnings.
      
      Regardless of the local BTS context state, the only way of knowing that
      the NMI is ours is to compare the write pointer against the interrupt
      threshold.
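
      A hedged sketch of that ownership test (the field names are assumptions,
      not the exact intel_bts code):

      	/*
      	 * BTS has no PMI status bit, so regardless of the local 'started'
      	 * state the only reliable test is whether the hardware write pointer
      	 * has crossed the interrupt threshold programmed into the DS area.
      	 */
      	static bool bts_pmi_is_ours(u64 write_ptr, u64 intr_threshold)
      	{
      		return write_ptr >= intr_threshold;
      	}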
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20160906132353.19887-5-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/bts: Fix confused ordering of PMU callbacks · a9a94401
      Committed by Alexander Shishkin
      The intel_bts driver is using a CPU-local 'started' variable to order
      callbacks and PMIs and make sure that AUX transactions don't get messed
      up. However, the ordering rules with regard to this variable are a complete
      mess, which recently resulted in perf_fuzzer-triggered warnings and
      panics.
      
      The general ordering rule that this patch enforces is that this
      cpu-local variable be set only when the cpu-local AUX transaction is
      active; consequently, this variable is to be checked before the AUX
      related bits can be touched.
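
      A minimal sketch of that rule (the per-CPU flag name is illustrative):

      	static DEFINE_PER_CPU(bool, bts_started);

      	static void bts_start_sketch(void)
      	{
      		/* make the AUX transaction active first ... */
      		/* ... and only then advertise it */
      		this_cpu_write(bts_started, true);
      	}

      	static void bts_pmi_sketch(void)
      	{
      		/* check the flag before touching any AUX state */
      		if (!this_cpu_read(bts_started))
      			return;
      		/* safe to touch the AUX buffer / DS area here */
      	}

      	static void bts_stop_sketch(void)
      	{
      		/* withdraw the flag before tearing the transaction down */
      		this_cpu_write(bts_started, false);
      	}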
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20160906132353.19887-4-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • mm: fix cache mode of dax pmd mappings · 9049771f
      Committed by Dan Williams
      track_pfn_insert() in vmf_insert_pfn_pmd() is marking dax mappings as
      uncacheable, rendering them impractical for application usage.  DAX-pte
      mappings are cached and the goal of establishing DAX-pmd mappings is to
      attain more performance, not dramatically less (3 orders of magnitude).
      
      track_pfn_insert() relies on a previous call to reserve_memtype() to
      establish the expected page_cache_mode for the range.  While memremap()
      arranges for reserve_memtype() to be called, devm_memremap_pages() does
      not.  So, teach track_pfn_insert() and untrack_pfn() how to handle
      tracking without a vma, and arrange for devm_memremap_pages() to
      establish the write-back-cache reservation in the memtype tree.
      
      Cc: <stable@vger.kernel.org>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Nilesh Choudhury <nilesh.choudhury@oracle.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reported-by: Toshi Kani <toshi.kani@hpe.com>
      Reported-by: Kai Zhang <kai.ka.zhang@oracle.com>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • perf/x86/amd/uncore: Prevent use after free · 7d762e49
      Committed by Sebastian Andrzej Siewior
      The recent conversion of the cpu hotplug support in the uncore driver
      introduced a regression due to the way the callbacks are invoked at
      initialization time.
      
      The old code called the prepare/starting/online function on each online cpu
      as a block. The new code registers the hotplug callbacks in the core for
      each state. The core invokes the callbacks at each registration on all
      online cpus.
      
      The code implicitly relied on the prepare/starting/online callbacks being
      called as combo on a particular cpu, which was not obvious and completely
      undocumented.
      
      The resulting subtle wreckage happens due to the way the uncore code
      manages shared data structures for cpus which share an uncore resource in
      hardware. The sharing is determined in the cpu starting callback, but the
      prepare callback allocates per cpu data for the upcoming cpu because
      potential sharing is unknown at this point. If the starting callback finds
      an online cpu which shares the hardware resource, it takes a refcount on the
      percpu data of that cpu and puts the own data structure into a
      'free_at_online' pointer of that shared data structure. The online callback
      frees that.
      
      With the old model this worked because in a starting callback only one
      unused structure (the one of the starting cpu) was available. The new code
      allocates the data structures for all cpus when the prepare callback is
      registered.
      
      Now the starting function iterates through all online cpus and looks for a
      data structure (skipping its own) which has a matching hardware id. The id
      member of the data structure is initialized to 0, but the hardware id can
      be 0 as well. The resulting wreckage is:
      
        CPU0 finds a matching id on CPU1, takes a refcount on CPU1 data and puts
        its own data structure into CPU1s data structure to be freed.
      
        CPU1 skips CPU0 because the data structure is allegedly its own unused one.
        It finds a matching id on CPU2, takes a refcount on CPU2 data and puts
        its own data structure into CPU2s data structure to be freed.
      
        ....
      
      Now the online callbacks are invoked.
      
        CPU0 has a pointer to CPU1s data and frees the original CPU0 data. So
        far so good.
      
        CPU1 has a pointer to CPU2s data and frees the original CPU1 data, which
        is still referenced by CPU0 ---> Booom
      
      So there are two issues to be solved here:
      
      1) The id field must be initialized at allocation time to a value which
         cannot be a valid hardware id, i.e. -1
      
         This prevents the above scenario, but now CPU1 and CPU2 both stick their
         own data structure into the free_at_online pointer of CPU0. So we leak
         CPU1s data structure.
      
      2) Fix the memory leak described in #1
      
         Instead of having a single pointer, use a hlist to enqueue the
         superfluous data structures, which are then freed by the first cpu
         invoking the online callback.
      
      Ideally we should know the sharing _before_ invoking the prepare callback,
      but that's way beyond the scope of this bug fix.
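
      A hedged sketch of the two fixes (structure and field names are
      illustrative, not the exact uncore code):

      	struct uncore_sketch {
      		int			id;		/* hardware sharing id */
      		int			refcnt;
      		struct hlist_node	node;
      		struct hlist_head	free_at_online;	/* fix #2: a list, not a single pointer */
      	};

      	static struct uncore_sketch *uncore_alloc_sketch(void)
      	{
      		struct uncore_sketch *u = kzalloc(sizeof(*u), GFP_KERNEL);

      		if (u)
      			u->id = -1;	/* fix #1: never a valid hardware id */
      		return u;
      	}

      	/* starting callback: 'mine' shares the hardware behind 'shared' */
      	static void uncore_share_sketch(struct uncore_sketch *mine,
      	                                struct uncore_sketch *shared)
      	{
      		shared->refcnt++;
      		hlist_add_head(&mine->node, &shared->free_at_online);
      	}

      	/* online callback: the first CPU getting here frees the whole queue */
      	static void uncore_free_unused_sketch(struct uncore_sketch *shared)
      	{
      		struct uncore_sketch *u;
      		struct hlist_node *tmp;

      		hlist_for_each_entry_safe(u, tmp, &shared->free_at_online, node) {
      			hlist_del(&u->node);
      			kfree(u);
      		}
      	}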
      
      [ tglx: Rewrote changelog ]
      
      Fixes: 96b2bd38 ("perf/x86/amd/uncore: Convert to hotplug state machine")
      Reported-and-tested-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Borislav Petkov <bp@suse.de>
      Link: http://lkml.kernel.org/r/20160909160822.lowgmkdwms2dheyv@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  4. 08 Sep 2016 (3 commits)
    • x86, clock: Fix kvm guest tsc initialization · a4497a86
      Committed by Prarit Bhargava
      When booting a kvm guest on AMD with the latest kernel the following
      messages are displayed in the boot log:
      
       tsc: Unable to calibrate against PIT
       tsc: HPET/PMTIMER calibration failed
      
      aa297292 ("x86/tsc: Enumerate SKL cpu_khz and tsc_khz via CPUID")
      introduced a change to account for a difference in cpu and tsc frequencies for
      Intel SKL processors. Before this change the native tsc code set
      x86_platform.calibrate_tsc to native_calibrate_tsc() which is a hardware
      calibration of the tsc, and in tsc_init() executed
      
      	tsc_khz = x86_platform.calibrate_tsc();
      	cpu_khz = tsc_khz;
      
      The kvm code changed x86_platform.calibrate_tsc to kvm_get_tsc_khz() and
      executed the same tsc_init() function.  This meant that KVM guests did not
      execute the native hardware calibration function.
      
      After aa297292, there are separate native calibrations for cpu_khz and
      tsc_khz.  The code sets x86_platform.calibrate_tsc to native_calibrate_tsc()
      which is now an Intel specific calibration function, and
      x86_platform.calibrate_cpu to native_calibrate_cpu() which is the "old"
      native_calibrate_tsc() function (ie, the native hardware calibration
      function).
      
      tsc_init() now does
      
      	cpu_khz = x86_platform.calibrate_cpu();
      	tsc_khz = x86_platform.calibrate_tsc();
      	if (tsc_khz == 0)
      		tsc_khz = cpu_khz;
      	else if (abs(cpu_khz - tsc_khz) * 10 > tsc_khz)
      		cpu_khz = tsc_khz;
      
      The kvm code should not call the hardware initialization in
      native_calibrate_cpu(), as it isn't applicable for kvm and it didn't do that
      prior to aa297292.
      
      This patch resolves this issue by setting x86_platform.calibrate_cpu to
      kvm_get_tsc_khz().
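
      A hedged sketch of what that amounts to in the guest clock setup
      (essentially the assignment described above):

      	static void __init kvmclock_calibration_sketch(void)
      	{
      		/*
      		 * Point both calibration hooks at the hypervisor-provided
      		 * frequency so the guest never falls back to the PIT/HPET/
      		 * PMTIMER hardware calibration it cannot perform.
      		 */
      		x86_platform.calibrate_tsc = kvm_get_tsc_khz;
      		x86_platform.calibrate_cpu = kvm_get_tsc_khz;
      	}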
      
      v2: I had originally set x86_platform.calibrate_cpu to
      cpu_khz_from_cpuid(), however, pbonzini pointed out that the CPUID leaf
      in that function is not available in KVM.  I have changed the function
      pointer to kvm_get_tsc_khz().
      
      Fixes: aa297292 ("x86/tsc: Enumerate SKL cpu_khz and tsc_khz via CPUID")
      Signed-off-by: Prarit Bhargava <prarit@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Cc: Len Brown <len.brown@intel.com>
      Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: "Christopher S. Hall" <christopher.s.hall@intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • x86/apic: Fix num_processors value in case of failure · c291b015
      Committed by Dou Liyang
      If the topology package map check of the APIC ID and the CPU is a failure,
      we don't generate the processor info for that APIC ID yet we increase
      disabled_cpus by one - which is buggy.
      
      Only increase num_processors once we are sure we don't fail.
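
      A minimal sketch of the corrected ordering (the helper name is
      hypothetical):

      	static int register_apic_sketch(int apicid)
      	{
      		if (package_map_check_fails(apicid))
      			return -EINVAL;	/* failure: num_processors untouched */

      		num_processors++;	/* success: only now count the CPU */
      		/* ... generate the processor info for this APIC ID ... */
      		return 0;
      	}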
      Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1473214893-16481-1-git-send-email-douly.fnst@cn.fujitsu.com
      [ Rewrote the changelog. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • um/ptrace: Fix the syscall number update after a ptrace · ce29856a
      Committed by Mickaël Salaün
      Update the syscall number after each PTRACE_SETREGS on ORIG_*AX.
      
      This is needed to get the potentially altered syscall number in the
      seccomp filters after RET_TRACE.
      
      This fixes four seccomp_bpf tests:
      > [ RUN      ] TRACE_syscall.skip_after_RET_TRACE
      > seccomp_bpf.c:1560:TRACE_syscall.skip_after_RET_TRACE:Expected -1 (18446744073709551615) == syscall(39) (26)
      > seccomp_bpf.c:1561:TRACE_syscall.skip_after_RET_TRACE:Expected 1 (1) == (*__errno_location ()) (22)
      > [     FAIL ] TRACE_syscall.skip_after_RET_TRACE
      > [ RUN      ] TRACE_syscall.kill_after_RET_TRACE
      > TRACE_syscall.kill_after_RET_TRACE: Test exited normally instead of by signal (code: 1)
      > [     FAIL ] TRACE_syscall.kill_after_RET_TRACE
      > [ RUN      ] TRACE_syscall.skip_after_ptrace
      > seccomp_bpf.c:1622:TRACE_syscall.skip_after_ptrace:Expected -1 (18446744073709551615) == syscall(39) (26)
      > seccomp_bpf.c:1623:TRACE_syscall.skip_after_ptrace:Expected 1 (1) == (*__errno_location ()) (22)
      > [     FAIL ] TRACE_syscall.skip_after_ptrace
      > [ RUN      ] TRACE_syscall.kill_after_ptrace
      > TRACE_syscall.kill_after_ptrace: Test exited normally instead of by signal (code: 1)
      > [     FAIL ] TRACE_syscall.kill_after_ptrace
      
      Fixes: 26703c63 ("um/ptrace: run seccomp after ptrace")
      Signed-off-by: Mickaël Salaün <mic@digikod.net>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: James Morris <jmorris@namei.org>
      Cc: user-mode-linux-devel@lists.sourceforge.net
      Signed-off-by: James Morris <james.l.morris@oracle.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
  5. 07 Sep 2016 (1 commit)
    • x86/uaccess: force copy_*_user() to be inlined · e6971009
      Committed by Kees Cook
      As already done with __copy_*_user(), mark copy_*_user() as __always_inline.
      Without this, the checks for things like __builtin_constant_p() won't work
      consistently in either hardened usercopy or the recent adjustments for
      detecting usercopy overflows at compile time.
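
      The change itself is tiny; a simplified sketch of the idea:

      	/*
      	 * Forcing the wrapper inline keeps a constant 'n' visible at every
      	 * call site, so __builtin_constant_p()-based object-size checks fire
      	 * consistently.
      	 */
      	static __always_inline unsigned long __must_check
      	copy_from_user(void *to, const void __user *from, unsigned long n)
      	{
      		/* compile-time and runtime usercopy checks happen here ... */
      		return _copy_from_user(to, from, n);
      	}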
      
      The change in kernel text size is detectable, but very small:
      
       text      data     bss     dec      hex     filename
      12118735  5768608 14229504 32116847 1ea106f vmlinux.before
      12120207  5768608 14229504 32118319 1ea162f vmlinux.after
      Signed-off-by: Kees Cook <keescook@chromium.org>
  6. 06 Sep 2016 (1 commit)
  7. 05 Sep 2016 (4 commits)
  8. 03 Sep 2016 (2 commits)
    • x86/AMD: Apply erratum 665 on machines without a BIOS fix · d1992996
      Committed by Emanuel Czirai
      AMD F12h machines have an erratum which can cause DIV/IDIV to behave
      unpredictably. The workaround is to set MSRC001_1029[31] but sometimes
      there is no BIOS update containing that workaround so let's do it
      ourselves unconditionally. It is simple enough.
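
      A hedged sketch of that workaround (the macro names are made up here;
      MSRC001_1029 is MSR address 0xc0011029):

      	#define MSR_F12H_ERRATUM_665	0xc0011029
      	#define ERRATUM_665_FIX_BIT	BIT_ULL(31)

      	static void apply_erratum_665_sketch(void)
      	{
      		u64 val;

      		if (rdmsrl_safe(MSR_F12H_ERRATUM_665, &val))
      			return;

      		/* set MSRC001_1029[31] if the BIOS has not done it already */
      		if (!(val & ERRATUM_665_FIX_BIT))
      			wrmsrl_safe(MSR_F12H_ERRATUM_665, val | ERRATUM_665_FIX_BIT);
      	}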
      
      [ Borislav: Wrote commit message. ]
      Signed-off-by: Emanuel Czirai <icanrealizeum@gmail.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Yaowu Xu <yaowu@google.com>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20160902053550.18097-1-bp@alien8.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/paravirt: Do not trace _paravirt_ident_*() functions · 15301a57
      Committed by Steven Rostedt
      Łukasz Daniluk reported that on a RHEL kernel his machine would lock up
      after enabling the function tracer. I asked him to bisect the functions within
      available_filter_functions, which he did and it came down to three:
      
        _paravirt_nop(), _paravirt_ident_32() and _paravirt_ident_64()
      
      It was found that this is only an issue when noreplace-paravirt is added
      to the kernel command line.
      
      This means that those functions are most likely called within critical
      sections of the function tracer, and must not be traced.
      
      In newer kernels _paravirt_nop() is defined within gcc asm(), and is no
      longer an issue.  But both _paravirt_ident_{32,64}() cause the
      following splat when they are traced:
      
       mm/pgtable-generic.c:33: bad pmd ffff8800d2435150(0000000001d00054)
       mm/pgtable-generic.c:33: bad pmd ffff8800d3624190(0000000001d00070)
       mm/pgtable-generic.c:33: bad pmd ffff8800d36a5110(0000000001d00054)
       mm/pgtable-generic.c:33: bad pmd ffff880118eb1450(0000000001d00054)
       NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [systemd-journal:469]
       Modules linked in: e1000e
       CPU: 2 PID: 469 Comm: systemd-journal Not tainted 4.6.0-rc4-test+ #513
       Hardware name: Hewlett-Packard HP Compaq Pro 6300 SFF/339A, BIOS K01 v02.05 05/07/2012
       task: ffff880118f740c0 ti: ffff8800d4aec000 task.ti: ffff8800d4aec000
       RIP: 0010:[<ffffffff81134148>]  [<ffffffff81134148>] queued_spin_lock_slowpath+0x118/0x1a0
       RSP: 0018:ffff8800d4aefb90  EFLAGS: 00000246
       RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff88011eb16d40
       RDX: ffffffff82485760 RSI: 000000001f288820 RDI: ffffea0000008030
       RBP: ffff8800d4aefb90 R08: 00000000000c0000 R09: 0000000000000000
       R10: ffffffff821c8e0e R11: 0000000000000000 R12: ffff880000200fb8
       R13: 00007f7a4e3f7000 R14: ffffea000303f600 R15: ffff8800d4b562e0
       FS:  00007f7a4e3d7840(0000) GS:ffff88011eb00000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       CR2: 00007f7a4e3f7000 CR3: 00000000d3e71000 CR4: 00000000001406e0
       Call Trace:
         _raw_spin_lock+0x27/0x30
         handle_pte_fault+0x13db/0x16b0
         handle_mm_fault+0x312/0x670
         __do_page_fault+0x1b1/0x4e0
         do_page_fault+0x22/0x30
         page_fault+0x28/0x30
         __vfs_read+0x28/0xe0
         vfs_read+0x86/0x130
         SyS_read+0x46/0xa0
         entry_SYSCALL_64_fastpath+0x1e/0xa8
       Code: 12 48 c1 ea 0c 83 e8 01 83 e2 30 48 98 48 81 c2 40 6d 01 00 48 03 14 c5 80 6a 5d 82 48 89 0a 8b 41 08 85 c0 75 09 f3 90 8b 41 08 <85> c0 74 f7 4c 8b 09 4d 85 c9 74 08 41 0f 18 09 eb 02 f3 90 8b
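
      The fix keeps the function tracer out of these helpers; a sketch of the
      annotation (the identity helpers really are this small):

      	/*
      	 * notrace: no mcount/fentry hook is inserted, so these can safely be
      	 * used as paravirt ops even inside the tracer's own code paths.
      	 */
      	u32 notrace _paravirt_ident_32(u32 x)
      	{
      		return x;
      	}

      	u64 notrace _paravirt_ident_64(u64 x)
      	{
      		return x;
      	}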
      Reported-by: Łukasz Daniluk <lukasz.daniluk@intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 02 Sep 2016 (1 commit)
  10. 01 Sep 2016 (1 commit)
  11. 31 Aug 2016 (1 commit)
    • mm/usercopy: get rid of CONFIG_DEBUG_STRICT_USER_COPY_CHECKS · 0d025d27
      Committed by Josh Poimboeuf
      There are three usercopy warnings which are currently being silenced for
      gcc 4.6 and newer:
      
      1) "copy_from_user() buffer size is too small" compile warning/error
      
         This is a static warning which happens when object size and copy size
         are both const, and copy size > object size.  I didn't see any false
         positives for this one.  So the function warning attribute seems to
         be working fine here.
      
         Note this scenario is always a bug and so I think it should be
         changed to *always* be an error, regardless of
         CONFIG_DEBUG_STRICT_USER_COPY_CHECKS.
      
      2) "copy_from_user() buffer size is not provably correct" compile warning
      
         This is another static warning which happens when I enable
         __compiletime_object_size() for new compilers (and
         CONFIG_DEBUG_STRICT_USER_COPY_CHECKS).  It happens when object size
         is const, but copy size is *not*.  In this case there's no way to
         compare the two at build time, so it gives the warning.  (Note the
         warning is a byproduct of the fact that gcc has no way of knowing
         whether the overflow function will be called, so the call isn't dead
         code and the warning attribute is activated.)
      
         So this warning seems to only indicate "this is an unusual pattern,
         maybe you should check it out" rather than "this is a bug".
      
         I get 102(!) of these warnings with allyesconfig and the
         __compiletime_object_size() gcc check removed.  I don't know if there
         are any real bugs hiding in there, but from looking at a small
         sample, I didn't see any.  According to Kees, it does sometimes find
         real bugs.  But the false positive rate seems high.
      
      3) "Buffer overflow detected" runtime warning
      
         This is a runtime warning where object size is const, and copy size >
         object size.
      
      All three warnings (both static and runtime) were completely disabled
      for gcc 4.6 with the following commit:
      
        2fb0815c ("gcc4: disable __compiletime_object_size for GCC 4.6+")
      
      That commit mistakenly assumed that the false positives were caused by a
      gcc bug in __compiletime_object_size().  But in fact,
      __compiletime_object_size() seems to be working fine.  The false
      positives were instead triggered by #2 above.  (Though I don't have an
      explanation for why the warnings supposedly only started showing up in
      gcc 4.6.)
      
      So remove warning #2 to get rid of all the false positives, and re-enable
      warnings #1 and #3 by reverting the above commit.
      
      Furthermore, since #1 is a real bug which is detected at compile time,
      upgrade it to always be an error.
      
      Having done all that, CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is no longer
      needed.
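
      For illustration, the kind of call that case #1 now turns into a hard
      build error (both sizes are compile-time constants and the copy is
      larger than the destination):

      	static int read_req_sketch(const void __user *ubuf)
      	{
      		char req[16];

      		/* object size (16) and copy size (32) are both constant and
      		 * 32 > 16: always a bug, now always a build error */
      		if (copy_from_user(req, ubuf, 32))
      			return -EFAULT;
      		return 0;
      	}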
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: "H . Peter Anvin" <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 27 Aug 2016 (1 commit)
  13. 25 Aug 2016 (1 commit)
  14. 24 Aug 2016 (2 commits)
  15. 18 Aug 2016 (5 commits)
    • kvm: nVMX: fix nested tsc scaling · c95ba92a
      Committed by Peter Feiner
      When the host supports TSC scaling, L2 would use a TSC multiplier of
      0, which causes a VM entry failure. Now L2's TSC uses the same
      multiplier as L1.
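
      A hedged sketch of the fix (the symbol names here are assumptions):

      	/* when preparing vmcs02 for L2, inherit L1's scaling ratio instead
      	 * of leaving the VMCS field at 0 */
      	if (kvm_has_tsc_control)
      		vmcs_write64(TSC_MULTIPLIER, vcpu->arch.tsc_scaling_ratio);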
      Signed-off-by: Peter Feiner <pfeiner@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nVMX: postpone VMCS changes on MSR_IA32_APICBASE write · dccbfcf5
      Committed by Radim Krčmář
      If vmcs12 does not intercept APIC_BASE writes, then KVM will handle the
      write with vmcs02 as the current VMCS.
      This will incorrectly apply modifications intended for vmcs01 to vmcs02
      and L2 can use it to gain access to L0's x2APIC registers by disabling
      virtualized x2APIC while using an msr bitmap that assumes it is enabled.
      
      Postpone execution of vmx_set_virtual_x2apic_mode until vmcs01 is the
      current VMCS.  An alternative solution would be to temporarily make vmcs01
      the current VMCS, but it requires more care.
      
      Fixes: 8d14695f ("x86, apicv: add virtual x2apic support")
      Reported-by: Jim Mattson <jmattson@google.com>
      Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
    • KVM: nVMX: fix msr bitmaps to prevent L2 from accessing L0 x2APIC · d048c098
      Committed by Radim Krčmář
      msr bitmap can be used to avoid a VM exit (interception) on guest MSR
      accesses.  In some configurations of VMX controls, the guest can even
      directly access host's x2APIC MSRs.  See SDM 29.5 VIRTUALIZING MSR-BASED
      APIC ACCESSES.
      
      L2 could read all L0's x2APIC MSRs and write TPR, EOI, and SELF_IPI.
      To do so, L1 would first trick KVM into disabling all possible interceptions
      by enabling APICv features and then turn those features off;
      nested_vmx_merge_msr_bitmap() only disabled interceptions, so VMX would
      not intercept previously enabled MSRs even though they were not safe
      with the new configuration.
      
      Correctly re-enabling interceptions is not enough as a second bug would
      still allow L1+L2 to access host's MSRs: msr bitmap was shared for all
      VMCSs, so L1 could trigger a race to get the desired combination of msr
      bitmap and VMX controls.
      
      This fix allocates a msr bitmap for every L1 VCPU, allows only safe
      x2APIC MSRs from L1's msr bitmap, and disables msr bitmaps if they would
      have to intercept everything anyway.
      
      Fixes: 3af18d9c ("KVM: nVMX: Prepare for using hardware MSR bitmap")
      Reported-by: Jim Mattson <jmattson@google.com>
      Suggested-by: Wincy Van <fanwenyi0529@gmail.com>
      Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
    • x86/smp: Fix __max_logical_packages value setup · 7b0501b1
      Committed by Jiri Olsa
      Frank reported a kernel panic when he disabled several cores in the BIOS
      via the following option:
      
        Core Disable Bitmap(Hex)   [0]
      
      with the value 0xFFE, which leaves 16 CPUs in the system (out of 48).
      
      The kernel panic below goes along with the following messages:
      
       smpboot: Max logical packages: 2^M
       smpboot: APIC(0) Converting physical 0 to logical package 0^M
       smpboot: APIC(20) Converting physical 1 to logical package 1^M
       smpboot: APIC(40) Package 2 exceeds logical package map^M
       smpboot: CPU 8 APICId 40 disabled^M
       smpboot: APIC(60) Package 3 exceeds logical package map^M
       smpboot: CPU 12 APICId 60 disabled^M
       ...
       general protection fault: 0000 [#1] SMP^M
       Modules linked in:^M
       CPU: 15 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc5+ #1^M
       Hardware name: SGI UV300/UV300, BIOS SGI UV 300 series BIOS 05/25/2016^M
       task: ffff8801673e0000 ti: ffff8801673ac000 task.ti: ffff8801673ac000^M
       RIP: 0010:[<ffffffff81014d54>]  [<ffffffff81014d54>] uncore_change_context+0xd4/0x180^M
       ...
        [<ffffffff810158ac>] uncore_event_init_cpu+0x6c/0x70^M
        [<ffffffff81d8c91c>] intel_uncore_init+0x1c2/0x2dd^M
        [<ffffffff81d8c75a>] ? uncore_cpu_setup+0x17/0x17^M
        [<ffffffff81002190>] do_one_initcall+0x50/0x190^M
        [<ffffffff810ab193>] ? parse_args+0x293/0x480^M
        [<ffffffff81d87365>] kernel_init_freeable+0x1a5/0x249^M
        [<ffffffff81d86a35>] ? set_debug_rodata+0x12/0x12^M
        [<ffffffff816dc19e>] kernel_init+0xe/0x110^M
        [<ffffffff816e93bf>] ret_from_fork+0x1f/0x40^M
        [<ffffffff816dc190>] ? rest_init+0x80/0x80^M
      
      The reason for the panic is a wrong value of __max_logical_packages,
      which leaves logical_package_map uninitialized while the uncore code
      relies on this map being properly initialized (maybe we should
      add some safety checks there as well).
      
      The __max_logical_packages is computed as:
      
        DIV_ROUND_UP(total_cpus, ncpus);
        - ncpus being the number of cores
      
      With the above BIOS setup we get total_cpus == 16, which sets
      __max_logical_packages to 2 (ncpus is 12).
      
      Once topology_update_package_map processes a CPU with a logical
      pkg of 2 or above, we display the above messages and fail to initialize
      the physical_to_logical_pkg map, which makes the uncore code
      crash.
      
      The fix is to remove logical_package_map bitmap completely
      and keep and update the logical_packages number instead.
      
      After we enumerate all the present CPUs, we check if the
      enumerated logical packages count is within its computed
      maximum from BIOS data.
      
      If it's not the case, we set this maximum to the new enumerated
      value and freeze any new addition of logical packages.
      
      The freeze is because a lot of init code like uncore/rapl/cqm
      depends on having the maximum logical package value set to allocate
      its data, so we can't change it later on.
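
      A hedged sketch of the end-of-enumeration step described above (variable
      names are approximations):

      	static unsigned int logical_packages __read_mostly;
      	static bool logical_packages_frozen __read_mostly;

      	static void finalize_logical_packages_sketch(void)
      	{
      		/*
      		 * Trust the number of packages actually enumerated over the
      		 * BIOS-derived estimate, then freeze it: uncore/rapl/cqm size
      		 * their data from this value.
      		 */
      		if (logical_packages > __max_logical_packages) {
      			pr_warn("Detected %u packages, computed maximum was %u\n",
      				logical_packages, __max_logical_packages);
      			__max_logical_packages = logical_packages;
      		}
      		logical_packages_frozen = true;
      	}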
      
      Prarit Bhargava tested the patch and confirms that it solves
      the problem:
      
        From dmidecode:
                Core Count: 24
                Core Enabled: 24
                Thread Count: 48
      
      Orig kernel boot log:
      
       [    0.464981] smpboot: Max logical packages: 19
       [    0.469861] smpboot: APIC(0) Converting physical 0 to logical package 0
       [    0.477261] smpboot: APIC(40) Converting physical 1 to logical package 1
       [    0.484760] smpboot: APIC(80) Converting physical 2 to logical package 2
       [    0.492258] smpboot: APIC(c0) Converting physical 3 to logical package 3
      
      1.  nr_cpus=8, should stop enumerating in package 0:
      
       [    0.533664] smpboot: APIC(0) Converting physical 0 to logical package 0
       [    0.539596] smpboot: Max logical packages: 19
      
      2.  max_cpus=8, should still enumerate all packages:
      
       [    0.526494] smpboot: APIC(0) Converting physical 0 to logical package 0
       [    0.532428] smpboot: APIC(40) Converting physical 1 to logical package 1
       [    0.538456] smpboot: APIC(80) Converting physical 2 to logical package 2
       [    0.544486] smpboot: APIC(c0) Converting physical 3 to logical package 3
       [    0.550524] smpboot: Max logical packages: 19
      
      3.  nr_cpus=49 ( 2 socket + 1 core on 3rd socket), should stop enumerating in
          package 2:
      
       [    0.521378] smpboot: APIC(0) Converting physical 0 to logical package 0
       [    0.527314] smpboot: APIC(40) Converting physical 1 to logical package 1
       [    0.533345] smpboot: APIC(80) Converting physical 2 to logical package 2
       [    0.539368] smpboot: Max logical packages: 19
      
      4.  maxcpus=49, should still enumerate all packages:
      
       [    0.525591] smpboot: APIC(0) Converting physical 0 to logical package 0
       [    0.531525] smpboot: APIC(40) Converting physical 1 to logical package 1
       [    0.537547] smpboot: APIC(80) Converting physical 2 to logical package 2
       [    0.543579] smpboot: APIC(c0) Converting physical 3 to logical package 3
       [    0.549624] smpboot: Max logical packages: 19
      
      5.  kdump (nr_cpus=1) works as well.
      Reported-by: Frank Ramsay <framsay@redhat.com>
      Tested-by: Prarit Bhargava <prarit@redhat.com>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Reviewed-by: Prarit Bhargava <prarit@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20160815101700.GA30090@krava
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/microcode/AMD: Fix initrd loading with CONFIG_RANDOMIZE_MEMORY=y · 88b2f634
      Committed by Borislav Petkov
      Similar to:
      
        efaad554 ("x86/microcode/intel: Fix initrd loading with CONFIG_RANDOMIZE_MEMORY=y")
      
      ... fix microcode loading from the initrd on AMD by adding the
      randomization offset to the microcode patch container within the initrd.
      Reported-and-tested-by: Brian Gerst <brgerst@gmail.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-tip-commits@vger.kernel.org
      Link: http://lkml.kernel.org/r/20160817113314.GA19221@nazgul.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  16. 16 Aug 2016 (3 commits)
  17. 12 Aug 2016 (1 commit)
    • perf/x86/intel/uncore: Add enable_box for client MSR uncore · 95f3be79
      Committed by Kan Liang
      There are bug reports about miscounting uncore counters on some
      client machines like Sandybridge, Broadwell and Skylake. It is
      very likely to be observed on idle systems.
      
      This is caused by a hardware issue: PERF_GLOBAL_CTL could be
      cleared after Package C7, and nothing will be counted.
      The related errata (HSD 158) could be found in:
      
        www.intel.com/content/dam/www/public/us/en/documents/specification-updates/4th-gen-core-family-desktop-specification-update.pdf
      
      This patch tries to work around this issue by re-enabling PERF_GLOBAL_CTL
      in ->enable_box(). The workaround does not cover all cases. It helps for new
      events after returning from C7. But it cannot prevent C7, so it will still
      miscount if a counter is already active.
      
      There is no drawback in leaving it enabled, so it does not need
      disable_box() here.
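
      A hedged sketch of the workaround (the register and bit names are
      assumptions modeled on the client uncore driver):

      	#define SNB_UNC_PERF_GLOBAL_CTL		0x391
      	#define SNB_UNC_GLOBAL_CTL_EN		(1ULL << 29)
      	#define SNB_UNC_GLOBAL_CTL_CORE_ALL	((1ULL << 4) - 1)

      	/*
      	 * Re-arm the global enable every time a box is enabled: a Package C7
      	 * entry may have cleared it behind our back.  Leaving it set has no
      	 * drawback, so no matching disable_box() is needed.
      	 */
      	static void snb_uncore_msr_enable_box_sketch(void)
      	{
      		wrmsrl(SNB_UNC_PERF_GLOBAL_CTL,
      		       SNB_UNC_GLOBAL_CTL_EN | SNB_UNC_GLOBAL_CTL_CORE_ALL);
      	}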
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Cc: <stable@vger.kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/1470925874-59943-1-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>