1. 16 September 2016: 10 commits
  2. 15 September 2016: 9 commits
  3. 14 September 2016: 2 commits
  4. 10 September 2016: 6 commits
    • perf/x86/intel: Fix PEBSv3 record drain · 8ef9b845
      Committed by Peter Zijlstra
      Alexander hit the WARN_ON_ONCE(!event) on his Skylake while running
      the perf fuzzer.
      
      This means the PEBSv3 record included a status bit for an inactive
      event, something that _should_ not happen.
      
      Move the code that filters the status bits against our known PEBS
      events up a spot to guarantee we only deal with events we know about.
      
      Further add "continue" statements to the WARN_ON_ONCE()s such that
      we'll not die nor generate silly events in case we ever do hit them
      again.
      Reported-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Tested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vince@deater.net>
      Cc: stable@vger.kernel.org
      Fixes: a3d86542 ("perf/x86/intel/pebs: Add PEBSv3 decoding")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
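      A minimal sketch of the pattern described above, using hypothetical
      names (get_status(), known_pebs_mask, record_size, pebs_events[],
      process_record()); it is not the actual intel_ds.c drain code:

          /* Filter the hardware status bits against the PEBS events we
           * know about *before* resolving them to perf events ... */
          for (at = base; at < top; at += record_size) {
              u64 pebs_status = get_status(at) & known_pebs_mask;
              int bit;

              for_each_set_bit(bit, (unsigned long *)&pebs_status,
                               MAX_PEBS_EVENTS) {
                  struct perf_event *event = pebs_events[bit];

                  /* ... and skip, rather than die or emit a bogus
                   * sample, if a lookup still comes up empty. */
                  if (WARN_ON_ONCE(!event))
                      continue;

                  process_record(event, at);
              }
          }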
    • perf/x86/intel/bts: Kill a silly warning · ef9ef3be
      Committed by Alexander Shishkin
      At the moment, intel_bts will WARN() out if there is more than one
      event writing to the same ring buffer, via SET_OUTPUT, and will only
      send data from one event to a buffer.
      
      There is no reason to have this warning in, so kill it.
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20160906132353.19887-6-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/bts: Fix BTS PMI detection · 4d4c4741
      Committed by Alexander Shishkin
      Since BTS doesn't have a dedicated PMI status bit, the driver needs to
      take extra care to check for the condition that triggers it to avoid
      spurious NMI warnings.
      
      Regardless of the local BTS context state, the only way of knowing that
      the NMI is ours is to compare the write pointer against the interrupt
      threshold.
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20160906132353.19887-5-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
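      A minimal sketch of the check described above, assuming a hypothetical
      buffer layout (the real driver keeps the equivalent state in its
      per-CPU BTS context):

          struct sketch_bts_buf {
              unsigned long head;            /* current write pointer */
              unsigned long intr_threshold;  /* programmed PMI threshold */
          };

          /* Regardless of the local context state, the only evidence that
           * the NMI is ours is the write pointer having crossed the
           * interrupt threshold. */
          static bool bts_pmi_is_ours(const struct sketch_bts_buf *buf)
          {
              return buf && buf->head >= buf->intr_threshold;
          }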
    • perf/x86/intel/bts: Fix confused ordering of PMU callbacks · a9a94401
      Committed by Alexander Shishkin
      The intel_bts driver is using a CPU-local 'started' variable to order
      callbacks and PMIs and make sure that AUX transactions don't get messed
      up. However, the ordering rules regarding this variable are a complete
      mess, which recently resulted in perf_fuzzer-triggered warnings and
      panics.
      
      The general ordering rule that this patch enforces is that this
      cpu-local variable be set only when the cpu-local AUX transaction is
      active; consequently, this variable is to be checked before the AUX
      related bits can be touched.
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20160906132353.19887-4-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
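      A minimal sketch of the ordering rule being enforced, with a
      hypothetical per-CPU flag and helpers (begin_aux_transaction() and
      friends); the real driver keeps this state in its own per-CPU context:

          static DEFINE_PER_CPU(bool, bts_active);       /* hypothetical flag */

          static void bts_start_sketch(void)
          {
              begin_aux_transaction();                   /* hypothetical helper */
              this_cpu_write(bts_active, true);          /* publish only once active */
          }

          static void bts_stop_sketch(void)
          {
              this_cpu_write(bts_active, false);         /* retract first */
              end_aux_transaction();                     /* hypothetical helper */
          }

          static void bts_pmi_sketch(void)
          {
              if (!this_cpu_read(bts_active))            /* check before touching AUX */
                  return;
              drain_aux_data();                          /* hypothetical helper */
          }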
    • mm: fix cache mode of dax pmd mappings · 9049771f
      Committed by Dan Williams
      track_pfn_insert() in vmf_insert_pfn_pmd() is marking dax mappings as
      uncacheable rendering them impractical for application usage.  DAX-pte
      mappings are cached and the goal of establishing DAX-pmd mappings is to
      attain more performance, not dramatically less (3 orders of magnitude).
      
      track_pfn_insert() relies on a previous call to reserve_memtype() to
      establish the expected page_cache_mode for the range.  While memremap()
      arranges for reserve_memtype() to be called, devm_memremap_pages() does
      not.  So, teach track_pfn_insert() and untrack_pfn() how to handle
      tracking without a vma, and arrange for devm_memremap_pages() to
      establish the write-back-cache reservation in the memtype tree.
      
      Cc: <stable@vger.kernel.org>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Nilesh Choudhury <nilesh.choudhury@oracle.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reported-by: Toshi Kani <toshi.kani@hpe.com>
      Reported-by: Kai Zhang <kai.ka.zhang@oracle.com>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • perf/x86/amd/uncore: Prevent use after free · 7d762e49
      Committed by Sebastian Andrzej Siewior
      The recent conversion of the cpu hotplug support in the uncore driver
      introduced a regression due to the way the callbacks are invoked at
      initialization time.
      
      The old code called the prepare/starting/online function on each online cpu
      as a block. The new code registers the hotplug callbacks in the core for
      each state. The core invokes the callbacks at each registration on all
      online cpus.
      
      The code implicitly relied on the prepare/starting/online callbacks being
      called as a combo on a particular cpu, which was not obvious and completely
      undocumented.
      
      The resulting subtle wreckage happens due to the way the uncore code
      manages shared data structures for cpus which share an uncore resource in
      hardware. The sharing is determined in the cpu starting callback, but the
      prepare callback allocates per cpu data for the upcoming cpu because
      potential sharing is unknown at this point. If the starting callback finds
      an online cpu which shares the hardware resource, it takes a refcount on the
      percpu data of that cpu and puts the own data structure into a
      'free_at_online' pointer of that shared data structure. The online callback
      frees that.
      
      With the old model this worked because in a starting callback only one
      unused structure (the one of the starting cpu) was available. The new code
      allocates the data structures for all cpus when the prepare callback is
      registered.
      
      Now the starting function iterates through all online cpus and looks for a
      data structure (skipping its own) which has a matching hardware id. The id
      member of the data structure is initialized to 0, but the hardware id can
      be 0 as well. The resulting wreckage is:
      
        CPU0 finds a matching id on CPU1, takes a refcount on CPU1 data and puts
        its own data structure into CPU1s data structure to be freed.
      
        CPU1 skips CPU0 because the data structure is its allegedly unused own.
        It finds a matching id on CPU2, takes a refcount on CPU2 data and puts
        its own data structure into CPU2s data structure to be freed.
      
        ....
      
      Now the online callbacks are invoked.
      
        CPU0 has a pointer to CPU1s data and frees the original CPU0 data. So
        far so good.
      
        CPU1 has a pointer to CPU2s data and frees the original CPU1 data, which
        is still referenced by CPU0 ---> Booom
      
      So there are two issues to be solved here:
      
      1) The id field must be initialized at allocation time to a value which
         cannot be a valid hardware id, i.e. -1
      
         This prevents the above scenario, but now CPU1 and CPU2 both stick their
         own data structure into the free_at_online pointer of CPU0. So we leak
         CPU1s data structure.
      
      2) Fix the memory leak described in #1
      
         Instead of having a single pointer, use a hlist to enqueue the
         superfluous data structures which are then freed by the first cpu
         invoking the online callback.
      
      Ideally we should know the sharing _before_ invoking the prepare callback,
      but that's way beyond the scope of this bug fix.
      
      [ tglx: Rewrote changelog ]
      
      Fixes: 96b2bd38 ("perf/x86/amd/uncore: Convert to hotplug state machine")
      Reported-and-tested-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Borislav Petkov <bp@suse.de>
      Link: http://lkml.kernel.org/r/20160909160822.lowgmkdwms2dheyv@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
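      A minimal sketch of the two fixes, with hypothetical structure and
      function names (not the actual amd_uncore code):

          struct uncore_sketch {
              int id;                     /* hardware sharing id */
              int refcnt;
              struct hlist_node node;     /* queued for freeing at online time */
          };

          /* 1) Initialize the id to a value that can never be a valid
           *    hardware id, so the starting callback cannot match a
           *    structure that has not been through 'starting' yet. */
          static struct uncore_sketch *uncore_prepare_sketch(void)
          {
              struct uncore_sketch *u = kzalloc(sizeof(*u), GFP_KERNEL);

              if (u)
                  u->id = -1;
              return u;
          }

          /* 2) On a match, park the superfluous structure on the peer's
           *    hlist instead of overwriting a single pointer ... */
          static void uncore_starting_sketch(struct uncore_sketch *mine,
                                             struct uncore_sketch *peer,
                                             struct hlist_head *free_at_online)
          {
              peer->refcnt++;
              hlist_add_head(&mine->node, free_at_online);
          }

          /* ... and let the first cpu reaching the online callback drain
           * the whole list. */
          static void uncore_online_sketch(struct hlist_head *free_at_online)
          {
              struct uncore_sketch *u;
              struct hlist_node *tmp;

              hlist_for_each_entry_safe(u, tmp, free_at_online, node) {
                  hlist_del(&u->node);
                  kfree(u);
              }
          }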
  5. 08 September 2016: 9 commits
    • x86, clock: Fix kvm guest tsc initialization · a4497a86
      Committed by Prarit Bhargava
      When booting a kvm guest on AMD with the latest kernel the following
      messages are displayed in the boot log:
      
       tsc: Unable to calibrate against PIT
       tsc: HPET/PMTIMER calibration failed
      
      aa297292 ("x86/tsc: Enumerate SKL cpu_khz and tsc_khz via CPUID")
      introduced a change to account for a difference in cpu and tsc frequencies for
      Intel SKL processors. Before this change the native TSC code set
      x86_platform.calibrate_tsc to native_calibrate_tsc() which is a hardware
      calibration of the tsc, and in tsc_init() executed
      
      	tsc_khz = x86_platform.calibrate_tsc();
      	cpu_khz = tsc_khz;
      
      The kvm code changed x86_platform.calibrate_tsc to kvm_get_tsc_khz() and
      executed the same tsc_init() function.  This meant that KVM guests did not
      execute the native hardware calibration function.
      
      After aa297292, there are separate native calibrations for cpu_khz and
      tsc_khz.  The code sets x86_platform.calibrate_tsc to native_calibrate_tsc()
      which is now an Intel specific calibration function, and
      x86_platform.calibrate_cpu to native_calibrate_cpu() which is the "old"
      native_calibrate_tsc() function (ie, the native hardware calibration
      function).
      
      tsc_init() now does
      
      	cpu_khz = x86_platform.calibrate_cpu();
      	tsc_khz = x86_platform.calibrate_tsc();
      	if (tsc_khz == 0)
      		tsc_khz = cpu_khz;
      	else if (abs(cpu_khz - tsc_khz) * 10 > tsc_khz)
      		cpu_khz = tsc_khz;
      
      The kvm code should not call the hardware initialization in
      native_calibrate_cpu(), as it isn't applicable for kvm and it didn't do that
      prior to aa297292.
      
      This patch resolves this issue by setting x86_platform.calibrate_cpu to
      kvm_get_tsc_khz().
      
      v2: I had originally set x86_platform.calibrate_cpu to
      cpu_khz_from_cpuid(), however, pbonzini pointed out that the CPUID leaf
      in that function is not available in KVM.  I have changed the function
      pointer to kvm_get_tsc_khz().
      
      Fixes: aa297292 ("x86/tsc: Enumerate SKL cpu_khz and tsc_khz via CPUID")
      Signed-off-by: Prarit Bhargava <prarit@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Cc: Len Brown <len.brown@intel.com>
      Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: "Christopher S. Hall" <christopher.s.hall@intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
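      A sketch of the fix as described, in the shape of the kvmclock setup
      code (the function name here is illustrative):

          static void __init kvmclock_calibration_sketch(void)
          {
              /* Both hooks return the paravirt value, so the guest never
               * runs the native PIT/HPET/PMTIMER hardware calibration. */
              x86_platform.calibrate_tsc = kvm_get_tsc_khz;
              x86_platform.calibrate_cpu = kvm_get_tsc_khz;   /* the added assignment */
          }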
    • x86/dumpstack: Remove unnecessary stack pointer arguments · 5a8ff54c
      Committed by Josh Poimboeuf
      When calling show_stack_log_lvl() or dump_trace() with a regs argument,
      providing a stack pointer or frame pointer is redundant.
      
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1694e2e955e3b9a73a3c3d5ba2634344014dd550.1472057064.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/dumpstack: Add get_stack_pointer() and get_frame_pointer() · 4b8afafb
      Committed by Josh Poimboeuf
      The various functions involved in dumping the stack all do similar
      things with regard to getting the stack pointer and the frame pointer
      based on the regs and task arguments.  Create helper functions to
      do that instead.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/f448914885a35f333fe04da1b97a6c2cc1f80974.1472057064.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
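      A simplified sketch of one of the helpers described above;
      get_frame_pointer() follows the same pattern using regs->bp, and the
      real helpers also handle the task == current case:

          static unsigned long *get_stack_pointer_sketch(struct task_struct *task,
                                                         struct pt_regs *regs)
          {
              if (regs)
                  return (unsigned long *)regs->sp;

              /* Sleeping task: use its saved kernel stack pointer. */
              return (unsigned long *)task->thread.sp;
          }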
    • x86/dumpstack: Make printk_stack_address() more generally useful · d438f5fd
      Committed by Josh Poimboeuf
      Change printk_stack_address() to be useful when called by an unwinder
      outside the context of dump_trace().
      
      Specifically:
      
      - printk_stack_address()'s 'data' argument is always used as the log
        level string.  Make that explicit.
      
      - Call touch_nmi_watchdog().
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/9fbe0db05bacf66d337c162edbf61450d0cff1e2.1472057064.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
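      A sketch of the reworked helper as described: 'data' becomes an
      explicit log-level argument and the NMI watchdog is touched on every
      printed line:

          static void printk_stack_address(unsigned long address, int reliable,
                                           char *log_lvl)
          {
              touch_nmi_watchdog();
              printk("%s [<%p>] %s%pB\n",
                     log_lvl, (void *)address,
                     reliable ? "" : "? ", (void *)address);
          }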
    • oprofile/x86: Add regs->ip to oprofile trace · 3e344a0d
      Committed by Josh Poimboeuf
      dump_trace() doesn't add the interrupted instruction's address to the
      trace, so add it manually.  This makes the profile more useful, and also
      makes it more consistent with what perf profiling does.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Robert Richter <rric@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/6c745a83dbd69fc6857ef9b2f8be0f011d775936.1472057064.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
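      A simplified sketch of the kernel-side backtrace path with the
      described addition (the dump_trace() arguments are abbreviated here):

          if (!user_mode(regs)) {
              /* dump_trace() does not record the interrupted instruction
               * itself, so add regs->ip to the trace first. */
              oprofile_add_trace(regs->ip);
              dump_trace(NULL, regs, NULL, 0, &backtrace_ops, &depth);
          }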
    • perf/x86: Check perf_callchain_store() error · 019e579d
      Committed by Josh Poimboeuf
      Add a check to perf_callchain_kernel() so that it returns early if the
      callchain entry array is already full.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/dce6d60bab08be2600efd90021d9b85620646161.1472057064.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
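      The check described above amounts to an early return in
      perf_callchain_kernel(), roughly:

          if (perf_callchain_store(entry, regs->ip))
              return;     /* entry array already full, stop walking */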
    • x86/mm: Improve stack-overflow #PF handling · 6271cfdf
      Committed by Andy Lutomirski
      If we get a page fault indicating kernel stack overflow, invoke
      handle_stack_overflow().  To prevent us from overflowing the stack
      again while handling the overflow (because we are likely to have
      very little stack space left), call handle_stack_overflow() on the
      double-fault stack.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/6d6cf96b3fb9b4c9aa303817e1dc4de0c7c36487.1472603235.git.luto@kernel.org
      [ Minor edit. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/apic: Fix num_processors value in case of failure · c291b015
      Committed by Dou Liyang
      If the topology package map check of the APIC ID and the CPU is a failure,
      we don't generate the processor info for that APIC ID yet we increase
      disabled_cpus by one - which is buggy.
      
      Only increase num_processors once we are sure we don't fail.
      Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1473214893-16481-1-git-send-email-douly.fnst@cn.fujitsu.com
      [ Rewrote the changelog. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
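      A sketch of the reordering described above (surrounding code
      simplified, error-path details assumed):

          if (topology_update_package_map(apicid, cpu) < 0) {
              /* Failure: account the CPU as disabled, touch nothing else. */
              disabled_cpus++;
              return -ENOSPC;
          }

          /* Only counted once nothing can fail anymore. */
          num_processors++;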
    • um/ptrace: Fix the syscall number update after a ptrace · ce29856a
      Committed by Mickaël Salaün
      Update the syscall number after each PTRACE_SETREGS on ORIG_*AX.
      
      This is needed to get the potentially altered syscall number in the
      seccomp filters after RET_TRACE.
      
      This fixes four seccomp_bpf tests:
      > [ RUN      ] TRACE_syscall.skip_after_RET_TRACE
      > seccomp_bpf.c:1560:TRACE_syscall.skip_after_RET_TRACE:Expected -1 (18446744073709551615) == syscall(39) (26)
      > seccomp_bpf.c:1561:TRACE_syscall.skip_after_RET_TRACE:Expected 1 (1) == (*__errno_location ()) (22)
      > [     FAIL ] TRACE_syscall.skip_after_RET_TRACE
      > [ RUN      ] TRACE_syscall.kill_after_RET_TRACE
      > TRACE_syscall.kill_after_RET_TRACE: Test exited normally instead of by signal (code: 1)
      > [     FAIL ] TRACE_syscall.kill_after_RET_TRACE
      > [ RUN      ] TRACE_syscall.skip_after_ptrace
      > seccomp_bpf.c:1622:TRACE_syscall.skip_after_ptrace:Expected -1 (18446744073709551615) == syscall(39) (26)
      > seccomp_bpf.c:1623:TRACE_syscall.skip_after_ptrace:Expected 1 (1) == (*__errno_location ()) (22)
      > [     FAIL ] TRACE_syscall.skip_after_ptrace
      > [ RUN      ] TRACE_syscall.kill_after_ptrace
      > TRACE_syscall.kill_after_ptrace: Test exited normally instead of by signal (code: 1)
      > [     FAIL ] TRACE_syscall.kill_after_ptrace
      
      Fixes: 26703c63 ("um/ptrace: run seccomp after ptrace")
      Signed-off-by: Mickaël Salaün <mic@digikod.net>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: James Morris <jmorris@namei.org>
      Cc: user-mode-linux-devel@lists.sourceforge.net
      Signed-off-by: James Morris <james.l.morris@oracle.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
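      For reference, the user-space pattern the fix has to cope with is a
      tracer rewriting the syscall number via PTRACE_SETREGS, roughly
      (x86-64 illustration, not the UML kernel change itself):

          #include <sys/types.h>
          #include <sys/ptrace.h>
          #include <sys/user.h>

          static void skip_syscall(pid_t pid)
          {
              struct user_regs_struct regs;

              ptrace(PTRACE_GETREGS, pid, 0, &regs);
              regs.orig_rax = -1;     /* alter the syscall number ... */
              ptrace(PTRACE_SETREGS, pid, 0, &regs);
              /* ... which the seccomp RET_TRACE path must now observe. */
          }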
  6. 07 September 2016: 1 commit
    • x86/uaccess: force copy_*_user() to be inlined · e6971009
      Committed by Kees Cook
      As already done with __copy_*_user(), mark copy_*_user() as __always_inline.
      Without this, the checks for things like __builtin_constant_p() won't work
      consistently in either hardened usercopy or the recent adjustments for
      detecting usercopy overflows at compile time.
      
      The change in kernel text size is detectable, but very small:
      
       text      data     bss     dec      hex     filename
      12118735  5768608 14229504 32116847 1ea106f vmlinux.before
      12120207  5768608 14229504 32118319 1ea162f vmlinux.after
      Signed-off-by: Kees Cook <keescook@chromium.org>
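      A sketch of the effect, with a simplified body and a hypothetical raw
      helper; the point is that __always_inline keeps the compile-time size
      checks resolvable at every call site:

          /* Before the change the attribute was plain 'inline'. */
          static __always_inline unsigned long
          copy_from_user_sketch(void *to, const void __user *from, unsigned long n)
          {
              check_object_size(to, n, false);      /* hardened usercopy check;
                                                     * __builtin_constant_p(n) in
                                                     * such checks now folds at
                                                     * every call site */
              return raw_copy_sketch(to, from, n);  /* hypothetical helper */
          }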
  7. 06 September 2016: 1 commit
  8. 05 September 2016: 2 commits
    • KVM: lapic: adjust preemption timer correctly when goes TSC backward · e12c8f36
      Committed by Wanpeng Li
      TSC_OFFSET will be adjusted if the TSC is found to have gone backward during
      vCPU load.  The preemption timer, which relies on the guest TSC to reprogram
      its value, is also reprogrammed if the vCPU is scheduled in to a different
      pCPU.  However, the current implementation reprograms the preemption timer
      before TSC_OFFSET is adjusted to the right value, resulting in the
      preemption timer firing prematurely.
      
      This patch fixes it by adjusting TSC_OFFSET before reprogramming the
      preemption timer when the TSC has gone backward.
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krċmář <rkrcmar@redhat.com>
      Cc: Yunhong Jiang <yunhong.jiang@intel.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • x86/efi: Use efi_exit_boot_services() · d6493401
      Committed by Jeffrey Hugo
      The eboot code directly calls ExitBootServices.  This is inadvisable as the
      UEFI spec details a complex set of errors, race conditions, and API
      interactions that the caller of ExitBootServices must get correct.  The
      eboot code attempts allocations after calling ExitBootServices, which is
      not permitted per the spec.  Call the efi_exit_boot_services() helper
      instead, which handles the allocation scenario properly.
      Signed-off-by: Jeffrey Hugo <jhugo@codeaurora.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>