1. 07 Jun, 2015 1 commit
  2. 19 Apr, 2015 1 commit
  3. 14 Apr, 2015 1 commit
    • x86 msr-index: define MSR_TURBO_RATIO_LIMIT,1,2 · c4d30668
      Len Brown authored
      MSR_TURBO_RATIO_LIMIT has grown into a set of three registers.
      Add the documented names for them, in preparation
      for deleting the previous ad-hoc names:
      
      +#define MSR_TURBO_RATIO_LIMIT          0x000001ad
      +#define MSR_TURBO_RATIO_LIMIT1         0x000001ae
      +#define MSR_TURBO_RATIO_LIMIT2         0x000001af
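      For illustration, a minimal user-space sketch of reading the first of
      these registers through the msr driver (assumes "modprobe msr" and
      root; the per-byte ratio layout, in units of 100 MHz, follows the
      documented format for these parts):

          #include <fcntl.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <unistd.h>

          #define MSR_TURBO_RATIO_LIMIT 0x000001ad

          int main(void)
          {
                  uint64_t val;
                  int fd = open("/dev/cpu/0/msr", O_RDONLY);

                  /* The msr driver maps pread() offsets to MSR indices. */
                  if (fd < 0 ||
                      pread(fd, &val, sizeof(val), MSR_TURBO_RATIO_LIMIT) != sizeof(val)) {
                          perror("msr");
                          return 1;
                  }
                  /* Byte N holds the max turbo ratio with N+1 active cores. */
                  printf("1-core max turbo ratio: %u\n", (unsigned)(val & 0xff));
                  close(fd);
                  return 0;
          }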
      Signed-off-by: Len Brown <len.brown@intel.com>
      Cc: x86@kernel.org
  4. 03 Apr, 2015 1 commit
    • x86/mm/KASLR: Propagate KASLR status to kernel proper · 78cac48c
      Borislav Petkov authored
      Commit:
      
        e2b32e67 ("x86, kaslr: randomize module base load address")
      
      made module base address randomization unconditional and ignored the
      cases where KASLR is disabled due to CONFIG_HIBERNATION or the
      "nokaslr" command line option. For more info see (now reverted) commit:
      
        f47233c2 ("x86/mm/ASLR: Propagate base load address calculation")
      
      In order to propagate KASLR status to kernel proper, we need a single bit
      in boot_params.hdr.loadflags and we've chosen bit 1 thus leaving the
      top-down allocated bits for bits supposed to be used by the bootloader.
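      A sketch of what the kernel proper side can then do with that bit;
      the KASLR_FLAG name and the mock structures are illustrative, and
      only "bit 1 of boot_params.hdr.loadflags" comes from this patch:

          #include <stdbool.h>
          #include <stdint.h>

          #define KASLR_FLAG (1 << 1)              /* bit 1 of loadflags */

          struct setup_header { uint8_t loadflags; };
          struct boot_params  { struct setup_header hdr; };

          static struct boot_params boot_params;   /* filled in by the boot path */

          static bool kaslr_enabled(void)
          {
                  return !!(boot_params.hdr.loadflags & KASLR_FLAG);
          }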
      
      Originally-From: Jiri Kosina <jkosina@suse.cz>
      Suggested-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  5. 02 Apr, 2015 1 commit
    • perf/x86/intel/pt: Add Intel PT PMU driver · 52ca9ced
      Alexander Shishkin authored
      Add support for Intel Processor Trace (PT) to the kernel's perf events.
      PT is an extension of the Intel architecture that collects information
      about software execution, such as control flow, execution modes and
      timings, and formats it into highly compressed binary packets. Even
      compressed, these packets are generated at hundreds of megabytes per
      second per core, which makes it impractical to decode them on the fly
      in the kernel.
      
      This driver exports trace data through the AUX space in the perf ring
      buffer, which is zero-copy mapped into userspace for faster data
      retrieval.
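      A hedged sketch of the AUX-area mmap dance from userspace, assuming
      an already-opened Intel PT event fd (perf_event_open() against the
      intel_pt PMU type); error handling is trimmed for brevity:

          #include <linux/perf_event.h>
          #include <sys/mman.h>
          #include <stddef.h>
          #include <unistd.h>

          void *map_aux(int fd, size_t data_pages, size_t aux_pages)
          {
                  long ps = sysconf(_SC_PAGESIZE);
                  /* Main buffer: one metadata page plus 2^n data pages. */
                  struct perf_event_mmap_page *mp =
                          mmap(NULL, (1 + data_pages) * ps, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
                  if (mp == MAP_FAILED)
                          return NULL;

                  /* Tell the kernel where the AUX area lives, then map it. */
                  mp->aux_offset = (1 + data_pages) * ps;
                  mp->aux_size   = aux_pages * ps;
                  return mmap(NULL, mp->aux_size, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, mp->aux_offset);
          }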
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kaixu Xia <kaixu.xia@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Robert Richter <rric@kernel.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: adrian.hunter@intel.com
      Cc: kan.liang@intel.com
      Cc: markus.t.metzger@intel.com
      Cc: mathieu.poirier@linaro.org
      Link: http://lkml.kernel.org/r/1422614392-114498-1-git-send-email-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  6. 01 Apr, 2015 1 commit
  7. 27 Mar, 2015 1 commit
  8. 17 Mar, 2015 2 commits
  9. 16 Mar, 2015 1 commit
    • Revert "x86/mm/ASLR: Propagate base load address calculation" · 69797daf
      Borislav Petkov authored
      This reverts commit:
      
        f47233c2 ("x86/mm/ASLR: Propagate base load address calculation")
      
      The main reason for the revert is that the new boot flag does not work
      at all currently, and in order to make this work, we need non-trivial
      changes to the x86 boot code which we didn't manage to get done in
      time for merging.
      
      And even if we had, the changes would have been too risky, so instead
      of rushing things and breaking booting of 4.1 on boxes left and right,
      we will be very strict and conservative and will take our time with
      this to fix and test it properly.
      Reported-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Junjie Mao <eternal.n08@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Fleming <matt.fleming@intel.com>
      Link: http://lkml.kernel.org/r/20150316100628.GD22995@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  10. 05 Mar, 2015 2 commits
    • x86/asm/entry/64: Fix comments · e90e147c
      Denys Vlasenko authored
       - Misleading and slightly incorrect comments in "struct pt_regs" are
         fixed (four instances).

       - Fix the incorrect comment atop the EMPTY_FRAME macro.

       - Explain in more detail what we do with the stack layout during a
         hardware interrupt.

       - Correct comments about the "partial stack frame" which are no
         longer true.
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Will Drewry <wad@chromium.org>
      Link: http://lkml.kernel.org/r/1423778052-21038-3-git-send-email-dvlasenk@redhat.com
      Link: http://lkml.kernel.org/r/e1f4429c491fe6ceeddb879dea2786e0f8920f9c.1424989793.git.luto@amacapital.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/asm/entry/64: Always allocate a complete "struct pt_regs" on the kernel stack · 76f5df43
      Denys Vlasenko authored
      The 64-bit entry code was using six fewer stack slots by not
      saving/restoring registers which are callee-preserved according
      to the C ABI, and was not allocating space for them.
      
      Only when syscalls needed a complete "struct pt_regs" was
      the complete area allocated and filled in.
      
      As an additional twist, on interrupt entry a "slightly less
      truncated pt_regs" trick is used, to make nested interrupt
      stacks easier to unwind.
      
      This proved to be a source of significant obfuscation and subtle
      bugs. For example, 'stub_fork' had to pop the return address,
      extend the struct, save registers, and push the return address
      back. Ugly. 'ia32_ptregs_common' pops the return address and
      "returns" via a jmp insn, throwing a wrench into the CPU's return
      stack cache.
      
      This patch changes the code to always allocate a complete
      "struct pt_regs" on the kernel stack. The saving of registers
      is still done lazily.
      
      "Partial pt_regs" trick on interrupt stack is retained.
      
      Macros which manipulate "struct pt_regs" on stack are reworked:
      
       - ALLOC_PT_GPREGS_ON_STACK allocates the structure.
      
       - SAVE_C_REGS saves to it those registers which are clobbered
         by C code.
      
       - SAVE_EXTRA_REGS saves to it all other registers.
      
       - Corresponding RESTORE_* and REMOVE_PT_GPREGS_FROM_STACK macros
         reverse it.
      
      'ia32_ptregs_common', 'stub_fork' and friends lost their ugly dance
      with the return pointer.
      
      LOAD_ARGS32 in ia32entry.S now uses symbolic stack offsets
      instead of magic numbers.
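      As a toy illustration of symbolic offsets versus magic numbers (a
      user-space mock, not the kernel's actual struct or macros):

          #include <stddef.h>
          #include <stdint.h>
          #include <stdio.h>

          /* Mock register save area, in push order. */
          struct mock_pt_regs {
                  uint64_t r15, r14, r13, r12, rbp, rbx;
                  /* ... caller-clobbered registers follow ... */
          };

          int main(void)
          {
                  /* offsetof() replaces a hard-coded byte count like "40". */
                  printf("rbx lives at stack offset %zu\n",
                         offsetof(struct mock_pt_regs, rbx));
                  return 0;
          }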
      
      'error_entry' and 'save_paranoid' now use SAVE_C_REGS +
      SAVE_EXTRA_REGS instead of having it open-coded yet again.
      
      Patch was run-tested: 64-bit executables, 32-bit executables,
      strace works.
      
      Timing tests did not show measurable difference in 32-bit
      and 64-bit syscalls.
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Will Drewry <wad@chromium.org>
      Link: http://lkml.kernel.org/r/1423778052-21038-2-git-send-email-dvlasenk@redhat.com
      Link: http://lkml.kernel.org/r/b89763d354aa23e670b9bdf3a40ae320320a7c2e.1424989793.git.luto@amacapital.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  11. 02 Mar, 2015 1 commit
  12. 19 Feb, 2015 1 commit
    • x86/mm/ASLR: Propagate base load address calculation · f47233c2
      Jiri Kosina authored
      Commit:
      
        e2b32e67 ("x86, kaslr: randomize module base load address")
      
      makes the base address for modules unconditionally randomized when
      CONFIG_RANDOMIZE_BASE is defined and the "nokaslr" option isn't
      present on the command line.
      
      This is not consistent with how choose_kernel_location() decides
      whether it will randomize the kernel load base.

      Namely, CONFIG_HIBERNATION disables kASLR (unless the "kaslr" option
      is explicitly specified on the kernel command line), which makes the
      state space larger than what the module loader is looking at. IOW,
      CONFIG_HIBERNATION && CONFIG_RANDOMIZE_BASE is a valid configuration;
      kASLR wouldn't be applied by default in that case, but the module
      loader is not aware of that.
      
      Instead of fixing the logic in module.c, this patch takes a more
      generic approach. It introduces a new bootparam setup_data type,
      SETUP_KASLR, and uses it to pass along whether kASLR has been applied
      during kernel decompression, setting a global 'kaslr_enabled'
      variable accordingly, so that any kernel code (module loading,
      livepatching, ...) can make decisions based on its value.

      The x86 module loader is converted to make use of this flag.
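      A hedged sketch of the consuming side early in kernel setup; the
      one-byte payload layout is an assumption based on the description
      above, and error handling is omitted:

          /* Walk boot_params.hdr.setup_data looking for SETUP_KASLR. */
          static void __init detect_kaslr_status(void)
          {
                  u64 pa = boot_params.hdr.setup_data;

                  while (pa) {
                          struct setup_data *sd =
                                  early_memremap(pa, sizeof(*sd) + 1);

                          if (sd->type == SETUP_KASLR)
                                  kaslr_enabled = !!sd->data[0];  /* assumed payload */
                          pa = sd->next;
                          early_memunmap(sd, sizeof(*sd) + 1);
                  }
          }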
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: "H. Peter Anvin" <hpa@linux.intel.com>
      Link: https://lkml.kernel.org/r/alpine.LNX.2.00.1502101411280.10719@pobox.suse.cz
      [ Always dump correct kaslr status when panicking ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
  13. 10 Feb, 2015 1 commit
    • tools/power turbostat: decode MSR_*_PERF_LIMIT_REASONS · 3a9a941d
      Len Brown authored
      The Processor generation code-named Haswell
      added MSR_{CORE | GFX | RING}_PERF_LIMIT_REASONS
      to explain when and how the processor limits frequency.
      
      turbostat -v
      will now decode these bits.
      
      Each MSR has an "Active" set of bits which describe
      current conditions, and a "Logged" set of bits,
      which describe what has happened since last cleared.
      
      Turbostat currently doesn't clear the log bits.
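      A hedged sketch of that split, assuming the Haswell layout where the
      "Logged" bits mirror the "Active" bits shifted up by 16 (the msr
      argument is the raw register value, however it was read):

          #include <stdint.h>
          #include <stdio.h>

          static void decode_limit_reasons(uint64_t msr)
          {
                  uint32_t active = msr & 0xffff;          /* current conditions */
                  uint32_t logged = (msr >> 16) & 0xffff;  /* since last cleared */

                  if (active & (1 << 0))
                          printf("PROCHOT currently limiting\n");
                  if (logged & (1 << 0))
                          printf("PROCHOT has limited since last clear\n");
          }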
      Signed-off-by: Len Brown <len.brown@intel.com>
  14. 30 Jan, 2015 1 commit
  15. 26 Jan, 2015 1 commit
  16. 09 Jan, 2015 2 commits
  17. 21 Dec, 2014 1 commit
  18. 18 Dec, 2014 1 commit
  19. 05 Dec, 2014 1 commit
  20. 03 Dec, 2014 1 commit
  21. 12 Nov, 2014 2 commits
  22. 24 Oct, 2014 1 commit
  23. 21 Aug, 2014 1 commit
  24. 16 Aug, 2014 1 commit
  25. 21 Jul, 2014 1 commit
  26. 17 Jul, 2014 1 commit
  27. 19 Jun, 2014 1 commit
  28. 30 May, 2014 1 commit
    • x86/xsaves: Detect xsaves/xrstors feature · 6229ad27
      Fenghua Yu authored
      Detect the xsaveopt, xsavec, xgetbv, and xsaves features in the
      processor extended state enumeration sub-leaf (eax=0x0d, ecx=1):
      Bit 00: XSAVEOPT is available
      Bit 01: Supports XSAVEC and the compacted form of XRSTOR if set
      Bit 02: Supports XGETBV with ECX = 1 if set
      Bit 03: Supports XSAVES/XRSTORS and IA32_XSS if set
      
      The above features are defined in the new word 10 in cpu features.
      
      The IA32_XSS MSR (index DA0H) contains a state-component bitmap that specifies
      the state components that software has enabled xsaves and xrstors to manage.
      If the bit corresponding to a state component is clear in XCR0 | IA32_XSS,
      xsaves and xrstors will not operate on that state component, regardless of
      the value of the instruction mask.
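      A minimal user-space check of the same sub-leaf, using GCC's
      <cpuid.h> helpers (requires a reasonably recent compiler providing
      __get_cpuid_count):

          #include <cpuid.h>
          #include <stdio.h>

          int main(void)
          {
                  unsigned int eax, ebx, ecx, edx;

                  /* CPUID.(EAX=0DH, ECX=1): extended state enumeration. */
                  if (!__get_cpuid_count(0x0d, 1, &eax, &ebx, &ecx, &edx))
                          return 1;
                  printf("XSAVEOPT: %u\n", (eax >> 0) & 1);
                  printf("XSAVEC:   %u\n", (eax >> 1) & 1);
                  printf("XGETBV1:  %u\n", (eax >> 2) & 1);
                  printf("XSAVES:   %u\n", (eax >> 3) & 1);
                  return 0;
          }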
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Link: http://lkml.kernel.org/r/1401387164-43416-3-git-send-email-fenghua.yu@intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  29. 09 May, 2014 1 commit
  30. 06 May, 2014 1 commit
  31. 14 Mar, 2014 2 commits
  32. 24 Feb, 2014 1 commit
  33. 21 Jan, 2014 1 commit
  34. 17 Jan, 2014 2 commits
    • KVM: nVMX: Clean up handling of VMX-related MSRs · cae50139
      Jan Kiszka authored
      This simplifies the code and also stops issuing warnings about writes
      to unhandled MSRs when VMX is disabled or the Feature Control MSR is
      locked; we do handle them all according to the spec.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • add support for Hyper-V reference time counter · e984097b
      Vadim Rozenfeld authored
      Signed-off-by: Peter Lieven <pl@kamp.de>
      Signed-off-by: Gleb Natapov
      Signed-off-by: Vadim Rozenfeld <vrozenfe@redhat.com>
      
      After some consideration I decided to submit only Hyper-V reference
      counters support this time. I will submit iTSC support as a separate
      patch as soon as it is ready.
      
      v1 -> v2
      1. mark the TSC page dirty, as suggested by
          Eric Northup <digitaleric@google.com> and Gleb
      2. disable local irqs when calling get_kernel_ns,
          as was done by Peter Lieven <pl@kamp.de>
      3. move the check for TSC page enable from the second patch
          to this one.
      
      v3 -> v4
          Get rid of ref counter offset.
      
      v4 -> v5
          replace __copy_to_user with kvm_write_guest
          when updating the iTSC page.
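      For reference, a hedged sketch of how a guest side computes the
      reference time from such a TSC page; field names follow the Hyper-V
      spec layout, and rdtsc() is a stand-in for the real instruction:

          #include <stdint.h>

          struct hv_ref_tsc_page {
                  volatile uint32_t tsc_sequence;  /* 0 means: fall back to the MSR */
                  uint32_t reserved;
                  volatile uint64_t tsc_scale;
                  volatile int64_t  tsc_offset;
          };

          static uint64_t read_ref_time(const struct hv_ref_tsc_page *p,
                                        uint64_t (*rdtsc)(void))
          {
                  uint32_t seq;
                  uint64_t tsc, scale;
                  int64_t  off;

                  do {    /* retry if the host updates the page mid-read */
                          seq   = p->tsc_sequence;
                          tsc   = rdtsc();
                          scale = p->tsc_scale;
                          off   = p->tsc_offset;
                  } while (seq == 0 || seq != p->tsc_sequence);

                  /* High 64 bits of the 128-bit product, plus the offset. */
                  return (uint64_t)(((unsigned __int128)tsc * scale) >> 64) + off;
          }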
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>