1. 13 Aug 2015, 1 commit
  2. 23 Jul 2015, 1 commit
  3. 10 Jul 2015, 1 commit
  4. 04 Jul 2015, 1 commit
  5. 07 Jun 2015, 2 commits
  6. 04 Jun 2015, 1 commit
  7. 28 May 2015, 2 commits
  8. 27 May 2015, 1 commit
  9. 19 May 2015, 2 commits
    • x86/fpu: Rename xsave.header::xstate_bv to 'xfeatures' · 400e4b20
      Committed by Ingo Molnar
      'xsave.header::xstate_bv' is a misnomer - what does 'bv' stand for?
      
      It probably comes from the 'XGETBV' instruction name, but I could
      not find in the Intel documentation where that abbreviation comes
      from. It could mean 'bit vector' - or something else?
      
But how about - instead of guessing at a weird name - we name
the field in an obvious and descriptive way that tells us exactly
what it does?
      
      So rename it to 'xfeatures', which is a bitmask of the
      xfeatures that are fpstate_active in that context structure.
      
An eyesore like:
      
                 fpu->state->xsave.xsave_hdr.xstate_bv |= XSTATE_FP;
      
      is now much more readable:
      
                 fpu->state->xsave.header.xfeatures |= XSTATE_FP;
      
This form is not just infinitely more readable, it is also shorter.
Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      400e4b20
    • x86/fpu: Rename 'xsave_hdr' to 'header' · 3a54450b
      Committed by Ingo Molnar
      Code like:
      
                 fpu->state->xsave.xsave_hdr.xstate_bv |= XSTATE_FP;
      
is an eyesore, because not only are the words 'xsave' and 'state'
repeated twice (!), but the 'hdr' and 'bv' abbreviations are also
pretty meaningless at first glance.
      
      Start cleaning this up by renaming 'xsave_hdr' to 'header'.
Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3a54450b
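      Taken together, the two renames make the xsave header self-describing. A minimal, hedged sketch of the resulting layout (field set trimmed; the containing type names are assumptions based on this series, not the full kernel definition):

          /* Sketch only -- kernel types (u64) from <linux/types.h> assumed. */
          struct xstate_header {
                  u64 xfeatures;              /* bitmask of xfeatures present in this buffer */
                  u64 xcomp_bv;
                  u64 reserved[6];
          };

          struct xsave_struct {
                  struct i387_fxsave_struct i387;    /* legacy FP/SSE area */
                  struct xstate_header      header;  /* was 'xsave_hdr'    */
                  /* extended state components follow */
          };

          /* Usage after both renames: */
          fpu->state->xsave.header.xfeatures |= XSTATE_FP;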
  10. 07 May 2015, 1 commit
    • KVM: x86: Support for disabling quirks · 90de4a18
      Committed by Nadav Amit
Introduce KVM_CAP_DISABLE_QUIRKS for disabling x86 quirks that were previously
created in order to overcome QEMU issues. Those issues were mostly the result of
an invalid VM BIOS.  Currently there are two quirks that can be disabled:
      
      1. KVM_QUIRK_LINT0_REENABLED - LINT0 was enabled after boot
      2. KVM_QUIRK_CD_NW_CLEARED - CD and NW are cleared after boot
      
These two issues are already resolved in recent releases of QEMU, which would
therefore disable these quirks.
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Message-Id: <1428879221-29996-1-git-send-email-namit@cs.technion.ac.il>
      [Report capability from KVM_CHECK_EXTENSION too. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      90de4a18
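      As a usage note, a hedged sketch of how userspace could disable one of these quirks through KVM_ENABLE_CAP on the VM file descriptor; the quirk constant follows the naming in this commit (later kernels use KVM_X86_QUIRK_*), and error handling is omitted:

          #include <linux/kvm.h>
          #include <sys/ioctl.h>

          /* Disable the LINT0 quirk for an existing VM fd (sketch only). */
          static int disable_lint0_quirk(int vm_fd)
          {
                  struct kvm_enable_cap cap = {
                          .cap  = KVM_CAP_DISABLE_QUIRKS,
                          /* args[0] is a bitmask of quirks to disable */
                          .args = { KVM_QUIRK_LINT0_REENABLED },
                  };

                  return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
          }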
  11. 19 Apr 2015, 1 commit
  12. 14 Apr 2015, 1 commit
    • x86 msr-index: define MSR_TURBO_RATIO_LIMIT,1,2 · c4d30668
      Committed by Len Brown
      MSR_TURBO_RATIO_LIMIT has grown into a set of three registers.
      Add the documented names for them, in preparation
      for deleting the previous ad-hoc names:
      
      +#define MSR_TURBO_RATIO_LIMIT          0x000001ad
      +#define MSR_TURBO_RATIO_LIMIT1         0x000001ae
      +#define MSR_TURBO_RATIO_LIMIT2         0x000001af
Signed-off-by: Len Brown <len.brown@intel.com>
      Cc: x86@kernel.org
      c4d30668
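      For illustration, a hedged sketch of reading this register through the msr driver (requires the msr module and root); the per-byte interpretation -- byte N holding the max turbo ratio with N+1 cores active -- is the commonly documented layout and is an assumption here, not taken from this commit:

          #include <fcntl.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <unistd.h>

          #define MSR_TURBO_RATIO_LIMIT 0x000001ad

          int main(void)
          {
                  uint64_t val;
                  int fd = open("/dev/cpu/0/msr", O_RDONLY);

                  if (fd < 0 || pread(fd, &val, sizeof(val), MSR_TURBO_RATIO_LIMIT) != sizeof(val))
                          return 1;

                  /* Assumed layout: byte N = max turbo ratio with N+1 cores active. */
                  for (int group = 0; group < 8; group++)
                          printf("%d active core(s): ratio %llu\n", group + 1,
                                 (unsigned long long)((val >> (8 * group)) & 0xff));

                  close(fd);
                  return 0;
          }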
  13. 03 Apr 2015, 1 commit
    • x86/mm/KASLR: Propagate KASLR status to kernel proper · 78cac48c
      Committed by Borislav Petkov
      Commit:
      
        e2b32e67 ("x86, kaslr: randomize module base load address")
      
made module base address randomization unconditional and didn't take into
account that KASLR may be disabled due to CONFIG_HIBERNATION or the
"nokaslr" command line option. For more info see the (now reverted) commit:
      
        f47233c2 ("x86/mm/ASLR: Propagate base load address calculation")
      
In order to propagate KASLR status to the kernel proper, we need a single bit
in boot_params.hdr.loadflags; we've chosen bit 1, thus leaving the
top-down allocated bits for flags meant to be used by the bootloader.
      
      Originally-From: Jiri Kosina <jkosina@suse.cz>
Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      78cac48c
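      A minimal sketch of what consuming that bit in kernel proper could look like, assuming the flag is bit 1 of boot_params.hdr.loadflags as described above (the constant name here is illustrative):

          #include <asm/bootparam.h>          /* struct boot_params */

          #define KASLR_FLAG (1 << 1)         /* bit 1, per the description above */

          static int kaslr_status_from_bootparams(const struct boot_params *bp)
          {
                  return !!(bp->hdr.loadflags & KASLR_FLAG);
          }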
  14. 02 Apr 2015, 1 commit
    • perf/x86/intel/pt: Add Intel PT PMU driver · 52ca9ced
      Committed by Alexander Shishkin
Add support for Intel Processor Trace (PT) to the kernel's perf events.
PT is an extension of the Intel architecture that collects information about
software execution such as control flow, execution modes and timings, and
formats it into highly compressed binary packets. Even compressed, these
packets are generated at hundreds of megabytes per second per core, which
makes it impractical to decode them on the fly in the kernel.
      
This driver exports trace data through the AUX space in the perf ring
buffer, which is zero-copy mapped into userspace for faster data retrieval.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kaixu Xia <kaixu.xia@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Robert Richter <rric@kernel.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@infradead.org
      Cc: adrian.hunter@intel.com
      Cc: kan.liang@intel.com
      Cc: markus.t.metzger@intel.com
      Cc: mathieu.poirier@linaro.org
Link: http://lkml.kernel.org/r/1422614392-114498-1-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      52ca9ced
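      A hedged userspace sketch of how a tool could set up that AUX area with perf_event_open() plus two mmap() calls; the dynamic PMU type must be read from /sys/bus/event_source/devices/intel_pt/type at runtime, the page counts should be powers of two, and error handling is mostly omitted:

          #include <linux/perf_event.h>
          #include <stddef.h>
          #include <sys/mman.h>
          #include <sys/syscall.h>
          #include <unistd.h>

          /* Map the regular ring buffer, then the AUX area where PT packets land.
           * 'pt_type' is the intel_pt PMU type read from sysfs (assumed known here). */
          static void *map_pt_aux(int pt_type, size_t data_pages, size_t aux_pages)
          {
                  struct perf_event_attr attr = {
                          .size     = sizeof(attr),
                          .type     = pt_type,
                          .disabled = 1,
                  };
                  size_t page = (size_t)sysconf(_SC_PAGESIZE);
                  int fd = (int)syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);

                  if (fd < 0)
                          return NULL;

                  /* Header page plus the regular data buffer first ... */
                  struct perf_event_mmap_page *hdr =
                          mmap(NULL, (data_pages + 1) * page, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
                  if (hdr == MAP_FAILED)
                          return NULL;

                  /* ... then describe and map the AUX area. */
                  hdr->aux_offset = (data_pages + 1) * page;
                  hdr->aux_size   = aux_pages * page;
                  return mmap(NULL, hdr->aux_size, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, hdr->aux_offset);
          }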
  15. 01 Apr 2015, 1 commit
  16. 27 Mar 2015, 1 commit
  17. 17 Mar 2015, 2 commits
  18. 16 Mar 2015, 1 commit
    • Revert "x86/mm/ASLR: Propagate base load address calculation" · 69797daf
      Committed by Borislav Petkov
      This reverts commit:
      
        f47233c2 ("x86/mm/ASLR: Propagate base load address calculation")
      
      The main reason for the revert is that the new boot flag does not work
      at all currently, and in order to make this work, we need non-trivial
      changes to the x86 boot code which we didn't manage to get done in
      time for merging.
      
And even if we had, they would've been too risky, so instead of
rushing things and breaking 4.1 booting on boxes left and right, we
will be very strict and conservative and will take our time with
this to fix and test it properly.
Reported-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Baoquan He <bhe@redhat.com>
Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Junjie Mao <eternal.n08@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Fleming <matt.fleming@intel.com>
Link: http://lkml.kernel.org/r/20150316100628.GD22995@pd.tnic
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      69797daf
  19. 05 Mar 2015, 2 commits
    • x86/asm/entry/64: Fix comments · e90e147c
      Committed by Denys Vlasenko
       - Misleading and slightly incorrect comments in "struct pt_regs" are
         fixed (four instances).
      
 - Fix the incorrect comment atop the EMPTY_FRAME macro.

 - Explain in more detail what we do with the stack layout during a hw interrupt.
      
       - Correct comments about "partial stack frame" which are no longer
         true.
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Will Drewry <wad@chromium.org>
      Link: http://lkml.kernel.org/r/1423778052-21038-3-git-send-email-dvlasenk@redhat.com
Link: http://lkml.kernel.org/r/e1f4429c491fe6ceeddb879dea2786e0f8920f9c.1424989793.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e90e147c
    • x86/asm/entry/64: Always allocate a complete "struct pt_regs" on the kernel stack · 76f5df43
      Committed by Denys Vlasenko
The 64-bit entry code was using six fewer stack slots by not
saving/restoring registers which are callee-preserved according
to the C ABI, and was not allocating space for them.
      
      Only when syscalls needed a complete "struct pt_regs" was
      the complete area allocated and filled in.
      
      As an additional twist, on interrupt entry a "slightly less
      truncated pt_regs" trick is used, to make nested interrupt
      stacks easier to unwind.
      
      This proved to be a source of significant obfuscation and subtle
bugs. For example, 'stub_fork' had to pop the return address,
extend the struct, save registers, and push the return address back.
Ugly. 'ia32_ptregs_common' pops the return address and "returns" via a
jmp insn, throwing a wrench into the CPU's return stack cache.
      
      This patch changes the code to always allocate a complete
      "struct pt_regs" on the kernel stack. The saving of registers
      is still done lazily.
      
      "Partial pt_regs" trick on interrupt stack is retained.
      
      Macros which manipulate "struct pt_regs" on stack are reworked:
      
       - ALLOC_PT_GPREGS_ON_STACK allocates the structure.
      
       - SAVE_C_REGS saves to it those registers which are clobbered
         by C code.
      
       - SAVE_EXTRA_REGS saves to it all other registers.
      
       - Corresponding RESTORE_* and REMOVE_PT_GPREGS_FROM_STACK macros
         reverse it.
      
      'ia32_ptregs_common', 'stub_fork' and friends lost their ugly dance
      with the return pointer.
      
      LOAD_ARGS32 in ia32entry.S now uses symbolic stack offsets
      instead of magic numbers.
      
      'error_entry' and 'save_paranoid' now use SAVE_C_REGS +
      SAVE_EXTRA_REGS instead of having it open-coded yet again.
      
      Patch was run-tested: 64-bit executables, 32-bit executables,
      strace works.
      
      Timing tests did not show measurable difference in 32-bit
      and 64-bit syscalls.
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Will Drewry <wad@chromium.org>
      Link: http://lkml.kernel.org/r/1423778052-21038-2-git-send-email-dvlasenk@redhat.com
Link: http://lkml.kernel.org/r/b89763d354aa23e670b9bdf3a40ae320320a7c2e.1424989793.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
      76f5df43
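      For orientation, a hedged sketch of the x86-64 'struct pt_regs' these macros operate on (field names per the kernel's asm/ptrace.h as I understand it), annotated with which SAVE_* macro covers which group:

          struct pt_regs {
                  /* Saved by SAVE_EXTRA_REGS: callee-preserved per the C ABI. */
                  unsigned long r15;
                  unsigned long r14;
                  unsigned long r13;
                  unsigned long r12;
                  unsigned long bp;
                  unsigned long bx;
                  /* Saved by SAVE_C_REGS: clobbered by C code. */
                  unsigned long r11;
                  unsigned long r10;
                  unsigned long r9;
                  unsigned long r8;
                  unsigned long ax;
                  unsigned long cx;
                  unsigned long dx;
                  unsigned long si;
                  unsigned long di;
                  /* Pushed by the hardware / entry stubs. */
                  unsigned long orig_ax;
                  unsigned long ip;
                  unsigned long cs;
                  unsigned long flags;
                  unsigned long sp;
                  unsigned long ss;
          };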
  20. 02 Mar 2015, 1 commit
  21. 19 Feb 2015, 1 commit
    • x86/mm/ASLR: Propagate base load address calculation · f47233c2
      Committed by Jiri Kosina
      Commit:
      
        e2b32e67 ("x86, kaslr: randomize module base load address")
      
makes the module base address unconditionally randomized when
CONFIG_RANDOMIZE_BASE is defined and the "nokaslr" option isn't
present on the command line.
      
      This is not consistent with how choose_kernel_location() decides whether
      it will randomize kernel load base.
      
Namely, CONFIG_HIBERNATION disables kASLR (unless the "kaslr" option is
explicitly specified on the kernel command line), which makes the state space
larger than what the module loader is looking at. IOW, CONFIG_HIBERNATION &&
CONFIG_RANDOMIZE_BASE is a valid configuration; kASLR wouldn't be applied
by default in that case, but the module loader is not aware of that.
      
Instead of fixing the logic in module.c, this patch takes a more generic
approach. It introduces a new bootparam setup_data type, SETUP_KASLR, and
uses it to pass the information whether KASLR has been applied during
kernel decompression, and sets a global 'kaslr_enabled' variable
accordingly, so that any kernel code (module loading, livepatching, ...)
can make decisions based on its value.

The x86 module loader is converted to make use of this flag.
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Acked-by: Kees Cook <keescook@chromium.org>
      Cc: "H. Peter Anvin" <hpa@linux.intel.com>
      Link: https://lkml.kernel.org/r/alpine.LNX.2.00.1502101411280.10719@pobox.suse.cz
      [ Always dump correct kaslr status when panicking ]
Signed-off-by: Borislav Petkov <bp@suse.de>
      f47233c2
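      A hedged sketch of the setup_data walk this (later reverted) approach implies; 'struct setup_data' is the real boot protocol layout, while SETUP_KASLR and the early_memremap() usage here are assumptions based on the description above:

          /* Scan the boot_params setup_data list for a (hypothetical here)
           * SETUP_KASLR entry appended by the decompressor. */
          static int kaslr_enabled_from_setup_data(u64 pa_data)
          {
                  while (pa_data) {
                          struct setup_data *data =
                                  early_memremap(pa_data, sizeof(*data));
                          u32 type = data->type;
                          u64 next = data->next;

                          early_memunmap(data, sizeof(*data));

                          if (type == SETUP_KASLR)
                                  return 1;   /* decompressor recorded that it randomized */
                          pa_data = next;
                  }
                  return 0;
          }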
  22. 10 Feb 2015, 1 commit
    • tools/power turbostat: decode MSR_*_PERF_LIMIT_REASONS · 3a9a941d
      Committed by Len Brown
      The Processor generation code-named Haswell
      added MSR_{CORE | GFX | RING}_PERF_LIMIT_REASONS
      to explain when and how the processor limits frequency.
      
      turbostat -v
      will now decode these bits.
      
      Each MSR has an "Active" set of bits which describe
      current conditions, and a "Logged" set of bits,
      which describe what has happened since last cleared.
      
      Turbostat currently doesn't clear the log bits.
Signed-off-by: Len Brown <len.brown@intel.com>
      3a9a941d
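      A hedged sketch of the split turbostat performs on these registers, reading through the msr driver; the MSR address is passed in rather than hard-coded, and the assumption here is that the "Logged" bits mirror the "Active" bits shifted up by 16:

          #include <fcntl.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <unistd.h>

          /* Read a *_PERF_LIMIT_REASONS MSR and print its Active bits (current
           * conditions) and Logged bits (sticky since last cleared). */
          static int dump_limit_reasons(int cpu, uint32_t msr)
          {
                  char path[64];
                  uint64_t val;
                  int fd;

                  snprintf(path, sizeof(path), "/dev/cpu/%d/msr", cpu);
                  fd = open(path, O_RDONLY);
                  if (fd < 0)
                          return -1;
                  if (pread(fd, &val, sizeof(val), msr) != sizeof(val)) {
                          close(fd);
                          return -1;
                  }

                  printf("MSR 0x%x: active=0x%04llx logged=0x%04llx\n", msr,
                         (unsigned long long)(val & 0xffff),
                         (unsigned long long)((val >> 16) & 0xffff));
                  close(fd);
                  return 0;
          }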
  23. 30 Jan 2015, 1 commit
  24. 26 Jan 2015, 1 commit
  25. 09 Jan 2015, 2 commits
  26. 21 Dec 2014, 1 commit
  27. 18 Dec 2014, 1 commit
  28. 05 Dec 2014, 1 commit
  29. 03 Dec 2014, 1 commit
  30. 12 Nov 2014, 2 commits
  31. 24 Oct 2014, 1 commit
  32. 21 Aug 2014, 1 commit
  33. 16 Aug 2014, 1 commit