1. 19 May 2015 (4 commits)
    • x86/fpu: Move 'PER_CPU(fpu_owner_task)' to fpu/core.c · b0c050c5
      Authored by Ingo Molnar
      Move it closer to other per-cpu FPU data structures.
      
      This also unifies the 32-bit and 64-bit code.
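      
      As a rough illustration (a sketch, not the verbatim patch), the per-cpu
      declaration that gets moved looks something like this:
      
      	/* Per-cpu pointer tracking whose FPU context is currently loaded
      	 * on this CPU; kept next to the other per-cpu FPU data in
      	 * arch/x86/kernel/fpu/core.c. */
      	DEFINE_PER_CPU(struct task_struct *, fpu_owner_task);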
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/fpu: Fix header file dependencies of fpu-internal.h · f89e32e0
      Authored by Ingo Molnar
      Fix a minor header file dependency bug in asm/fpu-internal.h: it
      relies on i387.h but does not include it. All existing users of
      fpu-internal.h happened to include i387.h explicitly, which is why
      the omission went unnoticed.
      
      Also remove unnecessary includes, to reduce compilation time.
      
      This also makes it easier to use it as a standalone header file
      for FPU internals, such as an upcoming C module in arch/x86/kernel/fpu/.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/fpu: Rename fpu_init() to fpu__cpu_init() · 3a9c4b0d
      Authored by Ingo Molnar
      fpu_init() is a bit of a misnomer in that it (falsely) creates the
      impression that it's related to the (old) fpu_finit() function,
      which initializes FPU ctx state.
      
      Rename it to fpu__cpu_init() to make clear that it performs boot-time
      CPU initialization, and to move it into the fpu__*() namespace.
      
      Also fix and extend its comment block to point out that it's
      called not only on the boot CPU, but on secondary CPUs as well.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/fpu: Rename fpu_detect() to fpu__detect() · 1a7dc0db
      Authored by Ingo Molnar
      Use the fpu__*() namespace to organize FPU ops better.
      
      Also document fpu__detect() a bit.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 03 April 2015 (3 commits)
  3. 27 March 2015 (1 commit)
    • x86/asm/entry/64: Fix comment about SYSENTER MSRs · 487d1edb
      Authored by Denys Vlasenko
      The comment is ancient: it dates back to the time when only AMD's
      x86_64 implementation existed. AMD didn't (and still doesn't)
      support SYSENTER in long mode, so these writes were "just in case"
      back then.
      
      This has changed: Intel's x86_64 appeared, and Intel does
      support SYSENTER in long mode. "Some future 64-bit CPU" is here
      already.
      
      The code may appear "buggy" for AMD as it stands, since
      MSR_IA32_SYSENTER_EIP is only 32 bits wide on AMD CPUs. Writing a
      kernel function's address to it would drop the high bits, and a
      subsequent branch through this MSR via SYSENTER would seem to let
      the user transition to CPL0 while executing their own code. Scary, eh?
      
      Explain why that is not a bug: the SYSENTER instruction simply does
      not work on AMD CPUs in long mode.
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Drewry <wad@chromium.org>
      Link: http://lkml.kernel.org/r/1427453956-21931-1-git-send-email-dvlasenk@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 25 March 2015 (2 commits)
    • x86/asm/entry/64: Always set up SYSENTER MSRs · d56fe4bf
      Authored by Ingo Molnar
      On CONFIG_IA32_EMULATION=y kernels we set up
      MSR_IA32_SYSENTER_CS/ESP/EIP, but on !CONFIG_IA32_EMULATION
      kernels we leave them unchanged.
      
      Clear them to make sure the instruction is disabled properly.
      
      SYSCALL is set up properly in both cases.
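      
      Roughly, the relevant logic in syscall_init() becomes (an illustrative
      sketch, not the exact diff):
      
      	#ifdef CONFIG_IA32_EMULATION
      		/* ... point SYSENTER_CS/ESP/EIP at the real 32-bit entry ... */
      	#else
      		/* No 32-bit emulation: zero the MSRs so a stray SYSENTER
      		 * cannot jump through stale or uninitialized values. */
      		wrmsrl(MSR_IA32_SYSENTER_CS, 0ULL);
      		wrmsrl(MSR_IA32_SYSENTER_ESP, 0ULL);
      		wrmsrl(MSR_IA32_SYSENTER_EIP, 0ULL);
      	#endif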
      Acked-by: Denys Vlasenko <dvlasenk@redhat.com>
      Acked-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Drewry <wad@chromium.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/asm/entry: Get rid of KERNEL_STACK_OFFSET · ef593260
      Authored by Denys Vlasenko
      PER_CPU_VAR(kernel_stack) was set up so that it points five stack
      slots below the top of the stack.
      
      Presumably this was done to save one "sub $5*8, %rsp" in the
      syscall/sysenter code paths, where the iret frame needs to be
      created by hand.
      
      Ironically, none of those paths benefits from the optimization,
      since all of them need to allocate additional data on the stack
      (struct pt_regs), so they still have to perform the subtraction.
      
      This patch eliminates KERNEL_STACK_OFFSET.
      
      PER_CPU_VAR(kernel_stack) now points directly to the top of the stack.
      pt_regs allocations are adjusted to allocate the iret frame as well.
      Hopefully we can later merge this with the 32-bit specific
      PER_CPU_VAR(cpu_current_top_of_stack) variable...
      
      The net result in the generated code is that the constants in
      several instructions change.
      
      This change is a prerequisite for switching struct pt_regs creation
      in the SYSCALL64 code path from MOV to PUSH instructions.
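      
      On the C side, the per-cpu update in __switch_to() goes roughly from
      (an illustrative sketch):
      
      	this_cpu_write(kernel_stack, (unsigned long)task_stack_page(next_p) +
      		       THREAD_SIZE - KERNEL_STACK_OFFSET);
      
      to:
      
      	/* kernel_stack now points directly at the top of the stack;
      	 * the entry code allocates the iret frame itself. */
      	this_cpu_write(kernel_stack, (unsigned long)task_stack_page(next_p) +
      		       THREAD_SIZE);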
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Acked-by: Borislav Petkov <bp@suse.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Drewry <wad@chromium.org>
      Link: http://lkml.kernel.org/r/1426785469-15125-2-git-send-email-dvlasenk@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  5. 24 March 2015 (1 commit)
    • x86/asm/entry/64: Fold syscall32_cpu_init() into its sole user · a76c7f46
      Authored by Denys Vlasenko
      Having syscall32/sysenter32 initialization in a separate tiny
      function, called from within a function that is already syscall
      init specific, serves no real purpose.
      
      Its existence also had the unintended effect of wrmsrl(MSR_CSTAR)
      being performed twice: first we set it to a dummy function returning
      -ENOSYS, and immediately afterwards (if CONFIG_IA32_EMULATION) we
      set it to point to the proper syscall32 entry point,
      ia32_cstar_target.
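      
      With the helper folded in, MSR_CSTAR is written only once, to the value
      it actually needs (a sketch; the dummy -ENOSYS handler is named
      ignore_sysret here, matching the kernel of that era):
      
      	#ifdef CONFIG_IA32_EMULATION
      		wrmsrl(MSR_CSTAR, (unsigned long)ia32_cstar_target);
      	#else
      		wrmsrl(MSR_CSTAR, (unsigned long)ignore_sysret);
      	#endif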
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Acked-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Drewry <wad@chromium.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  6. 17 March 2015 (2 commits)
    • x86/asm/entry: Document and clean up the enable_sep_cpu() and syscall32_cpu_init() functions · 8b6c0ab1
      Authored by Ingo Molnar
      Clean up the flow and document the functions a bit better.
      
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Drewry <wad@chromium.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/asm/entry/32: Document the 32-bit SYSENTER "emergency stack" better · d828c71f
      Authored by Denys Vlasenko
      Before the patch, the 'tss_struct::stack' field was not referenced
      anywhere by name.
      
      It was used only to make SYSENTER's stack pointer point just past
      the last byte of tss_struct, so that the stack grows down into the
      trailing stack[64] field.
      
      But grep would not tell you that. You could comment the field out,
      compile, and the kernel would even run, until an unlucky NMI
      corrupts io_bitmap[] (which is also not easy to detect).
      
      This patch changes the code so that the purpose and usage of this
      field are no longer mysterious and can easily be grepped for.
      
      This does change the generated code, for a subtle reason: since
      tss_struct is ____cacheline_aligned, there happen to be 5 longs of
      padding at the end. The old code used that padding as stack space
      too; the new code strictly confines the stack to SYSENTER_stack[].
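      
      The shape of the change is roughly the following (a sketch, not the
      exact struct layout):
      
      	struct tss_struct {
      		struct x86_hw_tss	x86_tss;
      		unsigned long		io_bitmap[IO_BITMAP_LONGS + 1];
      		/*
      		 * Emergency stack used by the 32-bit SYSENTER entry path;
      		 * naming it makes its only user greppable.
      		 */
      		unsigned long		SYSENTER_stack[64];
      	} ____cacheline_aligned;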
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Drewry <wad@chromium.org>
      Link: http://lkml.kernel.org/r/1425912738-559-2-git-send-email-dvlasenk@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 07 March 2015 (1 commit)
  8. 06 March 2015 (1 commit)
  9. 28 February 2015 (1 commit)
  10. 25 February 2015 (1 commit)
    • x86: Add support for Intel Cache QoS Monitoring (CQM) detection · cbc82b17
      Authored by Peter P Waskiewicz Jr
      This patch adds support for the new Cache QoS Monitoring (CQM)
      feature found in future Intel Xeon processors.  It adds the new
      values needed to track CQM resources to the cpuinfo_x86 structure,
      plus the CPUID detection routines for CQM.
      
      CQM allows a process, or set of processes, to be tracked by the CPU
      to determine the cache usage of that task group.  Using this data
      from the CPU, software can be written to extract this data and
      report cache usage and occupancy for a particular process, or
      group of processes.
      
      More information about Cache QoS Monitoring can be found in the
      Intel (R) x86 Architecture Software Developer Manual, section 17.14.
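      
      A rough sketch of the detection idea (field names follow the ones this
      patch adds to cpuinfo_x86; details are simplified):
      
      	if (c->cpuid_level >= 0x0000000F) {
      		u32 eax, ebx, ecx, edx;
      
      		/* QoS monitoring enumeration: leaf 0xF, sub-leaf 0 */
      		cpuid_count(0x0000000F, 0, &eax, &ebx, &ecx, &edx);
      		c->x86_cache_max_rmid = ebx;	/* highest RMID supported */
      
      		/* L3 occupancy monitoring details: leaf 0xF, sub-leaf 1 */
      		cpuid_count(0x0000000F, 1, &eax, &ebx, &ecx, &edx);
      		c->x86_cache_occ_scale = ebx;	/* bytes per occupancy unit */
      	}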
      Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Chris Webb <chris@arachsys.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jacob Shin <jacob.w.shin@gmail.com>
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kanaka Juvva <kanaka.d.juvva@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Steven Honeyman <stevenhoneyman@gmail.com>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Vikas Shivappa <vikas.shivappa@linux.intel.com>
      Link: http://lkml.kernel.org/r/1422038748-21397-5-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  11. 04 February 2015 (2 commits)
  12. 22 January 2015 (1 commit)
  13. 11 January 2015 (1 commit)
    • x86, CPU: Fix trivial printk formatting issues with dmesg · f94fe119
      Authored by Steven Honeyman
      dmesg (from util-linux) currently has two methods for reading the kernel
      message ring buffer: /dev/kmsg and syslog(2). Since kernel 3.5.0 kmsg
      has been the default, which escapes control characters (e.g. new lines)
      before they are shown.
      
      This change means that when dmesg is using /dev/kmsg, a 2 line printk
      makes the output messy, because the second line does not get a
      timestamp.
      
      For example:
      
      [    0.012863] CPU0: Thermal monitoring enabled (TM1)
      [    0.012869] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
      Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
      [    0.012958] Freeing SMP alternatives memory: 28K (ffffffff81d86000 - ffffffff81d8d000)
      [    0.014961] dmar: Host address width 39
      
      Because printk.c intentionally escapes control characters, they should
      not be there in the first place. This patch fixes two occurrences of
      this.
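      
      The shape of the fix is to emit one printk per line instead of embedding
      a '\n' in the middle of the message (a sketch; argument names may differ
      slightly from the actual cpu_detect_tlb() code):
      
      	pr_info("Last level iTLB entries: 4KB %d, 2MB %d, 4MB %d\n",
      		tlb_lli_4k[ENTRIES], tlb_lli_2m[ENTRIES], tlb_lli_4m[ENTRIES]);
      	pr_info("Last level dTLB entries: 4KB %d, 2MB %d, 4MB %d, 1GB %d\n",
      		tlb_lld_4k[ENTRIES], tlb_lld_2m[ENTRIES], tlb_lld_4m[ENTRIES],
      		tlb_lld_1g[ENTRIES]);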
      Signed-off-by: Steven Honeyman <stevenhoneyman@gmail.com>
      Link: https://lkml.kernel.org/r/1414856696-8094-1-git-send-email-stevenhoneyman@gmail.com
      [ Boris: make cpu_detect_tlb() static, while at it. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
  14. 16 November 2014 (1 commit)
    • x86: Require exact match for 'noxsave' command line option · 2cd3949f
      Authored by Dave Hansen
      We have some very similarly named command-line options:
      
      arch/x86/kernel/cpu/common.c:__setup("noxsave", x86_xsave_setup);
      arch/x86/kernel/cpu/common.c:__setup("noxsaveopt", x86_xsaveopt_setup);
      arch/x86/kernel/cpu/common.c:__setup("noxsaves", x86_xsaves_setup);
      
      __setup() is designed to match options that take arguments, like
      "foo=bar" where you would have:
      
      	__setup("foo", x86_foo_func...);
      
      The problem is that "noxsave" actually _matches_ "noxsaves" in
      the same way that "foo" matches "foo=bar".  If you boot an old
      kernel that does not know about "noxsaves" with "noxsaves" on the
      command line, it will interpret the argument as "noxsave", which
      is not what you want at all.
      
      This makes the "noxsave" handler only return success when it finds
      an *exact* match.
      
      [ tglx: We really need to make __setup() more robust. ]
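      
      The fix amounts to rejecting the option when anything follows the bare
      "noxsave" string (a sketch of the handler, with the full list of cleared
      features elided):
      
      	static int __init x86_xsave_setup(char *s)
      	{
      		/* __setup("noxsave", ...) also matches "noxsaves" and
      		 * "noxsaveopt" and passes the trailing characters in 's';
      		 * only accept the option on an exact match. */
      		if (strlen(s))
      			return 0;
      		setup_clear_cpu_cap(X86_FEATURE_XSAVE);
      		/* ... also clear the features that depend on XSAVE ... */
      		return 1;
      	}
      	__setup("noxsave", x86_xsave_setup);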
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: x86@kernel.org
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20141111220133.FE053984@viggo.jf.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  15. 03 November 2014 (1 commit)
  16. 14 October 2014 (1 commit)
  17. 07 October 2014 (1 commit)
    • x86_64, entry: Filter RFLAGS.NT on entry from userspace · 8c7aa698
      Authored by Andy Lutomirski
      The NT flag doesn't do anything in long mode other than causing IRET
      to #GP.  Oddly, CPL3 code can still set NT using popf.
      
      Entry via hardware or software interrupt clears NT automatically, so
      the only relevant entries are fast syscalls.
      
      If user code causes kernel code to run with NT set, then there's at
      least some (small) chance that it could cause trouble.  For example,
      user code could cause a call to EFI code with NT set, and who knows
      what would happen?  Apparently some games on Wine sometimes do
      this (!), and, if an IRET return happens, they will segfault.  That
      segfault cannot be handled, because signal delivery fails, too.
      
      This patch programs the CPU to clear NT on entry via SYSCALL (both
      32-bit and 64-bit, by my reading of the AMD APM), and it clears NT
      in software on entry via SYSENTER.
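      
      The hardware side of this is a one-bit change to the SYSCALL flag mask
      (a sketch of the syscall_init() write; the exact set of other bits is
      from memory):
      
      	/* RFLAGS bits that the CPU clears automatically on SYSCALL entry: */
      	wrmsrl(MSR_SYSCALL_MASK,
      	       X86_EFLAGS_TF | X86_EFLAGS_DF | X86_EFLAGS_IF |
      	       X86_EFLAGS_IOPL | X86_EFLAGS_AC | X86_EFLAGS_NT);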
      
      To save a few cycles, this borrows a trick from Jan Beulich in Xen:
      it checks whether NT is set before trying to clear it.  As a result,
      it seems to have very little effect on SYSENTER performance on my
      machine.
      
      There's another minor bug fix in here: it looks like the CFI
      annotations were wrong if CONFIG_AUDITSYSCALL=n.
      
      Testers beware: on Xen, SYSENTER with NT set turns into a GPF.
      
      I haven't touched anything on 32-bit kernels.
      
      The syscall mask change comes from a variant of this patch by Anish
      Bhatt.
      
      Note to stable maintainers: there is no known security issue here.
      A misguided program can set NT and cause the kernel to try and fail
      to deliver SIGSEGV, crashing the program.  This patch fixes Far Cry
      on Wine: https://bugs.winehq.org/show_bug.cgi?id=33275
      
      Cc: <stable@vger.kernel.org>
      Reported-by: Anish Bhatt <anish@chelsio.com>
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Link: http://lkml.kernel.org/r/395749a5d39a29bd3e4b35899cf3a3c1340e5595.1412189265.git.luto@amacapital.net
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  18. 16 September 2014 (1 commit)
  19. 12 September 2014 (1 commit)
  20. 27 August 2014 (1 commit)
    • x86: Replace __get_cpu_var uses · 89cbc767
      Authored by Christoph Lameter
      __get_cpu_var() is used for multiple purposes in the kernel source. One of
      them is address calculation via the form &__get_cpu_var(x).  This calculates
      the address of the current processor's instance of the percpu variable,
      based on an offset.
      
      Other use cases are storing data to and retrieving data from the current
      processor's percpu area.  __get_cpu_var() can be used as an lvalue when
      writing data, or on the right-hand side of an assignment.
      
      __get_cpu_var() is defined as :
      
      #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
      
      __get_cpu_var() only ever performs an address calculation. Store and
      retrieve operations, however, could use a segment prefix (or a global
      register on other platforms) and avoid the address calculation entirely.
      
      this_cpu_write() and this_cpu_read() can directly take an offset into a
      percpu area and use optimized assembly code to read and write per cpu
      variables.
      
      This patch converts __get_cpu_var into either an explicit address
      calculation using this_cpu_ptr() or into a use of this_cpu operations that
      use the offset.  Thereby, address calculations are avoided and fewer
      registers are used in the generated code.
      
      Transformations done to __get_cpu_var()
      
      1. Determine the address of the percpu instance of the current processor.
      
      	DEFINE_PER_CPU(int, y);
      	int *x = &__get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(&y);
      
      2. Same as #1 but this time an array structure is involved.
      
      	DEFINE_PER_CPU(int, y[20]);
      	int *x = __get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(y);
      
      3. Retrieve the content of the current processor's instance of a per cpu
      variable.
      
      	DEFINE_PER_CPU(int, y);
      	int x = __get_cpu_var(y)
      
         Converts to
      
      	int x = __this_cpu_read(y);
      
      4. Retrieve the content of a percpu struct
      
      	DEFINE_PER_CPU(struct mystruct, y);
      	struct mystruct x = __get_cpu_var(y);
      
         Converts to
      
      	memcpy(&x, this_cpu_ptr(&y), sizeof(x));
      
      5. Assignment to a per cpu variable
      
      	DEFINE_PER_CPU(int, y)
      	__get_cpu_var(y) = x;
      
         Converts to
      
      	__this_cpu_write(y, x);
      
      6. Increment/Decrement etc of a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y)++
      
         Converts to
      
      	__this_cpu_inc(y)
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86@kernel.org
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  21. 18 August 2014 (1 commit)
    • x86: Support compiling out human-friendly processor feature names · 9def39be
      Authored by Josh Triplett
      The table mapping CPUID bits to human-readable strings takes up a
      non-trivial amount of space, and only exists to support /proc/cpuinfo
      and a couple of kernel messages.  Since programs depend on the format of
      /proc/cpuinfo, force inclusion of the table when building with /proc
      support; otherwise, support omitting that table to save space, in which
      case the kernel messages will print features numerically instead.
      
      In addition to saving 1408 bytes out of vmlinux, this also saves 1373
      bytes out of the uncompressed setup code, which contributes directly to
      the size of bzImage.
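      
      A rough sketch of the shape of the change (the fallback shown here is
      illustrative, not the exact header):
      
      	#ifdef CONFIG_X86_FEATURE_NAMES
      	extern const char * const x86_cap_flags[NCAPINTS * 32];
      	#define x86_cap_flag(flag) x86_cap_flags[flag]
      	#else
      	/* No name table: boot messages print the feature's word/bit
      	 * numbers instead of a human-readable string. */
      	#define x86_cap_flag(flag) ((const char *)NULL)
      	#endif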
      Signed-off-by: Josh Triplett <josh@joshtriplett.org>
  22. 31 July 2014 (1 commit)
  23. 09 June 2014 (1 commit)
  24. 05 June 2014 (1 commit)
  25. 30 May 2014 (2 commits)
  26. 06 May 2014 (2 commits)
  27. 24 April 2014 (1 commit)
  28. 07 March 2014 (1 commit)
    • x86: Keep thread_info on thread stack in x86_32 · 198d208d
      Authored by Steven Rostedt
      x86_64 uses a per-cpu variable, kernel_stack, that always points to
      the thread stack of current. This is where thread_info is stored, and
      it is accessed from this location even when the irq or exception stack
      is in use. That removes the complexity of maintaining thread_info on
      the stack while interrupts are running, and of copying preempt_count
      and other fields over to the interrupt stack.
      
      x86_32 uses the old method of copying the thread_info from the thread
      stack to the exception stack just before executing the exception.
      
      Having the two architectures differ requires #ifdefs, and the x86_32
      way is also a bit of a pain to maintain. By converting x86_32 to the
      same method as x86_64, we can remove the #ifdefs, clean up the x86_32
      code a little, and remove the overhead of the copy.
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/20110806012354.263834829@goodmis.org
      Link: http://lkml.kernel.org/r/20140206144321.852942014@goodmis.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  29. 28 February 2014 (2 commits)