1. 19 Jun 2010, 1 commit
    • x86, olpc: Add support for calling into OpenFirmware · fd699c76
      Committed by Andres Salomon
      Add support for saving OFW's CIF (client interface) entry point, and
      for later calling into it to run OFW commands.  OFW remains resident
      in memory, living within the virtual range
      0xff800000 - 0xffc00000.  A single page directory entry points to the
      pgdir that OFW actually uses, so rather than saving the entire page
      table, we grab and install that one entry permanently in the kernel's
      page table.
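      
      As a rough sketch of that trick, assuming the boot page tables set up
      by OFW are still live in %cr3 and directly accessible (identifier
      names here are illustrative, not necessarily the patch's own):
      
      #include <asm/pgtable.h>        /* pgd_t, pgd_index() */
      #include <asm/page.h>           /* __va() */
      
      #define OFW_VADDR       0xff800000UL    /* start of OFW's range */
      
      static pgd_t olpc_ofw_pgd;              /* the single saved entry */
      
      /* Save the one page directory entry covering OFW's virtual range,
       * so it can later be installed into the kernel's page table. */
      static void __init save_ofw_pde(void)
      {
              pgd_t *base = __va(read_cr3());         /* boot pgdir */
      
              olpc_ofw_pgd = base[pgd_index(OFW_VADDR)];
      }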
      
      This is currently only used by the OLPC XO.  Note that this particular
      calling convention breaks PAE and PAT, and so cannot be used on newer
      x86 hardware.
      Signed-off-by: Andres Salomon <dilinger@queued.net>
      LKML-Reference: <20100618174653.7755a39a@dev.queued.net>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  2. 26 Mar 2010, 1 commit
    • x86, perf, bts, mm: Delete the never used BTS-ptrace code · faa4602e
      Committed by Peter Zijlstra
      Support for the PMU's BTS features was upstreamed in v2.6.32, but we
      still carry the old, disabled ptrace-BTS code, as Linus noticed not
      so long ago.
      
      It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without
      regard for other uses (perf) and doesn't provide the flexibility
      needed for perf either.
      
      Its only users would be ptrace-block-step and ptrace-bts, but
      ptrace-bts was never used and ptrace-block-step can be implemented
      using a much simpler approach.
      
      So axe all 3000 lines of it. That includes the *locked_memory*()
      APIs in mm/mlock.c as well.
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Markus Metzger <markus.t.metzger@intel.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <20100325135413.938004390@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  3. 25 Feb 2010, 1 commit
    • x86, apbt: Moorestown APB system timer driver · bb24c471
      Committed by Jacob Pan
      The Moorestown platform does not have PIT or HPET platform timers.
      Instead it has a bank of eight APB timers.  The number of timers
      available to the OS is exposed via SFI mtmr tables.  All APB timer
      interrupts are routed via IOAPIC RTEs and delivered as MSI.
      Currently, we use timers 0 and 1 for the per-CPU clockevent devices
      and timer 2 for the clocksource.
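      
      As a hedged sketch of the timer-2 clocksource half of that split (the
      apbt_readl() accessor and the down-counting register are assumptions
      for illustration, not lifted from the driver):
      
      #include <linux/clocksource.h>
      
      /* APB timer 2 is a free-running down-counter; invert the reading so
       * the timekeeping core sees a monotonically increasing value. */
      static cycle_t apbt_clocksource_read(struct clocksource *cs)
      {
              return (cycle_t)~apbt_readl(2, APBTMR_N_CURRENT_VALUE);
      }
      
      static struct clocksource clocksource_apbt = {
              .name   = "apbt",
              .rating = 250,          /* below the TSC, above jiffies */
              .read   = apbt_clocksource_read,
              .mask   = CLOCKSOURCE_MASK(32),
              .flags  = CLOCK_SOURCE_IS_CONTINUOUS,
      };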
      Signed-off-by: Jacob Pan <jacob.jun.pan@intel.com>
      LKML-Reference: <43F901BD926A4E43B106BF17856F0755A318D2D2@orsmsx508.amr.corp.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  4. 17 Feb 2010, 1 commit
  5. 13 Feb 2010, 1 commit
  6. 16 Dec 2009, 1 commit
  7. 31 Aug 2009, 2 commits
  8. 29 Aug 2009, 1 commit
  9. 27 Aug 2009, 1 commit
    • x86: Add x86_init infrastructure · 57844a8f
      Committed by Thomas Gleixner
      The upcoming Moorestown support brings the embedded world to x86.
      The x86 setup code already has a couple of hooks which are either
      x86_quirks or paravirt ops.  Some of those setup hooks are pretty
      convoluted, like the timer setup and the TSC calibration code, but
      there are other places which could do with a cleanup.
      
      Instead of having inline functions/macros which are modified at
      compile time, I decided to introduce x86_init ops which are
      unconditional in the code and make it clear that they can be changed
      either at compile time or in the early boot process.  The function
      pointers are initialized with default functions which can be noops,
      so that the pointers can be called unconditionally in most cases.
      This also allows us to remove 32-bit/64-bit, paravirt and other
      #ifdeffery.
      
      Paravirt guests are just a hardware platform in the setup code, so we
      should treat them as such and not hide everything behind multiple
      layers of indirection and compile-time dependencies.
      
      It's more obvious that x86_init.timers.timer_init() is a function
      pointer than the late_time_init = choose_time_init() obscurity. It's
      also way simpler to grep for x86_init.timers.timer_init and find all
      the places which modify that function pointer instead of analyzing
      weak functions, macros and paravirt indirections.
      
      Note: this is not a general paravirt_ops replacement.  It just moves
      setup-related hooks, which are potentially useful for other platform
      setup purposes as well, out of the paravirt domain.
      
      Add the base infrastructure without any functionality.
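      
      A condensed sketch of the resulting pattern (structure and member
      names are abridged here; the real layout lives in
      arch/x86/include/asm/x86_init.h):
      
      /* A default that does nothing, so callers never check for NULL. */
      void __init x86_init_noop(void) { }
      
      struct x86_init_timers {
              void (*timer_init)(void);
      };
      
      struct x86_init_ops {
              struct x86_init_timers timers;
      };
      
      /* Statically initialized with safe defaults for standard PCs... */
      struct x86_init_ops x86_init = {
              .timers = { .timer_init = x86_init_noop, },
      };
      
      /* ...and overridden early by a platform or paravirt guest: */
      static void __init my_platform_setup(void)      /* illustrative */
      {
              x86_init.timers.timer_init = my_platform_timer_init;
      }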
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  10. 22 Jul 2009, 1 commit
    • x86, intel_txt: Intel TXT boot support · 31625340
      Committed by Joseph Cihula
      This patch adds kernel configuration and boot support for Intel Trusted
      Execution Technology (Intel TXT).
      
      Intel's technology for safer computing, Intel Trusted Execution
      Technology (Intel TXT), defines platform-level enhancements that
      provide the building blocks for creating trusted platforms.
      
      Intel TXT was formerly known by the code name LaGrande Technology (LT).
      
      Intel TXT in Brief:
      o  Provides dynamic root of trust for measurement (DRTM)
      o  Data protection in case of improper shutdown
      o  Measurement and verification of launched environment
      
      Intel TXT is part of the vPro(TM) brand and is also available on some
      non-vPro systems.  It is currently available on desktop systems based
      on the Q35, X38, Q45, and Q43 Express chipsets (e.g. Dell OptiPlex
      755, HP dc7800, etc.) and on mobile systems based on the GM45, PM45,
      and GS45 Express chipsets.
      
      For more information, see http://www.intel.com/technology/security/.
      This site also has a link to the Intel TXT MLE Developers Manual, which
      has been updated for the new released platforms.
      
      A much more complete description of how these patches support TXT, how to
      configure a system for it, etc. is in the Documentation/intel_txt.txt file
      in this patch.
      
      This patch provides the TXT support routines for complete
      functionality, documentation for TXT support and for the changes to
      the boot_params structure, and boot detection of a TXT launch.
      Attempts to shut down (reboot, Sx) the system will currently result
      in platform resets; subsequent patches will support these shutdown
      modes properly.
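      
      The boot detection mentioned above boils down to checking a new
      boot_params field filled in by the launch environment; a minimal
      sketch, with the validation of the shared tboot page elided:
      
      #include <asm/bootparam.h>
      
      static int tboot_enabled;   /* set if we came up via a TXT launch */
      
      static void __init tboot_probe(void)
      {
              /* A zero tboot_addr means no measured launch happened. */
              if (!boot_params.tboot_addr)
                      return;
      
              /* Map and sanity-check the shared tboot page here
               * (version/signature checks elided in this sketch). */
              tboot_enabled = 1;
      }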
      
       Documentation/intel_txt.txt      |  210 +++++++++++++++++++++
       Documentation/x86/zero-page.txt  |    1
       arch/x86/include/asm/bootparam.h |    3
       arch/x86/include/asm/fixmap.h    |    3
       arch/x86/include/asm/tboot.h     |  197 ++++++++++++++++++++
       arch/x86/kernel/Makefile         |    1
       arch/x86/kernel/setup.c          |    4
       arch/x86/kernel/tboot.c          |  379 +++++++++++++++++++++++++++++++++++++++
       security/Kconfig                 |   30 +++
       9 files changed, 827 insertions(+), 1 deletion(-)
      Signed-off-by: Joseph Cihula <joseph.cihula@intel.com>
      Signed-off-by: Shane Wang <shane.wang@intel.com>
      Signed-off-by: Gang Wei <gang.wei@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  11. 07 Jul 2009, 1 commit
  12. 19 Jun 2009, 1 commit
    • gcov: enable GCOV_PROFILE_ALL for x86_64 · 7bf99fb6
      Committed by Peter Oberparleiter
      Enable gcov profiling of the entire kernel on x86_64. Required changes
      include disabling profiling for:
      
      * arch/kernel/acpi/realmode and arch/kernel/boot/compressed:
        not linked to main kernel
      * arch/vdso, arch/kernel/vsyscall_64 and arch/kernel/hpet:
        profiling causes segfaults during boot (incompatible context)
      Signed-off-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Li Wei <W.Li@Sun.COM>
      Cc: Michael Ellerman <michaele@au1.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Heiko Carstens <heicars2@linux.vnet.ibm.com>
      Cc: Martin Schwidefsky <mschwid2@linux.vnet.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: WANG Cong <xiyou.wangcong@gmail.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 12 Jun 2009, 2 commits
  14. 03 Jun 2009, 1 commit
  15. 16 May 2009, 1 commit
    • x86: Fix performance regression caused by paravirt_ops on native kernels · b4ecc126
      Committed by Jeremy Fitzhardinge
      Xiaohui Xin and some other folks at Intel have been looking into what's
      behind the performance hit of paravirt_ops when running native.
      
      It appears that the hit is entirely due to the paravirtualized
      spinlocks introduced by:
      
       | commit 8efcbab6
       | Date:   Mon Jul 7 12:07:51 2008 -0700
       |
       |     paravirt: introduce a "lock-byte" spinlock implementation
      
      The extra call/return in the spinlock path is somehow causing an
      increase in cycles per instruction (CPI) of somewhere around 2-7% (it
      seems to vary quite a lot from test to test).  The working theory is
      that the CPU's pipeline is getting upset about the
      call->call->locked-op->return->return, and seems to be failing to
      speculate (though I haven't seen anything definitive about the precise
      reasons).  This doesn't entirely make sense, because the performance
      hit is also visible on unlock and other operations which don't involve
      locked instructions.  But spinlock operations clearly swamp all the
      other pvops operations, even though I can't imagine that they're
      nearly as common (there's only a .05% increase in instructions
      executed).
      
      If I disable just the pv-spinlock calls, my tests show that pvops is
      identical to non-pvops performance on native (my measurements show that
      it is actually about .1% faster, but Xiaohui shows a .05% slowdown).
      
      Summary of results, averaging 10 runs of the "mmperf" test, using a
      no-pvops build as baseline:
      
                           nopv       Pv-nospin   Pv-spin
      CPU cycles         100.00%      99.89%     102.18%
      instructions       100.00%     100.10%     100.15%
      CPI                100.00%      99.79%     102.03%
      cache ref          100.00%     100.84%     100.28%
      cache miss         100.00%      90.47%      88.56%
      cache miss rate    100.00%      89.72%      88.31%
      branches           100.00%      99.93%     100.04%
      branch miss        100.00%     103.66%     107.72%
      branch miss rt     100.00%     103.73%     107.67%
      wallclock          100.00%      99.90%     102.20%
      
      The clear effect here is that the 2% increase in CPI is
      directly reflected in the final wallclock time.
      
      (The other interesting effect is that the more ops are
      out of line calls via pvops, the lower the cache access
      and miss rates.  Not too surprising, but it suggests that
      the non-pvops kernel is over-inlined.  On the flipside,
      the branch misses go up correspondingly...)
      
      So, what's the fix?
      
      Paravirt patching turns all the pvops calls into direct calls, so
      _spin_lock etc do end up having direct calls.  For example, the compiler
      generated code for paravirtualized _spin_lock is:
      
      <_spin_lock+0>:		mov    %gs:0xb4c8,%rax
      <_spin_lock+9>:		incl   0xffffffffffffe044(%rax)
      <_spin_lock+15>:	callq  *0xffffffff805a5b30
      <_spin_lock+22>:	retq
      
      The indirect call will get patched to:
      <_spin_lock+0>:		mov    %gs:0xb4c8,%rax
      <_spin_lock+9>:		incl   0xffffffffffffe044(%rax)
      <_spin_lock+15>:	callq <__ticket_spin_lock>
      <_spin_lock+20>:	nop; nop		/* or whatever 2-byte nop */
      <_spin_lock+22>:	retq
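      
      For reference, the shape of that indirection, heavily simplified (the
      real definitions use paravirt patching macros rather than a plain
      indirect call):
      
      /* Every spinlock operation dispatches through a patchable table. */
      struct pv_lock_ops {
              void (*spin_lock)(raw_spinlock_t *lock);
              void (*spin_unlock)(raw_spinlock_t *lock);
      };
      
      extern struct pv_lock_ops pv_lock_ops;  /* ticket-lock ops on native */
      
      static inline void __raw_spin_lock(raw_spinlock_t *lock)
      {
              /* Compiles to the indirect callq shown above; patching later
               * turns the site into a direct call to __ticket_spin_lock(). */
              pv_lock_ops.spin_lock(lock);
      }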
      
      One possibility is to inline _spin_lock, etc, when building an
      optimised kernel (ie, when there's no spinlock/preempt
      instrumentation/debugging enabled).  That will remove the outer
      call/return pair, returning the instruction stream to a single
      call/return, which will presumably execute the same as the non-pvops
      case.  The downsides are: 1) it will replicate the
      preempt_disable/enable code at each lock/unlock callsite; this code is
      fairly small, but not nothing; and 2) the spinlock definitions are
      already a very heavily tangled mass of #ifdefs and other preprocessor
      magic, and making any changes will be non-trivial.
      
      The other obvious answer is to disable pv-spinlocks.  Making them a
      separate config option is fairly easy, and it would be trivial to
      enable them only when Xen is enabled (as the only non-default user).
      But it doesn't really address the common case of a distro build which
      is going to have Xen support enabled, and leaves the open question of
      whether the native performance cost of pv-spinlocks is worth the
      performance improvement on a loaded Xen system (10% saving of overall
      system CPU when guests block rather than spin).  Still, it is a
      reasonable short-term workaround.
      
      [ Impact: fix pvops performance regression when running native ]
      Analysed-by: N"Xin Xiaohui" <xiaohui.xin@intel.com>
      Analysed-by: N"Li Xin" <xin.li@intel.com>
      Analysed-by: N"Nakajima Jun" <jun.nakajima@intel.com>
      Signed-off-by: NJeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Acked-by: NH. Peter Anvin <hpa@zytor.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Xen-devel <xen-devel@lists.xensource.com>
      LKML-Reference: <4A0B62F7.5030802@goop.org>
      [ fixed the help text ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  16. 10 Apr 2009, 1 commit
  17. 26 Mar 2009, 1 commit
  18. 14 Mar 2009, 1 commit
    • tracing/syscalls: support for syscalls tracing on x86, fix · ccd50dfd
      Committed by Ingo Molnar
      Impact: build fix
      
       kernel/built-in.o: In function `ftrace_syscall_exit':
       (.text+0x76667): undefined reference to `syscall_nr_to_meta'
      
      ftrace.o is built:
      
      obj-$(CONFIG_DYNAMIC_FTRACE)    += ftrace.o
      obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
      
      But now a CONFIG_FTRACE_SYSCALLS dependency is needed too, i.e. a
      matching obj-$(CONFIG_FTRACE_SYSCALLS) += ftrace.o rule alongside the
      two above.
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      LKML-Reference: <1236401580-5758-3-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  19. 13 Mar 2009, 1 commit
  20. 05 Mar 2009, 1 commit
  21. 26 Feb 2009, 1 commit
  22. 18 Feb 2009, 3 commits
  23. 17 Feb 2009, 1 commit
  24. 11 Feb 2009, 1 commit
  25. 10 Feb 2009, 1 commit
    • x86: implement x86_32 stack protector · 60a5317f
      Committed by Tejun Heo
      Impact: stack protector for x86_32
      
      Implement stack protector for x86_32.  GDT entry 28 is used for it.
      It's set to point to stack_canary-20 and has a length of 24 bytes.
      CONFIG_CC_STACKPROTECTOR turns off CONFIG_X86_32_LAZY_GS and sets %gs
      to the stack canary segment on entry.  As %gs is otherwise unused by
      the kernel, the canary can be anywhere.  It's defined as a percpu
      variable.
      
      x86_32 exception handlers take the register frame on the stack
      directly as struct pt_regs.  With -fstack-protector turned on, gcc
      copies the whole structure after the stack canary and (of course)
      doesn't copy it back on return, thus losing all changes.  For now,
      -fno-stack-protector is added to all files which contain those
      functions.  We definitely need something better.
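      
      A sketch of the layout this implies (gcc's i386 stack protector
      hardcodes the canary's location as %gs:20, which is what forces the
      odd segment base and the 24-byte limit):
      
      #include <linux/percpu.h>
      
      /* The canary itself is just a per-cpu word. */
      DECLARE_PER_CPU(unsigned long, stack_canary);
      
      static inline void setup_stack_canary_segment(int cpu)
      {
              /* Base the segment at &stack_canary - 20 so the canary sits
               * at %gs:20; with a 24-byte limit the segment covers exactly
               * the pad plus the canary word.  (GDT write elided.) */
              unsigned long base =
                      (unsigned long)&per_cpu(stack_canary, cpu) - 20;
      
              (void)base;     /* install into GDT entry 28 here */
      }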
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  26. 29 Jan 2009, 6 commits
  27. 27 Jan 2009, 2 commits
  28. 23 Jan 2009, 1 commit
  29. 21 Jan 2009, 2 commits