1. 12 Mar 2009, 2 commits
  2. 05 Mar 2009, 1 commit
  3. 25 Feb 2009, 2 commits
  4. 14 Feb 2009, 1 commit
  5. 03 Feb 2009, 1 commit
  6. 31 Jan 2009, 1 commit
  7. 21 Jan 2009, 1 commit
  8. 18 Jan 2009, 4 commits
  9. 16 Jan 2009, 1 commit
    • x86: merge 64 and 32 SMP percpu handling · 9939ddaf
      Committed by Tejun Heo
      Now that the pda is allocated as part of percpu, percpu data doesn't
      need to be accessed through the pda.  Unify x86_64 SMP percpu access
      with the x86_32 one.  Other than the segment register, the operand
      size and the base of the percpu symbols, they behave identically now.
      
      This patch replaces the now unnecessary pda->data_offset with a dummy
      field, which is needed to keep stack_canary in its place.  It also
      moves per_cpu_offset initialization out of init_gdt() into
      setup_per_cpu_areas().  Note that this change also necessitates
      explicit per_cpu_offset initializations in voyager_smp.c.
      
      With this change, the x86_OP_percpu() accessors are as efficient on
      x86_64 as on x86_32, and x86_64 can also use the assembly PER_CPU
      macros.  (A simplified sketch of the per-cpu offset scheme follows
      this entry.)
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
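      A minimal userspace sketch of the per-cpu offset scheme described in
      the commit above.  This is illustrative only: NR_CPUS_DEMO and the
      *_demo names are invented for this example, and the kernel's fast
      accessors go through a segment register (%fs on x86_32, %gs on
      x86_64) rather than a C-level offset-table lookup.

      /* Toy model: one "original" per-cpu variable plus one copy per CPU,
       * each copy reachable by adding that CPU's offset to the original
       * variable's address. */
      #include <stddef.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define NR_CPUS_DEMO 4

      static long irq_count;                       /* stand-in per-cpu counter */
      static char *per_cpu_area[NR_CPUS_DEMO];     /* each CPU's copy          */
      static ptrdiff_t per_cpu_offset_demo[NR_CPUS_DEMO];

      /* Analogous in spirit to setup_per_cpu_areas(): allocate each CPU's
       * copy and record its offset from the original variable. */
      static void setup_per_cpu_areas_demo(void)
      {
              for (int cpu = 0; cpu < NR_CPUS_DEMO; cpu++) {
                      per_cpu_area[cpu] = calloc(1, sizeof(irq_count));
                      if (!per_cpu_area[cpu])
                              exit(1);
                      per_cpu_offset_demo[cpu] =
                              per_cpu_area[cpu] - (char *)&irq_count;
              }
      }

      /* Shift the address of the original variable by the CPU's offset. */
      #define per_cpu_demo(var, cpu) \
              (*(typeof(&(var)))((char *)&(var) + per_cpu_offset_demo[(cpu)]))

      int main(void)
      {
              setup_per_cpu_areas_demo();
              per_cpu_demo(irq_count, 0) = 10;
              per_cpu_demo(irq_count, 3) = 42;
              printf("cpu0=%ld cpu3=%ld\n",
                     per_cpu_demo(irq_count, 0), per_cpu_demo(irq_count, 3));
              for (int cpu = 0; cpu < NR_CPUS_DEMO; cpu++)
                      free(per_cpu_area[cpu]);
              return 0;
      }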
  10. 11 Jan 2009, 1 commit
  11. 17 Dec 2008, 1 commit
  12. 03 Dec 2008, 2 commits
  13. 02 Dec 2008, 1 commit
    • tracing/function-graph-tracer: support for x86-64 · 48d68b20
      Committed by Frederic Weisbecker
      Impact: extend and enable the function graph tracer to 64-bit x86
      
      This patch implements the support for function graph tracer under x86-64.
      Both static and dynamic tracing are supported.
      
      This causes some small CPP-conditional asm in arch/x86/kernel/ftrace.c.
      I wanted to use probe_kernel_read/write to make the return address
      saving/patching code more generic, but it causes tracing recursion.
      
      It would perhaps be useful to implement a notrace version of these
      functions for other arch ports.
      
      Note that arch/x86/process_64.c is not traced, as on x86-32.  I first
      thought __switch_to() was responsible for crashes during tracing
      because I believed the current task was changed inside it, but that's
      actually not the case (the task is indeed switched there, but not the
      "current" pointer).
      
      So I will have to investigate to find the functions that cause
      problems here, in order to enable tracing of the other functions in
      that file (but there is no issue at this time, as long as
      process_64.c stays out of the -pg flags).
      
      A small possible race condition is also fixed in this patch.  When
      the tracer allocates a return stack dynamically, the current depth
      was initialized only after the allocation, not before.  An interrupt
      could occur at that point and, after seeing that the return stack is
      allocated, the tracer could try to trace it with a random,
      uninitialized depth.  This is a preventive fix, even though I have
      not hit the problem myself.  (A simplified sketch of the ordering
      follows this entry.)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tim Bird <tim.bird@am.sony.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
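      A minimal C sketch of the ordering fix described in the commit above.
      The names (task_demo, graph_init_demo, RET_STACK_DEPTH_DEMO) are
      invented for this illustration and this is not the kernel's actual
      ftrace code; the point is only that the depth is initialized, and the
      store ordered, before the return-stack pointer is published.

      #include <stdlib.h>

      #define RET_STACK_DEPTH_DEMO 50

      struct ret_entry_demo {
              unsigned long ret;                /* saved return address */
              unsigned long func;               /* traced function      */
      };

      struct task_demo {
              int curr_ret_depth;               /* -1 means "empty"     */
              struct ret_entry_demo *ret_stack; /* NULL until allocated */
      };

      static int graph_init_demo(struct task_demo *t)
      {
              struct ret_entry_demo *stack;

              stack = calloc(RET_STACK_DEPTH_DEMO, sizeof(*stack));
              if (!stack)
                      return -1;

              /* 1) Initialize the depth first ... */
              t->curr_ret_depth = -1;

              /* 2) ... order the store; the kernel would use a write
               *    barrier (smp_wmb()), a compiler barrier stands in
               *    for it here ... */
              __asm__ __volatile__("" ::: "memory");

              /* 3) ... and only then publish the stack pointer, so an
               *    interrupt that sees ret_stack != NULL also sees a
               *    valid depth. */
              t->ret_stack = stack;
              return 0;
      }

      int main(void)
      {
              struct task_demo t = { .curr_ret_depth = 0, .ret_stack = NULL };
              int err = graph_init_demo(&t);

              free(t.ret_stack);
              return err ? 1 : 0;
      }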
  14. 28 Nov 2008, 3 commits
  15. 27 Nov 2008, 3 commits
  16. 24 Nov 2008, 1 commit
  17. 23 Nov 2008, 3 commits
  18. 22 Nov 2008, 4 commits
  19. 21 Nov 2008, 3 commits
    • x86: entry_64.S: rename · 14ae22ba
      Committed by Ingo Molnar
      Impact: cleanup
      
      Rename:
      
         CFI_PUSHQ  =>  pushq_cfi
         CFI_POPQ   =>  popq_cfi
         CFI_MOVQ   =>  movq_cfi
      
      To make them blend better into regular assembly code.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: clean up after: move entry_64.S register saving out of the macros, fix · e8a0e276
      Committed by Ingo Molnar
      Impact: build fix
      
      This breaks the build with older binutils (2.16.1):
      
       arch/x86/kernel/entry_64.S: Assembler messages:
       arch/x86/kernel/entry_64.S:282: Error: too many positional arguments
       arch/x86/kernel/entry_64.S:283: Error: too many positional arguments
       arch/x86/kernel/entry_64.S:284: Error: too many positional arguments
       arch/x86/kernel/entry_64.S:285: Error: too many positional arguments
       arch/x86/kernel/entry_64.S:286: Error: too many positional arguments
       arch/x86/kernel/entry_64.S:287: Error: too many positional arguments
       arch/x86/kernel/entry_64.S:288: Error: too many positional arguments
       arch/x86/kernel/entry_64.S:289: Error: too many positional arguments
       arch/x86/kernel/entry_64.S:290: Error: too many positional arguments
      
      It took some time to figure out the detail that GAS chokes on:
      negative offsets.  Rearrange the calculations to make sure we never
      go negative.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: clean up after: move entry_64.S register saving out of the macros · dcd072e2
      Committed by Alexander van Heukelum
      This add-on patch to "x86: move entry_64.S register saving out of the
      macros" visually cleans up the appearance of the code by introducing
      some basic helper macros.  It also adds some CFI annotations that
      were missing.
      Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  20. 20 Nov 2008, 1 commit
    • x86: move entry_64.S register saving out of the macros · d99015b1
      Committed by Alexander van Heukelum
      Here is a combined patch that moves "save_args" out-of-line for
      the interrupt macro and moves "error_entry" mostly out-of-line
      for the zeroentry and errorentry macros.
      
      The save_args function becomes really straightforward and easy
      to understand, with the possible exception of the stack switch
      code, which now needs to copy the return address of the
      calling function. Normal interrupts arrive with ((~vector)-0x80)
      on the stack, which gets adjusted in common_interrupt:
      
      <common_interrupt>:
      (5)  addq   $0xffffffffffffff80,(%rsp)		/* -> ~(vector) */
      (4)  sub    $0x50,%rsp				/* space for registers */
      (5)  callq  ffffffff80211290 <save_args>
      (5)  callq  ffffffff80214290 <do_IRQ>
      <ret_from_intr>:
           ...
      
      An APIC interrupt stub now looks like this:
      
      <thermal_interrupt>:
      (5)  pushq  $0xffffffffffffff05			/* ~(vector) */
      (4)  sub    $0x50,%rsp				/* space for registers */
      (5)  callq  ffffffff80211290 <save_args>
      (5)  callq  ffffffff80212b8f <smp_thermal_interrupt>
      (5)  jmpq   ffffffff80211f93 <ret_from_intr>
      
      Similarly the exception handler register saving function becomes
      simpler, without the need for any parameter shuffling. The stub
      for an exception without errorcode looks like this:
      
      <overflow>:
      (6)  callq  *0x1cad12(%rip)        # ffffffff803dd448 <pv_irq_ops+0x38>
      (2)  pushq  $0xffffffffffffffff			/* no syscall */
      (4)  sub    $0x78,%rsp				/* space for registers */
      (5)  callq  ffffffff8030e3b0 <error_entry>
      (3)  mov    %rsp,%rdi				/* pt_regs pointer */
      (2)  xor    %esi,%esi				/* no error code */
      (5)  callq  ffffffff80213446 <do_overflow>
      (5)  jmpq   ffffffff8030e460 <error_exit>
      
      And one for an exception with errorcode like this:
      
      <segment_not_present>:
      (6)  callq  *0x1cab92(%rip)        # ffffffff803dd448 <pv_irq_ops+0x38>
      (4)  sub    $0x78,%rsp				/* space for registers */
      (5)  callq  ffffffff8030e3b0 <error_entry>
      (3)  mov    %rsp,%rdi				/* pt_regs pointer */
      (5)  mov    0x78(%rsp),%rsi			/* load error code */
      (9)  movq   $0xffffffffffffffff,0x78(%rsp)	/* no syscall */
      (5)  callq  ffffffff80213209 <do_segment_not_present>
      (5)  jmpq   ffffffff8030e460 <error_exit>
      
      Unfortunately, this last type is more than 32 bytes.  But the total
      space savings due to this patch are about 2500 bytes for an SMP
      configuration, and I think the code is clearer than it was before.
      The tested kernels were non-paravirt ones (i.e., without the indirect
      call at the top of the exception handlers).
      
      Anyhow, I tested this patch on top of a recent -tip.  The machine was
      a 2x4-core Xeon at 2333 MHz.  Measured were the delays between
      (almost-)adjacent rdtsc instructions.  The graphs show how much time
      is spent outside of the program as a function of the measured delay.
      The area under the graph represents the total time spent outside the
      program.  Eight instances of the rdtsctest were started, each pinned
      to a single CPU.  The histograms were added together.  For each
      kernel two measurements were done: one in a mostly idle condition,
      the other while running "bonnie++ -f", bound to CPU 0.  Each
      measurement took 40 minutes of runtime.  See the attached graphs for
      the results.  The graphs overlap almost everywhere, but there are
      small differences.
      Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  21. 17 Nov 2008, 1 commit
  22. 14 Nov 2008, 1 commit
  23. 13 Nov 2008, 1 commit