1. 15 Nov, 2016 (1 commit)
  2. 11 Nov, 2016 (1 commit)
  3. 10 May, 2016 (1 commit)
  4. 30 Sep, 2015 (1 commit)
  5. 17 Sep, 2015 (1 commit)
    • s390/vtime: correct scaled cputime for SMT · 61cc3790
      Authored by Martin Schwidefsky
      The scaled cputime is supposed to be derived from the normal per-thread
      cputime by dividing it by the average thread density in the last interval.
      
      The calculation of the scaling values for the average thread density is
      incorrect. The current, incorrect calculation:
      
          Ci = cycle count with i active threads
          T = unscaled cputime, sT = scaled cputime
          sT = T * (C1 + C2 + ... + Cn) / (1*C1 + 2*C2 + ... + n*Cn)
      
      The calculation happens to yield the correct numbers for the simple cases
      where only one Ci value is non-zero, but it fails for cases with multiple
      non-zero Ci values. E.g. on an SMT-2 system with one thread active half
      the time and two threads active for the other half, the scaling factor
      should be 3/4 but the formula gives 2/3.
      
      The correct formula is
      
          sT = T * (C1/1 + C2/2 + ... + Cn/n) / (C1 + C2 + ... + Cn)
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      61cc3790
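      A quick numeric check of the corrected formula above, as a standalone
      sketch (plain C; the values are the SMT-2 half-and-half example from the
      commit message, and all names are illustrative rather than kernel code):

        #include <stdio.h>

        /* SMT-2 example: one thread active half the time, two threads active
         * the other half, so the cycle counts C1 and C2 are equal. */
        int main(void)
        {
                double c1 = 100.0, c2 = 100.0;

                /* old (incorrect) factor: (C1 + C2) / (1*C1 + 2*C2) */
                double old_factor = (c1 + c2) / (1.0 * c1 + 2.0 * c2);

                /* new (correct) factor: (C1/1 + C2/2) / (C1 + C2) */
                double new_factor = (c1 / 1.0 + c2 / 2.0) / (c1 + c2);

                printf("old factor = %.4f (2/3)\n", old_factor);
                printf("new factor = %.4f (3/4)\n", new_factor);
                return 0;
        }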
  6. 04 Aug, 2015 (1 commit)
  7. 22 Jan, 2015 (1 commit)
    • s390: add SMT support · 10ad34bc
      Authored by Martin Schwidefsky
      The multi-threading facility is introduced with the z13 processor family.
      This patch adds code to detect the multi-threading facility. With the
      facility enabled, each core surfaces multiple hardware threads to the
      system. Each hardware thread looks like a normal CPU to the operating
      system, with all its registers and properties.
      
      The SCLP interface reports the SMT topology indirectly via the maximum
      thread id. Each reported CPU in the result of a read-scp-information
      is a core representing a number of hardware threads.
      
      To reflect the reduced CPU capacity when two hardware threads run on a
      single core, the MT utilization counter set is used to normalize the
      raw cputime obtained from the CPU timer deltas. This scaled cputime is
      reported via the taskstats interface. The normal /proc/stat numbers
      are based on the raw cputime and are not affected by the normalization.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      10ad34bc
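      To illustrate how a core-based SCLP report expands into per-thread logical
      CPUs, here is a minimal standalone sketch (plain C; the mapping and all
      names are assumptions for illustration only, not the kernel's actual SCLP
      handling):

        #include <stdio.h>

        /* Expand one reported core into its hardware threads (SMT-2 style). */
        int main(void)
        {
                unsigned int max_thread_id = 1; /* as reported via SCLP */
                unsigned int core_address = 4;  /* one read-scp-information entry */
                unsigned int tid;

                for (tid = 0; tid <= max_thread_id; tid++)
                        printf("core %u -> thread address %u\n", core_address,
                               core_address * (max_thread_id + 1) + tid);
                return 0;
        }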
  8. 18 Dec, 2014 (1 commit)
  9. 27 Oct, 2014 (1 commit)
  10. 09 Oct, 2014 (2 commits)
  11. 27 Aug, 2014 (1 commit)
    • s390: Replace __get_cpu_var uses · eb7e7d76
      Authored by Christoph Lameter
      __get_cpu_var() is used for multiple purposes in the kernel source. One of
      them is address calculation via the form &__get_cpu_var(x).  This calculates
      the address for the instance of the percpu variable of the current processor
      based on an offset.
      
      Other use cases are for storing and retrieving data from the current
      processor's percpu area.  __get_cpu_var() can be used as an lvalue when
      writing data or on the right side of an assignment.
      
      __get_cpu_var() is defined as:
      
      #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
      
      __get_cpu_var() always only does an address determination. However, store
      and retrieve operations could use a segment prefix (or global register on
      other platforms) to avoid the address calculation.
      
      this_cpu_write() and this_cpu_read() can directly take an offset into a
      percpu area and use optimized assembly code to read and write per cpu
      variables.
      
      This patch converts __get_cpu_var into either an explicit address
      calculation using this_cpu_ptr() or into a use of this_cpu operations that
      use the offset.  Thereby address calculations are avoided and fewer
      registers are used when code is generated.
      
      At the end of the patch set all uses of __get_cpu_var have been removed so
      the macro is removed too.
      
      The patch set includes passes over all arches as well. Once these operations
      are used throughout, specialized macros can be defined in non-x86
      arches as well in order to optimize per cpu access, e.g. by using a global
      register that may be set to the per cpu base.
      
      Transformations done to __get_cpu_var()
      
      1. Determine the address of the percpu instance of the current processor.
      
      	DEFINE_PER_CPU(int, y);
      	int *x = &__get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(&y);
      
      2. Same as #1 but this time an array structure is involved.
      
      	DEFINE_PER_CPU(int, y[20]);
      	int *x = __get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(y);
      
      3. Retrieve the content of the current processors instance of a per cpu
      variable.
      
      	DEFINE_PER_CPU(int, y);
      	int x = __get_cpu_var(y)
      
         Converts to
      
      	int x = __this_cpu_read(y);
      
      4. Retrieve the content of a percpu struct
      
      	DEFINE_PER_CPU(struct mystruct, y);
      	struct mystruct x = __get_cpu_var(y);
      
         Converts to
      
      	memcpy(&x, this_cpu_ptr(&y), sizeof(x));
      
      5. Assignment to a per cpu variable
      
      	DEFINE_PER_CPU(int, y)
      	__get_cpu_var(y) = x;
      
         Converts to
      
      	this_cpu_write(y, x);
      
      6. Increment/Decrement etc of a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y)++
      
         Converts to
      
      	this_cpu_inc(y)
      
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: linux390@de.ibm.com
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      eb7e7d76
  12. 31 Oct, 2013 (1 commit)
  13. 24 Oct, 2013 (1 commit)
  14. 14 Aug, 2013 (1 commit)
    • vtime: Describe overriden functions in dedicated arch headers · a5725ac2
      Authored by Frederic Weisbecker
      If the arch overrides some generic vtime APIs, let it describe
      these in a dedicated, standalone header. This way it becomes
      convenient to include it from the generic vtime headers without
      pulling irrelevant material into such a low-level header.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      a5725ac2
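      For illustration, such a dedicated arch header might look like the sketch
      below; the guard, macro and prototype are placeholders rather than the
      actual s390 header contents:

        /* hypothetical asm/vtime.h: one small header for the arch-overridden
         * vtime hooks, so generic vtime headers can include it directly */
        #ifndef _ASM_VTIME_H
        #define _ASM_VTIME_H

        struct task_struct;

        /* placeholder marker: tell generic code the arch provides its own hook */
        #define __ARCH_HAS_VTIME_TASK_SWITCH
        void vtime_task_switch(struct task_struct *prev);

        #endif /* _ASM_VTIME_H */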
  15. 15 Jul, 2013 (1 commit)
    • s390: delete __cpuinit usage from all s390 files · e2741f17
      Authored by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove the __cpuinit from
      arch specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/s390 uses of the __cpuinit macros from
      all C files.  Currently s390 does not have any __CPUINIT used in
      assembly files.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: linux390@de.ibm.com
      Cc: linux-s390@vger.kernel.org
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      e2741f17
  16. 08 Apr, 2013 (1 commit)
  17. 14 Feb, 2013 (1 commit)
  18. 28 Jan, 2013 (1 commit)
    • cputime: Safely read cputime of full dynticks CPUs · 6a61671b
      Authored by Frederic Weisbecker
      While remotely reading the cputime of a task running on a
      full dynticks CPU, the values stored in the utime/stime fields
      of struct task_struct may be stale. These values may be those
      of the last kernel <-> user transition snapshot, and we need to
      add the tickless time spent since that snapshot.
      
      To fix this, flush the cputime of the dynticks CPUs on
      kernel <-> user transition and record the time / context
      where we did this. Then on top of this snapshot and the current
      time, perform the fixup on the reader side from task_times()
      accessors.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      [fixed kvm module related build errors]
      Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com>
      6a61671b
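      The reader-side fixup boils down to "time flushed so far plus the tickless
      time since the last snapshot". A standalone model of that idea in plain C
      (all names are invented for the illustration; the kernel's actual code
      additionally uses seqcount-protected per-task state):

        #include <stdio.h>
        #include <time.h>

        /* per-task state, updated at each kernel <-> user transition */
        struct vtime_snap {
                double acc_utime;       /* user time flushed so far, seconds */
                double snap;            /* timestamp of the last flush */
        };

        static double now(void)
        {
                struct timespec ts;
                clock_gettime(CLOCK_MONOTONIC, &ts);
                return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        /* reader side: stored value plus the time elapsed since the snapshot */
        static double read_utime(const struct vtime_snap *v, int in_user)
        {
                double t = v->acc_utime;
                if (in_user)
                        t += now() - v->snap;
                return t;
        }

        int main(void)
        {
                struct vtime_snap v = { .acc_utime = 1.5, .snap = now() };
                printf("utime ~ %.3f s\n", read_utime(&v, 1));
                return 0;
        }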
  19. 20 Nov, 2012 (1 commit)
    • vtime: Warn if irqs aren't disabled on system time accounting APIs · 1b2852b1
      Authored by Frederic Weisbecker
      System time accounting APIs such as vtime_account_system() and
      vtime_account_idle() need to be irqsafe. Current callers include
      irq entry, exit and kvm, all of which have been checked against that
      requirement. It is better to add an automatic check in case further
      callers appear or something was missed.
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      1b2852b1
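      The automatic check is a one-liner at the top of each accounting function.
      A minimal kernel-style sketch of the pattern (not the verbatim patch; the
      function body is elided):

        void vtime_account_system(struct task_struct *tsk)
        {
                /* these APIs must run with interrupts disabled */
                WARN_ON_ONCE(!irqs_disabled());

                /* ... account the elapsed system time to tsk ... */
        }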
  20. 19 Nov, 2012 (2 commits)
    • vtime: Explicitly account pending user time on process tick · bcebdf84
      Authored by Frederic Weisbecker
      All vtime implementations just flush the user time on process
      tick. Consolidate that in generic code by calling a user time
      accounting helper. This avoids an indirect call on ia64 and
      prepares for also consolidating the vtime context switch code.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      bcebdf84
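      Conceptually, the consolidation moves the flush into the generic tick
      path. A rough sketch of the resulting shape (simplified and hedged: the
      real code sits behind the virtual cputime accounting config options and
      may differ in detail):

        #ifdef CONFIG_VIRT_CPU_ACCOUNTING
        void account_process_tick(struct task_struct *p, int user_tick)
        {
                /* every vtime arch flushes pending user time here */
                vtime_account_user(p);
        }
        #endif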
    • vtime: Remove the underscore prefix invasion · fd25b4c2
      Authored by Frederic Weisbecker
      Prepending the irq-unsafe vtime APIs with underscores was actually
      a bad idea, as the result is a big mess in an API namespace that
      is about to be extended further. Also, these helpers are always
      called from irq-safe callers, except for kvm. Just provide a
      vtime_account_system_irqsafe() for this specific case so that we
      can remove the underscore prefix from the other vtime functions.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      fd25b4c2
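      The irqsafe variant is just a wrapper that disables interrupts around the
      plain call. A minimal kernel-style sketch of that pattern (not necessarily
      the verbatim patch):

        void vtime_account_system_irqsafe(struct task_struct *tsk)
        {
                unsigned long flags;

                /* callers such as kvm may run with interrupts enabled */
                local_irq_save(flags);
                vtime_account_system(tsk);
                local_irq_restore(flags);
        }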
  21. 30 Oct, 2012 (1 commit)
    • vtime: Make vtime_account_system() irqsafe · 11113334
      Authored by Frederic Weisbecker
      vtime_account_system() currently has only one caller, vtime_account(),
      which is irq safe.
      
      Now we are going to call it from other places like kvm where
      irqs are not always disabled by the time we account the cputime.
      
      So let's make it irqsafe. The arch implementation part is now
      prefixed with "__".
      
      The vtime_account_idle() arch implementation is prefixed accordingly,
      to stay consistent.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      11113334
  22. 26 Sep, 2012 (1 commit)
  23. 25 Sep, 2012 (1 commit)
    • cputime: Use a proper subsystem naming for vtime related APIs · bf9fae9f
      Authored by Frederic Weisbecker
      Use a naming based on vtime as the prefix for the virtual-cputime
      accounting APIs:
      
      - account_system_vtime() -> vtime_account()
      - account_switch_vtime() -> vtime_task_switch()
      
      It makes it easier to allow for further derivatives such
      as vtime_account_system(), vtime_account_idle(), ... if we
      want to find out, from generic code, the context we account to.
      This also makes it clearer which subsystem these APIs
      refer to.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      bf9fae9f
  24. 20 Aug, 2012 (1 commit)
    • cputime: Consolidate vtime handling on context switch · baa36046
      Authored by Frederic Weisbecker
      The archs that implement virtual cputime accounting all
      flush the cputime of a task when it gets descheduled
      and sometimes perform some initialization so that the
      next task's cputime can be accounted.
      
      These archs all put their own hooks in their context
      switch callbacks and handle the off-case themselves.
      
      Consolidate this by creating a new account_switch_vtime()
      callback, called from generic code right after a context switch,
      which these archs must implement to flush the prev task's
      cputime and initialize the next task's cputime related state.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      baa36046
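      A rough sketch of the resulting shape, with generic code invoking the arch
      hook right after the switch (simplified: the real call site, arguments and
      config guards are omitted):

        /* generic scheduler code, right after a context switch (sketch) */
        static void finish_task_switch_sketch(struct task_struct *prev)
        {
                account_switch_vtime(prev); /* arch hook: flush prev, prime next */
                /* ... remaining post-switch work ... */
        }

        /* arch side (e.g. s390) provides the implementation */
        void account_switch_vtime(struct task_struct *prev)
        {
                /* flush prev's accumulated cputime, set up the next task */
        }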
  25. 20 Jul, 2012 (1 commit)
  26. 11 Mar, 2012 (3 commits)
    • [S390] irq: external interrupt code passing · fde15c3a
      Authored by Heiko Carstens
      The external interrupt handlers have a parameter called ext_int_code.
      Despite its name, this parameter contains not only the ext_int_code
      but also the "cpu address" (POP) which caused the external interrupt.
      To make the code a bit more obvious, pass a struct instead, so the called
      function can easily distinguish between the external interrupt code and
      the cpu address. The cpu address field, however, is named "subcode" since
      some external interrupt sources do not pass a cpu address but a
      different parameter (or none at all).
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      fde15c3a
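      The change amounts to replacing the combined integer with a small
      two-field struct. A sketch of the shape (field layout and the handler
      signature are illustrative; see the actual arch headers for the real
      definition):

        /* carry the interrupt code and the source-dependent subcode separately */
        struct ext_code {
                unsigned short subcode; /* e.g. the cpu address for some sources */
                unsigned short code;    /* the external interrupt code itself */
        };

        /* handlers receive both values without any bit-twiddling */
        void example_ext_handler(struct ext_code ext_code,
                                 unsigned int param32, unsigned long param64);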
    • [S390] rework idle code · 4c1051e3
      Authored by Martin Schwidefsky
      Whenever the cpu loads an enabled wait PSW it will appear as idle to the
      underlying host system. The code in default_idle calls vtime_stop_cpu,
      which does the necessary voodoo to get the cpu time accounting right.
      The udelay code just loads an enabled wait PSW. To correct this, rework
      the vtime_stop_cpu/vtime_start_cpu logic and move the difficult parts
      to entry[64].S; vtime_stop_cpu can now be called from anywhere and
      vtime_start_cpu is gone. The correction of the cpu time during wakeup
      from an enabled wait PSW is done with a critical section in entry[64].S.
      As vtime_start_cpu is gone, s390_idle_check can be removed as well.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      4c1051e3
    • [S390] rework smp code · 8b646bd7
      Authored by Martin Schwidefsky
      Define struct pcpu and merge some of the NR_CPUS arrays into it, including
      __cpu_logical_map, current_set and smp_cpu_state. Split smp related
      functions to those operating on physical cpus and the functions operating
      on a logical cpu number. Make the functions for physical cpus use a
      pointer to a struct pcpu. This hides the knowledge about cpu addresses in
      smp.c, entry[64].S and swsusp_asm64.S, thus remove the sigp.h header.
      
      The PSW restart mechanism is used to start secondary cpus, calling a
      function on an online cpu, calling a function on the ipl cpu, and for
      the nmi signal. Replace the different assembler functions with a
      single function restart_int_handler. The new entry point calls a function
      whose pointer is stored in the lowcore of the target cpu and it can wait
      for the source cpu to stop. This covers all existing use cases.
      
      Overall the code is now simpler and there are ~380 fewer lines of code.
      Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      8b646bd7
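      As an illustration of the consolidation, the per-cpu data that used to
      live in separate NR_CPUS arrays ends up grouped in one structure. A sketch
      along those lines (the fields are derived from the arrays named in the
      commit message and are illustrative, not the exact kernel definition):

        /* one instance per logical cpu, replacing several NR_CPUS arrays */
        struct pcpu {
                unsigned long address;          /* physical cpu address (was __cpu_logical_map) */
                struct task_struct *task;       /* per-cpu task pointer (was current_set) */
                int state;                      /* cpu state (was smp_cpu_state) */
        };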
  27. 30 Oct, 2011 (2 commits)
  28. 26 May, 2011 (1 commit)
  29. 31 Mar, 2011 (1 commit)
  30. 05 Jan, 2011 (2 commits)
  31. 01 Dec, 2010 (1 commit)
    • [S390] nohz/s390: fix arch_needs_cpu() return value on offline cpus · 39881215
      Authored by Heiko Carstens
      This fixes the same problem as described in the patch "nohz: fix
      printk_needs_cpu() return value on offline cpus" for the arch_needs_cpu()
      primitive:
      
      arch_needs_cpu() may return 1 if called on offline cpus. When a cpu gets
      offlined, it schedules the idle process which, before killing its own cpu,
      will call tick_nohz_stop_sched_tick().
      That function in turn will call arch_needs_cpu() in order to check if the
      local tick can be disabled. On offline cpus this function should naturally
      return 0 since, regardless of whether the tick gets disabled or not, the cpu
      will be dead shortly after. That is besides the fact that __cpu_disable()
      should already have made sure that no interrupts on the offlined cpu will
      be delivered anyway.
      
      In this case it prevents tick_nohz_stop_sched_tick() to call
      select_nohz_load_balancer(). No idea if that really is a problem. However what
      made me debug this is that on 2.6.32 the function get_nohz_load_balancer() is
      used within __mod_timer() to select a cpu on which a timer gets enqueued.
      If arch_needs_cpu() returns 1 then the nohz_load_balancer cpu doesn't get
      updated when a cpu gets offlined. It may contain the cpu number of an offline
      cpu. In turn, timers get enqueued on an offline cpu and, not very surprisingly,
      they never expire and cause system hangs.
      
      This has been observed with 2.6.32 kernels. On current kernels __mod_timer()
      uses get_nohz_timer_target(), which doesn't have that problem. However there
      might be other problems because of the too early exit of
      tick_nohz_stop_sched_tick() in case a cpu goes offline.
      
      This specific bug was introduced with 3c5d92a0 "nohz: Introduce
      arch_needs_cpu".
      
      In this case a cpu hotplug notifier is used to fix the issue in order to keep
      the normal/fast path small. All we need to do is to clear the condition that
      makes arch_needs_cpu() return 1 since it is just a performance improvement
      which is supposed to keep the local tick running for a short period if a cpu
      goes idle. Nothing special needs to be done except for clearing the condition.
      
      Cc: stable@kernel.org
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      39881215
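      The fix pattern described above is a small cpu hotplug notifier that clears
      the per-cpu condition when a cpu goes down. A hedged sketch of that shape,
      using the notifier API of that era and an invented flag name:

        /* sketch: drop the "keep the tick running" hint when a cpu goes down */
        static int nohz_cpu_notify(struct notifier_block *self,
                                   unsigned long action, void *hcpu)
        {
                long cpu = (long)hcpu;

                switch (action) {
                case CPU_DYING:
                        /* pending_work_hint is a placeholder for the real per-cpu state */
                        per_cpu(pending_work_hint, cpu) = 0;
                        break;
                }
                return NOTIFY_OK;
        }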
  32. 25 Oct, 2010 (1 commit)
  33. 17 May, 2010 (1 commit)
    • [S390] idle time accounting vs. machine checks · 6377981f
      Authored by Martin Schwidefsky
      A machine check can interrupt the i/o and external interrupt handler
      at any time. If the machine check occurs while the interrupt handler is
      waking up from idle, vtime_start_cpu can get executed a second time
      and the int_clock / async_enter_timer values in the lowcore get
      clobbered. This can confuse the cpu time accounting.
      To fix this problem two changes are needed. First, the machine check
      handler has to use its own copies of int_clock and async_enter_timer,
      named mcck_clock and mcck_enter_timer. Second, the nested execution
      of vtime_start_cpu has to be prevented. This is done in s390_idle_check
      by checking the wait bit in the program status word.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      6377981f
  34. 05 Nov, 2009 (1 commit)
    • nohz: Introduce arch_needs_cpu · 3c5d92a0
      Authored by Martin Schwidefsky
      Allow the architecture to request a normal jiffy tick when the system
      goes idle and tick_nohz_stop_sched_tick is called. On s390 the hook is
      used to prevent the system from going fully idle if there has been an
      interrupt other than a clock comparator interrupt since the last wakeup.
      
      On s390 the HiperSockets response time for 1 connection ping-pong goes
      down from 42 to 34 microseconds. The CPU cost decreases by 27%.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      LKML-Reference: <20090929122533.402715150@de.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      3c5d92a0
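      The hook's shape is simple: before stopping the tick, the generic nohz
      code asks the architecture whether it still needs the cpu, and the default
      answer is "no". A simplified sketch (not the verbatim kernel code; the
      default is typically a no-op when the arch does not provide the hook):

        /* default when the architecture does not override the hook */
        #ifndef arch_needs_cpu
        #define arch_needs_cpu(cpu)     (0)
        #endif

        /* generic idle path, heavily simplified */
        void tick_nohz_stop_sched_tick_sketch(int cpu)
        {
                if (arch_needs_cpu(cpu)) {
                        /* arch wants another jiffy tick: keep the periodic timer */
                        return;
                }
                /* ... otherwise stop the tick and go fully idle ... */
        }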