1. 18 May 2009 (1 commit)
    • perf_counter, x86: speed up the scheduling fast-path · b68f1d2e
      Committed by Ingo Molnar
      We have to set up the LVT entry only at counter init time, not at
      every switch-in time.
      
      There is friction between NMI and non-NMI use here; we will probably
      remove the per-counter configurability of it, but until then, don't
      slow things down ...
      
      [ Impact: micro-optimization ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b68f1d2e
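
      A minimal, self-contained C model of the change described above: the
      one-time LVT programming moves out of the per-switch-in path and into
      counter init. The names perf_counters_lapic_init() and hw_perf_counter
      echo the perf_counter code of that era, but the stubs and the exact
      hunk here are illustrative assumptions, not the actual patch.

        #include <stdio.h>

        struct hw_perf_counter { int nmi; };

        /* Hypothetical stand-in for programming the local APIC LVT entry. */
        static void perf_counters_lapic_init(int nmi)
        {
                printf("LVT entry programmed, nmi=%d\n", nmi);
        }

        /* Counter init runs once per counter, so one-time setup belongs here. */
        static void hw_perf_counter_init(struct hw_perf_counter *hwc)
        {
                perf_counters_lapic_init(hwc->nmi);
        }

        /* Switch-in runs on every context switch and must stay minimal;
         * before the fix the LVT was (redundantly) reprogrammed here too. */
        static void hw_perf_counter_switch_in(struct hw_perf_counter *hwc)
        {
                (void)hwc;      /* only enable the hardware counter here */
        }

        int main(void)
        {
                struct hw_perf_counter hwc = { .nmi = 1 };
                int i;

                hw_perf_counter_init(&hwc);              /* LVT set up exactly once */
                for (i = 0; i < 3; i++)
                        hw_perf_counter_switch_in(&hwc); /* fast path, no APIC writes */
                return 0;
        }
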
  2. 17 May 2009 (1 commit)
    • perf_counter, x86: fix zero irq_period counters · d2517a49
      Committed by Ingo Molnar
      The quirk to irq_period exposed a lack of robustness in the hw_counter
      initialization sequence: we left irq_period at 0, which was then
      quirked up to 2 ... which then generated a _lot_ of interrupts during
      'perf stat' runs, slowing them down and skewing the counter results
      in general.
      
      Initialize irq_period to the maximum instead.
      
      [ Impact: fix perf stat results ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d2517a49
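
      A small, self-contained C sketch of the fix described above: when no
      irq_period is requested (counting mode), default it to the maximum
      period instead of leaving it at 0 and letting the low-bound quirk raise
      it to 2. The constants and names below (X86_PMU_MAX_PERIOD, MIN_PERIOD,
      struct hw_counter) are illustrative assumptions, not the kernel's
      exact values.

        #include <stdint.h>
        #include <stdio.h>

        #define X86_PMU_MAX_PERIOD  ((1ULL << 31) - 1)    /* assumed maximum */
        #define MIN_PERIOD          2ULL                   /* the low-bound quirk */

        struct hw_counter { uint64_t irq_period; };

        static void hw_counter_init(struct hw_counter *hwc, uint64_t requested)
        {
                hwc->irq_period = requested;

                /* Fix: an unset (zero) period defaults to the maximum, so it is
                 * no longer quirked up to 2 and the counter does not flood the
                 * system with PMU interrupts during 'perf stat' runs. */
                if (!hwc->irq_period)
                        hwc->irq_period = X86_PMU_MAX_PERIOD;
                else if (hwc->irq_period < MIN_PERIOD)
                        hwc->irq_period = MIN_PERIOD;
        }

        int main(void)
        {
                struct hw_counter hwc;

                hw_counter_init(&hwc, 0);   /* counting mode: no period requested */
                printf("irq_period = %llu\n", (unsigned long long)hwc.irq_period);
                return 0;
        }
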
  3. 15 May 2009 (9 commits)
  4. 13 May 2009 (1 commit)
  5. 11 May 2009 (1 commit)
  6. 08 May 2009 (1 commit)
    • x86: MCE: make cmci_discover_lock irq-safe · e5299926
      Committed by Hidetoshi Seto
      Lockdep reports the warning below when Shaohua Li tries to offline a CPU:
      
      [  110.835487] =================================
      [  110.835616] [ INFO: inconsistent lock state ]
      [  110.835688] 2.6.30-rc4-00336-g8c9ed899 #52
      [  110.835757] ---------------------------------
      [  110.835828] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
      [  110.835908] swapper/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
      [  110.835982]  (cmci_discover_lock){?.+...}, at: [<ffffffff80236dc0>] cmci_clear+0x30/0x9b
      
      cmci_clear() can be called via smp_call_function_single().
      
      It is better to disable interrupts while holding cmci_discover_lock,
      turning it into an irq-safe lock; otherwise we can deadlock.
      
      [ Impact: fix possible deadlock in the MCE code ]
      Reported-by: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <4A03ED38.8000700@jp.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e5299926
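
      A self-contained C model of the locking change described above. The
      spinlock and IRQ primitives are user-space stand-ins that only
      illustrate the pattern (take the lock with interrupts disabled, as
      spin_lock_irqsave()/spin_unlock_irqrestore() do); they are assumptions
      for illustration, not the kernel implementation.

        #include <stdio.h>

        typedef struct { int locked; } spinlock_t;

        static void spin_lock(spinlock_t *l)   { l->locked = 1; }
        static void spin_unlock(spinlock_t *l) { l->locked = 0; }

        static void local_irq_save(unsigned long *flags)   { *flags = 1; puts("irqs off"); }
        static void local_irq_restore(unsigned long flags) { (void)flags; puts("irqs on"); }

        static void spin_lock_irqsave(spinlock_t *l, unsigned long *flags)
        {
                local_irq_save(flags);  /* no interrupt can re-enter and re-take the lock */
                spin_lock(l);
        }

        static void spin_unlock_irqrestore(spinlock_t *l, unsigned long flags)
        {
                spin_unlock(l);
                local_irq_restore(flags);
        }

        static spinlock_t cmci_discover_lock;

        /* cmci_clear() can run from interrupt context via
         * smp_call_function_single(), so every holder of cmci_discover_lock
         * must keep interrupts disabled while the lock is held. */
        static void cmci_clear(void)
        {
                unsigned long flags;

                spin_lock_irqsave(&cmci_discover_lock, &flags);
                /* ... walk and clear the CMCI banks ... */
                spin_unlock_irqrestore(&cmci_discover_lock, flags);
        }

        int main(void)
        {
                cmci_clear();
                return 0;
        }
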
  7. 05 May 2009 (3 commits)
    • x86: show number of core_siblings instead of thread_siblings in /proc/cpuinfo · 35d11680
      Committed by Andreas Herrmann
      Commit 7ad728f9
      (cpumask: x86: convert cpu_sibling_map/cpu_core_map to cpumask_var_t)
      changed the output of /proc/cpuinfo for siblings:
      
      Example on an AMD Phenom:
      
        physical id   : 0
        siblings : 1
        core id	   : 3
        cpu cores  : 4
      
      Before that commit it was:
      
        physical id	: 0
        siblings : 4
        core id	   : 3
        cpu cores  : 4
      
      Instead of cpu_core_mask it now uses cpu_sibling_mask to count siblings.
      This is due to the following hunk of the above commit:
      
      |  --- a/arch/x86/kernel/cpu/proc.c
      |  +++ b/arch/x86/kernel/cpu/proc.c
      |  @@ -14,7 +14,7 @@ static void show_cpuinfo_core(struct seq_file *m, struct cpuinf
      |          if (c->x86_max_cores * smp_num_siblings > 1) {
      |                  seq_printf(m, "physical id\t: %d\n", c->phys_proc_id);
      |                  seq_printf(m, "siblings\t: %d\n",
      |  -                          cpus_weight(per_cpu(cpu_core_map, cpu)));
      |  +                          cpumask_weight(cpu_sibling_mask(cpu)));
      |                  seq_printf(m, "core id\t\t: %d\n", c->cpu_core_id);
      |                  seq_printf(m, "cpu cores\t: %d\n", c->booted_cores);
      |                  seq_printf(m, "apicid\t\t: %d\n", c->apicid);
      
      This was a mistake, because the impact line shows that this side-effect
      was not anticipated:
      
         Impact: reduce per-cpu size for CONFIG_CPUMASK_OFFSTACK=y
      
      So revert the respective hunk to restore the old behavior.
      
      [ Impact: fix sibling-info regression in /proc/cpuinfo ]
      Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      LKML-Reference: <20090504182859.GA29045@alberich.amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      35d11680
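
      The regression is easy to see in a toy model of a 4-core,
      1-thread-per-core package like the Phenom above: counting siblings with
      the per-thread sibling mask yields 1, while the package-wide core mask
      yields the expected 4. The mask accessors below are simplified
      stand-ins for the kernel's cpu_core_mask()/cpu_sibling_mask(), not the
      real implementation.

        #include <stdio.h>

        /* Bitmaps for a 4-core package, one hardware thread per core. */
        static unsigned int cpu_core_mask(int cpu)    { (void)cpu; return 0xF; }
        static unsigned int cpu_sibling_mask(int cpu) { return 1u << cpu; }

        static int cpumask_weight(unsigned int mask)
        {
                int w = 0;
                while (mask) { w += mask & 1; mask >>= 1; }
                return w;
        }

        int main(void)
        {
                int cpu = 3;

                /* What /proc/cpuinfo reported before 7ad728f9 (and should report): */
                printf("siblings\t: %d\n", cpumask_weight(cpu_core_mask(cpu)));    /* 4 */

                /* What the regressed code reported: */
                printf("siblings\t: %d\n", cpumask_weight(cpu_sibling_mask(cpu))); /* 1 */
                return 0;
        }
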
    • perf_counter: fix fixed-purpose counter support on v2 Intel-PERFMON · 066d7dea
      Committed by Ingo Molnar
      Fixed-purpose counters stopped working in a simple 'perf stat ls' run:
      
         <not counted>  cache references
         <not counted>  cache misses
      
      This was due to:
      
        ef7b3e09: perf_counter, x86: remove vendor check in fixed_mode_idx()
      
      which made x86_pmu.num_counters_fixed matter: if it is nonzero, the
      fixed-purpose counters are utilized.
      
      But on v2 perfmon this field is not set (despite there being
      fixed-purpose PMCs). So add a quirk to set the number of fixed-purpose
      counters to at least three.
      
      [ Impact: add quirk for three fixed-purpose counters on certain Intel CPUs ]
      
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241002046-8832-28-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      066d7dea
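
      A minimal C sketch of the kind of quirk described above: if the perfmon
      version is at least 2, assume at least three fixed-purpose counters
      even when the CPUID field reports none. The structure and function
      names here are hypothetical; the real fixup lives in the x86
      perf_counter init code.

        #include <stdio.h>

        struct x86_pmu_caps {
                int version;               /* perfmon version from CPUID.0xA */
                int num_counters_fixed;
        };

        static void intel_pmu_fixup_fixed_counters(struct x86_pmu_caps *pmu,
                                                   int edx_num_counters_fixed)
        {
                pmu->num_counters_fixed = edx_num_counters_fixed;

                /* Quirk: v2 perfmon CPUs have fixed-purpose PMCs even when the
                 * CPUID field reports 0, so assume at least three of them. */
                if (pmu->version > 1 && pmu->num_counters_fixed < 3)
                        pmu->num_counters_fixed = 3;
        }

        int main(void)
        {
                struct x86_pmu_caps pmu = { .version = 2 };

                intel_pmu_fixup_fixed_counters(&pmu, 0);   /* CPUID reports none */
                printf("num_counters_fixed = %d\n", pmu.num_counters_fixed);  /* 3 */
                return 0;
        }
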
    • perf_counter: x86: fixup nmi_watchdog vs perf_counter boo-boo · ba77813a
      Committed by Peter Zijlstra
      Invert the atomic_inc_not_zero() test so that we will indeed detect the
      first activation.
      
      Also rename the global num_counters, since it is easy to confuse with
      x86_pmu.num_counters.
      
      [ Impact: fix non-working perfcounters on AMD CPUs, cleanup ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241455664.7620.4938.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ba77813a
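
      A single-threaded C model of the boo-boo and its fix:
      atomic_inc_not_zero() returns nonzero only when the counter was already
      nonzero, so the reservation slow path must run when the call returns
      zero (the negated test), i.e. on the very first activation. The helper
      below mimics that semantic without real atomics, and the simplified
      reservation logic is an illustrative assumption, not the actual patch.

        #include <stdio.h>

        /* Single-threaded stand-in: increment only if already nonzero,
         * and report whether the increment happened. */
        static int atomic_inc_not_zero(int *v)
        {
                if (*v == 0)
                        return 0;
                (*v)++;
                return 1;
        }

        static int active_counters;     /* renamed from num_counters to avoid
                                         * confusion with x86_pmu.num_counters */

        static int reserve_pmc_hardware(void)
        {
                puts("PMC hardware reserved (claimed from the NMI watchdog)");
                return 1;       /* success in this model */
        }

        static void hw_perf_counter_init(void)
        {
                /* The buggy test lacked the negation, so the slow path was
                 * skipped exactly when this was the *first* active counter and
                 * the PMC hardware was never claimed from the NMI watchdog. */
                if (!atomic_inc_not_zero(&active_counters)) {
                        if (active_counters == 0 && reserve_pmc_hardware())
                                active_counters = 1;
                }
        }

        int main(void)
        {
                hw_perf_counter_init();   /* first activation reserves the PMCs */
                hw_perf_counter_init();   /* later activations just bump the count */
                printf("active_counters = %d\n", active_counters);
                return 0;
        }
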
  8. 02 May 2009 (1 commit)
  9. 01 May 2009 (1 commit)
  10. 30 April 2009 (1 commit)
  11. 29 April 2009 (20 commits)