1. 05 Mar 2011, 1 commit
  2. 04 Mar 2011, 3 commits
    • A
      perf: Fix LLC-* events on Intel Nehalem/Westmere · e994d7d2
      Authored by Andi Kleen
      On Intel Nehalem and Westmere CPUs the generic perf LLC-* events count the
      L2 caches, not the real L3 LLC - this was inconsistent with behavior on
      other CPUs.
      
      Fixing this requires the use of the special OFFCORE_RESPONSE
      events which need a separate mask register.
      
      This was implemented by the previous patch; now use that infrastructure
      to set the correct events for LLC-* on Nehalem and Westmere.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Lin Ming <ming.m.lin@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1299119690-13991-3-git-send-email-ming.m.lin@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e994d7d2
    • A
      perf: Add support for supplementary event registers · a7e3ed1e
      Authored by Andi Kleen
      Change logs against Andi's original version:
      
      - Extends perf_event_attr:config to config{,1,2} (Peter Zijlstra)
      - Fixed a major event scheduling issue. There cannot be a ref++ on an
        event that has already done ref++ once without calling
        put_constraint() in between. (Stephane Eranian)
      - Use thread_cpumask for percore allocation. (Lin Ming)
      - Use MSR names in the extra reg lists. (Lin Ming)
      - Remove redundant "c = NULL" in intel_percore_constraints
      - Fix comment of perf_event_attr::config1
      
      Intel Nehalem/Westmere have a special OFFCORE_RESPONSE event
      that can be used to monitor any offcore accesses from a core.
      This is a very useful event for various tunings, and it's
      also needed to implement the generic LLC-* events correctly.
      
      Unfortunately this event requires programming a mask in a separate
      register. And worse, this separate register is per core, not per
      CPU thread.
      
      This patch:
      
      - Teaches perf_events that OFFCORE_RESPONSE needs extra parameters.
        The extra parameters are passed by user space in the
        perf_event_attr::config1 field (a userspace sketch follows this entry).
      
      - Adds support to the Intel perf_event core to schedule per
        core resources. This adds fairly generic infrastructure that
        can also be used for other per core resources.
        The basic code is patterned after the similar AMD northbridge
        constraints code.
      
      Thanks to Stephane Eranian who pointed out some problems
      in the original version and suggested improvements.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Lin Ming <ming.m.lin@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1299119690-13991-2-git-send-email-ming.m.lin@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a7e3ed1e
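      A minimal userspace sketch (not part of the patch) of how an
      OFFCORE_RESPONSE-style raw event could be requested once this lands, with
      the extra mask register value carried in perf_event_attr::config1. The raw
      event code (0x01b7) and the mask value are placeholders, not validated
      encodings:

          #include <stdio.h>
          #include <string.h>
          #include <unistd.h>
          #include <sys/syscall.h>
          #include <linux/perf_event.h>

          int main(void)
          {
              struct perf_event_attr attr;
              long long count;
              int fd;

              memset(&attr, 0, sizeof(attr));
              attr.size = sizeof(attr);
              attr.type = PERF_TYPE_RAW;
              attr.config  = 0x01b7;   /* placeholder OFFCORE_RESPONSE event code */
              attr.config1 = 0xffff;   /* placeholder mask for the extra register */

              /* count for the calling task, on any CPU */
              fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
              if (fd < 0)
                  return 1;

              /* ... run the workload to be measured ... */

              if (read(fd, &count, sizeof(count)) == sizeof(count))
                  printf("offcore responses: %lld\n", count);
              close(fd);
              return 0;
          }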
    • S
      perf_events: Update PEBS event constraints · 17e31629
      Authored by Stephane Eranian
      This patch updates PEBS event constraints for Intel Atom, Nehalem, Westmere.
      
      This patch also reorganizes the PEBS format/constraint detection code. It is
      now based on the processor model, not the PEBS format: two processors may use
      the same PEBS format without having the same list of PEBS events (a sketch of
      the model-based selection follows this entry).
      
      In this second version, we simplified the initialization of the PEBS
      constraints by leveraging the existing switch() statement in perf_event_intel.c.
      We also renamed the constraint tables to be more consistent with regular
      constraints.
      
      In this 3rd version, we drop BR_INST_RETIRED.MISPRED from Intel Atom as it does
      not seem to work. Use MISPREDICTED_BRANCH_RETIRED instead. Also add FP_ASSIST.*
      to both Intel Nehalem and Westmere. I missed those in the earlier patches.
      Events were tested using libpfm4 perf_examples.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4d6e6b02.815bdf0a.637b.07a7@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      17e31629
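      A rough sketch (not the patch itself) of the model-based selection described
      above; the model numbers and table names are illustrative and may not match
      perf_event_intel.c exactly:

          switch (boot_cpu_data.x86_model) {
          case 28:                /* Atom */
              x86_pmu.pebs_constraints = intel_atom_pebs_event_constraints;
              break;
          case 26:
          case 30:                /* Nehalem */
              x86_pmu.pebs_constraints = intel_nehalem_pebs_event_constraints;
              break;
          case 37:
          case 44:                /* Westmere */
              x86_pmu.pebs_constraints = intel_westmere_pebs_event_constraints;
              break;
          }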
  3. 02 Mar 2011, 4 commits
  4. 25 Feb 2011, 1 commit
  5. 24 Feb 2011, 1 commit
    • J
      x86/mrst: Fix apb timer rating when lapic timer is used · 7b62dbec
      Authored by Jacob Pan
      We need to adjust the clockevent device rating on the structure
      that will be registered with the clockevent system, not on the
      temporary structure.
      
      Without this fix, the APB timer rating will be higher than the LAPIC
      timer's, so it cannot be released later to be used as the
      broadcast timer.
      Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Alan Cox <alan@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Stultz <john.stultz@linaro.org>
      LKML-Reference: <1298506046-439-1-git-send-email-jacob.jun.pan@linux.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7b62dbec
  6. 21 Feb 2011, 1 commit
  7. 18 Feb 2011, 1 commit
  8. 16 Feb 2011, 6 commits
  9. 15 Feb 2011, 2 commits
    • N
      x86, dmi, debug: Log board name (when present) in dmesg/oops output · 84e383b3
      Authored by Naga Chumbalkar
      The "Type 2" SMBIOS record that contains Board Name is not
      strictly required and may be absent in the SMBIOS on some
      platforms.
      
      ( Please note that Type 2 is not listed in Table 3 in Sec 6.2
        ("Required Structures and Data") of the SMBIOS v2.7
        Specification. )
      
      Use the Manufacturer Name (aka System Vendor) instead, and print the
      Board Name only when it is present (a sketch follows this entry).
      
      Before the fix:
        (i) dmesg output: DMI: /ProLiant DL380 G6, BIOS P62 01/29/2011
       (ii) oops output:  Pid: 2170, comm: bash Not tainted 2.6.38-rc4+ #3 /ProLiant DL380 G6
      
      After the fix:
        (i) dmesg output: DMI: HP ProLiant DL380 G6, BIOS P62 01/29/2011
       (ii) oops output:  Pid: 2278, comm: bash Not tainted 2.6.38-rc4+ #4 HP ProLiant DL380 G6
      Signed-off-by: Naga Chumbalkar <nagananda.chumbalkar@hp.com>
      Reviewed-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Cc: <stable@kernel.org> # .3x - good for debugging, please apply as far back as it applies cleanly
      LKML-Reference: <20110214224423.2182.13929.sendpatchset@nchumbalkar.americas.hpqcorp.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      84e383b3
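      A hedged sketch of the banner construction described above, using the
      standard dmi_get_system_info() accessor (which returns NULL for an absent
      string); the exact format string in the kernel may differ:

          #include <linux/dmi.h>
          #include <linux/kernel.h>

          static char dmi_banner[96];

          static void build_dmi_banner(void)
          {
              const char *vendor  = dmi_get_system_info(DMI_SYS_VENDOR);
              const char *product = dmi_get_system_info(DMI_PRODUCT_NAME);
              const char *board   = dmi_get_system_info(DMI_BOARD_NAME);

              /* The Type 2 Board Name is optional: only print it when present. */
              if (board)
                  snprintf(dmi_banner, sizeof(dmi_banner), "%s %s/%s",
                           vendor ? vendor : "", product ? product : "", board);
              else
                  snprintf(dmi_banner, sizeof(dmi_banner), "%s %s",
                           vendor ? vendor : "", product ? product : "");
          }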
    • P
      x86, ioapic: Don't warn about non-existing IOAPICs if we have none · 678301ec
      Authored by Paul Bolle
      mp_find_ioapic() prints errors like:
      
          ERROR: Unable to locate IOAPIC for GSI 13
      
      if it can't find the IOAPIC that manages that specific GSI. I
      see errors like that at every boot of a laptop that apparently
      doesn't have any IOAPICs.
      
      But if there are no IOAPICs at all, failing to find one hardly seems to be
      an error. A solution that gets rid of this message is to return directly if
      nr_ioapics (still) is zero, while keeping the -1 return value in that case
      so nothing breaks from this change (a sketch follows this entry).
      
      The call chain that generates this error is:
      
      pnpacpi_allocated_resource()
          case ACPI_RESOURCE_TYPE_IRQ:
              pnpacpi_parse_allocated_irqresource()
                  acpi_get_override_irq()
                       mp_find_ioapic()
      Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      678301ec
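      A sketch of the change described above; the lookup loop and the routing
      table field names are paraphrased, so treat them as illustrative:

          int mp_find_ioapic(int gsi)
          {
              int i;

              /* No IOAPICs at all: quietly report "not found". */
              if (nr_ioapics == 0)
                  return -1;

              /* Find the IOAPIC that manages this GSI. */
              for (i = 0; i < nr_ioapics; i++) {
                  if (gsi >= mp_gsi_routing[i].gsi_base &&
                      gsi <= mp_gsi_routing[i].gsi_end)
                      return i;
              }

              printk(KERN_ERR "ERROR: Unable to locate IOAPIC for GSI %d\n", gsi);
              return -1;
          }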
  10. 14 Feb 2011, 1 commit
  11. 12 Feb 2011, 2 commits
    • T
      x86: Readd missing irq_to_desc() in fixup_irq() · 5117348d
      Authored by Thomas Gleixner
      commit a3c08e5d ("x86: Convert irq_chip access to new functions")
      accidentally zapped desc = irq_to_desc(irq); in the vector loop,
      so we lock some random irq descriptor.

      Add it back (a sketch follows this entry).
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: <stable@kernel.org> # .37
      5117348d
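      A sketch of the restored lookup; the rest of the vector-loop body in
      fixup_irqs() is paraphrased rather than quoted:

          for (vector = 0; vector < NR_VECTORS; vector++) {
              irq = __this_cpu_read(vector_irq[vector]);
              if (irq < 0)
                  continue;

              desc = irq_to_desc(irq);    /* the lookup the conversion dropped */

              raw_spin_lock(&desc->lock);
              /* ... IRR check and retrigger handling elided ... */
              raw_spin_unlock(&desc->lock);
          }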
    • P
      x86: Fix text_poke_smp_batch() deadlock · d91309f6
      Authored by Peter Zijlstra
      Fix this deadlock - we are already holding the mutex:
      
      =======================================================
      [ INFO: possible circular locking dependency detected ]
      2.6.38-rc4-test+ #1
      -------------------------------------------------------
      bash/1850 is trying to acquire lock:
       (text_mutex){+.+.+.}, at: [<ffffffff8100a9c1>] return_to_handler+0x0/0x2f
      
      but task is already holding lock:
       (smp_alt){+.+...}, at: [<ffffffff8100a9c1>] return_to_handler+0x0/0x2f
      
      which lock already depends on the new lock.
      
      the existing dependency chain (in reverse order) is:
      
      -> #2 (smp_alt){+.+...}:
             [<ffffffff81082d02>] lock_acquire+0xcd/0xf8
             [<ffffffff8192e119>] __mutex_lock_common+0x4c/0x339
             [<ffffffff8192e4ca>] mutex_lock_nested+0x3e/0x43
             [<ffffffff8101050f>] alternatives_smp_switch+0x77/0x1d8
             [<ffffffff81926a6f>] do_boot_cpu+0xd7/0x762
             [<ffffffff819277dd>] native_cpu_up+0xe6/0x16a
             [<ffffffff81928e28>] _cpu_up+0x9d/0xee
             [<ffffffff81928f4c>] cpu_up+0xd3/0xe7
             [<ffffffff82268d4b>] kernel_init+0xe8/0x20a
             [<ffffffff8100ba24>] kernel_thread_helper+0x4/0x10
      
      -> #1 (cpu_hotplug.lock){+.+.+.}:
             [<ffffffff81082d02>] lock_acquire+0xcd/0xf8
             [<ffffffff8192e119>] __mutex_lock_common+0x4c/0x339
             [<ffffffff8192e4ca>] mutex_lock_nested+0x3e/0x43
             [<ffffffff810568cc>] get_online_cpus+0x41/0x55
             [<ffffffff810a1348>] stop_machine+0x1e/0x3e
             [<ffffffff819314c1>] text_poke_smp_batch+0x3a/0x3c
             [<ffffffff81932b6c>] arch_optimize_kprobes+0x10d/0x11c
             [<ffffffff81933a51>] kprobe_optimizer+0x152/0x222
             [<ffffffff8106bb71>] process_one_work+0x1d3/0x335
             [<ffffffff8106cfae>] worker_thread+0x104/0x1a4
             [<ffffffff810707c4>] kthread+0x9d/0xa5
             [<ffffffff8100ba24>] kernel_thread_helper+0x4/0x10
      
      -> #0 (text_mutex){+.+.+.}:
      
      other info that might help us debug this:
      
      6 locks held by bash/1850:
       #0:  (&buffer->mutex){+.+.+.}, at: [<ffffffff8100a9c1>] return_to_handler+0x0/0x2f
       #1:  (s_active#75){.+.+.+}, at: [<ffffffff8100a9c1>] return_to_handler+0x0/0x2f
       #2:  (x86_cpu_hotplug_driver_mutex){+.+.+.}, at: [<ffffffff8100a9c1>] return_to_handler+0x0/0x2f
       #3:  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff8100a9c1>] return_to_handler+0x0/0x2f
       #4:  (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff8100a9c1>] return_to_handler+0x0/0x2f
       #5:  (smp_alt){+.+...}, at: [<ffffffff8100a9c1>] return_to_handler+0x0/0x2f
      
      stack backtrace:
      Pid: 1850, comm: bash Not tainted 2.6.38-rc4-test+ #1
      Call Trace:
      
       [<ffffffff81080eb2>] print_circular_bug+0xa8/0xb7
       [<ffffffff8192e4ca>] mutex_lock_nested+0x3e/0x43
       [<ffffffff81010302>] alternatives_smp_unlock+0x3d/0x93
       [<ffffffff81010630>] alternatives_smp_switch+0x198/0x1d8
       [<ffffffff8102568a>] native_cpu_die+0x65/0x95
       [<ffffffff818cc4ec>] _cpu_down+0x13e/0x202
       [<ffffffff8117a619>] sysfs_write_file+0x108/0x144
       [<ffffffff8111f5a2>] vfs_write+0xac/0xff
       [<ffffffff8111f7a9>] sys_write+0x4a/0x6e
      Reported-by: Steven Rostedt <rostedt@goodmis.org>
      Tested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: mathieu.desnoyers@efficios.com
      Cc: rusty@rustcorp.com.au
      Cc: ananth@in.ibm.com
      Cc: masami.hiramatsu.pt@hitachi.com
      Cc: fweisbec@gmail.com
      Cc: jbeulich@novell.com
      Cc: jbaron@redhat.com
      Cc: mhiramat@redhat.com
      LKML-Reference: <1297458466.5226.93.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d91309f6
  12. 10 Feb 2011, 1 commit
    • J
      x86: Fix section mismatch in LAPIC initialization · 2fb270f3
      Authored by Jan Beulich
      Additionally, doing things conditionally upon smp_processor_id()
      being zero is generally a bad idea, as this means CPU 0 cannot
      be offlined and brought back online again later.
      
      While there may be other places where this is done, I think adding
      more of those should be avoided so that some day SMP can really
      become "symmetrical".
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      LKML-Reference: <4D525C7E0200007800030EE1@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2fb270f3
  13. 07 Feb 2011, 1 commit
  14. 05 Feb 2011, 1 commit
  15. 03 Feb 2011, 1 commit
    • S
      x86, mtrr: Avoid MTRR reprogramming on BP during boot on UP platforms · f7448548
      Authored by Suresh Siddha
      Markus Kohn ran into a hard hang regression on an Acer Aspire
      1310 when ACPI is enabled. Git bisect identified the following
      commit as the one that introduced the boot regression.
      
      	commit d0af9eed
      	Author: Suresh Siddha <suresh.b.siddha@intel.com>
      	Date:   Wed Aug 19 18:05:36 2009 -0700
      
      	    x86, pat/mtrr: Rendezvous all the cpus for MTRR/PAT init
      
      Because of the UP configuration of that platform,
      native_smp_prepare_cpus() bailed out (in smp_sanity_check())
      before calling set_mtrr_aps_delayed_init().
      
      Further down the boot path, native_smp_cpus_done() will call the
      delayed MTRR initialization for the APs (mtrr_aps_init()) with
      mtrr_aps_delayed_init not set. This resulted in the boot
      processor reprogramming its MTRRs to the values seen during the
      start of the OS boot. While this is not needed ideally, this
      shouldn't have caused any side-effects. This is because the
      reprogramming of MTRRs (set_mtrr_state() that gets called via
      set_mtrr()) will check if the live register contents are
      different from what is being asked to write and will do the actual
      write only if they are different.
      
      The BP's MTRR state is read during the start of the OS boot and
      typically nothing would have changed when we ask to reprogram it
      on the BP again because of the above scenario on a UP platform. So
      on a normal UP platform no reprogramming of BP MTRR MSRs
      happens and all is well.
      
      However, on this platform, the BIOS seems to be modifying the fixed
      MTRR range registers between the start of OS boot and when we
      double check the live registers for reprogramming the BP MTRR
      registers. And as the live registers are modified, we end up
      reprogramming the MTRRs to the state seen during the start of
      the OS boot.
      
      During ACPI initialization, something in the BIOS (probably the SMI
      handler?) doesn't like this fact and results in a hard lockup.
      
      We didn't see this boot hang issue on this platform before
      commit d0af9eed, because only
      the APs (if any) will program their MTRRs to the value that the BP
      had at the start of the OS boot.
      
      Fix this issue by checking mtrr_aps_delayed_init before
      continuing further in mtrr_aps_init(). Now only the APs (if
      any) will program their MTRRs to the BP values during boot (a sketch
      follows this entry).
      
      Addresses https://bugzilla.novell.com/show_bug.cgi?id=623393
      
        [ By the way, this behavior of the BIOS modifying MTRRs after the start
          of the OS boot is not common and the kernel is not prepared to
          handle this situation well. Irrespective of this issue, during
          suspend/resume, the Linux kernel will try to reprogram the BP's MTRR values
          to the values seen during the start of the OS boot. So suspend/resume might
          be already broken on this platform for all Linux kernel versions. ]
      Reported-and-bisected-by: Markus Kohn <jabber@gmx.org>
      Tested-by: Markus Kohn <jabber@gmx.org>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Thomas Renninger <trenn@novell.com>
      Cc: Rafael Wysocki <rjw@novell.com>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Cc: stable@kernel.org # [v2.6.32+]
      LKML-Reference: <1296694975.4418.402.camel@sbsiddha-MOBL3.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f7448548
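      A sketch of the fix described above; apart from the added
      mtrr_aps_delayed_init check, the body of mtrr_aps_init() is paraphrased:

          void mtrr_aps_init(void)
          {
              if (!use_intel())
                  return;

              /*
               * On a UP platform smp_sanity_check() bails out before
               * set_mtrr_aps_delayed_init() runs, so there is nothing to do
               * here and, in particular, no reason to touch the BP's MTRRs.
               */
              if (!mtrr_aps_delayed_init)
                  return;

              /* Program the APs with the MTRR state saved at boot. */
              set_mtrr(~0U, 0, 0, 0);
              mtrr_aps_delayed_init = false;
          }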
  16. 28 Jan 2011, 2 commits
    • Y
      x86, perf: Change two init functions to static · dda99116
      Authored by Yinghai Lu
      init_hw_perf_events() is called via early_initcall now, and
      x86_pmu_event_init() is an x86_pmu member function.

      So we can change them to static.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      LKML-Reference: <4D3A16F9.109@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      dda99116
    • S
      perf: Fix Pentium4 raw event validation · d038b12c
      Authored by Stephane Eranian
      This patch fixes some issues with raw event validation on
      Pentium 4 (Netburst) based processors.
      
      As I was testing libpfm4 Netburst support, I ran into two
      problems in the p4_validate_raw_event() function:
      
         - the shared field must be checked ONLY when HT is on
         - the binding to ESCR register was missing
      
      The second item was causing raw events to not be encoded
      correctly compared to generic PMU events.
      
      With this patch, I can now pass Netburst events to libpfm4
      examples and get meaningful results:
      
        $ task -e global_power_events:running:u  noploop 1
        noploop for 1 seconds
        3,206,304,898 global_power_events:running
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: peterz@infradead.org
      Cc: paulus@samba.org
      Cc: davem@davemloft.net
      Cc: fweisbec@gmail.com
      Cc: perfmon2-devel@lists.sf.net
      Cc: eranian@gmail.com
      Cc: robert.richter@amd.com
      Cc: acme@redhat.com
      Cc: gorcunov@gmail.com
      Cc: ming.m.lin@intel.com
      LKML-Reference: <4d3efb2f.1252d80a.1a80.ffffc83f@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d038b12c
  17. 25 Jan 2011, 1 commit
    • J
      x86-64: Don't use pointer to out-of-scope variable in dump_trace() · 2e5aa682
      Authored by Jesper Juhl
      In arch/x86/kernel/dumpstack_64.c::dump_trace() we have this code:
      
      ...
        		if (!stack) {
        			unsigned long dummy;
        			stack = &dummy;
        			if (task && task != current)
        				stack = (unsigned long *)task->thread.sp;
        		}
      
        		bp = stack_frame(task, regs);
        		/*
        		 * Print function call entries in all stacks, starting at the
        		 * current stack address. If the stacks consist of nested
        		 * exceptions
        		 */
        		tinfo = task_thread_info(task);
      
        		for (;;) {
        			char *id;
        			unsigned long *estack_end;
        			estack_end = in_exception_stack(cpu, (unsigned long)stack,
        							&used, &id);
      ...
      
      You'll notice that we assign to 'stack' the address of the variable
      'dummy', which is only in scope inside the 'if (!stack)' block. So when we
      later access stack (at the end of the above, and assuming we did not take
      the 'if (task && task != current)' branch) we'll be using the address of a
      variable that is no longer in scope. I believe this patch is the proper
      fix (a sketch follows this entry), but I freely admit that I'm not 100% certain.
      Signed-off-by: Jesper Juhl <jj@chaosbits.net>
      LKML-Reference: <alpine.LNX.2.00.1101242232590.10252@swampdragon.chaosbits.net>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      2e5aa682
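      A sketch of the fix the message argues for: hoist 'dummy' so it stays in
      scope for as long as 'stack' may point at it (surrounding code abbreviated):

          unsigned long dummy;    /* hoisted out of the if () block */

          if (!stack) {
              stack = &dummy;
              if (task && task != current)
                  stack = (unsigned long *)task->thread.sp;
          }

          bp = stack_frame(task, regs);
          /* 'stack' may still point at 'dummy', which is still alive here. */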
  18. 22 Jan 2011, 1 commit
    • B
      x86, hotplug: Fix powersavings with offlined cores on AMD · 93789b32
      Authored by Borislav Petkov
      Commit ea530692 made a CPU use monitor/mwait
      when offline. This is not the optimal choice for AMD with respect to power
      savings, and we'd prefer our cores to halt (i.e. enter C1) instead. For this,
      the same monitor/mwait selection has to be used as when we
      select the idle routine for the machine.
      
      With this patch, offlining cores 1-5 on an X6 machine allows core 0 to
      boost again.
      
      [ hpa: putting this in urgent since it is a (power) regression fix ]
      Reported-by: Andreas Herrmann <andreas.herrmann3@amd.com>
      Cc: stable@kernel.org # 37.x
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      LKML-Reference: <1295534572-10730-1-git-send-email-bp@amd64.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      93789b32
  19. 21 Jan 2011, 1 commit
  20. 20 Jan 2011, 1 commit
  21. 19 Jan 2011, 1 commit
  22. 18 Jan 2011, 2 commits
    • B
      x86: Clear irqstack thread_info · 7b698ea3
      Authored by Brian Gerst
      Matthias Merz reported that v2.6.37 failed to boot on his
      system.
      
      Make sure that the thread_info part of the irqstack is
      initialized to zeroes.
      Reported-and-Tested-by: Matthias Merz <linux@merz-ka.de>
      Signed-off-by: Brian Gerst <brgerst@gmail.com>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <AANLkTimyKXfJ1x8tgwrr1hYnNLrPfgE1NTe4z7L6tUDm@mail.gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7b698ea3
    • S
      x86: Make relocatable kernel work with new binutils · 86b1e8dd
      Authored by Shaohua Li
      The CONFIG_RELOCATABLE=y option is broken with new binutils, which makes
      the boot panic.
      
      According to Lu Hongjiu, the affected binutils are from 2.20.51.0.12 to
      2.21.51.0.3, which were released since Oct 22 this year. At least Ubuntu 10.10
      is using such binutils. See:
      
          http://sourceware.org/bugzilla/show_bug.cgi?id=12327
      
      The reason for the boot panic is that we have 'jiffies = jiffies_64;' in
      vmlinux.lds.S. The jiffies symbol isn't in any section, so the kernel build
      warns that jiffies is an absolute address and can't be relocated. At
      runtime, jiffies ends up with virtual address 0.
      
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Lu Hongjiu <hongjiu.lu@intel.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      LKML-Reference: <1295312269.1949.725.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      86b1e8dd
  23. 15 Jan 2011, 2 commits
    • J
      x86, mrst: Set correct APB timer IRQ affinity for secondary cpu · 6550904d
      Authored by Jacob Pan
      Offlining the secondary CPU causes the timer irq affinity to be set to
      CPU 0. When the secondary CPU is back online again, the wrong irq
      affinity will be used.
      
      This patch ensures the secondary per-CPU timer always has the correct
      IRQ affinity when enabled.
      Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
      LKML-Reference: <1294963604-18111-1-git-send-email-jacob.jun.pan@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: <stable@kernel.org> 2.6.37
      6550904d
    • J
      x86: tsc: Fix calibration refinement conditionals to avoid divide by zero · 62627bec
      Authored by John Stultz
      Konrad Wilk reported that the new delayed calibration crashes with a
      divide by zero on Xen. The reason is that Xen sets the pmtimer
      address, but reading from it returns 0xffffff. That results in the
      ref_start and ref_stop values being the same, so the delta is zero,
      which causes the divide by zero later in the calculation.
      
      The conditional (!hpet && !ref_start && !ref_stop) which sanity checks
      the calibration reference values doesn't really make sense. If the
      refs are null, but hpet is on, we still want to break out.
      
      The div by zero would be possible to trigger by chance if both reads
      from the hardware provided the exact same value (due to hardware
      wrapping).
      
      So checking whether both ref values are the same handles both the missing
      hardware case (both null) and the identical-value case (whether from broken
      hardware or by chance), avoiding the div by zero issue (a sketch follows
      this entry).
      
      [ tglx: Applied the same fix to native_calibrate_tsc() where this
        	check was copied from ]
      Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      LKML-Reference: <1295024788-15619-1-git-send-email-johnstul@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      62627bec
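      A sketch of the described check in the delayed-calibration path; the
      variable names follow the message above and the bail-out label and warning
      text are illustrative:

          /*
           * Identical reference reads mean either no usable HPET/PM timer
           * (both zero) or a stuck/wrapped value; bail out instead of
           * dividing by (ref_stop - ref_start) == 0.
           */
          if (ref_start == ref_stop) {
              printk(KERN_WARNING "TSC: no usable calibration reference\n");
              goto out;
          }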
  24. 14 Jan 2011, 2 commits