1. 07 Jan 2010, 1 commit
    • x86, irq: Check move_in_progress before freeing the vector mapping · 7f41c2e1
      Authored by Suresh Siddha
      With the recent irq migration fixes (post 2.6.32), Gary Hade has noticed
      "No IRQ handler for vector" messages during the 2.6.33-rc1 kernel boot on IBM
      AMD platforms and root caused the issue to this commit:
      
      > commit 23359a88
      > Author: Suresh Siddha <suresh.b.siddha@intel.com>
      > Date:   Mon Oct 26 14:24:33 2009 -0800
      >
      >    x86: Remove move_cleanup_count from irq_cfg
      
      As part of that patch, we removed the move_cleanup_count check in
      smp_irq_move_cleanup_interrupt(). With this change, an irq cleanup
      interrupt on a cpu can clean up the vector mappings of multiple irqs,
      and for one of those irqs the migration might still be in progress.
      When that irq then hits the old cpu, we get the "No IRQ handler"
      messages.
      
      Fix this by checking the irq_cfg's move_in_progress flag and, if the
      move is still in progress, delaying the vector cleanup until a later
      irq cleanup interrupt request (which will happen once the irq starts
      arriving at the new destination cpu). A sketch of the check follows
      below.
      Reported-and-tested-by: Gary Hade <garyhade@us.ibm.com>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <1262804191.2732.7.camel@sbs-t61.sc.intel.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      7f41c2e1
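      A minimal sketch of the check described above, as it would sit inside
      smp_irq_move_cleanup_interrupt(); this is a sketch of the shape, not the
      literal diff (the per-vector loop and locking are elided):

          struct irq_cfg *cfg = irq_cfg(irq);

          /*
           * If this irq's migration is still in progress, the cleanup
           * request for it hasn't really been issued yet: leave its old
           * vector mapping alone and let a later cleanup IPI (sent once
           * the irq starts arriving at the new destination) handle it.
           */
          if (cfg->move_in_progress)
              goto unlock;

          /* ...otherwise it is safe to free the old vector mapping... */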
  2. 06 Jan 2010, 2 commits
  3. 05 Jan 2010, 3 commits
  4. 31 Dec 2009, 3 commits
  5. 29 Dec 2009, 1 commit
    • x86: SGI UV: Fix writes to led registers on remote uv hubs · 39d30770
      Authored by Mike Travis
      The wrong address was being used to write the SCIR led regs on
      remote hubs.  Also, there was an inconsistency between how BIOS
      and the kernel indexed these regs.  Standardize on using the
      lower 6 bits of the APIC ID as the index.
      
      This patch fixes the problem of writing to an errant address for a
      cpu # >= 64; a sketch of the indexing rule follows below.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Reviewed-by: Jack Steiner <steiner@sgi.com>
      Cc: Robin Holt <holt@sgi.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: stable@kernel.org
      LKML-Reference: <4B3922F9.3060905@sgi.com>
      [ v2: fix a number of annoying checkpatch artifacts and whitespace noise ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      39d30770
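      The indexing rule above ("lower 6 bits of the APIC ID") reduces to a
      simple mask; a minimal sketch, where uv_scir_index() is a hypothetical
      helper name and x86_cpu_to_apicid is the usual per-cpu APIC-id map:

          /* Index the per-hub SCIR led register by the low 6 bits of the
           * cpu's APIC id, matching the convention BIOS uses, so cpu
           * numbers >= 64 no longer run past the register block. */
          static int uv_scir_index(int cpu)
          {
              return per_cpu(x86_cpu_to_apicid, cpu) & 0x3f;
          }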
  6. 28 Dec 2009, 2 commits
    • x86, kmemcheck: Use KERN_WARNING for error reporting · c0ca9da4
      Authored by Pekka Enberg
      As suggested by Vegard Nossum, use KERN_WARNING for error
      reporting to make sure kmemcheck reports end up in syslog.
      Suggested-by: Vegard Nossum <vegard.nossum@gmail.com>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <1261990935.4641.7.camel@penberg-laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c0ca9da4
    • x86: Use KERN_DEFAULT log-level in __show_regs() · d015a092
      Authored by Pekka Enberg
      Andrew Morton reported a strange looking kmemcheck warning:
      
        WARNING: kmemcheck: Caught 32-bit read from uninitialized memory (ffff88004fba6c20)
        0000000000000000310000000000000000000000000000002413000000c9ffff
         u u u u u u u u u u u u u u u u i i i i i i i i u u u u u u u u
      
         [<ffffffff810af3aa>] kmemleak_scan+0x25a/0x540
         [<ffffffff810afbcb>] kmemleak_scan_thread+0x5b/0xe0
         [<ffffffff8104d0fe>] kthread+0x9e/0xb0
         [<ffffffff81003074>] kernel_thread_helper+0x4/0x10
         [<ffffffffffffffff>] 0xffffffffffffffff
      
      The above printout is missing the register dump completely. The
      problem is that the output comes from syslog, which doesn't show
      KERN_INFO log-level messages. We didn't see this before because
      both of us were testing on 32-bit kernels, which use the
      _default_ log-level.
      
      Fix that up by explicitly using the KERN_DEFAULT log-level for the
      __show_regs() printks (a sketch follows below).
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Vegard Nossum <vegard.nossum@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1261988819.4641.2.camel@penberg-laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d015a092
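      The fix amounts to giving the register-dump printks an explicit log
      level so they are no longer filtered like KERN_INFO; roughly (the
      format string is illustrative, not copied from __show_regs()):

          /* Before: no explicit level, so the line can be filtered out of syslog. */
          printk("RIP: %04lx:[<%016lx>]\n", regs->cs & 0xffff, regs->ip);

          /* After: explicit default level, so the register dump always shows up. */
          printk(KERN_DEFAULT "RIP: %04lx:[<%016lx>]\n", regs->cs & 0xffff, regs->ip);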
  7. 27 Dec 2009, 3 commits
  8. 26 Dec 2009, 1 commit
    • x86, compress: Force i386 instructions for the decompressor · 17a2a9b5
      Authored by H. Peter Anvin
      Recently, some distros have started shipping versions of gcc which
      default to -march=i686.  This breaks building kernels for pre-i686
      machines, even if they have been selected in Kconfig, due to the
      generation of CMOV instructions.
      
      There isn't enough benefit to try to preserve the generation of these
      instructions even when selected, so simply force -march=i386 for the
      decompressor when building a 32-bit kernel.
      Reported-and-tested-by: Chris Rankin <rankincj@yahoo.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      LKML-Reference: <219280.97558.qm@web52907.mail.re2.yahoo.com>
      17a2a9b5
  9. 24 Dec 2009, 1 commit
    • Revert "x86, ucode-amd: Ensure ucode update on suspend/resume after CPU off/online cycle" · 2f99f5c8
      Authored by Linus Torvalds
      This reverts commit 9f15226e.  It's just
      wrong, and broke resume for Rafael even on a non-AMD CPU.
      
      As Rafael says:
       "... it causes microcode_init_cpu() to be called during resume even for
        CPUs for which there's no microcode to apply.  That, in turn, results
        in executing request_firmware() (on Intel CPUs at least) which doesn't
        work at this stage of resume (we have device interrupts disabled, I/O
        devices are still suspended and so on).
      
        If I'm not mistaken, the "if (uci->valid)" logic means "if that CPU is
        known to us", so before commit 9f15226e microcode_resume_cpu() was
        called for all CPUs already in the system during suspend, which was
        the right thing to do.  The commit changed it so that the CPUs without
        microcode to apply are now treated as "unknown", which is not quite
        right.
      
        The problem this commit attempted to solve has to be handled
        differently."
      
      Bisected-and-requested-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2f99f5c8
  10. 23 Dec 2009, 1 commit
    • arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c: avoid cross-CPU interrupts by... · 4a28395d
      Authored by Andrew Morton
      arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c: avoid cross-CPU interrupts by using smp_call_function_any()
      
      Presently acpi-cpufreq performs the MSR read on the first CPU in the
      mask.  That's inefficient if that CPU differs from the current CPU,
      because it forces a cross-CPU call even though we could have run the
      rdmsr on the current CPU.
      
      So switch to using the new smp_call_function_any(), which performs the
      call on the current CPU if that CPU is present in the mask (it is); a
      sketch of the change follows below.
      
      Cc: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Jaswinder Singh Rajput <jaswinder@kernel.org>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: Zhao Yakui <yakui.zhao@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Len Brown <len.brown@intel.com>
      4a28395d
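      The change boils down to swapping the cross-call helper; a hedged sketch
      of the pattern, where do_drv_read() stands in for acpi-cpufreq's per-cpu
      register-read callback:

          /* Old pattern: always target the first cpu in the mask, which may
           * cost an IPI even when the current cpu is part of the mask. */
          smp_call_function_single(cpumask_first(mask), do_drv_read, &cmd, 1);

          /* New pattern: run on the current cpu when it is in the mask
           * (no IPI), otherwise pick any online cpu from the mask. */
          smp_call_function_any(mask, do_drv_read, &cmd, 1);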
  11. 22 Dec 2009, 5 commits
    • ACPI: processor: unify arch_acpi_processor_cleanup_pdc · 47817254
      Authored by Alex Chiang
      The x86 and ia64 implementations of the function in $subject are
      exactly the same.
      
      Also, since the arch-specific implementations of setting _PDC have
      been completely hollowed out, remove the empty shells.
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Alex Chiang <achiang@hp.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
      47817254
    • ACPI: processor: finish unifying arch_acpi_processor_init_pdc() · 6c5807d7
      Authored by Alex Chiang
      The only thing arch-specific about calling _PDC is what bits get
      set in the input obj_list buffer.
      
      There's no need for several levels of indirection to twiddle those
      bits. Additionally, since we're just messing around with a buffer,
      we can simplify the interface; no need to pass around the entire
      struct acpi_processor * just to get at the buffer.
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Alex Chiang <achiang@hp.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
      6c5807d7
    • ACPI: processor: factor out common _PDC settings · 08ea48a3
      Authored by Alex Chiang
      Both x86 and ia64 initialize _PDC with mostly common bit settings.
      
      Factor out the common settings and leave the arch-specific ones alone.
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Alex Chiang <achiang@hp.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
      08ea48a3
    • ACPI: processor: unify arch_acpi_processor_init_pdc · 407cd87c
      Authored by Alex Chiang
      The x86 and ia64 implementations of arch_acpi_processor_init_pdc()
      are almost exactly the same. The only difference is in what bits
      they set in obj_list buffer.
      
      Combine the boilerplate memory management code, and leave the
      arch-specific bit twiddling in separate implementations.
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Alex Chiang <achiang@hp.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
      407cd87c
    • ACPI: processor: introduce arch_has_acpi_pdc · 1d9cb470
      Authored by Alex Chiang
      Add an arch-dependent helper function that tells us whether we should
      attempt to evaluate _PDC on this machine at all.
      
      The x86 implementation assumes that the CPUs in the machine are
      homogeneous, and that you cannot mix CPUs of different vendors
      (a sketch follows below).
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Alex Chiang <achiang@hp.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
      1d9cb470
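      A minimal sketch of what such an x86 helper can look like under the
      homogeneity assumption; the exact vendor test is an assumption here,
      not taken from the commit text:

          static inline bool arch_has_acpi_pdc(void)
          {
              /* Assume every CPU matches CPU 0 (homogeneous system), so one
               * vendor check decides whether _PDC is worth evaluating. */
              struct cpuinfo_x86 *c = &cpu_data(0);

              return c->x86_vendor == X86_VENDOR_INTEL;
          }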
  12. 21 Dec 2009, 1 commit
    • x86/amd-iommu: Fix initialization failure panic · 0f764806
      Authored by Joerg Roedel
      The assumption that acpi_table_parse() passes the return value of the
      handler function back to the caller recently proved wrong: the
      handler's return value is completely ignored. This makes the AMD
      IOMMU initialization code buggy in a way that could cause a kernel
      panic during initialization. This patch fixes the issue in the AMD
      IOMMU driver (a sketch of the workaround pattern follows below).
      
      Cc: stable@kernel.org
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      0f764806
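      Since acpi_table_parse() ignores the handler's return value, the usual
      workaround is to latch any failure in a flag that the caller checks
      afterwards. A hedged sketch of that pattern; the variable and handler
      names here are illustrative, not the driver's actual identifiers:

          static int __initdata amd_iommu_init_err;   /* illustrative flag */

          static int __init parse_ivrs(struct acpi_table_header *table)
          {
              if (do_the_real_parsing(table) < 0)     /* hypothetical helper */
                  amd_iommu_init_err = -EINVAL;       /* remember the failure */
              return 0;   /* acpi_table_parse() discards this anyway */
          }

          /* caller */
          acpi_table_parse("IVRS", parse_ivrs);
          if (amd_iommu_init_err)
              goto free_and_disable;  /* bail out instead of panicking later */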
  13. 19 Dec 2009, 1 commit
  14. 18 Dec 2009, 4 commits
    • hw-breakpoints: Fix hardware breakpoints -> perf events dependency · 99e8c5a3
      Authored by Frederic Weisbecker
      The kbuild select command doesn't propagate through config
      dependencies.
      
      Hence the current hardware breakpoint config rules can't ensure
      that perf is never disabled underneath us.
      
      We have:
      
      config X86
      	selects HAVE_HW_BREAKPOINTS
      
      config HAVE_HW_BREAKPOINTS
      	select PERF_EVENTS
      
      config PERF_EVENTS
      	[...]
      
      x86 will select the breakpoints, but that won't propagate to perf
      events. The user can still disable the latter, even though the
      breakpoints require it.
      
      What we need is:
      
       - x86 selects HAVE_HW_BREAKPOINTS and PERF_EVENTS
       - HAVE_HW_BREAKPOINTS depends on PERF_EVENTS
      
      so that we ensure PERF_EVENTS is enabled and frozen for x86.
      
      This fixes the following kind of build error:
      
       In file included from arch/x86/kernel/hw_breakpoint.c:31:
       include/linux/hw_breakpoint.h: In function 'hw_breakpoint_addr':
       include/linux/hw_breakpoint.h:39: error: 'struct perf_event' has no member named 'attr'
      
      v2: Select also ANON_INODES from x86, required for perf
      Reported-by: Cyrill Gorcunov <gorcunov@gmail.com>
      Reported-by: Michal Marek <mmarek@suse.cz>
      Reported-by: Andrew Randrianasulu <randrik_a@yahoo.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: K.Prasad <prasad@linux.vnet.ibm.com>
      LKML-Reference: <1261010034-7786-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      99e8c5a3
    • x86, irq: Allow 0xff for /proc/irq/[n]/smp_affinity on an 8-cpu system · 18374d89
      Authored by Suresh Siddha
      John Blackwood reported:
      > on an older Dell PowerEdge 6650 system with 8 cpus (4 are hyper-threaded),
      > and  32 bit (x86) kernel, once you change the irq smp_affinity of an irq
      > to be less than all cpus in the system, you can never really change the
      > irq smp_affinity back to be all cpus in the system (0xff) again,
      > even though no error status is returned on the "/bin/echo ff >
      > /proc/irq/[n]/smp_affinity" operation.
      >
      > This is due to the fact that BAD_APICID has the same value as
      > all cpus (0xff) on 32-bit kernels, and thus the value returned from
      > set_desc_affinity() via the cpu_mask_to_apicid_and() function is treated
      > as a failure in set_ioapic_affinity_irq_desc(), and no affinity changes
      > are made.
      
      set_desc_affinity() already checks whether the incoming cpu mask
      intersects the cpu online mask. So there is no need for the apic op
      cpu_mask_to_apicid_and() to check again and return BAD_APICID.
      
      Remove the BAD_APICID return value from cpu_mask_to_apicid_and()
      and also fix set_desc_affinity() to return -1 instead of using
      BAD_APICID to represent error conditions (cpu_mask_to_apicid_and()
      can return logical or physical apicid values, and BAD_APICID really
      only represents a bad physical apic id). A sketch of the resulting
      shape follows below.
      Reported-by: John Blackwood <john.blackwood@ccur.com>
      Root-caused-by: John Blackwood <john.blackwood@ccur.com>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <1261103386.2535.409.camel@sbs-t61>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      18374d89
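      A hedged sketch of the resulting shape (function bodies abbreviated;
      the exact signatures in the tree may differ): set_desc_affinity()
      reports failure with a plain -1, and cpu_mask_to_apicid_and() no
      longer second-guesses the mask or returns BAD_APICID:

          /* set_desc_affinity(): the only validity check that is needed. */
          if (!cpumask_intersects(mask, cpu_online_mask))
              return -1;                      /* error, not BAD_APICID */

          cpumask_copy(desc->affinity, mask);

          /* cpu_mask_to_apicid_and() just computes the (logical or physical)
           * apicid for the intersection; it never returns BAD_APICID. */
          dest = apic->cpu_mask_to_apicid_and(desc->affinity, cfg->domain);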
    • x86: Fix objdump version check in arch/x86/tools/chkobjdump.awk · 8c634507
      Authored by akpm@linux-foundation.org
      It says
      
      Warning: objdump version  is older than 2.19
      Warning: Skipping posttest.
      
      because it used the wrong field from `objdump -v':
      
      akpm:/usr/src/25> /opt/crosstool/gcc-4.0.2-glibc-2.3.6/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-objdump -v
      GNU objdump 2.16.1
      Copyright 2005 Free Software Foundation, Inc.
      This program is free software; you may redistribute it under the terms of
      the GNU General Public License.  This program has absolutely no warranty.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <200912172326.nBHNQaQl024796@imap1.linux-foundation.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      8c634507
    • x86: Reenable TSC sync check at boot, even with NONSTOP_TSC · 6c56ccec
      Authored by Venkatesh Pallipadi
      Commit 83ce4009 made the following change:
      "If the TSC is constant and non-stop, also set it reliable."
      
      But there seem to be a few systems that end up with TSC warp across
      sockets, depending on how the cpus come out of reset. Skipping the TSC
      sync test on such systems may result in time inconsistency later.
      
      So, reenable the TSC sync test even on constant and non-stop TSC
      systems. Set sched_clock_stable to 1 by default and reset it in
      mark_tsc_unstable if the TSC sync fails (a sketch follows below).
      
      This change still gives the perf benefit mentioned in 83ce4009 for
      systems where the TSC is reliable.
      Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <20091217202702.GA18015@linux-os.sc.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      6c56ccec
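      The sched_clock_stable handling described above reduces to a default-on
      flag that the sync-failure path clears; roughly (where the variable
      actually lives in the tree may differ):

          int sched_clock_stable = 1;   /* assume a stable TSC until proven otherwise */

          void mark_tsc_unstable(char *reason)
          {
              if (!tsc_unstable) {
                  tsc_unstable = 1;
                  sched_clock_stable = 0;   /* TSC sync check failed */
                  /* ...existing clocksource downgrade... */
              }
          }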
  15. 17 Dec 2009, 11 commits
    • x86/ptrace: make genregs[32]_get/set more robust · 04a1e62c
      Authored by Linus Torvalds
      The loop condition is fragile: we compare an unsigned value to zero, and
      then decrement it by something larger than one in the loop.  All the
      callers should be passing in appropriately aligned buffer lengths, but
      it's better not to rely on that and to have appropriate defensive loop
      limits (a sketch follows below).
      Acked-by: Roland McGrath <roland@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      04a1e62c
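      A hedged sketch of the defensive bounds the commit describes: the copy
      loop steps by the register size and stops as soon as less than one
      register's worth of bytes remains, so an odd count can never wrap the
      unsigned counter (names abbreviated from genregs_get()):

          if (kbuf) {
              unsigned long *k = kbuf;

              /* Never trust count to hit exactly zero: stop while at least
               * one full register still fits, so "count -= sizeof(*k)" can
               * never underflow the unsigned value. */
              while (count >= sizeof(*k)) {
                  *k++ = getreg(target, pos);
                  count -= sizeof(*k);
                  pos += sizeof(*k);
              }
          }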
    • x86: Don't use POSIX character classes in gen-insn-attr-x86.awk · 4beb3d6d
      Authored by Roland Dreier
      Not all awk implementations (including the default awk in Ubuntu 9.10)
      support POSIX character classes.  Since x86-opcode-map.txt is plain
      ASCII, we can just use explicit ranges for lower case, alphabetic, and
      alphanumeric characters instead.
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
      LKML-Reference: <adabphy750b.fsf@roland-alpha.cisco.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      4beb3d6d
    • perf events, x86/stacktrace: Fix performance/softlockup by providing a special... · 06d65bda
      Authored by Frederic Weisbecker
      perf events, x86/stacktrace: Fix performance/softlockup by providing a special frame pointer-only stack walker
      
      It's just wasteful for stacktrace users like perf to walk through
      every entry on the stack when they only accept reliable ones,
      i.e. entries that the frame pointer validates.
      
      Since perf requires purely reliable stacktraces, it needs a
      frame-pointer-only stack walker to optimize the stacktrace
      processing.
      
      This might solve some near-lockup scenarios that can be triggered
      by call-graph tracing timer events.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <1261024834-5336-2-git-send-regression-fweisbec@gmail.com>
      [ v2: fix for modular builds and small detail tidyup ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      06d65bda
    • perf events, x86/stacktrace: Make stack walking optional · 61c1917f
      Authored by Frederic Weisbecker
      The current print_context_stack helper that does the stack-walking
      job is good for usual stacktraces, as it walks through the whole
      stack and reports even addresses that look unreliable, which is
      nice when we don't have frame pointers, for example.
      
      But we have users like perf that only require reliable
      stacktraces, and those may want a more adapted stack walker, so
      let's make this function a callback in stacktrace_ops that users
      can tune for their needs (a sketch follows below).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <1261024834-5336-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      61c1917f
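      A hedged sketch of the callback arrangement from the two stacktrace
      commits above: stacktrace_ops grows a pluggable walker, so perf can use
      a frame-pointer-only variant while other users keep the exhaustive one
      (the struct is abbreviated and the member types are an approximation of
      that era's code):

          struct stacktrace_ops {
              /* ...existing callbacks (warning, address, stack)... */

              /* Pluggable stack walker: the default reports every text
               * address it finds; the bp variant follows the frame-pointer
               * chain and reports only reliable entries. */
              unsigned long (*walk_stack)(struct thread_info *tinfo,
                                          unsigned long *stack, unsigned long bp,
                                          const struct stacktrace_ops *ops,
                                          void *data, unsigned long *end,
                                          int *graph);
          };

          /* perf's backtrace ops would then pick the strict walker: */
          static const struct stacktrace_ops backtrace_ops = {
              /* ... */
              .walk_stack = print_context_stack_bp,   /* fp-chain only */
          };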
    • cpumask: rename tsk_cpumask to tsk_cpus_allowed · a4636818
      Authored by Rusty Russell
      No one uses this wrapper yet, and Ingo asked that it be kept consistent
      with current task_struct usage.
      
      (One user crept in via linux-next: fixed)
      
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Tejun Heo <tj@kernel.org>
      a4636818
    • x86: Increase MAX_EARLY_RES; insufficient on 32-bit NUMA · 6a1e008a
      Authored by Yinghai Lu
      Due to recent changes in wakeup and mptable handling, we run out of
      early reservations on 32-bit NUMA.  Thus, adjust the available number.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <4B22D754.2020706@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      6a1e008a
    • x86: Fix checking of SRAT when node 0 ram is not from 0 · 32996250
      Authored by Yinghai Lu
      Found one system that boots from socket1 instead of socket0; its SRAT gets rejected:
      
      [    0.000000] SRAT: Node 1 PXM 0 0-a0000
      [    0.000000] SRAT: Node 1 PXM 0 100000-80000000
      [    0.000000] SRAT: Node 1 PXM 0 100000000-2080000000
      [    0.000000] SRAT: Node 0 PXM 1 2080000000-4080000000
      [    0.000000] SRAT: Node 2 PXM 2 4080000000-6080000000
      [    0.000000] SRAT: Node 3 PXM 3 6080000000-8080000000
      [    0.000000] SRAT: Node 4 PXM 4 8080000000-a080000000
      [    0.000000] SRAT: Node 5 PXM 5 a080000000-c080000000
      [    0.000000] SRAT: Node 6 PXM 6 c080000000-e080000000
      [    0.000000] SRAT: Node 7 PXM 7 e080000000-10080000000
      ...
      [    0.000000] NUMA: Allocated memnodemap from 500000 - 701040
      [    0.000000] NUMA: Using 20 for the hash shift.
      [    0.000000] Adding active range (0, 0x2080000, 0x4080000) 0 entries of 3200 used
      [    0.000000] Adding active range (1, 0x0, 0x96) 1 entries of 3200 used
      [    0.000000] Adding active range (1, 0x100, 0x7f750) 2 entries of 3200 used
      [    0.000000] Adding active range (1, 0x100000, 0x2080000) 3 entries of 3200 used
      [    0.000000] Adding active range (2, 0x4080000, 0x6080000) 4 entries of 3200 used
      [    0.000000] Adding active range (3, 0x6080000, 0x8080000) 5 entries of 3200 used
      [    0.000000] Adding active range (4, 0x8080000, 0xa080000) 6 entries of 3200 used
      [    0.000000] Adding active range (5, 0xa080000, 0xc080000) 7 entries of 3200 used
      [    0.000000] Adding active range (6, 0xc080000, 0xe080000) 8 entries of 3200 used
      [    0.000000] Adding active range (7, 0xe080000, 0x10080000) 9 entries of 3200 used
      [    0.000000] SRAT: PXMs only cover 917504MB of your 1048566MB e820 RAM. Not used.
      [    0.000000] SRAT: SRAT not used.
      
      The early_node_map is not sorted, because node0 with a non-zero start comes first.
      
      So sort it right away after all regions are registered (a sketch follows below).
      
      This also fixes a regression introduced by 8716273c (x86: Export srat physical topology).
      
      -v2: make it more robust, handling the cross-node case like node0 [0,4g), [8,12g) and node1 [4g, 8g), [12g, 16g)
      -v3: update comments.
      Reported-and-tested-by: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <4B2579D2.3010201@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      32996250
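      The "sort it right away" step amounts to ordering early_node_map by
      start address once all SRAT regions have been added; a hedged sketch
      using the kernel's generic sort() (the entry layout and counter name
      follow that era's mm/page_alloc.c and are assumptions here):

          #include <linux/sort.h>

          static int __init cmp_node_active_region(const void *a, const void *b)
          {
              const struct node_active_region *ra = a, *rb = b;

              if (ra->start_pfn < rb->start_pfn)
                  return -1;
              if (ra->start_pfn > rb->start_pfn)
                  return 1;
              return 0;
          }

          /* after the last region has been registered: */
          sort(early_node_map, nr_nodemap_entries,
               sizeof(struct node_active_region), cmp_node_active_region, NULL);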
    • x86, cpuid: Add "volatile" to asm in native_cpuid() · 45a94d7c
      Authored by Suresh Siddha
      xsave_cntxt_init() does something like:
      
      	cpuid(0xd, ..);	// find out what features FP/SSE/.. etc are supported
      
      	xsetbv();	// enable the features known to OS
      
      	cpuid(0xd, ..);	// find out the size of the context for features enabled
      
      Depending on what features get enabled in xsetbv(), the value of
      cpuid.eax=0xd.ecx=0.ebx changes correspondingly (representing the
      size of the context that is enabled).
      
      As we don't use the volatile keyword for native_cpuid(), gcc 4.1.2
      optimizes away the second cpuid, and the kernel continues to use the
      cpuid information obtained before xsetbv(), ultimately leading to a
      kernel crash on processors supporting more state than the legacy FP/SSE.
      
      Add "volatile" for native_cpuid() (a sketch follows below).
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <1261009542.2745.55.camel@sbs-t61.sc.intel.com>
      Cc: stable@kernel.org
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      45a94d7c
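      The fix itself is a one-word change to the inline asm; this is roughly
      what native_cpuid() looks like with the qualifier added:

          static inline void native_cpuid(unsigned int *eax, unsigned int *ebx,
                                          unsigned int *ecx, unsigned int *edx)
          {
              /* "volatile" keeps gcc from eliding or merging the second cpuid
               * issued after xsetbv(), whose ebx output really does change. */
              asm volatile("cpuid"
                  : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
                  : "0" (*eax), "2" (*ecx));
          }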
    • x86, msr: msrs_alloc/free for CONFIG_SMP=n · 6ede31e0
      Authored by Borislav Petkov
      Randy Dunlap reported the following build error:
      
      "When CONFIG_SMP=n, CONFIG_X86_MSR=m:
      
      ERROR: "msrs_free" [drivers/edac/amd64_edac_mod.ko] undefined!
      ERROR: "msrs_alloc" [drivers/edac/amd64_edac_mod.ko] undefined!"
      
      This is due to the fact that <arch/x86/lib/msr.c> is conditional on
      CONFIG_SMP, and in the UP case we have only the stubs in the header.
      Fork off the SMP functionality into a new file (msr-smp.c) and build
      msrs_{alloc,free} unconditionally (a sketch follows below).
      Reported-by: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Borislav Petkov <petkovbb@gmail.com>
      LKML-Reference: <20091216231625.GD27228@liondog.tnic>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      6ede31e0
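      A hedged sketch of the unconditionally built helpers after the split;
      they only wrap the per-cpu allocator, so nothing in them depends on
      CONFIG_SMP:

          struct msr *msrs_alloc(void)
          {
              struct msr *msrs = alloc_percpu(struct msr);

              if (!msrs) {
                  pr_warning("%s: error allocating msrs\n", __func__);
                  return NULL;
              }
              return msrs;
          }
          EXPORT_SYMBOL(msrs_alloc);

          void msrs_free(struct msr *msrs)
          {
              free_percpu(msrs);
          }
          EXPORT_SYMBOL(msrs_free);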
    • x86, amd: Get multi-node CPU info from NodeId MSR instead of PCI config space · 9d260ebc
      Authored by Andreas Herrmann
      Use the NodeId MSR to get the NodeId and the number of nodes per processor.
      Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
      LKML-Reference: <20091216144355.GB28798@alberich.amd.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      9d260ebc
    • PCI: fix section mismatch on update_res() · 57148688
      Authored by Jiri Slaby
      Re-mark update_res from __init to __devinit, as it is also called
      from __devinit functions (a sketch follows below).
      
      This patch removes the following warning message:
      
        WARNING: vmlinux.o(.devinit.text+0x774a): Section mismatch
        in reference from the function pci_root_bus_res() to the
        function .init.text:update_res()
        The function __devinit pci_root_bus_res() references
        a function __init update_res().
        If update_res is only used by pci_root_bus_res then
        annotate update_res with a matching annotation.
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: Aristeu Sergio <arozansk@redhat.com>
      Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
      Cc: linux-pci@vger.kernel.org
      Cc: x86@kernel.org
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      57148688
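      The fix is just the annotation swap the warning asks for; schematically
      (the parameter list shown is an approximation and may not match the
      tree exactly):

          /* Before: __init, so the function is discarded after boot even
           * though the __devinit pci_root_bus_res() can still reference it. */
          static void __init update_res(struct pci_root_info *info, size_t start,
                                        size_t end, unsigned long flags, int merge);

          /* After: __devinit, matching its callers and silencing the warning. */
          static void __devinit update_res(struct pci_root_info *info, size_t start,
                                           size_t end, unsigned long flags, int merge);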