1. 16 June 2011, 1 commit
  2. 13 May 2011, 1 commit
  3. 26 January 2011, 1 commit
    • x86: Move llc_shared_map out of cpu_info · b3d7336d
      Authored by Yinghai Lu
      cpu_info is already a per_cpu variable, so we can take llc_shared_map
      out of cpu_info and declare it as a per_cpu variable directly.

      Later references then become simple and direct instead of having to
      dig the mask out of cpu_info first.

      This also makes smp_store_cpu_info() much simpler, since it no longer
      needs the save-and-restore trick. (A sketch of the resulting per_cpu
      declaration follows this entry.)
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Hans Rosenfeld <hans.rosenfeld@amd.com>
      Cc: Alok N Kataria <akataria@vmware.com>
      Cc: Stephen Hemminger <shemminger@vyatta.com>
      Cc: Hans J. Koch <hjk@linutronix.de>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <4D3A16E8.5020608@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b3d7336d
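      A minimal sketch of the resulting layout, assuming the standalone
      per-CPU variable is named cpu_llc_shared_map and is reached through a
      small helper (both names are assumptions, not taken from the commit
      message):

        #include <linux/percpu.h>
        #include <linux/cpumask.h>

        /* After the move: a standalone per-CPU variable instead of a
         * member buried inside the per-CPU cpu_info structure. */
        DEFINE_PER_CPU(cpumask_var_t, cpu_llc_shared_map);

        /* References no longer need to go through cpu_info first. */
        static inline struct cpumask *cpu_llc_shared_mask(int cpu)
        {
                return per_cpu(cpu_llc_shared_map, cpu);
        }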
  4. 26 October 2010, 4 commits
  5. 20 October 2010, 1 commit
    • apic, x86: Use BIOS settings for IBS and MCE threshold interrupt LVT offsets · 27afdf20
      Authored by Robert Richter
      We want the BIOS to set up the EILVT APIC registers. Previously the
      offsets were hardcoded and the BIOS settings were overwritten by the
      OS. Now the MCE threshold and IBS subsystems determine the LVT
      offset from the registers the BIOS has set up. If the BIOS setup is
      buggy on a family 10h system, a workaround still enables IBS. If the
      OS detects an invalid register setup, a "[Firmware Bug]:" error
      message is reported.

      We also need this change for upcoming CPU families. (A sketch of the
      offset check follows this entry.)
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      LKML-Reference: <1286360874-1471-3-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      27afdf20
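      A rough sketch of the kind of check described above, assuming the
      BIOS-programmed entry is read back through the standard apic_read()
      and APIC_EILVTn() accessors; the helper name and exact acceptance
      policy are illustrative, not the literal patch:

        #include <linux/kernel.h>
        #include <linux/smp.h>
        #include <asm/apic.h>

        /* Accept the BIOS-programmed EILVT entry when it is unprogrammed
         * (free for the OS) or already matches what the subsystem wants;
         * report a firmware bug instead of silently overwriting it. */
        static bool eilvt_entry_usable(unsigned int offset, unsigned int new)
        {
                unsigned int old = apic_read(APIC_EILVTn(offset));

                if ((old & ~APIC_EILVT_MASKED) == 0)    /* vector 0: unprogrammed */
                        return true;

                if (old != new)
                        pr_err(FW_BUG "cpu %d, invalid EILVT offset %u "
                               "(was 0x%x, want 0x%x)\n",
                               smp_processor_id(), offset, old, new);

                return old == new;
        }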
  6. 11 October 2010, 1 commit
    • x86, AMD, MCE thresholding: Fix the MCi_MISCj iteration order · 6dcbfe4f
      Authored by Borislav Petkov
      This fixes possible cases of not collecting valid error info in
      the MCE error thresholding groups on F10h hardware.
      
      The current code contains a subtle problem of checking only the
      Valid bit of MSR0000_0413 (which is MC4_MISC0 - DRAM
      thresholding group) in its first iteration and breaking out if
      the bit is cleared.
      
      But (!) this MSR contains an offset value, BlkPtr[31:24], which
      points to the remaining MSRs in this thresholding group, and those
      might contain valid information too. If we bail out after checking
      only the Valid bit in the first MSR, without also looking at the
      block pointer, we miss that other information.

      The point is that MC4_MISC0[BlkPtr] is not predicated on
      MCi_STATUS[MiscV] or MC4_MISC0[Valid], so it should be checked
      before iterating over the MCi_MISCj thresholding group,
      irrespective of the MC4_MISC0[Valid] setting. (A sketch of the
      fixed check order follows this entry.)
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6dcbfe4f
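      A sketch of the fixed order of checks, using only what the message
      states (MSR 0000_0413 is MC4_MISC0 and BlkPtr lives in bits 31:24);
      walk_threshold_block() is a hypothetical helper standing in for the
      rest of the group walk:

        #include <linux/types.h>
        #include <asm/msr.h>

        #define MSR_MC4_MISC0           0x00000413
        #define MC4_MISC_BLKPTR(low)    (((low) >> 24) & 0xff) /* BlkPtr[31:24] */

        static void walk_threshold_block(u32 blkptr);   /* hypothetical */

        static void check_thresholding_group(void)
        {
                u32 low, high;

                rdmsr(MSR_MC4_MISC0, low, high);

                /* Do not bail out just because MC4_MISC0[Valid] is clear:
                 * BlkPtr is not predicated on it and, when non-zero, points
                 * at further MISC registers in this group that may still
                 * hold valid error information. */
                if (MC4_MISC_BLKPTR(low))
                        walk_threshold_block(MC4_MISC_BLKPTR(low));
        }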
  7. 05 September 2010, 1 commit
    • x86, mcheck: Avoid duplicate sysfs links/files for thresholding banks · 1389298f
      Authored by Andreas Herrmann
      kobject_add_internal() failed for threshold_bank2 with -EEXIST;
      don't try to register things with the same name in the same
      directory:
      
        Pid: 1, comm: swapper Tainted: G        W  2.6.31 #1
        Call Trace:
        [<ffffffff81161b07>] ? kobject_add_internal+0x156/0x180
        [<ffffffff81161cc0>] ? kobject_add+0x66/0x6b
        [<ffffffff81161793>] ? kobject_init+0x42/0x82
        [<ffffffff81161cf9>] ? kobject_create_and_add+0x34/0x63
        [<ffffffff81393963>] ? threshold_create_bank+0x14f/0x259
        [<ffffffff8139310a>] ? mce_create_device+0x8d/0x1b8
        [<ffffffff81646497>] ? threshold_init_device+0x3f/0x80
        [<ffffffff81646458>] ? threshold_init_device+0x0/0x80
        [<ffffffff81009050>] ? do_one_initcall+0x4f/0x143
        [<ffffffff816413a0>] ? kernel_init+0x14c/0x1a2
        [<ffffffff8100c8da>] ? child_rip+0xa/0x20
        [<ffffffff81641254>] ? kernel_init+0x0/0x1a2
        [<ffffffff8100c8d0>] ? child_rip+0x0/0x20
        kobject_create_and_add: kobject_add error: -17
      
      (Probably the for_each_cpu loop should be removed entirely. A generic
      check-before-create sketch follows this entry.)
      Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
      LKML-Reference: <20100827092006.GB5348@loge.amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1389298f
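      A generic illustration of the check-before-create pattern the message
      asks for; the cached pointer and function name are invented for the
      example, and this is not the actual fix:

        #include <linux/errno.h>
        #include <linux/kobject.h>

        static struct kobject *bank_kobj;       /* hypothetical cache */

        /* Register the directory only once; later callers reuse the
         * existing kobject instead of hitting kobject_add()'s -EEXIST
         * for a second registration under the same name. */
        static int create_bank_dir(struct kobject *parent, const char *name)
        {
                if (bank_kobj)
                        return 0;

                bank_kobj = kobject_create_and_add(name, parent);
                return bank_kobj ? 0 : -ENOMEM;
        }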
  8. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Authored by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script is used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:

      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. gfp.h if only gfp is
        used, slab.h if slab is used.

      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include so that its order conforms
        to its surroundings.  It is put in the include block that contains
        core kernel includes, in the same order as the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.

      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was checked manually.  Some didn't need the inclusion,
         some needed a manual addition, and for others it was more
         appropriate to add it to an implementation .h or embedding .c
         file.  This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that they could be applied
         as a separate patch and serve as a bisection point.
      
      Given that I had only a couple of failures from the tests in step 6,
      I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of the
      specific arch. (A minimal before/after include example follows this
      entry.)
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
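      A minimal before/after example of what the sweep does to an individual
      file (the file contents are invented for illustration):

        /* Before: the file used kmalloc()/kfree() without including slab.h;
         * it built only because percpu.h (via sched.h or module.h) dragged
         * slab.h and gfp.h in implicitly. */

        /* After the sweep: include the header for what is actually used. */
        #include <linux/slab.h>         /* kmalloc(), kfree(), gfp flags */

        static int *alloc_counter(void)
        {
                return kmalloc(sizeof(int), GFP_KERNEL);
        }

        static void free_counter(int *p)
        {
                kfree(p);
        }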
  9. 08 March 2010, 1 commit
  10. 20 September 2009, 1 commit
  11. 04 September 2009, 1 commit
  12. 24 June 2009, 1 commit
    • percpu: cleanup percpu array definitions · 204fba4a
      Authored by Tejun Heo
      Currently, the following three different ways to define percpu arrays
      are in use.
      
      1. DEFINE_PER_CPU(elem_type[array_len], array_name);
      2. DEFINE_PER_CPU(elem_type, array_name[array_len]);
      3. DEFINE_PER_CPU(elem_type, array_name)[array_len];
      
      Unify on #1, which correctly separates the roles of the two parameters
      and thus allows more flexibility in the way percpu variables are
      defined. (A short usage sketch follows this entry.)
      
      [ Impact: cleanup ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: linux-mm@kvack.org
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: David S. Miller <davem@davemloft.net>
      204fba4a
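      A short usage sketch of form #1, the one the patch standardizes on;
      the array name and length are invented for the example:

        #include <linux/percpu.h>

        /* Form #1: the array dimension belongs to the element type (first
         * argument); the second argument is just the variable name. */
        DEFINE_PER_CPU(int[4], poll_counts);

        static void count_poll(int cpu, int idx)
        {
                per_cpu(poll_counts, cpu)[idx]++;
        }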
  13. 17 June 2009, 1 commit
  14. 29 May 2009, 4 commits
  15. 18 March 2009, 1 commit
  16. 13 March 2009, 1 commit
  17. 25 February 2009, 2 commits
  18. 21 February 2009, 1 commit
    • x86, mce: remove incorrect __cpuinit for mce_cpu_features() · cc3ca220
      Authored by H. Peter Anvin
      Impact: Bug fix on UP
      
      Checkin 6ec68bff:
          x86, mce: reinitialize per cpu features on resume
      
      introduced a call to mce_cpu_features() in the resume path, in order
      for the MCE machinery to get properly reinitialized after a resume.
      However, this function (and its successors) was flagged __cpuinit,
      which becomes __init on UP configurations (on SMP, suspend/resume
      requires CPU hotplug, so the problem would not be seen there).

      Remove the offending __cpuinit annotations from mce_cpu_features() and
      its successor functions. (A sketch of the change follows this entry.)
      
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      cc3ca220
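      A sketch of the change; the vendor-dispatch body is reproduced from
      memory and may not match the file exactly. The functional point is
      only that the __cpuinit marker is gone, so on UP kernels the function
      is not discarded after boot and can still be called on resume:

        /* was: static void __cpuinit mce_cpu_features(struct cpuinfo_x86 *c) */
        static void mce_cpu_features(struct cpuinfo_x86 *c)
        {
                switch (c->x86_vendor) {
                case X86_VENDOR_INTEL:
                        mce_intel_feature_init(c);
                        break;
                case X86_VENDOR_AMD:
                        mce_amd_feature_init(c);
                        break;
                default:
                        break;
                }
        }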
  19. 20 February 2009, 2 commits
    • x86, mce: separate correct machine check poller and fatal exception handler · b79109c3
      Authored by Andi Kleen
      Impact: cleanup, performance enhancement
      
      The machine check poller is diverging more and more from the fatal
      exception handler. Instead of adding more special cases, separate the
      code paths completely. The corrected-error poll path is actually quite
      simple, and this doesn't result in much code duplication.

      This makes both handlers much easier to read and results in
      cleaner code flow.  The exception handler now only needs to care
      about uncorrected errors, which also simplifies the handling of multiple
      errors. The corrected poller now always runs in standard interrupt
      context and does not need to do anything special to handle NMI context
      (see the skeleton after this entry).
      
      Minor behaviour changes:
      - MCG status is now not cleared on polling.
      - Only the banks which had corrected errors get cleared on polling
      - The exception handler only clears banks with errors now
      
      v2: Forward port to new patch order. Add "uc" argument.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      b79109c3
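      A skeleton of the corrected-error poll path as described above; this
      is illustrative rather than the literal function, and 'banks' stands
      in for the bank count detected at initialization:

        #include <asm/mce.h>
        #include <asm/msr.h>

        static int banks;       /* number of MCE banks detected at init */

        /* Poll for corrected errors only: runs in normal interrupt context,
         * skips uncorrected errors (left to the exception handler), clears
         * only the banks it consumed, and never touches MCG_STATUS. */
        static void corrected_error_poll(void)
        {
                u64 status;
                int i;

                for (i = 0; i < banks; i++) {
                        rdmsrl(MSR_IA32_MCx_STATUS(i), status);
                        if (!(status & MCI_STATUS_VAL))
                                continue;
                        if (status & MCI_STATUS_UC)
                                continue;

                        /* ... log the corrected error here ... */

                        wrmsrl(MSR_IA32_MCx_STATUS(i), 0);
                }
        }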
    • x86, mce: factor out duplicated struct mce setup into one function · b5f2fa4e
      Authored by Andi Kleen
      Impact: cleanup
      
      This merely factors out the duplicated code that sets up the initial
      struct mce state into a single function (sketched below).
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      b5f2fa4e
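      A sketch of the factored-out helper; the exact field set is recalled
      from memory and may differ slightly from the patch:

        #include <linux/smp.h>
        #include <linux/string.h>
        #include <asm/mce.h>
        #include <asm/msr.h>

        /* One place to establish the initial struct mce state instead of
         * open-coding it at every site that logs an error. */
        void mce_setup(struct mce *m)
        {
                memset(m, 0, sizeof(struct mce));
                m->cpu = smp_processor_id();
                rdtscll(m->tsc);
        }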
  20. 18 February 2009, 1 commit
  21. 17 February 2009, 1 commit
  22. 12 January 2009, 1 commit
  23. 07 January 2009, 1 commit
    • x86: fix section mismatch warnings in mcheck/mce_amd_64.c · 51d7a139
      Authored by Leonardo Potenza
      Mark the function local_allocate_threshold_blocks() with __cpuinit,
      in order to remove the following section mismatch messages:
      
      WARNING: arch/x86/kernel/cpu/mcheck/built-in.o(.text+0x1363): Section mismatch in reference from the function local_allocate_threshold_blocks() to the function .cpuinit.text:allocate_threshold_blocks()
      The function local_allocate_threshold_blocks() references
      the function __cpuinit allocate_threshold_blocks().
      This is often because local_allocate_threshold_blocks lacks a __cpuinit
      annotation or the annotation of allocate_threshold_blocks is wrong.
      
      WARNING: arch/x86/kernel/cpu/built-in.o(.text+0x1def): Section mismatch in reference from the function local_allocate_threshold_blocks() to the function .cpuinit.text:allocate_threshold_blocks()
      The function local_allocate_threshold_blocks() references
      the function __cpuinit allocate_threshold_blocks().
      This is often because local_allocate_threshold_blocks lacks a __cpuinit
      annotation or the annotation of allocate_threshold_blocks is wrong.
      
      WARNING: arch/x86/kernel/built-in.o(.text+0xef2b): Section mismatch in reference from the function local_allocate_threshold_blocks() to the function .cpuinit.text:allocate_threshold_blocks()
      The function local_allocate_threshold_blocks() references
      the function __cpuinit allocate_threshold_blocks().
      This is often because local_allocate_threshold_blocks lacks a __cpuinit
      annotation or the annotation of allocate_threshold_blocks is wrong.
      
      All the call sites of this function are already __cpuinit, and all the
      functions it calls are __cpuinit as well. (A generic illustration of
      the section rule follows this entry.)
      Signed-off-by: Leonardo Potenza <lpotenza@inwind.it>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      51d7a139
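      A generic illustration of the rule modpost is enforcing here (the
      function names are made up): a caller that only runs during CPU
      bring-up and calls a __cpuinit function should itself be __cpuinit,
      so both land in .cpuinit.text:

        #include <linux/init.h>

        static int __cpuinit do_cpu_setup(unsigned int cpu)
        {
                return 0;
        }

        /* The fix is the annotation on the caller: without __cpuinit this
         * call would reference .cpuinit.text from .text and trigger the
         * section mismatch warnings quoted above. */
        static int __cpuinit cpu_setup_wrapper(unsigned int cpu)
        {
                return do_cpu_setup(cpu);
        }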
  24. 17 December 2008, 2 commits
  25. 23 August 2008, 1 commit
    • x86 MCE: Fix CPU hotplug problem with multiple multicore AMD CPUs · 8735728e
      Authored by Rafael J. Wysocki
      During CPU hot-remove the sysfs directory created by
      threshold_create_bank(), defined in
      arch/x86/kernel/cpu/mcheck/mce_amd_64.c, has to be removed before
      its parent directory, created by mce_create_device(), defined in
      arch/x86/kernel/cpu/mcheck/mce_64.c.  Moreover, when the CPU in
      question is hotplugged again, obviously the latter has to be created
      before the former.  At present, the right ordering is not enforced,
      because all of these operations are carried out by CPU hotplug
      notifiers which are not appropriately ordered with respect to each
      other.  This leads to serious problems on systems with two or more
      multicore AMD CPUs, among other things during suspend and hibernation.
      
      Fix the problem by placing threshold bank CPU hotplug callbacks in
      mce_cpu_callback(), so that they are invoked at the right places,
      if defined.  Additionally, use kobject_del() to remove the sysfs
      directory associated with the kobject created by
      kobject_create_and_add() in threshold_create_bank(), to prevent the
      kernel from crashing during CPU hotplug operations on systems with
      two or more multicore AMD CPUs. (A sketch of the callback ordering
      follows this entry.)
      
      This patch fixes bug #11337.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Andi Kleen <andi@firstfloor.org>
      Tested-by: Mark Langsdorf <mark.langsdorf@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8735728e
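      A sketch of the ordering the fix establishes, assuming the
      thresholding code exposes a hook like threshold_cpu_callback that the
      generic MCE hotplug notifier invokes; the names and callback body are
      illustrative:

        #include <linux/cpu.h>
        #include <linux/notifier.h>

        /* Hook filled in by the AMD thresholding code when present (assumed). */
        void (*threshold_cpu_callback)(unsigned long action, unsigned int cpu);

        static int mce_cpu_callback_sketch(struct notifier_block *nfb,
                                           unsigned long action, void *hcpu)
        {
                unsigned int cpu = (unsigned long)hcpu;

                switch (action) {
                case CPU_ONLINE:
                        mce_create_device(cpu);         /* parent dir first */
                        if (threshold_cpu_callback)
                                threshold_cpu_callback(action, cpu);
                        break;
                case CPU_DEAD:
                        if (threshold_cpu_callback)
                                threshold_cpu_callback(action, cpu);
                        mce_remove_device(cpu);         /* parent dir last */
                        break;
                }
                return NOTIFY_OK;
        }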
  26. 24 May 2008, 1 commit
  27. 20 April 2008, 1 commit
  28. 30 January 2008, 4 commits