1. 21 Feb 2010, 1 commit
    • MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself · 4b3073e1
      Committed by Russell King
      On VIVT ARM, when we have multiple shared mappings of the same file
      in the same MM, we need to ensure that we have coherency across all
      copies.  We do this via make_coherent() by making the pages
      uncacheable.
      
      This used to work fine, until we allowed highmem with highpte - we
      now have a page table which is mapped as required, and is not available
      for modification via update_mmu_cache().
      
      Ralf Baechle suggested getting rid of the PTE value passed to
      update_mmu_cache():
      
        On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
        to construct a pointer to the pte again.  Passing a pte_t * is much
        more elegant.  Maybe we might even replace the pte argument with the
        pte_t?
      
      Ben Herrenschmidt would also like the pte pointer for PowerPC:
      
        Passing the ptep in there is exactly what I want.  I want that
        -instead- of the PTE value, because I have issue on some ppc cases,
        for I$/D$ coherency, where set_pte_at() may decide to mask out the
        _PAGE_EXEC.
      
      So, pass in the mapped page table pointer into update_mmu_cache(), and
      remove the PTE value, updating all implementations and call sites to
      suit.
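      
      A minimal sketch of the interface change (the exact per-architecture
      prototypes may differ slightly from this):
      
        /* Old form: the PTE value was passed by value:
         *     void update_mmu_cache(struct vm_area_struct *vma,
         *                           unsigned long address, pte_t pte);
         * New form: the mapped page table entry pointer is passed instead:
         */
        void update_mmu_cache(struct vm_area_struct *vma,
                              unsigned long address, pte_t *ptep);
        
        /* Call sites then pass the ptep they already hold, e.g.
         *     update_mmu_cache(vma, address, ptep);
         * rather than the pte value.
         */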
      
      Includes a fix from Stephen Rothwell:
      
        sparc: fix fallout from update_mmu_cache API change
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  2. 19 Jan 2010, 2 commits
  3. 16 Jan 2010, 2 commits
  4. 13 Jan 2010, 6 commits
    • x86: xen: 64-bit kernel RPL should be 0 · e68266b7
      Committed by Ian Campbell
      Under Xen, 64-bit guests actually run their kernel in ring 3;
      however, the hypervisor takes care of squashing the descriptor
      RPLs transparently (in order to allow them to continue to
      differentiate between user and kernel space CS using the RPL).
      Therefore the Xen paravirt backend should use RPL==0 instead of
      1 (or 3). Using RPL==1 causes generic arch code to take
      incorrect code paths, because it uses "testl $3, <CS>, je foo"
      type tests for a userspace CS, and such a test treats 1 as userspace.
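      
      A rough illustration of the kind of check that goes wrong (hypothetical
      helper, not the kernel's actual code):
      
        /* A test of the form (cs & 3) treats any non-zero RPL as user
         * space, so a kernel CS carrying RPL==1 is misclassified.
         * Reporting RPL==0 from the Xen backend keeps such tests correct,
         * while the hypervisor still squashes the RPLs of the real
         * descriptors. */
        static inline int looks_like_user_cs(unsigned short cs)
        {
                return (cs & 3) != 0;   /* RPL bits of the selector */
        }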
      
      This issue was previously masked because get_kernel_rpl() was
      omitted when setting CS in kernel_thread(). This was fixed when
      kernel_thread() was unified with 32 bit in
      f443ff42.
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Cc: Christian Kujau <lists@nerdbynature.de>
      Cc: Jeremy Fitzhardinge <Jeremy.Fitzhardinge@citrix.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      LKML-Reference: <1263377768-19600-2-git-send-email-ian.campbell@citrix.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: kernel_thread() -- initialize SS to a known state · 864a0922
      Committed by Cyrill Gorcunov
      Before kernel_thread() was converted into C, we had
      pt_regs::ss set to __KERNEL_DS (by the SAVE_ALL asm macro).
      
      Though I must admit I didn't find any *explicit* load of
      %ss from this structure, it is better to be on the safe side
      and set it to a known value.
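      
      A minimal sketch of the idea (field names per the x86 pt_regs layout;
      not the verbatim patch):
      
        memset(&regs, 0, sizeof(regs));
        regs.ss = __KERNEL_DS;          /* put %ss into a known state */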
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Cc: Christian Kujau <lists@nerdbynature.de>
      Cc: Jeremy Fitzhardinge <Jeremy.Fitzhardinge@citrix.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      LKML-Reference: <1263377768-19600-1-git-send-email-ian.campbell@citrix.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86/agp: Fix agp_amd64_init and agp_amd64_cleanup · 42590a75
      Committed by FUJITA Tomonori
      This fixes the regression introduced by the commit
      f405d2c0.
      
      The above commit fixes the following issue:
      
        http://marc.info/?l=linux-kernel&m=126192729110083&w=2
      
      However, it doesn't work properly when you remove and insert the
      agp_amd64 module again.
      
      agp_amd64_init() and agp_amd64_cleanup() should be called only
      when gart_iommu was not called earlier (that is, the GART IOMMU
      is not enabled). We need to use 'gart_iommu_aperture' to see
      whether the GART IOMMU is enabled or not.
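      
      A hedged sketch of the intended flow (the wrapper names here are
      illustrative only; the real check lives in the module init/exit paths):
      
        static int __init agp_amd64_mod_init(void)
        {
                if (gart_iommu_aperture)        /* GART IOMMU is enabled */
                        return 0;               /* leave the hardware to it */
                return agp_amd64_init();
        }
        
        static void __exit agp_amd64_mod_exit(void)
        {
                if (gart_iommu_aperture)
                        return;
                agp_amd64_cleanup();
        }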
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: mitov@issp.bas.bg
      Cc: davej@redhat.com
      LKML-Reference: <20100104161603L.fujita.tomonori@lab.ntt.co.jp>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: SGI UV: Fix mapping of MMIO registers · fcfbb2b5
      Committed by Mike Travis
      This fixes the problem of the initialization code not correctly
      mapping the entire MMIO space on a UV system.  As a side effect,
      the map_high() interface needed to be changed to accommodate
      different address and size shifts.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Reviewed-by: Mike Habeck <habeck@sgi.com>
      Cc: <stable@kernel.org>
      Cc: Jack Steiner <steiner@sgi.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <4B479202.7080705@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: mce.h: Fix warning in header checks · df39a2e4
      Committed by Alan Cox
      Someone isn't reading their build output: Move the definition
      out of the exported header.
      Signed-off-by: Alan Cox <alan@linux.intel.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Stop stack frame walking off kernel addresses boundaries · c2c5d45d
      Committed by Frederic Weisbecker
      While processing kernel perf callchains, a bad entry can be
      considered a valid stack pointer but not a kernel address.
      
      In this case, we hang in an endless loop. This can happen in an
      x86-32 kernel after processing the last entry in a kernel
      stacktrace.
      
      Just stop the stack frame walking after we encounter an invalid
      kernel address.
      
      This fixes a hard lockup in x86-32.
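      
      A hedged sketch of the guard, loosely modelled on the x86
      frame-pointer walker (identifier names are approximate):
      
        while (valid_stack_ptr(tinfo, ret_addr, sizeof(*ret_addr), end)) {
                unsigned long addr = *ret_addr;
        
                if (!__kernel_text_address(addr))
                        break;          /* bogus entry: stop walking */
        
                ops->address(data, addr, 1);
                frame = frame->next_frame;
                ret_addr = &frame->return_address;
        }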
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <1262227945-27014-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  5. 12 Jan 2010, 4 commits
  6. 07 Jan 2010, 1 commit
    • x86, irq: Check move_in_progress before freeing the vector mapping · 7f41c2e1
      Committed by Suresh Siddha
      With the recent irq migration fixes (post 2.6.32), Gary Hade noticed
      "No IRQ handler for vector" messages during the 2.6.33-rc1 kernel boot on IBM
      AMD platforms and root-caused the issue to this commit:
      
      > commit 23359a88
      > Author: Suresh Siddha <suresh.b.siddha@intel.com>
      > Date:   Mon Oct 26 14:24:33 2009 -0800
      >
      >    x86: Remove move_cleanup_count from irq_cfg
      
      As part of that patch, we removed the move_cleanup_count check
      in smp_irq_move_cleanup_interrupt(). With this change, we can run into a
      situation where an irq cleanup interrupt on a cpu cleans up the vector
      mappings associated with multiple irqs, and the migration of one of those
      irqs might still be in progress. When such an irq then hits the old cpu, we
      get the "No IRQ handler" messages.
      
      Fix this by checking the irq_cfg's move_in_progress flag: if the move
      is still in progress, delay the vector cleanup to a later irq cleanup
      interrupt request (which will happen once the irq starts arriving at the
      new cpu destination).
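      
      The essence of the check, as a sketch of the per-vector loop in
      smp_irq_move_cleanup_interrupt():
      
        if (cfg->move_in_progress)
                goto unlock;    /* migration still pending: clean up later */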
      Reported-and-tested-by: Gary Hade <garyhade@us.ibm.com>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <1262804191.2732.7.camel@sbs-t61.sc.intel.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  7. 06 Jan 2010, 2 commits
  8. 05 Jan 2010, 3 commits
  9. 31 Dec 2009, 3 commits
  10. 29 Dec 2009, 1 commit
    • x86: SGI UV: Fix writes to led registers on remote uv hubs · 39d30770
      Committed by Mike Travis
      The wrong address was being used to write the SCIR led regs on
      remote hubs.  Also, there was an inconsistency between how BIOS
      and the kernel indexed these regs.  Standardize on using the
      lower 6 bits of the APIC ID as the index.
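      
      The indexing convention, as a one-line illustration (not the exact
      register-write code):
      
        unsigned int scir_index = apicid & 0x3f;   /* low 6 bits of APIC ID */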
      
      This patch fixes the problem of writing to an errant address for
      a cpu # >= 64.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Reviewed-by: Jack Steiner <steiner@sgi.com>
      Cc: Robin Holt <holt@sgi.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: stable@kernel.org
      LKML-Reference: <4B3922F9.3060905@sgi.com>
      [ v2: fix a number of annoying checkpatch artifacts and whitespace noise ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  11. 28 Dec 2009, 2 commits
    • x86, kmemcheck: Use KERN_WARNING for error reporting · c0ca9da4
      Committed by Pekka Enberg
      As suggested by Vegard Nossum, use KERN_WARNING for error
      reporting to make sure kmemcheck reports end up in syslog.
      Suggested-by: Vegard Nossum <vegard.nossum@gmail.com>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <1261990935.4641.7.camel@penberg-laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Use KERN_DEFAULT log-level in __show_regs() · d015a092
      Committed by Pekka Enberg
      Andrew Morton reported a strange looking kmemcheck warning:
      
        WARNING: kmemcheck: Caught 32-bit read from uninitialized memory (ffff88004fba6c20)
        0000000000000000310000000000000000000000000000002413000000c9ffff
         u u u u u u u u u u u u u u u u i i i i i i i i u u u u u u u u
      
         [<ffffffff810af3aa>] kmemleak_scan+0x25a/0x540
         [<ffffffff810afbcb>] kmemleak_scan_thread+0x5b/0xe0
         [<ffffffff8104d0fe>] kthread+0x9e/0xb0
         [<ffffffff81003074>] kernel_thread_helper+0x4/0x10
         [<ffffffffffffffff>] 0xffffffffffffffff
      
      The above printout is missing the register dump completely. The
      problem here is that the output comes from syslog, which doesn't
      show KERN_INFO log-level messages. We didn't see this before
      because both of us were testing on 32-bit kernels, which use the
      _default_ log-level.
      
      Fix that up by explicitly using KERN_DEFAULT log-level for
      __show_regs() printks.
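      
      A one-line sketch of the resulting style (format string illustrative):
      
        printk(KERN_DEFAULT "RIP: %016lx\n", regs->ip);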
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Vegard Nossum <vegard.nossum@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <1261988819.4641.2.camel@penberg-laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 27 Dec 2009, 3 commits
  13. 26 Dec 2009, 1 commit
    • x86, compress: Force i386 instructions for the decompressor · 17a2a9b5
      Committed by H. Peter Anvin
      Recently, some distros have started shipping versions of gcc which
      default to -march=i686.  This breaks building kernels for pre-i686
      machines, even if they have been selected in Kconfig, due to the
      generation of CMOV instructions.
      
      There isn't enough benefit to try to preserve the generation of these
      instructions even when selected, so simply force -march=i386 for the
      decompressor when building a 32-bit kernel.
      Reported-and-tested-by: Chris Rankin <rankincj@yahoo.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      LKML-Reference: <219280.97558.qm@web52907.mail.re2.yahoo.com>
  14. 24 Dec 2009, 1 commit
    • Revert "x86, ucode-amd: Ensure ucode update on suspend/resume after CPU off/online cycle" · 2f99f5c8
      Committed by Linus Torvalds
      This reverts commit 9f15226e.  It's just
      wrong, and broke resume for Rafael even on a non-AMD CPU.
      
      As Rafael says:
       "... it causes microcode_init_cpu() to be called during resume even for
        CPUs for which there's no microcode to apply.  That, in turn, results
        in executing request_firmware() (on Intel CPUs at least) which doesn't
        work at this stage of resume (we have device interrupts disabled, I/O
        devices are still suspended and so on).
      
        If I'm not mistaken, the "if (uci->valid)" logic means "if that CPU is
        known to us" , so before commit 9f15226e microcode_resume_cpu() was
        called for all CPUs already in the system during suspend, which was
        the right thing to do.  The commit changed it so that the CPUs without
        microcode to apply are now treated as "unknown", which is not quite
        right.
      
        The problem this commit attempted to solve has to be handled
        differently."
      
      Bisected-and-requested-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 23 Dec 2009, 1 commit
    • arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c: avoid cross-CPU interrupts by... · 4a28395d
      Committed by Andrew Morton
      arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c: avoid cross-CPU interrupts by using smp_call_function_any()
      
      Presently acpi-cpufreq will perform the MSR read on the first CPU in the
      mask.  That's inefficient if that CPU differs from the current CPU,
      because we then have to perform a cross-CPU call even though we could
      have run the rdmsr on the current CPU.
      
      So switch to using the new smp_call_function_any(), which will perform the
      call on the current CPU if that CPU is present in the mask (it is).
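      
      A hedged sketch of the call (argument names approximate the
      acpi-cpufreq read path):
      
        /* Runs do_drv_read(&cmd) on the current CPU when it is in mask,
         * otherwise on one CPU from mask via a single cross-CPU call. */
        err = smp_call_function_any(mask, do_drv_read, &cmd, 1);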
      
      Cc: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Jaswinder Singh Rajput <jaswinder@kernel.org>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: Zhao Yakui <yakui.zhao@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Len Brown <len.brown@intel.com>
  16. 22 Dec 2009, 5 commits
    • ACPI: processor: unify arch_acpi_processor_cleanup_pdc · 47817254
      Committed by Alex Chiang
      The x86 and ia64 implementations of the function in $subject are
      exactly the same.
      
      Also, since the arch-specific implementations of setting _PDC have
      been completely hollowed out, remove the empty shells.
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Alex Chiang <achiang@hp.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
    • ACPI: processor: finish unifying arch_acpi_processor_init_pdc() · 6c5807d7
      Committed by Alex Chiang
      The only thing arch-specific about calling _PDC is what bits get
      set in the input obj_list buffer.
      
      There's no need for several levels of indirection to twiddle those
      bits. Additionally, since we're just messing around with a buffer,
      we can simplify the interface; no need to pass around the entire
      struct acpi_processor * just to get at the buffer.
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Alex Chiang <achiang@hp.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
    • ACPI: processor: factor out common _PDC settings · 08ea48a3
      Committed by Alex Chiang
      Both x86 and ia64 initialize _PDC with mostly common bit settings.
      
      Factor out the common settings and leave the arch-specific ones alone.
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Alex Chiang <achiang@hp.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
    • ACPI: processor: unify arch_acpi_processor_init_pdc · 407cd87c
      Committed by Alex Chiang
      The x86 and ia64 implementations of arch_acpi_processor_init_pdc()
      are almost exactly the same. The only difference is in what bits
      they set in the obj_list buffer.
      
      Combine the boilerplate memory management code, and leave the
      arch-specific bit twiddling in separate implementations.
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Alex Chiang <achiang@hp.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
    • ACPI: processor: introduce arch_has_acpi_pdc · 1d9cb470
      Committed by Alex Chiang
      Add an arch-dependent helper function that tells us whether we should
      attempt to evaluate _PDC on this machine or not.
      
      The x86 implementation assumes that the CPUs in the machine must be
      homogeneous, and that you cannot mix CPUs of different vendors.
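      
      A sketch of the x86 side under that assumption (the specific vendor
      checks shown are an illustration, not a quote of the final code):
      
        static inline bool arch_has_acpi_pdc(void)
        {
                struct cpuinfo_x86 *c = &cpu_data(0);   /* boot CPU only */
        
                return (c->x86_vendor == X86_VENDOR_INTEL ||
                        c->x86_vendor == X86_VENDOR_CENTAUR);
        }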
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Alex Chiang <achiang@hp.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
  17. 21 Dec 2009, 1 commit
    • x86/amd-iommu: Fix initialization failure panic · 0f764806
      Committed by Joerg Roedel
      The assumption that acpi_table_parse() passes the return value
      of the handler function to the caller recently proved
      wrong: the return value of the handler function is
      totally ignored. This made the initialization code for the AMD
      IOMMU buggy in a way that could cause a kernel panic during
      initialization. This patch fixes the issue in the AMD IOMMU
      driver.
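      
      A hedged sketch of the resulting pattern (the variable and helper
      names here are hypothetical):
      
        static int __initdata ivrs_err;
        
        static int __init parse_ivrs(struct acpi_table_header *table)
        {
                ivrs_err = handle_ivrs(table);  /* hypothetical helper */
                return 0;                       /* return value is ignored */
        }
        
        /* in the caller: */
        acpi_table_parse("IVRS", parse_ivrs);
        if (ivrs_err)
                goto out_free;                  /* fail cleanly, no panic */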
      
      Cc: stable@kernel.org
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  18. 19 Dec 2009, 1 commit