1. 23 Oct 2010, 12 commits
  2. 15 Oct 2010, 1 commit
    • Don't dump task struct in a.out core-dumps · 0eead9ab
      Authored by Linus Torvalds
      akiphie points out that a.out core-dumps have that odd task struct
      dumping that was never used and was never really a good idea (it goes
      back into the mists of history, probably the original core-dumping
      code).  Just remove it.
      
      Also do the access_ok() check on dump_write().  It probably doesn't
      matter (since normal filesystems all seem to do it anyway), but he
      points out that it's normally done by the VFS layer, so ...
      
      [ I suspect that we should possibly do "vfs_write()" instead of
        calling ->write directly.  That also does the whole fsnotify and write
        statistics thing, which may or may not be a good idea. ]
      
      And just to be anal, do this all for the x86-64 32-bit a.out emulation
      code too, even though it's not enabled (and won't currently even
      compile)
      Reported-by: akiphie <akiphie@lavabit.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0eead9ab
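
      A minimal sketch of the access_ok() check described above, assuming the
      2.6.36-era interfaces (access_ok() still taking a VERIFY_READ argument,
      and ->write being called directly, as the message notes); the in-tree
      helper may differ in detail:

        #include <linux/fs.h>
        #include <linux/uaccess.h>

        /* Validate the user-space range before handing it to ->write(),
         * mirroring the check the VFS layer would normally perform. */
        static int dump_write(struct file *file, const void *addr, int nr)
        {
                return access_ok(VERIFY_READ, addr, nr) &&
                       file->f_op->write(file, addr, nr, &file->f_pos) == nr;
        }
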
  3. 12 Oct 2010, 1 commit
    • x86, numa: For each node, register the memory blocks actually used · 73cf624d
      Authored by Yinghai Lu
      Russ reported that SGI UV is broken on recent kernels. He said:
      
      | The SRAT table shows that memory range is spread over two nodes.
      |
      | SRAT: Node 0 PXM 0 100000000-800000000
      | SRAT: Node 1 PXM 1 800000000-1000000000
      | SRAT: Node 0 PXM 0 1000000000-1080000000
      |
      |Previously, the kernel early_node_map[] would show three entries
      |with the proper node.
      |
      |[    0.000000]     0: 0x00100000 -> 0x00800000
      |[    0.000000]     1: 0x00800000 -> 0x01000000
      |[    0.000000]     0: 0x01000000 -> 0x01080000
      |
      |The problem is recent community kernel early_node_map[] shows
      |only two entries with the node 0 entry overlapping the node 1
      |entry.
      |
      |    0: 0x00100000 -> 0x01080000
      |    1: 0x00800000 -> 0x01000000
      
      After looking at the changelog, I found out that it had been broken for a while by the
      following commit:
      
      |commit 8716273c
      |Author: David Rientjes <rientjes@google.com>
      |Date:   Fri Sep 25 15:20:04 2009 -0700
      |
      |    x86: Export srat physical topology
      
      Before that commit, register_active_regions() was called for every SRAT memory
      entry right away.
      
      Use node_memblk_range[] instead of nodes[] in order to make sure we
      capture the actual memory blocks registered with each node.  nodes[]
      contains an extended range which spans all memory regions associated
      with a node, but that does not mean that all the memory in between is
      included.
      Reported-by: Russ Anderson <rja@sgi.com>
      Tested-by: Russ Anderson <rja@sgi.com>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <4CB27BDF.5000800@kernel.org>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: <stable@kernel.org> 2.6.33 .34 .35 .36
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      73cf624d
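
      A sketch of the registration loop the fix describes. The names
      memblk_nodeid[], num_node_memblks, e820_register_active_regions() and
      the wrapper function name are assumed from the srat_64.c code of that
      era; treat this as an illustration of the idea, not the literal diff:

        /* Register exactly the memory blocks parsed from the SRAT, one
         * active region per memblk entry, instead of one nodes[i].start..
         * nodes[i].end range per node (which can overlap a neighbouring
         * node when the SRAT entries interleave, as in the report above). */
        static void register_srat_memblks(void)
        {
                int i;

                for (i = 0; i < num_node_memblks; i++)
                        e820_register_active_regions(memblk_nodeid[i],
                                node_memblk_range[i].start >> PAGE_SHIFT,
                                node_memblk_range[i].end >> PAGE_SHIFT);
        }
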
  4. 11 Oct 2010, 3 commits
    • KVM: x86: Move TSC reset out of vmcb_init · 47008cd8
      Authored by Zachary Amsden
      The VMCB is reset whenever we receive a startup IPI, so Linux's setting of the
      TSC back to zero happens very late in the boot process and destabilizes
      the TSC.  Instead, just set the TSC to zero once, at VCPU creation time.
      
      Why the separate patch?  So git-bisect is your friend.
      Signed-off-by: Zachary Amsden <zamsden@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      47008cd8
    • KVM: x86: Fix SVM VMCB reset · 58877679
      Authored by Zachary Amsden
      On reset, VMCB TSC should be set to zero.  Instead, code was setting
      tsc_offset to zero, which passes through the underlying TSC.
      Signed-off-by: Zachary Amsden <zamsden@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      58877679
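
      The distinction matters because the guest sees host TSC plus the VMCB
      tsc_offset on RDTSC. A small userspace model of that arithmetic; the
      helper names here are illustrative, not KVM's API:

        #include <stdint.h>
        #include <stdio.h>

        /* Simplified model: the guest reads host_tsc + tsc_offset on RDTSC. */
        static uint64_t guest_rdtsc(uint64_t host_tsc, int64_t tsc_offset)
        {
            return host_tsc + (uint64_t)tsc_offset;
        }

        int main(void)
        {
            uint64_t host_tsc_at_reset = 123456789ULL;  /* arbitrary example value */

            /* Buggy reset: tsc_offset = 0 just passes the host TSC through. */
            printf("tsc_offset = 0     -> guest TSC %llu\n",
                   (unsigned long long)guest_rdtsc(host_tsc_at_reset, 0));

            /* Intended reset: pick an offset so the guest TSC reads zero. */
            printf("tsc_offset = -host -> guest TSC %llu\n",
                   (unsigned long long)guest_rdtsc(host_tsc_at_reset,
                                                   -(int64_t)host_tsc_at_reset));
            return 0;
        }
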
    • x86, AMD, MCE thresholding: Fix the MCi_MISCj iteration order · 6dcbfe4f
      Authored by Borislav Petkov
      This fixes possible cases of not collecting valid error info in
      the MCE error thresholding groups on F10h hardware.
      
      The current code contains a subtle problem of checking only the
      Valid bit of MSR0000_0413 (which is MC4_MISC0 - DRAM
      thresholding group) in its first iteration and breaking out if
      the bit is cleared.
      
      But (!), this MSR contains an offset value, BlkPtr[31:24], which
      points to the remaining MSRs in this thresholding group which
      might contain valid information too. But if we bail out only
      after we checked the valid bit in the first MSR and not the
      block pointer too, we miss that other information.
      
      The thing is, MC4_MISC0[BlkPtr] is not predicated on
      MCi_STATUS[MiscV] or MC4_MISC0[Valid] and should be checked
      prior to iterating over the MCI_MISCj thresholding group,
      irrespective of the MC4_MISC0[Valid] setting.
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6dcbfe4f
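
      A small userspace model of the decision described above. The register
      layout is reduced to just the two fields the message mentions (Valid in
      bit 63, BlkPtr in bits 31:24), and the macro names are illustrative:

        #include <stdint.h>
        #include <stdio.h>

        /* Reduced MC4_MISC0 model: bit 63 = Valid, bits 31:24 = BlkPtr. */
        #define MISC_VALID(m)   (((m) >> 63) & 0x1ULL)
        #define MISC_BLKPTR(m)  (((m) >> 24) & 0xffULL)

        int main(void)
        {
            /* Hypothetical register value: Valid clear, but BlkPtr set. */
            uint64_t mc4_misc0 = 0x01000000ULL;   /* BlkPtr = 0x01 */

            /* Buggy order: bail out on !Valid and never look at BlkPtr. */
            if (!MISC_VALID(mc4_misc0))
                printf("old code: group skipped, BlkPtr 0x%llx ignored\n",
                       (unsigned long long)MISC_BLKPTR(mc4_misc0));

            /* Fixed order: consult BlkPtr regardless of the Valid bit. */
            if (MISC_BLKPTR(mc4_misc0))
                printf("fixed code: iterate MCi_MISCj blocks at offset 0x%llx\n",
                       (unsigned long long)MISC_BLKPTR(mc4_misc0));
            return 0;
        }
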
  5. 08 Oct 2010, 1 commit
  6. 06 Oct 2010, 1 commit
    • modules: Fix module_bug_list list corruption race · 5336377d
      Authored by Linus Torvalds
      With all the recent module loading cleanups, we've minimized the code
      that sits under module_mutex, fixing various deadlocks and making it
      possible to do most of the module loading in parallel.
      
      However, that whole conversion totally missed the rather obscure code
      that adds a new module to the list for BUG() handling.  That code was
      doubly obscure because (a) the code itself lives in lib/bugs.c (for
      dubious reasons) and (b) it gets called from the architecture-specific
      "module_finalize()" rather than from generic code.
      
      Calling it from arch-specific code makes no sense what-so-ever to begin
      with, and is now actively wrong since that code isn't protected by the
      module loading lock any more.
      
      So this commit moves the "module_bug_{finalize,cleanup}()" calls away
      from the arch-specific code, and into the generic code - and in the
      process protects it with the module_mutex so that the list operations
      are now safe.
      
      Future fixups:
       - move the module list handling code into kernel/module.c where it
         belongs.
       - get rid of 'module_bug_list' and just use the regular list of modules
         (called 'modules' - imagine that) that we already create and maintain
         for other reasons.
      Reported-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Adrian Bunk <bunk@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5336377d
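
      A userspace model of why serializing the list insertion under
      module_mutex fixes the race; every structure and name below is
      illustrative, not kernel code:

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Model of the fix: every insertion into the shared bug list goes
         * through the same mutex that already serializes module loading,
         * so parallel loads can no longer corrupt the list. */
        struct bug_entry {
            struct bug_entry *next;
            long module_id;
        };

        static struct bug_entry *module_bug_list;   /* shared list head */
        static pthread_mutex_t module_mutex = PTHREAD_MUTEX_INITIALIZER;

        /* Analogue of module_bug_finalize(): list work only under the lock. */
        static void bug_list_add(long module_id)
        {
            struct bug_entry *e = malloc(sizeof(*e));

            e->module_id = module_id;
            pthread_mutex_lock(&module_mutex);
            e->next = module_bug_list;
            module_bug_list = e;
            pthread_mutex_unlock(&module_mutex);
        }

        static void *load_module_thread(void *arg)
        {
            bug_list_add((long)arg);
            return NULL;
        }

        int main(void)
        {
            pthread_t t[4];
            struct bug_entry *e;
            int i, count = 0;

            for (i = 0; i < 4; i++)   /* parallel "module loads" */
                pthread_create(&t[i], NULL, load_module_thread, (void *)(long)i);
            for (i = 0; i < 4; i++)
                pthread_join(t[i], NULL);

            for (e = module_bug_list; e; e = e->next)
                count++;
            printf("%d entries on the list, none lost to the race\n", count);
            return 0;
        }
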
  7. 05 Oct 2010, 1 commit
  8. 01 Oct 2010, 4 commits
  9. 30 Sep 2010, 1 commit
  10. 29 Sep 2010, 2 commits
  11. 27 Sep 2010, 1 commit
  12. 25 Sep 2010, 1 commit
  13. 24 Sep 2010, 1 commit
    • perf, x86: Catch spurious interrupts after disabling counters · 63e6be6d
      Authored by Robert Richter
      Some cpus still deliver spurious interrupts after disabling a
      counter. This caused 'undelivered NMI' messages. This patch
      fixes this. Introduced by:
      
        4177c42a: perf, x86: Try to handle unknown nmis with an enabled PMU
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: gorcunov@gmail.com <gorcunov@gmail.com>
      Cc: fweisbec@gmail.com <fweisbec@gmail.com>
      Cc: ying.huang@intel.com <ying.huang@intel.com>
      Cc: ming.m.lin@intel.com <ming.m.lin@intel.com>
      Cc: yinghai@kernel.org <yinghai@kernel.org>
      Cc: andi@firstfloor.org <andi@firstfloor.org>
      Cc: eranian@google.com <eranian@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      LKML-Reference: <20100915162034.GO13563@erda.amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      63e6be6d
  14. 23 Sep 2010, 5 commits
    • x86/amd-iommu: Fix rounding-bug in __unmap_single · 04e0463e
      Authored by Joerg Roedel
      In the __unmap_single function the dma_addr is rounded down
      to a page boundary before the dma pages are unmapped. The
      address is later also used to flush the TLB entries for that
      mapping. But without the offset into the dma page, the number
      of pages to flush might be miscalculated in the TLB flushing
      path. This patch fixes this bug by using the original
      address to flush the TLB.
      
      Cc: stable@kernel.org
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      04e0463e
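
      A tiny userspace model of the miscalculation described above (4 KiB
      pages assumed, helper name illustrative): flushing from the already
      rounded-down address under-counts the pages the mapping touched.

        #include <stdio.h>

        #define PAGE_SIZE  4096UL
        #define PAGE_MASK  (~(PAGE_SIZE - 1))

        /* Number of pages a mapping of 'size' bytes at 'addr' spans. */
        static unsigned long pages_to_flush(unsigned long addr, unsigned long size)
        {
            unsigned long start = addr & PAGE_MASK;
            unsigned long end   = addr + size;

            return (end - start + PAGE_SIZE - 1) / PAGE_SIZE;
        }

        int main(void)
        {
            unsigned long dma_addr = 0x10e00;  /* offset 0xe00 into its page */
            unsigned long size     = 0x400;    /* crosses into the next page */

            /* Rounded-down address loses the in-page offset: 1 page. */
            printf("rounded addr:  %lu page(s)\n",
                   pages_to_flush(dma_addr & PAGE_MASK, size));

            /* Original address covers everything that was mapped: 2 pages. */
            printf("original addr: %lu page(s)\n", pages_to_flush(dma_addr, size));
            return 0;
        }
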
    • x86/amd-iommu: Work around S3 BIOS bug · 4c894f47
      Authored by Joerg Roedel
      This patch adds a workaround for an IOMMU BIOS problem to
      the AMD IOMMU driver. The result of the bug is that the
      IOMMU does not execute commands anymore when the system
      comes out of the S3 state, resulting in system failure. The
      bug in the BIOS is that it does not restore certain hardware-
      specific registers correctly. This workaround reads out the
      contents of these registers at boot time and restores them
      on resume from S3. The workaround is limited to the specific
      IOMMU chipset where this problem occurs.
      
      Cc: stable@kernel.org
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      4c894f47
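
      A userspace model of the save-at-boot / restore-on-resume pattern the
      workaround uses; the array merely stands in for the affected hardware
      registers, and every name here is illustrative rather than the
      driver's API:

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define NR_SAVED_REGS 4

        static uint32_t hw_regs[NR_SAVED_REGS];      /* stand-in for the IOMMU registers */
        static uint32_t saved_regs[NR_SAVED_REGS];   /* snapshot taken at boot */

        static void save_at_boot(void)
        {
            memcpy(saved_regs, hw_regs, sizeof(saved_regs));
        }

        static void resume_from_s3(void)
        {
            /* The buggy BIOS leaves the registers clobbered after S3 ... */
            memset(hw_regs, 0, sizeof(hw_regs));
            /* ... so the driver writes the boot-time values back itself. */
            memcpy(hw_regs, saved_regs, sizeof(hw_regs));
        }

        int main(void)
        {
            hw_regs[0] = 0xdeadbeef;   /* pretend firmware programmed this at boot */
            save_at_boot();
            resume_from_s3();
            printf("reg0 after resume: 0x%x\n", (unsigned)hw_regs[0]);
            return 0;
        }
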
    • x86/amd-iommu: Set iommu configuration flags in enable-loop · e9bf5197
      Authored by Joerg Roedel
      This patch moves the setting of the configuration and
      feature flags out of the ACPI table parsing path and into
      the iommu-enable path. This is needed to reliably
      fix resume-from-s3.
      
      Cc: stable@kernel.org
      Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
      e9bf5197
    • tracing/x86: Don't use mcount in kvmclock.c · 258af474
      Authored by Steven Rostedt
      The guest can use the paravirt clock in kvmclock.c which is used
      by sched_clock(), which in turn is used by the tracing mechanism
      for timestamps, which leads to infinite recursion.
      
      Disable mcount/tracing for kvmclock.o.
      
      Cc: stable@kernel.org
      Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      258af474
    • tracing/x86: Don't use mcount in pvclock.c · 9ecd4e16
      Authored by Jeremy Fitzhardinge
      When using a paravirt clock, pvclock.c can be used by sched_clock(),
      which in turn is used by the tracing mechanism for timestamps,
      which leads to infinite recursion.
      
      Disable mcount/tracing for pvclock.o.
      
      Cc: stable@kernel.org
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      LKML-Reference: <4C9A9A3F.4040201@goop.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      9ecd4e16
  15. 22 Sep 2010, 2 commits
  16. 21 Sep 2010, 2 commits
  17. 17 Sep 2010, 1 commit
    • x86: Fix instruction breakpoint encoding · 89e45aac
      Authored by Frederic Weisbecker
      Lengths and types of breakpoints are encoded in a half byte
      into CPU registers. However when we extract these values
      and store them, we add a high half byte part to them: 0x40 to the
      length and 0x80 to the type.
      When that gets reloaded to the CPU registers, the high part
      is masked.
      
      While making the instruction breakpoints available for perf,
      I zapped that high part on instruction breakpoint encoding
      and that broke the arch -> generic translation used by ptrace
      instruction breakpoints. Writing dr7 to set an inst breakpoint
      was then failing.
      
      There is no apparent reason for these high parts so we could get
      rid of them altogether. That's an invasive change though so let's
      do that later and for now fix the problem by restoring that inst
      breakpoint high part encoding in this sole patch.
      Reported-by: Kelvie Wong <kelvie@ieee.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      89e45aac
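
      A small userspace model of the encoding described above: a 0x40 high
      part is added to stored lengths and 0x80 to stored types, and only the
      low half byte survives the reload into dr7. The macro and function
      names are illustrative, not the kernel's:

        #include <stdio.h>

        #define LEN_BIAS   0x40   /* added to stored breakpoint lengths */
        #define TYPE_BIAS  0x80   /* added to stored breakpoint types   */
        #define LOW_NIBBLE 0x0f   /* the only part dr7 actually takes   */

        static int encode_len(int raw)      { return raw | LEN_BIAS;  }
        static int encode_type(int raw)     { return raw | TYPE_BIAS; }
        static int to_dr7_field(int stored) { return stored & LOW_NIBBLE; }

        int main(void)
        {
            int raw_type = 0x0;   /* e.g. an execute/instruction breakpoint */
            int stored   = encode_type(raw_type);

            /* The high part exists only in the stored form; the reload
             * masks it off before writing the value back to dr7. */
            printf("stored type: 0x%02x -> dr7 field 0x%x\n",
                   stored, to_dr7_field(stored));
            printf("stored 1-byte len: 0x%02x -> dr7 field 0x%x\n",
                   encode_len(0x0), to_dr7_field(encode_len(0x0)));
            return 0;
        }
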