1. 06 Nov 2009 (1 commit)
  2. 04 Nov 2009 (1 commit)
    • x86/hw-breakpoints: Actually flush thread breakpoints in flush_thread(). · 41a48d14
      Committed by Paul Mundt
      flush_thread() tries to do a TIF_DEBUG check before calling into
      flush_thread_hw_breakpoint() (which subsequently clears the thread flag),
      but for some reason the x86 code manually clears TIF_DEBUG
      immediately before the test, so this path is never taken.
      
      This kills off the erroneous clear_tsk_thread_flag() and lets
      flush_thread_hw_breakpoint() actually get invoked.
      
      Presumably folks were getting lucky with testing and the
      free_thread_info() -> free_thread_xstate() path was taking care of the
      flush there.
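      
      The ordering problem is easiest to see in a small standalone C model; the
      flag and helper names below only mimic TIF_DEBUG and
      flush_thread_hw_breakpoint(), everything else is a simplified stand-in,
      not the kernel code:
      
      #include <stdbool.h>
      #include <stdio.h>
      
      static bool tif_debug;              /* models the task's TIF_DEBUG flag */
      static bool breakpoints_flushed;
      
      static void flush_thread_hw_breakpoint(void)
      {
              breakpoints_flushed = true;
              tif_debug = false;          /* the real helper clears the flag itself */
      }
      
      /* Buggy ordering: the flag is cleared right before it is tested,
       * so the breakpoint flush is dead code. */
      static void flush_thread_buggy(void)
      {
              tif_debug = false;          /* erroneous clear_tsk_thread_flag() */
              if (tif_debug)
                      flush_thread_hw_breakpoint();
      }
      
      /* Fixed ordering: test first and let the helper do the clearing. */
      static void flush_thread_fixed(void)
      {
              if (tif_debug)
                      flush_thread_hw_breakpoint();
      }
      
      int main(void)
      {
              tif_debug = true;
              flush_thread_buggy();
              printf("buggy: flushed=%d\n", breakpoints_flushed);   /* prints 0 */
      
              tif_debug = true;
              breakpoints_flushed = false;
              flush_thread_fixed();
              printf("fixed: flushed=%d\n", breakpoints_flushed);   /* prints 1 */
              return 0;
      }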
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Acked-by: "K.Prasad" <prasad@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      LKML-Reference: <20091005102306.GA7889@linux-sh.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  3. 15 Oct 2009 (1 commit)
  4. 14 Oct 2009 (9 commits)
  5. 13 Oct 2009 (9 commits)
    • sparc64: Set IRQF_DISABLED on LDC channel IRQs. · c58543c8
      Committed by David S. Miller
      With lots of virtual devices it's easy to generate a lot of
      events and chew up the kernel IRQ stack.
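      
      For reference, a minimal kernel-style sketch of what the change amounts to;
      the handler and names here are hypothetical, only request_irq() and
      IRQF_DISABLED are the real (then-current) kernel API:
      
      #include <linux/interrupt.h>
      
      /* Hypothetical LDC-style handler; names are illustrative only. */
      static irqreturn_t ldc_event_irq(int irq, void *dev_id)
      {
              /* process one virtual-device event */
              return IRQ_HANDLED;
      }
      
      static int ldc_bind_event_irq(unsigned int irq, void *chan)
      {
              /*
               * IRQF_DISABLED runs the handler with local interrupts off, so a
               * burst of virtual-device events cannot nest handlers on top of
               * each other and chew through the kernel IRQ stack.
               */
              return request_irq(irq, ldc_event_irq, IRQF_DISABLED,
                                 "ldc-event", chan);
      }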
      Reported-by: hyl <heyongli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • x86/paravirt: Use normal calling sequences for irq enable/disable · 71999d98
      Committed by Jeremy Fitzhardinge
      Bastian Blank reported a boot crash with stackprotector enabled,
      and debugged it back to edx register corruption.
      
      For historical reasons irq enable/disable/save/restore had special
      calling sequences to make them more efficient.  With the more
      recent introduction of higher-level and more general optimisations,
      this is no longer necessary, so we can just use the normal PVOP_
      macros.
      
      This fixes some residual bugs in the old implementations which left
      edx liable to inadvertent clobbering. Also, fix some bugs in
      __PVOP_VCALLEESAVE which were revealed by actual use.
      Reported-by: Bastian Blank <bastian@waldi.eu.org>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Stable Kernel <stable@kernel.org>
      Cc: Xen-devel <xen-devel@lists.xensource.com>
      LKML-Reference: <4AD3BC9B.7040501@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events, x86: Fix event constraints code · 7a693d3f
      Committed by Ingo Molnar
      There was a namespace overlap due to a rename I did; it caused
      the following build warning, reported by Stephen Rothwell against
      linux-next x86_64 allmodconfig:
      
        arch/x86/kernel/cpu/perf_event.c: In function 'intel_get_event_idx':
        arch/x86/kernel/cpu/perf_event.c:1445: warning: 'event_constraint' is used uninitialized in this function
      
      This is a real bug, not just a warning: fix it by renaming the
      global event-constraints table pointer to 'event_constraints'.
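      
      A tiny standalone model of the clash and the fix; the struct and table
      contents are made up, only the naming pattern matches the kernel code:
      
      #include <stdio.h>
      
      struct event_constraint { unsigned long idxmsk; int code; };
      
      static struct event_constraint intel_constraints[] = {
              { 0x3, 0x01 },
              { 0, 0 },                       /* terminator */
      };
      
      /* Plural name for the global table pointer, as in the fix ... */
      static struct event_constraint *event_constraints = intel_constraints;
      
      static int get_event_idx(int code)
      {
              struct event_constraint *event_constraint;  /* ... so this local
                                                             no longer shadows it */
      
              /* Before the rename, the global table shared this local's name,
               * so the loop start silently resolved to the uninitialized local
               * instead of the table -- the gcc warning quoted above. */
              for (event_constraint = event_constraints;
                   event_constraint->idxmsk;
                   event_constraint++)
                      if (event_constraint->code == code)
                              return 0;
              return -1;
      }
      
      int main(void)
      {
              printf("%d\n", get_event_idx(0x01));        /* prints 0 */
              return 0;
      }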
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Stephane Eranian <eranian@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20091013144223.369d616d.sfr@canb.auug.org.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sh: ftrace: Fix up syscall tracepoint support. · 99222622
      Committed by Paul Mundt
      Sync up with the latest core changes in the syscall tracing area:
      
      - tracing: Map syscall name to number (syscall_name_to_nr())
      - tracing: Call arch_init_ftrace_syscalls at boot
      - tracing: add support tracepoint ids (set_syscall_{enter,exit}_id())
      
      Taken from the s390 change.
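      
      As a rough illustration of the first item, the arch hook's job is just a
      name-to-number lookup over the syscall metadata; this is a standalone
      model with a made-up table, not the sh or s390 code:
      
      #include <stdio.h>
      #include <string.h>
      
      /* Made-up table standing in for the per-arch syscall metadata. */
      static const char *syscall_names[] = { "sys_read", "sys_write", "sys_open" };
      #define NR_MODEL_SYSCALLS (sizeof(syscall_names) / sizeof(syscall_names[0]))
      
      /* Model of syscall_name_to_nr(): map a syscall's name to its number,
       * returning -1 when it is unknown. */
      static int syscall_name_to_nr(const char *name)
      {
              size_t i;
      
              for (i = 0; i < NR_MODEL_SYSCALLS; i++)
                      if (!strcmp(syscall_names[i], name))
                              return (int)i;
              return -1;
      }
      
      int main(void)
      {
              printf("%d %d\n", syscall_name_to_nr("sys_open"),
                     syscall_name_to_nr("sys_nosuch"));   /* prints "2 -1" */
              return 0;
      }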
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: force dcache flush if dcache_dirty bit set. · 964f7e5a
      Committed by Paul Mundt
      This too follows the ARM change, given that the issue at hand applies to
      all platforms that implement lazy D-cache writeback.
      
      This fixes up the case when a page mapping disappears between the
      flush_dcache_page() call (when PG_dcache_dirty is set for the page) and
      the update_mmu_cache() call -- such as in the case of swap cache being
      freed early. This kills off the mapping test in update_mmu_cache() and
      switches to simply testing for PG_dcache_dirty.
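      
      In rough terms the change replaces a mapping-gated test with a plain
      dirty-bit test; a simplified, kernel-style sketch (the flush helper name
      is a placeholder, not the real sh function):
      
      void update_mmu_cache(struct vm_area_struct *vma,
                            unsigned long address, pte_t pte)
      {
              unsigned long pfn = pte_pfn(pte);
              struct page *page;
      
              if (!pfn_valid(pfn))
                      return;
      
              page = pfn_to_page(pfn);
      
              /*
               * Old test, roughly:
               *     if (page_mapping(page) &&
               *         test_and_clear_bit(PG_dcache_dirty, &page->flags))
               * which misses pages whose mapping is already gone, e.g. a
               * swap-cache page freed before this point.  The new test keys
               * purely off the dirty bit:
               */
              if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
                      sh_writeback_dcache_page(page); /* placeholder for the
                                                         real D-cache writeback */
      }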
      Reported-by: Nitin Gupta <ngupta@vflare.org>
      Reported-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: update die() output. · af67c3a9
      Committed by Paul Mundt
      This follows the ARM change, as SH had all of the same issues:
      
      Make die() better match x86:
      - add printing of the last accessed sysfs file
      - ensure console_verbose() is called under the lock
      - ensure we panic outside of oops_exit()
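      
      A rough ordering sketch of the three points above (heavily simplified; the
      symbol used for the sysfs file name is an assumption, and the real sh
      die() prints much more state):
      
      void die(const char *str, struct pt_regs *regs, long err)
      {
              static DEFINE_SPINLOCK(die_lock);
      
              oops_enter();
      
              spin_lock_irq(&die_lock);
              console_verbose();                      /* now called under the lock */
              bust_spinlocks(1);
      
              printk("%s: %04lx\n", str, err & 0xffff);
              printk("last sysfs file: %s\n", last_sysfs_file);  /* assumed symbol */
              print_modules();
              show_regs(regs);
      
              bust_spinlocks(0);
              spin_unlock_irq(&die_lock);
              oops_exit();
      
              if (panic_on_oops)
                      panic("Fatal exception");       /* panic after oops_exit() */
      
              do_exit(SIGSEGV);
      }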
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • x86: fix kernel panic on 32 bits when profiling · d1705c55
      Committed by H. Peter Anvin
      The latest kernel panics while booting on i386 machines when
      profile=2 is set on the command line, because 'sp' is incorrect in
      profile_pc().
      
      BUG: unable to handle kernel NULL pointer dereference at 00000246
      IP: [<c01288b6>] profile_pc+0x2a/0x48
      *pde = 00000000
      Oops: 0000 [#1] SMP
      
      This differs from the original version by Alex Shi in that we use the
      kernel_stack_pointer() inline already defined in <asm/ptrace.h> for
      this purpose, instead of #ifdef.
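      
      A simplified sketch of the fixed path (the surrounding lock-function
      heuristics and the frame-pointer variant are omitted; the final
      dereference is illustrative, the real code sanity-checks the stack words
      first):
      
      unsigned long profile_pc(struct pt_regs *regs)
      {
              unsigned long pc = instruction_pointer(regs);
      
              if (!user_mode_vm(regs) && in_lock_functions(pc)) {
                      /*
                       * kernel_stack_pointer() (from <asm/ptrace.h>) hides the
                       * 32-bit quirk: a same-privilege trap does not push %esp,
                       * so regs->sp is not the interrupted stack pointer there,
                       * and dereferencing it blindly is what oopsed above.
                       */
                      unsigned long *sp = (unsigned long *)kernel_stack_pointer(regs);
      
                      return sp[0];   /* illustrative only */
              }
              return pc;
      }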
      Originally-by: Alex Shi <alex.shi@intel.com>
      Cc: "Chen, Tim C" <tim.c.chen@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • ARM: force dcache flush if dcache_dirty bit set · 787b2faa
      Committed by Nitin Gupta
      On ARM, update_mmu_cache() does a dcache flush for a page only if
      it has a kernel mapping (page_mapping(page) != NULL). The correct
      behavior would be to force the flush based on the dcache_dirty bit only.
      
      One of the cases where the present logic is a problem is when
      a RAM-based block device [1] is used as a swap disk. In this case,
      we would have in-memory data corruption, as shown in the steps below:
      
      do_swap_page()
      {
          - Allocate a new page (if not already in swap cache)
          - Issue read from swap disk
              - Block driver issues flush_dcache_page()
              - flush_dcache_page() simply sets PG_dcache_dirty bit and does not
                actually issue a flush since this page has no user space mapping yet.
          - Now, if the swap disk is almost full, this newly read page is removed
            from the swap cache and the corresponding swap slot is freed.
          - Map this page anonymously in user space.
          - update_mmu_cache()
              - Since this page does not have a kernel mapping (it's not in the
                page/swap cache and is mapped anonymously), it does not issue a
                dcache flush even if the dcache_dirty bit was set by
                flush_dcache_page() above.
      
          <user now gets stale data since dcache was never flushed>
      }
      
      The same problem exists on MIPS too.
      
      [1] Examples:
       - brd (RAM-based block device)
       - ramzswap (RAM-based compressed swap device)
      Signed-off-by: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • x86: Fix Suspend to RAM freeze on Acer Aspire 1511Lmi laptop · 7a4b7e5e
      Committed by Jan Beulich
      Move the trampoline and its accessors back out of .cpuinit.* for the
      64-bit + ACPI_SLEEP case.
      
      This solves s2ram hangs reported in:
      
        http://bugzilla.kernel.org/show_bug.cgi?id=14279
      Reported-and-bisected-by: Christian Casteyde <casteyde.christian@free.fr>
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: <bugzilla-daemon@bugzilla.kernel.org>
      Cc: "Andrew Morton" <akpm@linux-foundation.org>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  6. 12 Oct 2009 (8 commits)
  7. 11 Oct 2009 (5 commits)
  8. 10 Oct 2009 (3 commits)
  9. 09 Oct 2009 (3 commits)
    • Revert "x86, timers: Check for pending timers after (device) interrupts" · e7ab0f7b
      Committed by Ingo Molnar
      This reverts commit 9bcbdd9c.
      
      The real bug producing LatencyTop latencies has been fixed in:
      
        f5dc3753: sched: Update the clock of runqueue select_task_rq() selected
      
      And the commit being reverted here triggers local timer processing
      from every device IRQ. If device IRQs come in at a high frequency,
      this could cause a performance regression.
      
      The commit being reverted here purely 'fixed' the reported latency
      as a side effect, because CPUs were being moved out of idle more
      often.
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Frans Pop <elendil@planet.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <20091008064041.67219b13@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf, x86: Add simple group validation · fe9081cc
      Committed by Peter Zijlstra
      Refuse to add events when the group wouldn't fit onto the PMU
      anymore.
      
      Naive implementation.
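      
      The idea is a dry run of the whole group against an empty fake PMU before
      accepting the new event; a small standalone model (the counter count and
      scheduling rule are made up, and the real code also applies per-event
      constraints):
      
      #include <stdbool.h>
      #include <stdio.h>
      
      #define NUM_COUNTERS 4          /* assumed PMU width for the sketch */
      
      struct fake_pmu { int used; };
      
      /* Model of scheduling one event onto the fake PMU. */
      static bool schedule_event(struct fake_pmu *pmu)
      {
              if (pmu->used >= NUM_COUNTERS)
                      return false;
              pmu->used++;
              return true;
      }
      
      /* Naive group validation: dry-run the whole group (leader, existing
       * siblings, plus the event being added) against an empty fake PMU and
       * refuse if it cannot be fully scheduled. */
      static bool group_fits(int nr_existing_events)
      {
              struct fake_pmu fake = { 0 };
              int i;
      
              for (i = 0; i < nr_existing_events + 1; i++)  /* +1: the new event */
                      if (!schedule_event(&fake))
                              return false;
              return true;
      }
      
      int main(void)
      {
              printf("3 existing + 1 new on a 4-counter PMU: %s\n",
                     group_fits(3) ? "fits" : "rejected");
              printf("4 existing + 1 new on a 4-counter PMU: %s\n",
                     group_fits(4) ? "fits" : "rejected");
              return 0;
      }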
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Stephane Eranian <eranian@gmail.com>
      LKML-Reference: <1254911461.26976.239.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Add event constraints support for Intel processors · b690081d
      Committed by Stephane Eranian
      On some Intel processors, not all events can be measured in all
      counters. Some events can only be measured in one particular
      counter, for instance. Assigning an event to the wrong counter does
      not crash the machine, but it yields bogus counts, i.e., a silent
      error.
      
      This patch changes the event-to-counter assignment logic to take
      into account event constraints for Intel P6, Core and Nehalem
      processors. There are no constraints on Intel Atom. There are
      constraints on Intel Yonah (Core Duo), but they are not provided in
      this patch given that this processor is not yet supported by
      perf_events.
      
      As a result of the constraints, it is possible for some event
      groups to never actually be loaded onto the PMU if they contain two
      events which can only be measured on a single counter. That
      situation can be detected with the scaling information extracted
      with read().
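      
      A standalone model of constraint-aware counter assignment; the table
      contents and counter count below are invented for illustration, not real
      Intel constraints:
      
      #include <stdint.h>
      #include <stdio.h>
      
      #define NUM_COUNTERS 4
      
      /* Model of an event constraint: which counters may count a given code. */
      struct event_constraint {
              unsigned int code;      /* event select code */
              uint32_t     idxmsk;    /* bitmask of counters allowed for it */
      };
      
      static const struct event_constraint constraints[] = {
              { 0x10, 0x1 },          /* e.g. an event allowed only on counter 0 */
              { 0x11, 0x2 },          /* another allowed only on counter 1 */
              { 0, 0 },               /* terminator: other events are unconstrained */
      };
      
      /* Pick a counter for 'code' given the set of already-used counters. */
      static int get_event_idx(unsigned int code, uint32_t used_mask)
      {
              const struct event_constraint *c;
              uint32_t allowed = (1u << NUM_COUNTERS) - 1;  /* default: any counter */
              int idx;
      
              for (c = constraints; c->idxmsk; c++)
                      if (c->code == code) {
                              allowed = c->idxmsk;
                              break;
                      }
      
              for (idx = 0; idx < NUM_COUNTERS; idx++)
                      if ((allowed & (1u << idx)) && !(used_mask & (1u << idx)))
                              return idx;
      
              return -1;              /* the group cannot be scheduled */
      }
      
      int main(void)
      {
              uint32_t used = 0;
              int a = get_event_idx(0x10, used);  /* constrained event -> counter 0 */
              used |= 1u << a;
              int b = get_event_idx(0x10, used);  /* same event again -> no counter */
      
              printf("first: %d, second: %d\n", a, b);    /* prints "first: 0, second: -1" */
              return 0;
      }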
      Signed-off-by: Stephane Eranian <eranian@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1254840129-6198-3-git-send-email-eranian@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>