1. 06 Feb 2009, 3 commits
    • wait: prevent exclusive waiter starvation · 777c6c5f
      Committed by Johannes Weiner
      With exclusive waiters, every process woken up through the wait queue must
      ensure that the next waiter down the line is woken when it has finished.
      
      Interruptible waiters don't do that when aborting due to a signal.  And if
      an aborting waiter is concurrently woken up through the waitqueue, no one
      will ever wake up the next waiter.
      
      This has been observed with __wait_on_bit_lock() used by
      lock_page_killable(): the first contender on the queue was aborting when
      the actual lock holder woke it up concurrently.  The aborted contender
      didn't acquire the lock and therefore never did an unlock followed by
      waking up the next waiter.
      
      Add abort_exclusive_wait() which removes the process' wait descriptor from
      the waitqueue, iff still queued, or wakes up the next waiter otherwise.
      It does so under the waitqueue lock.  Racing with a wake up means the
      aborting process is either already woken (removed from the queue) and will
      wake up the next waiter, or it will remove itself from the queue and the
      concurrent wake up will apply to the next waiter after it.
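      
      The logic of the new helper, roughly (a sketch of the function this
      commit adds to kernel/wait.c; details may differ slightly):
      
      void abort_exclusive_wait(wait_queue_head_t *q, wait_queue_t *wait,
      			unsigned int mode, void *key)
      {
      	unsigned long flags;
      
      	__set_current_state(TASK_RUNNING);
      	spin_lock_irqsave(&q->lock, flags);
      	if (!list_empty(&wait->task_list))
      		/* still queued: nobody woke us, just dequeue */
      		list_del_init(&wait->task_list);
      	else if (waitqueue_active(q))
      		/* already woken: pass the wake up on */
      		__wake_up_common(q, mode, 1, 0, key);
      	spin_unlock_irqrestore(&q->lock, flags);
      }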
      
      Use abort_exclusive_wait() in __wait_event_interruptible_exclusive() and
      __wait_on_bit_lock() when they are interrupted by means other than a wake
      up through the queue.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Reported-by: Chris Mason <chris.mason@oracle.com>
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Mentored-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Matthew Wilcox <matthew@wil.cx>
      Cc: Chuck Lever <cel@citi.umich.edu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: <stable@kernel.org>		["after some testing"]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      777c6c5f
    • revert "rlimit: permit setting RLIMIT_NOFILE to RLIM_INFINITY" · 60fd760f
      Committed by Andrew Morton
      Revert commit 0c2d64fb because it causes
      (arguably poorly designed) existing userspace to spend interminable
      periods closing billions of not-open file descriptors.
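      
      For reference, the userspace idiom that became pathological looks
      roughly like this (a hypothetical sketch, not code from the report):
      
      #include <sys/resource.h>
      #include <unistd.h>
      
      /* Close every fd that might have been inherited -- a common
       * pre-exec idiom.  With RLIMIT_NOFILE raised from 1024 to 2^20,
       * each fork suddenly issues ~10^6 close() calls, almost all of
       * them on descriptors that were never open. */
      static void close_inherited_fds(void)
      {
      	struct rlimit rl;
      	rlim_t fd;
      
      	if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
      		for (fd = 3; fd < rl.rlim_cur; fd++)
      			close(fd);
      }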
      
      We could bring this back, with some sort of opt-in tunable in /proc, which
      defaults to "off".
      
      Peter's analysis follows:
      
      : I spent several hours trying to get to the bottom of a serious
      : performance issue that appeared on one of our servers after upgrading to
      : 2.6.28.  In the end it's what could be considered a userspace bug that
      : was triggered by a change in 2.6.28.  Since this might also affect other
      : people I figured I'd at least document what I found here, and maybe we
      : can even do something about it:
      :
      :
      : So, I upgraded some of debian.org's machines to 2.6.28.1 and immediately
      : the team maintaining our ftp archive complained that one of their
      : scripts that previously ran in a few minutes still hadn't even come
      : close to being done after an hour or so.  Downgrading to 2.6.27 fixed
      : that.
      :
      : Turns out that script is forking a lot and something in it or python or
      : wherever closes all the file descriptors it doesn't want to pass on.
      : That is, it starts at zero and goes up to ulimit -n/RLIMIT_NOFILE and
      : closes them all with a few exceptions.
      :
      : Turns out that takes a long time when your ulimit -n is now 2^20 (1048576).
      :
      : With 2.6.27.* the ulimit -n was the standard 1024, but with 2.6.28 it is
      : now a thousand times that.
      :
      : 2.6.28 included a patch titled "rlimit: permit setting RLIMIT_NOFILE to
      : RLIM_INFINITY" (0c2d64fb)[1] that
      : allows, as the title implies, to set the limit for number of files to
      : infinity.
      :
      : Closer investigation showed that the broken default ulimit did not apply
      : to "system" processes (like stuff started from init).  In the end I
      : could establish that all processes that passed through pam_limit at one
      : point had the bad resource limit.
      :
      : Apparently the pam library in Debian etch (4.0) initializes the limits
      : to some default values when it doesn't have any settings in limit.conf
      : to override them.  Turns out that for nofiles this is RLIM_INFINITY.
      : Commenting out "case RLIMIT_NOFILE" in pam_limit.c:267 of our pam
      : package version 0.79-5 fixes that - tho I'm not sure what side effects
      : that has.
      :
      : Debian lenny (the upcoming 5.0 version) doesn't have this issue as it
      : uses a different pam (version).
      Reported-by: Peter Palfrader <weasel@debian.org>
      Cc: Adam Tkac <vonsch@gmail.com>
      Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
      Cc: <stable@kernel.org>		[2.6.28.x]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      60fd760f
    • kernel/async.c: fix printk warnings · 58763a29
      Committed by Andrew Morton
      alpha:
      
      kernel/async.c: In function 'run_one_entry':
      kernel/async.c:141: warning: format '%lli' expects type 'long long int', but argument 2 has type 'async_cookie_t'
      kernel/async.c:149: warning: format '%lli' expects type 'long long int', but argument 2 has type 'async_cookie_t'
      kernel/async.c:149: warning: format '%lld' expects type 'long long int', but argument 4 has type 's64'
      kernel/async.c: In function 'async_synchronize_cookie_special':
      kernel/async.c:250: warning: format '%lli' expects type 'long long int', but argument 3 has type 's64'
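      
      The standard remedy, and presumably what this patch applies, is to
      cast the typedef'd 64-bit values so they match the format string
      (illustrative line, not the exact hunk):
      
      	printk("calling  %lli\n", (long long)entry->cookie);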
      
      Cc: Arjan van de Ven <arjan@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      58763a29
  2. 04 Feb 2009, 1 commit
  3. 03 Feb 2009, 1 commit
    • modules: Use a better scheme for refcounting · 720eba31
      Committed by Eric Dumazet
      Current refcounting for modules (done if CONFIG_MODULE_UNLOAD=y) is
      using a lot of memory.
      
      Each 'struct module' contains an [NR_CPUS] array of full cache lines.
      
      This patch uses existing infrastructure (percpu_modalloc() &
      percpu_modfree()) to allocate percpu space for the refcount storage.
      
      Instead of wasting NR_CPUS*128 bytes (on i386), we now use
      nr_cpu_ids*sizeof(local_t) bytes.
      
      On a typical distro, where NR_CPUS=8, shipping 2000 modules, we reduce
      the size of module files by about 2 Mbytes (1 KB per module).
      
      Instead of having all refcounters in the same memory node - with TLB misses
      because of vmalloc() - this new implementation gives better NUMA properties,
      since each CPU will use storage on its preferred node, thanks to percpu
      storage.
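      
      A condensed sketch of the resulting layout (placement simplified):
      
      struct module {
      	......
      #ifdef CONFIG_MODULE_UNLOAD
      	/* one local_t per possible CPU in percpu space, allocated
      	 * with percpu_modalloc(), replacing the NR_CPUS-sized array
      	 * of cache-line-padded counters inside the struct */
      	local_t *refptr;
      #endif
      };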
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      720eba31
  4. 01 Feb 2009, 6 commits
  5. 31 Jan 2009, 4 commits
    • hrtimer: prevent negative expiry value after clock_was_set() · b0a9b511
      Committed by Thomas Gleixner
      Impact: prevent false positive WARN_ON() in clockevents_program_event()
      
      clock_was_set() changes the base->offset of CLOCK_REALTIME and
      enforces the reprogramming of the clockevent device to expire timers
      which are based on CLOCK_REALTIME. If the clock change is large enough
      then the subtraction of the timer expiry value and base->offset can
      become negative which triggers the warning in
      clockevents_program_event().
      
      Check the subtraction result and set a negative value to 0.
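      
      In code terms the fix is a simple clamp (sketch):
      
      	expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
      	/* clock_was_set() may push base->offset past the expiry value;
      	 * clamp to zero so clockevents_program_event() never sees a
      	 * negative delta and warns */
      	if (expires.tv64 < 0)
      		expires.tv64 = 0;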
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      b0a9b511
    • hrtimers: allow the hot-unplugging of all cpus · 94df7de0
      Committed by Sebastien Dugue
      Impact: fix CPU hotplug hang on Power6 testbox
      
      On architectures that support offlining all cpus (at least powerpc/pseries),
      hot-unplugging the tick_do_timer_cpu can result in a system hang.
      
      This comes from the fact that if the cpu going down happens to be the
      cpu doing the tick, then as the tick_do_timer_cpu handover happens after the
      cpu is dead (via the CPU_DEAD notification), we're left without ticks,
      jiffies are frozen and any task relying on timers (msleep, ...) is stuck.
      That's particularly the case for the cpu looping in __cpu_die() waiting
      for the dying cpu to be dead.
      
      This patch addresses this by having the tick_do_timer_cpu handover happen
      earlier during the CPU_DYING notification. For this, a new clockevent
      notification type is introduced (CLOCK_EVT_NOTIFY_CPU_DYING) which is triggered
      in hrtimer_cpu_notify().
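      
      Condensed, the new notification path looks like this (sketch):
      
      	/* hrtimer_cpu_notify(), run while the dying cpu still works;
      	 * the CLOCK_EVT_NOTIFY_CPU_DYING handler hands tick duties
      	 * over to a surviving cpu */
      	case CPU_DYING:
      	case CPU_DYING_FROZEN:
      		clockevents_notify(CLOCK_EVT_NOTIFY_CPU_DYING, &scpu);
      		break;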
      Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      94df7de0
    • hrtimers: increase clock min delta threshold while interrupt hanging · 7f22391c
      Committed by Frederic Weisbecker
      Impact: avoid timer IRQ hanging slow systems
      
      While using the function graph tracer on a virtualized system, the
      hrtimer_interrupt can hang the system in an infinite loop.
      
      This can be caused in several situations:
      
       - the hardware is very slow and HZ is set too high
      
       - something intrusive is slowing the system down (tracing under emulation)
      
      ... and the next clock events to program are always before the current time.
      
      This patch implements a reasonable compromise: if such a situation is
      detected, we cede up to 1/4 of the CPU's time to processing the hrtimer
      interrupts. This is enough to let the system run without serious starvation.
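      
      A hypothetical sketch of that compromise (names, threshold and
      placement illustrative only):
      
      	/* hrtimer_interrupt() keeps finding the next event already in
      	 * the past: give up retrying and raise the clockevent device's
      	 * minimum delta to 4x the time one pass took, so the interrupt
      	 * consumes at most ~1/4 of the CPU */
      	if (++retries > 3) {
      		ktime_t spent = ktime_sub(ktime_get(), entry_time);
      
      		dev->min_delta_ns = ktime_to_ns(spent) << 2;
      		printk(KERN_WARNING "hrtimer: interrupt too slow, "
      		       "forcing clock min delta to %lu ns\n",
      		       dev->min_delta_ns);
      	}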
      
      It has been successfully tested under VirtualBox with 1000 HZ and 100 HZ
      with the function graph tracer launched. In both cases, the clock event
      intervals were increased up to about 25 ms periodic ticks, i.e. 40 HZ.
      
      So we turn a hard-to-debug hang into a warning message and a system that
      still manages to limp along.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7f22391c
    • generic-ipi: use per cpu data for single cpu ipi calls · d7240b98
      Committed by Steven Rostedt
      The smp_call_function can be passed a wait parameter telling it to
      wait for all the functions running on other CPUs to complete before
      returning, or to return without waiting. Unfortunately, this is
      currently just a suggestion and not mandatory. That is,
      smp_call_function can decide to wait even when asked not to.
      
      The reason is that it uses kmalloc to allocate storage
      to send to the called CPU and that CPU will free it when it is done.
      But if we fail to allocate the storage, the stack is used instead.
      This means we must wait for the called CPU to finish before
      continuing.
      
      Unfortunately, some callers do not abide by this hint and act as if
      the non-wait option is mandatory. The MTRR code for instance will
      deadlock if the smp_call_function is set to wait. This is because
      the smp_call_function will wait for the other CPUs to finish their
      called functions, but those functions are waiting on the caller to
      continue.
      
      This patch changes the generic smp_call_function code to use per cpu
      variables if the allocation of the data fails for a single CPU call. The
      smp_call_function_many will fall back to the smp_call_function_single
      if it fails its alloc. The smp_call_function_single is modified
      to not force the wait state.
      
      Since we are now using a single per-cpu data structure, we must synchronize
      the callers to prevent a second caller from modifying the data before the
      first caller's IPI function completes. To do so, I added a flag to
      the call_single_data called CSD_FLAG_LOCK. When the single CPU is
      called (which can happen when a many-CPU call fails its alloc), we
      set the LOCK bit on this per cpu data. When the caller finishes
      it clears the LOCK bit.
      
      The caller must wait till the LOCK bit is cleared before setting
      it. When it is cleared, there is no IPI function using it.
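      
      A condensed sketch of that path (as described, in kernel/smp.c):
      
      static DEFINE_PER_CPU(struct call_single_data, csd_data);
      
      	/* single-CPU call, caller did not ask to wait: use the
      	 * caller's own per-cpu descriptor instead of kmalloc/stack */
      	data = &per_cpu(csd_data, smp_processor_id());
      
      	/* wait until the previous user of this slot is finished */
      	while (data->flags & CSD_FLAG_LOCK)
      		cpu_relax();
      	data->flags = CSD_FLAG_LOCK;	/* cleared once the IPI is done */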
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Jens Axboe <jens.axboe@oracle.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d7240b98
  6. 30 Jan 2009, 4 commits
  7. 28 Jan 2009, 2 commits
  8. 27 Jan 2009, 2 commits
  9. 22 Jan 2009, 1 commit
  10. 21 Jan 2009, 7 commits
    • trace: set max latency variable to zero on default · 1092307d
      Committed by Steven Rostedt
      Impact: trace max latencies on start of latency tracing
      
      This patch sets the max latency to zero whenever one of the
      irq variant tracers or the wakeup tracer is set to current tracer.
      
      Most developers expect to see output when starting up a latency
      tracer. But since the max_latency is already set to max, and
      it takes a latency greater than max_latency to be recorded, there
      is no trace. This is not the expected behavior, and it has even
      confused me.
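      
      In essence the change is one line in each of those tracers' init
      paths (a sketch, assuming it lands in the tracer init functions):
      
      	/* irqsoff/preemptoff/wakeup tracer init: record latencies from
      	 * zero up, instead of waiting to beat a max that starts at
      	 * "infinity" */
      	tracing_max_latency = 0;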
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1092307d
    • trace: stop all recording to ring buffer on ftrace_dump · a442e5e0
      Committed by Steven Rostedt
      Impact: limit ftrace dump output
      
      Currently ftrace_dump only calls ftrace_kill, which is a fast way
      to prevent the function tracer functions from being called (it just sets
      a flag and clears the function to call, nothing else). It is better
      to turn off any recording to the ring buffers as well.
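      
      The shape of the addition (a sketch; symbols assumed from the trace
      code of that era):
      
      	/* ftrace_dump(): beyond ftrace_kill(), also stop every CPU
      	 * from writing new entries while the buffers are dumped */
      	for_each_tracing_cpu(cpu)
      		atomic_inc(&global_trace.data[cpu]->disabled);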
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a442e5e0
    • trace: print ftrace_dump at KERN_EMERG log level · faf6861e
      Committed by Steven Rostedt
      Impact: fix to print out ftrace_dump when expected
      
      I was debugging a hard race condition, only to find out that after
      I hit the race, my log level was not set to show KERN_INFO. The time
      it took to trigger the race was wasted because I did not capture the
      trace.
      
      Since ftrace_dump is only called from kernel oops (and only when
      it is set in the kernel command line to do so), or when a
      developer adds it to their own local tree, the log level of
      the print should be at KERN_EMERG to make sure the print appears.
      
      ftrace_dump is not called in a normal user setup, and will not
      add extra unwanted output to the console. There is no reason
      it should be at KERN_INFO.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      faf6861e
    • ring_buffer: reset write when reserve buffer fail · 551b4048
      Committed by Lai Jiangshan
      Impact: reset struct buffer_page.write during an interrupt storm
      
      If struct buffer_page.write is not reset, any subsequent commit will
      corrupt the ring_buffer:
      
      static inline void
      rb_set_commit_to_write(struct ring_buffer_per_cpu *cpu_buffer)
      {
      	......
      		cpu_buffer->commit_page->commit =
      			cpu_buffer->commit_page->write;
      	......
      }
      
      when "if (RB_WARN_ON(cpu_buffer, next_page == reader_page))", ring_buffer
      is disabled, but some reserved buffers may haven't been committed.
      we need reset struct buffer_page.write.
      
      when "if (unlikely(next_page == cpu_buffer->commit_page))", ring_buffer
      is still available, we should not corrupt it.
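      
      The shape of the fix on those failure paths (sketch):
      
      	/* back out the failed reservation so a later commit sees a
      	 * consistent write counter on the tail page */
      	if (tail <= BUF_PAGE_SIZE)
      		local_set(&tail_page->write, tail);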
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      551b4048
    • tracing/function-graph-tracer: fix a regression while suspend to disk · 00f57f54
      Committed by Frederic Weisbecker
      Impact: fix a crash during kernel image restore
      
      When the function graph tracer is running during suspend to disk, some racy
      and dangerous things happen to this tracer.
      
      The current task will save its registers, including the stack pointer which
      contains the return address hooked by the tracer. But the current task will
      continue to enter other functions after that to save the memory, and then
      it will store other return addresses, and finally lose the old depth which
      matches the return address saved in the old stack (during register saving).
      
      So on image restore, the code will return to wrong addresses.
      And there are other things: on restore, the task will have its "current"
      pointer overwritten during register restoring... switching from one task to
      another... It would be insane to try to trace function graphs at these
      stages.
      
      This patch makes the function graph tracer listen for power events, and
      disables its tracing for the current task (the one that performs the
      hibernation work) during suspend/resume to disk, making the tracing safe
      during hibernation.
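      
      Condensed, the notifier looks like this (sketch):
      
      static int ftrace_suspend_notifier_call(struct notifier_block *bl,
      					unsigned long state, void *unused)
      {
      	switch (state) {
      	case PM_HIBERNATION_PREPARE:
      		pause_graph_tracing();		/* current task only */
      		break;
      	case PM_POST_HIBERNATION:
      		unpause_graph_tracing();
      		break;
      	}
      	return NOTIFY_DONE;
      }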
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      00f57f54
    • dma-coherent: Restore dma_alloc_from_coherent() large alloc fall back policy. · 0609697e
      Committed by Paul Mundt
      When doing large allocations (larger than the per-device coherent area)
      the code silently falls back on the generic memory allocators, without
      regard for the per-device constraints.
      
      In the DMA_MEMORY_EXCLUSIVE case falling back on generic memory is not
      an option, as it tends not to be addressable by the DMA hardware in
      question. This issue showed up with the 8139too breakage on the
      Dreamcast, where non-addressable buffers were silently allocated due to
      the size mismatch calculation -- while it should have simply errored out
      upon being unable to satisfy the allocation with the given device
      constraints.
      
      This restores the fallback behaviour to what it was before the oversized
      request change caused multiple regressions.
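      
      The restored policy, condensed (sketch of the error path in
      dma_alloc_from_coherent()):
      
       err:
      	/* the per-device area could not satisfy the request: *ret is
      	 * NULL, and a nonzero return tells the caller the request was
      	 * handled (and failed), while zero lets it fall back on the
      	 * generic allocators */
      	return mem->flags & DMA_MEMORY_EXCLUSIVE;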
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      0609697e
    • dma-coherent: per-device coherent area is in pages, not bytes. · cdf57cab
      Committed by Adrian McMenamin
      Commit 58c6d3df ("dma-coherent: catch
      oversized requests to dma_alloc_from_coherent()") attempted to add a
      sanity check to bail out on allocations larger than the coherent area.
      
      Unfortunately when this was implemented, the fact that the coherent area
      is tracked in pages rather than bytes was overlooked, which subsequently
      broke every single dma_alloc_from_coherent() user, forcing the allocation
      silently through generic memory instead.
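      
      The overlooked unit conversion, and its fix, in one line (sketch):
      
      	/* mem->size counts pages; convert to bytes before comparing
      	 * against the byte-sized request */
      	if (unlikely(size > (mem->size << PAGE_SHIFT)))
      		goto err;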
      
      Signed-off-by: Adrian McMenamin <adrian@mcmen.demon.co.uk>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      cdf57cab
  11. 20 Jan 2009, 3 commits
  12. 19 Jan 2009, 3 commits
    • cpuset: fix possible deadlock in async_rebuild_sched_domains · f90d4118
      Committed by Miao Xie
      Lockdep reported a possible circular locking dependency when we tested
      cpuset on a NUMA/fake NUMA box.
      
      =======================================================
      [ INFO: possible circular locking dependency detected ]
      2.6.29-rc1-00224-ga6525042 #111
      -------------------------------------------------------
      bash/2968 is trying to acquire lock:
       (events){--..}, at: [<ffffffff8024c8cd>] flush_work+0x24/0xd8
      
      but task is already holding lock:
       (cgroup_mutex){--..}, at: [<ffffffff8026ad1e>] cgroup_lock_live_group+0x12/0x29
      
      which lock already depends on the new lock.
      ......
      -------------------------------------------------------
      
      Steps to reproduce:
      # mkdir /dev/cpuset
      # mount -t cpuset xxx /dev/cpuset
      # mkdir /dev/cpuset/0
      # echo 0 > /dev/cpuset/0/cpus
      # echo 0 > /dev/cpuset/0/mems
      # echo 1 > /dev/cpuset/0/memory_migrate
      # cat /dev/zero > /dev/null &
      # echo $! > /dev/cpuset/0/tasks
      
      This is because async_rebuild_sched_domains has the following lock sequence:
      run_workqueue(async_rebuild_sched_domains)
      	-> do_rebuild_sched_domains -> cgroup_lock
      
      But, attaching tasks when memory_migrate is set has following:
      cgroup_lock_live_group(cgroup_tasks_write)
      	-> do_migrate_pages -> flush_work
      
      This patch fixes it by using a separate workqueue thread.
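      
      A condensed sketch of the fix:
      
      static struct workqueue_struct *cpuset_wq;
      
      	/* run the rebuild on cpuset's own thread: a flush_work() on
      	 * the generic events queue can then never block behind a work
      	 * item that itself needs cgroup_mutex */
      	queue_work(cpuset_wq, &rebuild_sched_domains_work);
      
      	/* at init time: */
      	cpuset_wq = create_singlethread_workqueue("cpuset");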
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f90d4118
    • hrtimers: fix inconsistent lock state on resume in hres_timers_resume · 1d4a7f1c
      Committed by Peter Zijlstra
      Andrey Borzenkov reported this lockdep assert:
      
      > [17854.688347] =================================
      > [17854.688347] [ INFO: inconsistent lock state ]
      > [17854.688347] 2.6.29-rc2-1avb #1
      > [17854.688347] ---------------------------------
      > [17854.688347] inconsistent {in-hardirq-W} -> {hardirq-on-W} usage.
      > [17854.688347] pm-suspend/18240 [HC0[0]:SC0[0]:HE1:SE1] takes:
      > [17854.688347]  (&cpu_base->lock){++..}, at: [<c0136fcc>] retrigger_next_event+0x5c/0xa0
      > [17854.688347] {in-hardirq-W} state was registered at:
      > [17854.688347]   [<c01443cd>] __lock_acquire+0x79d/0x1930
      > [17854.688347]   [<c01455bc>] lock_acquire+0x5c/0x80
      > [17854.688347]   [<c03092e5>] _spin_lock+0x35/0x70
      > [17854.688347]   [<c0136e61>] hrtimer_run_queues+0x31/0x140
      > [17854.688347]   [<c0128d98>] run_local_timers+0x8/0x20
      > [17854.688347]   [<c0128dd3>] update_process_times+0x23/0x60
      > [17854.688347]   [<c013e274>] tick_periodic+0x24/0x80
      > [17854.688347]   [<c013e2e2>] tick_handle_periodic+0x12/0x70
      > [17854.688347]   [<c0104e24>] timer_interrupt+0x14/0x20
      > [17854.688347]   [<c01607b9>] handle_IRQ_event+0x29/0x60
      > [17854.688347]   [<c0161c59>] handle_level_irq+0x69/0xe0
      > [17854.688347]   [<ffffffff>] 0xffffffff
      > [17854.688347] irq event stamp: 55771
      > [17854.688347] hardirqs last  enabled at (55771): [<c0309125>] _spin_unlock_irqrestore+0x35/0x60
      > [17854.688347] hardirqs last disabled at (55770): [<c0309419>] _spin_lock_irqsave+0x19/0x80
      > [17854.688347] softirqs last  enabled at (54836): [<c0124f54>] __do_softirq+0xc4/0x110
      > [17854.688347] softirqs last disabled at (54831): [<c01049ae>] do_softirq+0x8e/0xe0
      > [17854.688347]
      > [17854.688347] other info that might help us debug this:
      > [17854.688347] 3 locks held by pm-suspend/18240:
      > [17854.688347]  #0:  (&buffer->mutex){--..}, at: [<c01dd4c5>] sysfs_write_file+0x25/0x100
      > [17854.688347]  #1:  (pm_mutex){--..}, at: [<c015056f>] enter_state+0x4f/0x140
      > [17854.688347]  #2:  (dpm_list_mtx){--..}, at: [<c027880f>] device_pm_lock+0xf/0x20
      > [17854.688347]
      > [17854.688347] stack backtrace:
      > [17854.688347] Pid: 18240, comm: pm-suspend Not tainted 2.6.29-rc2-1avb #1
      > [17854.688347] Call Trace:
      > [17854.688347]  [<c0306248>] ? printk+0x18/0x20
      > [17854.688347]  [<c0141fac>] print_usage_bug+0x16c/0x1d0
      > [17854.688347]  [<c0142bcf>] mark_lock+0x8bf/0xc90
      > [17854.688347]  [<c0106b8f>] ? pit_next_event+0x2f/0x40
      > [17854.688347]  [<c01441b0>] __lock_acquire+0x580/0x1930
      > [17854.688347]  [<c030916d>] ? _spin_unlock+0x1d/0x20
      > [17854.688347]  [<c0106b8f>] ? pit_next_event+0x2f/0x40
      > [17854.688347]  [<c013dd38>] ? clockevents_program_event+0x98/0x160
      > [17854.688347]  [<c0142fe8>] ? mark_held_locks+0x48/0x90
      > [17854.688347]  [<c0309125>] ? _spin_unlock_irqrestore+0x35/0x60
      > [17854.688347]  [<c0143229>] ? trace_hardirqs_on_caller+0x139/0x190
      > [17854.688347]  [<c014328b>] ? trace_hardirqs_on+0xb/0x10
      > [17854.688347]  [<c01455bc>] lock_acquire+0x5c/0x80
      > [17854.688347]  [<c0136fcc>] ? retrigger_next_event+0x5c/0xa0
      > [17854.688347]  [<c03092e5>] _spin_lock+0x35/0x70
      > [17854.688347]  [<c0136fcc>] ? retrigger_next_event+0x5c/0xa0
      > [17854.688347]  [<c0136fcc>] retrigger_next_event+0x5c/0xa0
      > [17854.688347]  [<c013711a>] hres_timers_resume+0xa/0x10
      > [17854.688347]  [<c013aa8e>] timekeeping_resume+0xee/0x150
      > [17854.688347]  [<c0273384>] __sysdev_resume+0x14/0x50
      > [17854.688347]  [<c0273407>] sysdev_resume+0x47/0x80
      > [17854.688347]  [<c02791ab>] device_power_up+0xb/0x20
      > [17854.688347]  [<c015043f>] suspend_devices_and_enter+0xcf/0x150
      > [17854.688347]  [<c0150c2f>] ? freeze_processes+0x3f/0x90
      > [17854.688347]  [<c0150614>] enter_state+0xf4/0x140
      > [17854.688347]  [<c01506dd>] state_store+0x7d/0xc0
      > [17854.688347]  [<c0150660>] ? state_store+0x0/0xc0
      > [17854.688347]  [<c0202da4>] kobj_attr_store+0x24/0x30
      > [17854.688347]  [<c01dd53c>] sysfs_write_file+0x9c/0x100
      > [17854.688347]  [<c019916c>] vfs_write+0x9c/0x160
      > [17854.688347]  [<c0103494>] ? restore_nocheck_notrace+0x0/0xe
      > [17854.688347]  [<c01dd4a0>] ? sysfs_write_file+0x0/0x100
      > [17854.688347]  [<c01992ed>] sys_write+0x3d/0x70
      > [17854.688347]  [<c0103371>] sysenter_do_call+0x12/0x31
      
      Andrey's analysis:
      
      > timekeeping_resume() is called via class ->resume
      > method; and according to comments in sysdev_resume() and
      > device_power_up(), they are called with interrupts disabled.
      >
      > Looking at suspend_enter, irqs *are* disabled at this point.
      >
      > So it actually looks like something (may be some driver)
      > unconditionally enabled irqs in resume path.
      
      Add a debug check to test this theory. If it triggers, then the resume
      code is calling in with irqs enabled, which is a no-no not just for
      timekeeping_resume(), but also bad for a number of other resume handlers.
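      
      The check itself (sketch):
      
      void hres_timers_resume(void)
      {
      	/* resume handlers are supposed to run with irqs off; catch
      	 * whichever path re-enabled them on the way here */
      	WARN_ONCE(!irqs_disabled(),
      		  KERN_INFO "hres_timers_resume() called with IRQs enabled!");
      
      	/* Retrigger the CPU local events everywhere */
      	retrigger_next_event(NULL);
      }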
      Reported-by: Andrey Borzenkov <arvidjaar@mail.ru>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1d4a7f1c
    • relay: fix lock imbalance in relay_late_setup_files · b786c6a9
      Committed by Jiri Slaby
      One failure path in relay_late_setup_files() omits
      mutex_unlock(&relay_channels_mutex);
      Add it.
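      
      The shape of the fix (the surrounding error path shown here is
      hypothetical):
      
      	if (!chan->has_base_filename) {		/* hypothetical fail path */
      		mutex_unlock(&relay_channels_mutex);	/* was missing */
      		return -EINVAL;
      	}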
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b786c6a9
  13. 17 Jan 2009, 2 commits
  14. 16 Jan 2009, 1 commit