1. 13 Oct 2007 (3 commits)
  2. 12 Oct 2007 (1 commit)
  3. 11 Oct 2007 (6 commits)
  4. 10 Oct 2007 (1 commit)
  5. 08 Oct 2007 (2 commits)
  6. 02 Oct 2007 (1 commit)
  7. 01 Oct 2007 (2 commits)
  8. 27 Sep 2007 (1 commit)
  9. 23 Sep 2007 (1 commit)
    • clockevents: remove the suspend/resume workaround^Wthinko · b7e113dc
      Committed by Thomas Gleixner
      In a desperate attempt to fix the suspend/resume problem on Andrew's
      VAIO I added a workaround which enforced the broadcast of the oneshot
      timer on resume. This did resolve the problem on the VAIO, but it was
      just a crude workaround which did not tackle the root cause: the
      assignment of lower idle C-states in the ACPI processor_idle code. The
      cpuidle patches, which utilize the dynamic tick feature and go faster
      into deeper C-states, exposed the problem again. The correct solution
      is the previous patch, which prevents lower C-states across the
      suspend/resume.
      
      Remove the enforcement code, including the conditional broadcast timer
      arming, which helped to paper over the real problem for quite some time.
      The oneshot broadcast flag for the cpu which runs the resume code can
      never be set at the time this code is executed; it only gets set when
      the CPU enters a lower idle C-state.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 21 Sep 2007 (1 commit)
    • signalfd simplification · b8fceee1
      Committed by Davide Libenzi
      This simplifies the signalfd code by no longer keeping it attached to
      the sighand for its whole lifetime.
      
      This way, the signalfd is attached to the sighand only during
      poll(2) (and select and epoll) and read(2).  It also allows removing
      all the custom "tsk == current" checks in kernel/signal.c, since
      dequeue_signal() will only be called by "current".
      
      I think this is also what Ben was suggesting some time ago.
      
      The external effect of this is that a thread can extract only its own
      private signals and the group ones.  I think this is acceptable
      behaviour, in that those are the signals the thread would be able to
      fetch without signalfd.
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 20 Sep 2007 (6 commits)
  12. 16 Sep 2007 (5 commits)
    • clockevents: prevent stale tick update on offline cpu · 5e41d0d6
      Committed by Thomas Gleixner
      Taking a cpu offline removes the cpu from the online mask before the
      CPU_DEAD notification is done. The clock events layer does the cleanup
      of the dead CPU from the CPU_DEAD notifier chain. tick_do_timer_cpu is
      used to avoid xtime lock contention by assigning the task of jiffies
      xtime updates to one CPU. If a CPU is taken offline, then this
      assignment becomes stale. This went unnoticed because most of the time
      the offline CPU went dead before the online CPU reached __cpu_die(),
      where the CPU_DEAD state is checked. In the case that the offline CPU did
      not reach the DEAD state before we reach __cpu_die(), the code in there
      goes to sleep for 100ms. Due to the stale time update assignment, the
      system is stuck forever.
      
      Take the assignment away when a cpu is no longer in the cpu_online_mask.
      We do this in the last call to tick_nohz_stop_sched_tick(), when the
      offline CPU is on its way to the final play_dead() idle entry.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • clockevents: do not shutdown the oneshot broadcast device · 31d9b393
      Committed by Thomas Gleixner
      When a cpu goes offline it is removed from the broadcast masks. If the
      mask becomes empty the code shuts down the broadcast device. This is
      wrong, because the broadcast device needs to be ready for the online
      cpu going idle (into a c-state, which stops the local apic timer).
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • clockevents: Enforce oneshot broadcast when broadcast mask is set on resume · 07eec6af
      Committed by Thomas Gleixner
      The jinxed VAIO refuses to resume without hitting keys on the keyboard
      when this is not enforced. It is unclear why the cpu ends up in a lower
      C-state without notifying the clock events layer, but enforcing the
      oneshot broadcast here is safe.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • timekeeping: Prevent time going backwards on resume · 6a669ee8
      Committed by Thomas Gleixner
      Timekeeping resume adjusts xtime by adding the slept time in seconds and
      resets the reference value of the clock source (clock->cycle_last).
      clock->cycle_last is used to calculate the delta between the last xtime
      update and the readout of the clock source in __get_nsec_offset(). xtime
      plus the offset is the current time. The resume code ignores the delta
      which had already elapsed between the last xtime update and the actual
      time of suspend. If the suspend time is short, then we can see time
      going backwards on resume.
      
      Suspend:
      offs_s = clock->read() - clock->cycle_last;
      now = xtime + offs_s;
      timekeeping_suspend_time = read_rtc();
      
      Resume:
      sleep_time = read_rtc() - timekeeping_suspend_time;
      xtime.tv_sec += sleep_time;
      clock->cycle_last = clock->read();
      offs_r = clock->read() - clock->cycle_last;
      now = xtime + offs_r;
      
      if sleep_time_seconds == 0 and offs_r < offs_s, then time goes
      backwards.
      
      Fix this by storing the offset from the last xtime update and add it to
      xtime during resume, when we reset clock->cycle_last:
      
      sleep_time = read_rtc() - timekeeping_suspend_time;
      xtime.tv_sec += sleep_time;
      xtime += offs_s;	/* Fixup xtime offset at suspend time */
      clock->cycle_last = clock->read();
      offs_r = clock->read() - clock->cycle_last;
      now = xtime + offs_r;
      
      Thanks to Marcelo for tracking this down on the OLPC and providing the
      necessary details to analyze the root cause.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Stultz <johnstul@us.ibm.com>
      Cc: Tosatti <marcelo@kvack.org>
    • timekeeping: access rtc outside of xtime lock · 3be90950
      Committed by Thomas Gleixner
      Lockdep complains about the access of rtc in timekeeping_suspend
      inside the interrupt disabled region of the write locked xtime lock.
      Move the access outside.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Stultz <johnstul@us.ibm.com>
  13. 12 Sep 2007 (3 commits)
    • Fix "no_sync_cmos_clock" logic inversion in kernel/time/ntp.c · 298a5df4
      Committed by Tony Breeds
      Seems to me that this timer will only get started on platforms that say
      they don't want it?
      Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Gabriel Paubert <paubert@iram.es>
      Cc: Zachary Amsden <zach@vmware.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Stultz <johnstul@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Restore call_usermodehelper_pipe() behaviour · 3210f0ec
      Committed by Michael Ellerman
      The semantics of call_usermodehelper_pipe() used to be that it would fork
      the helper, and wait for the kernel thread to be started.  This was
      implemented by setting sub_info.wait to 0 (implicitly), and doing a
      wait_for_completion().
      
      As part of the cleanup done in 0ab4dc92,
      call_usermodehelper_pipe() was changed to pass 1 as the value for wait to
      call_usermodehelper_exec().
      
      This is equivalent to setting sub_info.wait to 1, which is a change from
      the previous behaviour.  Using 1 instead of 0 causes
      __call_usermodehelper() to start the kernel thread running
      wait_for_helper(), rather than directly calling ____call_usermodehelper().
      
      The end result is that the calling kernel code blocks until the user mode
      helper finishes.  As the helper is expecting input on stdin, and now no one
      is writing anything, everything locks up (observed in do_coredump).
      
      The fix is to change the 1 to UMH_WAIT_EXEC (aka 0), indicating that we
      want to wait for the kernel thread to be started, but not for the helper to
      finish.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Acked-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • futex_compat: fix list traversal bugs · 179c85ea
      Committed by Arnd Bergmann
      The futex list traversal on the compat side appears to have
      a bug.
      
      Its loop termination condition compares:
      
              while (compat_ptr(uentry) != &head->list)
      
      But that can't be right because "uentry" has the special
      "pi" indicator bit still potentially set at bit 0.  This
      is cleared by fetch_robust_entry() into the "entry"
      return value.
      
      What this seems to mean is that the list won't terminate
      when list iteration gets back to the head.  And we'll
      also process the list head like a normal entry, which could
      cause all kinds of problems.
      
      So we should check for equality with "entry".  That pointer
      is of the non-compat type so we have to do a little casting
      to keep the compiler and sparse happy.
      
      The same problem can in theory occur with the 'pending'
      variable, although that has not been reported from users
      so far.
      
      Based on the original patch from David Miller.
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 11 Sep 2007 (1 commit)
    • Fix spurious syscall tracing after PTRACE_DETACH + PTRACE_ATTACH · 7d941432
      Committed by Roland McGrath
      When PTRACE_SYSCALL was used and then PTRACE_DETACH is used, the
      TIF_SYSCALL_TRACE flag is left set on the formerly-traced task.  This
      means that when a new tracer comes along and does PTRACE_ATTACH, it's
      possible he gets a syscall tracing stop even though he's never used
      PTRACE_SYSCALL.  This happens if the task was in the middle of a system
      call when the second PTRACE_ATTACH was done.  The symptom is an
      unexpected SIGTRAP when the tracer thinks that only SIGSTOP should have
      been provoked by his ptrace calls so far.
      
      A few machines already fixed this in ptrace_disable (i386, ia64, m68k).
      But all other machines do not, and still have this bug.  On x86_64, this
      constitutes a regression in IA32 compatibility support.
      
      Since all machines now use TIF_SYSCALL_TRACE for this, I put the
      clearing of TIF_SYSCALL_TRACE in the generic ptrace_detach code rather
      than adding it to every other machine's ptrace_disable.
      Signed-off-by: Roland McGrath <roland@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 05 Sep 2007 (6 commits)