1. 09 Jan, 2006 (1 commit)
  2. 07 Jan, 2006 (1 commit)
  3. 16 Dec, 2005 (1 commit)
  4. 09 Nov, 2005 (1 commit)
    • [PATCH] sched: resched and cpu_idle rework · 64c7c8f8
      Authored by Nick Piggin
      Make some changes to the NEED_RESCHED and POLLING_NRFLAG handling to
      reduce confusion and make their semantics rigid.  This improves the
      efficiency of resched_task and of some cpu_idle routines.
      
      * In resched_task:
      - TIF_NEED_RESCHED is only cleared with the task's runqueue lock held,
        and since we hold it during resched_task, there is no need for an
        atomic test-and-set there. The only other time this flag should be set
        is when the task's quantum expires, in the timer interrupt - that case
        is protected against because the rq lock is irq-safe.
      
      - If TIF_NEED_RESCHED is set, then we don't need to do anything. It
        won't get unset until the task gets schedule()d off.
      
      - If we are running on the same CPU as the task we resched, then set
        TIF_NEED_RESCHED and no further action is required.
      
      - If we are running on another CPU, and TIF_POLLING_NRFLAG is *not* set
        after TIF_NEED_RESCHED has been set, then we need to send an IPI.
      
      Using these rules, we are able to remove the test and set operation in
      resched_task, and make clear the previously vague semantics of
      POLLING_NRFLAG.
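
      For illustration only (a hedged sketch of what these rules boil down to,
      not the literal patch), using existing kernel helpers such as
      test_tsk_thread_flag(), set_tsk_thread_flag(), task_cpu() and
      smp_send_reschedule(); the caller is assumed to hold p's runqueue lock:

          /* Sketch of resched_task() under the rules above. */
          static void resched_task(struct task_struct *p)
          {
                  int cpu;

                  /* Already marked to reschedule - nothing more to do. */
                  if (test_tsk_thread_flag(p, TIF_NEED_RESCHED))
                          return;

                  set_tsk_thread_flag(p, TIF_NEED_RESCHED);

                  /* Same CPU - the flag will be noticed on the way out. */
                  cpu = task_cpu(p);
                  if (cpu == smp_processor_id())
                          return;

                  /* Remote CPU - IPI only if it is not polling NEED_RESCHED. */
                  smp_mb();
                  if (!test_tsk_thread_flag(p, TIF_POLLING_NRFLAG))
                          smp_send_reschedule(cpu);
          }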
      
      * In idle routines:
      - Enter cpu_idle with preempt disabled. When the need_resched() condition
        becomes true, explicitly call schedule(). This makes things a bit clearer
        (IMO), but I haven't updated all architectures yet.
      
      - Many idle routines do a test-and-clear of TIF_NEED_RESCHED for some
        reason. According to the resched_task rules this isn't needed (and it
        actually breaks the assumption that TIF_NEED_RESCHED is only cleared
        with the runqueue lock held), so remove it. This generally saves one
        locked memory op when switching to the idle thread.
      
      - Many idle routines clear TIF_POLLING_NRFLAG and only set it in the
        innermost polling idle loops. The resched_task semantics above allow it
        to remain set right up until the last need_resched() check before going
        into a halt that requires an interrupt wakeup.
      
        Many idle routines simply never enter such a halt, so POLLING_NRFLAG
        can always be left set, completely eliminating resched IPIs when
        rescheduling the idle task.
      
        The window in which POLLING_NRFLAG is left set can therefore be
        widened, reducing the chance of resched IPIs.
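
      For illustration, a generic sketch of the polling idle-loop pattern this
      enables (no particular architecture; halt-based variants would clear
      TIF_POLLING_NRFLAG just before halting). set_thread_flag(),
      need_resched(), cpu_relax() and preempt_enable_no_resched() are existing
      kernel interfaces:

          /* Generic polling idle loop: entered with preemption disabled. */
          void cpu_idle(void)
          {
                  /* Stays set, so resched_task() never needs to IPI this CPU. */
                  set_thread_flag(TIF_POLLING_NRFLAG);

                  while (1) {
                          while (!need_resched())
                                  cpu_relax();    /* spin, watching TIF_NEED_RESCHED */

                          preempt_enable_no_resched();
                          schedule();             /* explicit call, as described above */
                          preempt_disable();
                  }
          }
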
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Con Kolivas <kernel@kolivas.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  5. 31 Oct, 2005 (1 commit)
  6. 28 Oct, 2005 (2 commits)
  7. 03 Oct, 2005 (1 commit)
  8. 23 Sep, 2005 (1 commit)
  9. 20 Sep, 2005 (1 commit)
  10. 18 Sep, 2005 (1 commit)
  11. 11 Sep, 2005 (2 commits)
    • [PATCH] alpha: fix-up schedule_timeout() usage · 20c6abd1
      Authored by Nishanth Aravamudan
      Use schedule_timeout_interruptible() instead of
      set_current_state()/schedule_timeout() to reduce kernel size.
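
      The conversion is mechanical; roughly the following, with HZ used here
      only as an example timeout:

          /* before */
          set_current_state(TASK_INTERRUPTIBLE);
          schedule_timeout(HZ);

          /* after */
          schedule_timeout_interruptible(HZ);
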
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] spinlock consolidation · fb1c8f93
      Authored by Ingo Molnar
      This patch (written by me and also containing many suggestions from Arjan
      van de Ven) does a major cleanup of the spinlock code.  It does the
      following things:
      
       - consolidates and enhances the spinlock/rwlock debugging code
      
       - simplifies the asm/spinlock.h files
      
       - encapsulates the raw spinlock type and moves generic spinlock
         features (such as ->break_lock) into the generic code.
      
       - cleans up the spinlock code hierarchy to get rid of the spaghetti.
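
      A condensed sketch of the raw-type encapsulation and generic ->break_lock
      move described above (field set abridged and illustrative, not the exact
      upstream definition in linux/spinlock_types.h):

          typedef struct {
                  raw_spinlock_t raw_lock;        /* arch-provided raw type */
          #if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
                  unsigned int break_lock;        /* generic feature, no longer per-arch */
          #endif
          #ifdef CONFIG_DEBUG_SPINLOCK
                  unsigned int magic, owner_cpu;  /* debug/owner tracking in generic code */
                  void *owner;
          #endif
          } spinlock_t;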
      
      Most notably there's now only a single variant of the debugging code,
      located in lib/spinlock_debug.c.  (previously we had one SMP debugging
      variant per architecture, plus a separate generic one for UP builds)
      
      Also, I've enhanced the rwlock debugging facility; it will now track
      write-owners.  There is new spinlock-owner/CPU tracking on SMP builds too.
      All locks now have lockup detection, which works for both soft and hard
      spin/rwlock lockups.
      
      The arch-level include files now only contain the minimally necessary
      subset of the spinlock code - all the rest that can be generalized now
      lives in the generic headers:
      
       include/asm-i386/spinlock_types.h       |   16
       include/asm-x86_64/spinlock_types.h     |   16
      
      I have also split up the various spinlock variants into separate files,
      making it easier to see which does what. The new layout is:
      
         SMP                         |  UP
         ----------------------------|-----------------------------------
         asm/spinlock_types_smp.h    |  linux/spinlock_types_up.h
         linux/spinlock_types.h      |  linux/spinlock_types.h
         asm/spinlock_smp.h          |  linux/spinlock_up.h
         linux/spinlock_api_smp.h    |  linux/spinlock_api_up.h
         linux/spinlock.h            |  linux/spinlock.h
      
      /*
       * here's the role of the various spinlock/rwlock related include files:
       *
       * on SMP builds:
       *
       *  asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
       *                        initializers
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  asm/spinlock.h:       contains the __raw_spin_*()/etc. lowlevel
       *                        implementations, mostly inline assembly code
       *
       *   (also included on UP-debug builds:)
       *
       *  linux/spinlock_api_smp.h:
       *                        contains the prototypes for the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       *
       * on UP builds:
       *
       *  linux/spinlock_types_up.h:
       *                        contains the generic, simplified UP spinlock type.
       *                        (which is an empty structure on non-debug builds)
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  linux/spinlock_up.h:
       *                        contains the __raw_spin_*()/etc. version of UP
       *                        builds. (which are NOPs on non-debug, non-preempt
       *                        builds)
       *
       *   (included on UP-non-debug builds:)
       *
       *  linux/spinlock_api_up.h:
       *                        builds the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       */
      
      All SMP and UP architectures are converted by this patch.
      
      arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
      cross-compilers.  m32r, mips, sh and sparc have not been tested yet, but
      should be mostly fine.
      
      From: Grant Grundler <grundler@parisc-linux.org>
      
        Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
        Builds 32-bit SMP kernel (not booted or tested).  I did not try to build
        non-SMP kernels.  That should be trivial to fix up later if necessary.
      
        I converted the bit-ops atomic_hash lock to raw_spinlock_t.  Doing so
        avoids some ugly nesting of linux/*.h and asm/*.h files.  Those
        particular locks are well tested and contained entirely inside
        arch-specific code.  I do NOT expect any new issues to arise with them.
      
        If someone ever does need to use debug/metrics with them, they will
        need to unravel the hairball of spinlocks, atomic ops, and bit ops that
        exists only because parisc has exactly one atomic instruction: LDCW
        (load and clear word).
      
      From: "Luck, Tony" <tony.luck@intel.com>
      
         ia64 fix
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
      Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
      Cc: Matthew Wilcox <willy@debian.org>
      Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
      Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
      Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  12. 10 Sep, 2005 (3 commits)
  13. 09 Sep, 2005 (1 commit)
  14. 08 Sep, 2005 (1 commit)
  15. 30 Aug, 2005 (1 commit)
    • [PATCH] convert signal handling of NODEFER to act like other Unix boxes. · 69be8f18
      Authored by Steven Rostedt
      It has been reported that the way Linux handles NODEFER for signals is
      not consistent with the way other Unix boxes handle it.  I've written a
      program to test the behavior of how this flag affects signals and had
      several reports from people who ran this on various Unix boxes,
      confirming that Linux seems to be unique in the way this is handled.
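
      A rough sketch of that kind of test (hypothetical, not the author's
      actual program): install a handler with SA_NODEFER set and two signals in
      sa_mask, then look at what is actually blocked while the handler runs:

          #include <signal.h>
          #include <stdio.h>
          #include <string.h>

          static void handler(int sig)
          {
                  sigset_t cur;

                  sigprocmask(SIG_SETMASK, NULL, &cur);   /* what is blocked right now? */
                  printf("SIGUSR1 %sblocked, SIGUSR2 %sblocked\n",
                         sigismember(&cur, SIGUSR1) ? "" : "NOT ",
                         sigismember(&cur, SIGUSR2) ? "" : "NOT ");
          }

          int main(void)
          {
                  struct sigaction sa;

                  memset(&sa, 0, sizeof(sa));
                  sa.sa_handler = handler;
                  sa.sa_flags = SA_NODEFER;
                  sigemptyset(&sa.sa_mask);
                  sigaddset(&sa.sa_mask, SIGUSR1);        /* the delivered signal itself */
                  sigaddset(&sa.sa_mask, SIGUSR2);        /* another signal in sa_mask */
                  sigaction(SIGUSR1, &sa, NULL);

                  raise(SIGUSR1);
                  return 0;
          }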
      
      The way NODEFER affects signals on other Unix boxes is as follows:
      
      1) If NODEFER is set, other signals in sa_mask are still blocked.
      
      2) If NODEFER is set and the signal is in sa_mask, then the signal is
      still blocked. (Note: this is the behavior of every system tested except
      Linux _and_ NetBSD 2.0; see * below.)
      
      The way NODEFER affects signals on Linux:
      
      1) If NODEFER is set, other signals are _not_ blocked regardless of
      sa_mask (Even NetBSD doesn't do this).
      
      2) If NODEFER is set and the signal is in sa_mask, then the signal being
      handled is not blocked.
      
      The patch converts signal handling in all current Linux architectures to
      the way most Unix boxes work.
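
      In kernel terms the resulting rule is roughly the following (a hedged
      sketch of the per-arch handle_signal() bookkeeping, not an exact diff):
      sa_mask is always applied, and SA_NODEFER now exempts only the delivered
      signal itself:

          spin_lock_irq(&current->sighand->siglock);
          /* signals in sa_mask are blocked regardless of NODEFER */
          sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
          /* the delivered signal is blocked too, unless NODEFER is set */
          if (!(ka->sa.sa_flags & SA_NODEFER))
                  sigaddset(&current->blocked, sig);
          recalc_sigpending();
          spin_unlock_irq(&current->sighand->siglock);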
      
      Unix boxes that were tested:  DU4, AIX 5.2, Irix 6.5, NetBSD 2.0, SFU
      3.5 on WinXP, AIX 5.3, Mac OSX, and of course Linux 2.6.13-rcX.
      
      * NetBSD was the only other Unix to behave like Linux on point #2. The
      main concern was point #1, where even NetBSD does not behave like Linux.
      So with this patch, we leave NetBSD as the lone system that still behaves
      differently on #2.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  16. 24 Aug, 2005 (1 commit)
  17. 05 Aug, 2005 (1 commit)
  18. 28 Jul, 2005 (1 commit)
  19. 27 Jul, 2005 (1 commit)
  20. 01 Jul, 2005 (2 commits)
    • [PATCH] alpha smp fix (part #2) · 4a89a04f
      Authored by Ivan Kokshaysky
      This fixes the bug that caused BUG_ON(!irqs_disabled()) to trigger in
      run_posix_cpu_timers() on alpha/smp.  We didn't disable interrupts
      properly before calling smp_percpu_timer_interrupt().
      
      We *do* disable interrupts everywhere except this unfortunate
      smp_percpu_timer_interrupt().  Fixed thus.
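
      Conceptually the fix amounts to something like this in the Alpha
      interrupt dispatch path (a hedged sketch only; the real change lives in
      the do_entInt() plumbing and may look different):

          /* timer case in the Alpha interrupt dispatcher (sketch) */
          local_irq_disable();    /* was missing: irqs must be off, as for every other handler */
          smp_percpu_timer_interrupt(regs);
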
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] alpha smp fix · eaf05be0
      Authored by Ivan Kokshaysky
      As usual, the reason for this breakage is quite silly: in do_entIF, we
      check for PS == 0 to see whether it was a kernel BUG() or a userspace
      trap.
      
      It works, unless BUG() happens in an interrupt - PS is not 0 in kernel
      mode due to the non-zero IPL, and things then get messed up horribly.  In
      this particular case it was BUG_ON(!irqs_disabled()) triggering in
      run_posix_cpu_timers(), so we ended up shooting "current" with bursts of
      one SIGTRAP and three SIGILLs on every timer tick.  ;-)
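
      A plausible shape of the corrected check (sketch; user_mode() tests only
      the user-mode bit of PS, whereas comparing the whole PS word to zero also
      picks up the IPL field):

          /* in do_entIF(): kernel BUG() vs. userspace trap */
          if (!user_mode(regs)) {         /* was: if (regs->ps == 0), wrong in interrupt context */
                  /* kernel-mode trap: handle as BUG() */
          } else {
                  /* user-mode trap: deliver the signal to the task */
          }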
  21. 17 May, 2005 (1 commit)
  22. 01 May, 2005 (2 commits)
  23. 22 Apr, 2005 (1 commit)
  24. 17 Apr, 2005 (1 commit)
    • Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!